Date of Award

2020

Document Type

Thesis

Degree Name

Master of Science (MS)

Department

Complex Systems

First Advisor

Joshua Bongard

Second Advisor

Nicholas Cheney

Abstract

Agents are often trained to perform a task via optimization algorithms. One class of such algorithms is evolution, which applies "survival of the fittest": the agents best suited to the objective are selected, and the best are gradually varied over time to find a good solution. Evolutionary algorithms have commonly been used to automatically select for better agent bodies, which can outperform hand-designed models. Another class of algorithms is reinforcement learning, in which agents learn from prior experience in order to maximize some reward. Generally, this reward measures how close the objective is to being completed, or serves as a stepping stone toward its completion. Both evolution and reinforcement learning can train agents to the point where a task is successfully completed, or an objective met. In this thesis, we outline a framework for combining evolution and reinforcement learning, and describe experimental designs to test this method against traditional reinforcement learning. Finally, we show preliminary results as a proof of concept for the methods described.
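
As a rough illustration of how an evolutionary outer loop can wrap a reward-driven evaluation, the sketch below evolves a small population of agent parameter vectors against a toy reward function. The reward function, mutation scheme, and all names here are illustrative assumptions, not the framework described in the thesis.

```python
# Minimal sketch: an evolutionary outer loop over agent parameters,
# with fitness defined by reward on a toy task.
# Everything here (toy reward, mutation scale, population size) is an
# illustrative assumption, not the method from the thesis.
import random

def episode_reward(params):
    # Toy stand-in for running an agent in an environment and summing reward:
    # reward is highest when the parameters are close to a hidden target.
    target = [0.5, -0.2, 0.8]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params, scale=0.1):
    # Gaussian perturbation of each parameter ("slowly changing the best").
    return [p + random.gauss(0, scale) for p in params]

def evolve(pop_size=20, generations=50, n_params=3):
    population = [[random.uniform(-1, 1) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate fitness (here, episode reward) for every agent.
        scored = sorted(population, key=episode_reward, reverse=True)
        survivors = scored[: pop_size // 2]  # "survival of the fittest"
        # Refill the population with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=episode_reward)

if __name__ == "__main__":
    best = evolve()
    print("best params:", best, "reward:", episode_reward(best))
```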

Language

en

Number of Pages

24 p.
