Degree Name

Master of Science (MS)


Program

Computer Science

First Advisor

James P. Bagrow


The ever-growing accumulation of data makes the automated distillation of understandable models from that data ever more desirable. Deriving equations directly from data with symbolic regression, as performed by genetic programming, remains appealing because of its algorithmic simplicity and its lack of assumptions about equation form. However, few models besides a sequence-to-sequence approach to symbolic regression, introduced in 2020 and which we call y2eq, have been shown capable of transfer learning: the ability to rapidly and successfully distill equations from data in a previously unseen domain, owing to experience performing this distillation in other domains. To improve this model, we must first understand its key challenges. We identify three: corpus, coefficient, and cost. The challenge of devising a training corpus stems from the hierarchical nature of the data: the corpus should be viewed not as a flat collection of equations but as a collection of functional forms together with instances of those forms. The challenge of choosing appropriate coefficients for functional forms compounds the corpus challenge and further complicates the evaluation of trained models, because instances of different functional forms can be numerically similar. The challenge with cost functions (used to train the model) lies mainly in the choice between a numeric cost, which compares y-values, and a symbolic cost, which compares written functional forms. In this work, we provide evidence that the corpus, coefficient, and cost challenges exist, explore why they arise in the model, and propose possible solutions. We hope that this work can be used to initiate improvements to this already promising symbolic regression model.
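The distinctions drawn above, between a functional form and its coefficient-specific instances, and between a numeric and a symbolic cost, can be sketched in a few lines. The names and the token-level symbolic distance below are illustrative assumptions, not the thesis's actual definitions:

```python
import numpy as np

# Hypothetical example: one functional form yields many instances
# as its coefficients vary.
functional_form = "c0 * sin ( c1 * x )"        # symbolic skeleton
instance_a = lambda x: 2.0 * np.sin(1.0 * x)   # coefficients (2.0, 1.0)
instance_b = lambda x: 0.5 * np.sin(3.0 * x)   # coefficients (0.5, 3.0)

x = np.linspace(0.0, 1.0, 50)

def numeric_cost(f, g, x):
    """Numeric cost: compare y-values on a shared grid (mean squared error)."""
    return float(np.mean((f(x) - g(x)) ** 2))

def symbolic_cost(form_pred, form_true):
    """Symbolic cost: compare written functional forms; here a simple
    token mismatch count stands in for a sequence loss."""
    a, b = form_pred.split(), form_true.split()
    return sum(t1 != t2 for t1, t2 in zip(a, b)) + abs(len(a) - len(b))

# Two instances of the SAME functional form differ numerically...
print(numeric_cost(instance_a, instance_b, x))
# ...yet their shared form is symbolically identical.
print(symbolic_cost(functional_form, functional_form))
```

This gap, instances of one form scoring poorly under a numeric cost while scoring perfectly under a symbolic cost, is one way the coefficient and cost challenges interact.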



Number of Pages

76 p.