Machine learning offers techniques with immense modeling power. There is, however, no general scheme for optimizing a machine learning algorithm: the optimal hyperparameter settings depend on the dataset on which the model is to be trained. A relatively recent field, meta machine learning, attempts to optimize machine learning algorithms using, e.g., reinforcement learning. Drawing inspiration from this field, this thesis aims to investigate the possibility of implementing a reward-based algorithm for optimizing a neural network's hyperparameter settings.
- Conduct a literature study
- Familiarize oneself with the Keras API and develop a dense network
- Construct a reward-based algorithm that incorporates the dense network
- Study whether the implementation can be made to optimize a few (e.g., three) of the dense network's hyperparameters.
- Devise a way of quantifying the reward-based algorithm's performance, and study how the algorithm scales to simultaneous optimization of additional hyperparameters. Compare the implemented optimization algorithm with other typical methods for hyperparameter optimization.
- Write the master's thesis report
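The reward-based optimization step above can be sketched as a minimal epsilon-greedy bandit in plain Python. This is an illustrative assumption, not the method prescribed by the project: the hyperparameter grid, the epsilon value, the trial count, and the `toy_reward` function are all invented for the sketch. In a real implementation, `toy_reward` would train the Keras dense network with the given configuration and return its validation score as the reward.

```python
import random

# Hypothetical hyperparameter grid (illustrative): learning rate,
# hidden-layer width, and dropout rate of a dense network.
GRID = [
    {"lr": lr, "width": w, "dropout": d}
    for lr in (1e-3, 1e-2)
    for w in (32, 64)
    for d in (0.0, 0.5)
]

def toy_reward(cfg, rng):
    """Stand-in for the validation accuracy of a trained dense network.
    A real implementation would train a Keras model with `cfg` and
    return its validation score; here a noisy synthetic score is used."""
    base = 0.7
    base += 0.10 if cfg["lr"] == 1e-2 else 0.0
    base += 0.05 if cfg["width"] == 64 else 0.0
    base -= 0.05 if cfg["dropout"] == 0.5 else 0.0
    return base + rng.gauss(0.0, 0.02)  # noisy observation

def epsilon_greedy_search(grid, trials=200, eps=0.2, seed=0):
    """Treat each configuration as a bandit arm: with probability eps an
    arm is explored at random, otherwise the arm with the best running
    mean reward is exploited."""
    rng = random.Random(seed)
    counts = [0] * len(grid)
    means = [0.0] * len(grid)
    for _ in range(trials):
        if rng.random() < eps or all(c == 0 for c in counts):
            arm = rng.randrange(len(grid))                        # explore
        else:
            arm = max(range(len(grid)), key=lambda i: means[i])   # exploit
        r = toy_reward(grid[arm], rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # update running mean
    best = max(range(len(grid)), key=lambda i: means[i])
    return grid[best], means[best]

best_cfg, best_score = epsilon_greedy_search(GRID)
print(best_cfg, round(best_score, 3))
```

Scaling this sketch to additional hyperparameters only means enlarging the grid, which is exactly where the comparison with typical methods (grid search, random search, Bayesian optimization) becomes interesting, since the number of arms grows exponentially with the number of hyperparameters.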
Prerequisites: Programming, linear algebra, and familiarity with reading scientific articles. Prior knowledge of machine learning is preferable but not required.
This work can be done by 1–2 students. Different takes on the thesis include the optimization of dense networks or convolutional networks.
At Saab, we constantly look ahead and push boundaries for what is considered technically possible. We collaborate with colleagues around the world who all share our challenge – to make the world a safer place.
Last updated: 02 October 2019 • 13:05