By applying machine learning to itself, it may be possible to discover which
technique is best suited to a given situation.

By applying machine learning to a collection of machine-learned models, we try
to discover whether there are any underlying patterns. For instance, the
structure of a neural network's layers can be extracted as a feature across
various datasets. Clustering and pattern analysis are then run against these
features to see whether certain layer configurations perform better than
others. The goal is then to recommend layer configurations for a new dataset.
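A minimal sketch of the pattern-analysis step, using network depth as the extracted feature and a simple group-and-average in place of full clustering; all records here are hypothetical:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical results: each record pairs a layer configuration
# (hidden-layer widths) with the accuracy it achieved on some dataset.
records = [
    ((64, 32), 0.91), ((64, 32), 0.89), ((128, 64), 0.93),
    ((128, 64), 0.94), ((32,), 0.80), ((32,), 0.82),
]

def extract_feature(layers):
    """Feature: network depth (number of hidden layers)."""
    return len(layers)

# Group observed accuracies by the extracted feature and average them.
by_depth = defaultdict(list)
for layers, acc in records:
    by_depth[extract_feature(layers)].append(acc)

avg = {depth: mean(accs) for depth, accs in by_depth.items()}
best_depth = max(avg, key=avg.get)
```

A recommender for a new dataset would then favour configurations with `best_depth` hidden layers; richer features (layer widths, activation types) would feed a proper clustering algorithm instead of this grouping.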

Another approach of interest is to employ a genetic algorithm that iterates over the parameters of a model, searching for an optimal configuration for a specific dataset. This provides no a priori guess at the best algorithm to use, but it may find useful models that would otherwise not have been considered for a particular problem.
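A toy illustration of the genetic-algorithm search, assuming a deliberately simple "model" (a line y = a·x + b) so the parameter space is two-dimensional; the dataset, fitness function, and operator choices are all illustrative assumptions:

```python
import random

random.seed(0)

# Hypothetical dataset generated by y = 3x + 1; the GA must rediscover (a, b).
data = [(x, 3.0 * x + 1.0) for x in range(10)]

def fitness(params):
    """Negative squared error: higher is better."""
    a, b = params
    return -sum((a * x + b - y) ** 2 for x, y in data)

def mutate(params, scale=0.1):
    """Perturb each parameter with Gaussian noise."""
    return tuple(p + random.gauss(0, scale) for p in params)

def crossover(p1, p2):
    """Take each parameter from one parent at random."""
    return tuple(random.choice(pair) for pair in zip(p1, p2))

# Random initial population of candidate (a, b) pairs.
pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                       # keep the fittest unchanged
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(20)]
    pop = elite + children

best = max(pop, key=fitness)
```

In practice the genome would encode hyperparameters of a real learner (layer sizes, learning rates, regularisation strengths) and fitness would be cross-validated performance, but the selection, crossover, and mutation loop is the same.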

The convergence properties of machine learning algorithms are also part of our meta-analysis. By applying each step of a learning algorithm to all elements of a dataset, an optimal ordering of the data may be discovered that yields the fastest convergence. Having done this for several datasets, an algorithm can then be used to search for patterns in the results.
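A small sketch of the ordering experiment, assuming a one-weight SGD learner on a toy regression dataset; we exhaustively try every presentation order and record how many update steps each needs to converge:

```python
import itertools

# Hypothetical dataset drawn from y = 2x; SGD fits a single weight w.
data = [(0.5, 1.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def steps_to_converge(order, lr=0.05, tol=1e-3, max_steps=10_000):
    """Run SGD over `order` repeatedly; return steps until total loss < tol."""
    w, steps = 0.0, 0
    while steps < max_steps:
        for x, y in order:
            w += lr * (y - w * x) * x   # one SGD update
            steps += 1
            if sum((y2 - w * x2) ** 2 for x2, y2 in data) < tol:
                return steps
    return max_steps

# Try every presentation order and see which converges fastest.
results = {order: steps_to_converge(order)
           for order in itertools.permutations(data)}
best_order = min(results, key=results.get)
```

Even on this tiny problem the step counts differ across orderings; the meta-analysis would collect `best_order` (or features of it, such as "largest-magnitude points first") across many datasets and mine those for patterns. Real datasets would need sampled rather than exhaustive orderings.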

Various datasets can be approximated in different ways; a simple example is using integer values rather than real values. Applying such data transformations may speed up learning while still achieving models that are useful in the field. Again, by applying approximation techniques to a range of problems, a dataset can be produced for meta-analysis.
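The integer approximation can be sketched as follows, with a hypothetical real-valued dataset and a least-squares slope through the origin as the stand-in model; the point is that the quantised fit stays close to the exact one:

```python
# Hypothetical real-valued dataset (roughly y = 2x).
data = [(0.9, 1.8), (2.1, 4.2), (2.9, 5.9), (4.2, 8.3)]

def slope(points):
    """Least-squares slope through the origin."""
    return sum(x * y for x, y in points) / sum(x * x for x, y in points)

# The approximation: round each feature to the nearest integer.
quantised = [(round(x), y) for x, y in data]

exact = slope(data)        # model fitted on real-valued features
approx = slope(quantised)  # model fitted on integer features
```

For the meta-analysis, each problem would contribute a record of the transformation applied, the speed-up obtained, and the accuracy lost (here, `abs(exact - approx)`).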

A final area of research concerns employing random numbers in machine learning. Many methods produce the same results irrespective of the order in which elements of a dataset are presented to the model. However, where results do depend on presentation order, randomisation can be used to bring the power of Monte Carlo methods to machine learning.
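A small sketch of the Monte Carlo idea, using an exponentially weighted moving average as a stand-in for an order-sensitive learner (the data and trial count are illustrative assumptions): averaging the estimate over many random presentation orders washes out the order dependence.

```python
import random
from statistics import mean

random.seed(42)

data = [1.0, 2.0, 3.0, 10.0]

def ewma(values, alpha=0.5):
    """Order-sensitive estimate: later values dominate."""
    est = values[0]
    for v in values[1:]:
        est = alpha * v + (1 - alpha) * est
    return est

def mc_estimate(values, trials=2000):
    """Monte Carlo: average the estimate over random presentation orders."""
    results = []
    for _ in range(trials):
        shuffled = values[:]
        random.shuffle(shuffled)
        results.append(ewma(shuffled))
    return mean(results)

single = ewma(data)        # depends heavily on the fixed order
averaged = mc_estimate(data)
```

The single-order estimate is pulled toward whichever values happen to arrive last, while the randomised average converges toward the order-free mean of the data; the same shuffling trick applies to order-sensitive learners such as online SGD.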