Another approach of interest is to employ a genetic algorithm to iterate over the parameters of a model, searching for parameter settings that are optimal for a specific dataset. This offers no a priori guidance on which algorithm to choose, but it may uncover useful models that would otherwise not have been considered for a particular problem.
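As a minimal sketch of this idea (the dataset, the linear model, the fitness function, and the GA operators below are all illustrative assumptions, not part of the proposal), a simple genetic algorithm can evolve the two parameters of a line fitted to toy data:

```python
import random

random.seed(0)

# Toy dataset: points on y = 2x + 1. The GA searches for parameters (a, b)
# of the model y = a*x + b; dataset and operators are illustrative choices.
data = [(x, 2 * x + 1) for x in range(10)]

def fitness(params):
    a, b = params
    return -sum((a * x + b - y) ** 2 for x, y in data)  # higher is better

def crossover(p1, p2):
    return tuple(random.choice(pair) for pair in zip(p1, p2))

def mutate(params, scale=0.1):
    return tuple(p + random.gauss(0, scale) for p in params)

def evolve(pop_size=50, generations=200):
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 5]          # elitism: keep the fittest 20%
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because elitism preserves the best individual across generations, the search never regresses; for this toy problem it recovers parameters close to (2, 1).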
The convergence properties of machine learning algorithms are also part of our meta-analysis. By applying each step of a learning algorithm to all elements of a dataset, an ordering of the data may be discovered that provides the fastest convergence. Once this has been done for several datasets, an algorithm can then be used to search for patterns in the results.
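As a small illustration of why ordering matters (the learner, the noise pattern, and the data are illustrative assumptions): a single pass of stochastic gradient descent over noisy linear data ends at a different weight depending on the presentation order, so candidate orderings can be ranked by the loss they leave behind:

```python
import random

random.seed(1)

# Noisy linear data y = 3x + noise; one SGD pass fits the weight w in y = w*x.
# The data, the learner, and the noise pattern are illustrative choices.
data = [(0.1 * i, 3 * (0.1 * i) + (-1) ** i * 0.3) for i in range(1, 21)]

def sgd_pass(points, lr=0.1):
    w = 0.0
    for x, y in points:
        w += lr * (y - w * x) * x      # gradient step on (w*x - y)^2 / 2
    return w

def loss(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Compare several presentation orders and keep the one with the lowest loss.
orders = [random.sample(data, len(data)) for _ in range(20)]
best_order = min(orders, key=lambda o: loss(sgd_pass(o)))
```

Recording the best-performing orderings over many datasets would produce exactly the kind of results table the proposed meta-analysis could then mine for patterns.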
Various datasets can be approximated in different ways; a simple example is to use integer values rather than real values. Such data transformations may speed up learning while still yielding models that are useful in the field. Again, applying approximation techniques to a range of problems produces a dataset suitable for meta-analysis.
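A minimal sketch of the integer-valued approximation (the dataset and the fitting routine are illustrative assumptions): fit a least-squares line to real-valued data and to the same data rounded to integers, then compare the recovered parameters to gauge how much the approximation costs:

```python
# Real-valued dataset on the line y = 2x + 1; the integer approximation
# simply rounds every value. Dataset and fitting routine are illustrative.
data = [(0.1 * i, 2.0 * (0.1 * i) + 1.0) for i in range(1, 51)]

def least_squares(points):
    """Closed-form simple linear regression: returns slope a, intercept b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

approx = [(round(x), round(y)) for x, y in data]   # integer approximation
a_full, b_full = least_squares(data)
a_int, b_int = least_squares(approx)
```

Here the rounded data still recovers a slope and intercept close to the exact fit; logging such parameter gaps across many datasets and transformations would supply raw material for the meta-analysis.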
A final area of research concerns the use of random numbers in machine learning. Many methods produce the same result regardless of the order in which elements of the dataset are presented to the model. However, where this order-independence is weaker, randomisation can be used to bring the power of Monte Carlo methods to machine learning.
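One way to sketch the Monte Carlo idea (the learner, data, and noise below are illustrative assumptions): for an order-sensitive learner, here a single-pass SGD over noisy data, average the models produced by many random shuffles of the dataset, treating each shuffle as one Monte Carlo sample:

```python
import random

random.seed(2)

# Noisy linear data y = 1.5x + noise; a single SGD pass over it is sensitive
# to presentation order. Data, learner, and noise pattern are illustrative.
data = [(0.5 * i, 1.5 * (0.5 * i) + (-1) ** i * 0.2) for i in range(1, 11)]

def sgd_pass(points, lr=0.05):
    w = 0.0
    for x, y in points:
        w += lr * (y - w * x) * x      # gradient step on (w*x - y)^2 / 2
    return w

def monte_carlo_weight(trials=500):
    """Average the fitted weight over many random shuffles of the data."""
    total = 0.0
    for _ in range(trials):
        shuffled = data[:]
        random.shuffle(shuffled)
        total += sgd_pass(shuffled)
    return total / trials

w_mc = monte_carlo_weight()
```

Averaging over shuffles smooths out the order dependence of any single pass, so the estimate lands near the underlying slope even though individual passes differ.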