Machine learning

As a broad subfield of artificial intelligence, machine learning is concerned with the development of algorithms and techniques that allow computers to "learn". At a general level, there are two types of learning: inductive and deductive. Inductive machine learning methods create computer programs by extracting rules and patterns out of massive data sets. Machine learning overlaps heavily with statistics, since both fields study the analysis of data, but unlike statistics, machine learning is also concerned with the algorithmic complexity of computational implementations. Many inference problems turn out to be NP-hard or harder, so part of machine learning research is the development of tractable approximate inference algorithms.

Machine learning has a wide spectrum of applications, including search engines, medical diagnosis, bioinformatics and cheminformatics, credit card fraud detection, stock market analysis, DNA sequence classification, speech and handwriting recognition, object recognition in computer vision, game playing, and robot locomotion.

Human interaction

Some machine learning systems attempt to eliminate the need for human intuition in the analysis of the data, while others adopt a collaborative approach between human and machine. Human intuition cannot be entirely eliminated since the designer of the system must specify how the data are to be represented and what mechanisms will be used to search for a characterization of the data. Machine learning can be viewed as an attempt to automate parts of the scientific method. Some machine learning researchers create methods within the framework of Bayesian statistics.
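The Bayesian framework mentioned above can be made concrete with a very small example. The sketch below, in Python, updates a Beta prior over the bias of a coin after observing a sequence of flips; the prior parameters and the observations are hypothetical values chosen purely for illustration, not anything prescribed by the article.

    # A minimal sketch of Bayesian updating (Beta-Bernoulli conjugacy).
    # The prior parameters and the observations are hypothetical.

    def update_beta(alpha, beta, observations):
        """Update a Beta(alpha, beta) prior with a sequence of 0/1 observations."""
        for x in observations:
            if x == 1:
                alpha += 1   # one more observed success
            else:
                beta += 1    # one more observed failure
        return alpha, beta

    if __name__ == "__main__":
        prior = (1.0, 1.0)                       # uniform prior over the coin's bias
        flips = [1, 0, 1, 1, 1, 0, 1]            # hypothetical coin flips (1 = heads)
        alpha, beta = update_beta(*prior, flips)
        posterior_mean = alpha / (alpha + beta)  # expected bias under the posterior
        print("Posterior: Beta(%g, %g), mean %.3f" % (alpha, beta, posterior_mean))

The point is only that the designer's modelling choices (here, the Beta prior) encode the human intuition that the learning procedure then refines from data.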

Algorithm types

Machine learning algorithms are organized into a taxonomy based on the desired outcome of the algorithm. Common algorithm types include:

  • supervised learning --- where the algorithm generates a function that maps inputs to desired outputs. One standard formulation of the supervised learning task is the classification problem: the learner is required to learn (to approximate the behavior of) a function which maps a vector into one of several classes by looking at several input-output examples of the function (a minimal code sketch follows this list).
  • unsupervised learning --- which models a set of inputs when labeled examples are not available (see the clustering sketch following this list).
  • semi-supervised learning --- which combines both labeled and unlabeled examples to generate an appropriate function or classifier.
  • reinforcement learning --- where the algorithm learns a policy of how to act given an observation of the world. Every action has some impact on the environment, and the environment provides feedback that guides the learning algorithm.
  • transduction --- similar to supervised learning, but without explicitly constructing a function: instead, it tries to predict new outputs based on the training inputs, training outputs, and new inputs.
  • learning to learn --- where the algorithm learns its own inductive bias based on previous experience.
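
To make the supervised classification setting concrete, the following sketch implements a one-nearest-neighbour classifier in plain Python. The training examples, the squared Euclidean distance, and the two query points are hypothetical choices made only for illustration.

    # A minimal sketch of supervised classification: one-nearest-neighbour.
    # The labelled training examples below are hypothetical 2-D points.

    def nearest_neighbour(train, query):
        """Return the label of the training example closest to the query.

        train -- list of (feature_vector, label) pairs
        query -- feature vector to classify
        """
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))

        best_label, best_dist = None, float("inf")
        for features, label in train:
            d = sq_dist(features, query)
            if d < best_dist:
                best_label, best_dist = label, d
        return best_label

    if __name__ == "__main__":
        training_set = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
                        ((4.0, 4.2), "B"), ((3.8, 4.0), "B")]
        print(nearest_neighbour(training_set, (1.1, 0.9)))  # prints "A"
        print(nearest_neighbour(training_set, (4.1, 4.1)))  # prints "B"

Here the learned "function" is represented implicitly by the stored examples; more sophisticated supervised methods fit an explicit model instead.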
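
For the unsupervised setting, the sketch below runs a plain k-means loop over unlabeled points. The data, the choice of two clusters, and the initial centres are again hypothetical.

    # A minimal sketch of unsupervised learning: k-means clustering.
    # The unlabeled data and the initial centres are hypothetical.

    def kmeans(points, centres, iterations=10):
        """Repeatedly assign points to their nearest centre and recompute the centres."""
        for _ in range(iterations):
            clusters = [[] for _ in centres]
            for p in points:
                distances = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centres]
                clusters[distances.index(min(distances))].append(p)
            centres = [
                tuple(sum(coord) / len(cluster) for coord in zip(*cluster)) if cluster else c
                for cluster, c in zip(clusters, centres)
            ]
        return centres, clusters

    if __name__ == "__main__":
        data = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.2),
                (5.0, 5.0), (5.2, 4.8), (4.9, 5.1)]
        centres, clusters = kmeans(data, centres=[(0.0, 0.0), (6.0, 6.0)])
        print(centres)   # two centres, near (1, 1) and (5, 5)
        print(clusters)  # the points grouped under each centre

No labels appear anywhere in the data; the two-cluster structure is inferred from the inputs alone, which is what distinguishes this from the supervised sketch above.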

The computational analysis of machine learning algorithms and of their performance is a branch of theoretical computer science known as computational learning theory.

Machine learning topics

This list represents the topics covered in a typical machine learning course.


Bibliography

  • Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University Press. ISBN 0198538642
  • Duda, R. O., Hart, P. E., and Stork, D. G. (2001). Pattern Classification (2nd edition). Wiley, New York. ISBN 0471056693
  • Huang, T.-M., Kecman, V., and Kopriva, I. (2006). Kernel Based Algorithms for Mining Huge Data Sets: Supervised, Semi-supervised, and Unsupervised Learning. Springer-Verlag, Berlin, Heidelberg. ISBN 3-540-31681-7
  • Kecman, V. (2001). Learning and Soft Computing: Support Vector Machines, Neural Networks and Fuzzy Logic Models. The MIT Press, Cambridge, MA. ISBN 0-262-11255-8
  • MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press. ISBN 0521642981
  • Mitchell, T. (1997). Machine Learning. McGraw Hill. ISBN 0070428077
  • Weiss, S. and Kulikowski, C. (1991). Computer Systems That Learn. Morgan Kaufmann. ISBN 1-55860-065-5

Software

  • SPIDER is a complete machine learning toolbox for MATLAB.
  • PRTools is another complete package similar to SPIDER and implemented in MATLAB. SPIDER seems to have more native support and functions for kernel methods, but PRTools has a slightly larger variety of other machine learning tools. PRTools has an accompanying textbook and much better documentation. Both SPIDER and PRTools are available freely for non-commercial applications.
  • Orange is a machine learning suite with Python scripting and a visual programming interface.
  • YALE is a free tool for machine learning and data mining.
  • Weka is a machine learning software suite written in Java.
  • MATLAB, by The MathWorks, has toolbox support for many machine learning tools. The Bioinformatics toolbox includes Support Vector Machines and KNN classifiers. The Statistics toolbox includes linear discriminant and decision tree classification. The Neural Network toolbox is a complete set of tools for implementing Neural Networks (PRTools relies on it for its neural network classifiers). New methods for classifier performance evaluation and cross validation make MATLAB more attractive for machine learning.
  • Synapse by Peltarion supports the development of a wide range of machine learning systems and the integration of different types of machine learning into hybrid systems.
  • MLC++ is a library of C++ classes for supervised machine learning.
  • MDR is an open-source software package for detecting attribute interactions using the multifactor dimensionality reduction (MDR) method.
  • questsin is an add-in for Microsoft Excel that uses machine learning to expand a selection, similar to the popular Fill Data feature.
  • SemiL is software for solving large-scale semi-supervised learning and transductive inference problems using graph-based approaches when faced with unlabeled data. It implements several semi-supervised learning approaches.