Artificial intelligence
- AI redirects here; for alternate uses, see Ai.
Artificial intelligence is intelligence exhibited by anything manufactured (i.e. artificial) by humans or, hypothetically, by other sentient beings or systems. It is most often associated with general-purpose computers. The term also refers to the field of scientific investigation into the plausibility of, and approaches to, creating such systems. It is commonly abbreviated as AI, and is also known as machine intelligence.
Overview
The question of what artificial intelligence is, even as defined above, can be reduced to two parts: "what is the nature of artifice?" and "what is intelligence?" The first question is fairly easy to answer, though it does raise the question of what it is possible to manufacture, given the constraints of particular kinds of system (e.g. classical computational systems), of available manufacturing processes, and of possible limits on human intellect.
The second is much harder, raising questions of consciousness and self, of mind (including the unconscious mind), and of what components are involved in the only type of intelligence it is universally agreed we have available to study: that of human beings. The study of animals, and of artificial systems that are not simply models of what already exists, is also widely considered pertinent.
Several distinct types of artificial intelligence are distinguished below. The subject's divisions, history, proponents, opponents and applications are also described. Finally, references to fictional and non-fictional treatments of AI are provided.
Definitions
One popular definition of artificial intelligence research, put forth by John McCarthy in 1955, is "making a machine behave in ways that would be called intelligent if a human were so behaving." However, this definition seems to ignore the possibility of strong AI (see below). Another definition of artificial intelligence is intelligence arising from an artificial device. Most definitions could be categorized as concerning either systems that think like humans, systems that act like humans, systems that think rationally, or systems that act rationally.
Strong artificial intelligence
Strong artificial intelligence research deals with the creation of some form of computer-based artificial intelligence that can truly reason and solve problems; a strong form of AI is said to be sentient, or self-aware. In theory, there are two types of strong AI:
- Human-like AI, in which the computer program thinks and reasons much like a human mind.
- Non-human-like AI, in which the computer program develops a totally non-human sentience, and a non-human way of thinking and reasoning.
Weak artificial intelligence
Weak artificial intelligence research deals with the creation of some form of computer-based artificial intelligence that cannot truly reason and solve problems; such a machine would, in some ways, act as if it were intelligent, but it would not possess true intelligence or sentience.
To date, much of the work in this field has been done with computer simulations of intelligence based on predefined sets of rules. Very little progress has been made in strong AI. Depending on how one defines one's goals, a moderate amount of progress has been made in weak AI.
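The rule-based style of weak AI mentioned above can be illustrated with a toy ELIZA-style sketch: a fixed table of surface patterns paired with canned replies, with no understanding behind them. The patterns below are invented for illustration and are not ELIZA's actual script.

```python
import re

# A weak-AI chatbot as a predefined set of rules: each rule is a
# (pattern, response-template) pair. The program matches surface text
# only; it has no model of meaning.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(sentence):
    """Return the reply of the first matching rule, or a stock phrase."""
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(respond("I am sad about my job"))  # matched by the first rule
```

Such a program can appear conversational while possessing nothing that could be called intelligence or sentience, which is precisely the weak AI position.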
Development of AI theory
Much of the original focus of artificial intelligence research drew on an experimental approach to psychology, and emphasized what may be called linguistic intelligence (best exemplified by the Turing test).
Approaches to artificial intelligence that do not focus on linguistic intelligence include robotics and collective intelligence approaches, which focus on active manipulation of an environment, or consensus decision making, and draw from biology and political science when seeking models of how "intelligent" behavior is organized.
Artificial intelligence theory also draws from animal studies, in particular with insects, which are easier to emulate as robots (see artificial life), as well as animals with more complex cognition. AI researchers argue that animals, which are simpler than humans, ought to be considerably easier to mimic. But satisfactory computational models for animal intelligence are not available.
Seminal papers advancing the concept of machine intelligence include A Logical Calculus of the Ideas Immanent in Nervous Activity (1943), by Warren McCulloch and Walter Pitts; Computing Machinery and Intelligence (1950), by Alan Turing; and Man-Computer Symbiosis (1960), by J.C.R. Licklider. See cybernetics and Turing test for further discussion.
There were also early papers denying the possibility of machine intelligence on logical or philosophical grounds, such as Minds, Machines and Gödel (1961) by John Lucas.
With the development of practical techniques based on AI research, advocates of AI have argued that opponents of AI have repeatedly changed their position on tasks such as computer chess or speech recognition that were previously regarded as "intelligent" in order to deny the accomplishments of AI. They point out that this moving of the goalposts effectively defines "intelligence" as "whatever humans can do that machines cannot".
John von Neumann (quoted by E.T. Jaynes) anticipated this in 1948 by saying, in response to a comment at a lecture that it was impossible for a machine to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" Von Neumann was presumably alluding to the Church-Turing thesis, which states that any effective procedure can be simulated by a (generalized) computer.
In 1969 McCarthy and Hayes started the discussion about the frame problem with their essay, "Some Philosophical Problems from the Standpoint of Artificial Intelligence".
Experimental AI research
Artificial intelligence began as an experimental field in the 1950s with such pioneers as Allen Newell and Herbert Simon, who founded the first artificial intelligence laboratory at Carnegie-Mellon University, and McCarthy and Marvin Minsky, who founded the MIT AI Lab in 1959. They all attended the Dartmouth College summer AI conference in 1956, which was organized by McCarthy, Minsky, Nathaniel Rochester of IBM and Claude Shannon.
Historically, there are two broad styles of AI research: the "neats" and the "scruffies". "Neat", classical or symbolic AI research generally involves symbolic manipulation of abstract concepts, and is the methodology used in most expert systems. Parallel to this are the "scruffy", or "connectionist", approaches, of which neural networks are the best-known example; these try to "evolve" intelligence by building systems and then improving them through some automatic process, rather than by systematically designing something to complete the task. Both approaches appeared very early in AI history. Throughout the 1960s and 1970s scruffy approaches were pushed to the background, but interest was regained in the 1980s when the limitations of the "neat" approaches of the time became clearer. However, it has become clear that contemporary methods using both broad approaches have severe limitations.
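The contrast can be made concrete with a minimal connectionist sketch: a single perceptron that is never given explicit rules for the logical AND function, but improves its weights automatically from labelled examples. This is an illustrative toy, not a reconstruction of any historical system.

```python
# Minimal perceptron: "connectionist" learning of the AND function.
# The weights start at zero and are adjusted automatically from
# examples, rather than being designed symbolically by hand.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # input weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # +1, 0 or -1
            w[0] += lr * err * x1       # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
print([predict(w, b, x1, x2) for (x1, x2), _ in examples])  # learned AND
```

After a few passes over the four examples, the weights settle on a line separating (1, 1) from the other inputs. Replacing the targets with XOR, which is not linearly separable, makes the same loop fail to converge; that is the classic limitation, pointed out by Minsky and Papert in 1969, that helped push connectionist approaches to the background.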
Artificial intelligence research was very heavily funded in the 1980s by the Defense Advanced Research Projects Agency in the United States and by the Fifth Generation Project in Japan. The failure of the work funded at the time to produce immediate results, despite the grandiose promises of some AI practitioners, led to correspondingly large cutbacks in funding by government agencies in the late 1980s, leading to a general downturn in activity in the field known as AI winter. Over the following decade, many AI researchers moved into related areas with more modest goals such as machine learning, robotics, and computer vision, though research in pure AI continued at reduced levels.
Practical applications of AI techniques
Whilst progress towards the ultimate goal of human-like intelligence has been slow, many spinoffs have come in the process. Notable examples include the languages LISP and Prolog, which were invented for AI research but are now used for non-AI tasks. Hacker culture first sprang from AI laboratories, in particular the MIT AI Lab, home at various times to such luminaries as McCarthy, Minsky, Seymour Papert (who developed Logo there) and Terry Winograd (who abandoned AI after developing SHRDLU).
Many other useful systems have been built using technologies that at least once were active areas of AI research. Some examples include:
- Chinook was declared the Man-Machine World Champion in checkers (draughts) in 1994.
- Deep Blue, a chess-playing computer, beat Garry Kasparov in a famous match in 1997.
- Fuzzy logic, a technique for reasoning under uncertainty, has been widely used in industrial control systems.
- Expert systems are used to some extent in industry.
- Machine translation systems such as SYSTRAN are widely used, although results are not yet comparable with human translators.
- Neural networks have been used for a wide variety of tasks, from intrusion detection systems to computer games.
- Optical character recognition systems can convert typewritten text in European scripts into machine-readable form.
- Handwriting recognition is used in millions of personal digital assistants.
- Speech recognition is commercially available and is widely deployed.
- Computer algebra systems, such as Mathematica and Macsyma, are commonplace.
- Machine vision systems are used in many industrial applications.
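As an illustration of the fuzzy-logic entry above, a toy thermostat controller is sketched below: membership functions map a crisp temperature onto overlapping "cold" and "hot" sets, and the output blends the rule conclusions in proportion to membership. The membership ranges and rules are invented for illustration, not taken from any real industrial controller.

```python
def clamp01(x):
    return max(0.0, min(1.0, x))

def cold(temp_c):
    # Fully 'cold' at or below 15 deg C, fading to 0 by 25 deg C.
    return clamp01((25.0 - temp_c) / 10.0)

def hot(temp_c):
    # Fully 'hot' at or above 25 deg C, rising from 0 at 15 deg C.
    return clamp01((temp_c - 15.0) / 10.0)

def heater_power(temp_c):
    """Two fuzzy rules: IF cold THEN power = 100; IF hot THEN power = 0.
    The crisp output is the membership-weighted average of the rule
    conclusions (the memberships above always sum to at least 1)."""
    mu_cold, mu_hot = cold(temp_c), hot(temp_c)
    return (mu_cold * 100.0 + mu_hot * 0.0) / (mu_cold + mu_hot)

print(heater_power(10.0), heater_power(20.0), heater_power(30.0))
```

At 20 deg C both rules fire with strength 0.5, so the controller outputs 50% power; this smooth interpolation between rules, rather than a crisp threshold, is what makes fuzzy control attractive for industrial use.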
The vision of artificial intelligence replacing human professional judgment has arisen many times in the history of the field; today, expert systems are used to augment or replace professional judgment in some areas of engineering and medicine.
Philosophical criticisms of AI
Several philosophers, notably John Searle and Hubert Dreyfus, have argued on philosophical grounds against the feasibility of building human-like consciousness or intelligence in a disembodied machine. Searle is best known for his Chinese room argument, which claims to demonstrate that even a machine that passed the Turing test would not necessarily be conscious in the human sense. Dreyfus, in his book What Computers Can't Do, has argued that consciousness cannot be captured by rule- or logic-based systems or by systems that are not attached to a physical body, but leaves open the possibility that a robotic system using neural networks or similar mechanisms might achieve artificial intelligence.
Other philosophers hold opposing views. Many see no problem with Weak AI, and there is much support for Strong AI as well. Daniel C. Dennett argues in Consciousness Explained that if there is no magic spark or soul, then Man is just a machine, and he asks why the Man-machine should have a privileged position over all other possible machines when it comes to intelligence.
Some philosophers hold that if Weak AI is accepted as possible, then Strong AI must be accepted as possible too. The Weak AI position, that intelligence might be apparent without being real, is disputed by many; one accessible discussion can be found in Simon Blackburn's introduction to philosophy, Think. Blackburn points out that a person might appear intelligent while there is no way of telling whether that intelligence is real (whatever that means in this context): we have to take it on trust.
Supporters of Strong AI claim that the anti-AI argument boils down in the end to arrogance (a privileged position is claimed, a magic spark is introduced, perhaps by God) or to definition (by defining intelligence as that of which machines are incapable) or to both.
One argument supporting Strong AI runs as follows; those who deny its possibility must attack one of its premises:
- Given that the mind is the software running on the hardware of the brain, and
- Given the Church-Turing thesis,
- The possibility of Strong AI must be accepted.
Some (including Roger Penrose) attack the Church-Turing thesis. Others say the mind is not (entirely) physical.
Hypothetical consequences of AI
Some observers foresee the development of systems that are far more intelligent and complex than anything currently known. One name for these hypothetical systems is artilects. With the introduction of artificially intelligent non-deterministic systems, many ethical issues will arise. Many of these issues have never been encountered by humanity.
Over time, debates have focused less on "possibility" and more on "desirability", as emphasized in the "Cosmist" (versus "Terran") debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to de Garis, seeks to build more intelligent successors to the human species. The emergence of this debate suggests that desirability questions may also have influenced some of the early thinkers who argued against the possibility of AI.
Some issues that bring up interesting ethical questions are:
- Determining the sentience of a system we create.
- Turing test
- Cognition
- Why do we have a need to categorize these systems at all?
- Can AI be defined in a graded sense?
- Freedoms and rights for these systems
- Can AIs be "smarter" than humans in the same way that we are "smarter" than other animals?
- Designing systems that are far more intelligent than any one human
- Deciding how much safe-guards to design into these systems
- Determining how much learning capability a system needs to replicate human thought, or how well it could perform tasks without it (e.g. an expert system)
- The Singularity
Sub-fields of AI research
- Combinatorial search
- Computer vision
- Expert system
- Genetic programming
- Genetic algorithm
- Knowledge representation
- Machine learning
- Machine planning
- Neural network
- Natural language processing
- Program synthesis
- Robotics
- Artificial life
- Artificial being
- Distributed Artificial Intelligence
- Swarm Intelligence
Computer programs and robots displaying some degree of "intelligence"
There are many examples of programs displaying some degree of intelligence. Some of these are:
- ALICE
- Alan
- ELIZA
- Creatures
- Cyc
- The Start Project
- X-Ray Vision for Surgeons
- Steve Grand's Lucy, the latest creation of the creator of Creatures, as covered in a BBC news story.
Artificial intelligence in literature and movies
- HAL 9000 in 2001: A Space Odyssey
- HARLIE in When H.A.R.L.I.E. Was One by David Gerrold
- A.I.: Artificial Intelligence
- Artificial intelligence -- mainly its philosophical implications and its impact on the humanities -- is a major theme in David Lodge's campus novel Thinks ... (2001).
- Rosie and other robots from The Jetsons
- Mike in The Moon is a Harsh Mistress by Robert A. Heinlein
- Neuromancer
- Various novels by Isaac Asimov featuring the Three Laws of Robotics, still considered among the best in the genre.
- Ghost in the Shell
- The Matrix and sequels
- The Terminator series
- Short Circuit
- Various Star Trek "characters", notably Data.
- Deep Thought in The Hitchhiker's Guide to the Galaxy
- The bomb in Dark Star (1974, by John Carpenter)
- The Turing Option, a novel by Harry Harrison and Marvin Minsky
- The Mind's I edited by Daniel C. Dennett and Douglas Hofstadter
- The personoids in Stanislaw Lem's novels, perhaps the most "advanced" treatment of AI in science fiction.
See also List of fictional robots
Non-fiction books about artificial intelligence
- Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter
- Shadows of the Mind and The Emperor's New Mind by Roger Penrose
- Consciousness Explained by Daniel C. Dennett
- The Age of Spiritual Machines by Ray Kurzweil
- Understanding Understanding: Essays on Cybernetics and Cognition by Heinz von Foerster
- In the Image of the Brain: Breaking the Barrier Between Human Mind and Intelligent Machines by Jim Jubak
Related articles
- Collective intelligence - the idea that a relatively large number of people co-operating in one process can lead to reliable action.
- Quantum mind - the idea that large-scale quantum coherence is necessary to understand the brain.
- the Singularity - a time at which technological progress accelerates beyond the ability of current-day human beings to understand it, or the point in time of the emergence of smarter-than-human intelligence.
- Mindpixel - A project to collect simple true / false assertions and collaboratively validate them with the aim of using them as a body of human common sense knowledge that can be utilised by a machine.
Sources
- John McCarthy: Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.
AI Researchers
There are many thousands of AI researchers around the world, at hundreds of research institutions and companies, many of whom have made significant contributions to the field.
AI related organizations
- American Association for Artificial Intelligence
- European Coordinating Committee for Artificial Intelligence
- The Association for Computational Linguistics
- Artificial Intelligence Student Union
- German Research Center for Artificial Intelligence, DFKI GmbH
- Association for Uncertainty in Artificial Intelligence
- Singularity Institute for Artificial Intelligence
External links
- AI Algorithm Steps
- AI Depot -- community discussion, news, and articles
- Loebner Prize website
- AIWiki - a wiki devoted entirely to Artificial Intelligence.
- AIAWiki - a wiki devoted to Artificial Intelligence algorithms and research.
- Artificial Intelligence
- Mindpixel "The Planet's Largest Artificial Intelligence Effort"
- OpenMind CommonSense "Teaching computers the stuff we all know"
- Artificially Intelligent Ouija Board - creative example of human-like AI
- German Thesis - a philosophical dissertation (in German) arguing for the impossibility of AI in principle. The claim is that intelligence presupposes spirit, and that it is not possible for humans to create a spirit; hence an intelligent being can at best be technically imitated.