Transhumanism

From Wikipedia, the free encyclopedia

Transhumanism is an abbreviation of the term 'transitional human', first coined by the futurist FM-2030 to describe the early, teetering steps that we as a species are taking toward becoming posthuman.

Transhumanism also describes an emergent school of speculative philosophy predicated on the idea that the human species represents not the endpoint of evolution but its beginning. It is concerned with investigating the implications of social, technological and scientific means of overcoming physical human limitations, and it is also a movement that argues for fundamentally altering the human condition through these means. Typically, transhumanists believe that rapid advances in technology will lead in the foreseeable future to the creation of artificial intelligence beyond anything conceived of in the Turing Test, and that this will lead inexorably to radical progress in fields such as nanotechnology and sub-molecular engineering.

Critics or opponents of this view typically hold that ethics, not technology, is the key to surviving future advances in technology and its use in weapons. They foresee collective intelligence organizing first and defeating any artificial intelligence that does not share human morality and the human risk of bodily harm.

Some believe that the most notable such opponent, if not critic, is Theodore Kaczynski, the Unabomber, who was convicted of sending parcel bombs to prominent people in key technology industries, killing three people and severely wounding two others. Although he published a long manifesto critiquing the idea of surrendering human powers to machines, it should be noted that Kaczynski wrote in his private journals, "I believe in nothing, I don't even believe in the cult of nature-worshipers or wilderness-worshipers." His doctrine was itself mostly a negation, and his actions did not demonstrate any great breakthrough in ethics.

A more notable critic, if not opponent, is Bill Joy, a co-founder of Sun Microsystems, who argued in his essay "Why the Future Doesn't Need Us" that human beings would guarantee their own extinction by transhuman means. A proponent of transhumanism who shares most of Joy's analysis but not his fears is Hugo de Garis, who nonetheless predicts a "gigadeath war" in which those who seek to remain unaugmented humans, or to remain safe as such, will fight to the death to destroy the proponents of transhumanism, e.g. wave after wave of smarter Unabombers killing every last AI researcher.

According to de Garis, however, the transhuman program is so appealing that it will ultimately survive, and triumph, regardless of violent opposition; it is typical of transhumanists that they regard victory as inevitable, much as Marxists did. This, as the anti-futurist Max Dublin noted, seems to lend such causes a certain fanaticism and nihilism useful in advancing them.

There are at least two transhumanist organisations: the Extropy Institute (http://www.extropy.org) and the World Transhumanist Association (http://www.transhumanism.com). Another good source of information is Anders Sandberg's Transhuman Resources site (http://www.aleph.se/Trans/).