Connor Leahy

From Wikipedia, the free encyclopedia

Connor Leahy is a German-American[1] artificial intelligence researcher and entrepreneur known for cofounding EleutherAI[2][3] and being CEO of AI safety research company Conjecture.[4][5][6] He has warned of the existential risk from artificial general intelligence, and has called for regulation such as "a moratorium on frontier AI runs" implemented through a cap on compute.[7]

Career

In 2019, Leahy reverse-engineered GPT-2 in his bedroom, and later co-founded EleutherAI to attempt to replicate GPT-3.[2]

Leahy is sceptical of reinforcement learning from human feedback as a solution to the alignment problem: "These systems, as they become more powerful, are not becoming less alien. If anything, we're putting a nice little mask on them with a smiley face. If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding."[8]

He was one of the signatories of the 2023 open letter from the Future of Life Institute calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."[9][10]

In November 2023, Leahy was invited to speak at the inaugural AI Safety Summit. He worried that the summit would fail to deal with the risks from "god like AI" stemming from the AI alignment problem, arguing that "If you build systems that are more capable than humans at manipulation, business, politics, science and everything else, and we do not control them, then the future belongs to them, not us." He cofounded the campaign group ControlAI to advocate for governments to implement a pause on the development of artificial general intelligence.[4] Leahy has likened the regulation of artificial intelligence to that of climate change, arguing that "it's not the responsibility of oil companies to solve climate change", and that governments must step in to solve both issues.[3]

References

  1. "Memes tell the story of a secret war in tech. It's no joke". ABC News. 2024-02-17. Retrieved 2024-07-01.
  2. Smith, Tim (2023-03-29). "'We are super, super fucked': Meet the man trying to stop an AI apocalypse".
  3. Pringle, Eleanor. "Asking Big Tech to police AI is like turning to 'oil companies to solve climate change,' AI researcher says". Fortune. Retrieved 2024-08-06.
  4. Stacey, Kiran; Milmo, Dan (2023-10-20). "Sunak's global AI safety summit risks achieving very little, warns tech boss". The Guardian. ISSN 0261-3077. Retrieved 2024-07-01.
  5. "Superintelligent AI: Transhumanism etc". Financial Times. 2023-12-05. Retrieved 2024-08-06.
  6. Werner, John. "Can We Handle Ubertechnology? Yann LeCun And Others On Controlling AI". Forbes. Retrieved 2024-08-06.
  7. Perrigo, Billy (2024-01-19). "Researcher: To Stop AI Killing Us, First Regulate Deepfakes". TIME. Retrieved 2024-07-01.
  8. Perrigo, Billy (2023-02-17). "Bing's AI Is Threatening Users. That's No Laughing Matter". TIME. Retrieved 2024-07-20.
  9. Evans, Greg (2023-03-29). "Elon Musk & Steve Wozniak Sign Open Letter Calling For Moratorium On Some Advanced A.I. Systems". Deadline. Retrieved 2024-07-01.
  10. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2024-07-01.