Daniel Kokotajlo (researcher)

From Wikipedia, the free encyclopedia

Daniel Kokotajlo is an artificial intelligence (AI) researcher. He was a researcher in the governance division of OpenAI from 2022 to 2024,[1] and currently leads the AI Futures Project.[2]

Biography

Kokotajlo is a former philosophy PhD candidate at the University of North Carolina at Chapel Hill, where he was a recipient of the 2019–2020 Maynard Adams Fellowship for the Public Humanities.[3] In 2022, he became a researcher in the governance division of OpenAI.[1]

Kokotajlo is one of the organizers of a group of OpenAI employees who claimed that the company has a secretive and reckless culture and is taking grave risks in its rush to achieve artificial general intelligence (AGI).[4][5] When he resigned in 2024, he refused to sign OpenAI's non-disparagement clause, even though doing so could have cost him approximately $2 million in vested equity.[6] As of May 2024, Kokotajlo confirmed that he had retained the vested equity.[7][8] In June 2024, he and other former OpenAI employees signed a letter arguing that top frontier AI companies have strong financial incentives to avoid oversight, and calling for a "right to warn" about AI risks without fear of reprisal and with protections for anonymity.[9]

In 2021, Kokotajlo wrote a blog post titled "What 2026 Looks Like". In 2025, Kevin Roose commented that "A number of his predictions proved prescient."[2] Kokotajlo cofounded and leads the AI Futures Project, a nonprofit based in Berkeley, California, that researches the future impact of artificial intelligence. In April 2025, the organization released "AI 2027", a detailed forecast scenario predicting rapid progress in the automation of coding and AI research, followed by AGI; it predicts that fully autonomous AI agents will be better than humans at "everything" around the end of 2027.[2]

References

  1. Pillay, Tharin (September 5, 2024). "TIME100 AI 2024: Daniel Kokotajlo". TIME.
  2. Roose, Kevin (April 3, 2025). "This A.I. Forecast Predicts Storms Ahead". The New York Times. ISSN 0362-4331. Retrieved May 21, 2025.
  3. "2019-2020 E. Maynard Adams Fellows for the Public Humanities".
  4. "OpenAI Insiders Warn of a 'Reckless' Race for Dominance". The New York Times. June 4, 2024. Archived from the original on June 5, 2024. Retrieved April 19, 2025.
  5. Goldman, Sharon. "OpenAI's AGI safety team has been gutted, says ex-researcher". Fortune.
  6. Pillay, Tharin (September 5, 2024). "TIME100 AI 2024: Daniel Kokotajlo". TIME. Retrieved May 6, 2025.
  7. Piper, Kelsey (May 22, 2024). "Leaked OpenAI documents reveal aggressive tactics toward former employees". Vox. Archived from the original on June 1, 2024. Retrieved May 6, 2025.
  8. "Will Daniel Kokotajlo get back the equity he gave up through not signing an NDA?". Manifold. Archived from the original on June 15, 2024. Retrieved May 6, 2025.
  9. "A Right to Warn about Advanced Artificial Intelligence". righttowarn.ai. Archived from the original on April 30, 2025. Retrieved May 6, 2025.