Machine Intelligence Research Institute

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence.

MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

Machine Intelligence Research Institute
Formation: 2000
Type: Nonprofit research institute
Purpose: Research into friendly artificial intelligence and the AI control problem
Location: Berkeley, California, U.S.
Key people: Eliezer Yudkowsky
Website: intelligence.org

History

Yudkowsky at Stanford University in 2006

In 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins, with the purpose of accelerating the development of artificial intelligence (AI). However, Yudkowsky grew concerned that future AI systems could become superintelligent and pose risks to humanity, and in 2005 the institute moved to Silicon Valley and began to focus on identifying and managing those risks, which at the time were largely ignored by scientists in the field.

Starting in 2006, the Institute organized the Singularity Summit to discuss the future of AI including its risks, initially in cooperation with Stanford University and with funding from Peter Thiel. The San Francisco Chronicle described the first conference as a "Bay Area coming-out party for the tech-inspired philosophy called transhumanism". In 2011, its offices were four apartments in downtown Berkeley. In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University, and in the following month took the name "Machine Intelligence Research Institute".

In 2014 and 2015, public and scientific interest in the risks of AI grew, increasing donations to fund research at MIRI and similar organizations.

In 2019, Open Philanthropy recommended a general-support grant of approximately $2.1 million over two years to MIRI. In April 2020, Open Philanthropy supplemented this with a $7.7 million grant over two years.

In 2021, Vitalik Buterin donated several million dollars' worth of Ethereum to MIRI.

Research and approach

Nate Soares presenting an overview of the AI alignment problem at Google in 2016

MIRI's approach to identifying and managing the risks of AI, led by Yudkowsky, primarily addresses how to design friendly AI, covering both the initial design of AI systems and the creation of mechanisms to ensure that evolving AI systems remain friendly.

MIRI researchers advocate early safety work as a precautionary measure, but are skeptical of the view of singularity advocates such as Ray Kurzweil that superintelligence is "just around the corner". MIRI has funded forecasting work through an initiative called AI Impacts, which studies historical instances of discontinuous technological change, and has developed new measures of the relative computational power of humans and computer hardware.

MIRI aligns itself with the principles and objectives of the effective altruism movement.

Works by MIRI staff

  • Graves, Matthew (8 November 2017). "Why We Should Be Concerned About Artificial Superintelligence". Skeptic. The Skeptics Society. Retrieved 28 July 2018.
  • LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014). "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop. AAAI Publications.
  • Soares, Nate; Levinstein, Benjamin A. (2017). "Cheating Death in Damascus" (PDF). Formal Epistemology Workshop (FEW). Retrieved 28 July 2018.
  • Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer; Armstrong, Stuart (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.
  • Soares, Nate; Fallenstein, Benja (2015). "Aligning Superintelligence with Human Interests: A Technical Research Agenda" (PDF). In Miller, James; Yampolskiy, Roman; Armstrong, Stuart; et al. (eds.). The Technological Singularity: Managing the Journey. Springer.
  • Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). In Bostrom, Nick; Ćirković, Milan (eds.). Global Catastrophic Risks. Oxford University Press. ISBN 978-0199606504.
  • Taylor, Jessica (2016). "Quantilizers: A Safer Alternative to Maximizers for Limited Optimization". Workshops at the Thirtieth AAAI Conference on Artificial Intelligence.
  • Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI" (PDF). Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Berlin: Springer.
