Original Article by Datanami
Artificial intelligence is emerging as an indispensable tool for big data analytics. Still, some high-profile critics worry about “unintended, even potentially disastrous, consequences” should AI manage to get out of control.
Hence, entrepreneur Elon Musk and the Open Philanthropy Project have donated about $7 million to fund a grant program aimed at “keeping AI robust and beneficial” while ensuring the technology “does what we want.”
Musk pledged $10 million for AI research after the Future of Life Institute called for more research on harnessing the technology. Among those also backing the research effort are Facebook, IBM, Microsoft and Google’s DeepMind Technologies.
The largest grant recommendation, totaling $1.5 million, would be used to create a joint “strategic research center” for AI. According to the project summary, researchers from Cambridge and Oxford universities would “develop policies to be enacted by governments, industry leaders, and others in order to minimize risks and maximize benefit from artificial intelligence development.”
The U.K. researchers said they would focus on the “strategic implications of powerful AI systems as they come to exceed human capabilities in most domains of interest, and the policy responses that could best be used to mitigate the potential risks of this technology.”
The institute said the new grant program also would fund 37 individual research projects selected from among nearly 300 applicants. Grant totals range from $20,000 to more than $342,000 each.
Meanwhile, a handful of projects will focus on areas like “probabilistic inference engines for autonomous agents” (Stanford University) and “Aligning Superintelligence With Human Interests” (Machine Intelligence Research Institute). Another Stanford research project will examine ways to make AI performance more predictable via “failure detection and robustness.”
The largest individual grant went to a University of California at Berkeley research project to study “value alignment and moral metareasoning.” UC-Berkeley and Oxford University researchers also will develop techniques for AI systems to learn human preferences by observing behavior.
A $200,000 grant to the University of Michigan will be used to understand and mitigate “AI threats” to the financial system.
Among the most troubling AI scenarios are autonomous weapons that could be programmed to select their own targets. A University of Denver researcher will investigate AI, lethal autonomous weapons and “meaningful human control.”
Skype founder Jaan Tallinn, a co-founder of the Future of Life Institute, noted in a statement announcing the grants: “Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”
AI researchers said the grants would shift the focus of research from the “known unknowns” of the 1980s to “unknown unknowns.” “How can an AI system behave carefully and conservatively in a world populated by unknown unknowns—aspects that the designers of the AI system have not anticipated at all?” said Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence.