Amid a worldwide race for supremacy in artificial intelligence, Stanford University on Monday will unveil a new institute dedicated to using AI to build the best possible future.
The Stanford Institute for Human-Centered Artificial Intelligence is co-directed by Fei-Fei Li, a Stanford computer science professor and former chief scientist for AI at Google, and John Etchemendy, a philosophy professor and former Stanford provost.
“The scope and scale of impact of the Age of AI will be more profound than any other period of transformation in our history,” Li and co-director John Etchemendy said in an online note about the new institute. “AI has the potential to radically transform every industry and every society.”
The institute will take advantage of Stanford’s strength in a variety of disciplines, including AI, computer science, engineering, robotics, business, economics, genomics, law, literature, medicine, neuroscience and philosophy, according to promotional materials.
“Our goal is for Stanford HAI to become an interdisciplinary, global hub for AI thinkers, learners, researchers, developers, builders and users from academia, government and industry, as well as leaders and policymakers who want to understand and leverage AI’s impact and potential,” the institute said.
Microsoft co-founder Bill Gates is scheduled to deliver the keynote speech at Monday’s official launch.
Artificial intelligence is, essentially, software that can “see,” “hear” and “think” in ways that often mimic human processes, but at hyper-speed and, theoretically, with far greater accuracy. Rapid advances in the technology have sparked worries that it will eventually become “smart” and perceptive enough to escape human control and wreak havoc on humanity. More immediate concerns center on the ethics of letting algorithms make decisions, as in “predictive policing,” and on the harmful results algorithms can produce when their training data carries human biases that the people supplying the data may not even recognize.
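To make that last concern concrete, here is a minimal, hypothetical sketch in Python. The scenario, data and naive “model” are invented for illustration and are not drawn from the institute’s materials; it simply shows how a system trained on records skewed by where data was collected ends up reproducing that skew as a prediction.

```python
# Hypothetical illustration of bias propagation (not from the article or Stanford HAI).
# Assumption: neighborhood "A" was patrolled more heavily, so it generated more
# recorded incidents, even if underlying behavior in "A" and "B" is similar.

from collections import Counter

# Skewed historical records: (neighborhood, recorded outcome)
training_data = (
    [("A", "arrest")] * 80 + [("A", "no_arrest")] * 20 +
    [("B", "arrest")] * 20 + [("B", "no_arrest")] * 80
)

# A naive "model": predict the most common historical outcome for each neighborhood.
outcomes_by_area = {}
for area, outcome in training_data:
    outcomes_by_area.setdefault(area, Counter())[outcome] += 1

model = {area: counts.most_common(1)[0][0] for area, counts in outcomes_by_area.items()}

print(model)  # {'A': 'arrest', 'B': 'no_arrest'} -- the sampling bias is now the "policy"
```

The point of the sketch is that no one wrote a biased rule; the skew in the input data alone is enough to produce a skewed decision.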
“Artificial Intelligence has the potential to help us realize our shared dream of a better future for all of humanity, but it will bring with it challenges and opportunities we can’t yet foresee,” the institute said.
In the technology industry, AI is widely viewed as central to virtually all future products and services. Google in 2017 said it was going “AI first,” and went so far as to rebrand its research unit as “Google AI.” The technology is also considered essential to companies’ and nations’ success, and a worldwide battle to attract the best AI talent and develop the most lucrative and useful AI technologies pits the U.S. against powerhouse China, and even Canada, which has led the way on machine learning and invests heavily in AI research.
Stanford’s AI institute will work in partnership with a number of other university facilities and initiatives, including the Center for AI Safety, the Center for Ethics in Society, the Center for International Security and Cooperation, and the Stanford Institute for Economic Policy Research, plus AI4ALL, which aims to boost diversity in AI fields.
The university has a history of creating effective cross-disciplinary centers “where people from different departments can work on issues that cross boundaries,” said Steve Blank, a professor of management science and engineering at Stanford and a lecturer at U.C. Berkeley’s business school. The AI institute is a “great idea,” Blank said, but added that the weaponization of social media by authoritarian regimes holds a lesson that institute officials should take to heart. Historically, developers of new technology have not been held accountable for its misuse, Blank said.
“I’m hoping that whatever they do they have export controls … for making sure that things aren’t funded and used in places where it could harm the very people it’s intended to help,” Blank said. “I’ve just lived long enough to see almost everything we were jumping up and down about … turned out to have consequences way beyond what the technologists imagined.”
The 78 faculty members assigned to the institute reflect the diversity of fields the university intends to cover in its research and teaching, coming from disciplines including computer science, medicine, law, business, economics, environmental science, linguistics, political science and philosophy. Although the institute highlights the importance of AI being “broadly representative of humanity” across gender, ethnicity, nationality, culture and age, its faculty also reflect the gender gap in technology — only 18 percent are women. About three quarters of the faculty are white.
Courses will include “The Politics of Algorithms,” “Theoretical Neuroscience,” “AI-assisted Health Care” and “Regulating Artificial Intelligence.”
Monday’s launch of the institute includes a symposium with panels featuring Stanford faculty, industry leaders and high-profile academics from other institutions.