MIT scholars awarded seed grants to probe the social implications of generative AI
In July, MIT President Sally Kornbluth and Provost Cynthia Barnhart issued a call for papers to “articulate effective roadmaps, policy recommendations, and calls for action across the broad domain of generative AI.”
Over the next month, they received an influx of responses from every school at MIT proposing to explore generative AI’s potential applications and impact across areas ranging from climate and the environment to education, health care, companionship, music, and literature.
Now, 27 proposals have been selected to receive exploratory funding. Co-authored by interdisciplinary teams of faculty and researchers affiliated with all five of the Institute’s schools and the MIT Schwarzman College of Computing, the proposals represent a sweeping array of perspectives for exploring the transformative potential of generative AI, both positive and negative, for society.
“In the past year, generative AI has captured the public imagination and raised countless questions about how this rapidly advancing technology will affect our world,” Kornbluth says. “This summer, to help shed light on those questions, we offered our faculty seed grants for the most promising ‘impact papers’ — basically, proposals to pursue intensive research on some aspect of how generative AI will shape people’s lives and work. I’m thrilled to report that we received 75 proposals in short order, across an enormous spectrum of fields and very often from interdisciplinary teams. With the seed grants now awarded, I cannot wait to see how our faculty expand our understanding and illuminate the potential impacts of generative AI.”
Each selected research group will receive between $50,000 and $70,000 to create 10-page impact papers that will be due by Dec. 15. Those papers will be shared widely via a publication venue managed and hosted by the MIT Press and the MIT Libraries.
The papers were reviewed by a committee of 19 faculty representing a dozen departments. Reflecting generative AI’s wide-ranging impact beyond the technology sphere, 11 of the selected proposals have at least one author from the School of Humanities, Arts, and Social Sciences. All submissions were reviewed initially by three members of the committee, with professors Caspar Hare, Dan Huttenlocher, Asu Ozdaglar, and Ron Rivest making final recommendations.
“It was exciting to see the broad and diverse response that the call for papers generated,” says Ozdaglar, who is also deputy dean of the MIT Schwarzman College of Computing and the head of the Department of Electrical Engineering and Computer Science. “Our faculty have contributed some truly innovative ideas. We are hoping to capitalize on the current momentum around this topic and to support our faculty in turning these abstracts into impact papers that are accessible to broad audiences beyond academia and that can help inform public conversation in this important area.”
The robust response has already spurred new collaborations, and an additional call for proposals will be made later this semester to further expand the scope of generative AI research on campus. Many of the selected proposals act as roadmaps for broad lines of inquiry at the intersection of generative AI and other fields. Indeed, committee members characterized these papers as the beginning of much more research.
“Our goal with this call was to spearhead further exciting work for thinking about the implications of new AI technologies and how to best develop and use them,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing. “We also wanted to encourage new pathways for collaboration and information exchange across MIT.”
Thomas Tull, a member of the MIT School of Engineering Dean’s Advisory Council and a former innovation scholar at the School of Engineering, contributed to the effort.
“While there is no doubt the long-term implications of AI will be enormous, because it is still in its nascent stages, it has been the subject of endless speculation and countless articles — both positive and negative,” says Tull. “As such, I felt strongly about funding an effort involving some of the best minds in the country to facilitate a meaningful public discourse on this topic and, ideally, help shape how we think about and best use what is likely the biggest technological innovation in our lifetime.”
The selected papers are:
“Can Generative AI Provide Trusted Financial Advice?” led by Andrew Lo and Jillian Ross;
“Evaluating the Effectiveness of AI-Identification in Human-AI Communication,” led by Athulya Aravind and Gabor Brody (Brown University);
“Generative AI and Research Integrity,” led by Chris Bourg, Sue Kriegsman, Heather Sardis, and Erin Stalberg;
“Generative AI and Equitable AI Pathway Education,” led by Cynthia Breazeal, Antonio Torralba, Kate Darling, Asu Ozdaglar, George Westerman, Aikaterini Bagiati, and Andres Salazar Gomez;
“How to Label Content Produced by Generative AI,” led by David Rand and Adam Berinsky;
“Auditing Data Provenance for Large Language Models,” led by Deb Roy and Alex “Sandy” Pentland;
“Artificial Eloquence: Style, Citation, and the Right to One’s Own Voice in the Age of A.I.,” led by Joshua Brandon Bennett;
“The Climate and Sustainability Implications of Generative AI,” led by Elsa Olivetti, Vivienne Sze, Mohammad Alizadeh, Priya Donti, and Anantha Chandrakasan;
“From Automation to Augmentation: Redefining Engineering Design and Manufacturing in the Age of NextGen AI,” led by Faez Ahmed, John Hart, Simon Johnson, and Daron Acemoglu;
“Advancing Equality: Harnessing Generative AI to Combat Systemic Racism,” led by Fotini Christia, Catherine D’Ignazio, Munzer Dahleh, Marzyeh Ghassemi, Peko Hosoi, and Devavrat Shah;
“Defining Agency for the Era of Generative AI,” led by Graham M. Jones and Arvind Satyanarayan;
“Generative AI and K-12 Education,” led by Hal Abelson, Eric Klopfer, Cynthia Breazeal, and Justin Reich;
“Labor Market Matching,” led by John Horton and Manish Raghavan;
“Towards Robust, End-to-End Explainable, and Lifelong Learnable Generative AI with Large Population Models,” led by Josh Tenenbaum and Vikash Mansinghka;
“Implementing Generative AI in U.S. Hospitals,” led by Julie Shah, Retsef Levi, and Kate Kellogg;
“Direct Democracy and Generative AI,” led by Lily Tsai and Alex “Sandy” Pentland;
“Learning from Nature to Achieve Material Sustainability: Generative AI for Rigorous Bio-inspired Materials Design,” led by Markus Buehler;
“Generative AI to Support Young People in Creative Learning Experiences,” led by Mitchel Resnick;
“Employer Implementation of Generative AI and the Future of Inequality,” led by Nathan Wilmers;
“The Pocket Calculator, Google Translate, and ChatGPT: From Disruptive Technologies to Curricular Innovation,” led by Per Urlaub and Eva Dessein;
“Closing the Execution Gap in Generative AI for Chemicals and Materials: Freeways or Safeguards,” led by Rafael Gomez-Bombarelli, Regina Barzilay, Connor Wilson Coley, Jeffrey Grossman, Tommi Jaakkola, Stefanie Jegelka, Elsa Olivetti, Wojciech Matusik, Mingda Li, and Ju Li;
“Generative AI in the Era of Alternative ‘Facts,’” led by Saadia Gabriel, Marzyeh Ghassemi, Jacob Andreas, and Asu Ozdaglar;
“Who Do We Become When We Talk to Machines? Thinking About Generative AI and Artificial Intimacy, the New AI,” led by Sherry Turkle;
“Bringing Workers’ Voices into the Design and Use of Generative AI,” led by Thomas A. Kochan, Julie Shah, Ben Armstrong, Meghan Perdue, and Emilio J. Castilla;
“Experiment With Microsoft to Understand the Productivity Effect of Copilot on Software Developers,” led by Tobias Salz and Mert Demirer;
“AI for Musical Discovery,” led by Tod Machover; and
“Large Language Models for Design and Manufacturing,” led by Wojciech Matusik.