What is ‘original scholarship’ in the age of AI?


While moderating a talk on artificial intelligence last week, Latanya Sweeney posed a thought experiment. Picture three to five years from now. AI companies are continuing to scrape the internet for data to feed their large language models. But unlike today’s internet, which is largely human-generated content, most of that future internet’s content has been generated by … large language models.

The scenario is not far-fetched given the explosive growth of generative AI over the past two years, suggested the Faculty of Arts and Sciences and Harvard Kennedy School professor.

Sweeney’s panel was part of a daylong symposium on AI hosted by the FAS last week that considered questions such as: How are generative AI technologies such as ChatGPT disrupting what it means to own one’s work? How can AI be leveraged thoughtfully while maintaining academic and research integrity? Just how good are these large language model-based programs going to get? (Very, very good.)

“Here at the FAS, we’re in a unique position to explore questions and challenges that come from this new technology,” said Hopi Hoekstra, Edgerley Family Dean of the Faculty of Arts and Sciences, during her opening remarks. “Our community is full of brilliant thinkers, curious researchers, and knowledgeable scholars, all able to lend their variety of expertise to tackling the big questions in AI, from ethics to societal implications.”

In an all-student panel, philosophy and math concentrator Chinmay Deshpande ’24 compared the present moment to the advent of the internet, a revolutionary technology that forced academic institutions to rethink how they test knowledge. “Regardless of what we think AI will look like down the line, I think it’s clear it’s starting to have an impact that’s qualitatively similar to the impact of the internet,” Deshpande said. “And thinking about pedagogy, we should think about AI along somewhat similar lines.”

Students Naomi Bashkansky (from left), Kevin Wei, and Chloe Loughridge discuss their experiences with AI.

Computer science concentrator and master’s degree student Naomi Bashkansky ’25, who is exploring AI safety issues with fellow students, urged Harvard to provide thought leadership on the implications of an AI-saturated world, in part by offering courses that integrate the basics of large language models into subjects like biology or writing.

Harvard Law School student Kevin Wei agreed.

“We’re not grappling sufficiently with the way the world will change, and especially the way the economy and labor market will change, with the rise of generative AI systems,” Wei said. “Anything Harvard can do to take a leading role in doing that … in discussions with government, academia, and civil society … I would like to see a much larger role for the University.”

The day opened with a panel on original scholarship, co-sponsored by the Mahindra Humanities Center and the Edmond & Lily Safra Center for Ethics. Panelists explored the ethics of authorship in an age of instant access to information and blurred lines of citation and copyright, and how those considerations vary across disciplines.

David Joselit, the Arthur Kingsley Porter Professor of Art, Film, and Visual Studies, said the challenges wrought by AI have precedent in the history of art: The idea of “authorship” has been undermined in the modern era because artists have often treated the idea, rather than its physical execution, as what counts as the artwork. “It seems to me that AI is a mechanization of that kind of distribution of authorship,” Joselit said. He posed the idea that AI should be understood “as its own genre, not exclusively as a tool.”

Another symposium session reviewed Harvard Library survey research on law, information policy, and AI, revealing how students are using AI for academic work. Administrators from across the FAS shared examples of how they are experimenting with AI tools to boost their productivity, panelists from the Bok Center described how AI has been used in teaching this year, and Harvard University Information Technology offered insight into tools it is building to support instructors.

The ground floor of the Northwest Building, where the symposium took place, hosted a poster fair drawn from final projects in Sweeney’s course “Tech Science to Save the World,” in which students explored how scientific experimentation and technology can be used to solve real-world problems. Among the posters: “Viral or Volatile? TikTok and Democracy,” and “Campaign Ads in the Age of AI: Can Voters Tell the Difference?”

Students from the inaugural General Education class “Rise of the Machines?” capped the day, sharing final projects illustrating current and future aspects of generative AI.


