Meta recently demonstrated Galactica, an AI model designed to “store, combine and reason about scientific knowledge.” But testers found it could also generate plausible-sounding yet nonsensical papers that could fool readers. The company quickly took Galactica offline, but experts say it may be a sign of things to come. “Fraudsters can apply AI to generate fake research papers or scientific images,” Hong Zhou, who leads AI research and development for Wiley’s research business, told Lifewire in an email interview. “It is also possible to create fake video speeches by simulating famous scientists using artificial intelligence.”
Fake Science
Meta said the Galactica AI model was intended to help write scientific papers. But in some cases, the model could generate text that was inaccurate or racist. The reaction from the scientific community was immediate and withering. Michael Black, a director at the Max Planck Institute for Intelligent Systems in Germany, tweeted: “In all cases, it was wrong or biased but sounded right and authoritative. I think it’s dangerous.”

In response to the uproar, Meta deactivated the Galactica demonstration. Yann LeCun, Meta’s chief AI scientist, tweeted, “Galactica demo is offline for now. It’s no longer possible to have some fun by casually misusing it.”

The Meta fiasco is part of a larger problem with using AI for science. Flavio Villanustre, global chief information security officer for LexisNexis Risk Solutions, said in an email interview that AI can help identify patterns and correlations and even create derivatives from a training set, but it has limitations. For example, he said, you can train an AI model to predict the weather better. However, you can’t expect that model to discover “more general weather conditions without a human researcher’s thought process associated with this research,” he said.

“On one hand, it is quite difficult for AI to determine if two correlated facts are connected by a causality link (one event causes the second) or they are just correlated due to a third cause (e.g., an increase in drownings is only correlated with increased consumption of ice cream and not caused by it; instead, both are caused by the increased temperature in summer),” he added.

Villanustre also said the current generation of AI tools cannot generalize the way humans can. “While AI in itself is not dangerous to the scientific process, the overreliance on AI may be, since it can turn discovery into a purely deductive approach where nothing new can be learned because inductive thinking has been eliminated,” he added.
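Villanustre’s ice cream example is easy to demonstrate. The short Python sketch below (purely illustrative; the variables and numbers are invented for this example) simulates summer temperature driving both ice cream sales and drownings, then shows that the two series correlate strongly even though neither causes the other:

```python
import numpy as np

rng = np.random.default_rng(42)

# A hidden common cause: daily summer temperature (Celsius).
temperature = rng.normal(loc=28, scale=4, size=1000)

# Both quantities depend on temperature plus independent noise;
# neither depends on the other.
ice_cream_sales = 50 * temperature + rng.normal(0, 100, size=1000)
drownings = 0.3 * temperature + rng.normal(0, 1.5, size=1000)

# The two series are strongly correlated anyway...
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation(ice cream, drownings) = {r:.2f}")  # typically ~0.5-0.6

# ...but controlling for temperature (regressing out the confounder
# and correlating the residuals) makes the link vanish.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, deg=1)
    return y - (slope * x + intercept)

r_partial = np.corrcoef(
    residuals(ice_cream_sales, temperature),
    residuals(drownings, temperature),
)[0, 1]
print(f"partial correlation given temperature = {r_partial:.2f}")  # ~0.0
```

A model trained only on the sales and drowning counts would happily learn the spurious link; it takes a researcher’s hypothesis to identify temperature as the confounder worth controlling for.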
AI for Good
Despite the potential perils of AI, most observers agree that the field can be an enormous boon for science. Zhou pointed out that AI can help researchers solve specific scientific problems. It can be applied to identify weather patterns and predict the effects of climate change based on hundreds of years of historical data. DeepMind’s AlphaFold uses a deep neural network to predict the shapes of proteins, a problem that has challenged researchers for the last 50 years. “With this knowledge, researchers can better understand the role that proteins play in diseases and ultimately improve diagnoses and treatments,” Zhou said.

AI can also support researchers in carrying out their work, he said. It can facilitate many stages of the research journey, including literature review, experiments, collaboration, authoring, submission/peer review, publishing, content discovery, and dissemination of research. “Put simply, AI can do things humans cannot, including quickly and accurately recommending relevant literature or the right collaborators around the world,” Zhou added.

Villanustre predicted that AI will become more valuable as models grow more complex and precise. AI models can also be trained faster and with smaller training data sets, and they are starting to generalize better than ever. In the longer term, the convergence of quantum computing and AI could help train huge and complex models more efficiently. “Further in the future, the development of Artificial General Intelligence models may finally take AI to a whole new level, where inductive reasoning is possible, but this may be several decades away,” Villanustre added.
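Zhou’s literature-recommendation example can be sketched in a few lines. The snippet below is a minimal illustration, not any publisher’s actual system; the sample abstracts are invented. It ranks papers against a query abstract by TF-IDF cosine similarity, a classic baseline behind many recommendation tools:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical abstracts standing in for a real literature corpus.
papers = {
    "Protein structure prediction": "Neural networks predict "
        "three-dimensional protein structure from amino acid sequence.",
    "Climate downscaling": "Statistical models downscale climate "
        "simulations for regional weather prediction.",
    "Peer review automation": "Machine learning assists reviewers by "
        "flagging statistical errors in submissions.",
}

query = "Using neural networks to predict three-dimensional protein folding."

# Embed the corpus and the query in the same TF-IDF space.
vectorizer = TfidfVectorizer(stop_words="english")
corpus_matrix = vectorizer.fit_transform(papers.values())
query_vector = vectorizer.transform([query])

# Rank papers by cosine similarity to the query; the protein-structure
# paper should score highest for this query.
scores = cosine_similarity(query_vector, corpus_matrix)[0]
for title, score in sorted(zip(papers, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {title}")
```

Production recommenders use far richer signals (citations, co-authorship, learned embeddings), but the ranking-by-similarity idea is the same.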