
Will AI help or hinder trust in science?


by
John Whittle and Stefan Harrer

April 23, 2024
6 minute read

In the past year, generative AI tools – such as ChatGPT, Gemini and OpenAI’s video creation tool Sora – have captured the public’s imagination.

All that’s needed to start experimenting with AI is an internet connection and a web browser. You can interact with the AI as you would with a human assistant: by talking to it, writing to it, showing it images or videos, or all of the above.

As public knowledge of AI increases, so too will public scrutiny of how it is used by scientists.

© Unsplash

While this capability is completely new territory for the general public, scientists have used AI as a tool for many years.

But as public knowledge of AI increases, so too will public scrutiny of how it is used by scientists.

AI has already revolutionized science – six percent of all scientific work benefits from AI, not only in computer science but in chemistry, physics, psychology and environmental science.

Nature, one of the most prestigious scientific journals in the world, included ChatGPT in its Nature’s 10 list of the most influential – and, until then, exclusively human – scientists of 2023.

The use of artificial intelligence in science is twofold.

At one level, AI can make scientists more productive.

When Google DeepMind released an AI-generated dataset of more than 380,000 new physical compounds, Lawrence Berkeley Lab used AI to run compound synthesis experiments at a scale far beyond what humans could achieve.

But AI has an even greater potential: enabling scientists to make discoveries that would otherwise never be made.

It was an AI algorithm that first found signal patterns in brain activity data that flag the onset of epileptic seizures – a feat that even the most experienced human neurologist cannot replicate.

Early success stories of the use of AI in science have led some to envision a future in which scientists collaborate with AI scientific assistants as part of their daily work.

This future is already here. CSIRO researchers are experimenting with AI science agents and have developed robots that can follow spoken-language instructions to carry out scientific tasks during fieldwork.

While modern AI systems are impressively powerful – especially so-called artificial general intelligence tools such as ChatGPT and Gemini – they also have drawbacks.

Generative AI systems are prone to “hallucinations”, where they make up facts.

Or they can be biased. Google Gemini’s depiction of America’s Founding Fathers as a diverse group is an interesting case of over-correction for bias.

There is a very real risk of AI fabricating results, and this has already happened: it is relatively easy to get a generative AI tool to cite publications that don’t exist.

Furthermore, many AI systems cannot explain why they produce the outputs they do.

This is not always a problem. If AI generates a new hypothesis that is then tested by the usual scientific methods, no harm is done.

However, for some applications, the lack of explanation may be a problem.

Replication of results is a fundamental principle of science, but if the steps an AI took to reach a result remain opaque, replication and verification become difficult, if not impossible.

This could damage people’s confidence in the science produced.

A distinction should be made here between general and narrow artificial intelligence.

Narrow AI is artificial intelligence trained to carry out a specific task.

Narrow AI has already made great strides. Google DeepMind’s AlphaFold model has revolutionized how scientists predict protein structures.

But there are many other, lesser-known successes too: CSIRO’s use of AI to detect new galaxies in the night sky, IBM Research’s AI that rediscovered Kepler’s third law of planetary motion, and Samsung AI’s system capable of reproducing Nobel Prize-winning scientific breakthroughs.

When it comes to narrow AI applied to science, confidence remains high.

AI systems – especially those based on machine learning methods – rarely achieve 100% accuracy on a given task. (Machine learning systems outperform humans on some tasks, while humans outperform AI systems on many others. Humans using AI systems generally outperform both humans working alone and AIs working alone – there is considerable scientific evidence for this, including this study.)

AI working alongside an expert scientist, who confirms and interprets the results, is a perfectly legitimate way to work, and it is widely seen to perform better than human scientists or AI systems working alone.

General AI systems, on the other hand, are trained to perform a wide range of tasks, not limited to any one domain or use case.

For example, ChatGPT can generate a Shakespearean sonnet, suggest a recipe for dinner, summarize a body of academic literature, or generate a scientific hypothesis.

When it comes to artificial general intelligence, the problems of hallucinations and bias are even more severe and widespread. This does not mean that AGI is not useful to scientists, but it should be used with caution.

This means that scientists must understand and evaluate the risks of using AI in a specific scenario and compare them to the risks of not doing so.

Scientists now routinely use general AI systems to help write papers, assist in reviewing academic literature, and even prepare experimental plans.

One risk with these scientific assistants arises if the human scientist takes the outputs at face value.

Of course, diligent, well-trained scientists would not do this. But many scientists are just trying to survive in a tough publish-or-perish industry, and scientific fraud is already on the rise, even without AI.

AI could lead to new levels of scientific misconduct, either through deliberate misuse of technology, or through sheer ignorance because scientists don’t realize that AI is making things up.

Both narrow and general AI have great potential to advance scientific discoveries.

Broadly speaking, a typical scientific workflow consists of three phases: understanding which problem to focus on, conducting experiments related to that problem, and exploiting the results for real-world impact.

AI can help in all three of these stages.

However, there is a big caveat. Current AI tools are not suitable for naive, out-of-the-box use in serious scientific work.

Public trust in both AI and science will only be gained and maintained if researchers design, build, and use the next generation of AI tools responsibly to support the scientific method.

Getting it right is worth it: the possibilities for using AI to transform science are endless.

Google DeepMind co-founder Demis Hassabis famously said: “Building a more capable and general AI, safely and responsibly, requires us to solve some of the toughest scientific and engineering challenges of our time.”

The converse is also true: solving the toughest scientific challenges of our time requires building more capable, safe and responsible artificial general intelligence.

Australian scientists are working on this.

This article was originally published by 360info under a Creative Commons license. Read the original article.

Professor John Whittle is Director of CSIRO’s Data61, Australia’s national center for research and development in data science and digital technologies. He is co-author of Responsible AI: Best Practices for Creating Trustworthy AI Systems.

Dr Stefan Harrer is Director of the AI for Science Program at CSIRO’s Data61, leading a global innovation, research and commercialization program aimed at accelerating scientific discovery through the use of AI. He is the author of the Lancet article “Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine.”

Stefan Harrer is the inventor of several granted U.S. and international patents related to the use of artificial intelligence in science.




