Researchers Are Creating AI Scientists, and It’s Going Better Than Expected

Key Takeaways

  • AI scientists can independently develop hypotheses, perform experiments, and write research papers.
  • AI scientists can exhibit bias and draw clumsy conclusions due to a lack of intuition and experience.
  • The true potential of AI scientists lies in collaboration with human scientists to guide research productively.


Researchers are developing an AI that can form hypotheses, run experiments, and write research papers on its own. I have always felt that AI works best as a tool for humans rather than a replacement for them, but this AI scientist does seem to have potential.



The Difference Between Scientists Using AI, AI Scientists, and AI Scientists

There are many recent examples of scientists using AI, but my favorite comes from MIT in 2019, where an AI was trained on 1,700 FDA-approved drugs and 800 natural products, all of which were antibiotics or had antibacterial properties. The AI then screened a library of 6,000 compounds to find one with similar properties. It succeeded: the antibiotic halicin was discovered. In 2023, the same team discovered a second antibiotic that may combat antibiotic-resistant MRSA.


Usually, the term AI scientist refers to a human scientist who specializes in some or all aspects of AI technology, such as large language models or artificial neural networks. Confusingly, AI scientist is also the term for a new AI that works as a scientist. So far, this AI scientist only studies AI models, which means it fits both definitions of an AI scientist.

The AI Scientist

Japanese company Sakana AI is funding a lab at the University of British Columbia and the University of Oxford to develop an AI scientist that can perform the entire scientific process on its own. It is designed to study scientific literature to form a hypothesis, perform experiments, write a research paper, and peer-review its own work against the original literature it started with.


In order to reduce errors, the team at the University of British Columbia developed a step-by-step process for the AI to follow. The AI scientist is given research about an AI model or a type of AI and forms several hypotheses about what could improve it. It scores those research ideas on “interestingness, novelty, and feasibility.” After it has chosen a hypothesis, it double-checks the literature database to make sure the idea is novel and original. The AI then uses a coding-assistant program to run code that tests the hypothesis while it takes research notes. Finally, it determines whether follow-up experiments are needed and composes the research paper.
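To make that workflow concrete, here is a toy Python sketch of the loop described above. The function names (generate_ideas, is_novel, run_experiment) are hypothetical stand-ins for the LLM and coding-assistant calls the real system makes; this is not Sakana AI's actual code.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    description: str
    interestingness: float  # each criterion scored by the model, e.g. 0-10
    novelty: float
    feasibility: float

    @property
    def score(self) -> float:
        # The AI scientist ranks ideas on all three criteria combined.
        return self.interestingness + self.novelty + self.feasibility

def generate_ideas(literature: list[str]) -> list[Idea]:
    # Placeholder: in the real system, an LLM proposes improvements
    # after reading the supplied research papers.
    return [Idea("Add adaptive learning-rate warmup", 7.0, 6.5, 8.0),
            Idea("Replace attention with convolution", 5.0, 4.0, 6.0)]

def is_novel(idea: Idea, literature: list[str]) -> bool:
    # Placeholder: the real system re-checks the chosen idea against
    # the literature database before committing to it.
    return all(idea.description.lower() not in paper.lower()
               for paper in literature)

def run_experiment(idea: Idea) -> dict:
    # Placeholder: a coding assistant writes and executes test code
    # while the system keeps research notes.
    return {"idea": idea.description, "metric": 0.92, "notes": "..."}

literature = ["Paper A: transformer training dynamics", "Paper B: ..."]
ideas = generate_ideas(literature)
best = max(ideas, key=lambda i: i.score)   # choose the top-scoring hypothesis
if is_novel(best, literature):
    results = run_experiment(best)         # the paper is then drafted from results
    print(results)
```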

The last step is to evaluate the research paper and reject it if it contains fabricated information or the hallucinations that frequently plague AI models. One researcher admitted that the team has only been able to reduce the hallucination rate to ten percent: “We think ten percent is probably unacceptable.”
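As a rough illustration of that final gate, here is a toy acceptance check: flag claims that cannot be matched to the source literature and reject the draft if the flagged fraction exceeds a threshold. The ten-percent cutoff mirrors the rate the researchers mention, and looks_fabricated is a crude placeholder for whatever reviewer model the real system uses.

```python
def looks_fabricated(claim: str, source_terms: list[str]) -> bool:
    # Placeholder heuristic: a claim that shares no vocabulary with the
    # source literature is treated as unsupported.
    return not any(term in claim.lower() for term in source_terms)

def accept_paper(claims: list[str], source_terms: list[str],
                 max_rate: float = 0.10) -> bool:
    # Reject the draft if too many of its claims look fabricated.
    flagged = sum(looks_fabricated(c, source_terms) for c in claims)
    return flagged / len(claims) <= max_rate

claims = ["Warmup stabilizes early training loss",
          "The method proves P = NP"]  # one supported, one fabricated
print(accept_paper(claims, ["warmup", "training", "loss"]))  # False: 50% flagged
```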



Potential Drawbacks

Popularity bias is a common problem in AI models, and researchers have seen it in the AI scientist as well. It may prioritize areas of study that have already been exhaustively examined, or over-value the theories backed by the most data.

To humans, the AI can appear to think innovatively because it evaluates a problem with no intuition or prior experience to get in the way of outside-the-box solutions. One physicist studying quantum particles in light wrestled for weeks with how to observe a particular quantum phenomenon. Suspecting his own intuition was getting in the way, he trained an AI to work on the problem, and it designed an experiment within hours. The experiment later proved successful.

However, that same lack of intuition and experience can interfere with the interpretation of results. Researchers at UBC compare the AI scientist’s clumsy conclusions to those of young PhD students, who are known for grasping at straws when guessing at the implications of their results.


There are also ethical concerns about the development of AI scientists. Who would be given credit for the work of an AI scientist, or, conversely, who would take responsibility for errors, plagiarism, or alterations of data?

What Is the Purpose of Science?

MIT scientists discovered two new antibiotics using AI in the hope that these could combat bacteria harming humans. Meteorologists are using AI to better predict dangerous weather patterns. Neuroscientists are using AI to predict cell death in the hopes of one day preventing it. AI in the hands of scientists has already led to new discoveries with the promise of many more to come.

I fully support the creation of an AI scientist, but I suspect its greatest accomplishments will not be achieved independently. They will come in collaboration with teams of human scientists who lead the way, supplying the questions it needs to answer and guiding its research toward productive applications.
