A powerhouse list of professors from Stanford University, MIT, Johns Hopkins University, Harvard School of Public Health, and others want you to know there’s a big problem in AI research.
Dozens of AI experts co-signed an article in Nature arguing that, unlike research in other scientific fields, top AI studies are often neither transparent nor reproducible: they are frequently published without the full code, trained models, and methodology needed to verify their claims. Those findings are then amplified in mainstream media headlines worldwide.
They point to a study also published in Nature this past January, in which Google Health reported an AI system that could screen for breast cancer faster and more accurately than radiologists. The study, however, lacked key details such as the methodology and code needed to reproduce it. “On paper and in theory, the study is beautiful. But if we can’t learn from it, then it has little to no scientific value,” says Benjamin Haibe-Kains, senior scientist at Princess Margaret Cancer Centre and lead author of the critique. “Journals are vulnerable to the hype of AI.”
We reached out to Google Health for comment and will update this post if we hear back.
The group calls for higher publication standards among journals and for more sharing of code, models, and data among researchers.