
Image by National Cancer Institute, from Unsplash
New AI Detects Questionable Scientific Journals
Scientists have developed an AI system that detects open-access journals with questionable practices, exposing threats to research integrity and underscoring the need for human review
In a rush? Here are the quick facts:
- AI trained on 12,000 reputable and 2,500 low-quality journals.
- AI flagged over 1,000 previously unknown suspect journals.
- The current AI false positive rate is 24%, requiring human oversight.
Open-access journals make research freely available to scientists worldwide, boosting its global exposure. However, the open-access model has also created an environment where questionable journals proliferate. These outlets often charge author fees and promise fast publication but lack proper peer review, putting scientific integrity at risk.
Researchers recently published findings from testing a new AI tool that aims to tackle this problem. They trained the AI on more than 12,000 high-quality journals, together with 2,500 low-quality or questionable publications that had been removed from the Directory of Open Access Journals (DOAJ).
The AI learned to identify red flags by analyzing editorial board gaps, unprofessional website design, and minimal citation activity.
It identified more than 1,000 previously unknown suspicious journals from a dataset of 93,804 open-access journals on Unpaywall, which collectively publish hundreds of thousands of articles. Many of the flagged journals are based in developing countries.
“Our findings demonstrate AI’s potential for scalable integrity checks, while also highlighting the need to pair automated triage with expert review,” the researchers write.
The researchers point out that the system is not perfect. It currently produces a 24% false positive rate, meaning roughly one in four genuine journals may be incorrectly flagged. Human experts are still required for final evaluation.
The AI system assesses journal credibility by analyzing website content, design elements, and bibliometric data, including citation patterns and author affiliations. Indicators of questionable journals include high self-citation rates and lower author h-index values, while broad institutional diversity and wide citation networks signal reliability.
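As a rough illustration of how bibliometric signals like these might be combined into a suspicion score, here is a minimal sketch. The feature names, weights, and thresholds below are hypothetical, chosen only to mirror the red flags described above; they are not the researchers' actual model, which is a trained classifier rather than hand-set rules.

```python
# Illustrative sketch only: hypothetical features and weights,
# not the researchers' actual trained model.
from dataclasses import dataclass


@dataclass
class JournalFeatures:
    self_citation_rate: float       # fraction of citations that are self-citations
    median_author_h_index: float    # central h-index of publishing authors
    institutional_diversity: float  # 0..1, spread of author affiliations
    has_editorial_board: bool       # editorial board listed on the website


def suspicion_score(f: JournalFeatures) -> float:
    """Combine red-flag signals into a 0..1 score (higher = more suspect)."""
    score = 0.0
    score += 0.4 * f.self_citation_rate                            # high self-citation
    score += 0.3 * max(0.0, 1.0 - f.median_author_h_index / 20.0)  # low h-index
    score += 0.2 * (1.0 - f.institutional_diversity)               # narrow affiliations
    score += 0.1 * (0.0 if f.has_editorial_board else 1.0)         # board missing
    return min(score, 1.0)


# High self-citation, low h-index, little diversity, no board listed:
suspect = JournalFeatures(0.8, 3.0, 0.2, False)
# Low self-citation, established authors, diverse affiliations:
legit = JournalFeatures(0.05, 25.0, 0.9, True)
print(suspicion_score(suspect) > suspicion_score(legit))  # True
```

In a real system the weights would be learned from the labeled training journals, and scores above a tuned threshold would be queued for human review rather than acted on automatically.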
The research team expects future development will improve the AI system’s ability to detect deceptive publisher strategies. By combining automated tools with human oversight, the scientific community can better protect research integrity and guide authors toward trustworthy journals.