An analysis has found that generative AI search engines and deep research agents frequently make one-sided claims, and that around one-third of the answers they give are not backed up by the sources they cite. For OpenAI's GPT-4.5, the figure was even higher, at 47 per cent.
Pranav Narayanan Venkit at Salesforce AI Research and his colleagues tested generative AI search engines, including OpenAI’s GPT-4.5 and 5, You.com, Perplexity and Microsoft’s Bing Chat. Alongside this, they put five deep research agents through their paces: GPT-5’s Deep Research feature, Bing Chat’s Think Deeper option and deep research tools offered by You.com, Google Gemini and Perplexity.
Their study aimed to evaluate the accuracy, diversity and sourcing of the answers these tools generate.
The researchers assessed the AI responses against eight metrics, collectively known as DeepTrace, which test whether an answer is one-sided or overconfident, whether it is relevant to the question, and how thorough its citations are.
The questions were divided into two groups: contentious issues, which allowed the researchers to detect biases in AI responses, and questions testing expertise in areas such as meteorology, medicine and human-computer interaction.
The AI answers were evaluated using a large language model (LLM) trained to judge them against these metrics, with its ratings checked against a smaller set of human-annotated answers. The results showed that many models provided one-sided answers. About 23 per cent of Bing Chat's claims were unsupported, the figure for the You.com and Perplexity AI search engines was about 31 per cent, GPT-4.5 produced unsupported claims 47 per cent of the time, and Perplexity's deep research agent did so in 97.5 per cent of cases. OpenAI declined to comment on the findings, while Perplexity disagreed with the study's methodology.
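The study's own evaluation pipeline is not reproduced here, but the LLM-as-judge idea it describes can be sketched roughly as follows. Everything in this snippet (the Claim structure, the call_llm stub, the rubric wording and the unsupported_rate calculation) is hypothetical and for illustration only; it is not the DeepTrace code used by Narayanan Venkit and colleagues.

```python
# Illustrative sketch only: a minimal "LLM as judge" loop in the spirit of the
# evaluation described above. All names here are hypothetical, not the authors' code.

from dataclasses import dataclass

@dataclass
class Claim:
    text: str           # one factual claim extracted from an AI answer
    sources: list[str]  # text of the sources the answer cited for it

RUBRIC = (
    "You are grading a search-engine answer. Given a CLAIM and the SOURCES "
    "it cites, reply with exactly one word: SUPPORTED if the sources back "
    "the claim, UNSUPPORTED otherwise."
)

def call_llm(prompt: str) -> str:
    """Stand-in for a real judge-model call (e.g. a chat-completions endpoint).
    Here it simply flags claims with no cited sources so the script runs end to end."""
    return "UNSUPPORTED" if "SOURCES:\n(none)" in prompt else "SUPPORTED"

def judge_claim(claim: Claim) -> bool:
    """Return True if the judge model labels the claim as supported."""
    sources = "\n".join(claim.sources) or "(none)"
    prompt = f"{RUBRIC}\n\nCLAIM:\n{claim.text}\n\nSOURCES:\n{sources}"
    return call_llm(prompt).strip().upper().startswith("SUPPORTED")

def unsupported_rate(claims: list[Claim]) -> float:
    """Fraction of claims the judge marks as unsupported, the kind of figure
    reported above (e.g. about 23 per cent for Bing Chat, 47 per cent for GPT-4.5)."""
    flagged = sum(not judge_claim(c) for c in claims)
    return flagged / len(claims)

if __name__ == "__main__":
    demo = [
        Claim("Water boils at 100 °C at sea level.",
              ["Standard atmospheric pressure tables list 100 °C."]),
        Claim("The answer's second assertion, cited to no source.", []),
    ]
    print(f"Unsupported claims: {unsupported_rate(demo):.0%}")
```

In the real study the judge's labels were also compared with human annotations, which is the part of the methodology Urman questions below.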
“There have been frequent complaints from users and various studies showing that despite major improvements, AI systems can produce one-sided or misleading answers. As such, this paper provides some interesting evidence on this problem, which will hopefully help spur further improvements on this front,” says Felix Simon at the University of Oxford.
However, Aleksandra Urman at the University of Zurich, Switzerland, has concerns about the LLM-based annotation of the collected data and the statistical technique used to check that the relatively small number of human-annotated answers align with LLM-annotated answers.
Despite these disputes, Simon believes more work is needed to ensure users correctly interpret the answers they get from these tools. Improving the accuracy, diversity and sourcing of AI-generated answers will only become more important as these systems are rolled out more broadly across different domains.