
As the crisis between India and Pakistan escalates, netizens are turning to AI chatbots like Grok, ChatGPT, and Gemini for quick fact-checks. But instead of helping, the bots have often spread even more misinformation.
On social media, questions like "Hey @Grok, is this real?" are trending, but experts say the chatbots often get it wrong. In one case, a chatbot misidentified old video footage from Sudan as a missile strike on Pakistan. In another, a chatbot declared a video of a giant anaconda to be real, even though it was AI-generated.
According to NewsGuard and Columbia University's Tow Center for Digital Journalism, AI chatbots are not yet reliable when it comes to breaking news. They often fabricate stories, invent details, or give biased answers, especially when the topic is sensitive.
As tech companies cut back on human fact-checkers, more and more people are turning to AI as a source of news. But researchers warn this can be dangerous, especially for users who lack the background to spot the errors.
Experts warn: "AI chatbots are not a replacement for real fact-checkers." Human verification is still needed to ensure accurate information, especially in times of crisis and disinformation.