ChatGPT’s hallucination problem is getting worse according to OpenAI’s own tests and nobody understands why

by lucky

Remember when we reported, a month or so ago, that Anthropic had discovered that what actually happens inside AI models is very different from how the models themselves describe their “thinking” process? Well, to that mystery you can now add worsening hallucination in the latest large language models (LLMs). And that’s according to tests run by OpenAI itself on its own well-known chatbot.

The New York Times reported that OpenAI’s own investigations into its latest o3 and o4-mini LLMs found that they hallucinate, i.e. fabricate false information, at substantially higher rates than the previous o1 model.
