Be careful about the words and pictures you post on social media. OpenAI's latest AI models, released last week, have sparked a new viral craze for bot-powered geo-guessing. In other words, using AI to work out where a photo was taken. Not to put too fine a point on it, but it could be a doxxing and privacy nightmare.
OpenAI's new o3 and o4-mini models can both "reason" with images. In broad terms, this means advanced image-analysis skills: the models can crop and manipulate images, zoom in, and read text. Add agentic web-search capabilities to that and you have, theoretically, a killer image-geolocation tool of sorts.
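For context, this is roughly what that image-input workflow looks like from the developer side. The sketch below is a minimal example assuming the official openai Python package and an API key in the environment; the model name, prompt, and image URL are illustrative placeholders, not a confirmed recipe for the viral trend.

```python
# Minimal sketch: sending an image to a reasoning model and asking where
# it was taken. Assumes OPENAI_API_KEY is set in the environment; the
# model name, prompt, and URL below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3",  # assumed model name for illustration
    messages=[
        {
            "role": "user",
            "content": [
                # Text and image parts are combined in a single user message
                {"type": "text", "text": "What can you tell me about where this photo was taken?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The key point is that the image travels alongside the text prompt in one request, so the model can inspect it (cropping, zooming, reading signage) as part of its reasoning rather than as a separate captioning step.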
According to OpenAI itself: "For the first time, these models can integrate images directly into their chain of thought. They don't just see an image, they think with it. This unlocks a new class of problem-solving that blends visual and textual reasoning."
Early users of the o3 model in particular were quick to discover this (via TechCrunch). Numerous posts have popped up on social media challenging the new ChatGPT models to play GeoGuessr with uploaded images.
A close-cropped snap of some books on a shelf? The model correctly identified the library at the University of Melbourne. In another X post, the model spotted cars that not only had left-hand steering wheels but were also driving on the left side of the road, narrowing the options to the few countries where driving on the left is required yet left-hand-drive cars are common, before settling on Suriname in South America as its final answer.
o3 is insane. I asked a friend of mine to give me a random photo. They gave me a random photo they took in a library. o3 knows it in 20 seconds and it's right pic.twitter.com/0k8dxifkoy April 17, 2025
The models are also capable of sharing their full reasoning, including the clues they spotted and interpreted. That said, research published earlier this year shows that the explanations these models give for their answers don't always reflect the AI's actual reasoning process, if it can be called that.
When Anthropic researchers "traced" the internal steps its own Claude model used to complete math tasks, they found they differed completely from the method the model claimed it had used.
Whatever the case, the privacy concerns are pretty clear. Just point ChatGPT at someone's social media feed and ask it to triangulate their location. Heck, it's not hard to imagine that a single social media user's posts could be enough for an AI model to predict their future movements and whereabouts.
All told, it's yet another reason to think carefully about how much you share on social media, especially when it comes to fully public posts. On that note, TechCrunch put these very concerns to OpenAI.
An OpenAI spokesperson responded: "OpenAI o3 and o4-mini bring visual reasoning to ChatGPT, making it more helpful in areas like accessibility, research, or identifying locations in emergency response. We've worked to train our models to refuse requests for private or sensitive information, added safeguards intended to prohibit the model from identifying private individuals in images, and we actively monitor for and take action against abuse of our usage policies on privacy."