OpenAI’s recent models, including those powering ChatGPT, have accomplished some amazing feats, and one of them is figuring out where photos were taken with surprising precision. Earlier this week, OpenAI released new models that reason over the context clues an image provides, letting them make informed guesses about where a photo might have been taken. The capability has broad implications, extending the reach of predictive AI into areas like law enforcement and social media monitoring.
At a recent preview demo, ChatGPT rose to the occasion when given the tricky task of guessing a photo’s location. The photo showed Subaru’s new Trailseeker electric vehicle on display at the New York Auto Show. The AI meticulously analyzed the image, spending 1 minute and 40 seconds “thinking,” or working through the context clues within it. After much deliberation, ChatGPT determined that text in the image read “4th February – complete roadmap.”
“I’ll need to load the image so I can inspect the text. Once I view it, I realize the text is upside down, so I’ll rotate it so it’s readable,” – ChatGPT
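The rotation step ChatGPT describes, flipping an upside-down image right side up before reading its text, can be sketched in a few lines. This is purely an illustrative example, not OpenAI’s actual pipeline: the image here is a hypothetical grid of pixel values, and `rotate_180` is a helper invented for this sketch.

```python
def rotate_180(pixels):
    """Rotate an image (a grid of pixel rows) by 180 degrees.

    Reversing the order of the rows flips the image vertically,
    and reversing each row flips it horizontally; together that
    is a 180-degree rotation, turning upside-down text readable.
    """
    return [row[::-1] for row in pixels[::-1]]

# A tiny 2x3 "image": each number stands in for a pixel value.
image = [
    [1, 2, 3],
    [4, 5, 6],
]

print(rotate_180(image))  # [[6, 5, 4], [3, 2, 1]]
```

In a real system this would operate on actual image data (and an OCR pass would follow), but the underlying transform is the same double reversal.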
Validating Its Assumptions
To validate its assumptions, ChatGPT crawled Subaru’s vehicle launch page and confirmed that the Trailseeker made its world debut at the New York Auto Show. Comparing images of Subaru’s booth design with the analyzed photo, it found similarities in the “lighting, carpeted ‘forest‑floor’ motif,” which further supported its conclusion.
The AI had some clear shortcomings, too. While identifying an auto show as the likely venue was an inspired move, it wasn’t able to pin down the specific city, guessing it might be Chicago, New York, or Los Angeles. It also named the vehicle incorrectly, calling it the “Trailspeed” rather than its correct name, the “Trailseeker.”
OpenAI’s GPT-4o, which showed comparable capabilities but lower accuracy, was also tested. The results indicate that the tool can accurately synthesize context clues such as surrounding vegetation and architectural patterns to identify specific locations, though visual misinterpretations can still steer it toward the wrong answer.
“Even when tool calls correctly advance the reasoning process, visual misinterpretations may lead to incorrect final answers.” – OpenAI
In another demonstration of its capabilities, one intrepid user uploaded a nondescript photo, apparently taken straight from Google Earth, and ChatGPT successfully identified the property’s address in Suriname. When presented with an image of a library book, it got that right too: by decoding the code on the label, it identified the book’s home as the University of Melbourne.
The impacts of this technology extend far beyond the weekend hobbyist. Law enforcement agencies could use AI tools like ChatGPT to geolocate images shared on social media. Unsurprisingly, alarm over potential privacy violations has followed: people are understandably concerned that someone could use this power to stalk them or otherwise do harm.
As ChatGPT sharpens its photo-analysis and decision-making capabilities, users need to stay aware of the tool’s strengths and shortcomings. Its analytical approach offers an intriguing glimpse of the intersection between technology and human-like reasoning.
“From there, I can check what’s written and share my findings clearly with the user.” – ChatGPT
What The Author Thinks
While ChatGPT’s ability to identify locations from images is impressive, it raises serious concerns about privacy and potential misuse. As AI continues to advance, it’s crucial that safeguards are implemented to prevent its power from being abused, especially in ways that could compromise personal privacy or lead to harassment. The technology’s growing capabilities highlight the need for responsible usage and ethical considerations in its application.
Featured image credit: PxHere