An imaginary canyon in Peru, a version of the Eiffel Tower in Beijing: more and more travelers use tools such as ChatGPT for itinerary ideas, and end up headed for destinations that… do not exist.
AI in tourism: useful, but only if you verify the information
Miguel Angel Gongora Meza, founder and director of Evolution Treks Peru, was in a rural Peruvian village preparing for a hike in the Andes when he overheard a strange conversation. Two unprepared tourists were talking enthusiastically about their plan to hike alone into the mountains to the so-called "Sacred Canyon of Humantay".
"They showed me the screenshot, written with confidence and full of picturesque adjectives, but none of it was true. There is no Humantay canyon! The name is a combination of two places that have nothing to do with the description. The tourists had paid almost $160 to reach a rural road near the village of Mollepata, without a guide and without a destination," said Gongora Meza, as quoted by the BBC.
Moreover, Gongora Meza stressed that this seemingly harmless mistake could have endangered them: "This type of misinformation is extremely dangerous in Peru. Altitude, climate changes and route accessibility must be carefully planned. If you use a program like ChatGPT, which combines images and names to create a kind of fantasy, you can end up at 4,000 meters of altitude, without oxygen and without phone signal."
In what follows, you will learn how important it is to verify AI-provided information before traveling, to avoid non-existent or dangerous destinations; you will discover real examples of tourists who have run into artificial intelligence "hallucinations", along with essential tips for planning a safe holiday using technology.
The danger of AI on holiday: tourists trapped in non-existent destinations
In just a few years, artificial intelligence (AI) tools such as ChatGPT, Microsoft Copilot or Google Gemini have gone from simple curiosities to an integral part of vacation planning. According to one survey, 30% of international travelers now use generative AI and dedicated sites, such as Wonderplan or Layla, to organize their trips.
Although these programs can provide valuable travel tips when they work properly, they can also lead people into frustrating or even dangerous situations when they do not. It is a lesson some travelers learn the hard way, reaching their "dream" destination only to discover that the information they received was inaccurate, or that they were directed to a place that exists only in a chatbot's digital imagination.
ChatGPT: couple stranded on a mountain in Japan because of wrong directions
Dana Yao and her husband experienced this recently. Earlier this year, the couple used ChatGPT to plan a romantic hike to the top of Mount Misen, on the Japanese island of Itsukushima. After exploring the town of Miyajima without problems, they set out at 3:00 pm so they would reach the summit at sunset, exactly as ChatGPT had suggested.
"That's when the problem arose, when we were ready to come down via the cable car station. ChatGPT said the last ride was at 5:30 pm, but in reality the cable car had already closed. So we were stuck at the top," said Yao, a content creator and travel blogger based in Japan.
A young woman from Spain, who planned her vacation to Puerto Rico with ChatGPT, missed her flight after the AI told her she did not need an ESTA authorization, underlining the risks of misinformation generated by artificial intelligence.
AI invents destinations: an Eiffel Tower in Beijing and impossible routes in Italy
A 2024 BBC article reported that the Layla platform told users there is an Eiffel Tower in Beijing, and suggested a completely unachievable marathon-style route through Italy to a British tourist. "The itineraries didn't make much logical sense. We would have spent more time in transit than actually exploring the places," the traveler said.
According to a 2024 survey, 37% of participants who used AI for travel planning said the program did not provide enough information, and about 33% said the AI-generated recommendations contained false information.
An expert explains: ChatGPT combines words and invents answers
These problems stem from the way AI generates its answers. According to Rayid Ghani, a distinguished professor of machine learning at Carnegie Mellon University, although programs like ChatGPT appear to offer rational, useful advice, the way they obtain that information means you can never be sure they are telling the truth.
"It doesn't distinguish between travel tips, directions or recipes. It only knows words. So it keeps combining them so that everything it says sounds realistic, and that's where many of the underlying problems come from," explains Ghani.
Large language models such as ChatGPT work by analyzing huge collections of text and combining words and phrases that, statistically, look like appropriate answers. Sometimes this produces perfectly correct information. Other times it produces what AI experts call "hallucinations", that is, invented information. Because the programs present hallucinations and real answers in exactly the same way, users often find it difficult to tell truth from fiction.
Destinations invented by AI: a sacred canyon and a cable car in Malaysia
In the case of the "Sacred Canyon of Humantay", Ghani believes the AI simply combined a few words that seemed appropriate for the region. Likewise, analyzing all that data does not necessarily give ChatGPT a real understanding of the physical world. The program could confuse a leisurely 4,000 m walk through a city with a 4,000 m ascent up a mountain, and that is before the problem of erroneous information even arises.
A recent Fast Company article reported the case of a couple who hiked to a picturesque cable car in Malaysia they had seen on TikTok, only to discover that no such structure existed. The video they had watched was entirely AI-generated, either to attract views or for some other inexplicable reason.
How AI changes our perception of reality: examples from YouTube and Netflix
Such incidents are part of a wider trend in AI deployment that can change our perception of the world, subtly or not so subtly. A recent example came in August, when content creators discovered that YouTube had been altering their videos without permission, subtly changing things like the clothes, hair or faces of real people in the clips. Netflix was also criticized in early 2025 for using AI to "remaster" old sitcoms, which created bizarre distortions in the faces of beloved 1980s actors. As AI is increasingly used for such small changes without our knowledge, the boundary between reality and a polished fantasy begins to blur for travelers.
AI "hallucinations" and travel: how misinformation distorts tourists' experience
Javier Labourt, a licensed psychotherapist and advocate of travel's positive effects on mental health and social connection, worries that the spread of these problems could cancel out the benefits travel naturally offers. He believes a journey offers a unique opportunity to interact with people we would not otherwise meet and to discover different cultures firsthand, which builds empathy and understanding. But when AI "hallucinations" deliver erroneous information, a false narrative about a place takes shape before tourists even leave home.
There are currently attempts to regulate how AI presents information to users, including several proposals in the EU and US to require watermarks or other distinctive marks so that viewers know when something has been modified or generated by AI. But according to Ghani, it is an uphill battle: "There is a huge effort to combat misinformation: how do you detect it? How do you help people identify it? For now, though, mitigating its effects is a more reliable solution than prevention."
If these regulations are adopted, it could become easier for travelers to spot AI-generated images or videos. But the new rules will not help when a chatbot invents something in the middle of a conversation.
How to protect yourself from AI "hallucinations" when planning a trip: essential tips for tourists
Experts, including Google CEO Sundar Pichai, have said that "hallucinations" may be an inherent feature of large language models such as ChatGPT or Google Gemini. Therefore, the only real protection is to remain vigilant.
One of Ghani's tips is to be as specific as possible in your questions and to verify everything. He acknowledges, however, that this is difficult when traveling, because tourists often ask about places they do not know. But if an AI tool gives you a travel suggestion that sounds too perfect, double-check it. Ultimately, says Ghani, the time spent verifying information can make the process as laborious as classic trip planning.
For Labourt, the key to a successful trip, with or without AI, is to keep an open mind and adapt when things go wrong: "Try to direct your disappointment somewhere other than the idea that you've been fooled. If you're already there, how can you turn the situation around? You're on a nice trip anyway, right?"