Big tech companies developing generative AI have said they envision their tech as an everyday personal assistant, and that includes performing travel tasks.
Google, Microsoft, and Apple have all had advantages over OpenAI because they also produce a variety of their own devices. (Apple’s iPhone 16 has a side button to activate an AI assistant, for example.)
OpenAI, on the other hand, is mostly limited to its app. But it wants to change that. The company is building its own hardware business with the help of Jony Ive, who led the design of the iPhone and other Apple devices. OpenAI said last week that it plans to acquire the design firm Ive co-founded for $6.5 billion.
To win the race, each of these tech companies is trying to make its AI fit into users’ lives as naturally and seamlessly as possible. That means building AI into phones, laptops, watches, TVs, extended reality headsets, glasses, and more.
When devices are connected, a single digital assistant could tap into multiple sources — seeing what you see, hearing what you hear, and learning from your behavior.
For travelers, the idea is that the digital assistant can act as a travel agent that anticipates needs rather than just following commands. Paired with fast-moving tech that could allow AI to search and book on a user’s behalf, these assistants could become the main way people purchase travel.
Maybe you took a virtual stroll around Machu Picchu on your virtual reality headset, so later you get an alert on your phone about lower-than-usual prices on flights to Peru.
Maybe your smart glasses saw you lingering in front of a Monet painting at the museum, so you get an alert on your phone a few weeks later about a new impressionist exhibit.
Maybe the AI sees an outdoor tour scheduled on your calendar, so your watch alerts you that it’s about to rain, with suggestions for an indoor activity it knows you like, based on your past searches.
It’s easy to see how Apple could take this next step. The button on the side of the iPhone 16 activates the camera, and the AI can pull information from the internet about what it sees. And Apple has already created the technology to connect devices through the Apple ID.
Google is taking steps toward this vision, as well. The company is reimagining Search, powered by AI with connections across the user’s apps. And its Gemini bot is coming to cars, watches, TVs, smart glasses, and extended reality headsets. The glasses should be able to search for nearby restaurants and give walking directions with just a voice command.
Apple and Google could be somewhat limited in their ability to move as quickly as they’d like, however. It’s tough to completely overhaul an established suite of products.
OpenAI, by contrast, has an opportunity to reimagine how devices operate, building them from the ground up with AI at their core. There’s also the question of ensuring the hardware can handle the computing power that AI demands.
Just because OpenAI releases products doesn’t mean they’ll be successful, however. Microsoft last decade dropped its phone and Zune portable media player in the face of competition. And Google ditched its first try at smart glasses, Google Glass, in 2023 after a couple of redesigns.
So far, OpenAI has said only that it’s planning a “family of products,” and the team hopes to share more details next year.
