
Google is integrating its Gemini model capabilities into Google Maps Platform

By Aaron Yip - on 16 May 2024, 12:49am


Image Credit: HardwareZone

At the Google I/O 2024 conference, Google announced the integration of Gemini model capabilities into its Google Maps Platform, starting with the Places API. This new capability will allow developers to show generative AI summaries of locations in their own apps or websites.
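To illustrate what this could look like for a developer, here is a minimal sketch of a Place Details request against the Places API (New). The `generativeSummary` field mask and its response shape are assumptions based on Google's I/O 2024 announcement, not confirmed sample code; check the current Places API reference for the exact field names and availability.

```python
# Minimal sketch (not official sample code): fetch a place's AI-generated summary.
# The "generativeSummary" field and its structure are assumptions based on the
# I/O 2024 announcement; verify against the Places API (New) documentation.
import requests

API_KEY = "YOUR_API_KEY"                   # placeholder
PLACE_ID = "ChIJN1t_tDeuEmsRUsoyG83frY4"   # example place ID from Google's docs

resp = requests.get(
    f"https://places.googleapis.com/v1/places/{PLACE_ID}",
    headers={
        "X-Goog-Api-Key": API_KEY,
        # Request only the fields the app needs; the summary field is assumed here.
        "X-Goog-FieldMask": "displayName,generativeSummary",
    },
    timeout=10,
)
resp.raise_for_status()
place = resp.json()

# Print the place name and its AI-generated overview, if one is returned.
name = place.get("displayName", {}).get("text")
summary = place.get("generativeSummary", {}).get("overview", {}).get("text")
print(f"{name}: {summary}")
```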

This is made possible by Gemini, Google’s latest large language model (LLM), which draws on insights from over 300 million contributors to create AI-generated summaries of places and areas. This saves developers time and helps ensure a consistent experience for users. For instance, in a restaurant-booking app, users can quickly see essential details like a restaurant’s specialty and happy hour deals, making it easier to choose a dining spot.

Image Credit: Google

Google is also introducing AI-powered contextual search results to the Places API, letting users see more relevant results for specific queries. For example, a search for “dog-friendly restaurants” will surface suitable options along with relevant reviews and photos of dogs at those restaurants.
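A developer could exercise this through the Places API (New) Text Search endpoint, as in the sketch below. The `contextualContents` field mask is an assumption based on the announcement (the field that carries query-relevant reviews and photos); confirm the exact name in the current API reference.

```python
# Minimal sketch (not official sample code): a Text Search query whose results
# may include contextual content relevant to the query, e.g. photos of dogs
# for "dog-friendly restaurants". The "contextualContents" field name is an
# assumption; confirm against the Places API (New) reference.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

resp = requests.post(
    "https://places.googleapis.com/v1/places:searchText",
    headers={
        "Content-Type": "application/json",
        "X-Goog-Api-Key": API_KEY,
        "X-Goog-FieldMask": "places.displayName,places.contextualContents",
    },
    json={"textQuery": "dog-friendly restaurants in Sydney"},
    timeout=10,
)
resp.raise_for_status()

# List each result with the number of query-relevant snippets returned for it.
for place in resp.json().get("places", []):
    name = place.get("displayName", {}).get("text")
    contextual = place.get("contextualContents", [])
    print(f"{name}: {len(contextual)} contextual snippet(s)")
```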

Generative AI summaries are already available in the US, and Google plans to roll them out to other countries in phases (Google didn’t say when Singapore will get the feature). What I really like about the Gemini integration with Google Maps is that it seems to simplify content creation for developers while giving users more detailed and engaging information, making interactions with local businesses and areas more intuitive and insightful.

More information is available at Google Maps Platform.
