Google’s new ‘multisearch’ features hint at a future for AR glasses – TechCrunch

In April, Google introduced a new “multisearch” feature that offers a way to search the web using both text and images at the same time. Today at Google’s I/O developer conference, the company announced an extension to this feature, called “multisearch near me.” The addition, arriving later in 2022, will let Google app users combine an image or screenshot with the text “near me” to be directed to local retailers or restaurants that may have the apparel, household items or food they’re looking for. Google also pre-announced an upcoming multisearch development that appears to be designed with AR glasses in mind: the ability to visually search for multiple objects in a scene based on what you’re currently “seeing” through a smartphone camera’s viewfinder.

With the new “near me” multisearch query, you’ll be able to find local options related to your combined visual and text search. For example, if you were working on a DIY project and found a part you needed to replace, you could take a picture of the part with your phone’s camera to identify it, then find a local hardware store that has the replacement in stock.

It’s not that different from how multisearch already works, Google explains – it just adds the local component.

Image Credits: Google

The idea behind multisearch is to let users ask questions about an object in front of them and refine those results by color, brand or other visual attributes. Today, the feature works best with shopping searches, since it lets users narrow product results in a way that standard text-based web searches can sometimes struggle to do. For example, a user could take a picture of a pair of sneakers, then add text asking to see them in blue so that only shoes in that color are shown. They could then choose to visit the website for the sneakers and purchase them right away. The “near me” extension simply limits the results in order to point users to a local retailer where the given product is available.

The feature works the same way to help users find local restaurants. In this case, a user could search based on a photo they found on a food blog or elsewhere on the web to learn what the dish is and which local restaurants might have it on their menu for dine-in, pickup or delivery. Here, Google Search combines the image with your intent to find a nearby restaurant and scans millions of images, reviews and community contributions on Google Maps to find the local spot.

The new “near me” feature will be available globally in English and will roll out to other languages over time, Google says.

The most interesting addition to multisearch is the ability to search within a scene. Going forward, Google says, users will be able to pan their camera around to learn more about multiple objects within that larger scene.

Google suggests the feature could be used to scan the shelves of a bookstore and then see several useful pieces of information superimposed in front of you.

Image Credits: Google

“To make this possible, we’re combining not only computer vision and natural language understanding, but also bringing that together with knowledge of the web and on-device technology,” said Nick Bell, senior director of Google Search. “So the possibilities and capabilities of that are going to be huge and significant,” he noted.

The company – which came to the AR market early with its Google Glass release – hasn’t confirmed that it has some sort of new AR glasses-like device in the works, but has hinted at the possibility.

“What’s possible with today’s AI systems — and what will be possible over the next few years — kind of opens up a lot of opportunities,” Bell said. Alongside voice, desktop and mobile search, the company believes visual search will also become a bigger part of the future, he noted.

Image Credits: Google

“There are 8 billion visual searches on Google with Lens every month now, and that number is three times higher than just a year ago,” Bell continued. “What we definitely see from people is that the appetite and the desire to search visually is there. And what we’re trying to do now is look at the use cases and identify where it’s most useful,” he said. “I think when we think about the future of search, visual search is definitely a key part of it.”

The company, of course, is reportedly working on a secret project, dubbed Project Iris, to build a new AR headset with an expected 2024 release date. It’s easy to imagine not only how this scene scanning capability could work on such a device, but also how any sort of image-plus-text (or voice!) search feature could be used on an AR headset. Imagine looking at that pair of sneakers you like, for example, then asking a device to navigate you to the nearest store where you could make the purchase.

“Looking further out, this technology could be used beyond everyday needs to help address societal challenges, like helping conservationists identify plant species in need of protection, or helping disaster relief workers quickly sort through donations in times of need,” suggested Prabhakar Raghavan, Google’s SVP of Search, speaking on stage at Google I/O.

Unfortunately, Google didn’t offer a timeline for when it expected to get the scene scanning capability into the hands of users, as the feature is still “in development.”

"Lily

Share.

Comments are closed.