Google is introducing a new way to search the web using a combination of text and images at the same time.
Multisearch, available as an added function in Google Lens, gives you a different way to find what you’re looking for.
The tool is designed for searches that can’t be captured by a single image or text phrase alone.
If you want to find out more about an object in front of you, but you don’t have all the words to describe what you’re looking for, that’s where Multisearch comes in.
With Multisearch in Lens, you can ask Google questions about what you see — a capability made possible by Google’s advances in AI.
Here’s how to use Google Multisearch and what you can do with it.
How To Use Google Multisearch
First, download the latest update for your Google app and then follow the steps below:
- Open the Google app on Android or iOS
- Tap the Lens camera icon
- Upload a saved image or snap a photo of the world around you
- Swipe up and tap the “+ Add to your search” button to add text
You can ask Google a question about an object in front of you, then refine your search with text specifying a color, brand, or other visual attribute.
Google provides the following examples of the types of use cases Multisearch is designed for:
- Screenshot a stylish orange dress and add the query “green” to find it in another color
- Snap a photo of your dining set and add the query “coffee table” to find a matching table
- Take a picture of your rosemary plant and add the query “care instructions”
Google’s advances in AI make it easier to understand the world in natural and intuitive ways.
Google is currently exploring ways in which this feature might be enhanced by MUM to further improve the results it can deliver.