Google maintains that it will distinguish podcasts not only by their names but also by their content.
Google is ramping up its efforts to introduce artificial intelligence across its platforms, and at I/O 2019 it rolled out a number of significant new features that will soon be incorporated into Google Search and Google Assistant. The basic premise of the discussion was that instead of making users collate information from scattered sources, Google's services should empower them to find what they need directly.
Taking a look at Google Search, we find it is getting a number of enhancements, two of which are quite significant: smarter podcast search and the ability to use augmented reality to present search results as interactive virtual 3D models.
The podcast feature is quite simple – Google maintains that it will distinguish podcasts not only by their names but also by their content. As per the company, this is achieved using speech recognition and machine learning systems.
Another significant feature coming to smartphones is augmented reality, which will allow users to see a 3D model of what they are searching for. For instance, a user searching for a house can tap on the 3D search result to view a full 3D model of it.
“With new AR features in Search rolling out later this month, you can view and interact with 3D objects right from Search and place them directly into your own space, giving you a sense of scale and detail. For example, it’s one thing to read that a great white shark can be 18 feet long. It’s another to see it up close in relation to the things around you,” says Aparna Chennapragada, VP of Google Lens and AR, as reported in India Today.