On the heels of last week’s rollout on Android, Google’s new AI-powered technology, Google Lens, is now arriving on iOS. The feature is available within the Google Photos iOS application, where it can identify objects, buildings, and landmarks, then surface more information about them, including helpful details like their phone number, address, or hours of operation. It can also identify things like books, paintings in museums, plants, and animals. With some objects, it can take actions as well.
For example, you can add an event to your calendar from a photo of a flyer or event billboard, or you can snap a photo of a business card to store the person’s phone number or address to your Contacts.
Google Lens was first announced last year at Google’s I/O developer conference, and is made possible by recent advancements in machine learning and image recognition technologies. The eventual goal is for smartphone cameras to understand what they’re seeing in any type of photo, then help you take action on that information, if need be – whether that’s calling a business, saving contact information, or simply learning about the world on the other side of the camera.
During the demo at I/O, Google showed off how Google Lens could do other things, too. It could be used to help you configure your Wi-Fi, for instance: if you took a photo of the sticker on your router, it could help you paste that information into your Wi-Fi settings to get you connected. The company also demonstrated a translation feature that converted signs in a foreign language to English.
It doesn’t seem we’re quite there yet with all these promised features, but they could become possible in the future as Google Lens matures.
According to a tweet from the Google Photos Twitter account, Google Lens in Google Photos on iOS began rolling out on Thursday to those who have the latest version (3.15) of the app installed.
The rollout will complete over the course of the week ahead.