By now, just about everyone has heard of GPT-4. Generative Pre-trained Transformer 4 (GPT-4) is the fourth generation of OpenAI’s large language models and the first to be multimodal: it accepts both image and text inputs and generates text outputs. What makes this artificial intelligence (AI) so exciting is how it is being applied.
Powered by GPT-4, the Be My Eyes app is adding a ‘Virtual Volunteer’ feature to assist people who are visually impaired. The Virtual Volunteer is an object-recognition tool that can answer questions about an image in ways that were never before possible. Its AI responses offer highly nuanced explanations in a friendly, conversational format. For example: If a Be My Eyes user sends a picture of the inside of their refrigerator, the Virtual Volunteer will not only correctly identify the items within, but also extrapolate and analyze what can be prepared with those ingredients. The tool can then offer a number of recipes for those ingredients and send a step-by-step guide on how to make them. (BeMyEyes.com)
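For readers curious about the technology behind this: a request to an image-capable GPT-4 model simply pairs a photo with a text question and receives text back. The sketch below, written with OpenAI’s Python library, shows roughly what the refrigerator example could look like. The model name, image URL, and question are illustrative placeholders only; Be My Eyes has not published the details of its actual integration.

```python
# Minimal sketch: send an image plus a text question to an
# image-capable GPT-4 model and print the text reply.
# The model name, URL, and prompt below are assumptions for
# illustration, not Be My Eyes' real configuration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # assumed: any GPT-4 model that accepts images
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What ingredients do you see, and what could I cook with them?",
                },
                {
                    "type": "image_url",
                    # Hypothetical photo of the inside of a refrigerator
                    "image_url": {"url": "https://example.com/photos/refrigerator.jpg"},
                },
            ],
        }
    ],
)

# The model's answer comes back as plain text
print(response.choices[0].message.content)
```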
Be My Eyes Virtual Volunteer video by Lucy Edwards:
The plan is to integrate the Virtual Volunteer feature into the existing app. But if the user does not get a good response or simply prefers a human connection, a human volunteer will still be available.
Be My Eyes is currently in closed beta testing with a small subset of users. However, you can download the app, available for both iOS and Android, to register for the Virtual Volunteer waitlist. Open the app’s Home screen and tap the Register button, then tap OK to confirm your registration.
Founded in 2015, Be My Eyes has connected millions of volunteers to assist visually impaired users with everyday tasks. Using their smartphone’s camera, users can take a picture and ask a volunteer about what it shows.
By Diane Brauner