Meta AI, the digital assistant powered by Meta’s Llama series of large language models (LLMs), has been expanding its reach and capabilities. Initially available only in the United States and Canada, the assistant has now entered international markets with features tailored to different regions. The United Kingdom gets a voice-only version of the assistant on smart glasses, while Australia enjoys more advanced functionality, including visual assistance. These developments mark Meta’s effort to integrate AI further into everyday life, offering users enhanced convenience through voice- and vision-based interactions.
In the UK, Meta AI’s integration with smart glasses currently allows users to interact with the assistant via voice commands only. The decision to start with voice-only support suggests a gradual rollout aimed at ensuring the technology is adopted smoothly. However, the potential for future updates is significant, given that the same device in Australia already supports visual capabilities. In Australia, Meta AI can use the smart glasses’ camera to “see” the environment, letting users engage with the assistant in a more intuitive manner, such as asking it to answer a query based on what the camera captures or to perform specific actions in response to visual input.
Meta AI is built on the same class of core technology that powers popular platforms like ChatGPT: large language models, whose role in consumer technology continues to grow. With its combination of voice and vision features, Meta AI is designed to act as a more natural and responsive assistant, and its capabilities were extended further by a recent software update that introduces several practical enhancements.
Last week, Meta rolled out an update for the AI assistant, offering users more flexible and practical tools. One of the key new features is the ability to ask Meta AI to remember visual or spoken information. For instance, users can now instruct the assistant to remind them where they parked their car or to take note that they are running low on certain groceries like milk. This added functionality enhances the assistant’s role as a memory aid, making it an even more integral part of daily routines. Other newly added capabilities include the ability to set timers, scan QR codes, and automatically call phone numbers displayed on posters or flyers.
Additionally, the update improves the way users interact with the visual AI capabilities on smart glasses. Previously, users had to explicitly say “look” at the beginning of a query to activate the vision feature. This requirement has been removed, allowing for a more seamless and natural interaction. This change aligns with Meta’s broader goal of making technology less intrusive and more integrated into the user’s daily activities. With a more fluid and intuitive interaction model, Meta AI becomes an even more effective tool for those relying on it in both casual and work environments.
Meta’s plans for the AI assistant extend well into the future, with further updates expected later this year. Among the most notable upcoming features is the introduction of live translation capabilities. The assistant will be able to translate conversations in real time between English and other major languages, including French, Italian, and Spanish. This feature is expected to be particularly useful for travelers and those engaging in multilingual environments, further solidifying Meta AI as a versatile tool for communication.
Beyond translation, Meta has also announced plans to enhance the visual capabilities of the smart glasses by enabling continuous, real-time interaction. Instead of simply analyzing single images or snapshots, Meta AI will be able to interpret a livestream from the user’s perspective. This functionality will allow the AI to offer ongoing assistance as users move through their day. For example, whether someone is navigating a new city or preparing a complex recipe in the kitchen, the AI will be able to provide live guidance, enhancing productivity and simplifying tasks.
These advances represent a significant step forward in the development of digital assistants and augmented reality. By integrating voice and vision features and continually expanding the range of supported tasks, Meta AI is evolving into a more powerful and versatile assistant. As it continues to develop, its smart glasses integration hints at a future where AI is even more deeply embedded in daily life, offering both practical assistance and transformative possibilities.