According to the NVIDIA Technical Blog, Vision Language Models (VLMs) are an exciting innovation in AI that provide a more dynamic and flexible way to analyze video. VLMs make the technology more accessible and adaptable by letting users interact with image and video inputs using natural language. These models can run on the NVIDIA Jetson Orin edge AI platform or on discrete GPUs through NVIDIA NIM microservices.
What is a Visual AI Agent?
Visual AI agents are powered by VLMs, which let users ask a wide range of natural-language questions and obtain insights that reflect true intent and context from recorded or live video. These agents can be interacted with, and integrated into other services and mobile apps, through easy-to-use REST APIs, as sketched below. This new generation of visual AI agents helps summarize scenes in natural language, generate a wide range of alerts, and extract actionable insights from video.
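As a rough illustration of that interaction pattern, a client could send a natural-language request to an agent over HTTP. This is a minimal sketch only: the host, port, route, and payload fields below are placeholders, not the actual API of the reference workflow.

```python
import requests

# Hypothetical visual AI agent endpoint; the real route and fields are
# defined by the deployed workflow, not by this example.
AGENT_URL = "http://agent-host:8000/api/summarize"

payload = {
    "stream_id": "warehouse-cam-01",  # assumed identifier for a camera or recorded video
    "prompt": "Summarize activity in the last hour and flag any safety issues.",
}

response = requests.post(AGENT_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json().get("summary"))
```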
NVIDIA Metropolis provides a visual AI agent workflow, a reference solution that accelerates the development of VLM-powered AI applications that extract insights from video through contextual understanding, whether deployed at the edge or in the cloud.
For cloud deployments, developers can power visual AI agents with NVIDIA NIM, a set of inference microservices that includes industry-standard APIs, domain-specific code, optimized inference engines, and enterprise runtimes. Visit the API catalog to explore and try foundation models directly in the browser.
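For a sense of what calling such a model looks like, here is a minimal sketch that assumes an OpenAI-compatible chat completions endpoint from the API catalog; the model identifier is a placeholder, and the exact payload format for image inputs should be taken from the catalog entry for the model you choose.

```python
import os
import requests

# Assumes an OpenAI-compatible chat completions endpoint; the model id below
# is a placeholder -- substitute the VLM you select in the API catalog.
API_KEY = os.environ["NVIDIA_API_KEY"]
URL = "https://integrate.api.nvidia.com/v1/chat/completions"

payload = {
    "model": "nvidia/vila",  # assumed model id; check the catalog entry
    "messages": [
        {"role": "user", "content": "Describe what is happening in this scene."}
    ],
    "max_tokens": 256,
}

resp = requests.post(
    URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```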
Building Visual AI Agents for the Edge
Jetson Platform Services is a suite of prebuilt microservices that provide essential capabilities for building computer vision solutions on NVIDIA Jetson Orin. These include AI services with support for generative AI models such as zero-shot detection and state-of-the-art VLMs. A VLM combines a large language model with a vision transformer to enable complex reasoning over text and visual inputs.
The VLM of choice for Jetson is VILA, which delivers state-of-the-art reasoning capability and speed by optimizing the number of tokens per image. Combining VLMs with Jetson Platform Services lets you build VLM-based visual AI agent applications that detect events on live streaming cameras and send notifications to users through a mobile app.
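A configuration step for such an agent might look like the following sketch: pointing the VLM microservice at a live stream and registering natural-language alert rules over REST. The host, port, route names, and payload fields here are assumptions for illustration; the actual endpoints are documented with the Jetson Platform Services VLM service.

```python
import requests

# Hypothetical base URL for the Jetson Platform Services VLM microservice.
VLM_SERVICE = "http://jetson-orin.local:5010"

# Point the VLM service at a live RTSP stream (assumed route and field names).
requests.post(
    f"{VLM_SERVICE}/api/v1/live-stream",
    json={"liveStreamUrl": "rtsp://192.168.1.42:8554/cam01"},
    timeout=30,
).raise_for_status()

# Register natural-language alert rules the VLM evaluates against the stream.
requests.post(
    f"{VLM_SERVICE}/api/v1/alerts",
    json={"alerts": ["Is there a fire?", "Is anyone not wearing a helmet?"]},
    timeout=30,
).raise_for_status()
```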
Integration with Mobile Apps
The entire end-to-end system can now be integrated with a mobile app to build a VLM-based visual AI agent. To provide video input for the VLM, the networking and VST microservices in Jetson Platform Services automatically discover network-connected IP cameras and make them available to the VLM service and the mobile app through the VST REST API.
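A client might query VST for the discovered cameras along these lines; the route, port, and response schema are assumptions for illustration rather than the documented VST API.

```python
import requests

# Hypothetical VST query; the real route for listing discovered cameras is
# defined in the Jetson Platform Services documentation.
VST_URL = "http://jetson-orin.local:81/api/v1/sensor/list"

cameras = requests.get(VST_URL, timeout=30)
cameras.raise_for_status()
for cam in cameras.json():
    print(cam)  # e.g. camera name and stream URL, per the actual VST schema
```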
In the app, users can set custom notifications in natural language, such as "Is there a fire?", on selected live streams. Once the notification rules are set, the VLM evaluates the live stream and notifies the user in real time over a WebSocket connected to the mobile app, triggering a pop-up notification on the device and allowing the user to ask follow-up questions in chat mode.
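On the receiving side, a client listening for those alerts could look like this minimal sketch; the WebSocket path and the message fields are assumptions, standing in for whatever the VLM service actually emits.

```python
import asyncio
import json
import websockets  # pip install websockets

# Hypothetical WebSocket endpoint and message shape for the alert feed.
ALERT_WS = "ws://jetson-orin.local:5010/v1/alerts/ws"

async def listen_for_alerts():
    async with websockets.connect(ALERT_WS) as ws:
        while True:
            event = json.loads(await ws.recv())
            # Assumed fields: which rule fired, on which stream, and when.
            print(f"ALERT: {event.get('alert')} "
                  f"on {event.get('stream_id')} at {event.get('timestamp')}")

asyncio.run(listen_for_alerts())
```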
Conclusion
This development highlights the potential of VLMs combined with Jetson Platform Services for building advanced visual AI agents. The full source code for the VLM AI service is available on GitHub, where developers can learn how to use VLMs and build their own microservices.
For more information, visit the NVIDIA Technical Blog.
Image source: Shutterstock