Managing large, complex GPU clusters in a data center is a massive undertaking, requiring careful oversight of cooling, power, networking, and more. According to the NVIDIA Technical Blog, NVIDIA has developed an AI agent framework for observability that leverages the OODA loop strategy to address this complexity.
AI-based observability framework
The NVIDIA DGX Cloud team, which manages the global GPU fleet across major cloud service providers and NVIDIA’s own data centers, has implemented this framework, which lets operators converse with their data centers and ask questions about GPU cluster stability and other operational metrics.
For example, an operator can ask the system for the five most frequently replaced components that pose a supply chain risk, or assign a technician to fix a problem in the most vulnerable cluster. This capability is part of a project called LLo11yPop (LLM + Observability), which uses the OODA loop (Observe, Orient, Decide, Act) to improve data center management.
Accelerated Data Center Monitoring
With each new generation of GPUs comes the need for more comprehensive observability. Standard metrics like utilization, errors, and throughput are only the baseline. To fully understand the operating environment, additional factors like temperature, humidity, power stability, and latency must be taken into account.
NVIDIA’s system leverages existing observability tools and integrates them with NVIDIA NIM microservices, allowing operators to query Elasticsearch in natural language and get accurate, actionable insights into issues like fan failures across the fleet.
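To make that flow concrete, here is a minimal Python sketch of translating an operator question into an Elasticsearch query. The index fields (`event.type`, `@timestamp`, `cluster.name`) and the stubbed `call_llm` function are illustrative assumptions, not NVIDIA’s actual schema or API:

```python
# Minimal sketch of the natural-language-to-Elasticsearch idea; the LLM call
# is stubbed and all field names are assumptions, not NVIDIA's real schema.
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a NIM-hosted LLM; returns a canned Elasticsearch query."""
    return json.dumps({
        "query": {
            "bool": {
                "filter": [
                    {"term": {"event.type": "fan_failure"}},         # assumed field
                    {"range": {"@timestamp": {"gte": "now-7d/d"}}},  # last 7 days
                ]
            }
        },
        "aggs": {"per_cluster": {"terms": {"field": "cluster.name"}}},
    })

def question_to_query(question: str) -> dict:
    """Ask the model to turn an operator question into a query DSL body."""
    prompt = (
        "Translate this operator question into an Elasticsearch query DSL "
        f"body (JSON only): {question}"
    )
    return json.loads(call_llm(prompt))

if __name__ == "__main__":
    body = question_to_query("How many fan failures occurred fleet-wide last week?")
    print(json.dumps(body, indent=2))  # would be posted to a search endpoint
```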
Model Architecture
The framework consists of five agent types.
- Orchestrator Agent: Routes questions to the right analyst and chooses the best course of action.
- Analyst Agent: Translates broad questions into specific queries that search agents can answer.
- Action Agent: Coordinates responses, including notifying site reliability engineers (SREs).
- Search Agent: Executes queries against a data source or service endpoint.
- Task Execution Agent: Performs specific tasks through a workflow engine.
This multi-agent approach mimics an organizational hierarchy, with directors coordinating tasks, managers leveraging domain knowledge to assign work, and workers optimizing for specific tasks.
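A rough sketch of how that hierarchy might be wired together follows; all class names and the routing logic are illustrative assumptions rather than the LLo11yPop implementation:

```python
# Toy wiring of the director/manager/worker hierarchy described above.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SearchAgent:
    """Worker: executes one query against one data source."""
    source: str
    run_query: Callable[[str], list]

@dataclass
class AnalystAgent:
    """Manager: turns a broad question into queries for its search agents."""
    domain: str
    searchers: list

    def answer(self, question: str) -> list:
        # In the real system an LLM would decompose the question; here we
        # simply fan the question out to every search agent in the domain.
        return [s.run_query(question) for s in self.searchers]

@dataclass
class OrchestratorAgent:
    """Director: routes each question to the analyst for the right domain."""
    analysts: dict

    def ask(self, domain: str, question: str) -> list:
        return self.analysts[domain].answer(question)

# Wire up a toy fleet: one GPU-health analyst backed by one search agent.
gpu_search = SearchAgent("elasticsearch", lambda q: [f"hits for: {q}"])
analyst = AnalystAgent("gpu-health", [gpu_search])
orchestrator = OrchestratorAgent({"gpu-health": analyst})
print(orchestrator.ask("gpu-health", "top 5 most frequently replaced parts"))
```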
Moving to a multi-LLM composite model
To manage the diverse telemetry required for effective cluster management, NVIDIA uses a mixture-of-agents (MoA) approach, in which multiple large language models (LLMs) process different types of data, from GPU metrics to orchestration layers such as Slurm and Kubernetes.
By chaining small, focused models, each fine-tuned for a specific task such as generating SQL queries for Elasticsearch, the system optimizes both performance and accuracy.
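One way to picture this routing is a dispatch table that maps a question’s telemetry domain to a specialist model. The model identifiers and the toy keyword classifier below are assumptions for illustration only:

```python
# Sketch of mixture-of-agents routing: hypothetical fine-tuned specialists
# per telemetry domain. Model ids and keywords are illustrative assumptions.
ROUTES = {
    "gpu_metrics": "small-model-finetuned-for-gpu-metrics",  # assumed id
    "slurm": "small-model-finetuned-for-slurm",              # assumed id
    "kubernetes": "small-model-finetuned-for-k8s",           # assumed id
    "elasticsearch_sql": "small-model-finetuned-for-sql",    # assumed id
}

def classify_domain(question: str) -> str:
    """Toy keyword classifier; the real system would use a router model."""
    q = question.lower()
    if "pod" in q or "node" in q:
        return "kubernetes"
    if "job" in q or "queue" in q:
        return "slurm"
    if "select" in q or "count" in q:
        return "elasticsearch_sql"
    return "gpu_metrics"

def route(question: str) -> str:
    """Pick the specialist model for this question's telemetry domain."""
    return ROUTES[classify_domain(question)]

print(route("How many Slurm jobs failed overnight?"))  # -> the Slurm specialist
```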
Autonomous agent with OODA loop
The next step is to close the loop with autonomous supervisory agents operating within the OODA loop. These agents observe data, orient themselves to the situation, decide on an action, and execute it. Initially, human supervision vets these actions, forming a reinforcement learning loop that improves the system over time.
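The loop itself can be sketched in a few lines. The telemetry values, threshold, and function names below are hypothetical, and the approval gate stands in for the supervised mode described above:

```python
# Minimal OODA-loop sketch with a human approval gate. Observe/orient/decide
# are stubbed; in the real system they would be telemetry and LLM calls.
def observe() -> dict:
    return {"cluster": "cluster-a", "fan_failures_24h": 7}  # stubbed telemetry

def orient(observation: dict) -> str:
    # Place the observation in context (fleet baselines, incident history).
    return "elevated" if observation["fan_failures_24h"] > 5 else "normal"

def decide(assessment: str) -> str | None:
    return "dispatch_technician" if assessment == "elevated" else None

def act(action: str, human_approved: bool) -> None:
    # Keep a human in the loop until the agent has earned autonomy.
    if human_approved:
        print(f"executing: {action}")
    else:
        print(f"proposed (awaiting SRE approval): {action}")

obs = observe()
action = decide(orient(obs))
if action:
    act(action, human_approved=False)  # supervised mode for now
```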
Lessons learned
Key insights gained while developing this framework include starting with prompt engineering rather than model training, selecting the right model for each task, and keeping human supervision in place until the system is proven reliable and safe.
Building AI Agent Applications
NVIDIA offers a variety of tools and technologies for those interested in building their own AI agents and applications. Resources are available at ai.nvidia.com, and detailed guides can be found on the NVIDIA Developer Blog.