Perplexity AI, a leading AI-powered search engine, serves more than 435 million search queries every month on NVIDIA's inference stack. According to NVIDIA's official blog, the platform combines NVIDIA H100 Tensor Core GPUs, Triton Inference Server, and TensorRT-LLM to deploy large language models (LLMs) efficiently.
Serving multiple AI models
To meet diverse user needs, Perplexity AI operates more than 20 AI models simultaneously, including variants of the open-source Llama 3.1 model. Each user request is matched to the best-fitting model by smaller classification models that determine user intent. The models are distributed across GPU pods, each managed by an NVIDIA Triton Inference Server instance, so that requests are served efficiently under strict service-level agreements (SLAs).
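As a rough illustration of this routing pattern, the sketch below maps a classified intent to a model endpoint. The classifier stub, model names, and latency budgets are hypothetical stand-ins, not Perplexity's actual components.

```python
# Hypothetical sketch of intent-based model routing; classifier logic,
# model names, and latency budgets are illustrative only.
from dataclasses import dataclass

@dataclass
class ModelEndpoint:
    name: str
    max_latency_ms: int  # SLA budget for this model class

# Illustrative pool: small models for simple intents, larger for complex ones.
MODEL_POOL = {
    "quick_answer": ModelEndpoint("llama-3.1-8b", max_latency_ms=300),
    "deep_research": ModelEndpoint("llama-3.1-70b", max_latency_ms=1500),
    "complex_reasoning": ModelEndpoint("llama-3.1-405b", max_latency_ms=4000),
}

def classify_intent(query: str) -> str:
    """Stand-in for the small classification model that labels user intent."""
    if len(query.split()) > 30 or "analyze" in query.lower():
        return "complex_reasoning"
    if "compare" in query.lower() or "research" in query.lower():
        return "deep_research"
    return "quick_answer"

def route(query: str) -> ModelEndpoint:
    return MODEL_POOL[classify_intent(query)]

print(route("What is the capital of France?").name)  # llama-3.1-8b
```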
Pods are hosted within a Kubernetes cluster with an internal frontend scheduler that directs traffic based on load and usage. This ensures consistent SLA compliance and optimizes performance and resource utilization.
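A minimal sketch of what such a load-aware front-end scheduler can look like, assuming a simple least-loaded policy; the pod names and the in-flight-request metric are illustrative, not details from the actual deployment.

```python
# Minimal sketch of a load-aware frontend scheduler; pod names and the
# load metric are hypothetical, not the actual Kubernetes setup.
from dataclasses import dataclass

@dataclass
class Pod:
    name: str
    in_flight: int = 0  # requests currently being served

class FrontendScheduler:
    def __init__(self, pods):
        self.pods = pods

    def pick(self) -> Pod:
        # Route to the pod with the fewest in-flight requests.
        return min(self.pods, key=lambda p: p.in_flight)

    def dispatch(self, query: str) -> str:
        pod = self.pick()
        pod.in_flight += 1
        return f"{query!r} -> {pod.name}"

sched = FrontendScheduler([Pod("triton-pod-a"), Pod("triton-pod-b")])
print(sched.dispatch("first query"))   # goes to triton-pod-a
print(sched.dispatch("second query"))  # goes to triton-pod-b
```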
Performance and cost optimization
Perplexity AI uses a comprehensive A/B testing strategy to define SLAs for a variety of use cases. The goal is to maximize GPU utilization and minimize the cost of inference serving while still meeting each target SLA. Smaller models focus on minimizing latency, while larger user-facing models such as Llama 3.1 8B, 70B, and 405B undergo detailed performance analysis to balance cost and user experience.
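To make the SLA side of this concrete, the snippet below checks a set of measured request latencies against a percentile target, the kind of pass/fail signal such A/B tests can feed into. The sample latencies and the p90 target are invented for illustration.

```python
# Hedged sketch of validating measured latencies against an SLA target.
# Sample data and the p90 threshold are made up for illustration.
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [210, 240, 250, 265, 280, 290, 310, 330, 360, 420]
p90 = percentile(latencies_ms, 90)
sla_target_ms = 400

print(f"p90 latency: {p90} ms (target {sla_target_ms} ms)")
print("SLA met" if p90 <= sla_target_ms else "SLA violated")
```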
Performance is further improved by sharding model deployments across multiple GPUs: increasing the degree of tensor parallelism lowers serving costs for latency-sensitive requests. By hosting models on cloud-based NVIDIA GPUs rather than relying on third-party LLM API services, Perplexity estimates it saves approximately $1 million per year.
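The back-of-the-envelope sketch below shows why higher tensor parallelism can reduce cost per token under a strict latency SLA: a tight latency budget caps the batch size a small configuration can sustain, so its per-GPU throughput collapses. All prices and throughput figures here are invented for illustration; they are not published numbers from Perplexity or NVIDIA.

```python
# Illustrative cost arithmetic for tensor parallelism (TP) under a fixed
# latency SLA. The hourly GPU price and the achievable throughput per
# replica at each TP degree are hypothetical.
GPU_HOURLY_COST = 2.50  # hypothetical cloud price per GPU-hour

# (TP degree, tokens/sec a replica can sustain while meeting the SLA)
configs = {
    "tp2": (2, 400),    # latency cap forces tiny batches, low throughput
    "tp4": (4, 1400),
    "tp8": (8, 3600),   # meets the latency cap even at large batch sizes
}

for name, (tp, tok_per_sec) in configs.items():
    tokens_per_hour = tok_per_sec * 3600
    cost_per_million = (tp * GPU_HOURLY_COST) / tokens_per_hour * 1_000_000
    print(f"{name}: ${cost_per_million:.2f} per million tokens")
```

With these (invented) numbers, the eight-way sharded deployment serves latency-sensitive traffic at a lower cost per token than the two-way one, despite using four times as many GPUs.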
Innovative technology for improved throughput
Perplexity AI is working with NVIDIA to implement disaggregated serving, a method that runs the prefill and decode stages of inference on separate GPUs to significantly increase throughput while still meeting SLAs. This flexibility also lets Perplexity mix different NVIDIA GPU products to optimize performance and cost-effectiveness.
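A conceptual sketch of that split, assuming two GPU pools and a simple round-robin hand-off; the pool names and the KV-cache transfer step are illustrative placeholders, not the actual implementation.

```python
# Conceptual sketch of disaggregated serving: prefill (prompt processing)
# and decode (token generation) run on separate GPU pools. Pool names and
# the hand-off mechanism are illustrative only.
from collections import deque

prefill_pool = deque(["prefill-gpu-0", "prefill-gpu-1"])  # compute-bound stage
decode_pool = deque(["decode-gpu-0", "decode-gpu-1"])     # memory-bound stage

def assign(pool: deque) -> str:
    # Simple round-robin assignment within a pool.
    gpu = pool.popleft()
    pool.append(gpu)
    return gpu

def serve(prompt: str) -> None:
    p_gpu = assign(prefill_pool)
    print(f"prefill of {prompt!r} on {p_gpu} -> KV cache handed off")
    d_gpu = assign(decode_pool)
    print(f"decode continues on {d_gpu}")

serve("Why is the sky blue?")
```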
Further improvements are expected with the upcoming NVIDIA Blackwell platform, which promises significant performance gains through technological innovations including the second-generation Transformer Engine and advanced NVLink features.
Perplexity’s strategic use of the NVIDIA inference stack highlights the potential for AI-based platforms to efficiently manage massive query volumes and deliver high-quality user experiences while remaining cost-effective.