ADOPTION NEWS

Perplexity AI leverages the NVIDIA inference stack to process 435 million queries per month.


Terrill Dickey
December 6, 2024 04:17

Perplexity AI leverages NVIDIA’s inference stack, including H100 Tensor Core GPUs and Triton Inference Server, to manage over 435 million search queries per month, optimizing performance and reducing costs.





Perplexity AI, a leading AI-powered search engine, successfully manages over 435 million searches every month thanks to NVIDIA’s advanced inference stack. According to NVIDIA’s official blog, the platform integrates NVIDIA H100 Tensor Core GPUs, Triton Inference Server, and TensorRT-LLM to efficiently deploy large language models (LLMs).

Serving multiple AI models

To meet diverse user needs, Perplexity AI runs more than 20 AI models simultaneously, including variants of the open-source Llama 3.1 family. Each user request is matched to the best-fitting model by smaller classification models that determine user intent. The models are distributed across GPU pods, each managed by NVIDIA Triton Inference Server, which keeps serving efficient under strict service-level agreements (SLAs).
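
As a rough illustration of this routing pattern, the sketch below classifies a query's intent and maps it to a pool of serving endpoints. The intent labels, pod names, and keyword heuristic are invented for illustration; they are not Perplexity's actual implementation.

```python
# Hypothetical sketch of intent-based model routing; labels, pod names, and the
# classifier heuristic are invented and are not Perplexity's actual code.
import random

# Made-up mapping from intent label to the pool of serving endpoints for that model.
MODEL_POOLS = {
    "quick_answer": ["llama-3.1-8b-pod-0", "llama-3.1-8b-pod-1"],
    "deep_research": ["llama-3.1-70b-pod-0"],
    "complex_reasoning": ["llama-3.1-405b-pod-0"],
}

def classify_intent(query: str) -> str:
    """Stand-in for the small classification model that determines user intent."""
    if len(query.split()) > 30:
        return "complex_reasoning"
    if any(word in query.lower() for word in ("compare", "analyze", "explain")):
        return "deep_research"
    return "quick_answer"

def route(query: str) -> str:
    """Pick a serving pod for the query; a real front end would also weigh load and SLAs."""
    intent = classify_intent(query)
    return random.choice(MODEL_POOLS[intent])

if __name__ == "__main__":
    for q in ("capital of France?", "compare H100 and A100 for LLM inference"):
        print(q, "->", route(q))
```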

Pods are hosted within a Kubernetes cluster with an internal frontend scheduler that directs traffic based on load and usage. This ensures consistent SLA compliance and optimizes performance and resource utilization.
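
The following sketch shows one way such a front-end scheduler could pick pods, routing each new request to the least-utilized pod. The pod names, capacities, and least-loaded policy are assumptions for illustration rather than details disclosed by Perplexity.

```python
# Illustrative least-loaded dispatch, loosely mirroring the front-end scheduler
# described above; pod names, capacities, and the policy itself are assumptions.
from dataclasses import dataclass

@dataclass
class Pod:
    name: str
    capacity: int = 32   # assumed max concurrent requests per pod
    in_flight: int = 0   # requests currently being served

class FrontendScheduler:
    def __init__(self, pods):
        self.pods = {p.name: p for p in pods}

    def dispatch(self) -> str:
        # Route new traffic to the least-utilized pod to keep latency within SLA.
        pod = min(self.pods.values(), key=lambda p: p.in_flight / p.capacity)
        pod.in_flight += 1
        return pod.name

    def complete(self, name: str) -> None:
        self.pods[name].in_flight -= 1

scheduler = FrontendScheduler([Pod("pod-a"), Pod("pod-b"), Pod("pod-c")])
print([scheduler.dispatch() for _ in range(5)])
```

A production scheduler would also weigh queue depth, prompt lengths, and per-model SLAs, but the utilization signal is the core idea.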

Performance and cost optimization

Perplexity AI uses a comprehensive A/B testing strategy to define SLAs for its various use cases. The goal is to maximize GPU utilization and minimize inference-serving costs while still meeting each target SLA. Smaller models focus on minimizing latency, while larger user-facing models such as Llama 3.1 8B, 70B, and 405B undergo detailed performance analysis to balance cost and user experience.
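
As a simplified picture of how an SLA check in such an A/B test might look, the snippet below compares two variants against an invented 95th-percentile latency target; the threshold and latency samples are hypothetical, not Perplexity's figures.

```python
# Rough illustration of validating a latency SLA from A/B-test samples; the
# percentile target and all latency numbers are invented.
def percentile(samples, pct):
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1)))
    return ordered[idx]

def meets_sla(latencies_ms, p95_target_ms=500.0):
    """Return True if the 95th-percentile latency stays under the target."""
    return percentile(latencies_ms, 95) <= p95_target_ms

variant_a = [210, 230, 250, 260, 480, 300, 270]        # ms, hypothetical run
variant_b = [190, 220, 240, 510, 530, 260, 250]        # ms, hypothetical run

print("Variant A meets SLA:", meets_sla(variant_a))    # True
print("Variant B meets SLA:", meets_sla(variant_b))    # False
```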

Performance is further improved by parallelizing model deployment across multiple GPUs, with higher tensor parallelism lowering serving costs for latency-sensitive requests. By hosting models on cloud-based NVIDIA GPUs instead of relying on third-party LLM API services, Perplexity estimates savings of approximately $1 million per year.
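
A back-of-the-envelope model can show why higher tensor parallelism can end up cheaper for latency-sensitive traffic: configurations with too little parallelism miss the latency budget entirely, so the cheapest qualifying setup may spread each request across more GPUs. Every figure below (GPU price, throughput, scaling efficiency, latency budget) is invented for illustration and is not a Perplexity or NVIDIA measurement.

```python
# Toy model of the tensor-parallelism trade-off described above; all numbers are invented.
GPU_COST_PER_HOUR = 4.0        # assumed cloud price for one GPU, in USD
BASE_TOKENS_PER_SEC = 50.0     # assumed single-GPU decode speed for one request
LATENCY_BUDGET_MS = 15.0       # assumed per-token latency SLA

def per_token_latency_ms(tp: int, efficiency: float = 0.8) -> float:
    tokens_per_sec = BASE_TOKENS_PER_SEC * (1 + (tp - 1) * efficiency)
    return 1000.0 / tokens_per_sec

def cost_per_million_tokens(tp: int, efficiency: float = 0.8) -> float:
    # Latency drops with the TP degree, but each request now occupies `tp` GPUs at once.
    tokens_per_sec = BASE_TOKENS_PER_SEC * (1 + (tp - 1) * efficiency)
    return tp * 1_000_000 / tokens_per_sec / 3600 * GPU_COST_PER_HOUR

for tp in (1, 2, 4, 8):
    latency = per_token_latency_ms(tp)
    ok = latency <= LATENCY_BUDGET_MS
    print(f"TP={tp}: {latency:.1f} ms/token, "
          f"~${cost_per_million_tokens(tp):.2f}/M tokens, "
          f"{'meets' if ok else 'misses'} the latency budget")
```

In this toy model, TP=1 misses the budget while TP=2 is the cheapest configuration that meets it, which is the kind of trade-off the A/B analysis is meant to surface.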

Innovative technology for improved throughput

Perplexity AI is working with NVIDIA to implement 'disaggregated serving', a technique that assigns different phases of inference to different GPUs, significantly increasing throughput while still meeting SLAs. This flexibility also lets Perplexity mix different NVIDIA GPU products to optimize performance and cost-effectiveness.
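
Conceptually, disaggregated serving routes different inference phases, typically the compute-bound prefill (prompt processing) and the bandwidth-bound decode (token generation), to separate GPU pools. The toy sketch below shows the hand-off; the pool names and queue are placeholders, not Perplexity's implementation.

```python
# Conceptual sketch of disaggregated serving: prefill and decode run on separate
# GPU pools. Pool names and the hand-off queue are invented placeholders.
from queue import Queue

PREFILL_POOL = ["prefill-gpu-0", "prefill-gpu-1"]                 # compute-bound phase
DECODE_POOL = ["decode-gpu-0", "decode-gpu-1", "decode-gpu-2"]    # bandwidth-bound phase
handoff = Queue()   # stands in for shipping the KV cache between the two pools

def prefill(request_id: str, prompt: str) -> None:
    gpu = PREFILL_POOL[hash(request_id) % len(PREFILL_POOL)]
    print(f"{request_id}: prefill on {gpu} ({len(prompt.split())} prompt tokens)")
    handoff.put(request_id)   # in reality, the KV cache moves to a decode GPU here

def decode(max_new_tokens: int = 3) -> None:
    request_id = handoff.get()
    gpu = DECODE_POOL[hash(request_id) % len(DECODE_POOL)]
    print(f"{request_id}: decode on {gpu}, generating up to {max_new_tokens} tokens")

prefill("req-1", "What GPUs does Perplexity AI use for inference?")
decode()
```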

Further improvements are expected with the upcoming NVIDIA Blackwell platform, which promises significant performance gains through technological innovations including the second-generation Transformer Engine and advanced NVLink features.

Perplexity’s strategic use of the NVIDIA inference stack highlights the potential for AI-based platforms to efficiently manage massive query volumes and deliver high-quality user experiences while remaining cost-effective.

Image source: Shutterstock

