ADOPTION NEWS

Perplexity AI leverages the NVIDIA inference stack to process 435 million queries per month.

By Crypto Flexs | December 6, 2024 | 3 Mins Read

Terrill Dickey
December 6, 2024 04:17

Perplexity AI leverages NVIDIA’s inference stack, including H100 Tensor Core GPUs and Triton Inference Server, to manage over 435 million search queries per month, optimizing performance and reducing costs.

Perplexity AI, a leading AI-powered search engine, successfully manages over 435 million searches every month thanks to NVIDIA’s advanced inference stack. According to NVIDIA’s official blog, the platform integrates NVIDIA H100 Tensor Core GPUs, Triton Inference Server, and TensorRT-LLM to efficiently deploy large language models (LLMs).
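To make the deployment concrete, here is a minimal sketch of how a client might send a request to a Triton Inference Server over HTTP using the tritonclient Python package. The server URL, model name, and input/output tensor names ("text_input", "max_tokens", "text_output") are illustrative assumptions; the actual names depend on how the deployed TensorRT-LLM model is configured.

```python
# Minimal sketch of querying a Triton Inference Server over HTTP.
# Endpoint, model name, and tensor names are assumptions for illustration.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Triton expects named input tensors; LLM backends commonly take the prompt
# as a string tensor plus generation parameters such as a max token count.
prompt = np.array([["What GPUs does Perplexity AI use?"]], dtype=object)
max_tokens = np.array([[256]], dtype=np.int32)

inputs = [
    httpclient.InferInput("text_input", list(prompt.shape), "BYTES"),
    httpclient.InferInput("max_tokens", list(max_tokens.shape), "INT32"),
]
inputs[0].set_data_from_numpy(prompt)
inputs[1].set_data_from_numpy(max_tokens)

outputs = [httpclient.InferRequestedOutput("text_output")]

# "llama-3.1-8b" is a hypothetical model name for this sketch.
result = client.infer(model_name="llama-3.1-8b", inputs=inputs, outputs=outputs)
print(result.as_numpy("text_output"))
```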

Serving multiple AI models

To meet diverse user needs, Perplexity AI operates more than 20 AI models simultaneously, including variants of the open-source Llama 3.1 models. Each user request is matched to the best-fitting model by smaller classification models that determine user intent. These models are distributed across GPU pods, each managed by an NVIDIA Triton Inference Server, ensuring efficient operation under strict service-level agreements (SLAs).
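The article does not publish Perplexity's routing code, but the idea of intent-based dispatch can be sketched as follows. The intent labels, keyword heuristic, and model names below are hypothetical stand-ins for the smaller classification models described above.

```python
# Illustrative sketch (not Perplexity's code) of intent-based routing: a small
# classifier labels the query, and the label picks which model pool serves it.
INTENT_TO_MODEL = {
    "quick_answer":  "llama-3.1-8b",    # latency-sensitive, small model
    "deep_research": "llama-3.1-70b",   # quality-sensitive, larger model
    "code":          "llama-3.1-405b",  # hardest queries go to the largest pool
}

def classify_intent(query: str) -> str:
    """Stand-in for the small classification model that infers user intent."""
    if any(kw in query.lower() for kw in ("code", "python", "function")):
        return "code"
    if len(query.split()) > 30:
        return "deep_research"
    return "quick_answer"

def route(query: str) -> str:
    """Return the model (i.e. the GPU pod) that should handle this query."""
    return INTENT_TO_MODEL[classify_intent(query)]

print(route("Write a Python function to parse JSON"))  # -> llama-3.1-405b
```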

Pods are hosted within a Kubernetes cluster with an internal frontend scheduler that directs traffic based on load and usage. This ensures consistent SLA compliance and optimizes performance and resource utilization.
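A least-loaded routing policy is one simple way such a frontend scheduler could direct traffic. The sketch below is an illustration under that assumption, not the actual scheduler, and the pod names are made up.

```python
# Sketch of a load-aware scheduler: among replicas serving the same model,
# pick the pod with the fewest in-flight requests. Pod names are hypothetical.
from collections import defaultdict

in_flight = defaultdict(int)  # pod name -> currently running requests
PODS = ["llama-8b-pod-0", "llama-8b-pod-1", "llama-8b-pod-2"]

def schedule() -> str:
    """Least-loaded routing, a simple stand-in for the frontend scheduler."""
    pod = min(PODS, key=lambda p: in_flight[p])
    in_flight[pod] += 1
    return pod

def complete(pod: str) -> None:
    """Mark a request as finished so the pod's load count drops."""
    in_flight[pod] -= 1

pod = schedule()   # route one request
complete(pod)      # release it when the response is done
```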

Performance and cost optimization

Perplexity AI uses a comprehensive A/B testing strategy to define SLAs for a variety of use cases. The goal is to maximize GPU utilization and minimize the cost of inference serving while still meeting each target SLA. Smaller models focus on minimizing latency, while larger user-facing models such as Llama 3.1 8B, 70B, and 405B undergo detailed performance analysis to balance cost and user experience.
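As a rough illustration of how an A/B test might gate a candidate configuration against an SLA, the sketch below checks a p95 time-to-first-token target. The 300 ms threshold and the sample latencies are hypothetical, not figures from the article.

```python
# Illustrative SLA gate for A/B testing: promote a candidate configuration
# only if its measured p95 latency stays under a (hypothetical) target.
def p95(samples: list[float]) -> float:
    """Return the 95th-percentile value of a list of latency samples."""
    return sorted(samples)[int(0.95 * (len(samples) - 1))]

def meets_sla(latencies_ms: list[float], target_p95_ms: float = 300.0) -> bool:
    """True if the candidate configuration satisfies the hypothetical SLA."""
    return p95(latencies_ms) <= target_p95_ms

measured = [120.0, 180.0, 240.0, 260.0, 310.0, 150.0, 200.0, 220.0, 190.0, 170.0]
print(meets_sla(measured))  # True under the hypothetical 300 ms p95 target
```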

Performance is further improved by parallelizing model deployment across multiple GPUs and increasing tensor parallelism, which lowers serving costs for latency-sensitive requests. By hosting models on cloud-based NVIDIA GPUs rather than relying on third-party LLM API services, this approach saves Perplexity approximately $1 million per year.
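A back-of-envelope comparison shows why self-hosting at this scale can undercut per-token API pricing. Every number below except the 435 million monthly queries is a hypothetical placeholder, not a figure from Perplexity or NVIDIA.

```python
# Back-of-envelope cost comparison (all rates hypothetical, for illustration):
# serving tokens on self-hosted GPU instances vs paying a per-token API price.
queries_per_month = 435_000_000        # figure cited in the article
tokens_per_query = 500                 # hypothetical average tokens per query
tokens_per_month = queries_per_month * tokens_per_query

api_price_per_million_tokens = 1.00    # hypothetical third-party API rate (USD)
api_cost = tokens_per_month / 1_000_000 * api_price_per_million_tokens

gpu_hourly_rate = 2.50                 # hypothetical cloud GPU rate (USD/hour)
gpu_count = 64                         # hypothetical fleet size
hosting_cost = gpu_hourly_rate * gpu_count * 24 * 30

print(f"API cost/month:     ${api_cost:,.0f}")
print(f"Hosting cost/month: ${hosting_cost:,.0f}")
print(f"Monthly savings:    ${api_cost - hosting_cost:,.0f}")
```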

Innovative technology for improved throughput

Perplexity AI is working with NVIDIA to implement disaggregated serving, a technique that places different inference stages (such as prompt prefill and token decoding) on separate GPUs, significantly increasing throughput while still complying with SLAs. This flexibility allows Perplexity to leverage a variety of NVIDIA GPU products to optimize performance and cost-effectiveness.
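Conceptually, disaggregated serving splits the compute-bound prefill stage from the memory-bandwidth-bound decode stage so each can run on GPU pools sized for that stage. The sketch below illustrates the hand-off with simple queues; it is a conceptual illustration, not NVIDIA's or Perplexity's implementation.

```python
# Conceptual sketch of disaggregated serving: route prefill (prompt processing)
# and decode (token generation) to separate GPU pools, linked by a hand-off queue.
from queue import Queue

prefill_pool = Queue()  # requests waiting for prompt processing
decode_pool = Queue()   # requests whose KV cache is ready for token generation

def handle_request(prompt: str) -> None:
    prefill_pool.put(prompt)

def prefill_worker() -> None:
    """Runs on the prefill GPU pool: process the prompt, build the KV cache."""
    prompt = prefill_pool.get()
    kv_cache = f"kv({prompt})"            # stand-in for the real KV cache
    decode_pool.put((prompt, kv_cache))   # hand off to the decode pool

def decode_worker() -> str:
    """Runs on the decode GPU pool: generate tokens from the handed-off cache."""
    prompt, _kv_cache = decode_pool.get()
    return f"generated answer for: {prompt}"

handle_request("What is disaggregated serving?")
prefill_worker()
print(decode_worker())
```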

Further improvements are expected with the upcoming NVIDIA Blackwell platform, which promises significant performance gains through technological innovations including the second-generation Transformer Engine and advanced NVLink features.

Perplexity’s strategic use of the NVIDIA inference stack highlights the potential for AI-based platforms to efficiently manage massive query volumes and deliver high-quality user experiences while remaining cost-effective.

Image source: Shutterstock

