ADOPTION NEWS

Perplexity AI leverages the NVIDIA inference stack to process 435 million queries per month


Terrill Dickey
December 6, 2024 04:17

Perplexity AI leverages NVIDIA’s inference stack, including H100 Tensor Core GPUs and Triton Inference Server, to manage over 435 million search queries per month, optimizing performance and reducing costs.

Perplexity AI, a leading AI-powered search engine, successfully manages over 435 million searches every month thanks to NVIDIA’s advanced inference stack. According to NVIDIA’s official blog, the platform integrates NVIDIA H100 Tensor Core GPUs, Triton Inference Server, and TensorRT-LLM to efficiently deploy large language models (LLMs).
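As a rough illustration of the serving layer, the following minimal Python sketch queries a Triton Inference Server over HTTP using the official tritonclient package. The endpoint, model name, and tensor names are placeholders rather than Perplexity's actual deployment; a TensorRT-LLM backend defines its own input and output tensors in its model configuration.

    # Hedged sketch: querying a Triton Inference Server over HTTP with the
    # official tritonclient package. The endpoint, model name, and tensor
    # names below are placeholders, not Perplexity's actual deployment.
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    # "text_input"/"text_output" are assumed names; the real backend's model
    # configuration determines what the server actually exposes.
    prompt = np.array([["How does Perplexity serve Llama 3.1?"]], dtype=object)
    text_input = httpclient.InferInput("text_input", list(prompt.shape), "BYTES")
    text_input.set_data_from_numpy(prompt)

    response = client.infer(
        model_name="llama-3.1-8b",  # placeholder model name
        inputs=[text_input],
        outputs=[httpclient.InferRequestedOutput("text_output")],
    )
    print(response.as_numpy("text_output"))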

Serving multiple AI models

To meet diverse user needs, Perplexity AI operates more than 20 AI models simultaneously, including variants of the open-source Llama 3.1 model. Each user request is matched to the best-fitting model by smaller classification models that determine user intent. These models are distributed across GPU pods, each managed by an NVIDIA Triton Inference Server, ensuring efficiency under strict service-level agreements (SLAs).
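To make the routing idea concrete, here is a small hypothetical sketch: a lightweight classifier labels the query's intent, and a lookup table maps intents to model pools with SLA targets. The model names, labels, and threshold are invented for illustration and are not Perplexity's internal logic.

    # Hypothetical routing sketch: a small classifier labels the query's
    # intent, and a table maps intents to model pools with SLA targets.
    from dataclasses import dataclass

    @dataclass
    class ModelPool:
        model_name: str        # e.g. the Triton model to call
        max_latency_ms: int    # SLA target for this pool

    ROUTES = {
        "quick_answer": ModelPool("llama-3.1-8b", max_latency_ms=300),
        "deep_research": ModelPool("llama-3.1-70b", max_latency_ms=1500),
    }

    def classify_intent(query: str) -> str:
        # Stand-in for the small classification models described above.
        return "deep_research" if len(query.split()) > 12 else "quick_answer"

    def route(query: str) -> ModelPool:
        return ROUTES[classify_intent(query)]

    print(route("what is tensor parallelism").model_name)  # -> llama-3.1-8b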

Pods are hosted within a Kubernetes cluster with an internal frontend scheduler that directs traffic based on load and usage. This ensures consistent SLA compliance and optimizes performance and resource utilization.
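A front-end scheduler of this kind can be thought of as sending each request to the least-loaded pod that hosts the chosen model. The sketch below is a simplified, hypothetical version of that idea; the real scheduler is internal to Perplexity and also weighs usage patterns and SLA targets.

    # Simplified, hypothetical scheduler: pick the pod with the fewest
    # in-flight requests for the chosen model. Pod names and load counts
    # are made up for illustration.
    def pick_pod(in_flight: dict[str, int]) -> str:
        """in_flight maps pod name -> current number of in-flight requests."""
        return min(in_flight, key=in_flight.get)

    llama_70b_pods = {"pod-a": 12, "pod-b": 7, "pod-c": 9}
    print(pick_pod(llama_70b_pods))  # -> pod-b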

Performance and cost optimization

Perplexity AI uses a comprehensive A/B testing strategy to define SLAs for a variety of use cases. The goal is to maximize GPU utilization and minimize the cost of inference serving while still meeting each target SLA. Smaller models focus on minimizing latency, while the larger user-facing models, such as the Llama 3.1 8B, 70B, and 405B variants, undergo detailed performance analysis to balance cost and user experience.
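The cost side of this trade-off follows directly from GPU price and achieved throughput: batching more requests per GPU raises tokens per second and therefore lowers cost per token, at the price of per-request latency. The numbers in the sketch below are assumptions for illustration, not measured figures.

    # Illustrative arithmetic only: higher throughput per GPU (e.g. from
    # larger batches) lowers cost per token but raises per-request latency.
    # The GPU price and throughput values are assumptions, not measurements.
    GPU_HOURLY_USD = 4.0  # assumed hourly price for one cloud H100

    def cost_per_million_tokens(tokens_per_second: float) -> float:
        tokens_per_hour = tokens_per_second * 3600
        return GPU_HOURLY_USD / tokens_per_hour * 1_000_000

    for tps in (500, 2_000, 8_000):  # low batch -> high batch
        print(f"{tps:>5} tok/s -> ${cost_per_million_tokens(tps):.2f} per 1M tokens")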

Performance is further improved by parallelizing model deployment across multiple GPUs, with higher degrees of tensor parallelism lowering serving costs for latency-sensitive requests. By hosting models on cloud-based NVIDIA GPUs instead of relying on third-party LLM API services, Perplexity saves approximately $1 million per year.
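One way to see why tensor parallelism matters is a back-of-envelope look at weight memory per GPU when a model is sharded across a tensor-parallel group. The arithmetic below assumes 16-bit weights (2 bytes per parameter) and ignores activations and the KV cache, so the figures are rough illustrations only.

    # Back-of-envelope view of tensor parallelism: weight memory per GPU when
    # a model's parameters are sharded across a tensor-parallel group.
    # Assumes 16-bit weights (2 bytes/parameter); activations and KV cache
    # are ignored, so these are rough illustrations only.
    def weight_gb_per_gpu(params_billion: float, tp_degree: int) -> float:
        return params_billion * 2 / tp_degree  # GB of weights per GPU

    for name, size_b in (("Llama 8B", 8), ("Llama 70B", 70), ("Llama 405B", 405)):
        shards = ", ".join(
            f"TP={tp}: ~{weight_gb_per_gpu(size_b, tp):.0f} GB" for tp in (1, 4, 8)
        )
        print(f"{name}: {shards}")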

Innovative technology for improved throughput

Perplexity AI is working with NVIDIA to implement disaggregated serving, which places separate inference stages (such as prompt prefill and token decode) on different GPUs, significantly increasing throughput while still meeting SLAs. This flexibility allows Perplexity to leverage a variety of NVIDIA GPU products to optimize performance and cost-effectiveness.
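Conceptually, disaggregated serving routes the compute-bound prefill stage and the bandwidth-bound decode stage to separate GPU pools so each can be scaled and scheduled to its own bottleneck. The sketch below illustrates only that idea; the pool names, helpers, and data are assumptions, not NVIDIA's or Perplexity's implementation.

    # Conceptual sketch of disaggregated serving: compute-bound prefill and
    # bandwidth-bound decode run on separate GPU pools so each stage can be
    # scaled independently. Pools and helpers here are illustrative only.
    import itertools

    PREFILL_POOL = itertools.cycle(["prefill-gpu-0", "prefill-gpu-1"])
    DECODE_POOL = itertools.cycle(["decode-gpu-0", "decode-gpu-1", "decode-gpu-2"])

    def run_prefill(prompt: str) -> dict:
        gpu = next(PREFILL_POOL)  # process the whole prompt once, build the KV cache
        return {"prefill_gpu": gpu, "kv_cache": f"kv({len(prompt)} chars)"}

    def run_decode(state: dict) -> str:
        gpu = next(DECODE_POOL)  # generate output tokens step by step
        return f"decoded on {gpu} from {state['kv_cache']} (prefilled on {state['prefill_gpu']})"

    print(run_decode(run_prefill("Why split prefill and decode across GPU pools?")))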

Further improvements are expected with the upcoming NVIDIA Blackwell platform, which promises significant performance gains through technological innovations including the second-generation Transformer Engine and advanced NVLink features.

Perplexity’s strategic use of the NVIDIA inference stack highlights the potential for AI-based platforms to efficiently manage massive query volumes and deliver high-quality user experiences while remaining cost-effective.

Image source: Shutterstock

