Perplexity AI leverages the NVIDIA inference stack to process 435 million queries per month.

Terrill Dickey
December 6, 2024 04:17

Perplexity AI leverages NVIDIA’s inference stack, including H100 Tensor Core GPUs and Triton Inference Server, to handle over 435 million search queries per month while optimizing performance and reducing costs.

Perplexity AI, a leading AI-powered search engine, successfully manages over 435 million searches every month thanks to NVIDIA’s advanced inference stack. According to NVIDIA’s official blog, the platform integrates NVIDIA H100 Tensor Core GPUs, Triton Inference Server, and TensorRT-LLM to efficiently deploy large language models (LLMs).

Serving multiple AI models

To meet diverse user needs, Perplexity AI operates more than 20 AI models simultaneously, including variants of the open-source Llama 3.1 family. Each user request is matched to the best-fitting model by smaller classification models that determine user intent. These models are distributed across GPU pods, each managed by an NVIDIA Triton Inference Server, ensuring efficiency under strict service-level agreements (SLAs).
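
The article does not include Perplexity’s routing code, so the following Python sketch is only a hypothetical illustration of the pattern described above: a lightweight intent classifier chooses which hosted model should serve a request. The model names, intent labels, and the placeholder classifier are assumptions, not details from the source.

```python
# Hypothetical sketch of intent-based model routing (not Perplexity's actual code).
from dataclasses import dataclass

# Assumed mapping from intent label to a deployed model name; illustrative only.
INTENT_TO_MODEL = {
    "quick_answer": "llama-3.1-8b-instruct",
    "deep_research": "llama-3.1-70b-instruct",
    "complex_reasoning": "llama-3.1-405b-instruct",
}

@dataclass
class Request:
    text: str

def classify_intent(request: Request) -> str:
    """Stand-in for the small classification model described in the article."""
    # A real system would call a lightweight classifier here; this length
    # heuristic only keeps the example self-contained and runnable.
    return "deep_research" if len(request.text) > 200 else "quick_answer"

def route(request: Request) -> str:
    """Return the name of the model that should serve this request."""
    return INTENT_TO_MODEL.get(classify_intent(request), "llama-3.1-8b-instruct")

print(route(Request("What is the capital of France?")))  # -> llama-3.1-8b-instruct
```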

Pods are hosted within a Kubernetes cluster with an internal frontend scheduler that directs traffic based on load and usage. This ensures consistent SLA compliance and optimizes performance and resource utilization.
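
As a rough illustration of what such a frontend scheduler might do (the article does not describe its algorithm), the sketch below routes each request to the pod currently holding the fewest in-flight requests for the requested model; the pod names and the least-loaded policy are assumptions.

```python
# Minimal load-aware scheduler sketch; assumed behavior, not Perplexity's implementation.
from collections import defaultdict

class FrontendScheduler:
    def __init__(self, pods_by_model):
        # e.g. {"llama-3.1-70b-instruct": ["pod-a", "pod-b"]}
        self.pods_by_model = pods_by_model
        self.in_flight = defaultdict(int)  # pod name -> active request count

    def acquire(self, model: str) -> str:
        """Pick the least-loaded pod serving `model` and count the request against it."""
        pod = min(self.pods_by_model[model], key=lambda p: self.in_flight[p])
        self.in_flight[pod] += 1
        return pod

    def release(self, pod: str) -> None:
        """Mark a request as finished so the pod's load reflects reality."""
        self.in_flight[pod] = max(0, self.in_flight[pod] - 1)

scheduler = FrontendScheduler({"llama-3.1-70b-instruct": ["pod-a", "pod-b"]})
pod = scheduler.acquire("llama-3.1-70b-instruct")  # -> "pod-a" (both pods idle)
scheduler.release(pod)
```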

Performance and cost optimization

Perplexity AI uses a comprehensive A/B testing strategy to define SLAs for a variety of use cases. The goal is to maximize GPU utilization and minimize the cost of inference serving while still meeting each target SLA. Smaller models focus on minimizing latency, while larger user-facing models such as Llama 3.1 8B, 70B, and 405B undergo detailed performance analysis to balance cost and user experience.
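
To make the trade-off concrete, here is a small, self-contained sketch of the kind of check an A/B test over serving configurations might apply: measure a latency percentile against the SLA target and compute a cost-per-query figure. All numbers, the p95 target, and the single-GPU cost model are illustrative assumptions, not figures from the article.

```python
# Hypothetical SLA / cost evaluation for one candidate serving configuration.
def percentile(values, pct):
    """Nearest-rank percentile over a small sample of latencies (milliseconds)."""
    ordered = sorted(values)
    return ordered[round(pct / 100 * (len(ordered) - 1))]

def evaluate_config(latencies_ms, usd_per_gpu_hour, queries_per_hour, p95_target_ms):
    p95 = percentile(latencies_ms, 95)
    # Assumes one GPU serving `queries_per_hour`; real accounting would be per pod.
    cost_per_1k_queries = usd_per_gpu_hour / queries_per_hour * 1000
    return {"p95_ms": p95, "meets_sla": p95 <= p95_target_ms,
            "usd_per_1k_queries": round(cost_per_1k_queries, 3)}

print(evaluate_config([120, 135, 150, 180, 240],
                      usd_per_gpu_hour=8.0, queries_per_hour=5000, p95_target_ms=200))
# -> {'p95_ms': 240, 'meets_sla': False, 'usd_per_1k_queries': 1.6}
```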

Performance is further improved by parallelizing model deployment across multiple GPUs and increasing tensor parallelism to lower serving costs for latency-sensitive requests. By hosting models on cloud-based NVIDIA GPUs in this way, Perplexity saves approximately $1 million per year compared with relying on third-party LLM API services.
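
The intuition that a higher tensor-parallel degree can cut costs for latency-bound traffic can be shown with back-of-the-envelope arithmetic. Every number below (throughputs, GPU price) is an assumption chosen only to illustrate the shape of the trade-off; none come from the article.

```python
# Illustrative only: when a latency SLA caps batch size, spreading a model across
# more GPUs (higher tensor parallelism, TP) can raise per-replica throughput enough
# that cost per million generated tokens falls despite using more GPUs per replica.
def usd_per_million_tokens(tp_degree, tokens_per_sec, usd_per_gpu_hour=2.0):
    gpu_cost_per_sec = tp_degree * usd_per_gpu_hour / 3600
    return gpu_cost_per_sec / tokens_per_sec * 1_000_000

# Assumed tokens/sec a replica sustains while meeting a tight latency target.
assumed_throughput = {1: 300, 2: 900, 4: 2200}

for tp, tput in assumed_throughput.items():
    print(f"TP={tp}: ${usd_per_million_tokens(tp, tput):.2f} per 1M tokens")
# TP=1: $1.85, TP=2: $1.23, TP=4: $1.01 (under these assumed numbers)
```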

Innovative technology for improved throughput

Perplexity AI is working with NVIDIA to implement disaggregated serving, a method that assigns different inference stages to different GPUs, significantly increasing throughput while still complying with SLAs. This flexibility allows Perplexity to leverage a variety of NVIDIA GPU products to optimize performance and cost-effectiveness.
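
The sketch below illustrates the idea behind stage-separated serving as described above: the compute-heavy prefill stage (processing the prompt) and the latency-bound decode stage (generating tokens) run on separate GPU pools, so each pool can use hardware and batch sizes suited to its stage. The pool names and hash-based placement are placeholders, not details from the article.

```python
# Conceptual sketch of disaggregated (stage-separated) serving; illustrative only.
from dataclasses import dataclass

@dataclass
class KVCacheHandle:
    request_id: str
    prefill_gpu: str  # where the prompt's KV cache was produced

def prefill(request_id: str, prompt: str, prefill_pool: list) -> KVCacheHandle:
    gpu = prefill_pool[hash(request_id) % len(prefill_pool)]
    # A real system would run the full prompt through the model on `gpu` here
    # and keep (or stream out) the resulting KV cache.
    return KVCacheHandle(request_id, gpu)

def decode(handle: KVCacheHandle, decode_pool: list) -> str:
    gpu = decode_pool[hash(handle.request_id) % len(decode_pool)]
    # The decode GPU pulls the KV cache and generates output tokens one at a time.
    return f"{handle.request_id}: prefill on {handle.prefill_gpu}, decode on {gpu}"

handle = prefill("r1", "Explain NVLink.", ["prefill-gpu-0", "prefill-gpu-1"])
print(decode(handle, ["decode-gpu-0", "decode-gpu-1"]))
```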

Further improvements are expected with the upcoming NVIDIA Blackwell platform, which promises significant performance gains through technological innovations including the second-generation Transformer Engine and advanced NVLink features.

Perplexity’s strategic use of the NVIDIA inference stack highlights the potential for AI-based platforms to efficiently manage massive query volumes and deliver high-quality user experiences while remaining cost-effective.

Image source: Shutterstock

