According to an announcement from Together AI, the company has upgraded its GPU clusters with NVIDIA H200 Tensor Core GPUs. The upgrade is accompanied by the Together Kernel Collection (TKC), a custom kernel stack designed to optimize common AI operations and deliver substantial performance gains for both training and inference workloads.
Improved performance with TKC
The Together Kernel Collection (TKC) is designed to accelerate common AI workloads. Compared with standard PyTorch implementations, TKC delivers up to a 24% speedup for commonly used training operators and up to a 75% speedup for FP8 inference tasks. These gains translate into fewer GPU hours, lowering costs and shortening time to market.
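Speedup claims like these are typically established by timing the same workload under both implementations. The following is a minimal measurement sketch using PyTorch’s built-in benchmarking utility; a plain bfloat16 matmul stands in for the workload, since the announcement does not show TKC’s API:

    import torch
    from torch.utils import benchmark

    # Time one candidate implementation; repeat with the optimized kernel
    # and compare the reported timings to compute the speedup.
    x = torch.randn(8192, 8192, device="cuda", dtype=torch.bfloat16)
    w = torch.randn(8192, 8192, device="cuda", dtype=torch.bfloat16)

    timer = benchmark.Timer(stmt="x @ w", globals={"x": x, "w": w})
    print(timer.timeit(100))  # prints per-iteration timing statistics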
Training and inference optimization
TKC’s optimized kernels, such as the multilayer perceptron (MLP) with SwiGLU activation, target operations central to training large language models (LLMs) like Llama-3. These kernels are reported to be 22-24% faster than standard PyTorch implementations and up to 10% faster than the best existing baselines. Inference workloads benefit from an FP8 kernel stack that Together AI has optimized to deliver more than a 75% speedup over the default PyTorch implementation.
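For context, the SwiGLU MLP that such kernels accelerate computes down_proj(SiLU(gate_proj(x)) * up_proj(x)). Below is a minimal plain-PyTorch reference implementation of that block (not TKC itself, which fuses these operations into optimized kernels); the example dimensions match Llama-3-8B:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SwiGLUMLP(nn.Module):
        # Llama-style feed-forward block: down_proj(silu(gate_proj(x)) * up_proj(x)).
        def __init__(self, hidden_size: int, intermediate_size: int):
            super().__init__()
            self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
            self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
            self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

    mlp = SwiGLUMLP(hidden_size=4096, intermediate_size=14336)  # Llama-3-8B sizes
    y = mlp(torch.randn(1, 16, 4096))  # (batch, sequence, hidden)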
Native PyTorch compatibility
TKC is fully integrated with PyTorch, allowing AI developers to adopt its optimizations within their existing codebases. According to Together AI, this makes adoption as simple as changing an import statement.
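The announcement does not publish TKC’s actual import path, so the following is a purely hypothetical illustration of what an import-level swap could look like; the together_kernels package name and module layout are assumptions, not confirmed TKC APIs:

    # Hypothetical sketch only: "together_kernels" and "my_model.layers" are
    # assumed names for illustration; the real TKC import path may differ.
    try:
        from together_kernels.nn import SwiGLUMLP  # TKC-accelerated module (assumed)
    except ImportError:
        from my_model.layers import SwiGLUMLP      # plain PyTorch fallback (assumed)

    mlp = SwiGLUMLP(4096, 14336)  # the rest of the model code is unchanged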
Production level testing
TKC undergoes rigorous testing to meet production-grade standards, providing high performance and stability for real-world applications. All Together GPU Clusters (H200 or H100) come TKC-ready out of the box.
NVIDIA H200: Faster performance and more memory
The NVIDIA H200 Tensor Core GPU, based on the Hopper architecture, is designed for high-performance AI and HPC workloads. According to NVIDIA, the H200 delivers 40% faster inference on Llama 2 13B and 90% faster inference on Llama 2 70B than its predecessor, the H100. The H200 features 141 GB of HBM3e memory and 4.8 TB/s of memory bandwidth, nearly double the capacity of the H100 and roughly 1.4x its bandwidth.
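Those ratios are easy to sanity-check against the H100 SXM’s published specifications (80 GB of HBM3 and 3.35 TB/s of bandwidth, figures assumed here since the article does not state them):

    # Assumed H100 SXM reference specs (not stated in the article): 80 GB, 3.35 TB/s.
    h100_mem_gb, h100_bw_tb_s = 80, 3.35
    h200_mem_gb, h200_bw_tb_s = 141, 4.8

    print(f"memory:    {h200_mem_gb / h100_mem_gb:.2f}x")    # 1.76x -> "nearly double"
    print(f"bandwidth: {h200_bw_tb_s / h100_bw_tb_s:.2f}x")  # 1.43x -> "roughly 1.4x"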
High-performance interconnectivity
Together GPU Clusters leverage the SXM form factor to deliver high bandwidth and fast data transfer, and support ultra-fast GPU-to-GPU communication via NVIDIA’s NVLink and NVSwitch technologies. Combined with NVIDIA Quantum-2 InfiniBand networking at 3,200 Gb/s, this setup is well suited to large-scale AI training and HPC workloads.
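In practice, training frameworks reach this interconnect through NCCL, which automatically routes traffic over NVLink/NVSwitch within a node and InfiniBand across nodes. A minimal sketch of multi-GPU communication in PyTorch, assuming the processes are launched with torchrun:

    import os
    import torch
    import torch.distributed as dist

    # NCCL picks the fastest available transport: NVLink/NVSwitch inside a
    # node, InfiniBand (with GPUDirect RDMA where available) between nodes.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))  # set by torchrun

    # All-reduce a tensor across every GPU in the job.
    t = torch.ones(1, device="cuda")
    dist.all_reduce(t)  # default reduction op is SUM
    print(f"rank {dist.get_rank()}: world sum = {t.item()}")
    dist.destroy_process_group()

Launch with, for example, torchrun --nproc_per_node=8 script.py on each node.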
Cost-effective infrastructure
Together AI reports that its infrastructure can be up to 75% more cost-effective than hyperscale cloud providers such as AWS. The company also offers flexible commitment terms ranging from one month to five years, matching resources to each stage of the AI development lifecycle.
Reliability and support
Together AI’s GPU clusters come with a 99.9% uptime SLA and are backed by rigorous acceptance testing. The company’s White Glove Service provides end-to-end support, from cluster setup to ongoing maintenance, to keep customers’ AI models running at peak performance.
Flexible deployment options
Together AI offers multiple deployment options: Slurm for high-performance workload management, Kubernetes for containerized AI workloads, and bare-metal clusters running Ubuntu for direct access and maximum flexibility. These options cover a range of AI project needs, from large-scale training to production inference.
Together AI continues to support the entire AI lifecycle with high-performance NVIDIA H200 GPU clusters and the Together Kernel Collection. The platform is designed to optimize performance, reduce costs, and ensure stability, which the company positions as key advantages for teams accelerating AI development.