According to AMD.com, AMD has announced new features in its Radeon PRO GPUs and ROCm software that will allow small businesses to leverage large language models (LLMs), including Meta’s Llama 2 and Llama 3, and the newly released Llama 3.1.
New Capabilities for Small Businesses
AMD’s Radeon PRO W7900 dual-slot GPU, with dedicated AI accelerators and significant onboard memory, delivers market-leading performance per dollar, enabling small businesses to run custom AI tools locally. These include applications such as chatbots, technical documentation search, and personalized sales pitches. The specialized Code Llama model allows programmers to generate and optimize code for new digital products.
AMD’s latest release of the open software stack, ROCm 6.1.3, supports running AI tools on multiple Radeon PRO GPUs. This enhancement allows small and medium-sized enterprises (SMEs) to handle larger and more complex LLMs, supporting more users simultaneously.
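To illustrate what multi-GPU inference can look like in practice, here is a minimal sketch using llama-cpp-python built with ROCm/HIP support. The model path, split ratios, and prompt are illustrative assumptions, not AMD’s reference configuration.

```python
# Minimal sketch: splitting one LLM across two Radeon PRO GPUs with
# llama-cpp-python compiled against ROCm/HIP. The model path below is
# hypothetical; any GGUF-format Llama model would work.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-13b-chat.Q8_0.gguf",  # hypothetical local path
    n_gpu_layers=-1,          # offload all layers to the GPUs
    tensor_split=[0.5, 0.5],  # divide model tensors evenly across two GPUs
    n_ctx=4096,               # context window size
)

out = llm("Summarize our refund policy in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```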
Extending Use Cases for LLMs
AI technologies are already prevalent in data analytics, computer vision, and generative design, but the potential use cases for AI extend far beyond these fields. Specialized LLMs like Meta’s Code Llama enable app developers and web designers to generate working code from simple text prompts or debug existing code bases. The flagship Llama model has broad applications in customer service, information retrieval, and product personalization.
Small businesses can use retrieval-augmented generation (RAG) to ground AI models in internal data, such as product documentation or customer records. This customization reduces the need for manual editing and makes AI-generated output more accurate.
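As a sketch of the idea, the example below retrieves the most relevant internal snippet with TF-IDF (via scikit-learn) and folds it into a prompt. The documents, question, and prompt format are invented for illustration; a production setup would typically use semantic embeddings rather than TF-IDF.

```python
# Minimal retrieval-augmented generation (RAG) sketch: rank internal
# documents against a question, then ground the prompt in the best match.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The workstation ships with a three-year on-site warranty.",
    "Support tickets are answered within one business day.",
    "Firmware updates are published on the first Monday of each month.",
]
question = "How long is the warranty?"

# Embed documents and query into a shared TF-IDF space, then rank by similarity.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)
query_vector = vectorizer.transform([question])
best = cosine_similarity(query_vector, doc_vectors).argmax()

# The prompt grounds the model in the retrieved internal document.
prompt = (
    f"Answer using only this context:\n{docs[best]}\n\n"
    f"Question: {question}\nAnswer:"
)
print(prompt)  # this prompt would be sent to the locally hosted LLM
```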
Benefits of Local Hosting
Even though cloud-based AI services are available, local hosting of LLMs offers significant advantages.
- Data security: Running AI models locally eliminates the need to upload sensitive data to the cloud, addressing key concerns around data sharing.
- Lower latency: Local hosting reduces latency, providing immediate feedback and real-time support for applications like chatbots (see the sketch after this list).
- Control over your work: Local deployment allows technical staff to troubleshoot and update AI tools without relying on remote service providers.
- Sandbox environment: Local workstations can serve as sandbox environments for prototyping and testing new AI tools before full-scale deployment.
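The latency benefit, for instance, is straightforward to measure against a local endpoint. The sketch below assumes an OpenAI-compatible server such as the one LM Studio (covered in the next section) exposes at http://localhost:1234/v1 by default; the model name is a placeholder for whatever model is loaded locally.

```python
# Rough round-trip latency check against a locally hosted,
# OpenAI-compatible endpoint (LM Studio's default local server address).
import time
import requests

payload = {
    "model": "local-model",  # placeholder; use the identifier of the loaded model
    "messages": [{"role": "user", "content": "Reply with the word 'pong'."}],
    "max_tokens": 5,
}

start = time.perf_counter()
r = requests.post(
    "http://localhost:1234/v1/chat/completions", json=payload, timeout=60
)
elapsed = time.perf_counter() - start

print(f"Round trip: {elapsed:.2f}s")
print(r.json()["choices"][0]["message"]["content"])
```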
AMD’s AI Performance
For small businesses, hosting custom AI tools doesn’t have to be complicated or expensive. Applications like LM Studio make it easy to run LLMs on standard Windows laptops and desktop systems. LM Studio is optimized to run on AMD GPUs via the HIP runtime API, leveraging the dedicated AI accelerators on current AMD graphics cards to boost performance.
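As a concrete example, LM Studio’s local server speaks the OpenAI-compatible API, so the standard openai Python client can talk to a locally loaded model. The model identifier and prompts below are placeholders.

```python
# Minimal sketch: querying a model served by LM Studio's local,
# OpenAI-compatible server (default address http://localhost:1234/v1).
from openai import OpenAI

# The API key is ignored by the local server but required by the client.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # placeholder for the loaded model's identifier
    messages=[
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "Draft a one-line greeting for a new customer."},
    ],
)
print(response.choices[0].message.content)
```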
Professional GPUs such as the 32GB Radeon PRO W7800 and 48GB Radeon PRO W7900 provide enough memory to run larger models, such as the 30-billion-parameter Llama-2-30B-Q8. ROCm 6.1.3 introduces support for multiple Radeon PRO GPUs, allowing enterprises to deploy multi-GPU systems that serve requests from many users simultaneously.
Performance testing using Llama 2 shows that the Radeon PRO W7900 delivers up to 38% better price-performance than NVIDIA’s RTX 6000 Ada Generation, making it a cost-effective solution for small and medium-sized businesses.
As AMD’s hardware and software capabilities continue to evolve, even small businesses can deploy and customize LLMs to improve a variety of business and coding tasks, without having to upload sensitive data to the cloud.