ADOPTION NEWS

NVIDIA unveils DoRA, a superior fine-tuning method for AI models

By Crypto Flexs · June 30, 2024 · 3 Mins Read
NVIDIA announced that it has developed a new fine-tuning method called Weight-Decomposed Low-Rank Adaptation (DoRA) that provides a high-performance alternative to the widely used Low-Rank Adaptation (LoRA). According to the NVIDIA Technology Blog, DoRA improves both the learning ability and stability of LoRA without introducing additional inference overhead.

Advantages of DoRA

DoRA has demonstrated significant performance gains across a variety of large language models (LLMs) and vision language models (VLMs). For example, on common sense reasoning tasks, DoRA outperformed LoRA by +3.7 points on Llama 7B and +4.4 points on Llama 3 8B. DoRA also showed better results on multi-turn benchmarks, image/video-text understanding, and visual instruction tuning tasks.

This innovative method was accepted as an oral paper at ICML 2024, demonstrating its reliability and potential impact in the field of machine learning.

DoRA’s Mechanism

DoRA works by decomposing the pre-trained weights into magnitude and direction components and fine-tuning both. It leverages LoRA for the directional updates, keeping fine-tuning efficient. After training, DoRA merges the fine-tuned components back into the pre-trained weights, so no additional latency is introduced at inference.
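As a rough illustrative sketch (not NVIDIA's implementation), the merged weight can be written as W' = m · (W0 + BA) / ||W0 + BA||, where m is a learned magnitude vector and BA is the low-rank LoRA update. The helper below assumes a per-row decomposition; the paper's exact normalization axis may differ:

```python
import numpy as np

def dora_merged_weight(w0, magnitude, lora_b, lora_a):
    """Sketch of DoRA's weight reconstruction (hypothetical helper, not
    NVIDIA's code): the direction comes from the LoRA-updated weight,
    the scale from a separately learned magnitude vector."""
    directed = w0 + lora_b @ lora_a                        # W0 + BA: LoRA directional update
    norms = np.linalg.norm(directed, axis=1, keepdims=True)
    return magnitude * directed / norms                    # rescale unit directions by magnitude

# At initialization the LoRA B factor is zero and the magnitude is taken
# from W0, so the merged weight reduces to W0 exactly:
rng = np.random.default_rng(0)
w0 = rng.standard_normal((8, 16))
m = np.linalg.norm(w0, axis=1, keepdims=True)              # magnitude initialized from W0
b = np.zeros((8, 4))                                       # LoRA B starts at zero
a = rng.standard_normal((4, 16)) * 0.01
merged = dora_merged_weight(w0, m, b, a)
assert np.allclose(merged, w0)
```

Because the merged matrix is an ordinary dense weight, it can replace the pretrained weight after training, which is what makes the no-extra-inference-latency claim possible.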

Visualizing the magnitude and directional differences between DoRA and the pretrained weights shows that DoRA makes substantial directional adjustments with minimal change in magnitude, a pattern very similar to that of full fine-tuning (FT).
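That analysis can be sketched numerically. Assuming the same per-row decomposition, the snippet below computes an average magnitude change and an average directional change (as 1 − cosine similarity) between a pretrained and a fine-tuned weight matrix; this is an illustrative metric, not the blog's exact analysis code:

```python
import numpy as np

def decomposition_deltas(w_pre, w_tuned):
    """Average magnitude change and directional change (1 - cosine
    similarity) between two weight matrices, row by row. Illustrative
    only; NVIDIA's analysis may use a different axis or aggregation."""
    m_pre = np.linalg.norm(w_pre, axis=1)
    m_tuned = np.linalg.norm(w_tuned, axis=1)
    delta_m = float(np.mean(np.abs(m_tuned - m_pre)))      # magnitude shift
    cos = np.sum((w_pre / m_pre[:, None]) * (w_tuned / m_tuned[:, None]), axis=1)
    delta_d = float(np.mean(1.0 - cos))                    # directional shift
    return delta_m, delta_d

# Identical matrices show zero shift in both components:
w = np.ones((4, 4))
dm, dd = decomposition_deltas(w, w)
assert abs(dm) < 1e-12 and abs(dd) < 1e-12
```

Under this metric, a DoRA-like (and FT-like) pattern would appear as a small magnitude delta alongside a comparatively large directional delta.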

Performance across models

Across performance benchmarks, DoRA consistently outperforms LoRA. In large language models, it delivers notable gains in common sense reasoning and in conversation and instruction-following tasks. In vision language models, it shows strong results in image-to-text and video-to-text understanding and in visual instruction tuning.

Large language models

Comparative studies show that DoRA outperforms LoRA on both common sense reasoning benchmarks and multi-turn benchmarks, achieving higher average scores across a variety of datasets.

Vision language models

DoRA also excels in vision language models, outperforming LoRA in image-to-text understanding, video-to-text understanding, and visual instruction tuning, as evidenced by higher average scores across multiple benchmarks.

Compression-aware LLMs

DoRA can be integrated into the QLoRA framework to improve the accuracy of low-bit pretrained models. NVIDIA's joint effort with Answer.AI on the QDoRA project has shown that QDoRA outperforms FT and QLoRA on Llama 2 and Llama 3 models.

Text-to-Image Generation

DoRA’s applications extend to text-to-image personalization via DreamBooth, delivering significantly better results than LoRA on challenging datasets such as 3D icons and Lego sets.

Implications and future applications

DoRA is poised to become a default choice for fine-tuning AI models wherever LoRA and its variants are used. Its efficiency and effectiveness make it a valuable tool for adapting foundation models to a variety of applications, including NVIDIA Metropolis, NVIDIA NeMo, NVIDIA NIM, and NVIDIA TensorRT.

For more information, visit the NVIDIA Technology Blog.

Image source: Shutterstock


