
Llama-3 Fine-Tuning Achieves 90% of GPT-4 Performance at Lower Cost.

Louisa Crawford
July 14, 2024 02:46

According to together.ai, Llama-3 fine-tuning showed significant performance improvements, achieving 90% of the accuracy of GPT-4 at a much lower cost.

The success of Llama-3 is remarkable, showing that open-source models are closing the gap with their closed-source counterparts. By leveraging proprietary data, customers have been able to fine-tune small open-source (OSS) models like Llama-3 to achieve higher accuracy than top closed-source models.

Fine-Tuning Process

Together AI’s platform allows users to fine-tune Llama-3-8B on proprietary data to create custom models that outperform large-scale OSS alternatives like Llama-3-70B and rival leading closed-source models like GPT-4, all at a fraction of the cost. Together AI’s detailed guide shows how a fine-tuned Llama-3-8B model improves accuracy from 47% to 65%, outperforming Llama-3-70B’s 64% and approaching GPT-4’s 71% accuracy.

The fine-tuning process involves several steps, including transforming the dataset, uploading and validating the dataset, starting the fine-tuning job, and running the evaluation to compare the results. The initial step is to download the Math Instruct dataset from HuggingFace, clean it, and convert it into a JSONL file format suitable for the Together platform.
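A minimal sketch of this first step, assuming the Hugging Face datasets library; the dataset ID and field names below are illustrative assumptions, so adjust them to the actual Math Instruct release being used:

```python
import json
from datasets import load_dataset  # pip install datasets

# Hypothetical dataset ID -- substitute the Math Instruct variant you actually use.
DATASET_ID = "TIGER-Lab/MathInstruct"

raw = load_dataset(DATASET_ID, split="train")

with open("math_instruct.jsonl", "w") as f:
    for row in raw:
        # Field names ("instruction", "output") are assumptions; match the real schema.
        record = {
            "instruction": row["instruction"].strip(),
            "output": row["output"].strip(),
        }
        f.write(json.dumps(record) + "\n")
```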

Transform the Dataset

The transformation process involves loading the original JSON data, defining the Llama-3 prompt format, and converting the data into the correct format. This formatted dataset is then validated using Together’s SDK before being uploaded for fine-tuning.
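As an illustration, a formatting-and-validation pass might look like the sketch below. The Llama-3 special tokens are the model’s standard chat delimiters, but the single "text" field per JSONL line and the check_file helper are assumptions about Together’s expected training format and SDK; confirm both against the current documentation.

```python
import json
from together.utils import check_file  # pip install together

# Llama-3 instruct prompt template built from the model's special tokens.
LLAMA3_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    "{output}<|eot_id|>"
)

with open("math_instruct.jsonl") as src, open("math_instruct_formatted.jsonl", "w") as dst:
    for line in src:
        row = json.loads(line)
        text = LLAMA3_TEMPLATE.format(
            instruction=row["instruction"], output=row["output"]
        )
        # One {"text": ...} record per line, as assumed for Together fine-tuning data.
        dst.write(json.dumps({"text": text}) + "\n")

# Validate the formatted file locally before uploading.
report = check_file("math_instruct_formatted.jsonl")
print(report)
```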

Upload and Fine-Tune

Once the dataset is ready, it is uploaded to Together AI via the Python SDK. Then, a fine-tuning job is created using the Llama-3-8B base model, specifying the dataset, number of epochs, and other parameters. Users can monitor the fine-tuning job via the Together AI dashboard.
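A hedged sketch of the upload and job creation, assuming the together Python SDK’s files.upload and fine_tuning.create methods; the parameter names, model string, and hyperparameter values shown are illustrative rather than prescriptive:

```python
import os
from together import Together  # pip install together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])

# Upload the formatted JSONL training file.
train_file = client.files.upload(file="math_instruct_formatted.jsonl")

# Launch a fine-tuning job on the Llama-3-8B base model.
job = client.fine_tuning.create(
    training_file=train_file.id,
    model="meta-llama/Meta-Llama-3-8B",  # illustrative model identifier
    n_epochs=3,                          # illustrative hyperparameters
    learning_rate=1e-5,
    suffix="math-instruct",
)
print(job.id)  # use this ID to follow progress on the Together AI dashboard
```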

Evaluation and Results

After fine-tuning, the performance of the model is evaluated using 1000 math problems. The accuracy of the fine-tuned Llama-3-8B model is compared with the baseline Llama-3-8B, Llama-3-70B, and GPT-4. The fine-tuned model achieves an accuracy of 65.2%, which outperforms the baseline model’s 47.2% and Llama-3-70B’s 64.2%, and approaches the 71.4% accuracy of GPT-4.
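A simplified accuracy harness for such a comparison might look like the following; the answer-matching rule, field names, and model identifiers are placeholders, not the evaluation actually used in the guide:

```python
import os
from together import Together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])

def accuracy(model_name: str, problems: list[dict]) -> float:
    """Fraction of problems whose reference answer appears in the model's reply.

    `problems` is a list of {"question": ..., "answer": ...} dicts (e.g. the
    1000 held-out math problems). Substring matching is a crude stand-in for
    a proper answer parser.
    """
    correct = 0
    for problem in problems:
        resp = client.chat.completions.create(
            model=model_name,
            messages=[{"role": "user", "content": problem["question"]}],
            max_tokens=512,
        )
        reply = resp.choices[0].message.content
        if problem["answer"] in reply:
            correct += 1
    return correct / len(problems)

# Placeholder identifiers: the fine-tuned model name comes from your own job.
# print(accuracy("your-account/Meta-Llama-3-8B-math-instruct", problems))
# print(accuracy("meta-llama/Llama-3-70b-chat-hf", problems))
```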

According to the results, the fine-tuned Llama-3-8B model outperforms the baseline model by roughly 18 percentage points, surpasses the best OSS model, Llama-3-70B, and reaches over 90% of the accuracy of GPT-4. In addition, the fine-tuned model is faster, roughly 50x cheaper than GPT-4, and gives users full ownership of the model and its weights.

Conclusion

This fine-tuning approach demonstrates that small open-source models like Llama-3-8B can be customized for specific tasks with high accuracy, speed, and cost efficiency. Users can fine-tune the models on proprietary data and host them on Together AI or run them independently, retaining full control and ownership.

Trained on math problems, the Llama-3-8B model outperforms leading OSS models and approaches the performance of GPT-4 with a total fine-tuning cost of less than $100 on Together AI.

Image source: Shutterstock

