Machine Learning Times
Nvidia improves Meta’s Llama model with new training approach


Originally published in the-decoder.com, Oct 18, 2024.

Nvidia has introduced a new large language model that outperforms others on alignment benchmarks. The company achieved this through a training procedure that combines an evaluation (reward) model with a preference model.

The new model, called Llama-3.1-Nemotron-70B-Instruct, is based on Meta’s open-source Llama 3.1 model. Nvidia optimized it to provide helpful answers to user queries by combining different training methods.

However, the results only show that the answers align better with human preferences, not that the content is necessarily more accurate. In fact, the Nemotron variant performs slightly worse than the base model on the MMLU Pro benchmark, which tests factual knowledge.

Nvidia created two new datasets for training: HelpSteer2 and HelpSteer2-Preference. HelpSteer2 contains over 20,000 prompt-response pairs. Multiple annotators rated each response on a 1-5 scale for criteria like helpfulness, correctness, and coherence. HelpSteer2-Preference adds comparisons between two answers to the same prompt. Annotators indicated which answer they preferred and how strong their preference was.
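The two annotation schemas described above can be sketched roughly as follows. This is an illustrative Python sketch only; the field names and value ranges are assumptions based on the article's description, not Nvidia's actual dataset format.

```python
# Hypothetical sketch of the HelpSteer2 and HelpSteer2-Preference
# annotation schemas, as described in the article. Field names and
# the preference-strength range are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class HelpSteer2Example:
    """One prompt-response pair with per-attribute ratings (1-5 scale)."""
    prompt: str
    response: str
    # Averaged across multiple annotators, per the article.
    helpfulness: float
    correctness: float
    coherence: float


@dataclass
class HelpSteer2PreferenceExample:
    """Two responses to the same prompt, plus a graded preference."""
    prompt: str
    response_a: str
    response_b: str
    preferred: str   # "a" or "b"
    strength: int    # how strongly annotators preferred it (assumed 1-3)


# Example records showing how the two datasets relate:
ex = HelpSteer2Example(
    prompt="Explain gradient descent.",
    response="Gradient descent iteratively updates parameters...",
    helpfulness=4.5,
    correctness=4.0,
    coherence=5.0,
)

pref = HelpSteer2PreferenceExample(
    prompt="Explain gradient descent.",
    response_a="Gradient descent iteratively updates parameters...",
    response_b="It is an optimization thing.",
    preferred="a",
    strength=2,
)
```

In this reading, HelpSteer2 supplies absolute per-attribute scores (suited to a regression-style reward model), while HelpSteer2-Preference supplies pairwise comparisons with a strength signal (suited to a preference model), matching the combined training procedure the article mentions.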

