At Nvidia's GTC keynote today, CEO Jen-Hsun Huang announced that the company will soon be rolling out a collection of large language model (LLM) frameworks, known as Nvidia AI Foundations.
Jen-Hsun is so confident about the Foundations package that he's calling it a "TSMC for custom, large language models." Definitely not a comparison I was expecting to hear today, but I guess it fits alongside Huang's wistful comments about AI having had its "iPhone moment."
The Foundations package includes the Picasso and BioNeMo services that will serve the media and medical industries respectively, as well as NeMo: a framework aimed at businesses looking to integrate large language models into their workflows.
NeMo is "for building custom language, text-to-text generative models" that can inform what the company calls "intelligent applications."
With a little something called P-Tuning, companies will be able to train their own custom language models to create more apt branded content, compose emails with personalised writing styles, and summarise financial documents so us humans don't have to waste away staring at numbers all day—that sounds like a nightmare to me.
Hopefully it'll take some weight off the everyman, and stop your boss shouting "BUNG IT IN THE CHATBOT THING," because that's supposedly faster.
NeMo's language models come in 8-billion, 43-billion, and 530-billion-parameter versions, meaning there will be distinct tiers to choose from.
Read more on pcgamer.com