On Monday, the CEOs of Nvidia and Meta discussed the latest advancements in generative AI and what’s possible with the new technology at SIGGRAPH 2024, an annual industry conference for computer graphics enthusiasts and professionals.
“It’s exciting. There are so many new things to build,” Zuckerberg said. “The pace of progress in fundamental AI research is accelerating. It’s a wild time.”
Zuckerberg said that today's AI model technology alone will drive product innovation for the next five years, and he predicted that every business will one day use conversational AI agents to interact with its customers.
Meta executives also envision a future version of its Llama AI model, version 4.0, acting as an agent: given a command, it would research and calculate, then return a definitive answer weeks or months later. The company released Llama 3.1 last week.
Huang praised Meta for creating the Llama open-source AI model family, which he said is helping more developers and businesses gain access to AI model technology.
Meta is one of Nvidia’s major customers. In January, Zuckerberg pledged that his company would have 350,000 Nvidia H100 graphics processing units by the end of the year, for a total of about 600,000 H100 computing equivalent GPUs. “Our long-term vision is to build general-purpose intelligence, open source it responsibly, and make it broadly available for everyone to benefit from,” Zuckerberg wrote in a post at the time. “We are building a massive computing infrastructure to support the future roadmap for artificial intelligence.”
Nvidia currently dominates the market for chips used in AI applications. Startups and large enterprises alike favor the company's products because of its robust CUDA programming platform, whose AI-related tools accelerate the development of AI projects.
Email Tae Kim at tae.kim@barrons.com