What you need to know:
- Elon Musk told investors that his artificial intelligence startup, xAI, plans to build a supercomputer to power the next version of its AI chatbot, Grok.
- According to The Information, Musk said in a May presentation to investors that the planned cluster of Nvidia's flagship H100 graphics processing units (GPUs), once complete, would be at least four times the size of today's largest GPU clusters.
According to The Information, U.S. entrepreneur Elon Musk told investors that his artificial intelligence startup, xAI, plans to build a supercomputer to power the next version of its AI chatbot, Grok. Musk said he wants the supercomputer running by fall 2025, and suggested that xAI may partner with Oracle to develop the massive system.
xAI could not be immediately reached for comment, and Oracle did not respond to a request for comment from Reuters. According to The Information, Musk said in a May presentation to investors that, once complete, the connected groups of chips—Nvidia's flagship H100 graphics processing units (GPUs)—would be at least four times the size of today's largest GPU clusters. Nvidia's H100 GPUs dominate the data center chip market for AI, but are often hard to obtain because of high demand.
Founded by Musk last year, xAI aims to compete with Microsoft-backed OpenAI and Alphabet's Google. Musk is also a co-founder of OpenAI.
Earlier this year, Musk said that training the Grok 2 model took about 20,000 Nvidia H100 GPUs, and that future versions such as the Grok 3 model will require 100,000 Nvidia H100 chips.