Elon Musk, the CEO of SpaceX and Tesla, recently told investors that his artificial intelligence (AI) venture, xAI, is building a supercomputer to power the next version of the company’s AI chatbot, Grok. The Information reported on May 25 that the system will require 100,000 semiconductors to train and run the chatbot.
Musk said he hopes to have the supercomputer operational by the fall of 2025 and suggested that xAI and Oracle might collaborate on its development. The firm has reportedly discussed a $10 billion deal to rent cloud servers from Oracle, where xAI is already the biggest H100 customer, using more than 15,000 of NVIDIA’s AI chips. Tesla, too, relies on NVIDIA-powered supercomputers in developing its electric cars.
Dubbed by Musk a “gigafactory of compute,” the xAI supercomputer aims to link 100,000 chips into a single, enormous machine. According to The Information, the interconnected clusters of NVIDIA’s flagship H100 graphics processing units (GPUs) would, once complete, be at least four times larger than the biggest GPU clusters in use today.
Supercomputers run at speeds far beyond those of ordinary computers, and over the years they have been crucial in pushing the frontiers of knowledge. They can be used to build advanced AI models that converse in several languages, learn from billions of examples, seamlessly analyze text, photos, and videos, power augmented reality tools, and more. Supercomputers also serve a wide range of other tasks, such as climate research, weather forecasting, space exploration, and genome sequencing.
In July 2023, Musk founded xAI to challenge Microsoft-backed OpenAI and Alphabet’s Google. On May 26, xAI raised $6 billion in Series B funding at a post-money valuation of $24 billion. The company said in a blog post that investors including Andreessen Horowitz and Sequoia Capital participated in the round.
According to xAI, the funds will be used to build cutting-edge infrastructure, bring the company’s first products to market, and accelerate the research and development of future technologies.
NVIDIA’s H100 GPUs lead the market for AI data center chips and are in heavy demand. Tech giants including Microsoft and Meta are already in line for the company’s new Blackwell chips.
In September 2023, Yotta, an Indian data center firm, placed an order for 16,000 chips, including H100s and the recently unveiled Blackwell AI-training GPU. The first shipment of 4,000 chips, including NVIDIA H100 Tensor Core GPUs, arrived in March. In addition to offering managed cloud services, the Mumbai-based company will enable businesses to train large language models (LLMs) and build applications similar to OpenAI’s ChatGPT on Yotta’s cloud.
OpenAI’s GPT-4 was trained on 12,000 of NVIDIA’s older A100 chips, and the H100 is roughly four times as powerful. By that measure, xAI’s planned 100,000-GPU cluster would offer roughly 30 times the compute used to train GPT-4.
India, however, is trailing other countries in the amount of compute available domestically, according to Akshara Bassi, senior research analyst at Counterpoint Technology Market Research.