Meta is developing a record-breaking supercomputer to power the metaverse
Join gaming leaders, alongside GamesBeat and Facebook Gaming, for their 2nd Annual GamesBeat & Facebook Gaming Summit | GamesBeat: Into the Metaverse 2 coming January 25-27, 2022. Learn more about the event.
After Meta (formerly Facebook) announced in October that it plans to stake its claim to the metaverse, the company today announced that it has developed the AI Research SuperCluster (RSC), which it claims is among the fastest AI supercomputers currently running. Once fully built out, Meta says the RSC will be the fastest AI supercomputer in the world — a milestone the company aims to reach by the middle of this year.
CEO Mark Zuckerberg noted that the experiences the company is building for the metaverse require tremendous computing power — up to trillions of operations per second. The RSC enables new AI models to learn from trillions of examples, understand hundreds of languages and more.
Data storage company Pure Storage and chipmaker Nvidia supplied key components of the supercluster Meta has built. Notably, Nvidia has been a major player supporting the metaverse, with its Omniverse product billed as a “metaverse for engineers.”
When fully deployed, Meta’s RSC will be the largest customer installation of Nvidia DGX A100 systems, Nvidia said in its press release today.
Rob Lee, CTO at Pure Storage, told VentureBeat via email that the RSC matters to companies beyond Meta because the technologies powering the metaverse (such as AI and AR/VR) are widely applicable and sought after across industries around the globe.
According to Lee, technical decision-makers are always looking to learn from pioneering practitioners, and the RSC offers great validation of the core components that power the world’s largest AI supercomputer.
“Meta’s world-class team saw the value in pairing the performance, density, and simplicity of Pure Storage products with Nvidia GPUs built for this groundbreaking work that pushes the boundaries of performance and scale,” said Lee. He added that enterprises of all sizes can benefit from Meta’s work, expertise and lessons in advancing the way they pursue their data, analytics and AI strategies.
Scale is going to be a big deal
In a blog post published today, Meta argues that AI supercomputing is needed at large scale. According to Meta, realizing the benefits of self-supervised learning and transformer-based models requires training across diverse domains — whether vision, speech, language, or critical applications such as identifying harmful content.
AI at the scale of Meta requires massively powerful computing solutions capable of instantly analyzing ever-increasing amounts of data. Meta’s RSC is a breakthrough in supercomputing that will lead to new technologies and customer experiences powered by AI, Lee said.
“Scale is important here in several ways,” Lee continued. He noted that, first, Meta processes a huge amount of information on a continuous basis, so a certain scale of data-processing performance and capacity is required.
“Secondly, AI projects rely on large amounts of data – with more diverse and complete datasets yielding better results. Third, all this infrastructure needs to be managed at the end of the day, which is why space and energy efficiency and simplicity of management at scale are also critical. Each of these elements is equally important, whether it’s a more traditional business project or working at Meta’s scale,” Lee added.
Addressing the security and privacy issues of supercomputing
In recent years, Meta has faced repeated scrutiny over its privacy and data policies, with the Federal Trade Commission (FTC) announcing in 2018 that it was investigating substantial concerns about Facebook’s privacy practices. Meta aims to address security and privacy issues from the get-go, stating that it secures data in the RSC by designing the system from the ground up with privacy and security in mind.
Meta claims this will allow its researchers to securely train models using encrypted, user-generated data that is only decrypted right before training.
“RSC, for example, is isolated from the wider internet, with no direct inbound or outbound connections, and traffic can flow only from Meta’s production data centers. To meet our privacy and security requirements, the entire data path from our storage systems to the GPUs is end-to-end encrypted, and we have the necessary tools and processes in place to verify that these requirements are met at all times,” the company wrote in its blog post.
Meta explains that data must go through a privacy review process to confirm it has been properly anonymized before it is imported into the RSC. The company adds that the data is encrypted before it can be used to train AI models, and that decryption keys are regularly deleted to ensure old data is no longer accessible.
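The pipeline Meta describes — anonymize, encrypt at rest, decrypt only at the moment a training job reads the data, and delete keys to retire old data — can be sketched in a few lines. The following is a toy illustration using only Python’s standard library (it is NOT production cryptography, and none of these function names reflect Meta’s actual tooling); it shows the effect of the scheme: once the key is deleted, stored records are unrecoverable.

```python
# Toy sketch of the encrypt-at-rest / decrypt-just-before-training flow.
# Uses a SHA-256-based keystream for illustration only -- real systems
# would use a vetted authenticated cipher such as AES-GCM.
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from (key, nonce)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_record(record: bytes, key: bytes) -> bytes:
    """Encrypt an anonymized record before it enters the cluster's storage."""
    nonce = secrets.token_bytes(16)
    ks = _keystream(key, nonce, len(record))
    return nonce + bytes(a ^ b for a, b in zip(record, ks))

def decrypt_record(blob: bytes, key: bytes) -> bytes:
    """Decrypt a stored record only when a training job consumes it."""
    nonce, ciphertext = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

key = secrets.token_bytes(32)
record = b"anonymized training example"
stored = encrypt_record(record, key)          # ciphertext at rest
assert stored != record
assert decrypt_record(stored, key) == record  # decrypted just before training
# Deleting `key` would leave `stored` permanently unreadable -- the
# mechanism Meta describes for making aged-out data inaccessible.
```

The key-deletion step is what does the retiring: no separate data-wipe pass is needed, since ciphertext without its key is useless.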
To build the supercomputer, Nvidia provided the compute layer, with Nvidia DGX A100 systems serving as the compute nodes. The GPUs communicate over a two-level Nvidia Quantum 200Gb/s InfiniBand Clos fabric. Lee noted that Penguin Computing’s hardware and software contributions are “the glue” uniting the Nvidia and Pure Storage components. Together, these three partners were crucial in delivering a massive supercomputing solution to Meta.