
A Tale of Two Technologies: Why Large Language Models are the Future and the Metaverse Isn't

The Beginning

In the digital landscape of recent years, two major technologies have vied for the spotlight: the Metaverse and Large Language Models (LLMs). Though the Metaverse, a virtual reality-based universe, initially garnered significant attention and expectations, it ultimately failed to revolutionize the digital world. Meanwhile, LLMs such as OpenAI’s ChatGPT and GPT-4, Google’s Bard, and Meta’s LLaMA have risen seemingly out of nowhere, poised to disrupt the future of computing. A closer examination of these contrasting trajectories reveals how the research-driven innovation behind LLMs can triumph over consumerism-driven hype that puts the cart of “monetizable experiences” before the horse of technology capable of sustaining them.

The Metaverse: A Dream That Never Materialized

The Metaverse, a concept that seemed to promise a new era of digital interaction, sold users the opportunity to engage in a wide variety of activities, from virtual real estate transactions to attending concerts and shopping for digital goods to put in their fancy new digital homes.

Through the hype cycle, the Metaverse gained so much notoriety that a trillion-dollar company built on the back of violating users’ privacy and acquiring their competition to “create” new products decided to rebrand itself and hard-pivot to “building the metaverse” as its “next big thing”. With such a prominent and successful industry leader leading the charge, how could the Web 3.0 landscape not blossom into a utopia of modern technology and a golden age of digital consumerism? Hell, they even bought the most promising VR company a few years back, so surely they’d be uniquely positioned to become the “platform of the metaverse”, right?

Unfortunately, we quickly discovered that the dreams being sold by Metaverse Evangelists were rooted a bit too much in sci-fi, their bullshit minted by Web 3.0 snake-oil salesmen trying to build the next Ponzi scheme.

Why did it end up like this? Why did the dreams of immersive virtual experiences first imagined by the likes of William Gibson and Neal Stephenson result in buggy, lackluster virtual storefronts that served only to link you out of the Metaverse and to a brand’s website?

Everyone’s Selling Nonexistent Products

The Metaverse was largely driven by consumerism, with many users hoping to profit from selling digital goods that cost little to nothing to create. This gold rush mentality created an environment rife with get-rich-quick schemes that ultimately undermined the integrity and appeal of the virtual world.

We’ve seen its like before whenever real-world currency intersects with virtual worlds. Anyone who remembers Second Life will recall the promise of paying your rent solely from the sale of digital goods you design in cyberspace. Real-Money Trading (RMT) has become so prevalent in online gaming that entire outfits of botters spring up every time a new MMO launches to farm in-game gold and sell it to new players for cold, hard cash. In all these instances, the introduction of RMT into game-like worlds creates a fundamental opposition between two classes of players: those who live to play, and those who play to live. These opposing forces ruin the fun of the game: contesting for virtual resources against people whose very livelihoods are at stake is not enjoyable, and even if you do manage to win some contest of in-game markets, you know it’s often at the expense of someone who can’t afford to lose.

Rent-seeking behaviors like monthly subscription models, transaction fees, memberships, and virtual leases all attempt to generate ever-increasing revenue at vanishingly small costs; what does it cost a Metaverse vendor to sell you their 122nd copy of the virtual jacket they modeled a few months ago? What does it cost Decentraland to rent you a plot of virtual land in an artificially limited neighborhood?

Companies and enterprising individuals saw the Metaverse as a way to cheat the system, to reduce their cost of goods sold so drastically that they could mint infinite money. All they needed to do was sell some people on their vision of the “future”, making sure the FOMO was strong enough that a crowd of suckers with more money than sense would blow it all in their virtual casino before they realized how bad the tech was, and how little value the goods had as a result.

VR Still Isn’t Ready for the Mainstream

Let’s talk about the tech for a minute, because the Metaverse somehow managed to Web 3.0 its way into an impossible promise that resulted in awful experiences.

Job interviews, concerts, and immersive experiences were all promises of Second Life back in the early 2000s, and the experiences back then beat out what the best of the Metaverse has to offer today in the likes of Decentraland and Horizon Worlds.

What the Metaverse promised and failed to deliver was realistic VR at the scale of hundreds if not thousands of people packed into the same virtual spaces, interacting with each other and their environment.

In the VR space, the gold standard for social spaces is VRChat. It’s been around for nearly a decade and has had plenty of time to mature, gaining features and optimizing performance continuously. Even VRChat, though, requires powerful PC hardware and an expensive VR headset to experience to its fullest. To have a non-nauseating experience in VRChat, you need a ~$2,000 computer, a $1,000 headset, and a wired internet connection with enough bandwidth to stream in avatar models and textures every time you enter a new room. VRChat starts to fall apart at around 30 people in the same room: the framerate drops and avatars stutter, and if you aren’t used to it, it can quickly become nauseating. You can maybe pack 100 people into a single instance if you really try and everyone has well-optimized avatars, but the thought of thousands of independent avatars moving around and interacting in a single virtual space is beyond what’s achievable given the current technological limitations of networked VR engines.

The Metaverse promised way too much, far ahead of what was possible and without a clear idea of how it would ever make it possible. Had the Metaverse built itself on VRChat, things might have gone differently: there may have been some pleasant experiences to be had, and folks might have come back to a VR space hosted by some miscellaneous corporation more than once. Unfortunately, the Web 3.0 part of the Great Vision of the Metaverse required that everything be “decentralized” and “on-chain”, which led developers to try building their own new VR engines and netcode from scratch, attempting to pack hundreds of users into virtual spaces on untested frameworks.

If the minds behind these new engines and frameworks had been battle-tested VR engine-eers, they might have succeeded in a few years, once the hardware caught up a bit more to their ambitions. Unfortunately, the kinds of developers drawn to the Metaverse were gold-diggers who wanted to launch an MVP as quickly as possible in a great land-grab fueled by FOMO hype.

The Metaverse didn’t fail because “VR is bad” or because “people just didn’t get it”; it failed because, instead of building technology and experiences worthy of people’s time, it became a series of get-rich-quick schemes, poorly executed by developers who knew just enough to make things look promising in a YouTube video. Every project became about making money; even as a consumer, there was a way for you to profit from buying land from the developer, or some way for you to loan out your digital goods to others and make money from them. This wasn’t the experience-focused, immersive digital world dreamt up by Gibson and Stephenson, but a late-stage capitalist dystopia where everything was a ploy to get more hands into your wallet.

Large Language Models: A Surprising Success Story

In stark contrast to the Metaverse, LLMs like OpenAI’s ChatGPT and GPT-4, Google’s Bard, and Meta’s LLaMA have experienced rapid adoption and widespread success.

Where the Metaverse started with its Evangelists trying to sell everyone products that didn’t fully exist, LLMs started as a science experiment that Google thought had hit a dead end 6 years ago.

Back in 2017, Google Brain (since merged into Google DeepMind) published a paper called “Attention Is All You Need”, which introduced the “Transformer” (the “T” in “GPT”) neural-network architecture.

The Transformer was unique in its training cost (incredibly cheap compared to other model architectures at the time, requiring up to three orders of magnitude less compute) and in its accuracy on written-language translation tasks (on par with, or better than, the best competing models).
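
For the curious, the heart of the Transformer is a deceptively simple operation called scaled dot-product attention. Here’s a minimal NumPy sketch of that one equation from the paper; it’s purely illustrative, not anyone’s production code:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # how strongly each query "attends" to each key
    weights = softmax(scores, axis=-1)              # attention weights; each row sums to 1
    return weights @ V                              # weighted sum of the values

# Toy example: a "sentence" of 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

The cheapness the paper reported comes largely from the fact that this is all matrix multiplication, which parallelizes beautifully on GPUs, unlike the sequential recurrence in the RNN-based translation models it displaced.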

After a bit of hype around the paper’s release, Google quietly went back underground, occasionally training new Transformer models and pursuing other things, having dropped a proverbial bombshell in the machine-learning research community.

Under the radar, OpenAI picked up Google’s Transformer architecture and decided to throw some more money at it. With the Transformer’s greatly reduced training cost, OpenAI opted to increase the number of parameters, growing the original GPT model to 110 million parameters in 2018, breaking accuracy benchmarks and surprising the research community yet again.

After this first success, the development of LLMs followed a hockey-stick trajectory, with improvements in capability arriving at an accelerating pace. Compute became more and more available thanks to cloud-hosted Nvidia GPUs, allowing OpenAI to train GPT-2, with up to 1.5 billion parameters in its XL variant, in 2019. The world finally took notice when the 175-billion-parameter GPT-3 was announced in May of 2020, showing unprecedented language competency and forcing everyone to pay attention to the present and future of LLMs.
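
Those parameter counts aren’t abstract, either; GPT-2’s weights are public, so you can count them yourself. A quick sketch, assuming you’re willing to use the Hugging Face transformers package (my tooling choice, not something the original releases depended on):

```python
# pip install transformers torch
from transformers import GPT2LMHeadModel

# "gpt2-xl" is the publicly released 1.5-billion-parameter GPT-2 variant
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
n_params = sum(p.numel() for p in model.parameters())
print(f"gpt2-xl parameters: {n_params / 1e9:.2f}B")  # prints roughly 1.56B
```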

This sudden and significant leap in capability revealed a multitude of unexpected applications. From chatbots and virtual assistants to content generation and data analysis, LLMs found uses across various industries, thus fueling their rapid rise in popularity.

ChatGPT’s launch as a consumer-facing product quickly showed everyday people how powerful generative Transformer models have become, providing a helpful product that people could actually use immediately to make their lives a bit easier. The free-to-use nature of these products has ensured that everyone who hears about them gets a chance to try them out and see what utility they can derive from them.
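
And for developers, the barrier to entry is a few lines of code. Here’s a minimal sketch using the openai Python package’s pre-1.0 API (later releases restructure these calls, so treat this as illustrative rather than definitive):

```python
# pip install openai
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind the free ChatGPT tier at launch
    messages=[
        {"role": "user",
         "content": "Draft a polite email to my HOA defending my front door's new paint job."},
    ],
)
print(response.choices[0].message.content)
```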

Rather than being products sold before they existed, LLMs emerged naturally from the research community, proving useful to everyone who experienced them because their value was clear and immediate. LLMs aren’t trying to sell you anything (yet); you don’t use them because you think you’ll be able to sell them to someone else and make a tidy profit. They’re tools being provided to everyone to make their lives easier, leveraging the wealth of compute resources we’ve collectively built to efficiently and effectively accomplish common tasks through conversation with a computer.

GPT-4 has reached competency levels that allow it to pass the bar exam, write at the level of a college graduate, and develop software like a well-trained engineer. Of course these models don’t get everything right all the time, but they’re self-correcting and are only going to get better. In 6 years we’ve gone from a model that does a shitty job of translating English into French to one capable of writing books, software, and emails to your HOA in defense of the new paint job on your front door (all of these accomplished by the same model, mind you). In another 6 years I have no idea how much further we’ll have come, but I do know that LLMs are the inevitable disruptive force we’ve been waiting for in the space of computing.

Lessons Learned: The Power of Research-Driven Innovation

The contrasting fates of the Metaverse and Large Language Models serve as a powerful reminder of the importance of research-driven innovation and the perils of consumerism-driven hype. While the Metaverse failed to live up to its lofty promises, LLMs have emerged as a game-changing force in computing, reshaping industries and altering the way we interact with technology.

The success of LLMs is a testament to the power of curiosity, innovation, and perseverance in the face of uncertainty. It also serves as a reminder that true breakthroughs in technology often arise from unexpected sources, and that a focus on research and discovery can yield far-reaching rewards.

Who knows, maybe in a few years LLMs will be able to implement the Metaverse as Gibson and Stephenson envisioned.