A Brief History of the Web
This is part 2 of my series, The Black Purposes of Web3, where I post my undergraduate thesis in sections. Read the series intro.
This post corresponds to the second chapter ("History of the Web"). I condensed it significantly because, frankly, it was wordy and boring. I kept the structure but trimmed it to the highlights needed to ground the subsequent posts.
Before we can evaluate Web3's promises, we need to understand what came before it. The web has evolved significantly since its invention, and each iteration has brought new possibilities—and new problems.
Web 1.0: The Read-Only Web
Tim Berners-Lee invented the first iteration of the World Wide Web, now known as Web 1.0, in 1989 while working at CERN, a European research center. With scientists and collaborators spread across the world, CERN needed a better way to share information. Berners-Lee's solution was a system of linked documents that could be accessed through a browser—the "web" of information we know today.
Initially, the web was mainly accessible to scientists and academics. But once user-friendly browsers became available to the general public in the early 1990s—particularly the Mosaic browser in 1993 with its "point-and-click" interface—the web rapidly expanded to millions of users.[1]
Here's the key limitation of Web 1.0: it was essentially read-only. Most users didn't change what was on the web—they simply read the static pages created by information providers. The web was a collection of documents you could browse, but not really interact with.
Web 2.0: The Read-Write (and Surveillance) Web
Web 2.0 is the version of the web we use today. It wasn't "invented" at a specific moment; the term reflects new developments in technology and a shift in how users could interact online. This iteration is characterized by dynamic webpages, user-generated content, and interactivity between users.
Rather than just reading static pages, web users could now post information, contribute to sites, and interact with each other. This became known as the "read-write" web. Web 2.0 is thought to have emerged around 2004 with the rise of social networking platforms like Facebook and MySpace, which combined user-generated content with dynamic, interactive interfaces.[2] Users became central to how websites worked—they could contribute content, collaborate, and share knowledge. It was seen as "harnessing collective intelligence, turning the web into a kind of global brain."[3]
This sounds utopian. And in some ways, it was transformative. But there was a major side effect: companies could now own and profit from the content and data users were creating. The business model of Web 2.0 was built on providing "free" platforms in exchange for user data. Over time, platforms became increasingly sophisticated at collecting and monetizing this data.
As one researcher put it, the "architecture of participation" sometimes turned into an "architecture of exploitation."[4] This is the origin of the current controversies around Big Tech, the data economy, and user privacy online.
This evolution from read-only to read-write brought incredible new capabilities—but also created new problems around data, privacy, and corporate control. In the next post, I'll explore how the data economy emerged from Web 2.0's business model, and how it fundamentally changed our relationship with the internet.
References
1. J. Gillies and R. Cailliau, How the Web Was Born: The Story of the World Wide Web. Oxford University Press, 2000.
2. T. O'Reilly, "What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software," O'Reilly Media, 2005.
3. T. O'Reilly, "Web 2.0 Compact Definition: Trying Again," O'Reilly Radar, 2006.
4. T. Scholz, "What the MySpace generation should know about working for free," Re-public, 2008.