The fundamental elements of the difference between the networked information economy and the mass media are network architecture and the cost of becoming a speaker. The first element is the shift from a hub-and-spoke architecture with unidirectional links to the end points in the mass media, to a distributed architecture with multidirectional connections among all nodes in the networked information environment. The second is the practical elimination of communications costs as a barrier to speaking across associational boundaries. Together, these characteristics have fundamentally altered the capacity of individuals, acting alone or with others, to be active participants in the public sphere as opposed to its passive readers, listeners, or viewers. For authoritarian countries, this means that it is harder and more costly, though perhaps not entirely impossible, to both be networked and maintain control over their public spheres. China seems to be doing too good a job of this in the middle of the first decade of this century for us to say much more than that it is harder to maintain control, and therefore that at least in some authoritarian regimes, control will be looser. In liberal democracies, ubiquitous individual ability to produce information creates the potential for near-universal intake. It therefore portends significant, though not inevitable, changes in the structure of the public sphere from the commercial mass-media environment. These changes raise challenges for filtering. They underlie some of the critiques of the claims about the democratizing effect of the Internet that I explore later in this chapter. Fundamentally, however, they are the roots of possible change.
Beginning with the cost of sending an e-mail to some number of friends or to a mailing list of people interested in a particular subject, to the cost of setting up a Web site or a blog, and through to the possibility of maintaining interactive conversations with large numbers of people through sites like Slashdot, the cost of being a speaker in a regional, national, or even international political conversation is several orders of magnitude lower than the cost of speaking in the mass-mediated environment. This, in turn, leads to several orders of magnitude more speakers and participants in conversation and, ultimately, in the public sphere. The change is as much qualitative as it is quantitative. The qualitative change is represented in the experience of being a potential speaker, as opposed to simply a listener and voter. It relates to the self-perception of individuals in society and the culture of participation they can adopt. The easy possibility of communicating effectively into the public sphere allows individuals to reorient themselves from passive readers and listeners to potential speakers and participants in a conversation. The way we listen to what we hear changes because of this; as does, perhaps most fundamentally, the way we observe and process daily events in our lives. We no longer need to take these as merely private observations, but as potential subjects for public communication. This change affects the relative power of the media. It affects the structure of intake of observations and views. It affects the presentation of issues and observations for discourse. It affects the way issues are filtered, for whom and by whom. 
Finally, it affects the ways in which positions are crystallized and synthesized, sometimes still by being amplified to the point that the mass media take them as inputs and convert them into political positions, but occasionally by direct organization of opinion and action to the point of reaching a salience that drives the political process directly. The basic case for the democratizing effect of the Internet, as seen from the perspective of the mid-1990s, was articulated in an opinion of the U.S. Supreme Court in Reno v. ACLU: The Web is thus comparable, from the readers' viewpoint, to both a vast library including millions of readily available and indexed publications and a sprawling mall offering goods and services. From the publishers' point of view, it constitutes a vast platform from which to address and hear from a world-wide audience of millions of readers, viewers, researchers, and buyers. Any person or organization with a computer connected to the Internet can "publish" information. Publishers include government agencies, educational institutions, commercial entities, advocacy groups, and individuals. . . . Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer. As the District Court found, "the content on the Internet is as diverse as human thought."1
The observations of what is different and unique about this new medium relative to those that dominated the twentieth century are already present in the quote from the Court. There are two distinct types of effects. The first, as the Court notes from "the readers' perspective," is the abundance and diversity of human expression available to anyone, anywhere, in a way that was not feasible in the mass-mediated environment. The second, and more fundamental, is that anyone can be a publisher, including individuals, educational institutions, and nongovernmental organizations (NGOs), alongside the traditional speakers of the mass-media environment--government and commercial entities. Since the end of the 1990s there has been significant criticism of this early conception of the democratizing effects of the Internet. One line of critique includes variants of the Babel objection: the concern that information overload will lead to fragmentation of discourse, polarization, and the loss of political community. A different and descriptively contradictory line of critique suggests that the Internet is, in fact, exhibiting concentration: Both infrastructure and, more fundamentally, patterns of attention are much less distributed than we thought. As a consequence, the Internet diverges from the mass media much less than we thought in the 1990s and significantly less than we might hope. I begin the chapter by offering a menu of the core technologies and usage patterns that can be said, as of the middle of the first decade of the twenty-first century, to represent the core Internet-based technologies of democratic discourse. I then use two case studies to describe the social and economic practices through which these tools are implemented to construct the public sphere, and how these practices differ quite radically from the mass-media model.
Against the background of these stories, we are then able to consider the critiques that have been leveled against the claim that the Internet democratizes. Close examination of the application of the networked information economy to the production of the public sphere suggests that the emerging networked public sphere offers significant improvements over one dominated by commercial mass media. Throughout the discussion, it is important to keep in mind that the baseline for comparison is the public sphere that we in fact had throughout the twentieth century, the one dominated by mass media, not the utopian image of "everyone a pamphleteer" that animated the hopes of the 1990s for Internet democracy. Departures from the naïve utopia are not signs that the Internet does not democratize, after all. They are merely signs that the medium and its analysis are maturing.
BASIC TOOLS OF NETWORKED COMMUNICATION
Analyzing the effect of the networked information environment on public discourse by cataloging the currently popular tools for communication is, to some extent, self-defeating. These will undoubtedly be supplanted by new ones. Analyzing this effect without having a sense of what these tools are or how they are being used is, on the other hand, impossible. This leaves us with the need to catalog what is, while trying to abstract from what is being used to what relationships of information and communication are emerging, and from these to transpose to a theory of the networked information economy as a new platform for the public sphere. E-mail is the most popular application on the Net. It is cheap and trivially easy to use. Basic e-mail, as currently used, is not ideal for public communications. While it provides a cheap and efficient means of communicating with large numbers of individuals who are not part of one's basic set of social associations, the presence of large amounts of commercial spam and the amount of mail flowing in and out of mailboxes make indiscriminate e-mail distributions a relatively poor mechanism for being heard. E-mails to smaller groups, preselected by the sender for having some interest in a subject or relationship to the sender, do, however, provide a rudimentary mechanism for communicating observations, ideas, and opinions to a significant circle, on an ad hoc basis. Mailing lists are more stable and self-selecting, and
therefore more significant as a basic tool for the networked public sphere. Some mailing lists are moderated or edited, and run by one or a small number of editors. Others are not edited in any significant way. What separates mailing lists from most Web-based uses is the fact that they push the information on them into the mailbox of subscribers. Because of their attention limits, individuals restrict their subscriptions, so posting on a mailing list tends to be done by and for people who have self-selected as having a heightened degree of common interest, substantive or contextual. It therefore enhances the degree to which one is heard by those already interested in a topic. It is not a communications model of one-to-many, or few-to-many as broadcast is, to an open, undefined class of audience members. Instead, it allows one, or a few, or even a limited large group to communicate to a large but limited group, where the limit is self-selection as being interested or even immersed in a subject. The World Wide Web is the other major platform for tools that individuals use to communicate in the networked public sphere. It enables a wide range of applications, from basic static Web pages, to, more recently, blogs and various social-software-mediated platforms for large-scale conversations of the type described in chapter 3--like Slashdot. Static Web pages are the individual's basic "broadcast" medium. They allow any individual or organization to present basic texts, sounds, and images pertaining to their position. They allow small NGOs to have a worldwide presence and visibility. They allow individuals to offer thoughts and commentaries. They allow the creation of a vast, searchable database of information, observations, and opinions, available at low cost for anyone, both to read and write into. This does not yet mean that all these statements are heard by the relevant others to whom they are addressed.
Substantial analysis is devoted to that problem, but first let us complete the catalog of tools and information flow structures. One Web-based tool and an emerging cultural practice around it that extend the basic characteristics of Web sites as media for the political public sphere are Web logs, or blogs. Blogs are a tool and an approach to using the Web that extends the use of Web pages in two significant ways. Technically, blogs are part of a broader category of innovations that make the Web "writable." That is, they make Web pages easily capable of modification through a simple interface. They can be modified from anywhere with a networked computer, and the results of writing onto the Web page are immediately available to anyone who accesses the blog to read. This technical change resulted in two divergences from the cultural practice of Web sites
in the 1990s. First, they allowed the evolution of a journal-style Web page, where individual short posts are added to the Web site at short or long intervals. As practice has developed over the past few years, these posts are usually archived chronologically. For many users, this means that blogs have become a form of personal journal, updated daily or so, for their own use and perhaps for the use of a very small group of friends. What is significant about this characteristic from the perspective of the construction of the public sphere is that blogs enable individuals to write to their Web pages in journalism time--that is, hourly, daily, weekly--whereas the Web page culture that preceded it tended to be slower moving: less an equivalent of reportage than of the essay. Today, one certainly finds individuals using blog software to maintain what are essentially static Web pages, to which they add essays or content occasionally, and Web sites that do not use blogging technology but are updated daily. The public sphere function is based on the content and cadence--that is, the use practice--not the technical platform. The second critical innovation of the writable Web in general and of blogs in particular was the fact that in addition to the owner, readers/users could write to the blog. Blogging software allows the person who runs a blog to permit some, all, or none of the readers to post comments to the blog, with or without retaining power to edit or moderate the posts that go on, and those that do not. The result is therefore not only that many more people write finished statements and disseminate them widely, but also that the end product is a weighted conversation, rather than a finished good. It is a conversation because of the common practice of allowing and posting comments, as well as comments to these comments. Blog writers--bloggers--often post their own responses in the comment section or address comments in the primary section.
Blog-based conversation is weighted, because the culture and technical affordances of blogging give the owner of the blog greater weight in deciding who gets to post or comment and who gets to decide these questions. Different blogs use these capabilities differently; some opt for broader intake and discussion on the board, others for a more tightly edited blog. In all these cases, however, the communications model or information-flow structure that blogs facilitate is a weighted conversation that takes the form of one or a group of primary contributors/authors, together with some larger number, often many, secondary contributors, communicating to an unlimited number of many readers. The writable Web also encompasses another set of practices that are distinct, but that are often pooled in the literature together with blogs. These
are the various larger-scale, collaborative-content production systems available on the Web, of the type described in chapter 3. Two basic characteristics make sites like Slashdot or Wikipedia different from blogs. First, they are intended for, and used by, very large groups, rather than intended to facilitate a conversation weighted toward one or a small number of primary speakers. Unlike blogs, they are not media for individual or small group expression with a conversation feature. They are intrinsically group communication media. They therefore incorporate social software solutions to avoid deterioration into chaos--peer review, structured posting privileges, reputation systems, and so on. Second, in the case of Wikis, the conversation platform is anchored by a common text. From the perspective of facilitating the synthesis of positions and opinions, the presence of collaborative authorship of texts offers an additional degree of viscosity to the conversation, so that views "stick" to each other, must jostle for space, and accommodate each other. In the process, the output is more easily recognizable as a collective output and a salient opinion or observation than where the form of the conversation is more free-flowing exchange of competing views. Common to all these Web-based tools--both static and dynamic, individual and cooperative--are linking, quotation, and presentation. It is at the very core of the hypertext markup language (HTML) to make referencing easy. And it is at the very core of a radically distributed network to allow materials to be archived by whoever wants to archive them, and then to be accessible to whoever has the reference. Around these easy capabilities, the cultural practice has emerged to reference through links for easy transition from your own page or post to the one you are referring to--whether as inspiration or in disagreement. 
This culture is fundamentally different from the mass-media culture, where sending a five-hundred-page report to millions of users is hard and expensive. In the mass media, therefore, instead of allowing readers to read the report alongside its review, all that is offered is the professional review in the context of a culture that trusts the reviewer. On the Web, linking to original materials and references is considered a core characteristic of communication. The culture is oriented toward "see for yourself." Confidence in an observation comes from a combination of the reputation of the speaker as it has emerged over time, reading underlying sources you believe you have some competence to evaluate for yourself, and knowing that for any given referenced claim or source, there is some group of people out there, unaffiliated with the reviewer or speaker, who will have access to the source and the means for making their disagreement with the
speaker's views known. Linking and "see for yourself" represent a radically different and more participatory model of accreditation than typified the mass media. Another dimension that is less well developed in the United States than it is in Europe and East Asia is mobility, or the spatial and temporal ubiquity of basic tools for observing and commenting on the world we inhabit. Dan Gillmor is clearly right to include these basic characteristics in his book We the Media, adding short message service (SMS) and mobile connected cameras to mailing lists, Web logs, Wikis, and the other core tools of what he describes as a transformation in journalism. The United States has remained mostly a PC-based networked system, whereas in Europe and Asia, there has been more substantial growth in handheld devices, primarily mobile phones. In these domains, SMS--the "e-mail" of mobile phones--and camera phones have become critical sources of information, in real time. In some poor countries, where cell phone minutes remain very (even prohibitively) expensive for many users and where landlines may not exist, text messaging is becoming a central and ubiquitous communication tool. What these suggest to us is a transition, as the capabilities of both systems converge, to widespread availability of the ability to register and communicate observations in text, audio, and video, wherever we are and whenever we wish. Drazen Pantic tells of how listeners of Internet-based Radio B-92 in Belgrade reported events in their neighborhoods after the broadcast station had been shut down by the Milosevic regime. Howard Rheingold describes in Smart Mobs how citizens of the Philippines used SMS to organize real-time movements and action to overthrow their government.
In a complex modern society, where things that matter can happen anywhere and at any time, the capacities of people armed with the means of recording, rendering, and communicating their observations change their relationship to the events that surround them. Whatever one sees and hears can be treated as input into public debate in ways that were impossible when capturing, rendering, and communicating were facilities reserved to a handful of organizations and a few thousands of their employees.
NETWORKED INFORMATION ECONOMY MEETS THE PUBLIC SPHERE
The networked public sphere is not made of tools, but of social production practices that these tools enable. The primary effect of the Internet on the
public sphere in liberal societies relies on the information and cultural production activity of emerging nonmarket actors: individuals working alone and cooperatively with others, more formal associations like NGOs, and their feedback effect on the mainstream media itself. These enable the networked public sphere to moderate the two major concerns with commercial mass media as a platform for the public sphere: (1) the excessive power it gives its owners, and (2) its tendency, when owners do not dedicate their media to exerting power, to foster an inert polity. More fundamentally, the social practices of information and discourse allow a very large number of actors to see themselves as potential contributors to public discourse and as potential actors in political arenas, rather than mostly passive recipients of mediated information who occasionally can vote their preferences. In this section, I offer two detailed stories that highlight different aspects of the effects of the networked information economy on the construction of the public sphere. The first story focuses on how the networked public sphere allows individuals to monitor and disrupt the use of mass-media power, as well as organize for political action. The second emphasizes in particular how the networked public sphere allows individuals and groups of intense political engagement to report, comment, and generally play the role traditionally assigned to the press in observing, analyzing, and creating political salience for matters of public interest. The case studies provide a context both for seeing how the networked public sphere responds to the core failings of the commercial, mass-media-dominated public sphere and for considering the critiques of the Internet as a platform for a liberal public sphere. Our first story concerns Sinclair Broadcasting and the 2004 U.S. presidential election.
It highlights the opportunities that mass-media owners have to exert power over the public sphere, the variability within the media itself in how this power is used, and, most significant for our purposes here, the potential corrective effect of the networked information environment. At its core, it suggests that the existence of radically decentralized outlets for individuals and groups can provide a check on the excessive power that media owners were able to exercise in the industrial information economy. Sinclair, which owns major television stations in a number of what were considered the most competitive and important states in the 2004 election--including Ohio, Florida, Wisconsin, and Iowa--informed its staff and stations that it planned to preempt the normal schedule of its sixty-two stations to air a documentary called Stolen Honor: The Wounds That Never Heal, as a news program, a week and a half before the elections.2 The documentary
was reported to be a strident attack on Democratic candidate John Kerry's Vietnam War service. One reporter in Sinclair's Washington bureau, who objected to the program and described it as "blatant political propaganda," was promptly fired.3 The fact that Sinclair owns stations reaching one quarter of U.S. households, that it used its ownership to preempt local broadcast schedules, and that it fired a reporter who objected to its decision, make this a classic "Berlusconi effect" story, coupled with a poster-child case against media concentration and the ownership of more than a small number of outlets by any single owner. The story of Sinclair's plans broke on Saturday, October 9, 2004, in the Los Angeles Times. Over the weekend, "official" responses were beginning to emerge in the Democratic Party. The Kerry campaign raised questions about whether the program violated election laws as an undeclared "in-kind" contribution to the Bush campaign. By Tuesday, October 12, the Democratic National Committee announced that it was filing a complaint with the Federal Election Commission (FEC), while seventeen Democratic senators wrote a letter to the chairman of the Federal Communications Commission (FCC), demanding that the commission investigate whether Sinclair was abusing the public trust in the airwaves. Neither the FEC nor the FCC, however, acted or intervened throughout the episode. Alongside these standard avenues of response in the traditional public sphere of commercial mass media, their regulators, and established parties, a very different kind of response was brewing on the Net, in the blogosphere. On the morning of October 9, 2004, the Los Angeles Times story was blogged on a number of political blogs--Josh Marshall on talkingpointsmemo.com, Chris Bowers on MyDD.com, and Markos Moulitsas on dailyKos.com. By midday that Saturday, October 9, two efforts aimed at organizing opposition to Sinclair were posted on dailyKos and MyDD.
A "boycottSinclair" site was set up by one individual, and was pointed to by these blogs. Chris Bowers on MyDD provided a complete list of Sinclair stations and urged people to call the stations and threaten to picket and boycott. By Sunday, October 10, the dailyKos posted a list of national advertisers with Sinclair, urging readers to call them. On Monday, October 11, MyDD linked to that list, while another blog, theleftcoaster.com, posted a variety of action agenda items, from picketing affiliates of Sinclair to suggesting that readers oppose Sinclair license renewals, providing a link to the FCC site explaining the basic renewal process and listing public-interest organizations to work with. That same day, another individual, Nick Davis, started a Web site,
BoycottSBG.com, on which he posted the basic idea that a concerted boycott of local advertisers was the way to go, while another site, stopsinclair.org, began pushing for a petition. In the meantime, TalkingPoints published a letter from Reed Hundt, former chairman of the FCC, to Sinclair, and continued finding tidbits about the film and its maker. Later on Monday, TalkingPoints posted a letter from a reader who suggested that stockholders of Sinclair could bring a derivative action. By 5:00 a.m. on Tuesday, October 12, however, TalkingPoints began pointing toward Davis's database on BoycottSBG.com. By 10:00 that morning, Marshall posted on TalkingPoints a letter from an anonymous reader, which began by saying: "I've worked in the media business for 30 years and I guarantee you that sales is what these local TV stations are all about. They don't care about license renewal or overwhelming public outrage. They care about sales only, so only local advertisers can affect their decisions." This reader then outlined a plan: watch and list all local advertisers, write to the sales managers--not the general managers--of the local stations to tell them which advertisers would be called, and then call those advertisers. By 1:00 p.m. Marshall posted a story of his own experience with this strategy. He used Davis's database to identify an Ohio affiliate's local advertisers. He tried to call the sales manager of the station, but could not get through. He then called the advertisers. The post is a "how to" instruction manual, including admonitions to remember that the advertisers know nothing of this, that the story must be explained, that accusatory tones must be avoided, and so on. Marshall then began to post letters from readers who explained with whom they had talked--a particular sales manager, for example--and who were then referred to national headquarters. He continued to emphasize that advertisers were the right addressees. By 5:00 p.m.
that same Tuesday, Marshall was reporting more readers writing in about experiences, and continued to steer his readers to sites that helped them to identify their local affiliate's sales manager and their advertisers.4 By the morning of Wednesday, October 13, the boycott database already included eight hundred advertisers, and was providing sample letters for users to send to advertisers. Later that day, BoycottSBG reported that some participants in the boycott had received reply e-mails telling them that their unsolicited e-mail constituted illegal spam. Davis explained that the CAN-SPAM Act, the relevant federal statute, applied only to commercial spam, and pointed users to a law firm site that provided an overview of CAN-SPAM. By October 14, the boycott effort was clearly bearing fruit. Davis
reported that Sinclair affiliates were threatening advertisers who cancelled advertisements with legal action, and called for volunteer lawyers to help respond. Within a brief period, he collected more than a dozen volunteers to help the advertisers. Later that day, another blogger at grassrootsnation.com set up a utility that allowed users to send an e-mail to all advertisers in the BoycottSBG database. By the morning of Friday, October 15, Davis was reporting more than fifty advertisers pulling ads, and three or four mainstream media reports had picked up the boycott story and reported on it. That day, an analyst at Lehman Brothers issued a research report that downgraded the expected twelve-month outlook for the price of Sinclair stock, citing concerns about loss of advertiser revenue and risk of tighter regulation. Mainstream news reports over the weekend and the following week systematically placed that report in the context of local advertisers pulling their ads from Sinclair. On Monday, October 18, the company's stock price dropped by 8 percent (while the S&P 500 rose by about half a percent). The following morning, the stock dropped a further 6 percent, before beginning to climb back, as Sinclair announced that it would not show Stolen Honor, but would instead provide a balanced program including only portions of the documentary alongside arguments on the other side. On that day, the company's stock price had reached its lowest point in three years. The day after the announced change in programming, the share price bounced back to where it had been on October 15. There were obviously multiple reasons for the stock price losses, and Sinclair stock had been losing ground for many months prior to these events.
Nonetheless, as figure 7.1 demonstrates, the market responded quite sluggishly to the announcements of regulatory and political action by the Democratic establishment earlier in the week of October 12, by comparison to the precipitous decline and dramatic bounce-back surrounding the market projections that referred to advertising loss. While this does not prove that the Web-organized, blog-driven and -facilitated boycott was the determining factor, as compared to fears of formal regulatory action, the timing strongly suggests that the efficacy of the boycott played a very significant role. The first lesson of the Sinclair Stolen Honor story is about commercial mass media themselves. The potential for the exercise of inordinate power by media owners is not an imaginary concern. Here was a publicly traded firm whose managers supported a political party and who planned to use their corporate control over stations reaching one quarter of U.S. households, many in swing states, to put a distinctly political message in front of this
We can now reveal for the first time the location of a complete online copy of the original data set. As we anticipate attempts to prevent the distribution of this information we encourage supporters of democracy to make copies of these files and to make them available on websites and file sharing networks: http://users.actrix.co.nz/dolly/. As many of the files are zip password protected you may need some assistance in opening them, we have found that the utility available at the following URL works well: http://www.lostpassword.com. Finally some of the zip files are partially damaged, but these too can be read by using the utility at: http://www.zip-repair.com/. At this stage in this inquiry we do not believe that we have come even remotely close to investigating all aspects of this data; i.e., there is no reason to believe that the security flaws discovered so far are the only ones. Therefore we expect many more discoveries to be made. We want the assistance of the online computing community in this enterprise and we encourage you to file your findings at the forum HERE [providing link to forum].
A number of characteristics of this call to arms would have been simply infeasible in the mass-media environment. They represent a genuinely different mind-set about how news and analysis are produced and how censorship and power are circumvented. First, the ubiquity of storage and communications capacity means that public discourse can rely on "see for yourself" rather than on "trust me." The first move, then, is to make the raw materials available for all to see. Second, the editors anticipated that the company would try to suppress the information. Their response was not to use a counterweight of the economic and public muscle of a big media corporation to protect use of the materials. Instead, it was widespread distribution of information--about where the files could be found, and about where tools to crack the passwords and repair bad files could be found--matched with a call for action: get these files, copy them, and store them in many places so they cannot be squelched. Third, the editors did not rely on large sums of money flowing from being a big media organization to hire experts and interns to scour the files. Instead, they posed a challenge to whoever was interested--there are more scoops to be found, this is important for democracy, good hunting!! Finally, they offered a platform for integration of the insights on their own forum. This short paragraph outlines a mechanism for radically distributed storage, distribution, analysis, and reporting on the Diebold files. As the story unfolded over the next few months, this basic model of peer production of investigation, reportage, analysis, and communication indeed worked. It resulted in the decertification of some of Diebold's systems in California, and contributed to a shift in the requirements of a number of states, which now require voting machines to produce a paper trail for recount purposes.
The first analysis of the Diebold system based on the files Harris originally found was performed by a group of computer scientists at the Information Security Institute at Johns Hopkins University and released as a working paper in late July 2003. The Hopkins Report, or Rubin Report as it was also named after one of its authors, Aviel Rubin, presented deep criticism of the Diebold system and its vulnerabilities on many dimensions. The academic credibility of its authors required a focused response from Diebold. The company published a line-by-line response. Other computer scientists joined in the debate. They showed the limitations and advantages of the Hopkins Report, but also where the Diebold response was adequate and where it provided implicit admission of the presence of a number of the vulnerabilities identified in the report. The report and the comments on it sparked two other major reports, commissioned by Maryland in the fall of 2003 and later in January 2004, as part of that state's efforts to decide whether to adopt electronic voting machines. Both studies found a wide range of flaws in the systems they examined and required modifications (see figure 7.2). Meanwhile, trouble was brewing elsewhere for Diebold. In early August 2003, someone provided Wired magazine with a very large cache containing thousands of internal e-mails of Diebold. Wired reported that the e-mails were obtained by a hacker, emphasizing this as another example of the laxity of Diebold's security. However, the magazine provided neither an analysis of the e-mails nor access to them. Bev Harris, the activist who had originally found the Diebold materials, on the other hand, received the same cache, and posted the e-mails and memos on her site. Diebold's response was to threaten litigation. Claiming copyright in the e-mails, the company demanded from Harris, her Internet service provider, and a number of other sites where the materials had been posted, that the e-mails be removed.
The e-mails were removed from these sites, but the strategy of widely distributed replication of data and its storage in many different topological and organizationally diverse settings made Diebold's efforts ultimately futile. The protagonists from this point on were college students. First, two students at Swarthmore College in Pennsylvania, and quickly students in a number of other universities in the United States, began storing the e-mails and scouring them for evidence of impropriety. In October 2003, Diebold proceeded to write to the universities whose students were hosting the materials. The company invoked provisions of the Digital Millennium Copyright Act that require Web-hosting companies to remove infringing materials when copyright owners notify them of the presence of these materials on their sites. The universities obliged, and required the students to remove the materials from their sites. The students, however, did not disappear quietly into the
CRITIQUES OF THE CLAIMS THAT THE INTERNET HAS DEMOCRATIZING EFFECTS
It is common today to think of the 1990s, out of which came the Supreme Court's opinion in Reno v. ACLU, as a time of naïve optimism about the Internet, expressing in political optimism the same enthusiasm that drove the stock market bubble, with the same degree of justifiability. An ideal liberal public sphere did not, in fact, burst into being from the Internet, fully grown like Athena from the forehead of Zeus. The detailed criticisms of the early claims about the democratizing effects of the Internet can be characterized as variants of five basic claims:
Money will end up dominating anyway. A point originally raised by Eli Noam is that in this explosively large universe, getting attention will be as difficult as getting your initial message out in the mass-media context, if not more so. The same means that dominated the capacity to speak in the mass-media environment--money--will dominate the capacity to be heard on the Internet, even if it no longer controls the capacity to speak.
Fragmentation of attention and discourse. A point raised most explicitly by Cass Sunstein in Republic.com is that the ubiquity of information and the absence of the mass media as condensation points will impoverish public discourse by fragmenting it. There will be no public sphere.
Individuals will view the world through millions of personally customized windows that will offer no common ground for political discourse or action, except among groups of highly similar individuals who customize their windows to see similar things.
Polarization. A descriptively related but analytically distinct critique of Sunstein's was that the fragmentation would lead to polarization. When information and opinions are shared only within groups of like-minded participants, he argued, they tend to reinforce each other's views and beliefs without engaging with alternative views or seeing the concerns and critiques of others. This makes each view more extreme in its own direction and increases the distance between positions taken by opposing camps.
IS THE INTERNET TOO CHAOTIC, TOO CONCENTRATED, OR NEITHER?
The first-generation critique of the claims that the Internet democratizes focused heavily on three variants of the information overload or Babel objection. The basic descriptive proposition that animated the Supreme Court in Reno v. ACLU was taken as more or less descriptively accurate: Everyone would be equally able to speak on the Internet. However, this basic observation was then followed by a descriptive or normative explanation of why this development was a threat to democracy, or at least not much of a boon. The basic problem that is diagnosed by this line of critique is the problem of attention. When everyone can speak, the central point of failure becomes the capacity to be heard--who listens to whom, and how that question is decided. Speaking in a medium that no one will actually hear with any reasonable likelihood may be psychologically satisfying, but it is not a move in a political conversation. Noam's prediction was, therefore, that there would be a reconcentration of attention: money would reemerge in this environment as a major determinant of the capacity to be heard, certainly no less, and perhaps even more so, than it was in the mass-media environment.11 Sunstein's theory was different. He accepted Nicholas Negroponte's prediction that people would be reading "The Daily Me," that is, that each of us would create highly customized windows on the information environment that would be narrowly tailored to our unique combination of interests. From this assumption about how people would be informed, he spun out two distinct but related critiques. The first was that discourse would be fragmented. With no six o'clock news to tell us what is on the public agenda, there would be no public agenda, just a fragmented multiplicity of private agendas that never coalesce into a platform for political discussion. The second was that, in a fragmented discourse, individuals would cluster into groups of self-reinforcing, self-referential discussion groups.
These types of groups, he argued from social scientific evidence, tend to render their participants' views more extreme and less amenable to the conversation across political divides necessary to achieve reasoned democratic decisions. Extensive empirical and theoretical studies of actual use patterns of the Internet over the past five to eight years have given rise to a second-generation critique of the claim that the Internet democratizes. According to this critique, attention is much more concentrated on the Internet than we thought a few years ago: a tiny number of sites are highly linked, the vast majority of "speakers" are not heard, and the democratic potential of the Internet is lost. If correct, these claims suggest that Internet use patterns solve the problem of discourse fragmentation that Sunstein was worried about. Rather than each user reading a customized and completely different "newspaper," the vast majority of users turn out to see the same sites. In a network with a small number of highly visible sites that practically everyone reads, the discourse fragmentation problem is resolved. Because they are seen by most people, the polarization problem too is solved--the highly visible sites are not small-group interactions with homogeneous viewpoints. While resolving Sunstein's concerns, this pattern is certainly consistent with Noam's prediction that money would have to be paid to reach visibility, effectively replicating the mass-media model. While centralization would resolve the Babel objection, it would do so only at the expense of losing much of the democratic promise of the Net. Therefore, we now turn to the question: Is the Internet in fact too chaotic or too concentrated to yield a more attractive democratic discourse than the mass media did? I suggest that neither is the case.
At the risk of appearing a chimera of Goldilocks and Pangloss, I argue instead that the observed use of the network exhibits an order that is not too concentrated and not too chaotic, but rather, if not "just right," at least structures a networked public sphere more attractive than the mass-media-dominated public sphere. There are two very distinct types of claims about Internet centralization. The first, and earlier, has the familiar ring of media concentration. It is the simpler of the two, and is tractable to policy. The second, concerned with the emergent patterns of attention and linking on an otherwise open network, is more difficult to explain and intractable to policy. I suggest, however, that it actually stabilizes and structures democratic discourse, providing a better answer to the fears of information overload than either the mass media or any efforts to regulate attention to matters of public concern. The media-concentration type argument has been central to arguments about the necessity of open access to broadband platforms, made most forcefully over the past few years by Lawrence Lessig. The argument is that the basic instrumentalities of Internet communications are subject to concentrated markets. This market concentration in basic access becomes a potential point of concentration of the power to influence the discourse made possible by access. Eli Noam's recent work provides the most comprehensive study currently available of the degree of market concentration in media industries. It offers a bleak picture.12 Noam looked at markets in basic infrastructure components of the Internet: Internet backbones, Internet service providers (ISPs), broadband providers, portals, search engines, browser software, media player software, and Internet telephony. 
Aggregating across all these sectors, he found that the Internet sector defined in terms of these components was, throughout most of the period from 1984 to 2002, concentrated according to traditional antitrust measures. Between 1992 and 1998, however, this sector was "highly concentrated" by the Justice Department's measure of market concentration for antitrust purposes. Moreover, the power of the top ten firms in each of these markets, and in aggregate for firms that had large market segments in a number of these markets, shows that an ever-smaller number of firms were capturing about 25 percent of the revenues in the Internet sector. A cruder, but consistent finding is the FCC's, showing that 96 percent of homes and small offices get their broadband access either from their incumbent cable operator or their incumbent local telephone carrier.13 It is important to recognize that these findings are suggesting potential points of failure for the networked information economy. They are not a critique of the democratic potential of the networked public sphere, but rather show us how we could fail to develop it by following the wrong policies. The risk of concentration in broadband access services is that a small number of firms, sufficiently small to have economic power in the antitrust sense, will control the markets for the basic instrumentalities of Internet communications. Recall, however, that the low cost of computers and the open-ended architecture of the Internet protocol itself are the core enabling facts that have allowed us to transition from the mass-media model to the networked information model. As long as these basic instrumentalities are open and neutral as among uses, and are relatively cheap, the basic economics of nonmarket production described in part I should not change. Under competitive conditions, as technology makes computation and communications cheaper, a well-functioning market should ensure that outcome. 
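The Justice Department's measure of market concentration referred to above is the Herfindahl-Hirschman Index (HHI): the sum of the squared percentage market shares of all firms in a market. Under the merger guidelines in force during the period Noam studied, a market with an HHI above 1,800 counted as "highly concentrated." A minimal sketch of the calculation, using hypothetical market shares for illustration rather than Noam's actual data:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares,
    with shares expressed as percentages (0-100).

    A monopoly scores 10,000 (100**2); a market of many tiny firms
    approaches 0. Under the 1992 merger guidelines, HHI > 1,800
    meant "highly concentrated."
    """
    return sum(s ** 2 for s in shares)

# Hypothetical broadband market: two incumbents split nearly all of it,
# loosely echoing the FCC finding that 96 percent of homes get broadband
# from the incumbent cable operator or telephone carrier.
shares = [50, 46, 2, 1, 1]
print(hhi(shares))  # 4622 -> far above the 1,800 "highly concentrated" line
```

The instructive feature of the index is that squaring the shares weights large firms disproportionately: five firms at 20 percent each yield an HHI of 2,000, while the two-incumbent market above, with the same number of firms, scores more than twice that.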
Under oligopolistic conditions, however, there is a threat that the network will become too expensive to be neutral as among market and nonmarket production. If basic upstream network connections, server space, and up-to-date reading and writing utilities become so expensive that one needs to adopt a commercial model to sustain them, then the basic economic characteristic that typifies the networked information economy--the relatively large role of nonproprietary, nonmarket production--will have been reversed. However, the risk is not focused solely or even primarily on explicit pricing. One of the primary remaining scarce resources in the networked environment is user time and attention. As chapter 5 explained, owners of communications facilities can extract value from their users in ways that are more subtle than increasing price. In particular, they can make some sites and statements easier to reach and see--more prominently displayed on the screen, faster to load--and sell that relative ease to those who are willing to pay.14 In that environment, nonmarket sites are systematically disadvantaged irrespective of the quality of their content.
The critique of concentration in this form therefore does not undermine the claim that the networked information economy, if permitted to flourish, will improve the democratic public sphere. It underscores the threat of excessive monopoly in infrastructure to the sustainability of the networked public sphere. The combination of observations regarding market concentration and an understanding of the importance of a networked public sphere to democratic societies suggests that a policy intervention is possible and desirable. Chapter 11 explains why the relevant intervention is to permit substantial segments of the core common infrastructure--the basic physical transport layer of wireless or fiber and the software and standards that run communications--to be produced and provisioned by users and managed as a commons.
ON POWER LAW DISTRIBUTIONS, NETWORK TOPOLOGY, AND BEING HEARD
A much more intractable challenge to the claim that the networked information economy will democratize the public sphere emerges from observations of a set of phenomena that characterize the Internet, the Web, the blogosphere, and, indeed, most growing networks. In order to extract information out of the universe of statements and communications made possible by the Internet, users are freely adopting practices that lead to the emergence of a new hierarchy. Rather than succumb to the "information overload" problem, users are solving it by congregating in a small number of sites. This conclusion is based on a new but growing literature on the likelihood that a Web page will be linked to by others. The distribution of that probability turns out to be highly skew. That is, there is a tiny probability that any given Web site will be linked to by a huge number of people, and a very large probability that a given Web site will be linked to by only one other site, or even by none at all. This fact is true of large numbers of very different networks described in physics, biology, and social science, as well as in communications networks. If true in this pure form about Web usage, this phenomenon presents a serious theoretical and empirical challenge to the claim that Internet communications of the sorts we have seen here meaningfully decentralize democratic discourse. It is not a problem that is tractable to policy. We cannot as a practical matter force people to read different things than what they choose to read; nor should we wish to. If users avoid information overload by focusing on a small subset of sites in an otherwise
open network that allows them to read more or less whatever they want and whatever anyone has written, policy interventions aimed to force a different pattern would be hard to justify from the perspective of liberal democratic theory. The sustained study of the distribution of links on the Internet and the Web is relatively new--only a few years old. There is significant theoretical work in a field of mathematics called graph theory, or network topology, on power law distributions in networks, on skew distributions that are not pure power law, and on the mathematically related small-worlds phenomenon in networks. The basic intuition is that, if indeed a tiny minority of sites gets a large number of links, and the vast majority gets few or no links, it will be very difficult to be seen unless you are on the highly visible site. Attention patterns make the open network replicate mass media. While explaining this literature over the next few pages, I show that what is in fact emerging is very different from, and more attractive than, the mass-media-dominated public sphere. While the Internet, the Web, and the blogosphere are indeed exhibiting much greater order than the freewheeling, "everyone a pamphleteer" image would suggest, this structure does not replicate a mass-media model. We are seeing a newly shaped information environment, where indeed few are read by many, but clusters of moderately read sites provide platforms for vastly greater numbers of speakers than were heard in the mass-media environment. Filtering, accreditation, synthesis, and salience are created through a system of peer review by information affinity groups, topical or interest based. These groups filter the observations and opinions of an enormous range of people, and transmit those that pass local peer review to broader groups and ultimately to the polity more broadly, without recourse to market-based points of control over the information flow. 
Intense interest and engagement by small groups that share common concerns, rather than lowest-common-denominator interest in wide groups that are largely alienated from each other, is what draws attention to statements and makes them more visible. This makes the emerging networked public sphere more responsive to intensely held concerns of a much wider swath of the population than the mass media were capable of seeing, and creates a communications process that is more resistant to corruption by money. In what way, first, is attention concentrated on the Net? We are used to seeing probability distributions that describe social phenomena following a Gaussian distribution: where the mean and the median are the same and the
probabilities fall off symmetrically as we describe events that are farther from the median. This is the famous Bell Curve. Some phenomena, however, observed initially in Pareto's work on income distribution and Zipf's on the probability of the use of English words in text and in city populations, exhibit completely different probability distributions. These distributions have very long "tails"--that is, they are characterized by a very small number of very high-yield events (like the number of words that have an enormously high probability of appearing in a randomly chosen sentence, like "the" or "to") and a very large number of events that have a very low probability of appearing (like the probability that the word "probability" or "blogosphere" will appear in a randomly chosen sentence). To grasp intuitively how unintuitive such distributions are to us, we could think of radio humorist Garrison Keillor's description of the fictitious Lake Wobegon, where "all the children are above average." That statement is amusing because we assume intelligence follows a normal distribution. If intelligence were distributed according to a power law, most children there would actually be below average--the median is well below the mean in such distributions (see figure 7.4). Later work by Herbert Simon in the 1950s, and by Derek de Solla Price in the 1960s, on cumulative advantage in scientific citations15 presaged an emergence at the end of the 1990s of intense interest in power law characterizations of degree distributions, or the number of connections any point in a network has to other points, in many kinds of networks--from networks of neurons and axons, to social networks and communications and information networks.
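The Lake Wobegon point--that in a power law distribution the median sits well below the mean--can be checked with a quick simulation. This is a generic sketch using a Pareto draw from the Python standard library, not data from any of the Web studies discussed here:

```python
import random

random.seed(42)

# Draw samples from a Pareto (power-law) distribution with shape alpha.
# A smaller alpha means a heavier tail: a few enormous draws pull the
# mean far above the median.
alpha = 1.5
samples = sorted(random.paretovariate(alpha) for _ in range(100_000))

mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
below = sum(s < mean for s in samples) / len(samples)

print(f"mean   = {mean:.2f}")
print(f"median = {median:.2f}")
print(f"fraction of samples below the mean = {below:.2f}")
# The mean lands well above the median: a handful of huge draws (the
# "highly linked sites") dominate the average, while most draws sit
# below it -- most of the children are below average.
```

Contrast this with a Gaussian draw (`random.gauss`), where mean and median coincide and roughly half the samples fall on each side of the mean.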
The Internet and the World Wide Web offered a testable setting, where large-scale investigation could be done automatically by studying link structure (who is linked-in to and by whom, who links out and to whom, how these are related, and so on), and where the practical applications of better understanding were easily articulated--such as the design of better search engines. In 1999, Albert-László Barabási and Réka Albert published a paper in Science showing that a variety of networked phenomena have a predictable topology: The distribution of links into and out of nodes on the network follows a power law. There is a very low probability that any vertex, or node, in the network will be very highly connected to many others, and a very large probability that a very large number of nodes will be connected only very loosely, or perhaps not at all. Intuitively, a lot of Web sites link to information that is located on Yahoo!, while very few link to any randomly selected individual's Web site. Barabási and Albert hypothesized a mechanism
WHO WILL PLAY THE WATCHDOG FUNCTION?
A distinct critique leveled at the networked public sphere as a platform for democratic politics is the concern for who will fill the role of watchdog. Neil Netanel made this argument most clearly. His concern was that, while freedom of expression for all may be a good thing, and while we might even overcome the information overload problem, we live in a complex world with powerful actors. Government and corporate power is large, and individuals, no matter how good their tools, cannot be a serious alternative to a well-funded, independent press that can pay investigative reporters, defend lawsuits, and generally act like the New York Times and the Washington Post when they published the Pentagon Papers in the teeth of the Nixon administration's resistance, providing some of the most damning evidence against the planning and continued prosecution of the war in Vietnam. Netanel is cognizant of the tensions between the need to capture large audiences and sell advertising, on the one hand, and the role of watchdog, on the other. He nonetheless emphasizes that the networked public sphere cannot investigate as deeply or create the public salience that the mass media can. These shortcomings make commercial mass media, for all their limitations, necessary for a liberal public sphere. This diagnosis of the potential of the networked public sphere underrepresents its productive capacity. The Diebold story provides in narrative form a detailed response to each of the concerns. The problem of voting machines has all the characteristics of an important, hard subject. It stirs deep fears that democracy is being stolen, and is therefore highly unsettling. It involves a difficult set of technical judgments about the functioning of voting machines. It required exposure and analysis of corporate-owned materials in the teeth of litigation threats and efforts to suppress and discredit the criticism.
At each juncture in the process, the participants in the critique turned iteratively to peer production and radically distributed methods of investigation, analysis, distribution, and resistance to suppression: the initial observations of the whistle-blower or the hacker; the materials made available on a "see for yourself" and "come analyze this and share your insights" model; the distribution by students; and the fallback option when their server was shut down of replication around the network. At each stage, a peer-production solution was interposed in place of where a well-funded, high-end mass-media outlet would have traditionally applied funding in expectation of sales of copy. And it was only after the networked public sphere developed the analysis and debate that the mass media caught on, and then only gingerly. The Diebold case was not an aberration, but merely a particularly rich case study of a much broader phenomenon, most extensively described in Dan Gillmor's We the Media. The basic production modalities that typify the networked information economy are now being applied to the problem of producing politically relevant information. In 2005, the most visible example of application of the networked information economy--both in its peer-production dimension and more generally by combining a wide range of nonproprietary production models--to the watchdog function of the media is the political blogosphere. The founding myth of the blogosphere's journalistic potency was built on the back of then Senate majority leader Trent Lott. In 2002, Lott had the indiscretion of saying, at the one-hundredth-birthday party of Republican Senator Strom Thurmond, that if Thurmond had won his Dixiecrat presidential campaign, "we wouldn't have had all these problems over all these years."
Thurmond had run on a segregationist campaign, splitting from the Democratic Party in opposition to Harry Truman's early civil rights efforts, as the post-World War II winds began blowing toward the eventual demise of formal, legal racial segregation in the United States. Few positions are taken to be more self-evident in the national public morality of early twenty-first-century America than that formal, state-imposed, racial discrimination is an abomination. And yet, the first few days after the birthday party at which Lott made his statement saw almost no reporting on the statement. ABC News and the Washington Post made small mention of it, but most media outlets reported merely on a congenial salute and farewell celebration of the Senate's oldest and longest-serving member. Things were different in the blogosphere. At first liberal bloggers, and within three days conservative bloggers as well, began to excavate past racist statements by Lott, and to beat the drums calling for his censure or removal as Senate leader. Within about a week, the story surfaced in the mainstream media, became a major embarrassment, and led to Lott's resignation as Senate majority leader about a week later. A careful case study of this event leaves it unclear why the mainstream media initially ignored the story.32 It may have been that the largely social event drew the wrong sort of reporters. It may have been that reporters and editors who depend on major Washington, D.C., players were reluctant to challenge Lott. Perhaps they thought it rude to emphasize this indiscretion, or too upsetting to us all to think of just how close to the surface thoughts that we deem abominable can lurk.
There is little disagreement that the day after the party, the story was picked up and discussed by Marshall on TalkingPoints, as well as by another liberal blogger, Atrios, who apparently got it from a post on Slate's "Chatterbox," which picked it up from ABC News's own The Note, a news summary made available on the television network's Web site. While the mass media largely ignored the story, and the two or three mainstream reporters who tried to write about it were getting little traction, bloggers were collecting more stories about prior instances where Lott's actions tended to suggest support for racist causes. Marshall, for example, found that Lott had filed a 1981 amicus curiae brief in support of Bob Jones University's effort to retain its tax-exempt status. The U.S. government had rescinded that status because the university practiced racial discrimination--such as prohibiting interracial dating. By Monday of the following week, four days after the remarks, conservative bloggers like Glenn Reynolds on Instapundit, Andrew Sullivan, and others were calling for Lott's resignation. It is possible that, absent the blogosphere, the story would still have flared up. There were two or so mainstream reporters still looking into the story. Jesse Jackson had come out within four days of the comment and said Lott should resign as majority leader. Eventually, when the mass media did enter the fray, its coverage clearly dominated the public agenda and its reporters uncovered materials that helped speed Lott's exit. However, given the short news cycle, the lack of initial interest by the media, and the large time lag between the event itself and when the media actually took the subject up, it seems likely that without the intervention of the blogosphere, the story would have died. 
What happened instead is that the cluster of political blogs--starting on the Left but then moving across the Left-Right divide--took up the subject, investigated, wrote opinions, collected links and public interest, and eventually captured enough attention to make the comments a matter of public importance. Free from the need to appear neutral and not to offend readers, and free from the need to keep close working relationships with news subjects, bloggers were able to identify something that grated on their sensibilities, talk about it, dig deeper, and eventually generate a substantial intervention into the public sphere. That intervention still had to pass through the mass media, for we still live in a communications environment heavily based on those media. However, the new source of insight, debate, and eventual condensation of effective public opinion came from within the networked information environment. The point is not to respond to the argument with a litany of anecdotes. The point is that the argument about the commercial media's role as watchdog turns out to be a familiar argument--it is the same argument that was made about software and supercomputers, encyclopedias and immersive entertainment scripts. The answer, too, is by now familiar. Just as the World Wide Web can offer a platform for the emergence of an enormous and effective almanac, just as free software can produce excellent software and peer production can produce a good encyclopedia, so too can peer production produce the public watchdog function. In doing so, clearly the unorganized collection of Internet users lacks some of the basic tools of the mass media: dedicated full-time reporters; contacts with politicians who need media to survive, and therefore cannot always afford to stonewall questions; or public visibility and credibility to back their assertions. 
However, network-based peer production also avoids the inherent conflicts between investigative reporting and the bottom line--its cost, its risk of litigation, its risk of withdrawal of advertising from alienated corporate subjects, and its risk of alienating readers. Building on the wide variation and diversity of knowledge, time, availability, insight, and experience, as well as the vast communications and information resources on hand for almost anyone in advanced economies, we are seeing that the watchdog function too is being peer produced in the networked information economy. Note that while my focus in this chapter has been mostly the organization of public discourse, both the Sinclair and the Diebold case studies also identify characteristics of distributed political action. We see collective action emerging from the convergence of independent individual actions, with no hierarchical control like that of a political party or an organized campaign. There may be some coordination and condensation points--like BoycottSBG.com or blackboxvoting.org. Like other integration platforms in peer-production systems, these condensation points provide a critical function. They do not, however, control the process. One manifestation of distributed coordination for political action is something Howard Rheingold has called "smart mobs"--large collections of individuals who are able to coordinate real-world action through widely distributed information and communications technology. He tells of the "People Power II" revolution in Manila in 2001, where demonstrations to oust then president Estrada were coordinated spontaneously through extensive text messaging.33 Few images in the early twenty-first century can convey this phenomenon more vividly than the demonstrations around the world on February 15, 2003. 
Between six and ten million protesters were reported to have gone to the streets of major cities in about sixty countries in opposition to the American-led invasion of Iraq. There had been no major media campaign leading up to the demonstrations--though there was much media attention to them later. There had been no organizing committee. Instead, there was a network of roughly concordant actions, none controlling the other, all loosely discussing what ought to be done and when. MoveOn.org in the United States provides an example of a coordination platform for a network of politically mobilized activities. It builds on e-mail and Web-based media to communicate opportunities for political action to those likely to be willing and able to take it. Radically distributed, network-based solutions to the problems of political mobilization rely on the same characteristics as networked information production more generally: extensive communications leading to concordant and cooperative patterns of behavior without the introduction of hierarchy or the interposition of payment.
USING NETWORKED COMMUNICATION TO WORK AROUND AUTHORITARIAN CONTROL
The Internet and the networked public sphere offer a different set of potential benefits, and suffer a different set of threats, as a platform for liberation in authoritarian countries. State-controlled mass-media models are highly conducive to authoritarian control. Because they usually rely on a small number of technical and organizational points of control, mass media offer a relatively easy target for capture and control by governments. Successful control of such universally visible media then becomes an important tool of information manipulation, which, in turn, eases the problem of controlling the population. Not surprisingly, capture of the national television and radio stations is invariably an early target of coups and revolutions. The highly distributed networked architecture of the Internet makes it harder to control communications in this way. The case of Radio B92 in Yugoslavia offers an example. B92 was founded in 1989 as an independent radio station. Over the course of the 1990s, it developed a significant independent newsroom broadcast over the station itself, and syndicated through thirty affiliated independent stations. B92 was banned twice after the NATO bombing of Belgrade, in an effort by the Milosevic regime to control information about the war. In each case, however, the station continued to produce programming, and distributed it over the Internet from a server based in Amsterdam. The point is a simple one. Shutting down a broadcast station is easy: there is one transmitter with one antenna, and police can find and hold it. It is much harder to shut down all connections from all reporters to a server and from the server back into the country wherever a computer exists. This is not to say that the Internet will of necessity in the long term lead all authoritarian regimes to collapse. One option open to such regimes is simply to resist Internet use. 
In 2003, Burma, or Myanmar, had 28,000 Internet users out of a population of more than 42 million, or one in fifteen hundred, as compared, for example, to 6 million out of 65 million in neighboring Thailand, or roughly one in eleven. Most countries are not, however, willing to forgo the benefits of connectivity to maintain their control. Iran's
population of 69 million includes 4.3 million Internet users, while China has about 80 million users, second only to the United States in absolute terms, out of a population of 1.3 billion. That is, both China and Iran have a density of Internet users of about one in sixteen.34 Burma's negligible level of Internet availability is a compound effect of low gross domestic product (GDP) per capita and government policies. Some countries with similar GDP levels still have levels of Internet users in the population that are two orders of magnitude higher: Cameroon (1 Internet user for every 27 residents), Moldova (1 in 30), and Mongolia (1 in 55). Even very large poor countries have several times more users per population than Myanmar--like Pakistan (1 in 100), Mauritania (1 in 300), and Bangladesh (1 in 580). Lawrence Solum and Minn Chung outline how Myanmar achieves its high degree of control and low degree of use.35 Myanmar has only one Internet service provider (ISP), owned by the government. The government must authorize anyone who wants to use the Internet or create a Web page within the country. Some of the licensees, like foreign businesses, are apparently permitted and enabled only to send e-mail, while using the Web is limited to security officials who monitor it. With this level of draconian regulation, Myanmar can avoid the liberating effects of the Internet altogether, at the cost of losing all its economic benefits. Few regimes are willing to pay that price. Introducing Internet communications into a society does not, however, immediately and automatically mean that an open, liberal public sphere emerges. The Internet is technically harder to control than mass media. It increases the cost and decreases the efficacy of information control. 
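The "one in N" penetration figures in this discussion follow from simple division. As a quick sanity check, the ratios can be recomputed from the circa-2003 user and population estimates quoted in the text (the figures themselves are the book's, not independently verified here):

```python
# Recompute the Internet-penetration ratios quoted in the text.
# User and population figures are the circa-2003 estimates cited above.
countries = {
    "Myanmar":  (28_000,     42_000_000),    # "one in fifteen hundred"
    "Thailand": (6_000_000,  65_000_000),    # "roughly one in eleven"
    "Iran":     (4_300_000,  69_000_000),    # "about one in sixteen"
    "China":    (80_000_000, 1_300_000_000), # "about one in sixteen"
}

def one_in(users: int, population: int) -> int:
    """Express penetration as 'one user per N residents'."""
    return round(population / users)

ratios = {name: one_in(u, p) for name, (u, p) in countries.items()}
print(ratios)  # {'Myanmar': 1500, 'Thailand': 11, 'Iran': 16, 'China': 16}
```

The computation bears out the text's point: Myanmar's density is roughly two orders of magnitude below that of its neighbors, while China and Iran sit at nearly identical levels despite their very different network architectures.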
However, a regime willing and able to spend enough money and engineering power, and to limit its population's access to the Internet sufficiently, can have substantial success in controlling the flow of information into and out of its country. Solum and Chung describe in detail one of the most extensive and successful of these efforts, the one that has been conducted by China--home to the second-largest population of Internet users in the world, whose policies controlled use of the Internet by two out of every fifteen Internet users in the world in 2003. In China, the government holds a monopoly over all Internet connections going into and out of the country. It either provides or licenses the four national backbones that carry traffic throughout China and connect it to the global network. ISPs that hang off these backbones are licensed, and must provide information about the location and workings of their facilities, as well as comply with a code of conduct. Individual users must register and provide information about their machines, and the many Internet cafes are required to install filtering software that blocks subversive sites. There have been crackdowns on Internet cafes to enforce these requirements. This set of regulations has replicated one aspect of the mass-medium model for the Internet--it has created a potential point of concentration or centralization of information flow that would make it easier to control Internet use. The highly distributed production capabilities of the networked information economy, however, as opposed merely to the distributed carriage capability of the Internet, mean that more must be done at this bottleneck to squelch the flow of information and opinion than would have to be done with mass media. That "more" in China has consisted of an effort to employ automatic filters--some at the level of the cybercafe or the local ISP, some at the level of the national backbone networks. Because these filters operate at different points and are applied unevenly, their efficacy is partial and their performance variable. The most extensive study of the efficacy of these strategies for controlling information flows over the Internet to China was conducted by Jonathan Zittrain and Ben Edelman. From servers within China, they sampled about two hundred thousand Web sites and found that about fifty thousand were unavailable at least once, and close to nineteen thousand were unavailable on two distinct occasions. The blocking patterns seemed to follow mass-media logic--BBC News was consistently unavailable, as CNN and other major news sites often were; the U.S. court system official site was unavailable. However, Web sites that provided similar information--like those that offered access to all court cases but were outside the official system--were available. 
The core Web sites of human rights organizations or of Taiwan and Tibet-related organizations were blocked, and about sixty of the top one hundred results for "Tibet" on Google were blocked. What is also apparent from their study, however, and confirmed by Amnesty International's reports on Internet censorship in China, is that while censorship is significant, it is only partially effective.36 The Amnesty report noted that Chinese users were able to use a variety of techniques to avoid the filtering, such as the use of proxy servers, but even Zittrain and Edelman, apparently testing for filtering as experienced by unsophisticated or compliant Internet users in China, could access many sites that would, on their face, seem potentially destabilizing. This level of censorship may indeed be effective enough for a government negotiating economic and trade expansion with political stability and control. It suggests, however, limits of the ability of even a highly dedicated
government to control the capacity of Internet communications to route around censorship and to make it much easier for determined users to find information they care about, and to disseminate their own information to others. Iran's experience, with a similar level of Internet penetration, emphasizes the difficulty of maintaining control of Internet publication.37 Iran's network emerged from 1993 onward from the university system, quite rapidly complemented by commercial ISPs. Because deployment and use of the Internet preceded its regulation by the government, its architecture is less amenable to centralized filtering and control than China's. Internet access through university accounts and cybercafes appears to be substantial, and until the past three or four years, had operated free of the crackdowns and prison terms suffered by opposition print publications and reporters. The conservative branches of the regime seem to have taken a greater interest in suppressing Internet communications since the publication of imprisoned Ayatollah Montazeri's critique of the foundations of the Islamic state on the Web in December 2000. While the original Web site, montazeri.com, seems to have been eliminated, the site persists as montazeri.ws, using a Western Samoan domain name, as do a number of other Iranian publications. There are now dozens of chat rooms, blogs, and Web sites, and e-mail also seems to be playing an increasing role in the education and organization of an opposition. While the conservative branches of the Iranian state have been clamping down on these forms, and some bloggers and Web site operators have found themselves subject to the same mistreatment as journalists, the efficacy of these efforts to shut down opposition seems to be limited and uneven. Media other than static Web sites present substantially deeper problems for regimes like those of China and Iran. 
Scanning the text of e-mail messages of millions of users who can encrypt their communications with widely available tools creates a much more complex problem. Ephemeral media like chat rooms and writable Web tools allow the content of an Internet communication or Web site to be changed easily and dynamically, so that blocking sites becomes harder, while coordinating moves to new sites to route around blocking becomes easier. At one degree of complexity deeper, the widely distributed architecture of the Net also allows users to build censorship-resistant networks by pooling their own resources. The pioneering example of this approach is Freenet, initially developed in 1999-2000 by Ian Clarke, an Irish programmer fresh out of a degree in computer science and artificial intelligence at Edinburgh University. Now a broader free-software project, Freenet
is a peer-to-peer application specifically designed to be censorship resistant. Unlike the more famous peer-to-peer network developed at the time--Napster--Freenet was not intended to store music files on the hard drives of users. Instead, it stores bits and pieces of publications, and then uses sophisticated algorithms to deliver the documents to whoever seeks them, in encrypted form. This design trades off easy availability for a series of security measures that prevent even the owners of the hard drives on which the data resides--or government agents that search their computers--from knowing what is on their hard drive or from controlling it. As a practical matter, if someone in a country that prohibits certain content but enables Internet connections wants to publish content--say, a Web site or blog--safely, they can inject it into the Freenet system. The content will be encrypted and divided into little bits and pieces that are stored in many different hard drives of participants in the network. No single computer will have all the information, and shutting down any given computer will not make the information unavailable. It will continue to be accessible to anyone running the Freenet client. Freenet indeed appears to be used in China, although the precise scope is hard to determine, as the network is intended to mask the identity and location of both readers and publishers in this system. The point to focus on is not the specifics of Freenet, but the feasibility of constructing user-based censorship-resistant storage and retrieval systems that make it practically impossible for a national censorship system to identify and block subversive content. To conclude, in authoritarian countries, the introduction of Internet communications makes it harder and more costly for governments to control the public sphere. If these governments are willing to forgo the benefits of Internet connectivity, they can avoid this problem. 
If they are not, they find themselves with less control over the public sphere. There are, obviously, other means of more direct repression. However, control over the mass media was, throughout most of the twentieth century, a core tool of repressive governments. It allowed them to manipulate what the masses of their populations knew and believed, and thus limited the portion of the population that the government needed to physically repress to a small and often geographically localized group. The efficacy of these techniques of repression is blunted by adoption of the Internet and the emergence of a networked information economy. Low-cost communications, distributed technical and organizational structure, and ubiquitous presence of dynamic authorship
tools make control over the public sphere difficult, and practically never perfect.
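The chunked, encrypted publication scheme described above for Freenet-style systems can be illustrated in miniature. The sketch below is a toy illustration of the general technique only--not Freenet's actual protocol, algorithms, or API. The XOR keystream stands in for real encryption, peers are plain in-memory dictionaries, and the replication and routing a real deployment needs are omitted:

```python
# Toy sketch of censorship-resistant storage (NOT Freenet's real protocol):
# encrypt a document, split it into chunks, and scatter the chunks across
# peers, so no single machine holds -- or can read -- the whole text.
import hashlib
from itertools import cycle

CHUNK_SIZE = 32  # bytes per stored chunk; real systems use far larger blocks

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from the key (stand-in for real crypto)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR stream cipher: the same call both encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def publish(document: bytes, key: bytes, peers: list) -> list:
    """Encrypt, chunk, and scatter a document; return the chunk IDs needed to fetch it."""
    ciphertext = xor_crypt(document, key)
    chunk_ids, holders = [], cycle(peers)
    for i in range(0, len(ciphertext), CHUNK_SIZE):
        chunk = ciphertext[i:i + CHUNK_SIZE]
        chunk_id = hashlib.sha256(chunk).hexdigest()  # content-addressed name
        next(holders)[chunk_id] = chunk               # store on some participant
        chunk_ids.append(chunk_id)
    return chunk_ids

def retrieve(chunk_ids: list, key: bytes, peers: list) -> bytes:
    """Fetch each chunk from whichever peer holds it, then decrypt the whole."""
    chunks = [next(p[cid] for p in peers if cid in p) for cid in chunk_ids]
    return xor_crypt(b"".join(chunks), key)

# Ten simulated participant machines; none ever sees the plaintext, and no
# single one holds the whole (encrypted) document.
peers = [{} for _ in range(10)]
text = b"subversive pamphlet text " * 8
ids = publish(text, b"shared-secret", peers)
assert retrieve(ids, b"shared-secret", peers) == text
```

Even in this toy form, the censorship-resistance property the text describes is visible: seizing any one machine yields only meaningless encrypted fragments, and blocking it leaves the remaining chunks reachable. Real systems add redundancy, replicating each chunk across several peers so the document survives nodes leaving the network.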
TOWARD A NETWORKED PUBLIC SPHERE
The first generation of statements that the Internet democratizes was correct but imprecise. The Internet does restructure public discourse in ways that give individuals a greater say in their governance than the mass media made possible. The Internet does provide avenues of discourse around the bottlenecks of older media, whether these are held by authoritarian governments or by media owners. But the mechanisms for this change are more complex than those articulated in the past. And these more complex mechanisms respond to the basic critiques that have been raised against the notion that the Internet enhances democracy. Part of what has changed with the Internet is technical infrastructure. Network communications do not offer themselves up as easily for single points of control as did the mass media. While it is possible for authoritarian regimes to try to retain bottlenecks in the Internet, the cost is higher and the efficacy lower than in mass-media-dominated systems. While this does not mean that introduction of the Internet will automatically result in global democratization, it does make the work of authoritarian regimes harder. In liberal democracies, the primary effect of the Internet runs through the emergence of the networked information economy. We are seeing the emergence to much greater significance of nonmarket, individual, and cooperative peer-production efforts to produce universal intake of observations and opinions about the state of the world and what might and ought to be done about it. We are seeing the emergence of filtering, accreditation, and synthesis mechanisms as part of network behavior. These rely on clustering of communities of interest and association and highlighting of certain sites, but offer tremendous redundancy of paths for expression and accreditation. 
These practices leave no single point of failure for discourse: no single point where observations can be squelched or attention commanded--by fiat or with the application of money. Because of these emerging systems, the networked information economy is solving the information overload and discourse fragmentation concerns without reintroducing the distortions of the mass-media model. Peer production, both long-term and organized, as in the case of Slashdot, and ad hoc and dynamically formed, as in the case of blogging or
the Sinclair or Diebold cases, is providing some of the most important functionalities of the media. These efforts provide a watchdog, a source of salient observations regarding matters of public concern, and a platform for discussing the alternatives open to a polity. In the networked information environment, everyone is free to observe, report, question, and debate, not only in principle, but in actual capability. They can do this, if not through their own widely read blog, then through a cycle of mailing lists, collective Web-based media like Slashdot, comments on blogs, or even merely through e-mails to friends who, in turn, have meaningful visibility in a smallish-scale cluster of sites or lists. We are witnessing a fundamental change in how individuals can interact with their democracy and experience their role as citizens. Ideal citizens need not be seen purely as trying to inform themselves about what others have found, so that they can vote intelligently. They need not be limited to reading the opinions of opinion makers and judging them in private conversations. They are no longer constrained to occupy the role of mere readers, viewers, and listeners. They can be, instead, participants in a conversation. Practices that begin to take advantage of these new capabilities shift the locus of content creation from the few professional journalists trolling society for issues and observations, to the people who make up that society. They begin to free the public agenda setting from dependence on the judgments of managers, whose job it is to assure that the maximum number of readers, viewers, and listeners are sold in the market for eyeballs. The agenda thus can be rooted in the life and experience of individual participants in society--in their observations, experiences, and obsessions. The network allows all citizens to change their relationship to the public sphere. They no longer need be consumers and passive spectators. 
They can become creators and primary subjects. It is in this sense that the Internet democratizes.