Tuesday, September 25, 2007

The Future of Content Pt 2

A few weeks ago I rashly agreed with Martin Weller to try an experiment in constructing an academic article on the future of digital content via a blog debate. Martin explains the idea here.

Well, Part 1 is up and it's now my turn.

Next time I do this I'll try and make sure I'm better prepared, in order to avoid what follows coming across as a stream of consciousness, but Martin's post did trigger a whole series of ideas and possible responses in my mind. I'm tempted to launch into addressing the specifics of his essay, and will do some of that, but in an attempt to give the following some semblance of structure I'll try and address his primary points about digital content becoming free and widely available due to the pressures of economics and quality.

The idealist in me would like to believe that he's got it right and that content will become widely and easily accessible, but, like Larry Lessig, I'm a bit of a pessimist, and I fear a large chunk of this response may well end up parroting the message of several of Lessig's writings, with a dose of James Boyle and Yochai Benkler thrown in for good measure. Essentially I don't believe that the economics of information has changed, though networked digital technologies have re-shaped the distribution channels, the markets and the business opportunities. Neither do I think that the technologies alone will lead to a kind of ecological emergence of widely accessible high quality content, especially since the market forces that Martin alludes to are distorted by an existing set of information market dynamics dominated by small groups of established powerful interests, such as the entertainment, software and pharmaceutical giants.

A bit of recent history

In the early days of the general public's awakening consciousness of the World Wide Web, John Perry Barlow (another idealist) issued a Declaration of the Independence of Cyberspace.

"Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear... Cyberspace does not lie within your borders...

We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before."

The Web and the Net were to become a self-organising free libertarian utopia where people could escape from the tyrannies of the real world - the anarchic emergence of freedom and democracy through technology. Needless to say it didn't happen, because the people who use the technologies come under the jurisdiction of real world governments. Actually, around that time the notion that the Net would spontaneously lead to the inexorable spread of freedom around the world, something that democratic governments had spectacularly failed to achieve, became something of a mantra amongst technogeeks. Barlow was saying that geography didn't matter on the Net - someone in the UK or China could access a document in the US, where free speech was nominally protected by the First Amendment to the Constitution. The comparatively low cost of access to speakers and listeners alike "guarantees" that speech in the most liberal regimes is available everywhere, even in the most restrictive regimes. Local laws and borders could be bypassed by the technology, and the norms of cyberspace would converge towards the norms of the most liberal places connected to it.

Barlow's meme has often been taken to mean that we should rely on technology and geography, and NOT laws, judges or politicians, to protect freedom of speech. Well, government censorship of the Net in places like Saudi Arabia and large tech organisations' responsibility for fingering journalists for arrest in China have provided stark indications of totalitarian governments' ability to incorporate the existence of such technologies into their operations.

The idea that the technologies could not be regulated was distilled into the folklore of the Net partly in the form of three well known aphorisms that James Boyle labelled "a kind of Internet Holy Trinity" (faith in the trinity being a condition of acceptance into the community). The sayings were, and indeed are still repeated and believed today in spite of the contrary developments of the past decade:

1/ In cyberspace the First Amendment is local ordinance (John Perry Barlow again)

2/ The Net interprets censorship as damage and routes around it (John Gilmore)

3/ Information wants to be free (Stewart Brand)

I guess I've already tackled Barlow, but John Gilmore's "The net interprets censorship as damage and routes around it" was both a terrific sound-bite and, initially at least, technologically accurate. The Internet's original distributed architecture and packet switching were built around the problem of getting information packets delivered regardless of blockages, holes and malfunctions.
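By way of a toy illustration (and it is only that - real Internet routing protocols such as BGP are far more sophisticated, and all the names below are made up), here are a few lines of Python showing the basic idea: if one node in a distributed network is blocked or damaged, traffic simply takes another path, and only a total blockade stops it.

from collections import deque

# A toy network: each node simply lists its neighbours.
# (Purely illustrative - nothing like real Internet topology or routing.)
network = {
    "sender":   ["router_a", "router_b"],
    "router_a": ["sender", "receiver"],
    "router_b": ["sender", "receiver"],
    "receiver": ["router_a", "router_b"],
}

def find_path(net, start, end, blocked=frozenset()):
    """Breadth-first search for any path from start to end, avoiding blocked nodes."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == end:
            return path
        for neighbour in net.get(node, []):
            if neighbour not in seen and neighbour not in blocked:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no route at all

print(find_path(network, "sender", "receiver"))
# ['sender', 'router_a', 'receiver']
print(find_path(network, "sender", "receiver", blocked={"router_a"}))
# ['sender', 'router_b', 'receiver'] - the "damage" is routed around
print(find_path(network, "sender", "receiver", blocked={"router_a", "router_b"}))
# None - only blocking every path stops the traffic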

The Network was designed from the outset to be neutral and to operate in a way that would transcend its own inherent unreliability. The intelligence in the network would be kept at the ends, or the nodes, and the network itself would be relatively simple. Arguably, this end to end principle of network architecture is the most important factor underlying the explosion of innovation facilitated by the Internet. But remember end to end is a design principle and it doesn't hold in modern broadband networks. Succeeding generations of network technology incorporate more and more 'intelligence' into the heart of the network, facilitating control of the network by its owners.

The Net neutrality debate has been rumbling in the US for a few years now and, if recent statements opposing net neutrality by the Department of Justice are anything to go by, it looks as though regulators on that side of the pond are leaning heavily towards favouring the network owners' perspective in their approach to regulation.

John Naughton tells a wonderful story in his book, A Brief History of the Future, about when Paul Baran wanted to build a packet switching network in the 1960s. Jack Osterman, a senior executive at AT&T, the telecommunications monopolist in the US at the time, said "First, it can’t possibly work, and if it did, damned if we are going to allow the creation of a competitor to ourselves." Look at that word "allow". If the network owners control the networks, who uses them and how they get to use them then the network owners have a veto on innovation, which is the story told by Larry Lessig in the Future of Ideas.

Lessig also argues persuasively that the reason for the explosion in innovation facilitated by the Net was an accident of legal regulation - the Telecommunications Act of 1996 working together with the end to end architecture of the Net to prevent discrimination by network owners - largely the telecoms companies - against innovators. The law and the technology disabled control. With broadband networks avoiding the end to end principle and the regulators facilitating network owner-operator control, this has worrying implications for the future of freely accessible content postulated by Martin.

Now let's get back to Stewart Brand's declaration that "information wants to be free" - the implication being that free is its natural state. It is an attractive notion, supported by weighty historical figures like Thomas Babington Macaulay, in his two speeches on copyright to the House of Commons in the 1840s, and Thomas Jefferson, widely quoted by cyberlibertarians:

"If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of everyone, and the receiver cannot dispossess himself of it. Its peculiar character too is that no one possess the less, because every other possess the whole of it. He who receives an idea from me receives instruction himself without lessening mine; as he who lights his taper at mine receives light without darkening me. That ideas should spread from one to another over the globe, for the moral and mutual instruction of man and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them like fire expansible over all space, without lessening their density at any point."

Sterling stuff, and there's light at the end of the tunnel for Martin's accessible content future again. But people conveniently forget that when Brand first said information wants to be free, the complete quote, from that hackers' conference in 1984, was:

"On the one hand information wants to be expensive, because it's so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other."

And unfortunately for those who would like content to be free, the most powerful actors in the decision making process surrounding the deployment and regulation of the networks that will distribute the content are those who want it to support their profit margins.

Architecture

With apologies to Larry and those immersed in Lessigisms, I'm going to start parroting Lessig closely here. In Code he offers a simple but powerful model explaining the four regulators of behaviour - social norms, market forces, law and technology architecture. In The Future of Ideas and Free Culture he goes on to tell the tale of how established vested interests in the content industries reacted to the innovations of the Internet age - MP3, the Rio, Napster and peer to peer networking - by launching a counter-revolution through law and architecture to protect their interests. Copyright laws now cover more things for longer and with hugely more severe penalties for infringement than ever before. DRM digital locks were introduced on content and it became illegal to bypass these locks, or build tools to bypass them, or tell someone where they might possibly find a tool to bypass them. You could get jailed in the US, like Dmitry Sklyarov, for writing a program in Russia (where it was legal and in fact required in order to facilitate the backing up of digital content) to bypass the digital locks on an Adobe eBook.

And remember, digital content comes with a licence written by a lawyer in lawyerland - so the Adobe eBook first edition of Alice's Adventures in Wonderland came with a licence which proclaimed "This book may not be read aloud". Bill Gates won't sign a Windows CD because the user doesn't own the software; she is just licensed to use it.

Established interests with established, well paid lobbyists and politicians can influence the development of regulations in their favour, as the story of intellectual property laws has demonstrated in the internet age. And content companies have had a surprisingly disproportionate influence (amazing considering the relative sizes of the industries - with the tech folk dwarfing the entertainment folk by a factor of ten economically) on the technology industry's outputs, convincing the Microsofts, Apples and Sonys of the world (of course Sony is slightly schizophrenic, having a foot in both camps) to build disabling DRM and surveillance technologies into their products.

As Lessig says, the content industries, wielding law and technology, have fought a bitter counter-revolution against Napster, DeCSS, CPHack, My.MP3, Grokster, Eldred et al and won, at least in the courts, every time. Though millions of copyrighted works are still circulating online, it is an increasingly risky business to engage in such infringement, given the industrial scale operations the content companies now have in place monitoring this activity and targeting [tens of thousands] of people for legal action.

At the same time, the broad-reaching effects of these changes in the intellectual property arena have been completely off the radar of the people they are likely to affect, like educators (at least until the panic over the Blackboard patent last summer).

Lessig Again

The Internet provides a particularly stark example of the interaction of Lessig's four regulating forces - law, norms, architecture and markets. Depending upon its design - its architecture - it can support or undermine law, social norms and market forces.

The Internet is an entirely artificially created entity

The architecture of the Internet or cyberspace is programmed in the code, the logical layer and protocols that determine how it operates

That code and architecture

- Embeds certain values
- Supports/allows certain behaviours (how many times have you been on the line to a service, e.g. a bank or utility, only to be faced, when you do finally get a real person, with the line "the computer won't let me!")
- Sets the terms according to which life in cyberspace will be lived

- Just like the laws of nature set the terms on which life is lived in the real world.

The Net, then, is an entirely artificially created entity, its architecture programmed in the code and protocols that determine how it operates. It had an end to end DESIGN, and complementary regulations, which made control of activity on that network difficult. And because it is an artificial creation, the design could change. It did change - the architecture was re-programmed (or at least the architecture of the broadband networks the Net has migrated to is substantially different to the TCP/IP protocols piggybacking on the phone networks) - as did the laws.

The Original Net Made Control Difficult

Geeks, and wannabe geeks like me who try to bridge the geek-real world divide, liked to think about the architecture of the Net as a given - the distributed architecture of packet switching, TCP/IP, end to end - and that it couldn't be changed, just like the laws of nature. Our experience, our values, our perception of the way the Internet was, was fixed, and we could not see beyond that.

So as Lessig said, we recited slogans -

the net treats censorship as damage,

the First Amendment is a local ordinance,

information wants to be free

you can't regulate the Net

But this was not a view about nature - it was just a report about a particular design, a design cyberspace had around the time it was coming to the public's attention.

This initial design did disable control and it did make regulation difficult.

It disabled control by governments - which, according to libertarians, theoretically increases liberty

Supported by law, it also disabled control by commerce, e.g. network owners (phone companies) and content companies (entertainment, publishing and software) - thereby facilitating innovation and competition.

The architecture protected free speech - you could easily say what you wanted without that speech being controlled by others. China could not censor news about China. News of terror in Bosnia could flow freely outside state borders.

The architecture protected privacy - it supported relaying technologies and made it relatively easy to hide who you were. Life could be lived anonymously.

The architecture protected free flow of information - text, music and pictures could be copied perfectly and for free and distributed anywhere on the Net, theoretically anywhere in the world.

It also protected the individual against local state regulation - a state that banned gambling within its borders couldn't prevent people from gambling on the Net. Geography didn't matter.

Each of these "liberties" depended on certain design features of the early Net:

1. Users were not always identifiable
2. Data could not always be classified

Hence data access could not always be controlled. This was a nuisance for regulators and content owners, so it changed:

1. It became easier to ID someone - who they are and where they are
2. Content became identifiable, traceable and more easily classified

The controlling technologies are private (Cisco's new generation intelligent routers deployed by network owners) - tools commerce has built to

• Know who the customer is
• Control more easily what the customer does
  - where she goes (AOL would like you to stick with AOL content)
  - what she does with the content she gets

and laws commerce has bought, like the DMCA of 1998 and the intellectual property rights enforcement directive of 2004, which do likewise.
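Just to make that concrete, here is a purely hypothetical sketch (not any real ISP's system - every name and rule in it is invented) of the sort of thing an 'intelligent' network can do once subscribers and traffic can be identified and classified: prioritise a partner's content, block a rival's, throttle peer to peer, and log it all against a named customer.

# Hypothetical policy table: (destination, traffic type, action).
# "*" is a wildcard. None of these names refer to real services.
POLICY = [
    ("partner-portal.example", "video", "prioritise"),
    ("rival-tv.example",       "video", "block"),
    ("*",                      "p2p",   "throttle"),
    ("*",                      "*",     "allow"),
]

def decide(subscriber_id, destination, traffic_type):
    """Return the first matching action for this identified subscriber's traffic."""
    for dest, ttype, action in POLICY:
        if dest in ("*", destination) and ttype in ("*", traffic_type):
            # The decision is logged against a known, named subscriber -
            # the point being that nothing here is anonymous any more.
            print(subscriber_id, traffic_type, "to", destination, "->", action)
            return action
    return "allow"

decide("subscriber-42", "rival-tv.example", "video")      # block
decide("subscriber-42", "somewhere-else.example", "p2p")  # throttle
decide("subscriber-42", "somewhere-else.example", "web")  # allow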

Speech became less free - Chinese bloggers identified by Yahoo got jailed.

Privacy/anonymity became less secure - ISPs are the chokepoints. They are obliged to retain your traffic data and identify alleged transgressors to the content industries and government authorities.

Content could begin to flow less freely.

ICraveTV, Lessig and Code

Lessig tells the story of ICraveTV in Code.

A place of liberty and explosive innovation becomes something else.

For example, ICraveTV, a Canadian company, were allowed under Canadian law to capture broadcast signals and re-broadcast them in any medium. They used the Net. Free TV is not allowed in the US though; under US law ICraveTV should have negotiated with the original broadcaster. They used filters to try and keep Americans away from their free TV, but the Net made this difficult - it sees filters as damage and routes around them.

Hollywood didn't like Americans having free TV and sued, asking a US court to shut down this Canadian company.

What if a German court said Mein Kampf could not be sold by Amazon.com anywhere, because a German might get hold of it and it's illegal in Germany?

OR a French court said a US auction site could not sell Nazi memorabilia because this was illegal in France

OR a Chinese court said a US ISP should be shut down because it broadcast information illegal in China

The US would be up in arms, waving the HOLY of HOLIES - the First Amendment - because such things violate free speech.

Free speech did not register in the Pittsburgh court decision in the ICraveTV case. The judge ordered the site closed until they could prove free TV from Canada could not be seen by even one American.

ICraveTV promised to try and develop filters to zone cyberspace based on geography. ID technologies make zoning possible.

Incidentally, the French judge in the Yahoo v France Nazi memorabilia case, so vilified by the US media for interfering with American rights to free speech, only wanted the filters to be 80-90% effective.

The ICraveTV judge wanted 100% effectiveness and to hell with Canadian rights to free speech.
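To give a flavour of how that kind of geographic zoning works in practice, here is a minimal, purely illustrative sketch (the IP ranges and country assignments are invented; real systems rely on large commercial geolocation databases): the visitor's IP address is mapped to a country and access is allowed or blocked accordingly. The last example hints at why no such filter is ever 100% effective - unknown or relayed addresses slip through.

import ipaddress

# Invented IP-block-to-country table. Real geolocation databases map
# millions of allocated address blocks, and are still never perfect.
GEO_TABLE = {
    ipaddress.ip_network("198.51.100.0/24"): "US",
    ipaddress.ip_network("203.0.113.0/24"):  "CA",
}

def country_of(ip):
    """Map an IP address to a country code, or UNKNOWN if it isn't in the table."""
    addr = ipaddress.ip_address(ip)
    for block, country in GEO_TABLE.items():
        if addr in block:
            return country
    return "UNKNOWN"

def allow_stream(ip, blocked_countries=frozenset({"US"})):
    """Allow the stream unless the visitor appears to come from a blocked country."""
    return country_of(ip) not in blocked_countries

print(allow_stream("203.0.113.5"))   # True  - looks Canadian, so allowed
print(allow_stream("198.51.100.7"))  # False - looks American, so blocked
print(allow_stream("192.0.2.9"))     # True  - unknown origin slips past the filter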

The US is very vocal about championing the cause of free speech on the Net and in the real world.

When it comes to issues of national security, though - which of course copyright is in the US - the values of the free flow of information fall away.

The lesson of the ICraveTV and Yahoo cases is that there is a push to zone the Net to allow local rules to be imposed.

We can see this with DVD players as well. A DVD movie bought in the US will probably not play on a DVD player bought in the UK. A Norwegian teenager, Jon Johansen, faced a possible jail sentence in a case that took five years to resolve in Norway, due to pressure on the Norwegian authorities from the US movie industry. What was his crime? He bought a DVD in France and got frustrated at not being able to watch it on his Linux computer. So he and two others wrote a little utility programme to bypass the copy protection system on the DVD he owned, so that he could watch it on his computer. He then made this utility available on the Net, which is a criminal offence under a US copyright law called the DMCA. And incidentally, it irritates the hell out of me that my DVD player disables the fast forward button when draconian copyright notices come up at the start of my DVD movies.

The DMCA, the EU's copyright directive, the intellectual property rights enforcement directive, Napster and P2P, MP3, CPHack, DeCSS, Apple v RealNetworks, Sklyarov, Felten and SDMI, and Apple's iPhone DRM are all fascinating stories in their own right (which I've ranted about here over the years), even without the wider impact they have on Internet regulation.

As BigCos increasingly take charge of the network, download speeds are typically 8 times upload speeds and the ratio will get worse. It may be that individuals and small independent content companies will find it more and more difficult to get through. And just to show that AT&T has evolved in forty years, when asked about open access to their network, Daniel Somers, head of AT&T's cable division, said they didn't spend $56 billion on a cable network "just to have the blood sucked out of our veins."

But I've already got waaaaaaay too sidetracked on what was supposed to be the set-up/background for my paper, so I'm just going to briefly address Martin's two points about economics and quality.

Economics and Quality


The economics of information flows really haven't changed, though the move from atoms to bits in distribution technologies means that the markets have fundamentally changed. As Benkler says:

"Social production is reshaping markets while at the same time offering new opportunities to enhance individual freedom, cultural diversity, political discourse and justice."

But these outcomes are not necessarily going to emerge automatically from Web 2.0, any more than they emerged from the end to end Internet 1.0, since there is a powerful campaign by established commercial interests in the industrial information game - big content, pharma and software - to protect themselves. And democratic consumerism will no more be an inevitable emergent property of Web 2.0 than democracy and freedom were of its predecessor.

Martin points to Weinberger's arguments about our changing relationship with content - i.e. that iTunes has led us to realise that the natural unit of musical interest is the single track - but I'm not sure I buy that specific argument. The dynamics are a lot more complex and varied. Communities of shared interest have begun to create their own playlists by mixing tracks together. Personally I like the album package, and find significant value in the job the music labels did in aggregating songs or artists together in one package. I also find it irritating that my son's MP3 player runs on a different piece of software to my own player, and it is a tedious, fiddly, technical and administrative exercise aggregating the songs together in the way I like.

Which brings me to Tim O'Reilly's Piracy is Progressive Taxation, and Other Thoughts on the Evolution of Online Distribution, which offers seven lessons for the media revolution that, though outlined in the relatively ancient history of 2002, still, I think, hold true today:

Lesson 1: Obscurity is a far greater threat to authors and creative artists than piracy.

Lesson 2: Piracy is progressive taxation

Lesson 3: Customers want to do the right thing, if they can.

Lesson 4: Shoplifting is a bigger threat than piracy.

Lesson 5: File sharing networks don't threaten book, music, or film publishing. They threaten existing publishers.

Lesson 6: "Free" is eventually replaced by a higher-quality paid service

Lesson 7: There's more than one way to do it.

The key ones in the current context are lessons 5, 6 and 7. Yes, the markets change. Yes, the distribution channels have changed, and yes, the technologies have augmented the activities of the traditional content industries in ways they would never have dreamed of a mere ten years ago. As O'Reilly says:

"As Jared Diamond points out in his book Guns, Germs, and Steel, mathematics is behind the rise of all complex social organization.

There is nothing in technology that changes the fundamental dynamic by which millions of potentially fungible products reach millions of potential consumers. The means by which aggregation and selection are made may change with technology, but the need for aggregation and selection will not. Google's use of implicit peer recommendation in its page rankings plays much the same role as the large retailers' use of detailed sell-through data to help them select their offerings.

The question before us is not whether technologies such as peer-to-peer file sharing will undermine the role of the creative artist or the publisher, but how creative artists can leverage new technologies to increase the visibility of their work. For publishers, the question is whether they will understand how to perform their role in the new medium before someone else does. Publishing is an ecological niche; new publishers will rush in to fill it if the old ones fail to do so...

New media have historically not replaced but rather augmented and expanded existing media marketplaces, at least in the short term. Opportunities exist to arbitrage between the new distribution medium and the old, as, for instance, the rise of file sharing networks has helped to fuel the trading of records and CDs (unavailable through normal recording industry channels) on eBay.

Over time, it may be that online music publishing services will replace CDs and other physical distribution media, much as recorded music relegated sheet music publishers to a niche and, for many, made household pianos a nostalgic affectation rather than the home entertainment center. But the role of the artist and the music publisher will remain. The question then, is not the death of book publishing, music publishing, or film production, but rather one of who will be the publishers."

Open source works with software production. I would be delighted if sustainable business models for open content were to emerge from Web 2.0 in the face of the power dynamics at the heart of the established industrial information economy. I suspect we are a long way from the future that Martin is hoping for, though, as I have said before, I think he is partly right in his thesis about some of the forces pressing for an accessible information future. (Though Adam Smith and Garrett Hardin [Tragedy of the Commons] might dispute the notion that individual participants in Web 2.0 activities, unlike Dawkins's selfish gene, would behave in the best interests of the system as a whole, thereby speeding the evolutionary process.) But O'Reilly's publishers, whatever shape or form they take, will need resources, resources which folk like Lessig, Boyle, Drahos and Benkler lead me to conclude will come via pay per view rather than free content augmented by parallel revenue flows.

5 comments:

Anonymous said...

Like you Ray, I guess I'm a pessimist. I don't believe premium content will become free either. Content which is cheap to produce as bait for an advertising platform maybe, premium content, no. When Gutenberg invented the printing press, did books become free? No. Cheaper in human terms (no further need to found a monastery to produce books), and more widely distributed, yes. Who suffered post-Gutenberg? The Church, arguably. Who gained? Universities, arguably. In other words, the existing establishment had to loosen its grip on information and the resulting income flow. But the economics of publishing have clearly changed with new technologies. And the existing establishment (publishers, universities?) will experience the same effects as the last time this shift happened (which was not Gutenberg but the invention of broadcasting).

Ray Corrigan said...

The church post-Gutenberg is certainly an interesting case. Initially the printing press was welcomed (at least by some in the church hierarchy, though there were the usual complement of soothsayers of doom) as a means to spread Roman Catholic doctrine, but they quickly realised that people were reading and interpreting the Bible in non-standard ways. Cue the increased energy of the Inquisition. Then Martin Luther comes along declaring the Bible (rather than the papacy) the sole source of Christian authority and all hell breaks loose.
Broadcasting is another really interesting one, and the story of Edwin Howard Armstrong and FM radio (of which Tom Lewis's 'Empire of the Air' is one of the best written accounts) could be taken to either support or detract from Martin's point about quality, depending on how you read it. It's a classic tale of established interests suppressing a better, substitutable technology until they could control that new technology.

AJ Cann said...

Broadcasting quality? Nothing to add to "Radio Radio", Elvis Costello, 1978: http://en.wikipedia.org/wiki/Radio_Radio_%28song%29

Ray Corrigan said...

He certainly upset the labels and the radio establishment with that one. :-)

Anonymous said...

An excellent response Ray (much better researched and thought out than mine) - I've responded to your piece over at my blog: http://nogoodreason.typepad.co.uk/no_good_reason/2007/09/future-of-conte.html
Martin