Friday, September 28, 2007
Who's up for change
Harvard bookshop overdoes the IP claims
"With the exception of the IP Mafia and its supporters/enablers, just about everyone recognizes that folks sometimes use copyright as an anticompetitive tool or tool for intimidation rather than to “promote the Progress of Science and useful Arts” by ensuring creators get compensated. While we can argue about how often this occurs or how much it genuinely chills free speech, you occasionally run into a case of copyright abuse so utterly blatant that it cries out for legal action — or at the very least a healthy dose of ridicule.
So, as a loyal Princeton alum (Go Tigers!), it gives me great pleasure to hold up for public opprobrium, general mockery, and possible legislative organizing the ridiculous claim by the Harvard University Bookstore, aka the Harvard Coop. As reported by the Harvard Crimson (Official motto: “No, we do not tell you where in the YaRd you can PaRk your damn caR”), the Harvard Coop claims it can eject students that write down the prices and ISBN numbers of text books. Why? According to Harvard Coop President Jerry P Murphy, “the Coop considers that information the Coop’s intellectual property.”"
See the original for links and further details. But throwing students out of a bookshop for writing down prices and ISBNs? Well I have been banging on about the perils of IP coming to a university campus near you for a very long time now...
The Future of Content Pt4
"(1) A cashless and cacheless society...
(2) Personalised filtering...
(5) Those who think they control information will get a wake up call...
(7) People will become the user interface...
(8) Technology will diversify not integrate?"
It's been an interesting exercise in sharing some thoughts openly and in a semi-structured fashion with colleagues. AJ Cann found the postings too long for what he would usually expect of blogs and reckons the power of blogging lies in microchunking content. Certainly blogs are terrific for just that, especially when you do most of your blog reading via an RSS news reader as I do. But I share Martin's view of the value of variety in blogs. The kinds of blogs I read serve broadly four different functions for me -
- News/information/serendipity/professional currency - e.g. John Naughton
- Commentary (both general and in-depth) and techno-gadgetry - e.g. Tony Hirst
- Pointers to very specific areas of interest e.g. EFF Deeplinks or Scotusblog
- Topical academic discourse (not subject to the usual delays of the publishing cycle) e.g. Balkanization or Becker-Posner
None of the blogs mentioned reside exclusively in the category I've noted them under and the beauty of the format is they can and do move across the boundaries where necessary. In any case Martin has drawn some conclusions and also has some general thoughts on the experiment, which are well worth a read.
Thursday, September 27, 2007
Slesinger loses latest Pooh round against Disney
Dutch abandon evoting
Bertie Ahern spent 60 million euro on ostensibly the same e-voting system a few years ago but never got to deploy it. The Taoiseach has regularly complained loudly about the luddites who got in his way and how the world would laugh at the Irish with our quaint old voting pencils. So I don't suppose this news will help him to change his mind about the machines and scrap them... that is if the news ever reaches him. He has recently been rather preoccupied with his cross-examination at the hands of the tribunal looking into his personal finances.
Future of content part 3 response
Yet the OU has now opened up its content, and for some good reasons. Some of those reasons are complex, but underneath it I think there are two principles, one altruistic and one more selfish:
Altruistic reason for open content: there is more value in having the world see this resource than if we store it away or try to sell it
Selfish reason for open content: if we don’t do it someone else will, and then what are we left with
I think there is another, more fundamental reason for the need for open educational content. The outputs of content production are the inputs to future creative endeavours and to individual and collective learning processes. If these outputs get locked up and ghettoised behind pay-per-view virtual walls, then Spider Robinson's timeless Melancholy Elephants tells of the consequences, for future creators in particular, far more elegantly than I can.
On a separate issue relating to Wikipedia, which both Martin and Patrick refer to, I admit to being one of the original skeptics who laughed and said it couldn't possibly work. An encyclopedia with apparently no quality control and no form of sensible editorial process - if ever there was going to be definitive proof positive of millions of monkeys with keyboards being unable to reconstruct the works of Shakespeare, this was it. Yet Wikipedia did work, and it worked because of fiercely dedicated communities of interest that sprang up around the collections of entries. If someone tried to pollute one of these entries with jokes or distortions they would quickly get corrected by the interested communities/individuals who monitored them closely. But Wikipedia grew, and the real and the regulatory world came calling in the form of intellectual property and defamation laws and the tendency for certain entities to arrange the favourable adjustment of their own entries or those related to them. The Wikipedia folk were obliged to introduce an editorial control process and not be quite so open.
By coincidence, Wendy Seltzer is giving two lectures at Cornell University today on "Protecting the University from Copyright Bullies" (3pm there, 8pm in the UK) and "Righting the Copyright Balance" (7.30pm there, 12.30am in the UK), outlining some of the concerns I have about the influence of the entertainment industry on the future of content. Thanks to Fernando Barrio, whose blog I added to my newsreader after his entertaining presentation at GIKII 2 last week, for the pointer.
The Future of Content Pt 2 follow up
I just wanted to add a few comments initially relating to Martin's piece. He says
But equally they [governments like China and Saudi Arabia] have really struggled to control information. Sure they’ve tried and had some success but this has had more to do with the immaturity of communication technologies, not the increasing ability of governments to censor information. The general trend is towards more freedom of information and as mobile devices and broadcasting/networking equipment become more powerful this will only increase.
I'm tempted to get into a long note here about civil rights and the Net but I'll restrict it to three points. Totalitarian regimes rule through fear - brutalise a sufficient proportion of the population and the rest fall into line. They don't require total control, just enough. If the great firewall of China is 80% effective (and regular readers of this blog will know how much I detest filter software) it is effective enough.
The second point is that we basically have "zero privacy" (as Scott McNealy said) on the Net. The way to tackle totalitarian regimes is to not be afraid (as is the way to tackle terrorism but now I'm getting even more side-tracked) but this is a risky business in these places, involving threat to life and liberty. Dissident bloggers are relatively easy to find - ISPs and tech cos are the choke points with the necessary information to hand over to the authorities - and as Cardinal Richelieu said, he only needed "six lines written by the most honest man" to present sufficient evidence to hang him. I wrote on this blog last year
Yahoo! chief, Jerry Yang, "feels horrible" that Yahoo! helped the Chinese authorities to jail journalists. When asked about the arrests he said they "are never things you go home and feel good about. We feel horrible about that...We have no way of preventing that beforehand....If you want to do business there you have to comply."
Badly damaging someone's life is merely the price of doing business. The heart of the business process is amoral hence the key rule is caveat emptor, where the 'emptor' encompasses everyone who comes into contact with that business in any way. It's kind of ironic that doing business fundamentally depends simultaneously on trust and lack of trust.
I doubt Mr Yang will get much sympathy over his horrible feelings but my question would be at what point does he believe the price of compliance becomes too high?
Thirdly, remember a few years ago when a handful of fuel protesters with mobile phones blockaded fuel depots and brought the UK to a grinding halt? Well the law has been changed and if they try it again they could very well get picked up by the police and disappear for 28 days (or longer if the government follow through with intentions to extend the term of detention without charge).
Moving on to the commerce end of things, Martin says
Sure the respective industries want to control it, but the point is they can’t anymore. This is because technologically they’ll always be beaten, and also because the online world allows new market players who will offer an alternative, often a free one. They have to adapt or die – look at photography: Kodak might have wanted to control the print process but ultimately digital cameras and Flickr meant they couldn’t.
Absolutely spot on - commerce has to adapt to the new technological context. Google didn't exist a little over a decade ago and it's almost hard to believe Microsoft is as young a company as it is. And sure, the established commercial landscape and the players in any market change over time, but that amoral focus on the bottom line doesn't change, and the commercial pressures to stick up barriers to new technological contexts - just long enough to underpin, nurture and grow profit margins and leverage power in the new reality - never go away.
Martin also points out that just because governments and commerce want to control the Net doesn't mean they can.
I heard Robert Cailliau, a colleague of Tim Berners-Lee, once argue that we were reaching the edges of our understanding of the complexity of the Net. He joked that maybe it was using us to get built.
Since I've just published a book, one of the underlying themes of which is that we are pretty poor as a species at deploying, securing, controlling and regulating large information systems, I can only agree wholeheartedly with Martin and Robert Cailliau on this one.
But again what you've got to understand here - and it is difficult to emphasise this sometimes without sounding like a raving lunatic - is that the people (and their lawyers and the politicians they hold undue influence over) who run the content companies and other IP based industries live in a completely different universe to the rest of us, with regard to their perspective on what the law ought to be. And they have the commercial, financial and political clout to ensure it is shaped in their favour. (Take a look at Jessica Litman's and Peter Drahos's terrific books Digital Copyright and Information Feudalism for engaging narratives on how these processes unfold in the US and international arenas)
The late Jack Valenti of the MPAA said the VCR's relationship to the US movie industry was like that of the Boston strangler to a woman home alone. Jamie Kellner, head of Turner Broadcasting, believes TV viewers have a contract with the network to "watch the spots [adverts]" and that recording programmes for later viewing and then fast forwarding through the ads is theft. Why do you think the fast forward button on my DVD player is disabled when I want to skip those copyright notices at the beginning of a DVD? The US movie industry runs the DVD Copy Control Association (DVDCCA), which licenses tech companies to manufacture approved DVD machines with approved DRM mechanisms to play their DVDs.
I don't see how digital content shrinkwrap and clickwrap boilerplate licences, whether for software or any other product, which effectively say (and I paraphrase)-
"We cannot ever, under any circumstances, be held liable, even if proven beyond any reasonable doubt to be totally negligent when our product destroyed your life."
- could withstand appropriate scrutiny under the Unfair Contract Terms Act 1977 or under the whole branch of tort law governing civil liability, which applies to normal products. But they do, and courts repeatedly uphold them, and we continue to buy generation after generation of poorly secured software, for example, under purchasing conditions totally stacked in favour of the vendors.
The regulatory instruments and frameworks surrounding modern digital technologies are quite surreal. Technologists and educators need to be a lot more tuned into them than we are, if we are firstly to begin to understand them and secondly start to do something about imposing some semblance of reality on them in order to at least give Martin's vision of an accessible, sustainable, open content future a small glint of a fighting chance.
Tuesday, September 25, 2007
The Future of Content Pt 2
Well Part 1 is up and it's now my turn.
Next time I do this I'll try and make sure I'm better prepared, in order to avoid what follows coming across as a stream of consciousness, but Martin's post did trigger a whole series of ideas and possible responses in my mind. I'm tempted to launch into addressing the specifics of his essay, and will do some of that, but in an attempt to give the following some semblance of structure I'll try and address his primary points about digital content becoming free and widely available due to the pressures of economics and quality.
The idealist in me would like to believe that he's got it right that content will become widely and easily accessible but, like Larry Lessig, I'm a bit of a pessimist and I fear a large chunk of this response may well end up parroting the message of several of Lessig's writings, with a dose of James Boyle and Yochai Benkler thrown in for good measure. Essentially I don't believe that the economics of information has changed, though networked digital technologies have re-shaped the distribution channels, the markets and business opportunities. Neither do I think that the technologies alone will lead to a kind of ecological emergence of widely accessible high quality content, especially since the market forces that Martin alludes to are distorted by an existing set of information market dynamics dominated by small groups of established powerful interests, such as the entertainment, software and pharmaceutical giants.
A bit of recent history
In the early days of the general public's awakening consciousness of the World Wide Web, John Perry Barlow (another idealist) issued a Declaration of the Independence of Cyberspace.
"Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear... Cyberspace does not lie within your borders...
We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before."
The Web and the Net were to become a self-organising free libertarian utopia where people could escape from the tyrannies of the real world - the anarchic emergence of freedom and democracy through technology. Needless to say it didn't happen, because the people that use the technologies come under the jurisdiction of real-world governments. Around that time the notion that the Net would spontaneously lead to the inexorable spread of freedom around the world, something that democratic governments had spectacularly failed to achieve, became something of a mantra amongst technogeeks. Barlow was saying that geography didn't matter on the Net - someone in the UK or China could access a document in the US, where free speech was nominally protected by the First Amendment to the Constitution. The comparatively low cost of access to speakers and listeners alike "guarantees" that speech in the most liberal regimes is available everywhere, even in the most restrictive regimes. Local laws and borders can be bypassed by the technology and the norms of cyberspace would converge towards the norms of the most liberal places connected to it.
Barlow's meme has often been taken to mean that we should rely on technology and geography and NOT laws, judges or politicians, to protect freedom of speech. Well, government censorship of the Net in places like Saudi Arabia and large tech organisations' responsibility for fingering journalists for arrest in China have provided stark indications of totalitarian governments' ability to incorporate the existence of such technologies into their operations.
The idea that the technologies could not be regulated was distilled into the folklore of the Net partly in the form of three well known aphorisms that James Boyle labelled "a kind of Internet Holy Trinity". Faith in the trinity is a condition of acceptance to the community. The sayings were (and indeed still are repeated and believed today, in spite of the contrary developments of the past decade):
1/ In cyberspace the First Amendment is local ordinance (John Perry Barlow again)
2/ The Net interprets censorship as damage and routes around it (John Gilmore)
3/ Information wants to be free (Stewart Brand)
I guess I've already tackled Barlow, but John Gilmore's "The net interprets censorship as damage and routes around it" was both a terrific sound-bite and, initially at least, technologically accurate. The Internet's original distributed architecture and packet switching were built around the problem of getting information packets delivered regardless of blockages, holes and malfunctions.
The network was designed from the outset to be neutral and to operate in a way that would transcend its own inherent unreliability. The intelligence in the network would be kept at the ends or the nodes and the network itself would be relatively simple. Arguably, this end to end principle of network architecture is the most important factor underlying the explosion of innovation facilitated by the Internet. But remember end to end is a design principle and it doesn't hold in modern broadband networks. Successive generations of network technology incorporate more and more 'intelligence' into the heart of the network, facilitating network control by network owners.
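Gilmore's aphorism reflects a real property of packet-switched networks: in a distributed mesh with no central chokepoint, traffic can be re-routed around a failed (or censoring) node. Here's a toy sketch in Python - a five-node mesh I've invented, with a simple breadth-first search standing in for what real routing protocols like OSPF or BGP do far more cleverly:

```python
from collections import deque

def shortest_path(links, src, dst, failed=frozenset()):
    """Breadth-first search for a path from src to dst,
    skipping any node listed in `failed`."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbour in links.get(node, ()):
            if neighbour not in seen and neighbour not in failed:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # destination unreachable

# A made-up five-node mesh: each node knows only its neighbours.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

print(shortest_path(links, "A", "E"))                # ['A', 'B', 'D', 'E']
print(shortest_path(links, "A", "E", failed={"B"}))  # ['A', 'C', 'D', 'E']
```

Knock out node B and packets still reach E via C; delivery only fails when every path is severed. That redundancy is where the early Net's resistance to censorship came from.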
The Net neutrality debate has been rumbling in the US for a few years now and if recent statements opposing net neutrality by the Department of Justice are anything to go by, it looks as though regulators on that side of the pond are leaning heavily towards favouring the network owners' perspective in their approach to regulation.
John Naughton tells a wonderful story in his book, A Brief History of the Future, about when Paul Baran wanted to build a packet switching network in the 1960s. Jack Osterman, a senior executive at AT&T, the telecommunications monopolist in the US at the time, said "First, it can’t possibly work, and if it did, damned if we are going to allow the creation of a competitor to ourselves." Look at that word "allow". If the network owners control the networks, who uses them and how they get to use them then the network owners have a veto on innovation, which is the story told by Larry Lessig in the Future of Ideas.
Lessig also argues persuasively that the reason for the explosion in innovation facilitated by the Net was an accident of legal regulation - the Telecommunications Act of 1996 working together with the end to end architecture of the Net preventing discrimination by network owners - largely the telecoms companies - against innovators. The law and the technology disabled control. With broadband networks abandoning the end to end principle and the regulators facilitating network owner-operator control, this has worrying implications for the future of freely accessible content postulated by Martin.
Now let's get back to Stewart Brand's declaration that "information wants to be free" - the implication being that free is its natural state. It is an attractive notion, supported by weighty historical figures like Thomas Babington Macaulay in his two speeches on copyright to the House of Commons in the 1840s and Thomas Jefferson, widely quoted by cyberlibertarians:
"If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of everyone, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point."
Sterling stuff and there's light at the end of the tunnel for Martin's accessible content future again. But people conveniently forget that when Brand first said information wants to be free the complete quote from that hackers conference in 1984 was:
"On the one hand information wants to be expensive, because it's so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other."
And unfortunately for those who would like content to be free the most powerful actors in the decision making process surrounding the deployment and regulation of the networks that will distribute the content are those who want it to support their profit margins.
And remember digital content comes with a licence written by a lawyer in lawyerland - so the Adobe ebook first edition of Alice's Adventures in Wonderland came with a licence which proclaimed "This book may not be read aloud". Bill Gates won't sign a Windows CD because the user doesn't own the software, she is just licensed to use it.
Established interests with established well paid lobbyists and politicians can influence the development of regulations in their favour, as the story of intellectual property laws has demonstrated in the internet age. And content companies have had a surprisingly disproportionate influence (amazing considering the relative sizes of the industries - with the tech folk dwarfing the entertainment folk by a factor of ten economically) on the technology industry's outputs, convincing the Microsofts, Apples and Sonys (of course Sony is slightly schizophrenic, having a foot in both camps) of the world to build disabling DRM and surveillance technologies into their products.
As Lessig says, the content industries, wielding law and technology, have fought a bitter counterrevolution against Napster, DeCSS, CPHack, My.MP3, Grokster, Eldred et al and won, at least in the courts, every time. Though millions of copyrighted works are still circulating online, it is an increasingly risky business to engage in such infringement, given the industrial scale operations the content companies now have in place monitoring this activity and targeting [tens of thousands] of people for legal action.
At the same time the broad-reaching effects of these changes in the intellectual property arena have been completely off the radar of the people they are likely to affect, like educators (at least until the panic over the Blackboard patent last summer).
The Internet provides a particularly stark example of the interaction of Lessig's four regulating forces - law, norms, architecture and markets. Depending upon its design - its architecture - it can support or undermine law, social norms, and market forces.
The Internet is an entirely artificially created entity
The architecture of the Internet or cyberspace is programmed in the code, the logical layer and protocols that determine how it operates
That code and architecture
- Embeds certain values
- Supports/allows certain behaviours (how many times have you been on the phone to a service, e.g. a bank or utility, only to be faced, when you finally do get a real person, with the line "the computer won't let me!")
- Sets the terms according to which life in cyberspace will be lived
- Just like the laws of nature set the terms on which life is lived in the real world.
The Net, then, is an entirely artificially created entity whose architecture is programmed in the code and protocols that determine how it operates. It had an end to end DESIGN and complementary regulations which made control of activity on that network difficult. But the design changed - the architecture was re-programmed (or at least the architecture of the broadband networks the Net has migrated to is substantially different from the TCP/IP protocols piggybacking on the phone networks) - as did the laws.
The Original Net Made Control Difficult
Geeks and wannabe geeks like me, who try to bridge the geek-realworld divide, liked to think about the architecture of the Net as a given - the distributed architecture of packet switching, TCP/IP, end to end - and that it couldn't be changed, just like the laws of nature. Our experience, our values, our perception of the way the Internet was were fixed and we could not see beyond them.
So as Lessig said, we recited slogans -
the net treats censorship as damage,
the First Amendment is local ordinance,
information wants to be free
you can't regulate the Net
But this was not a view about nature - it was just a report about a particular design, a design cyberspace had around the time it was coming to the public's attention.
This initial design did disable control and it did make regulation difficult.
It disabled control by governments - which, according to libertarians, theoretically increases liberty
Supported by law, it also disabled control by commerce, e.g. network owners (phone companies) and content companies (entertainment, publishing and software) - thereby facilitating innovation and competition.
The architecture protected free speech - you could easily say what you wanted without that speech being controlled by others. China could not censor news about China. News of terror in Bosnia could flow freely outside state borders.
The architecture protected privacy - it supported relaying technologies and made it relatively easy to hide who you were. Life could be lived anonymously.
The architecture protected free flow of information - text, music and pictures could be copied perfectly and for free and distributed anywhere on the Net, theoretically anywhere in the world.
It also protected an individual against local state regulation - A state that banned gambling within its borders couldn't prevent people from gambling on the Net. Geography didn't matter.
Each of these "liberties" depended on a certain design and features of the early Net
1. Users were not always identifiable
2. Data could not always be classified
Hence data access could not always be controlled. It was a nuisance for regulators and content owners. It changed.
1. It became easier to ID someone - who they are and where they are
2. Content could be identifiable and traceable and more easily classified
The controlling technologies are private (Cisco's new generation intelligent routers deployed by network owners) - tools commerce has built to
• know who the customer is
• control more easily what the customer does
  - where she goes (AOL would like you to stick with AOL content)
  - what she does with the content she gets
and laws commerce has bought like the DMCA of 1998 and the intellectual property rights enforcement directive of 2004, which do likewise.
Speech became less free - Chinese bloggers identified by Yahoo got jailed.
Privacy/anonymity became less secure - ISPs are the chokepoints. They are obliged to retain your traffic data and identify alleged transgressors to the content industries and government authorities.
Content could begin to flow less freely.
ICraveTV, Lessig and Code
Lessig tells the story of ICraveTV in Code.
A place of liberty and explosive innovation becomes something else.
For example ICraveTV, a Canadian company, was allowed, under Canadian law, to capture broadcast signals and re-broadcast them in any medium. They used the Net. Free TV is not allowed in the US though. Under US law ICraveTV should have negotiated with the original broadcaster. They used filters to try and keep Americans away from their free TV. But the Net made this difficult - it sees filters as damage and routes around them.
Hollywood didn't like Americans having free TV and sued asking a US court to shut down this Canadian company.
What if a German court said Mein Kampf could not be sold by Amazon.com anywhere because a German might get hold of it and it's illegal in Germany?
OR a French court said a US auction site could not sell Nazi memorabilia because this was illegal in France
OR a Chinese court said a US ISP should be shut down because it broadcast information illegal in China
The US would be up in arms waving the HOLY of HOLIES - The First Amendment - such things violate free speech
Free speech did not register in the Pittsburgh court decision in the ICraveTV case. The judge ordered the site closed until they could prove free TV from Canada could not be seen by even one American.
ICraveTV promised to try and develop filters to zone cyberspace based on geography. ID technologies make zoning possible.
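The zoning the courts had in mind rests on mapping a visitor's IP address to a country and filtering accordingly. Here's a minimal sketch, assuming a made-up two-entry prefix table (the prefixes below are documentation-reserved test ranges; real geolocation databases map many thousands of allocated blocks to countries, and never with complete accuracy):

```python
import ipaddress

# Hypothetical prefix-to-country table - these mappings are invented.
PREFIX_TO_COUNTRY = {
    ipaddress.ip_network("203.0.113.0/24"): "CA",
    ipaddress.ip_network("198.51.100.0/24"): "US",
}

def country_of(addr):
    """Return the country for an address, or None if it isn't in the table."""
    ip = ipaddress.ip_address(addr)
    for prefix, country in PREFIX_TO_COUNTRY.items():
        if ip in prefix:
            return country
    return None

def allow_stream(addr, blocked=("US",)):
    """Zone the service: serve only visitors not mapped to a blocked country."""
    return country_of(addr) not in blocked

print(allow_stream("203.0.113.5"))   # mapped to CA -> True
print(allow_stream("198.51.100.7"))  # mapped to US -> False
```

Addresses outside the table simply fall through unblocked, which is why no such filter can realistically be 100% effective.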
Incidentally, the French judge, in the Yahoo v France nazi memorabilia case, so vilified by the US media for interfering with American rights to free speech, only wanted the filters to be 80-90% effective.
The ICraveTV judge wanted 100% effectiveness and to hell with Canadian rights to free speech.
The US is very vocal about championing the cause of free speech on the Net and in the real world.
When it comes to issues of national security, though, which of course copyright is in the US, the values of the free flow of information fall away.
The lessons of the ICraveTV and Yahoo cases are that there is a push to zone the Net to allow local rules to be imposed.
We can see this with DVD players as well. A DVD movie bought in the US will probably not play on a DVD player bought in the UK. A Norwegian teenager, Jon Johansen, faced a possible jail sentence in a case that took five years to resolve in Norway, due to pressure on Norwegian authorities from the US movie industry. What was his crime? He bought a DVD in France and got frustrated at not being able to watch it on his Linux computer. So he and two others wrote a little utility programme to bypass the copy protection system on the DVD he owned so that he could watch it on his computer. He then made this utility available on the Net, which is a criminal offence under a US copyright law called the DMCA. And incidentally it irritates the hell out of me that my DVD player disables the fast forward button when draconian copyright notices come up at the start of my DVD movies.
The DMCA, the EU's copyright directive, the intellectual property rights enforcement directive, Napster and P2P, MP3, CPHack, DeCSS, Apple v Realnetworks, Sklyarov, Felten and SDMI, Apple's iPhone drm are all fascinating stories in their own right (which I've ranted about here over the years), even without the wider impact they have on Internet regulation.
As BigCos increasingly take charge of the network, download speeds are typically 8 times upload speeds, and the ratio will get worse. It may be that individuals and small independent content companies will find it more and more difficult to get through. And just to show that AT&T has evolved in forty years: when asked about open access to their network, Daniel Somers, head of AT&T's cable division, said they didn't spend $56 billion on a cable network "just to have the blood sucked out of our veins."
But I've already got waaaaaaay too sidetracked on what was supposed to be the set-up/background for my paper, so I'm just going to briefly address Martin's two points about economics and quality.
Economics and Quality
The economics of information flows really haven't changed, though the move from atoms to bits in distribution technologies means that the markets have fundamentally changed. As Benkler says:
"Social production is reshaping markets while at the same time offering new opportunities to enhance individual freedom, cultural diversity, political discourse and justice."
But these outcomes are not necessarily going to emerge automatically from Web 2.0, any more than they emerged from the end-to-end Internet 1.0, since there is a powerful campaign by established commercial interests in the industrial information game - big content, pharma and software - to protect themselves. And democratic consumerism will no more be an inevitable emergent property of Web 2.0 than democracy and freedom were.
Martin points to Weinberger's arguments about our changing relationship with content - ie that iTunes has led us to realise that the natural unit of musical interest is the single track - but I'm not sure I buy that specific argument. The dynamics are a lot more complex and varied. Communities of shared interest have begun to create their own playlists by mixing tracks together. Personally I like the album package and find significant value in the job the music labels did in aggregating songs or artists together in one package. I also find it irritating that my son's MP3 player runs on a different piece of software to my own player, and it is a tedious, fiddly technical and administrative exercise collecting the songs together in the way I like.
Which brings me to Tim O'Reilly's Piracy is Progressive Taxation, and Other Thoughts on the Evolution of Online Distribution, which offers seven lessons for the media revolution that, though outlined in the relatively ancient history of 2002, still, I think, hold true today:
Lesson 1: Obscurity is a far greater threat to authors and creative artists than piracy.
Lesson 2: Piracy is progressive taxation
Lesson 3: Customers want to do the right thing, if they can.
Lesson 4: Shoplifting is a bigger threat than piracy.
Lesson 5: File sharing networks don't threaten book, music, or film publishing. They threaten existing publishers.
Lesson 6: "Free" is eventually replaced by a higher-quality paid service
Lesson 7: There's more than one way to do it.
The key ones in the current context being lessons 5, 6 and 7. Yes, the markets change. Yes, the distribution channels have changed, and yes, the technologies have augmented the activities of the traditional content industries in ways they would never have dreamed of a mere ten years ago. As O'Reilly says:
"As Jared Diamond points out in his book Guns, Germs, and Steel, mathematics is behind the rise of all complex social organization.
There is nothing in technology that changes the fundamental dynamic by which millions of potentially fungible products reach millions of potential consumers. The means by which aggregation and selection are made may change with technology, but the need for aggregation and selection will not. Google's use of implicit peer recommendation in its page rankings plays much the same role as the large retailers' use of detailed sell-through data to help them select their offerings.
The question before us is not whether technologies such as peer-to-peer file sharing will undermine the role of the creative artist or the publisher, but how creative artists can leverage new technologies to increase the visibility of their work. For publishers, the question is whether they will understand how to perform their role in the new medium before someone else does. Publishing is an ecological niche; new publishers will rush in to fill it if the old ones fail to do so...
New media have historically not replaced but rather augmented and expanded existing media marketplaces, at least in the short term. Opportunities exist to arbitrage between the new distribution medium and the old, as, for instance, the rise of file sharing networks has helped to fuel the trading of records and CDs (unavailable through normal recording industry channels) on eBay.
Over time, it may be that online music publishing services will replace CDs and other physical distribution media, much as recorded music relegated sheet music publishers to a niche and, for many, made household pianos a nostalgic affectation rather than the home entertainment center. But the role of the artist and the music publisher will remain. The question then, is not the death of book publishing, music publishing, or film production, but rather one of who will be the publishers."
Open source works with software production. I would be delighted if sustainable business models for open content were to emerge from Web 2.0 in the face of the power dynamics at the heart of the established industrial information economy. I suspect we are a long way from the future that Martin is hoping for, though, as I have said before, I think he is partly right in his thesis about some of the forces pressing for an accessible information future. (Though Adam Smith and Garrett Hardin [Tragedy of the Commons] might dispute the notion that individual participants in Web 2.0 activities, unlike Dawkins's selfish gene, would behave in the best interests of the system as a whole, thereby speeding the evolutionary process.) But O'Reilly's publishers, whatever shape or form they take, will need resources, resources which folk like Lessig, Boyle, Drahos and Benkler lead me to conclude will come via pay per view rather than free content augmented by parallel revenue flows.
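O'Reilly's point about Google's "implicit peer recommendation" can be made concrete: PageRank treats a link from page to page as a vote, and repeatedly redistributes rank along those links until the scores settle. A toy sketch over a hypothetical four-page web (the graph and the damping constant are illustrative, not Google's actual parameters):

```python
# Toy PageRank: "implicit peer recommendation" as a link-graph computation.
# links[p] lists the pages that page p links to (a hypothetical web).
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

DAMPING = 0.85  # standard damping factor: probability of following a link
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with uniform rank

for _ in range(50):  # power iteration converges quickly on small graphs
    new = {p: (1 - DAMPING) / len(pages) for p in pages}
    for p, outs in links.items():
        share = DAMPING * rank[p] / len(outs)  # split p's rank among its links
        for q in outs:
            new[q] += share
    rank = new

# C is linked to ("recommended") by every other page, so it outranks them all.
print(max(rank, key=rank.get))  # C
```

The aggregation-and-selection role O'Reilly describes is all here in miniature: no page declares its own importance; it emerges from what everyone else links to, much as a retailer's sell-through data surfaces what customers actually buy.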
Monday, September 24, 2007
"Get your retaliation in first" strategy against the terrorists a failure
"President George W. Bush is fond of reminding us that no terrorist attacks have occurred on domestic soil since 9/11. But has the Administration's "war on terror" actually made us safer? According to the July 2007 National Intelligence Estimate, Al Qaeda has fully reconstituted itself in Pakistan's northern border region. Terrorist attacks worldwide have grown dramatically in frequency and lethality since 2001. New terrorist groups, from Al Qaeda in Mesopotamia to the small groups of young men who bombed subways and buses in London and Madrid, have multiplied since 9/11. Meanwhile, despite the Bush Administration's boasts, the total number of people it has convicted of engaging in a terrorist act since 9/11 is one (Richard Reid, the shoe bomber).
Nonetheless, leading Democratic presidential candidate Hillary Clinton claims that we are safer. Republican candidate Rudy Giuliani warns that "the next election is about whether we go back on defense against terrorism...or are we going to go on offense." And Democrats largely respond by insisting that they, too, would "go on offense." Few have asked whether "going on offense" actually works as a counterterrorism strategy. It doesn't. The Bush strategy has been a colossal failure, not only in terms of constitutional principle but in terms of national security. It turns out that in fighting terrorism, the best defense is not a good offense but a smarter defense...
In isolation, neither the goal of preventing future attacks nor the tactic of using coercive measures is novel or troubling. All law enforcement seeks to prevent crime, and coercion is a necessary element of state power. However, when the end of prevention and the means of coercion are combined in the Administration's preventive paradigm, they produce a troubling form of anticipatory state violence--undertaken before wrongdoing has actually occurred and often without good evidence for believing that wrongdoing will ever occur.
The Bush strategy turns the law's traditional approach to state coercion on its head. With narrow exceptions, the rule of law reserves invasions of privacy, detention, punishment and use of military force for those who have been shown--on the basis of sound evidence and fair procedures--to have committed or to be plotting some wrong. The police can tap phones or search homes, but only when there is probable cause to believe that a crime has been committed and that the search is likely to find evidence of the crime. People can be preventively detained pending trial, but only when there is both probable cause of past wrongdoing and concrete evidence that they pose a danger to the community or are likely to abscond if left at large. And under international law, nations may use military force unilaterally only in response to an objectively verifiable attack or threat of imminent attack...
The preventive paradigm has compromised our spirit, strengthened our enemies and left us less free and less safe. If we are ready to learn from our mistakes, however, there is a better way to defend ourselves--through, rather than despite, a recommitment to the rule of law."
Sunday, September 23, 2007
Craig Murray, Boris Johnson and Arsenal's Uzbek billionaire
"This is London, not Uzbekistan. It is unbelievable that a website can be wiped out on the say-so of some tycoon.
We live in a world where internet communication is increasingly vital, and this is a serious erosion of free speech."
Sadly I don't have the capacity to look into this with any degree of depth currently, but the blogosphere, listservs and usual geek and cyberrights sites (and, unusually, Arsenal fan sites) are buzzing with details and speculation. Given the story dovetails with my lifelong obsession with Arsenal football club and my current professional interests, though, I'll be returning to it when I get the chance.