Thursday, February 17, 2022

Rejected: The sustainability of open digital content

Many years ago I was asked by an editor to write a paper, at short notice, for his journal. I drafted the piece below quickly and sent it to him. He had two reviewers look at it and both offered legitimate critiques. Neither liked the writing style - "hyperbolic and over personal... would need to be amended to be publishable", said one; "rather conversational tone... not appropriate for a journal publication", said the other. Both reviewers said the paper was interesting but either thought it unoriginal or could not assess its originality. One reviewer said the goal of the paper was clear; the other said it was not. I occasionally stick a red herring or a joke in draft papers to check whether reviewers really do read them. On this occasion it was a joke about the Catholic church, which one of the reviewers noted and disapproved of. Both reviewers recommended publication with major changes. The editor then emailed me to say he didn't think it suitable for publication and was not prepared to publish it.

There is nothing particularly notable in the article - it is based on the work of Larry Lessig, James Boyle and Edward O. Wilson - and I have not thought about it in many years, but I found myself wondering, in the early hours of this morning, whether that editor was embarrassed at over-estimating my suitability as a contributor to his journal. It is strange the things the mind alights upon during periods of sleeplessness. In any case I decided to post a copy here for posterity...

The sustainability of open digital content

 Introduction

David Weinberger, in his terrific book, Everything is Miscellaneous,[1] points out that "information doesn't just want to be free. It wants to be miscellaneous."[2] In the digital world information is no longer constrained to the [pre World Wide Web] physical form that meant it was contained in folders, files, shelves or between the covers of books, and only produced by organisations and experts with the capacity, the capital and the equipment (e.g. printing presses) to do so. Neither does digital information have to be confined to the domains, categories or assumptions of experts. It can be moulded or adapted to context and we can "rethink information beyond material constraints". We have links and tags and the like, and we are currently experiencing an information explosion - a revolution - which we could harness to change the world.

At the OpenLearn Conference of 2007, John Seely Brown gave a wonderfully eloquent keynote speech[3] full of optimism about the sustainability of a future culture of participative, social and interactive learning and tinkering in this rich ecology of digital information. Highly respected scholars like Charles Nesson[4] and Yochai Benkler[5] tell us that the sky is the limit with digital technologies and digital information and it is down to us to play with, understand and harness them to best effect. Experience of using new technologies will turn us into a society of intelligent creators with and of digital information, as well as critically aware users, rather than just passive consumers (or TV couch potatoes) of that information.

 You only have to look at the explosion of innovation since Tim Berners-Lee’s release of the World Wide Web protocols in the early 1990s[6] – from the Web itself to Google and other search engines, systems aimed at sharing music like Napster, social networking sites like Facebook, video and photo sharing sites like YouTube or Flickr and the printing press of the masses, the humble blog[7]  – to see that there is a lot to what these folk say.

In fact I agree with much of it, but I have to start this paper with a confession - having been brought up in the Catholic Church, it is kind of compulsory that I start with a confession or an apology or both. My confession is that, as someone steeped in[8] the Lawrence Lessig[9] school of pessimism and the Kenneth Deffeyes[10] school of environmentalism, I come offering a note of caution on the sustainability of this creative, evolving, participative, digital culture.

 On the sustainability of our digital knowledge ecology – or what I call ‘sustainable infodiversity’ in my book[11] – I argue that it is potentially unsustainable from two key perspectives:

- One is that Lawrence Lessig's claim[12] that changes in laws and technologies are leading to a future of concentrated control of knowledge resources remains as relevant today as when he started writing books and giving talks about it ten years ago.

- The second is that the energy and materials consumption generated by our ICT[13] and network architectures is huge and unsustainable from an environmental perspective.


Lessig’s pessimistic message

 In an information society access to, and control of, information is crucial. Who is to ensure that information technologies and the regulations governing them evolve in progressive or positive ways? What political philosophies will underpin this evolution? How, when, where and by whom will such decisions be made?

Sometimes these issues are left to groups of experts (or in Lessig’s story, industries lobbying governments) who draft legislation, on intellectual property for example, which potentially has a global effect. Yet intellectual property experts pursue lawsuits over silence[14] and electronic buttons[15] and it often takes the ordinary woman on the Clapham Omnibus[16] to throw some common sense into the mix.

In a nutshell, Lessig's claim is that the Net led to an explosion of innovation - an innovation revolution - a relatively uncontentious claim given just some of the examples mentioned above. He goes on to say that this innovation threatened established commercial interests such as the entertainment industries, and that such industries are leading a successful counter-revolution to change technologies and laws to protect their businesses. So since the mid 1990s we have had a string of laws and treaties passed - the 1996 World Intellectual Property Organisation (WIPO) treaties, the 1996 European Union database directive, the US Digital Millennium Copyright Act (DMCA) of 1998, the EU copyright and related rights directive of 2001 (EUCD), the US Copyright Term Extension Act (CTEA) of 1998, the EU intellectual property rights enforcement directive (IPRED) of 2004 - the primary effect of which is to protect the business models of content industries, from the big tech companies through to the entertainment and pharmaceutical companies.[17]

The DMCA and the EUCD tend to be the main focus of geek criticism since they make it a crime, for example, to bypass the digital rights management (DRM)[18] or digital fences built into such content as DVDs, CDs or computer games. When John Seely Brown mentioned in his OpenLearn conference talk that you can get jailed in California for tinkering with the software on your car, he was not joking.[19]

It can even be quite difficult to comment on some of the laws getting passed, depending on what jurisdiction you happen to be resident in. I wrote an internet law course for the Open University[20] several years ago and was interested in using a wonderful animation produced by the Electronic Frontier Foundation (EFF)[21], parodying a particularly draconian piece of legislation making its way through Congress at the time (the Consumer Broadband and Digital Television Promotion Act, CBDTPA[22]). But the University's rights department quite correctly said I could not use it. Unlike the US, parody is no defence against charges of copyright infringement in the UK, and the animation included two very brief clips (four seconds) of Mickey Mouse, which Disney would not have approved of.[23]

 

The counter-revolution: cautionary or aspirational tales?

 The counter-revolution in Lessig’s terms has led to a whole string of what I would describe as cautionary tales, though some might conceive of them as aspirational.

Under the DMCA and EUCD it could be considered technically against the law to get on the Web and dig out the code that enables your single-region UK DVD player to play DVDs imported from the US. Russian programmer and PhD student[24] Dmitry Sklyarov fell foul of the DMCA in 2001 when he went to the US to give a paper[25] on the digital locks on Adobe eBooks. Adobe complained to the FBI, who picked him up right after he gave his talk. He spent nearly a month in jail and a further six months unable to return to his young family in Russia until agreeing to testify in a case to be brought against his employer, Elcomsoft.[26] The Russian government issued a general edict in the wake of the case, saying that the US was hostile territory for computer professionals.

 Essentially, Lessig argues, our collective inheritance of knowledge is being fenced off and handed over to private owners. It is important to note that Sklyarov was not accused of copyright infringement or helping or encouraging[27] anyone to engage in such acts.  He worked on a software tool that enabled people to take the digital locks off Adobe ebooks and he told people how the locks and the software worked. So if someone puts a digital lock on some information, it does not matter whether we have a right to access that information, the act of getting past the lock is criminalised. James Boyle[28] refers to it as a second enclosure movement[29] – but an enclosure of the commons of the mind rather than the grassy commons of olde Englande.

Dmitry Sklyarov was not the only one on the receiving end of unwanted lawyerly attention in 2001. Princeton Computer Science professor Ed Felten and some students and colleagues had taken up a music industry challenge (issued in September 2000) to crack their new Secure Digital Music Initiative (SDMI) watermarking technology. Felten's team and a group from Rice University managed to crack all four SDMI watermarks by November 2000. Felten et al then wrote a paper[30] on the research to be presented at a conference in the spring of 2001.[31] Following legal threats[32] from the music industry against the authors, conference organisers and their employers, they withdrew the paper. Felten read a statement at the conference explaining the decision.[33] Backed by the EFF, he then took legal action against the music industry claiming a first amendment (free speech) right to publish his paper. That action was unsuccessful because the industry, in the wake of negative publicity surrounding the case, eventually claimed that they never intended to sue him or anyone else over the matter. With the judge deciding Professor Felten had no cause of action given the music industry's commitment not to sue, the paper got presented at the Usenix Symposium later in the year.[34] But several respected security researchers refused to publish research results afterwards for fear of triggering DMCA liability.[35]

 Lest the reader should think I have focused on particularly egregious examples of intellectual property extremism to make a point, I would ask you to consider the following [significantly less than exhaustive] list of cases, very briefly summarised, to see whether there might be the semblance of a pattern of evidence emerging to support what Lessig and Boyle have to say:

  • In 1980 the US Supreme Court gave its approval to the patenting of genetically modified organisms in Diamond v Chakrabarty, declaring that “anything under the sun made by man is patentable”.
  • In 1990 the California Supreme Court, in Moore v Regents of University of California, decided a man was barred from patenting his own genes since it would interfere with medical research; but the researchers who had secretly patented this same man's genes, without his knowledge, should be allowed to do so, or they would have no incentive to do research.
  • In 1998 in State Street Bank & Trust Company v. Signature Financial Group, Inc., a federal judge decided that business methods were patentable.  The decision is credited with opening the floodgates on applications for business method patents, like the Amazon 1-click patent[36] and MercExchange’s “Buy now” electronic button patent[37]; plus for example various patents on paying online with a credit card.
  • BT sued Prodigy claiming they held the patent for hypertext linking. Fortunately New York district court Judge McMahon ruled against them in August 2002
  • ‘DVD Jon’, Norwegian teenager Jon Johansen had criminal proceedings hanging over him for five years, after he released a piece of code on the Net, ‘DeCSS’, which bypassed the US movie industry’s standard Content Scrambling System (CSS), the digital locks on DVDs.  He was acquitted twice by Norwegian courts.
  • In Universal v Corley a judge ruled, in August 2000, that the DMCA banned people linking to websites where the DeCSS code might be found or from telling people where such a “circumvention device” might be found.  The decision was upheld by an appeal court in 2001
  • Margaret Mitchell’s estate got an injunction against the publication of The Wind Done Gone by Alice Randall, a novel telling the story of Gone with the Wind, from the perspective of one of the African American slaves. Mitchell had died many years before and if it had not been for the Copyright Term Extension Act of 1998, the copyright on her famous book would have expired by the time Randall’s version of the story was due to be published. The term of copyright in the US, by the way, has been extended eleven times since the early 1960s.
  • Microsoft v Eolas was a dispute over a patent on a browser using applets and plug-ins, fundamental elements of Web browsing. Eolas was awarded massive damages by a jury after they found Microsoft liable of infringing the patent.  The case bounced up and down the courts and was eventually settled out of court in 2007.
  • In February 2007, a jury awarded damages of $1.52 billion against Microsoft for infringing Alcatel-Lucent’s MP3 patent.
  • In the notorious Blackberry case, which went on for some years, Canadian maker of the Blackberry, Research in Motion, eventually paid a US patent holding company, NTP, $600 million in accordance with a court order.  NTP have since sued Palm (unsuccessfully), AT&T, Verizon and a variety of other financially well endowed organisations.  It will be interesting to watch the progress on these lawsuits through 2008.
  • In a dispute over copyright in silence, UK music producer, Mike Batt, eventually settled with the late John Cage’s estate, out of court, for a reputed five figure sum.  In the 1950s Cage had composed a piece called ‘4 minutes and 33 seconds of silence’.  In 2002 Batt had included a minute’s silence in a CD recorded by the popular classical music group the Planets.
  • When RealNetworks enabled Apple iPod owners to buy songs for their iPods from the online RealPlayer Music Store, Apple went nuts, accused Real of engaging in "the ethics of the hacker" and threatened to sue. In the end they updated Apple software to block iPods playing songs from Real. Real updated their own software to bypass the new Apple digital fences and a tit-for-tat arms race ensued.
  • Bill Wyman (the pop music journalist) has received a letter from lawyers representing Bill Wyman (of the Rolling Stones) ordering him to cease and desist using his own name.
  • In the Eldred v Ashcroft case argued before the US Supreme Court by Larry Lessig, Nobel Prize winning economist, Milton Friedman, argued, in an amicus brief supporting Lessig and Eldred, that the economic case against the extension of the copyright term by 20 years was a “no brainer”. Eldred and Lessig still lost, when the court announced its decision in January 2003.
  • In MGM v Grokster in 2005, the Supreme Court introduced a new “inducing infringement” test, arguably undermining their earlier Universal v Sony test in 1984, which suggested that copying technology was legal as long as it had “substantial non infringing” uses.[38]
  • The music industry has sued tens of thousands of individuals for infringing copyright via internet file sharing services like Grokster and Kazaa. In October 2007, a woman was ordered by a jury to pay $222,000 for infringing the copyright in 24 songs. That is roughly $9,250 per song and arguably she got off lightly, as the law in the US states that it could have been $150,000 per song.
  • In the autumn of 2007, IP Innovation LLC, a subsidiary of Acacia, sued Red Hat and Novell for infringing their patent in a “user interface with multiple workspaces for sharing display system”.  It is reportedly the first ever patent infringement case against Linux.[39] The case has been described as SCO Mark II. SCO sued Novell and others claiming open source software infringed their copyrights in UNIX and related software.  After a lot of litigation SCO lost in October 2007, as Novell were declared the owners of the copyrights in UNIX and UnixWare.[40]  Acacia, however, have a much more successful record than SCO with intellectual property lawsuits.

 The list of cases and laws just goes on and on and I would suggest that they should lead us to take a cautious approach towards those who aspire to expand the scope and reach of intellectual property into areas where it does not really belong.[41]  Even in the cases where common sense eventually prevailed, that point was only reached by going through an army of expensive intellectual property lawyers. This whole situation is particularly problematic, from my perspective, when it cuts across basic research and education, as in the Felten SDMI case mentioned above.

In the summer of 2006, education systems supplier Blackboard was awarded a patent,[42] in 36 parts, on delivering courses via the Net. The patent has also been granted in Australia, New Zealand and Singapore, and is pending in the EU and various other parts of the world. Blackboard immediately sued their biggest competitor, Desire2Learn, for patent infringement. I had mixed feelings about the patent and its associated litigation, having spent the best part of a decade attempting to explain the perils of changes in intellectual property regulations to uninterested academic colleagues. Suddenly educational technologists everywhere were foaming at the mouth about how something so apparently 'obvious' could get patented.

If you read the patent application, though - techno speak wrapped in legalese - it will not necessarily appear obvious, at first glance, to someone unskilled in the art of online education. Changes that are affecting educators are happening in a regulatory area - intellectual property and associated technologies - that will have a profound effect on what we do, but we are failing to notice because the specialism involved does not fall within the usual scope of our cognitive radar, at least until something like the Blackboard patent comes along. And although two judges, in interim judgements in August 2007, rejected most of Blackboard's patent, their case against Desire2Learn continues and the real effect and reach of that patent remains to be seen.

 In fairness, Blackboard did announce publicly that the company had no intention of suing universities or purveyors of open source education platforms like Moodle.  However, in 2003 the company threatened a couple of technology students with criminal sanctions because they had tinkered with the Blackboard campus ID system and planned to present a paper at an academic conference on the results of their research.[43]  Now the company did get a court injunction preventing the students presenting their paper.  So they had an arguable legal case.  But it should at least be of concern to the education community, that the suppliers of one of the most widely deployed platforms for the delivery of higher education materials via the Net, was prepared to pursue such a case.  The students, in the end as I understand it, had to do 40 hours community service and sign a legal undertaking never to attempt to understand Blackboard’s technology again.

In the thick of all this, ninety-plus per cent of all Western culture produced in the last 100 years is, says James Boyle, "(a) under copyright and (b) has no identifiable copyright owner", so it is all locked up and providing no benefit. That certainly cannot be good for educators. He concludes that it would be more efficient to pay film, music, software and other intellectual property based industries corporate welfare directly out of tax revenues to keep all the films they want to keep copyrighted forever, and let all the rest pass into the public domain after 25 years. Governments should give the content companies welfare grants directly because that is what they are getting from the intellectual property system as currently constituted, but "at the cost of destroying access to 20th century culture in any fixed form."

  

Boyle’s 7 ways to ruin a technological revolution

 Boyle gave a wonderful talk on this whole process at Google last December entitled “7 Ways to Ruin a Technological Revolution”.[44]  Astute and entertaining as ever he made most of the key points about intellectual property policy in the space of little more than half an hour (followed by a Q&A session):

  • The making of IP policy is disproportionately driven by emotive appeals by wealthy artists and industries. So we see Cliff Richard doing the rounds of the broadcasting studios when one of his 50 year old recordings is about to fall into the public domain and complaining that artists are going to be deprived of their pensions without these royalties.[45]
  • There is a complete failure to recognise the reality that every creator's inputs are someone else's outputs. The focus is only on protecting the outputs.  And if you fence off all those creative outputs and only make them available to people who are prepared to pay for them then you stifle future creativity.  To make things we need resources, including intangible resources like information and ideas. Authors, inventors, blues musicians, creators of all kinds, use language, stories, professional skills, musical notes and chords, facts and ideas, all building on the work of earlier creators. Many of these resources are free. A public highway, a public park, Fermat's last theorem, or an 1890 edition of a Shakespeare play are all free to use or copy. Setting a toll for all of these can’t be sustainable but in the universe that decision makers and shakers in this area inhabit, setting these tolls is considered a reasonable if not essential proposition. Jamie Kellner, head of Turner Broadcasting is on record claiming viewers have a contract with the broadcaster compelling them to “watch the spots” (adverts).
  • Whenever technology is factored into the debate the entire focus is on the negative effects of the technology – the ability to copy easily leading to piracy, rather than the vast new cheap distribution networks and markets it opens up.
  • We are very bad in the West at understanding the benefits of openness and things that are free, like public parks or Fermat’s last theorem
  • We ignore creativeness that does not involve property rights – if Tim Berners-Lee was trying to release the Web as a set of open protocols today he would be considered a nutcase. We ignore the benefits of technologies like the web and the end to end Net. So the computer as a general purpose machine becomes a bad thing and we have to move to controlled or trusted systems like Microsoft Vista
  • For what Boyle calls ‘IP maximalists’ it is important for policy to be made internationally, to facilitate policy laundering. It helps keep the NGOs out and then harmonisation upwards can be managed sequentially. Germany has a life of the author plus seventy years copyright term, then the rest of the EU harmonises up. The US then extends the term to match the EU. Mexico is now on life plus 100 years. Also in almost every international IP treaty, rights are mandatory and exceptions optional.
  • It is important that opponents fail to engage with the political process and Boyle reckons the techno-geeks bring “self marginalisation to the level of an Olympic sport”, which is really hard to do given we have all the good arguments on our side.

He rounds off by admitting that, as an educator, one of the most effective things he is able to do to counteract imbalance in the intellectual property/public domain arena is to turn out graduates who know by rote: "If that were illegal, Google would be illegal."

Lessig, like Boyle, reckons they have demonstrated conclusively that they have all the good arguments on their side since it is not hard to convince someone that increasing the term of copyright will not provide a dead author with an incentive to create more work.  Lessig has now taken semi-retirement from the intellectual property wars to concentrate on his next ten year project, tackling corruption in US politics.  Lessig and Boyle and others have done a tremendous job energising people, setting up Creative Commons[46], the RSA Adelphi Charter[47] and such like but even Lessig himself agrees that he has failed to convince Congress or the courts in the US.  Until recently, he had lost every copyright case he brought before the courts, including the Eldred case in the US Supreme Court, on the constitutionality of extending the term of copyright.  The one (recent) victory in Golan v Gonzales[48] involved the court agreeing there might be a case to look at in relation to the conflict between copyright and the 1st amendment (free speech), rather than a win on the substantive issues.

Meanwhile the big content companies move on, convincing the US, EU, Canada, South Korea and Japan to set up another tailored international forum, the 'Anti-Counterfeiting Trade Agreement', in which to set favourable laws, since developing nations and NGOs have had some success in recent years at the World Intellectual Property Organization in blocking the worst excesses of unbalanced proposals.[49]

 There is so much more to tell about this story in the context of developments in technology – e.g. DRM and trusted computing etc. – but in summary the Lessig/Boyle message is that the evolving regulatory framework surrounding knowledge resources is hostile to a rich sustainable public domain and open content and leans more towards blocking access to knowledge rather than facilitating it.  It is a message we would do well to heed if we are to actively avoid the future of control that Lessig predicts. John Seely Brown’s future of open and participative learning will not emerge by default and it certainly will not happen without help.

 

Sustainable infodiversity

Moving on more directly to environmental sustainability, we have burned through a large proportion of the earth’s fossil fuel resources in the blink of an eye on evolutionary timescales.[50] Over the course of the past two hundred years or so, through our increasing consumption of the earth’s coal, oil and gas, not only have we depleted those resources but we have slowly poisoned and over-heated the earth.[51]

 A – so far I believe – largely neglected effect of this pattern of consumption is the impact it will have on digital information in the knowledge economy, through the energy, material and environmental costs of current information and communications technologies (ICTs).  In the UK alone we throw away tens of millions of computers, mobile phones, printers, and other ICT items every year.  Since information is what economists call ‘non-rivalrous’ – so if I tell you my idea, I still have the idea – there is a widespread belief that, once information is digitised, it can be copied and distributed at zero marginal cost, i.e. ‘for free’. Yet digital information fundamentally depends on access to a source of energy; and our main sources of energy like oil, coal and gas are a depleting resource.  Before moving on, however, ask yourself a few questions about this assumption that digital information is free:

  • Have you got a broadband internet connection at home? 
  • Is it free?
  • Do you have free access to the Internet somewhere else?
  • Did you get your PC for free? 
  • How about your printer?
  • Free scanner?
  • Free digital camera? 
  • Video camera?
  • Mobile phone?
  • Perhaps you have free electricity?
  • Or maybe these devices run on free everlasting batteries, without the need for re-charging?
  • And what about all those different, incompatible chargers – and associated energy costs of manufacture and operation – for all those different electronic gadgets that now fill the average home in the UK?  (In many cases they take up more space than the gadgets themselves).  Do they come free? Or do you have to buy a new one every time you leave one behind on a train or misplace it somewhere?
  • Did you ever get a virus through downloading a song ‘freely’ from the Internet?

So we need a whole pile of moderately costly hardware and software, which rapidly becomes slow, obsolete and in need of replacing, as well as access to energy and communications utilities, before we can get access to all this 'free' information.

 Thomas Jefferson once said:

 “If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of everyone, and the receiver cannot dispossess himself of it… He who receives an idea from me receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature”

As someone interested in facilitating access to knowledge and ideas, this is one of my favourite quotations, but just as wine needs bottles, digital information needs electronic container vessels like computers. So even if it were free, in the sense of 'free beer as opposed to free speech',[52] digital information will always have an energy cost, and our current ICT architectures are energy and resource intensive in both their manufacture and operation.

The big technology companies' energy bills can run into hundreds of millions of dollars. Up to half of that energy can be taken up by the cooling needed in the large computer server farms run by companies like Google, AOL or Microsoft. We use a lot of energy to run the computers and, because the equipment generates so much waste heat, as much energy again just to cool them down. In a world possibly facing an energy crisis[53] this means digital information is a little more rivalrous than we originally thought. Or at least the process of creating, storing (packaging?), transporting and accessing that information is - i.e. the construction and operation of the digital infrastructure without which that information could not exist in its 'free' electronic form. We cannot just put some digital information on a computer connected to the Internet and assume that it then automatically constitutes an infinitely deep well from which we can forevermore draw that information freely.

I did a little experiment with my relatively old, low specification home computer one day, when my wife took the children to visit their grandparents. I shut off all the other electrical devices in the house and checked the electricity meter to see how much energy my home PC and associated peripherals used. It turned out that they use about a unit of electricity every nine hours, when not doing any heavy processing, or 1/9th of a unit an hour. That is just my one home PC. Multiply this by 20 million, assuming there is that number of household PCs in the UK. That is about 2.2 million units of electricity per hour if the PCs are just switched on and running on idle. UK domestic PCs, just ticking over, draw over 2 gigawatts. Now factor in the commercial sector and you are suddenly faced with very high energy costs, simply to keep the high tech network that is the Internet, with its energy guzzling PCs at the ends, merely ticking over. Sun's chief technology officer, Greg Papadopoulos, estimates that large technology companies' data centres alone need about 25 gigawatts.[54] This is the energy output of dozens of power plants, before even thinking about the hundreds of millions of networked user PCs.
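For what it is worth, the back-of-the-envelope arithmetic above can be written out as a short calculation. The figures are the rough assumptions from my kitchen-table experiment, not measured national data:

```python
# Rough estimate of the idle power draw of UK household PCs,
# using the back-of-the-envelope figures from the text (assumptions, not data).

UNITS_PER_HOUR_PER_PC = 1 / 9   # one kWh "unit" every nine hours, i.e. roughly 111 W
UK_HOME_PCS = 20_000_000        # assumed number of household PCs in the UK

units_per_hour = UNITS_PER_HOUR_PER_PC * UK_HOME_PCS  # total kWh consumed per hour
power_gw = units_per_hour / 1_000_000                 # kWh per hour == kW; divide by 1e6 for GW

print(f"{units_per_hour:,.0f} units (kWh) per hour")  # about 2.2 million
print(f"{power_gw:.1f} GW continuous draw")           # about 2.2 GW
```

The useful trick here is that a "unit per hour" (kWh/h) is just a kilowatt, so the conversion to gigawatts is a single division.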

 Interestingly Nicholas Carr did a similar calculation, assuming an average PC consumes about 120 watts (which fitted surprisingly well with my own measurements), a server 200 watts plus a further 50 watts for air conditioning in a server room, and on average between 10,000 and 15,000 avatars being operated in Second Life at any one time.  He concludes:

“So an avatar consumes 1,752 kWh per year. By comparison, the average human, on a worldwide basis, consumes[55] 2,436 kWh per year. So there you have it: an avatar consumes a bit less energy than a real person, though they're in the same ballpark.

Now, if we limit the comparison to developed countries, where per-capita energy consumption is 7,702 kWh a year, the avatars appear considerably less energy hungry than the humans. But if we look at developing countries, where per-capita consumption is 1,015 kWh, we find that avatars burn through considerably more electricity than people do.”[56]
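
Carr’s comparison can be reproduced directly from the numbers he quotes: 1,752 kWh per year implies a continuous draw of about 200 watts per avatar, and dividing by the per-capita figures gives the three comparisons he describes:

```python
# Reproducing Nicholas Carr's avatar/human comparison from the
# figures quoted above.

HOURS_PER_YEAR = 24 * 365          # 8,760

avatar_kwh = 1752                  # Carr's per-avatar annual consumption
avatar_watts = avatar_kwh / HOURS_PER_YEAR * 1000
print(f"implied continuous draw per avatar: {avatar_watts:.0f} W")  # 200 W

# Per-capita electricity consumption figures quoted by Carr (kWh/year):
human_kwh = {"world": 2436, "developed": 7702, "developing": 1015}

for region, kwh in human_kwh.items():
    print(f"avatar / {region} human: {avatar_kwh / kwh:.2f}")
# world ~0.72, developed ~0.23, developing ~1.73
```

The last ratio is the striking one: an avatar consumes roughly 1.7 times the electricity of the average person in the developing world.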

Current ICTs are energy intensive and could be greatly improved.[57] In an energy-rich economy these costs might not get a lot of attention, but a global economy in which we may see rationing of dwindling energy resources like oil has implications for digital information and who gets access to it.  That concerns me at a time when we are increasing our level of dependence on digital information, especially since, if scholars like Boyle and Lessig are right, developments in intellectual property and other information laws are moving in the direction of restricting access to information.  The combination of energy rationing and Boyle’s second enclosure movement[58] could threaten our ability to make informed decisions about complex information systems and our access to the basic raw materials of education.  Critics will rightly point out that access to information has been a problem in the developing world for generations,[59] a situation which the affluent West has been complicit in creating.  Now such access issues might come to the middle classes in the West, ironically in an age when so much information is allegedly free.

 The power of sharing

 The Internet and its associated technologies are a complex information system with a complex set of ecologies analogous to the environment. Technical experts and ecologists understand, to some degree, the effect that changes to these systems will have. Most of the rest of us do not. That is not a criticism. It is impossible even for the experts to completely understand the knowledge society or the environment in their entirety.

 Experts may have a deep understanding of parts of the system but they never know it all and the models they use are simplified representations of some aspect of reality.  We do however need this deep understanding if we as a society are to make informed decisions about information systems, particularly those with wide-reaching effects. 

 John Seely Brown has described[60] a vision of the shared, participatory construction of a culture of knowledge as the ultimate form of cultural sustainability. In the sciences of the human genome, global warming and other areas we have hit a multiplicity of problems probably beyond the capacity of single individuals to solve.  Progress in science and the useful arts has always been predicated on the widespread sharing and robust testing of ideas amongst scientific peers, Kuhn’s lesson about paradigm shifts accepted.[61]  From that perspective these new, incredibly complex problems absolutely require the kind of multi-disciplinary, distributed cooperation, sharing and learning that John Seely Brown advocates and is optimistic about seeing.  Yet the very structure of the funding of fundamental research in these areas militates against even basic data sharing and, as John Wilbanks, executive director of the Science Commons initiative, says:

“The knowledge simply isn't moving as easily as it should, and transactions are slow on a good day, non-existent on a bad one.”[62]

 Light at the end of the sustainability of infodiversity tunnel?

 James Boyle suggests we need parallel programmes of activism and scholarship to protect the public domain, in the face of a kind of second enclosure movement – an enclosure of the ‘commons of the mind’ by private prospectors like the large entertainment, technology and pharmaceutical industries.  Does this mean we need a kind of sustainable infodiversity in our global knowledge store, equivalent to a sustainable biodiversity in our physical and ecological environment?  In 2001 Edward O. Wilson wrote[63] that more than 99% of the world’s biodiversity was unknown and that we should rectify that state of affairs, since our ignorance was contributing to the destruction of the environment.  He outlines a five point plan[64] for doing this.

1.  Comprehensively survey the world’s flora and fauna.  This will need a large but finite team of professionals.

2.  Create biological wealth e.g. through pharmaceutical prospecting of indigenous plants.  Assigning economic value to biodiversity (e.g. as a source of material wealth as food or medicines or leisure amenities) is a key way to encourage its preservation.

3.  Promote sustainable development i.e. “development which meets the needs of the present without compromising the ability of future generations to meet their own needs”.

4.  Save what remains i.e. being realistic we are not going to halt environmental degradation overnight.

5.  Restore the wild lands e.g. through designating large areas of land as natural reserves like Costa Rica’s 50,000-hectare Guanacaste National Park.

 We could conceive of a parallel plan for that global information store, the infodiversity of which is potentially endangered by Boyle’s postulated second enclosure movement.

 1.  Comprehensively survey the world’s global knowledge store.

2.  We already have vast industries built on information wealth and intellectual property but we need to look at whether those industries are operating in a way which is in the best interests of a society requiring access to knowledge.

3.  Promote sustainable information development – information production and exploitation which meets the needs of the present without compromising the ability of future generations to build on that knowledge store.

4.  Save what remains e.g. seek to nullify developments in law or technology whose primary effect is the privatisation of knowledge and information in the public domain.

5.  Restore the wild lands.  Perhaps we need information reserves or wild lands, like networks of universities, other public institutions and open access knowledge projects, where ideas can be allowed to roam in the wild and the people in these institutions can exchange ideas without the need to deal with proprietary intellectual property claims of the commercial world, at least within the confines of the reserves?

Scientific knowledge is currently at a stage of development whereby the popular belief that we can synthetically create biodiversity is a complete pipedream.  Wilson suggested that the “search for the safe rules of biotic synthesis is an enterprise of high intellectual daring”.  Likewise the interaction of ideas, which creates the kind of infodiversity from which emerges other useful ideas, could be stifled by dividing up that public knowledge store amongst private owners.  It would be like trying to recreate the biodiversity of the African continent in Dublin Zoo or someone’s garden. 

 Wilson is an advocate of using the law to protect biodiversity: “The wise procedure is to use the law to delay, science to evaluate and familiarity to preserve. There is an implicit principle of human behaviour important to conservation: the better an ecosystem is known, the less likely it will be destroyed.”[65] We could justifiably ask whether intellectual property law, and indeed the whole portfolio of information and communications regulations, could play a similar role with our global information ecosystem. The concerned educator in me would say that they not only could play this role but should, and that academics have a duty to understand, explain and shape future developments in this area.

 If the last remaining trees or hedgerows in my neighbourhood were being ripped up by the local council to make way for a waste incinerator, I might feel strongly enough to talk to local people and get involved in a campaign to resist such a development.  I would do so because I would have an idea of the negative impact the plan would have on my family’s day to day life.  We do not have a picture of the impact when it comes to complex legal regulations or new technologies, however, because the effects are not so obvious or immediate. 

 Ultimately, the success or failure of what Boyle has called a second enclosure movement rests on the evolutionary battle for dominance between two competing memes – the idea that knowledge should be shared and the idea that it should be controlled. Both have staying power, the former pointing towards a future era of digital enlightenment and the latter towards one of tightly controlled access to knowledge.

 When I first read James Boyle’s and Larry Lessig’s work it left me pretty gloomy about the future of the knowledge society.  In spite of a number of negative developments since then in the direction of Boyle’s enclosure, though, I am now fairly optimistic about the power of the simple meme that sharing information is a good idea. In addition, and perhaps most importantly, given the limited capacity of contemporary professional politicians to understand and act effectively on these issues, some of the large commercial organisations have discovered that widespread open access to knowledge resources is good for their profit margins.[66]

 The trick will be to continuously manage the balance between the competing (and simultaneously complementary) notions that:

  • information should be shared and
  • information should be controlled

 But in a reiteration of the need for scholarly understanding, education and action, the final word should go to Larry Lessig or James Boyle, whose work forms much of the basis of this paper.  On this occasion it is appropriate that it should be Boyle, who has written extensively about how the concept of the environment and the environmental movement can inform our study and shaping of the landscape of digital content, both that covered by intellectual property and that in the public domain, towards a future, open, sustainable and enlightened digital infodiversity:

 “The environmental movement also gained much of its persuasive power by pointing out that there were structural reasons for bad environmental decisions -- a legal system based on a particular notion of what "private property" entailed, and a scientific system that treated the world as a simple, linearly-related set of causes and effects. In both of these conceptual systems, the environment actually disappeared; there was no place for it in the analysis. Small surprise, then, that we did not preserve it very well…

 And, what is true for the environment is -- to a striking degree, though not completely -- true for the public domain… The idea of the public domain takes to a higher level of abstraction a set of individual fights -- over this chunk of the genome, that aspect of computer programs, this claim about the meaning of parody, or the ownership of facts. Just as the duck hunter finds common cause with the bird-watcher and the salmon geneticist by coming to think about "the environment," so an emergent concept of the public domain could tie together the interests of groups currently engaged in individual struggles with no sense of the larger context.”[67]



[1] Everything is Miscellaneous: The Power of the New Digital Disorder by David Weinberger, Times Books, 2007

[2] Weinberger op. cit. p7. This is a further reference to Stewart Brand’s talk at the first Hackers Conference in 1984, where he said "On the one hand information wants to be expensive, because it's so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other."  See also The Media Lab: Inventing the Future at MIT by Stewart Brand, Viking, 1987, p202.

[3] OpenLearn 2007: Researching Open Content in Education, 30-31 October 2007, Milton Keynes, United Kingdom. www.open.ac.uk/openlearn/openlearn2007

[4] William F. Weld Professor of Law, Harvard Law School and Founder and Faculty Co-Director, Berkman Center for Internet & Society

[5] Jack N. and Lillian R. Berkman Professor of Entrepreneurial Legal Studies, Harvard Law School and Faculty co-director, Berkman Center for Internet and Society.  See in particular Benkler’s seminal work, The Wealth of Networks: How Social Production  Transforms Markets and Freedom, published by Yale University Press, 2006.

[6] See Weaving the Web by Tim Berners-Lee, Collins, 2000 for a first hand account of the tale.

[7] My own B2fxxx blog for example is at http://b2fxxx.blogspot.com/

[8] See T182 Law, the Internet and Society: Technology and the Future of Ideas, based on The Future of Ideas by Lawrence Lessig, Random House, 2001

[9] Professor of Law at Stanford Law School and founder of the school's Center for Internet and Society

[10] See Hubbert’s Peak: The Impending World Oil Shortage by Kenneth S. Deffeyes, Princeton University Press, 2001

[11] Digital Decision Making: Back to the Future by Ray Corrigan, Springer-Verlag, 2007.  I use the artificially constructed term ‘infodiversity’ to invoke the parallel idea of biodiversity in the digital knowledge context.

[12] See in particular The Future of Ideas (2001) op. cit. and Free Culture: The Nature and Future of Creativity by Lawrence Lessig, Penguin, 2004

[13] Information and communications technologies

[14] See Digital Decision Making: Back to the Future pp 40-42

[15] See Digital Decision Making: Back to the Future pp 61-63

[16] The expression “the man on the Clapham Omnibus” was coined by Lord Justice Bowen in the case of McQuire v. Western Morning News [1903] 2 KB 100, and is often used to refer to the hypothetical “reasonable man” in law.

[17] In the US alone, there has been an average of 4 new intellectual property laws passed each year since 1995, primarily as a result of lobbying on the part of interested industries.

[18] Also known as “digital restrictions management” by critics.  In the legislation DRM tends to be called “technological protection measures” (TPMs).

[19] OpenLearn 2007: Researching Open Content in Education, 30-31 October 2007.  Keynote address by John Seely Brown

[20] T182 Law, the Internet and Society: Technology and the Future of Ideas

[21] A US civil liberties group, focused on issues raised by new technologies, set up in 1990 by Mitch Kapor, John Perry Barlow and John Gilmore in the wake of the Secret Service’s raids on the Steve Jackson Games company. See http://www.eff.org/about/history for a succinct version of the history.

[22] The CBDTPA did not make it through in 2002 but it became the genesis of the so-called “broadcast flag” which has been the subject of much energetic activity on the part of lobbyists (on both sides of the divide), legislators and the Federal Communications Commission since then.

[23] The EFF had no problems making the animation available in the US, however, since parody is a strong defence against an accusation of infringement of copyright in that jurisdiction

[24] His PhD thesis was entitled, ''Methods of Testing E-book Security: How Secure Is Your Book.''

[25]At the DefCon conference in Las Vegas, 2001.

[26] The Department of Justice did bring a case against Sklyarov’s employer, Elcomsoft, the following year and Sklyarov did testify but the case was thrown out by a jury.

[27] The US Supreme Court introduced a new ‘inducing infringement’ test in their decision in the MGM v Grokster, 27 June 2005

[28] William Neal Reynolds Professor of Law and Director of the Center for the Public Domain, Duke Law School.

[29] See Boyle, J. The Second Enclosure Movement and the Construction of the Public Domain, Law and Contemporary Problems, Vol.66:33, 2003

[30] "Reading Between the Lines: Lessons from the SDMI Challenge"

[31] 4th International Information Hiding Workshop, Pittsburgh, April 26, 2001.

[32] See http://www.cs.princeton.edu/sip/sdmi/riaaletter.html

[34] Reading Between the Lines: Lessons from the SDMI Challenge by Scott A. Craver, Min Wu, Bede Liu, Adam Stubblefield, Ben Swartzlander, Dan S. Wallach, Drew Dean, Edward Felten, at the 10th USENIX Security Symposium, Washington D.C., 2001

[35] See for example ‘Censorship in Action: Why I Don’t Publish My HDCP Results’ by Niels Ferguson, August 15, 2001. Ferguson had discovered what was considered a serious security problem with an Intel video encryption system, High Bandwidth Digital Content Protection (HDCP). He did not publish the results and deleted all parts of his website relating to the research.  Though Ferguson is a Dutch citizen, he frequently travelled to the US on business and didn’t want to get caught up in a Sklyarov-type controversy.

[36] Issued by the US Patent Office in 1999.  Amazon went on to successfully sue Barnes & Noble, getting an injunction against their rival’s use of a 1-click button which interfered with their Christmas sales that year.  The patent has been through a range of complicated legal challenges and was eventually nullified by the US Patent Office in 2007.

[37] See Digital Decision Making: Back to the Future pp 61-63

[38] Actually the court deliberately and carefully avoided addressing the Universal v Sony test and it remains to be seen how they would rule should they be required to deal with it directly in the context of modern technologies.

[39] In the UK software is theoretically excluded from patenting, though patents have been granted here on inventions incorporating a software component

[40] The decision is available online at Groklaw, http://www.groklaw.net/staticpages/index.php?page=20070810205256644

[41] I’m particularly concerned as are many scientists such as Nobel Prize winner John Sulston (see The Common Thread: Science, Politics, Ethics and the Human Genome by John Sulston and Georgina Ferry, Bantam Press, 2002) about gene patenting but unfortunately do not have the space to get into a detailed discussion of the subject here.  I would just note one example – a company called Myriad Genetics holds patents on BRCA1 and BRCA2 genes, mutations of which indicate a predisposition towards developing breast or ovarian cancer. See also "Synthetic Biology: Caught between Property Rights, the Public Domain, and the Commons" by Arti K. Rai and James Boyle, PLoS Biology, 5(3): e58, March 2007, available at http://dx.doi.org/10.1371/journal.pbio.0050058

[43] See Blackboard Erases Research Presentation with Cease-and-Desist, TRO by Jennifer Jenkins, September, 2003.  It is available at http://www.chillingeffects.org/weather.cgi?WeatherID=383

[45] There is very little empirical work done in this area, though a comparison between the database markets in the US and EU since the EU directive on the legal protection of databases (1996) is instructive. On almost every measure the US database industry, where there is no equivalent protection, out-performs the EU sector. See also ‘Forever Minus a Day? Some Theory and Empirics of Optimal Copyright’ by Rufus Pollock at Cambridge University, available at http://www.rufuspollock.org/economics/papers/optimal_copyright.pdf. He concluded that the optimum term of copyright should be 14 years. See also ‘The Value of the Public Domain’ by Rufus Pollock, IPPR, 2006

[46] http://creativecommons.org/

[47] The RSA (Royal Society for the encouragement of Arts, Manufactures & Commerce) Adelphi Charter on Creativity, Innovation and Intellectual Property available at http://www.adelphicharter.org/

[49] See the EFF page on the Broadcasting Treaty for an example of some of the negotiating tricks that have been used at WIPO in recent years: http://www.eff.org/issues/wipo_broadcast_treaty . See also Information Feudalism: Who Owns the Knowledge Economy? by Peter Drahos with John Braithwaite, Earthscan, 2002, for a fascinating account of how this kind of process operated during the negotiations over many years at the General Agreement on Tariffs and Trade (GATT) which led to the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) in 1994

[50] ‘Nuclear Energy and the Fossil Fuels’ by M.K. Hubbert, American Petroleum Institute Drilling and Production Practice, Proceedings of Spring Meeting, 1956, pp7-25.  See also a critical assessment of Hubbert in ‘Fixed View of Resource Limits Creates Undue Pessimism’, by M.A. Adelman and M.C. Lynch, Oil and Gas Journal, 7 April, 1997 pp 56-60

[51] See An Inconvenient Truth: The Crisis of Global Warming by Al Gore, Viking Books, 2007, for a fairly dramatic account of the effects and The Skeptical Environmentalist: Measuring the Real State of the World by Bjorn Lomborg, Cambridge University Press, 1998 for a more sceptical perspective.  See also Silent Spring by Rachel Carson, Houghton Mifflin, 1962 (and a variety of reprints by Penguin Books since 1965).

[52] See the Free Software Definition by Richard Stallman

[53] See The Entropy Law and the Economic Process by Nicholas Georgescu-Roegen, Harvard University Press, 1971; Hubbert’s Peak: The Impending World Oil Shortage by Kenneth S. Deffeyes, Princeton University Press, 2001; The Diversity of Life by Edward O. Wilson, Penguin, 2001; Heat: How to Stop the Planet Burning by George Monbiot, Allen Lane, 2006; and The Prize: The Epic Quest for Oil, Money and Power by Daniel Yergin, Free Press, 1991.

[54] In an interview with Sean Michael Kerner of Internetnews.com in the summer of 2006, Papadopoulos stated that effective management of energy consumption was the single biggest challenge facing technology companies. http://www.internetnews.com/ent-news/article.php/3610771

[55] See the World Resources Institute’s EarthTrends: The Environmental Information Portal at  http://earthtrends.wri.org/searchable_db/index.php?action=select_countries&theme=6&variable_ID=574

[57] See in particular the Ndiyo project ‘set up to foster an approach to networked computing that is simple, affordable, open, less environmentally damaging and less dependent on intensive technical support than current networking technology.’ http://www.ndiyo.org/ http://www.ndiyo.org/intro/summary

[58] The Second Enclosure Movement and the Construction of the Public Domain by James Boyle op.cit. And also available at http://www.law.duke.edu/pd/papers/boyle.pdf see also Shamans, Software and Spleens: Law and the Construction of the Information Society by James Boyle, Harvard University Press, 2006.

[59] Given relative incomes the cost of a standard Western text book to a college student in the Philippines would be the equivalent of a US student paying upwards of $3000 for the same book.

[60] OpenLearn 2007: Researching Open Content in Education, 30-31 October 2007, Milton Keynes, United Kingdom. www.open.ac.uk/openlearn/openlearn2007

[61] The Structure of Scientific Revolutions by Thomas Kuhn, University of Chicago Press, 1962.

[62] ‘Will John Wilbanks Launch the Next Scientific Revolution?’ by Abby Seiff, Popular Science Magazine, July 2007

[63] The Diversity of Life by Edward O. Wilson, Penguin, 2001

[64] Edward O. Wilson, op. cit pp297-326

[65] The other side of that coin is that familiarity breeds contempt.

[66] Google is just one company that comes to mind here.

[67] The Second Enclosure Movement and the Construction of the Public Domain by James Boyle, Law and Contemporary Problems, Vol.66:33, Winter/Spring 2003, Nos. 1&2, pp 71, 73. This special edition of the journal was edited by Boyle and contains a whole host of related papers produced for the Conference on the Public Domain held at Duke University in November 2001. Available at http://www.law.duke.edu/journals/lcp/indexpd.htm
