
Monday, September 10, 2012

Regulation and trust in the digital economy: an uneasy relationship


A copy of the slides for, and the transcript* of, my talk at the Trust in the Digital Economy workshop at Aberdeen University last week.



Regulation and trust in the digital economy: an uneasy relationship

Good morning Aberdeen.

Abstract
What have terrorism, copyright infringement, spam, child protection and organised crime got in common? They have all been cited by policymakers as reasons for introducing internet-related laws. Unfortunately, too many of these regulations are passed by legislatures lacking a rudimentary understanding of the technologies they are attempting to control. This has significant implications for innovators, economic agents and citizen consumers which go to the heart of what it means to engender trust in the digital economy.

Introduction
In the next 20 minutes I'm going to shoot through some basic behavioural economic theory and give a couple of examples of regulations which I believe undermine trust in the digital economy. One is an existing law, the Digital Economy Act (DEA) 2010, and one, the Communications Data Bill (CDB), is passing through Parliament at the moment.

Whilst I'm outlining these ideas I'd like you to keep in mind that trust in the digital economy is one of the most fundamental issues of the 21st century. We need to trust more people, more institutions and more complex systems than ever before. We need to trust them from further away, via the internet and through technologies many of us don't understand. So creating and engendering trust is more difficult than ever before.

The scale of it all also means that the bad guys can do more damage than ever. And yet the traditional bad guys – the four horsemen of the infocalypse: drug dealers, child abusers, organised crime and terrorists – are not the key threats to trust, since they are few in number and operate at the fringes of society. The key threats arise from powerful governments and large organisations (including large criminal organisations) using their power to subvert trust. The global financial meltdown is the key case study of recent times.

I'd also like you to remember that the number of people and institutions we trust every day is huge – the utility companies and their employees that provide my energy and water, the producers of the food I've consumed in the past 24 hours, which I didn't have to test chemically before eating, the airports, airlines, pilots, ground staff, air crew, transport and security infrastructure that got me here today,[1] the websites I booked it all through, the police and public services that support order and stability.

3 Stakeholders’ model

OK, so getting back to the economics, it's useful to have a model through which to frame, or attempt to understand, some of these complex issues. One way to think of it is through groups of stakeholders. We can outline three generic groups of stakeholders in the digital economy –

  • the innovators/creators who come up with the ideas that form the basis of our products & services
  • the economic agents – by economic agents I mean commerce, public services and government – that get products and services to the public
  • the public – citizen consumers

I don't much like either word, 'citizen' or 'consumer', but together they serve to separate this group from the other two.

For trust to thrive we need to look after the interests of all three sets of stakeholders.  All three need to thrive and that balance of interests is theoretically assumed to be delivered through Adam Smith’s invisible hand of the market working in harmony with enlightened governance.

Each of these three sets of stakeholders constitutes a complex ecology in itself. Different innovators have different interests. Different economic agents often have competing and/or conflicting interests. You only have to think about internet file sharing and the postulated damage it has done to the traditional large music labels, online news v newspapers, or Amazon v ordinary bookshops.

There are fierce legal battles in the mobile and tablet computing space, with more than 50 Android patent cases being litigated globally. In the past couple of weeks a US jury awarded Apple more than $1 billion in damages against Samsung. The same week a Korean court issued injunctions against the sale of Apple and Samsung products for infringing each other's patents. Last week Samsung won their latest court battle with Apple in Japan. These two companies alone are facing off against each other in courtrooms in ten different jurisdictions. In the UK Samsung currently have the upper hand, but only because the judge considered Apple's products “much cooler”.

If you buy into some of the political rhetoric in the context of the US presidential race then the Government is against everybody.

So the reality is more complex than the model but for now let’s stick with the 3 stakeholder groups.

Behavioural forces model

What is it, then, that regulates the behaviour of these sets of stakeholders?  Yochai Benkler and Lawrence Lessig suggest there are four key forces:

  • social norms
  • the market
  • the environment or architecture
  • the law

Social norms dictate how we behave in social groups. When I first moved to the south of England to work, I didn't know I was not supposed to say hello to a stranger on a train. My attempts to engage someone in conversation were subject to suitably disdainful and horrified looks from my fellow passengers, who tried valiantly to ignore me. Having been normalised after 20 years, I can dish out the dirty looks with the best of them.

Social norms punish deviation after the event.

Market forces also regulate behaviour. Markets dictate that we don't get access to something unless we offer something of value in exchange. The price of cigarettes is potentially a constraint on a child's opportunity to smoke. Unlike social norms, market forces regulate at the time of the transaction. If children have no money, retailers will not sell them cigarettes.

Law and legal regulations provide the framework through which governments prescribe what is acceptable behaviour and what is not. Law acts as a threat. If we don't follow the law there is a risk that we will be found out and punished. I could cheerfully strangle several of the zombie bureaucrats I deal with on a daily basis, but in addition to having some ethical concerns about murder, I'd prefer not to deal with the legal consequences of engaging in such activity.

Under the law, as with social norms, the punishment happens after the event.

'Architecture' – the built environment and the laws of physics, i.e. how the physical world is and the limits it imposes – also regulates behaviour. Architecture is particularly important in the context of the digital economy, since digital technologies are entirely human constructs and designs.

Like market forces, constraints on behaviour imposed by architecture happen when we are trying to engage in that behaviour. For example, if a building has steep steps at the entrance and no other way in, it is difficult for a wheelchair user to enter the building unaided.

Prolific 20th-century New York City planner Robert Moses built highway bridges along roads to the parks and beaches in Long Island that were too low for buses to pass under. Hence the parks and beaches were accessible only to car owners – many of them white, middle-class or wealthy. Poor people without cars, mainly African Americans and other minorities, were forced to use other parks and beaches accessible by bus. Thus social relations between black and white people were regulated: an example of discriminatory regulation through architecture.
It should be noted that Moses vehemently denied that there was any racist intent on his part. In one sense, his intent is irrelevant. The architecture regulated behaviour whether he intended it to or not.

Architecture is also self-regulating – the steep steps get in the wheelchair user's way because they are steep and they are steps! Laws, norms and markets can only constrain when a ‘gatekeeper’ chooses to use the constraints they impose.

There were a couple of stark examples of architecture and technology regulating behaviour in an uncontrolled way in the past week. On the evening of the 2nd of September, roving network bots shut down the live streaming of the Hugo Awards ceremony just as Neil Gaiman was receiving an award for his scripting of a Dr Who story, The Doctor's Wife. The company policing the stream for copyright infringement, Ustream, could not stop the bots from censoring the live video of one of the world's most prestigious science fiction award ceremonies – a ceremony which could not otherwise be viewed live except by those physically present. Ustream could not shut down the bots once these automated stream killers decided to block the video, because the bots were programmed and deployed by a third-party company, Vobile, which Ustream had subcontracted to do automated takedowns. The bots saw Dr Who clips and decided their broadcast wasn't allowed, regardless of rights clearances or fair use.

Ustream have reportedly discontinued their business relationship with Vobile.[2] Vobile's CEO tells a different story from Ustream's, explaining that Vobile only notifies the client when its bots find a match for material tagged as copyrighted in its database. Vobile had no control over takedowns, which, as far as the Hugo Awards were concerned, were entirely Ustream's remit.[3]

Who is really responsible is beside the point, though: technological architecture inappropriately shut down a legitimate internet broadcast without due cause or justification.
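
To make that architectural point concrete, here is a minimal, hypothetical sketch in Python of the kind of logic such a takedown bot runs. The fingerprint database and function names are my invention, not Vobile's or Ustream's actual system; the structural point is that nothing in the matching loop knows anything about licences, clearances or fair use.

import hashlib

# Invented stand-in for a fingerprint database mapping fingerprints
# of registered works to their titles. Real systems use proprietary
# perceptual hashing; a plain hash serves for illustration.
REGISTERED_FINGERPRINTS = {}

def fingerprint(video_segment: bytes) -> str:
    """Stand-in for a perceptual hash of a few seconds of video."""
    return hashlib.sha256(video_segment).hexdigest()[:8]

def register(work_name: str, sample: bytes) -> None:
    """A rightsholder tags a sample as copyrighted."""
    REGISTERED_FINGERPRINTS[fingerprint(sample)] = work_name

def police_stream(stream_segments):
    """Block the stream on the first fingerprint match.

    Note what is absent: no check for rights clearances, fair use
    or fair dealing, and no human review before the block.
    """
    for segment in stream_segments:
        match = REGISTERED_FINGERPRINTS.get(fingerprint(segment))
        if match is not None:
            return "STREAM BLOCKED: matched " + repr(match)
    return None

# A broadcaster with full clearance to show award clips is treated
# exactly like a pirate: the architecture cannot tell them apart.
register("Doctor Who - The Doctor's Wife", b"dr-who-clip")
print(police_stream([b"award-ceremony-intro", b"dr-who-clip"]))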

In a postscript to the story, overzealous DRM bots hit the video of Michelle Obama's speech to the Democratic National Convention a few days later. This time the bots didn't shut down the live stream but blocked access to the video after the event.[4] What's amusing about this is the Democratic Party's long history of support for entertainment industry lobbying for ever more stringent copyright laws.

Architecture is a massively important regulator in the context of the digital economy.

Force of law

Here’s a pictorial representation of the four forces regulating an individual. It’s an over-simplified picture again because the different forces also interact with each other and have different powers of influence depending on the context and the stakeholder in question.
For the rest of this talk I'd like to focus mainly on the distorting power of one of the forces – ill-informed laws – in undermining trust in the digital economy. I have chosen a couple of examples from recent times to illustrate the issue. Both are laws that attempt to mandate dangerous technological architectures of surveillance and control.

  • The Digital Economy Act (DEA) 2010
  • The Communications Data Bill (CDB) currently under consideration

DEA

The Digital Economy Act was passed in the “wash up” of laws just before the last general election. That process is supposed to be used only for uncontroversial measures that are agreed between the front benches of the main political parties. The DEA was very controversial, however, and over 20,000 people, as well as big telcos and technology companies, wrote to MPs in an ultimately futile effort to get it blocked. BT and TalkTalk subsequently challenged it through the courts, unsuccessfully.

Sections 3-18 of the DEA essentially make ISPs responsible for policing the internet for copyright infringement.  Detailed provisions relate to:

  • notifying subscribers of reported infringements
  • providing infringement lists to copyright owners
  • obligations to limit internet access
  • obligations to engage in website blocking (the current government have decided to abolish this since Ofcom said it was unworkable)

Ofcom and the government are working on the details of how this will all operate in practice.  The Hargreaves ‘Review of Intellectual Property and Growth’ cited the passage of the DEA as an example of the distortion of public policy by questionable evidence.

The reality of the DEA's online copyright infringement provisions (sections 3-18) may finally begin to hit home next year (theoretically), when thousands of people start to get accusatory letters about copyright infringement from their ISPs. The UK courts have not fully tested the evidence presented in such cases, as the few that have been pursued were eventually settled out of court, so there is no authoritative legal guidance on standards of evidence or process.

In any case the systematic threatening of large numbers of people by ISPs on behalf of the copyright industries is unlikely to be conducive to engendering trust in the digital economy.

It was not even clear until very recently whether the process of identifying the accounts of suspected copyright infringers could be done with any degree of forensic integrity. Thanks to a report[5] by Dr Richard Clayton of Cambridge University for Consumer Focus, it appears that this may be possible, as long as the careful, detailed set of procedures he outlines in the report is followed.[6] But he emphasises that his blueprint is time-limited and will be useless once peer-to-peer network technologies evolve to incorporate encryption routinely.
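
To give a flavour of what forensic integrity demands here – this is my own illustrative sketch of the kind of evidence record such monitoring has to produce, not Dr Clayton's actual procedure – consider what has to be captured and verified before an IP address can fairly be linked to an infringement:

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class InfringementRecord:
    """One observation of a peer apparently sharing a monitored work.

    Illustrative only - the field names are invented. The point is
    how much must be captured accurately for the record to survive
    scrutiny: a clock a few minutes wrong, or a missing port number,
    and the ISP may identify the wrong subscriber entirely.
    """
    work_title: str          # the copyrighted work being monitored
    content_hash: str        # hash of the data actually received
    ip_address: str          # as observed on the wire, not self-reported
    port: int                # needed where ISPs share IPs between users
    observed_at: datetime    # from a verified (e.g. NTP-synced) clock
    data_verified: bool      # did we download and check real content,
                             # or merely see the peer listed in a swarm?

record = InfringementRecord(
    work_title="Example Film",
    content_hash="9f2c...",                 # elided for illustration
    ip_address="203.0.113.7",               # documentation-range IP
    port=51413,
    observed_at=datetime.now(timezone.utc),
    data_verified=True,
)

# A record where data_verified is False - the monitor only saw the IP
# advertised in a tracker's peer list - proves very little: peer lists
# can be polluted with arbitrary, entirely innocent addresses.
assert record.data_verified, "peer-list sightings alone are not evidence"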

There is a lot of heated rhetoric exchanged through the mainstream media whenever new copyright regulations like the DEA come along (and there have been a lot of them over the past 15 or 20 years).  So it might be instructive to take some of the heat out of the debate by looking at the impact of copyright law on our three sets of stakeholders.

Take innovators/creators. We can imagine that there is an optimum standard of copyright law that will encourage innovators to maximise their creative/inventive activity. As the strength of copyright increases from nothing, the economic incentive to create becomes greater. But there will be a point at which the incentive decreases, because copyright gets so strong that it prevents creators building on the work of earlier creators. Copyright in the UK and many other jurisdictions now lasts for the life of the author plus 70 years, so creative artists are theoretically precluded from using most of 20th-century culture as the basis of or inspiration for their work.

We can also imagine that the public might prefer copyright to be weaker to enable them to access more creative work at cheaper prices.

Similarly, the economic agents – the music, film, software and media companies and publishers – that get creative work from the creators to market might prefer stronger copyright laws.

In theory we could tweak copyright law to balance the interests of all three sets of stakeholders. We could measure the effects, feed that evidence back into the policymaking process and ultimately evolve an informed, evidence-based set of copyright laws. The chosen level won't be ideal for every stakeholder, but it should be possible to agree a compromise that balances the interests of all three, as the toy model below illustrates.
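
Here is a deliberately crude toy model of that balancing exercise in Python. Every curve and number is invented purely for illustration – the real policy problem is precisely that we lack good empirical estimates of these functions.

# Toy model of balancing copyright strength across three stakeholder
# groups. All curves are invented for illustration only.

def creators(s):
    # Incentive rises with strength, then falls once strong copyright
    # stops creators building on earlier work (an inverted U).
    return s * (1.0 - s)

def agents(s):
    # Intermediaries holding valuable rights prefer stronger copyright.
    return 0.4 * s

def public(s):
    # The public prefers weaker copyright: cheaper, wider access.
    return 0.5 * (1.0 - s)

def total_welfare(s):
    return creators(s) + agents(s) + public(s)

# Scan copyright 'strength' from 0 (none) to 1 (maximal) and pick the
# compromise that maximises the (invented) aggregate welfare.
grid = [i / 100 for i in range(101)]
best = max(grid, key=total_welfare)
print(f"toy optimum strength: {best:.2f}, welfare: {total_welfare(best):.3f}")

In these made-up numbers the aggregate peaks at a moderate strength of 0.45 – well away from both zero copyright and maximal copyright – which is the whole point of the balancing exercise.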

That's not the way it worked with the DEA. There was no evidence, just effective lobbying by the large entertainment industries. The big winners with the DEA are some agents – a select few large music labels and movie companies who hold the copyright on commercially valuable works – and some creators – mostly the tiny percentage of global superstars who earn large sums from royalties.

The losers with the DEA were everybody else – the public, other creators and other economic agents, in particular the ISPs, who have to make very costly technology investments and run operational processes to police copyright on the Net.

The other big loser was trust.  It is not good business practice to threaten, throttle or block the internet connections of your customers.

CDB

The second regulatory vehicle I'd like to look at briefly is the Communications Data Bill. Section 1 of the Bill essentially gives the Secretary of State and her successors a blank cheque: they get to order anyone to do anything that can be related to facilitating access to communications data. Yes, she gets carte blanche to order tracking, monitoring, surveillance, watching, interception, collection and use of any data she likes about everybody, however and whenever she feels like it, with essentially no meaningful oversight or accountability.

Section 9 of the Bill relates to the authorisations for obtaining data by police and other public authorities.

The government has the right to intercept and record information when someone is suspected of a serious crime. They have the right to engage in targeted, intelligence-led surveillance of anybody. They should not have the right to engage in monitoring everyone. But these proposals mean collection of data without suspicion or oversight, which is in effect uncontrolled mass surveillance. Due process requires that surveillance of a genuinely suspected criminal be based on much more than general, loose and vague allegations, or on suspicion, surmise or vague guesses. To instigate the new set of legal norms envisaged in the Communications Data Bill, giving the entire population less protection than a genuine criminal suspect has hitherto enjoyed, is indefensible. The gathering of mass data to facilitate future unspecified fishing expeditions is also unlawful.

There is a significant danger with measures like the CDB of stumbling by default into a police state, just because the technology of mass surveillance is now more readily available and nominally more sophisticated. We need to avoid deploying these technologies blindly in response to perceived threats. Without sufficient reasoned analysis of the purpose and detailed requirements of the technical systems we propose to build to counter those threats, we could find ourselves building technological monsters. Building an infrastructure of surveillance makes our three sets of stakeholders – innovators/creators, economic agents and the public – more vulnerable, not less, to attacks by criminal elements such as the four horsemen of the infocalypse and rogue states with malevolent intent.

ISPs will evolve from the copyright police of the DEA into surveillance agents of the state under the CDB, and that kind of mass surveillance is no way to engender trust. It also doesn't work, as any of the economists in the room with an understanding of Bayes' theorem and the base rate fallacy will tell you.[7]
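
For anyone who wants the arithmetic, here is the base rate problem in miniature, in Python. The population and accuracy figures are invented and deliberately generous to the surveillance system:

# Base rate fallacy in mass surveillance, via Bayes' theorem.
# All numbers are illustrative and generous to the system.

population = 60_000_000      # roughly the UK
terrorists = 3_000           # a wildly pessimistic assumption
p_terrorist = terrorists / population          # prior: 1 in 20,000

sensitivity = 0.99           # P(flagged | terrorist)
false_positive_rate = 0.01   # P(flagged | innocent) - implausibly good

# Bayes' theorem: P(terrorist | flagged)
p_flagged = (sensitivity * p_terrorist
             + false_positive_rate * (1 - p_terrorist))
posterior = sensitivity * p_terrorist / p_flagged

innocents_flagged = false_positive_rate * (population - terrorists)
print(f"P(actual terrorist | flagged) = {posterior:.4f}")   # ~ 0.005
print(f"innocents flagged: {innocents_flagged:,.0f}")       # ~ 600,000

# Even with 99% detection and a 1% false positive rate, about 199 of
# every 200 people flagged are innocent, and the investigators are
# buried under roughly 600,000 false leads.

Screening tens of millions of people for a vanishingly rare behaviour drowns the genuine signals in false positives, however good the technology.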

Under the CDB the only winners are the suppliers of the technology of surveillance. All other stakeholders lose and the damage to trust is potentially irreparable, in spite of the public being hugely forgiving of mass data collection.

We get bad laws when governments don’t understand technology

It’s a caricature but governments generally have two simplistic perspectives on technology:

  1. It is a magic solution to ill-defined political problems that can be easily presented to the rabid 24-hour news media
  2. It is a terrifying tool that is used by the four horsemen of the infocalypse for nefarious ends, therefore requiring blanket surveillance.

They see the internet as TV on steroids, or as an online shopping centre best controlled by the entertainment industry.

This ignorance is bad for our three sets of stakeholders:

  • Innovators/creators
  • Agents
  • Public

The laws this ignorance fosters, such as the DEA and the CDB, undermine trust. And they undermine it in ways that are at best difficult and at worst almost impossible to remedy. Mandating and building technological architectures of surveillance is bad for everyone but the agents who monetise and control those technologies.

Conclusion

The innovators and the public are hugely forgiving of, or blind to, data gathering by economic agents (both commerce and government).

That gives the powerful agents – commercial and government – substantial responsibility. Data pollution is the environmental disaster of the digital age and it is going to play havoc with trust in the digital economy.

For commercial agents, delivering secure, convenient products and services at a reasonable price – not suing the competition on the basis of dodgy law – is the key to success.

For governments, can I recommend Professor Chris Reed’s doctrine of creative inertia when it comes to making laws about the internet, especially in relation to mandating architecture of surveillance? Mainly, don’t. And if you must, take the time, the care and the considerable cognitive effort that is required to find out what it is you’re dealing with first.

In relation to trust in the digital economy where have we got to?

Firstly we should understand, as my friend John Naughton says, that the internet is a global machine for springing surprises.

So:
  • Innovators need to engage with it
  • Governments need to apply the creative inertia principle to regulating, especially with respect to surveillance architectures. They also need to make a better effort to understand the internet (and those subsets of stakeholders who do understand it need to get better at explaining it to them)
  • Commercial agents, particularly those making hay from bad regulations – e.g. entertainment companies and surveillance technology companies – need to understand that unfettered data pollution will come back to bite all three sets of stakeholders
  • And citizen consumers need to get educated, engaged and active

That’s it and if you have been, thanks for listening.
 

*Transcript of the talk as written rather than as delivered. I spent a little longer on the DEA than I intended and didn't cover the CDB other than in outline.

[1] Following some interesting experiences at Heathrow airport yesterday I was tempted to change my talk to outline how lack of trust nearly led to me not making it here at all but that’s a story for another day.  See http://b2fxxx.blogspot.co.uk/2012/09/the-chief-immigration-officer-and-me-or.html for the details.
[2] How copyright enforcement robots killed the Hugo Awards http://io9.com/5940036/how-copyright-enforcement-robots-killed-the-hugo-awards.
[3] See Don't blame the copyright bots, says CEO of copyright bot company http://www.slate.com/blogs/future_tense/2012/09/07/vobile_ceo_yangbin_wang_copyright_bots_didn_t_kill_ustream_s_hugo_awards.html
[7] See Rudmin, Floyd (2006) ‘The Politics of Paranoia and Intimidation: Why does the NSA engage in mass surveillance of Americans when it is statistically impossible for such spying to detect terrorists?’ http://www.counterpunch.org/rudmin05242006.html for a lovely succinct illustration of this.
