Saturday, August 01, 2009
Update: Here's Ars Technica's take on it.
Update 2: Ray Beckerman doesn't think any of the key p2p copyright infringement litigation issues have been addressed in the case. Marc Bourgeois has been blogging about the proceedings at Beckerman's blog this week.
Update 3: More reactions from the usual suspects.
Friday, July 31, 2009
"What finally put me over the edge? It wasn’t the routinely dropped calls, something you can only truly understand once you have owned an iPhone (and which drove my friend Om Malik to bail). I’ve lived with that for two years. It’s not the lack of AT&T coverage at home. I’ve lived with that for two years, too. It certainly isn’t the lack of a physical keyboard; that has never bothered me. No, what finally put me over the edge is the Google Voice debacle...

Google is planning on rolling out number portability, so I can move my mobile phone number to Google. None of my friends, family or contacts have to store a new number...

Or so I thought. Apple and AT&T are now blocking the iPhone version of the Google Voice app. Why? Because they absolutely don’t want people doing exactly what I’m doing - moving their phone number to Google and using the carrier as a dumb pipe.

So I have to choose between the iPhone and Google Voice. It’s not an easy decision. Except, it sort of is. Google isn’t forcing the decision on me, Apple and AT&T are. So I choose to work with the company that isn’t forcing me to do things their way. And in this case, that’s Google."

I was explaining Apple's blocking of the iTunes sync feature on the Palm Pre recently to my 11-year-old and he said, "You know dad, I hate all this DRM stuff. Apple are really stupid and annoying. People should be able to play the music they've bought on any device they want." Spot on, son; and people should be able to use their phone on any network with any app they want too.
Thursday, July 30, 2009
"The nation’s cellphone networks could suffer “potentially catastrophic” cyberattacks by iPhone-wielding hackers at home and abroad if iPhone owners are permitted to legally jailbreak their shiny wireless devices — that’s what Apple claims.

The Copyright Office is considering a request by the Electronic Frontier Foundation to legalize the widespread practice of jailbreaking, in which iPhone owners hack their devices to accept software that hasn’t been approved for distribution through the iPhone App Store. Apple made the claim in comments filed last week (.pdf) with the agency...

Threat Level had no idea the iPhone was so dangerous. We’re gratified that Apple locked down this potential weapon of mass disruption before hackers could unleash cybarmageddon. This also explains why Apple rejected the official Google Voice App for the iPhone this week. We thought it was because Google Voice posed a threat to AT&T’s exclusivity deal with Apple. Now we know it threatened national security."

Hee hee.
"Finnish activist Mikko Rauhala has lodged an application with the European Court of Human Rights against the Finnish state, regarding his right to freedom of expression.

The appeal is a follow-up to the lengthy court process against Mr. Rauhala in Finland. The process started when Mr. Rauhala set up a discussion board on the Internet, on which people could talk about the DVD copy protection method Content Scrambling System (CSS). Mr. Rauhala's motivation was to criticize the implementation of the EU Copyright Directive in Finland, which came into force in 2006. Under that implementation, "organized discussion" of circumventing technological protection measures, such as CSS, was prohibited. Mr. Rauhala wanted to demonstrate the harmful nature of the directive. Thus, he himself reported his actions to the Finnish police in the first place, thinking that the police would not investigate the issue or that the public prosecutor would not press charges.

However, the police started their investigation, and the public prosecutor was strongly supported by the Finnish Anti-Piracy Association, which arranged the "expert" testimonies for the prosecution. Soon thereafter, the case was referred to the Helsinki District Court, which decided in Rauhala's favor: CSS was not a type of protection measure covered by the Directive and therefore the ban did not apply. The prosecutor appealed to the Helsinki Appellate Court, which held that Mr. Rauhala was guilty of illegally circumventing a technological protection measure and of providing an illegal service for the circumvention of protection measures. The Supreme Court of Finland denied Mr. Rauhala's application for leave to appeal.

Mr. Rauhala had no other choice than to make an application to the European Court of Human Rights, because none of the court instances considered his argument that the discussion he was administering fell within the confines of the Right to Freedom of Speech, as enshrined in the Finnish Constitution. Mr. Rauhala is represented by the Helsinki, Finland based law firm Turre Legal.

EDRi-gram: Finnish CSS decision overturned (4.06.2008)

(contribution by Markku Räsänen - Summer Associate, Turre Legal, Finland)"

It will certainly be interesting to see what the ECHR makes of it.
"The response of the tech-savvy was, predictably, pretty savage. Techdirt ("it's difficult to think of anything quite this useless") at least offered some principles on which sustainable web businesses might be built. Others were not as kind. Someone even created an extremely profane and sometimes juvenile, but nevertheless quite funny anonymous graphical translation of the AP's diagram to explain the new plan. The criticisms of the plan (clueless graphics aside) centered around two tenets that are familiar to Techdirt readers:

- an argument that DRM is a.) doomed to fail technologically and b.) has in fact already failed in social and economic practice...

- an assertion that "old media" (other names include "the clueless", "dinosaurs", "non digital natives", "the walking dead", etc.) are demonstrably incapable of understanding the potential upside of the sharing economy, or copy-friendly technologies, still less the business models that can be built on top of them...

Personally, I am at best agnostic about tenet #1. I am not a technological determinist. I think that DRM has failed spectacularly in some areas (root kits on CDs)... and become standard (even if not loved) in others...

The Jesse Dylan video James mentioned is this one:
On tenet #2, I think we are thinking too narrowly. Behavioral economists have identified specific deviations from economic rationality in human psychology-- we tend to value potential losses asymmetrically from potential gains, to use simple heuristics even when they are shown to be false and so on. In my new book, The Public Domain (freely available online, of course) I argue that we have a measurable cognitive bias against "openness" -- I call it cultural agoraphobia, and I argue that it impedes us in understanding the creative potential, productive processes and forms of social organization that the web makes possible... I would even argue that this cognitive bias, even more than industry capture of regulators, is one reason why our current intellectual property policy is so profoundly and utterly misguided. But its implications are wider still...
Unless you believe that markets spontaneously self-correct for everything (hint, check your IRA balance before you answer this question) you have to acknowledge that the problem that the AP is responding to may be our problem (how to pay for the kind of expensive investigative journalism that is a real boon to democracy and liberty) as well as their problem (how not to die in the immediate future.)...
I was lucky enough to be involved with Creative Commons from its inception and to help found Science Commons and ccLearn. Those organizations were designed to solve a particular problem for which there was a market and legal gap -- the problem of failed sharing. Jesse Dylan's brilliant video on the subject explains it better than I could. Are there equivalent institutional innovations that could help in the area of news gathering? I don't know. Journalism isn't my field. But without the kind of institutional innovation and experimentation in civil society that Creative Commons (or the Kiwanis) represented, I think that we are unlikely to solve its problems. Web 2.0 business methods alone, even with a Techdirt crystal ball, will not be enough. If I am right, mocking the clueless will be a poor consolation."
Wednesday, July 29, 2009
"acquiescence to long-accepted practices has dulled us to the Constitution’s bracingly straightforward words. We should read them anew and reflect that the Founding generation did not evidently think that granting statutory privileges to such purely artistic creations as romantic operas or pretty pictures would promote the progress of both science and the useful arts. Furthermore, most citizens today would, if presented with the Constitution’s plain language rather than the convoluted arguments of professional jurisprudes, probably say the same thing about pop songs, blockbuster movies, and the like. That is certainly not to say that purely expressive works lack value. They may very well promote such important goals as beauty, truth, and simple amusement. The Constitution requires that copyright promote something else, however—”the Progress of Science and useful Arts”—and a great many works now covered by copyright cannot plausibly claim to do both."

Interesting argument. He does accept, however, that the chances of winding back the clock on this through the courts are basically slim to non-existent.
Certainly shades of his other current favorite programme, Top Gear, in there.
Getting back to biometric matters, and specifically the gait recognition segment of the show, I penned a follow-up piece for the open2.net Bang blog, explaining some of the issues associated with the use of biometrics for surveillance and offering links to more in-depth analyses. A copy of those thoughts follows:
The idea of gait recognition has been around for a long time. In G.K. Chesterton’s short story The Queer Feet, Father Brown prevents a crime “merely by listening to a few footsteps in a passage.” Gait analysis has been widely deployed in professional sports and medicine, enabling sports stars to improve their golf swing, running stance or cycling position, and helping in the design of prosthetic limbs, for example.
As a means of identifying someone at a distance, without any need to inconvenience the people being analysed, it would appear to be a useful proposition. It is important to note, however, that identifying someone in a crowded city square and verifying that someone is one of 200 people who have walked down a colourful corridor with clear contrast under carefully controlled laboratory conditions, are two entirely different problems.
Technically speaking, checking the gait of one person, in a psychedelic corridor with perfect lighting conditions, to find a match in a database of 200 recorded gaits, is relatively straightforward.
Detecting individual gaits in a dynamic, crowded city square, under less than ideal lighting conditions and pinpointing a baddie by attempting to match those (potentially) millions of readings against a database of millions of recorded gaits, is a much more difficult problem.
And we haven’t even thought about how we would get accurate measurements of millions of people’s (or indeed the baddie’s) walking styles into our benchmark database in the first place. Then if the baddie puts a stone in his shoe to change his walk and deliberately fool the software, as Dallas did with his funny walk on the first programme in the Bang Goes The Theory series, it becomes even more difficult.
From a security perspective, the notion that mass surveillance with advanced technology will magically detect the baddie turns out to be fundamentally flawed. (It should be noted that mass surveillance is widely and wrongly promoted as an effective anti-terror tool, but it is not advocated by the team at Southampton.)
Because terrorists are relatively rare, finding one is a needle in a haystack problem. You don’t make it easier to find the terrorist by throwing more hay (say the biometric data of millions of innocent people) on your data haystack. The technology doesn’t simply home in on the criminal as it does in Hollywood movies.
The police and security services end up spending so much time dealing with innocent people and false leads that their limited resources get swamped.
If each of the UK’s population of around 60 million were monitored once a day and our system was 99% accurate (i.e. it flags 1 in 100 innocents as terrorists and detects 99 out of every 100 terrorists), the police would have to process around 600,000 false leads per day.
Given those of us who traverse public places are monitored multiple times a day you can see how that could quickly become unmanageable. It’s also unacceptable from a social, legal and economic point of view.
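The arithmetic above can be sketched in a few lines of Python. This is my own back-of-the-envelope illustration: the terrorist count and the once-a-day scanning assumption are made up for the sake of the example, not figures from the post.

```python
# Base-rate arithmetic for the surveillance example: 60 million people
# scanned once a day by a system that wrongly flags 1% of innocents.

population = 60_000_000      # approximate UK population
false_positive_rate = 0.01   # 1 in 100 innocents flagged as terrorists
true_positive_rate = 0.99    # 99 in 100 real terrorists detected
terrorists = 100             # hypothetical number at large (assumption)

innocents = population - terrorists
false_leads_per_day = innocents * false_positive_rate
real_hits_per_day = terrorists * true_positive_rate

print(f"False leads per day: {false_leads_per_day:,.0f}")   # ~600,000
print(f"Genuine detections:  {real_hits_per_day:,.0f}")     # ~99

# Of all flags raised, the fraction pointing at a real terrorist:
precision = real_hits_per_day / (real_hits_per_day + false_leads_per_day)
print(f"Chance a flag is a real terrorist: {precision:.5f}")  # ~0.00016
```

Even with a generous 99% accuracy, fewer than 2 flags in 10,000 point at a real terrorist; the rest are the hay on the haystack.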
So it is probable that the use of gait recognition and other biometrics will prove to be more useful for small-scale authentication (e.g. employee access to the workplace) than for large-scale surveillance (e.g. picking a terrorist out of a crowd).
On small-scale authentication
Technically speaking, authentication or verification is an easier thing to do than identification. Authentication with biometrics (assuming we’re not trying to do it remotely) merely asks whether a biometric belongs to the person presenting themselves for authentication. It compares their proffered biometric with the one on file under their name and determines whether there is a match.
Identification is much harder to do and is what security systems at airports or busy shopping areas or sports stadiums attempt to do – measure the biometrics of everyone passing through and attempt to check whether there is a match with a large (and not necessarily particularly reliable) database of biometrics.
The difference appears pedantic but is very important. In the authentication case one biometric is checked against one specific biometric on the database. In the identification case, millions of biometrics are checked against millions (potentially) of biometrics on the database.
Even with highly reliable technologies – say 99.9% accurate, and none of the modern systems approaches that yet – these millions of checks searching for matching pairs generate huge numbers of false positives (innocents flagged as malcontents) and dangerous levels of false negatives (real bad guys flagged as innocent – and it only takes one getting through to cause serious security problems).
The police and security services then spend so much time, energy and resources dealing with innocent people they don’t have the time to deal with the real criminals.
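To make the scaling difference concrete, here is a rough Python sketch in the same spirit as the example above. The database size, daily crowd traffic and the 99.9% figure are all hypothetical, chosen only to show how the number of comparisons explodes in the identification case.

```python
# Why identification scales so much worse than authentication.
# All figures are illustrative assumptions, not real system benchmarks.

false_match_rate = 0.001     # a (generous) 99.9%-accurate matcher

# Authentication: one probe checked against ONE enrolled template.
auth_comparisons = 1
auth_false_matches = auth_comparisons * false_match_rate   # 0.001

# Identification: every probe checked against the WHOLE database.
database_size = 1_000_000    # hypothetical watchlist of 1 million
probes_per_day = 1_000_000   # hypothetical daily crowd traffic

id_comparisons = probes_per_day * database_size
id_false_matches = id_comparisons * false_match_rate

print(f"Authentication comparisons per check: {auth_comparisons}")
print(f"Identification comparisons per day:   {id_comparisons:,}")
print(f"Expected false matches per day:       {id_false_matches:,.0f}")
```

One check versus a trillion, and around a billion expected false matches a day: the per-comparison error rate that is perfectly tolerable for verification becomes catastrophic at identification scale.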
Find out more
Floyd Rudmin, Professor of Social & Community Psychology at the University of Tromsø in Norway, explains why, statistically speaking, mass surveillance cannot work in this article:
The Politics of Paranoia and Intimidation: Why does the NSA engage in mass surveillance of Americans when it's statistically impossible for such spying to detect terrorists?
Counterpunch magazine, May 24, 2006
For those interested in the use of biometrics and security more generally I’d recommend:
Beyond Fear: Thinking Sensibly About Security in an Uncertain World
Bruce Schneier, Springer-Verlag New York Inc
Freedom to Tinker - hosted by Princeton's Center for Information Technology Policy.
Jerry Fishenden - New Technology Observations from a UK Perspective.
UK High Court Judge, the Hon Sir Jack Beatson, explains the legal issues with the use of biometrics in crime detection in Forensic Science and Human Rights [pdf], his valedictory address as President of the British Academy of Forensic Science, 16 June 2009.
Nuffield Council on Bioethics report, The forensic use of bioinformation: ethical issues [pdf], published in September 2007.
Human Genetics Commission Citizens' Inquiry, July 2008.
Biometrics: Enabling Guilty Men to Go Free? Further Adventures from the Law of Unintended Consequences - Jerry Fishenden blog post
Digital Decision Making: Back to the Future - chapters five and six
Ray Corrigan, Springer-Verlag, 2007
Tuesday, July 28, 2009
"46 Words as such do not, therefore, constitute elements covered by the protection.
47 That being so, given the requirement of a broad interpretation of the scope of the protection conferred by Article 2 of Directive 2001/29, the possibility may not be ruled out that certain isolated sentences, or even certain parts of sentences in the text in question, may be suitable for conveying to the reader the originality of a publication such as a newspaper article, by communicating to that reader an element which is, in itself, the expression of the intellectual creation of the author of that article. Such sentences or parts of sentences are, therefore, liable to come within the scope of the protection provided for in Article 2(a) of that directive.
48 In the light of those considerations, the reproduction of an extract of a protected work which, like those at issue in the main proceedings, comprises 11 consecutive words thereof, is such as to constitute reproduction in part within the meaning of Article 2 of Directive 2001/29, if that extract contains an element of the work which, as such, expresses the author’s own intellectual creation; it is for the national court to make this determination...
51 In the light of the foregoing, the answer to the first question is that an act occurring during a data capture process, which consists of storing an extract of a protected work comprising 11 words and printing out that extract, is such as to come within the concept of reproduction in part within the meaning of Article 2 of Directive 2001/29, if the elements thus reproduced are the expression of the intellectual creation of their author; it is for the national court to make this determination...
61 The Court finds...that a temporary and transient act of reproduction is intended to enable the completion of a technological process of which it forms an integral and essential part. In those circumstances...those acts of reproduction must not exceed what is necessary for the proper completion of that technological process.
62 Legal certainty for rightholders further requires that the storage and deletion of the reproduction not be dependent on discretionary human intervention, particularly by the user of protected works. There is no guarantee that in such cases the person concerned will actually delete the reproduction created or, in any event, that he will delete it once its existence is no longer justified by its function of enabling the completion of a technological process...
64 In the light of the foregoing, the Court finds that an act can be held to be ‘transient’ within the meaning of the second condition laid down in Article 5(1) of Directive 2001/29 only if its duration is limited to what is necessary for the proper completion of the technological process in question, it being understood that that process must be automated so that it deletes that act automatically, without human intervention, once its function of enabling the completion of such a process has come to an end.
65 In the main proceedings, the possibility cannot be ruled out at the outset that the first two acts of reproduction at issue in those proceedings, namely the creation of TIFF files and text files resulting from the conversion of TIFF files, may be held to be transient as long as they are deleted automatically from the computer memory.
66 Regarding the third act of reproduction, namely the storing of a text extract of 11 words, the evidence submitted to the Court does not permit an assessment of whether the technological process is automated with the result that that file is deleted promptly and without human intervention from the computer memory. It is for the national court to ascertain whether the deletion of that file is dependent on the will of the user of the reproduction and whether there is a risk that the file might remain stored once the function of enabling completion of the technological process has come to an end.
67 It is common ground, however, that, by the last act of reproduction in the data capture process, Infopaq is making a reproduction outside the sphere of computer technology. It is printing out files containing the extracts of 11 words and thus reproduces those extracts on a paper medium.
68 Once the reproduction has been affixed onto such a medium, it disappears only when the paper itself is destroyed.
69 Moreover, since the data capture process is apparently not likely itself to destroy that medium, the deletion of that reproduction is entirely dependent on the will of the user of that process. It is not at all certain that he will want to dispose of the reproduction, which means that there is a risk that the reproduction will remain in existence for a longer period, according to the user’s needs.
70 In those circumstances, the Court finds that the last act in the data capture process at issue in the main proceedings, during which Infopaq prints out the extracts of 11 words, is not a transient act within the meaning of Article 5(1) of Directive 2001/29...
...the Court (Fourth Chamber) hereby rules:
1. An act occurring during a data capture process, which consists of storing an extract of a protected work comprising 11 words and printing out that extract, is such as to come within the concept of reproduction in part within the meaning of Article 2 of Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society, if the elements thus reproduced are the expression of the intellectual creation of their author; it is for the national court to make this determination.
2. The act of printing out an extract of 11 words, during a data capture process such as that at issue in the main proceedings, does not fulfil the condition of being transient in nature as required by Article 5(1) of Directive 2001/29 and, therefore, that process cannot be carried out without the consent of the relevant rightholders."
"When I read in the pages of this newspaper this month that the Conservative Party was planning to transfer people’s health data to Google, my heart sank. The policy described was so naive I could only hope that it was an unapproved kite-flying exercise by a young researcher in Conservative HQ. If not, what was proposed was both dangerous in its own right, and hazardous to the public acceptability of necessary reforms to the state’s handling of our private information.

There are powerful arguments for people owning their own information and having rights to control it. There are massive weaknesses in the NHS’s bloated central database and there are benefits from using the private sector. But there are also enormous risks, so we are still a long step from being able to give personal data to any company, let alone Google. Google is the last company I would trust with data belonging to me."

I suspect Google executives won't be Mr Davis's biggest fans either.
Needless to say the Tenenbaum case will be closely watched by the usual suspects but this whole area of the future of our information ecology has to get beyond the usual small number of commercial vested interests, librarians, scholars, artists and activists and somehow into the consciousness of the wider public, if we are to make any real progress. In the case of our physical environment people 'get' the climate change argument even if, as Edward O. Wilson eloquently explained in a New Scientist interview in 2006 (podcast below) about his attempts to reach out to the US religious right, we are in denial about the message and the potential lifestyle implications.
The environmental sustainability story is at least on the cognitive radar of Jo Public although, as David MacKay says, it is every big and not every little that is going to make a difference. The ongoing evolution of our knowledge ecology, sadly, exercises the cerebral and professional energies of only a few; and even those few have little or no empirical evidence with which to work to enable the beginnings of any kind of rational mapping of our future information landscape.
Monday, July 27, 2009
"Without any warning or consent, Amazon remotely deleted copies of two books from thousands of Kindle devices. Few customers were aware that Amazon retained the technical capability to erase ebooks from their devices. Yet, when the company learned that two books had been distributed without proper authorization, it simply deleted the books and promised a refund.
The case proved that fact is often stranger than fiction since the books were George Orwell's 1984 and Animal Farm, conjuring up new images of the Orwellian vision of information control...
While Amazon has since apologized and promised that it won't repeat the Orwell misstep, the reality is that this incident is only the latest in a long line in which companies prove to be their own worst enemy by undermining consumer confidence in the digital economy."
"Internet users in Hull risk having their connection cut if they illegally share files, under a controversial "three strikes and out policy" operated by the only internet service provider in the area.
Karoo, the north-eastern city's only ISP, serving 90,000 customers, has in recent years, with little national publicity, been cutting users' internet connection immediately and without warning if they were found to be in breach of copyright.
It was forced to relax its stance after growing criticism from digital rights groups and customers."