Posts Tagged ‘internet’

Copyright vs. The Law

September 10, 2010

Techdirt points us to an interesting conflict going on in Sweden:

The Swedish National Police have been attempting to build a database which allows them to match shoe-prints found at crime scenes with the type of shoes that made them. In order to build the database, they have simply been downloading pictures of shoe treads from the internet. Now, some shoe companies are claiming that those images are their intellectual property and cannot be taken without permission.

The police claim that the law lets them ignore copyright in solving crimes, but an intellectual property professor quoted in the article notes that such an exemption only applies in the direct police investigation of a specific crime — not for the sake of building up a general database. The professor suggests that this appears to be a clear violation of Swedish copyright laws.

Fine points of Swedish copyright law aside (for the time being), what’s interesting here are the two potential ways to approach the dispute:

If we trust that the motive of the police is to protect the life and property of the citizenry, then the conflict over the database is rooted in a claim by commercial brands that protecting their intellectual property in the abstract (it is safe to assume the database will cause no real economic damage) should trump the ability of the police to solve actual crimes.

If we’re feeling more cynical about state power, we could view this as a conflict between copyright and state power. Which source of authority exerts more influence: the corporation’s ownership of its intellectual property or the state’s ability to surveil its citizens?

Open Universities, pt. 2

August 25, 2010

In our previous post we discussed the possibility that web-casting academic lectures could transform universities in some way, possibly making academic information accessible to more people outside the university, or helping to keep lecturers honest and accountable. On Monday, the New York Times published this story about attempts by humanities scholars to use the internet to transform the process of peer review:

some humanities scholars have begun to challenge the monopoly that peer review has on admission to career-making journals and, as a consequence, to the charmed circle of tenured academe. They argue that in an era of digital media there is a better way to assess the quality of work. Instead of relying on a few experts selected by leading publications, they advocate using the Internet to expose scholarly thinking to the swift collective judgment of a much broader interested audience.

So the idea here is not that digital information systems will help make education more accessible, but rather that they could be used to allow a broader segment of the public to weigh in on key academic questions: Which research findings are interesting, convincing or valid? Can research that stands the test of peer review by experts hold up in the court of public opinion? In other words, the process of internet reader review could be used to break down the social power and exclusivity of expertise, one important step in opening the intellectual commons.

Open Universities?

March 19, 2010

With growing concern in the United States about the cost, accessibility and quality of education, it may be useful to consider a recent trend in American universities: many professors now publicly ‘share’ video of their lectures, opening their courses to anyone with internet access rather than only to paying, enrolled students. The Chronicle of Higher Education recently published a great story about this.

Personally, as a lecturer myself, I imagine having my lectures taped and shared would make me self-conscious in practice, which might cause me anxiety or help me improve my lectures. It might well make professors accountable to a broader public and help “peer review” their claims. It is also appealing in theory to consider the ethic of open-source teaching, which could push the conventional limits of public education. I have always found that colleges and universities are places where the ethic of an intellectual “commons” is strong; that ethic survives, in somewhat muted form, even in the age of the corporate university.

I am also a fan and follower of online lectures. The European Graduate School offers many lectures online, and considering their faculty of superstar theorists, this is a unique opportunity to hear from Zizek, Butler, DJ Spooky and a whole slew of continental philosophers, media and cultural theorists, film directors and other media practitioners. John Merriman’s lectures at Yale on modern European and French history are online. MIT’s OpenCourseWare site provides syllabi, assignments, lectures and other media to anyone with a web browser. There must be countless others.

Of course, many universities will worry about harming their bottom line if prospective students can see lectures without enrolling and paying. Some professors will be uncomfortable, unable or unwilling to share their lectures. No matter how the “classroom” or the price of textbooks and materials might change in the internet era, it remains the case that students who want a credentialed degree will have to enroll and pay. Internet video alone won’t solve a national problem with access to education, rising tuition, failing schools, and so on, but sharing lectures online is already stretching the boundaries of the classroom.

digital books: when it rains, it pours.

March 16, 2010

The recent release of David Shields’s book Reality Hunger: A Manifesto (see here and here) comes on the heels of increasing controversy in Europe over Helene Hegemann’s literary debut Axolotl Roadkill (see here). Hegemann is already caught up in an intellectual property scandal; we’ll see what happens to Shields. Both books take a recombinant, “remix” approach to writing, cobbling together excerpts of other people’s writing with their own bits of text. How very contemporary. The idea of remixing as a unique mode of cultural production and the attendant issues of intellectual property that always seem to follow it have now made it into the book market.

While consumers read literary mash-ups like last year’s Pride and Prejudice and Zombies, media giants like Amazon, Sony and Barnes & Noble are competing to get the reading public hooked on hand-held digital reading devices: Kindle, Reader and Nook. Consequently, the publishing industry is already embroiled in typical efforts to protect corporate property: conflict over ebook file formats and which devices can read which formats, as well as concern over the proliferation of ebooks as a hot commodity for file sharing.

At the same time, and getting less media attention, Google has continued its commercial and legal negotiations with various publishers, universities and other authorities as it expands the ever-growing Google Books project. The project makes a massive amount of material available to the public online, much of it for free, but many books and other printed materials are still not fully usable or readable thanks to pressure from publishers.

There are many things that are controversial about Google Books. For one, why should we trust a private corporation with the next generation of media services we would normally expect from public libraries? If Google cuts a deal with publishers, much of the content would likely become pay-to-play – and then publishers would have some say in the cost and accessibility of their products. Even if Google were committed to keeping user access free and open, other issues might arise, too.

Nicolas Sarkozy, Jean-Noël Jeanneney and others close to the French National Library have argued that Google Books will only speed the trend of cultural globalization as Americanization, and place control of books belonging to France’s national “patrimony” in non-French hands. At other times, their line seems to be pan-European. But whether they argue for a French digital library (like Gallica) or a European Union version (like Europeana), the point is to mount a public, European challenge to American corporate digitization projects like Google’s.

These varied anecdotes suggest that we’re witnessing an interesting moment of transformation in books, and in the ways that people talk about, think about, buy and sell, and fight over, books. With so much intellectual content and so much money at stake, this dialogue, now fairly widespread, will only get hotter.

Prelinger Manifesto: On the Virtues of Preexisting Material

February 23, 2010

Rick Prelinger, a force in internet archiving, is also the author of this useful manifesto, On the Virtues of Preexisting Material, in which he outlines 14 principles for using preexisting works to make new work:

1 Why add to the population of orphaned works?
2 Don’t presume that new work improves on old
3 Honor our ancestors by recycling their wisdom
4 The ideology of originality is arrogant and wasteful
5 Dregs are the sweetest drink
6 And leftovers were spared for a reason
7 Actors don’t get a fair shake the first time around, let’s give them another
8 The pleasure of recognition warms us on cold nights and cools us in hot summers
9 We approach the future by typically roundabout means
10 We hope the future is listening, and the past hopes we are too
11 What’s gone is irretrievable, but might also predict the future
12 Access to what’s already happened is cheaper than access to what’s happening now
13 Archives are justified by use
14 Make a quilt not an advertisement

When Artists Go a-Sharing…

June 14, 2009

In our recent discussions here at Enclosure, we’ve been considering how “free rider” user-consumers – who download content without paying for it – may be making the business of selling cultural content more and more difficult, perhaps unsustainable or ultimately impossible. Well, at least this is the way public discussion has been tending in the last several years.

But why blame the user? High-profile artists like Radiohead and Girl Talk – producers of cultural content – have recently been offering their music online for an unspecified cost: users can choose, donation-style, how much they want to pay, including nothing. These artists have boldly cut the corporate middlemen (or culture industry) out of the deal altogether, distributing content directly to user-consumers. This business model is most definitely not sustainable for corporate distributors (and three cheers for that!), but is it sustainable for the artists themselves? This, I think, remains to be seen; Radiohead and Girl Talk are in much the same boat as DiFranco and MacKaye here, though so far things look fairly promising.

Radiohead apparently chose to do this as part of a major sea change for the band in 2007. Their contract with a major label had expired, and instead of signing a new one, they decided to go indy (see the New York Times article here). They could afford the financial gamble of pay-what-you-want digital distribution because they were already a very wealthy and successful band – they could make money no object in part because they didn’t need the money. But it was also a welcome gesture that they didn’t care about the money – we may say “sure, they can afford to do that,” but I suspect there’s something more high-minded at work in Radiohead’s decision.

There may also have been something simply practical behind their decision. According to the NY Times article referenced above, Radiohead stood to make a good deal more profit by selling their wares directly, compared to what they would have made after their record label took its generous cut. What’s more, releasing the music digitally meant no lag time for producing CDs, one of several factors that often make corporate culture distribution take longer; Radiohead could release the album themselves the very second the final mixdown was finished. They also reduced production costs dramatically. Radiohead’s choice was as good for cold, rational-economic reasons as it was for warm, high-minded ethical reasons.

In other words, Radiohead showed that – at least under certain circumstances – going indy in the digital age could mean much more profit for artists than their major label contracts ever would have delivered. The economic reasoning is simple: cut out the middleman, and simultaneously cut out the process of manufacturing CDs. Where costs dry up, profits bloom. And here’s the kicker: they made this increased profit in spite of the fact that, according to one estimate, at least 3/5 of downloaders took the album for free. What happened to the internet “free-rider problem”?
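To see how that arithmetic could play out, here is a deliberately hypothetical back-of-the-envelope sketch in Python. None of the figures below come from the Times article or from Radiohead; the price, royalty rate, and payment numbers are invented purely to illustrate why direct distribution can out-earn a label deal even when most downloaders pay nothing.

```python
# Hypothetical back-of-the-envelope comparison. All figures are invented
# for illustration; none come from the Times article or from Radiohead.

def label_revenue_per_album(retail_price, artist_royalty_rate):
    """Artist's cut when a label sells an album and keeps most of the price."""
    return retail_price * artist_royalty_rate

def direct_revenue_per_download(share_paying, average_payment):
    """Average artist revenue per download when fans pay what they want."""
    return share_paying * average_payment

label_cut = label_revenue_per_album(retail_price=15.00, artist_royalty_rate=0.15)
direct_cut = direct_revenue_per_download(share_paying=0.40, average_payment=6.00)

print(f"Per album via a label: ${label_cut:.2f}")   # $2.25
print(f"Per download direct:   ${direct_cut:.2f}")  # $2.40, even with 60% paying nothing
```

In this toy scenario the artist keeps the whole of whatever fans choose to pay, so even a 60% free-rider rate leaves them no worse off than a conventional royalty slice.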

Then, about a year ago, Girl Talk released Feed the Animals in the same pay-what-you-want, web-only format. Unlike Radiohead, though, his reasons for doing so were non-economic. His album of mash-ups was composed entirely of music sampled from other artists, and in a statement of frank copyright defiance, he made no effort to license or “clear” any of the samples. The major record companies never would have sold such a thing in any case. The compelling question about Girl Talk, according to this NY Times article, is whether this type of distribution can make him a star and a financial success. Given his status last year as one of the darlings of the underground, I think the only sensible answer in hindsight is yes.

Unlike Radiohead, who were already riding a long wave of fame thanks to almost two decades of major-label promotion when they made the switch to distributing digital donation-based downloads, Girl Talk has never been major. If Radiohead showed that one could jump from the top of the skyscraper and fly without corporate support, Girl Talk is testing whether a DIY artist (a self-contained performer, producer and distributor) can get in on the ground floor, so to speak. If Girl Talk can make it economically by distributing albums on the web for free/donation/profit (and that’s really the only way to understand what he’s doing!), maybe anyone can. Which means: maybe there is no internet free rider problem… or if there is, it would only trouble the music industry, not the artist.

Private Ownership and Corporate Ownership, from Ani to Einstein

June 14, 2009

In the discussion around my previous post, P and Bob have been pushing me to consider what seems to be one of the most important problems with the principle of open source (or even “open resource”) culture – if content is shared for free (often on the internet), how can producers/sellers of content (large and small alike) continue to make a living? Won’t most users/consumers choose free copies of content over paid ones? – it’s only economically rational, after all. Will the production of culture become economically unsustainable?

It’s a fair line of questioning, and unavoidable. Most discussions of what’s happening to culture in the digital era eventually unearth the same concern. P and I had already raised the question, and discussed it a bit in person in January, but we had never written about it directly on the blog – perhaps dodging it for its difficulty – until Bob reminded us of its importance.

As P put it, the large corporate distributors of culture – record companies, film companies, etc. – have recently been defending their turf against open source encroachment by arguing, loudly and publicly, that free sharing of cultural content will not only ruin their business as middle men, but also make it harder for the artists whose work they sell to make a living. As P argued, we know that this is true for large corporations, but is it true for smaller producers, even individual artists? To put it in Bob’s terms, does open source culture pose the same problem for “private ownership” in general as it does for “corporate ownership” in specific?

The short answer is: we don’t know yet. In order to find out, we could start by talking with (or researching) some independent producers and sellers of culture, like Ani DiFranco or Ian MacKaye, to see if their business is suffering in the digital era. As both indy stars operate record labels, it might also be interesting to seek out some unsigned artists who distribute their own content.

Behind all this, there’s a deeper issue. As Bob perceptively picked out, there’s some tension here at Enclosure between a general critique of all private ownership/property and a specific critique of corporate ownership. Are we waging a critique of private property itself, or are we only concerned about large holders and monopolies? I see my views on this as a spectrum of value: small businesses are preferable to large ones, but an end to private property would be even better. While it is easy to critique corporate power and monopolies, it is a bit harder for me to critique smaller businesses (even though they are for-profit enterprises just the same). These are broad and difficult questions – we’ll have to keep working on them as our discussion continues.

Meanwhile, I wanted to catch up on a tidbit Bob mentioned: Einstein’s day job. Coincidentally, the subject is very relevant for Enclosure. In 1905 when Einstein published his first two groundbreaking articles in physics, he was working for the Swiss patent office, himself contributing to the private enclosure of science and technology.

For Bob: the free rider problem

June 9, 2009

Thanks to Bob for giving us some really nice food for thought! Bob brought up the free rider problem – who creates the content vs. who merely consumes content? How do systems of distribution or exchange guarantee that content producers are fairly compensated for their work?

The free rider is a problem only in economies such as capitalism, where property is privately held. In systems where there are significant common resources available (feudalism, communism), there are no free riders in the pejorative sense, because everyone uses the common resources freely and that is economically normative. Such systems also normally contain collectively understood work obligations – all can withdraw resources from the common account because all deposit value through their work.

In our own, admittedly somewhat utopian, thinking, those of us on the young internet left (like the Swedish Pirate Party, which just took a seat in the EU Parliament!) want to use the internet, if possible, to transform the capitalist economy by growing a body of commonly available resources, a new non-commercial commons.

In general, the problem with free riders is that they eat up bodies of resources without contributing to those bodies. In the case of digital file sharing, the main distinction is between users who “share” content, by both uploading and downloading, and those who only download (the free riders). As a recent research paper in Business/Econ argued, those who take files without giving any in return use up one key resource: bandwidth. They slow down the network for everyone hooked into it, making it incrementally harder for each individual user to upload and download files, and they do so without offering any files in return.

But this same research paper also argues that internet free riders can have a quiet, often unnoticed benefit: they are likely to become uploaders or sharers by accident, because the programs they use to find content are designed to automatically share whatever content they have previously downloaded. The default setting in many P2P programs is to put downloads and uploads in the same folder, handling all files in and out through that one location. Users can normally change these settings at will, but free riders are precisely the type of users who don’t bother with advanced settings, looking for a quick-fix download – and thus they end up sharing files anyway.
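As a rough sketch of that mechanism – this is not any real client’s code, and the directory names and defaults are invented – the logic amounts to something like the following:

```python
# Minimal sketch of why casual downloaders end up uploading anyway.
# Not any real P2P client's code; names and defaults are invented.
from pathlib import Path

class ClientSettings:
    def __init__(self, download_dir="~/Downloads/P2P", shared_dirs=None):
        self.download_dir = Path(download_dir).expanduser()
        # Default behaviour: the download folder is also a shared folder,
        # so everything a user grabs is re-offered to the network.
        self.shared_dirs = shared_dirs if shared_dirs is not None else [self.download_dir]

def files_offered_to_network(settings):
    """Everything sitting in a shared directory gets announced for upload."""
    offered = []
    for directory in settings.shared_dirs:
        if directory.exists():
            offered.extend(p for p in directory.iterdir() if p.is_file())
    return offered

# A casual user who never opens the settings shares by default:
casual_user = ClientSettings()
print(files_offered_to_network(casual_user))

# Only a user who deliberately empties the shared list truly free-rides:
strict_free_rider = ClientSettings(shared_dirs=[])
print(files_offered_to_network(strict_free_rider))  # []
```

The point is only that the path of least resistance is to share; opting out takes a deliberate step that the quick-fix download crowd rarely bothers with.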

This trend is especially visible in recent episodes where the MPAA or RIAA tried to sue groups of kids for uploading – kids who “didn’t know what they were doing” or “just wanted to download.” I think there is good reason to suspect that a large percentage of people who share files at all are not interested in these P2P systems and how they work, but just want to download the songs, movies, etc. that they like. Most downloaders are not proud pirates intending to upload; they simply use software that uploads on their behalf.

From this perspective, the internet is unusual as a system of exchange because it is full of free riders, and yet it still delivers unprecedented amounts of content, both free and paid. What do we make of this? Can we argue that P2P systems are a unique type of exchange system, one which tends to multiply free riders for their side effects, so to speak? Do free riders matter where transactions are non-commercial – non-monetary, I mean? Interesting stuff, and I’m sure we could generate further important questions at will.

A Response: The Future of File Sharing

June 8, 2009

At the IP Watch blog, Bruce Gain has published a post entitled The Future of File Sharing, in which he explores three new models for digital content distribution that may have some potential to stem the tide of illegal file sharing where industry lawsuits and HADOPI are currently failing. He poses the question this way:

[W]hat alternatives exist that can appease those with royalty interests as well as meet the demand of consumers, especially those who actively engage in sharing copyright-protected media files?

What’s especially interesting about his list is that it offers two starkly different visions of the future of intellectual property when it comes to digital media.

The first and third of these options – free media supported by ads, and global licensing – work on the assumption that p2p is an open Pandora’s box. Rather than keeping at the Sisyphean task of constructing ever tighter systems of DRM in response to ever more sophisticated, widespread, and user-friendly methods of breaking DRM, copyright holders refocus their efforts on finding alternative revenue streams to support free-range content. In the former case, it’s revenue from advertisements attached to the media distribution platform. In the latter, it’s a blanket point-of-entry fee that covers the entire universe of content a given user might access for free, distributed by some means to the “owners” of that content.
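That “by some means” is doing a lot of work. One obvious mechanism would be pro-rata distribution of the pooled fees by access counts; a minimal sketch of that idea, with invented numbers and no claim to match any actual proposal, might look like this:

```python
# Hypothetical sketch of splitting a blanket-licence pool pro rata by access
# counts. The mechanism and all figures are invented for illustration; the
# post above only says the fee would be distributed "by some means" to owners.

def distribute_pool(total_fees, access_counts):
    """Split pooled licence fees in proportion to each rights holder's accesses."""
    total_accesses = sum(access_counts.values())
    return {owner: total_fees * count / total_accesses
            for owner, count in access_counts.items()}

monthly_pool = 1_000_000.00               # pooled point-of-entry fees (invented)
accesses = {"Label A": 600_000,           # download/stream counts (invented)
            "Label B": 300_000,
            "Independent artists": 100_000}

for owner, payout in distribute_pool(monthly_pool, accesses).items():
    print(f"{owner}: ${payout:,.2f}")
```

How access would be measured, and who would administer the pool, are exactly the kind of logistical problems the article goes on to catalogue.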

Gain’s second option, on the other hand, seeks to radically raise the barriers to p2p by further privatizing the underlying technology. Rather than continuing to produce personal computers as open systems that run any compatible application regardless of origin and allow connectivity to any other computer (given the proper software, etc.), computers of the future could be closed systems – more like video game consoles – where only official applications, peripheral hardware, and networks would be compatible.

The article runs down a variety of logistical problems with each of these three options, which I won’t rehash here, but I do think it’s important to note that all of Gain’s considerations seem to address the first half of his goal (appeasing royalty-seekers) rather than the second (appeasing consumers). I don’t think this imbalance is necessarily a result of any pro-IP bias on the author’s part. Rather, I think it reveals a quality of the current digital distribution system that Gain leaves unspoken; namely, that for those consumers with a baseline of technological know-how, it already fulfills demand almost perfectly. Cultural resources are already available nearly immediately, and in most cases for free, over the greynet.

If we are to be honest about what it means to seek out new content distribution systems that “appease everybody” (i.e., producers, sellers, and consumers), we must recognize that we are inherently talking about finding ways to restore a measure of profitability to the privatized culture industry in the digital age.

Speaking of Taxonomy

June 3, 2009

Cory Doctorow’s latest Guardian blog post addresses the privatization of data classification on the internet via Google’s virtual monopoly on search.

He points out that before the advent of search, internet developers assumed that all the information on the entire net would have to be arranged into categories, akin to the Dewey Decimal System. It’s easy to see how this kind of enforced taxonomy can be problematic. As Doctorow explains:

Melvin Dewey didn’t predict computers; he also mixed Islam in with Sufism, and gave table-knocking psychics their own category. A full-contact sport like the internet just doesn’t lend itself to a priori categorisation.

The implementation of search engine technology, however, radically transforms the way we find and interact with data on the web.

Enter search. Who needs categories, if you can just pile up all the world’s knowledge every which way and use software to find the right document at just the right time?

But this is not without risk […] the way that search engines determine the ranking and relevance of any given website has become more critical than the editorial berth at the New York Times combined with the chief spots at the major TV networks. Good search engine placement is make-or-break advertising. It’s ideological mindshare. It’s relevance.

So, when a private company owns the algorithms that define how easy or difficult it is to access particular information or, in a less explicit sense, how much authority is given to some sources over others, the taxonomy of the internet is essentially privatized.