Law in the Internet Society

FarayiMafotiFirstPaper 11 - 22 Jan 2012 - Main.EbenMoglen
 
MY REVISED ESSAY
Your revised essay shouldn't be stacked on top of former drafts. It should replace former drafts. The History facility in the wiki does the job of permitting comparison of versions. This misuse of the wiki form makes actual comparison of versions much more difficult. Please undo it, by creating a clean version of your second draft, then another with my comments interlined, then a clean copy of your next version, each saved on top of the other, so that the history shows correct diffs.
 

GOOGLE, GIVE US A PEEK

Google Inc. has not cooked its search results to favor its own products and listings, Executive Chairman Eric Schmidt told a U.S. Senate hearing looking into whether the search giant abuses its power. Members of the Senate Judiciary Committee's antitrust panel said last September that Google had grown into a dominant and potentially anti-competitive force on the Internet. This hearing should come as no surprise to anyone who has been following Google’s ongoing squabbles with the FTC and the EC. Practically every player in the digital economy is gunning for Google these days, with some accusing it of operating a “black box” algorithm that lacks transparency or accountability. Others say Google stacks the deck against rival services, such as maps or shopping, by displaying its own affiliated sites or content prominently in search results.

Nonsense. All of this is merely a thin crust of complaining on top of an immense reservoir of not doing anything. You haven't responded to the basic criticism of the last draft, which is that you're mischaracterizing self-promotion by legislators and regulators with actual governmental activity, of which there isn't any for obvious reasons you don't mention.
 

THE ARGUMENT FOR TRANSPARENCY

“Search neutralists,” as they call themselves, articulate their argument against Google as follows: If search engines have become an undisputed gateway to the Internet, and are now arguably as essential a component of its infrastructure as the underlying network itself, does that not create a basis on which to argue for algorithmic transparency?
 
If that's the question being asked, the answer is simple: no. Among other reasons is the existence of the First Amendment. I don't know whether all "search neutralists" are incompetent morons, or only the ones who teach on the Columbia Law faculty, but if there is something intelligent enough to be worth writing an essay about, this question isn't it.

Given that Google, the overwhelmingly dominant search engine, can apparently assert full and undisclosed editorial control of what content you see and what you don’t, does it follow that this endangers the fundamental openness of the internet?

Of course not. Why would it? Google is just one method for searching the web. Most of us use multiple other methods, whether we know it or not, and there's an immense, deeply-funded competitor pressing the Google results model everywhere on earth every second of every day. You'd have to be making up both facts and law as you go along to believe there's any energy available in that question. This was the problem that needed to be addressed after draft one, and despite arguing with me in the comments and writing another supposedly-responsive draft, you still haven't laid a glove on it.
 

THE ARGUMENT FOR TRANSPARENCY WILL BE IGNORED BY THE COMMON HERD

Set aside transparency’s obvious problems of execution: (1) the more transparent the algorithm, the more vulnerable it is to gamesmanship by spammers, and, worse, the greater the chance of the algorithm being rendered useless; (2) even if the algorithm were transparent to regulators, they are unlikely to adapt fast enough to spur innovation. The concept is only worthwhile to prosumers, not consumers, and it is vital to remember that antitrust law, at least in theory, is supposed to be about protecting consumers. All consumers see is the supposedly objective final results, not the intervention by the gatekeeper. Unless the search manipulation is drastic (e.g., no relevant result appears at all), corrupted results are an “unknown unknown,” and so no one cares. People will continue to see search as a credence good, one whose value is difficult to determine even after consumption.

Or an approximation which is sufficient for their present purposes, whatever those purposes are. If they want another approximation, Bing is delighted to provide one. Depending on what you're looking for, and what you're looking at it on, either one (or a third engine) may be the "best" choice, though there is no reason to suppose that an actual optimum exists where any search with a significant number of results is conducted.
 

A PROSUMER-INSPIRED SEARCH ARCHITECTURE

Transparency’s relation to prosumers: The prosumer campaigns for a system that allows a visitor to conduct any or all of three types of search task: developing information, comparing options, and finding where to execute transactions.
 
Why is a prosumer different from a consumer in this respect? You haven't actually made any use of the idea of the prosumer, and you've missed the point involved in my suggesting the importance of our own acts in building the web in consequence.

Algorithmic opacity would not be ideal for the prosumer. The prosumer, the active, tech-savvy customer who gathers information from digital media or online, and who interprets and influences mass consumers' lifestyle and brand choices, desires increased facility with the technology in order to maximize his ability to engage critically with it and collaborate with others.

So what?

Collaboration, or federation, is of value to the prosumer. Currently, web search engines like Google function as weak federation mechanisms, either by bringing up relevant web pages for user queries or via directories of related sites. A federated architecture, however, would offer a single point of entry that lets users employ specific applications optimized for their searches.
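A minimal sketch of the federated model just described, under stated assumptions: the domain names, the engines themselves, and the scoring scheme are all invented for illustration, not a description of any real service. A single entry point fans the query out to domain-specific engines and merges whatever each returns:

```python
# Hypothetical sketch of a federated search entry point.
from concurrent.futures import ThreadPoolExecutor

def make_engine(domain):
    """Stand-in for a domain-specific engine (maps, shopping, ...)."""
    def search(query):
        # A real engine would query its own optimized index; here we
        # just tag results so the merge step below is visible.
        return [(f"{domain}:{query}:result{i}", 1.0 / (i + 1))
                for i in range(3)]
    return search

def federated_search(query, engines):
    """Fan the query out to every engine in parallel, merge by score."""
    with ThreadPoolExecutor() as pool:
        result_sets = list(pool.map(lambda engine: engine(query),
                                    engines.values()))
    merged = [hit for hits in result_sets for hit in hits]
    return sorted(merged, key=lambda hit: hit[1], reverse=True)

engines = {d: make_engine(d) for d in ("maps", "shopping", "scholarly")}
results = federated_search("coffee", engines)
```

The point of the sketch is only structural: the single entry point owns the merge step, while each domain engine stays free to rank its own slice however it likes.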

Huh? There's nothing to prevent people from wrapping the results of simultaneous searches among the competing engines in results-rankers of their own devising. I often use a tool that does simultaneous Bing and Google searches, combines the two sets, and then throws away almost all the information each of them provided in order to give me what I want. I don't have to care what the algorithms are that either engine used. All they did was dig raw material out of the Web for me, and I processed it myself. The union of everything produced by Google and everything produced by Bing, reselected and sorted by what I want to prioritize, is easy for me to make and entirely eliminates whatever "anti-competitive" effects you think you could discover in either mega-engine's behavior, for some tiny number of searches in some tiny number of ways. A little technical thinking and some prototyping could probably have prevented you from wasting time on this blind alley of thinking, as it would help the law professors who mumble about this stuff all the time without knowing shit about it. But because they don't know shit, their chances of figuring anything out are tiny, and they never learn anything from anybody else, because they're so smart they don't need to listen to anyone except themselves. You do not want to follow their example.
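The kind of client-side wrapper described above can be sketched in a few lines of Python. The fetch functions are placeholders standing in for calls to each engine's interface (no real API is invoked), and my_rank is a deliberately arbitrary user-chosen criterion, opaque to both engines:

```python
# Hypothetical sketch: union two engines' results, then re-rank them
# by the user's own priority, discarding both engines' orderings.

def fetch_google(query):   # placeholder result list of (url, snippet)
    return [("http://a.example", "alpha"), ("http://b.example", "beta")]

def fetch_bing(query):     # placeholder; overlaps partially with above
    return [("http://b.example", "beta"), ("http://c.example", "gamma")]

def my_rank(result):
    """The user's own criterion; here, simply preferring shorter URLs."""
    url, _snippet = result
    return len(url)

def meta_search(query):
    # Union the two result sets, dropping duplicates by URL ...
    seen, merged = set(), []
    for result in fetch_google(query) + fetch_bing(query):
        if result[0] not in seen:
            seen.add(result[0])
            merged.append(result)
    # ... then throw away each engine's ranking and apply our own.
    return sorted(merged, key=my_rank)

results = meta_search("anything")
```

Neither engine's algorithm needs to be transparent for this to work; each is treated as a source of raw material that the user reselects and sorts.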

To be clear, the emerging paradigm combines a multi-domain query approach with the integration of heterogeneous data sources capable of scouring the deep Web.

Well, then, why bother writing the essay, inasmuch as there's no need whatever for "transparency" in order to enter into this "emerging paradigm"?
 

FOOD FOR THOUGHT: ONE IMPLICATION OF A FEDERATED SEARCH ARCHITECTURE

Conceptualizing a prosumer-ideal search architecture, or, as Professor Moglen puts it, “a system of federated search technology, in which we all do searching for one another in some fast and efficient manner,” can prove difficult for a number of reasons, not least because there would need to be a revenue mechanism different from the “pay-per-click” method we are accustomed to.

No, it is difficult, not "can prove" difficult, and for a simpler reason: we don't know how. This has nothing to do with "revenue mechanisms." If we knew how to federate search, we wouldn't need "revenue mechanisms," any more than we need "revenue mechanisms" for Wikipedia, or free software. We need to know, as a technical matter, how to federate search. If you already know that, and all you need is a revenue model, you should write an essay about that. Many people, including me, will be immensely impressed.

Existing revenue-sharing agreements between search engines and publishers, where each receives a fixed share of the profit, are no longer feasible. Consider Google’s model: once a user clicks on a sponsored link, the search engine receives payment from the corresponding advertiser and gives part of it to the publisher.

What have sponsored links got to do with it? In a federated search model, there wouldn't be any.

The search engine's payment ratio is defined by a commercial contract, existing independently of the specific search. When it comes to federated search, however, the contracts between the publisher and the domain-specific search engines must account for the fact that each engine plays a role in generating the search process. For there to be a disciplined way to estimate the search value of each domain-specific engine, monetization must be performed after the ranking. This would help avoid gamesmanship around the domain-specific engines (a search engine may want to bid strongly, or decrease its bid, on a query for purely economic reasons).
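One reading of "monetization performed after the ranking" can be made concrete in a short sketch. The weighting scheme here, crediting each engine by the rank positions its results actually earn (1/position), is an illustrative assumption, not a known design, and the engine names are invented:

```python
# Hypothetical post-ranking revenue attribution among domain engines.

def post_ranking_shares(ranked_results, total_payment):
    """ranked_results: list of (engine_name, result) in final rank order.
    Credit each engine according to the positions its results earned,
    then split the payment in proportion to that credit."""
    credit = {}
    for position, (engine, _result) in enumerate(ranked_results, start=1):
        # Higher-ranked results earn more credit (1/position here).
        credit[engine] = credit.get(engine, 0.0) + 1.0 / position
    total = sum(credit.values())
    return {engine: total_payment * c / total
            for engine, c in credit.items()}

# Final merged ranking: "maps" engine placed results 1st and 3rd,
# "shopping" placed one result 2nd.
ranked = [("maps", "r1"), ("shopping", "r2"), ("maps", "r3")]
shares = post_ranking_shares(ranked, 100.0)
```

Because credit is computed from the final ranking rather than from pre-negotiated per-query bids, an engine gains nothing by bidding strategically on queries it cannot actually serve well.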

This is all irrelevant to what would happen if we had a federated system for building back-links in the Web.

I'm not sure you've focused clearly enough on what search engines do. Perhaps you should begin from considering an alternate universe, in which Tim Berners-Lee had chosen a double-linked instead of single-linked architecture for the CERN system that became the Web. There would have been an even more serious difficulty with a Web made double-linked, which you'll spot when you think about it, but the problem of search would be different. Or you can imagine the Web in terms of the history of Lisp: what you have to do to recover from the drawbacks of using the single-linked list as the primitive data type in a computer language. Or you could take a look at the "searching" half of the third volume of Donald Knuth's work of genius _The Art of Computer Programming_, and ruminate on the necessary structures for the World Wide Web that would make searching trivial, and then consider why we don't switch to them. In any event, until you separate the technical problem of search from the social opportunity to address the primary problem with 20th-century mass advertising, it's unlikely that you're going to write anything about the union of the two forces with free software, which created the entity Google, and the Web you think you know. I went through all of this in class, not thoroughly enough to displace the resulting essay of yours, but enough to have explained already the difficulties in argument that this revision does not yet address.
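The alternate-universe exercise can be made concrete with a minimal sketch of a double-linked web, in which back-links are a first-class structure rather than something a crawler must reconstruct. The page names are invented, and the "extra write" in add_link is one plausible reading of the serious difficulty alluded to above, since it records a link on a page its author does not control:

```python
# Hypothetical double-linked web: every link is recorded in both
# directions, so "who links here" is a lookup instead of a crawl.

class DoubleLinkedWeb:
    def __init__(self):
        self.out_links = {}   # page -> set of pages it links to
        self.in_links = {}    # page -> set of pages linking to it

    def add_link(self, src, dst):
        self.out_links.setdefault(src, set()).add(dst)
        # The extra write the real (single-linked) Web never performs:
        # recording the back-link on, conceptually, someone else's page.
        self.in_links.setdefault(dst, set()).add(src)

    def who_links_to(self, page):
        # Trivial lookup; on the real Web this reverse index is what
        # a search engine spends its crawling budget reconstructing.
        return self.in_links.get(page, set())

web = DoubleLinkedWeb()
web.add_link("blog.example/post", "paper.example/knuth")
web.add_link("news.example/item", "paper.example/knuth")
backlinks = web.who_links_to("paper.example/knuth")
```

Much of what a general-purpose search engine does is rebuild exactly this in_links structure from the outside; in a web that maintained it natively, link-based ranking would be a local computation.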

 
 

Google's Algorithmic Cat and Mouse Game: The Case against Greater Transparency


Revision 11 (r11) - 22 Jan 2012 - 15:47:03 - EbenMoglen
Revision 10 (r10) - 18 Dec 2011 - 01:08:03 - FarayiMafoti
All material marked as authored by Eben Moglen is available under the license terms CC-BY-SA version 4.