Could Suggested Search Defamation Survive the CDA?

“Suggested search defamation,” or “autocomplete defamation,” lawsuits have overcome legal hurdles in at least a half-dozen countries. Is it only a matter of time before we see one—and maybe a successful one—in the United States? Or would the Communications Decency Act effectively immunize a search engine from suggested search defamation liability?

Suggested search terms are those proposed by a search engine in response to a user’s prior search query or queries. Often, as in the case of Google, suggested search terms appear as “autocomplete” suggestions, or alternative endings to the user’s then-current search. Suggested search technology has been lauded as a valuable tool that helps users find more relevant and interesting content. That same technology, however, also has caused controversy. Social interest groups have criticized Google and others for suggesting search terms that are offensive, racist or discriminatory. Increasingly, individuals also have challenged suggested search terms—in particular as supplied by autocomplete tools—that falsely suggest an association between their names and criminal, immoral or other unsavory behavior.
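As a concrete, purely hypothetical illustration of the mechanics, consider a toy prefix-matching completer: suggestions are drawn from a log of past user queries and ranked by how often each was searched. The query log, weights and function name below are assumptions for illustration, not any provider’s actual algorithm.

```python
# Toy prefix-based autocomplete: complete the user's in-progress query from
# a frequency-weighted log of past searches. All data here is hypothetical.
from collections import Counter

query_log = Counter({
    "acme corp reviews": 120,
    "acme corp careers": 95,
    "acme corp scandal": 40,
})

def autocomplete(prefix: str, limit: int = 3) -> list[str]:
    """Return the most frequently logged queries that extend the given prefix."""
    matches = [q for q in query_log if q.startswith(prefix.lower())]
    return sorted(matches, key=query_log.__getitem__, reverse=True)[:limit]

print(autocomplete("acme corp"))
# ['acme corp reviews', 'acme corp careers', 'acme corp scandal']
```

Even in this simplified form, the point the search engines emphasize is visible: the suggestions themselves originate in queries typed by other users.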

Courts in Germany, Japan, Australia, Italy and France have permitted suggested search defamation claims to proceed. Then, in early August, a Hong Kong court refused to dismiss a suggested search defamation claim brought by movie mogul Albert Yeung Sau-shing against Google. Yeung’s claim, which remains in litigation, is based on one or more suggested search terms that juxtaposed his name with a reference to organized crime. Among other arguments, Google had claimed that it does not “publish” its suggested search results. The Hong Kong court disagreed. Applying defamation law not entirely dissimilar to that of the United States, the court concluded that “any person who takes part in making the defamatory statement known to others may be liable for it.”

The Yeung case received relatively heavy coverage in entertainment and business media. The U.S. legal press, however, all but ignored it. The seeming presumption is that no suggested search defamation lawsuit could survive the rigors of U.S. defamation law, in particular Section 230 of the federal Communications Decency Act. Section 230 states, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” For purposes of Section 230, “information content providers” include individual end users. The practical effect of the nearly 20-year-old Section 230 has been to immunize search engines, social media providers and other websites from defamation and most other claims arising from user-generated content.

Section 230’s protections are not absolute. The Ninth Circuit has identified two important prerequisites to Section 230 immunity. The first, as stated in Batzel v. Smith, is that the defendant have reasonably believed the unlawful content was provided with the intention that it be published. The second, as famously articulated in Fair Housing Council v. Roommates.com, LLC, is that the defendant not have “materially contribut[ed] to [the content’s] alleged unlawfulness.” The Roommates.com opinion noted that search engines would generally be entitled to Section 230 immunity because, unlike the defendant in the Roommates.com case, they did not “force users to participate in [a] discriminatory process.”

A suggested search defamation or autocomplete search defamation defendant undoubtedly would claim immunity under Section 230. It would argue that the suggested search terms at issue had been “provided” by end users and that it, as an “interactive computer service,” had passed them on without “material contribution.” In responding to suggested search defamation complaints, both major search providers (Google and Microsoft) already have taken that route. Suggested and autocomplete search terms, they claim, are automatically generated by algorithms. Those algorithms, they further claim, are in turn driven by the searches and browsing habits of other end users.

As a factual predicate to a Section 230 defense, however, the search engines’ positions wouldn’t be particularly strong. The algorithms behind suggested search and autocomplete technology aren’t black boxes. They’re proprietary applications devised and controlled—including both inputs and outputs—by search engines or by those with whom they contract to provide search services. Popular search engines already have taken steps to eliminate racist and discriminatory search suggestions. The exercise of control over inputs, means of selection and outputs would, at least arguably, amount to a “material contribution” within the meaning of Roommates.com.
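To see why that control argument has force, here is a minimal sketch of where a provider’s choices could enter the pipeline. Everything below is a hypothetical illustration, not any search engine’s actual system: the provider decides which queries feed the log (inputs), how candidates are ranked (means of selection) and which suggestions are suppressed (outputs).

```python
# Hypothetical sketch of a provider's three control points over suggestions.
# The blocklist, query data and function names are illustrative assumptions.
from collections import Counter

BLOCKED_TERMS = {"triad", "fraud"}  # hypothetical suppression list

def ingest(log: Counter, query: str) -> None:
    """Input control: only queries passing provider-defined checks enter the log."""
    if not any(term in query.lower() for term in BLOCKED_TERMS):
        log[query.lower()] += 1

def suggest(log: Counter, prefix: str, limit: int = 3) -> list[str]:
    """Selection and output control: rank candidates by frequency, then filter."""
    ranked = sorted(
        (q for q in log if q.startswith(prefix.lower())),
        key=log.__getitem__,
        reverse=True,
    )
    return [q for q in ranked if not any(t in q for t in BLOCKED_TERMS)][:limit]

log = Counter()
for q in ["jane doe charity", "jane doe charity", "jane doe fraud"]:
    ingest(log, q)
print(suggest(log, "jane doe"))  # ['jane doe charity']; the blocked query never surfaces
```

The sketch is simplified, but it illustrates why characterizing such a system as purely user-driven understates the provider’s role: each decision point is a choice the provider makes and could make differently.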

Additionally, and as relevant to Batzel, search engines would be challenged to argue that search engine users or others “intended” their content to be published as suggested search terms. Google, for example, could point out that it receives a broad license to all content that users “submit” in connection with Google’s services. That broad license, Google might argue, is tantamount to an expression of intent that all user “submissions” be re-published. Because that license arises from what is, practically speaking, a contract of adhesion, however, the argument would seem a stretch. Most end users would cringe at the notion that someone might publish their search history. Google’s Batzel argument would seem even weaker to the extent its suggested search algorithm considered non-user content. Absent a clearer expression of intent, a suggested search defamation defendant seeking Section 230 immunity might fall short under Batzel, too.

A court hearing a suggested search defamation case would have to consider the implications of its ruling for search services other than the major search engines. At least in the search context, courts have tended to round legal edges in favor of search providers. But what of potential black-hat search providers who might benefit from a search-favorable ruling? How much control over algorithmic inputs and outputs is too much? And if search providers aren’t liable for arguably defamatory search suggestions, who is? Could a cadre of end users effectively defame someone through the repeated entry of defamatory queries, only to claim later as a defense that they didn’t “publish” any of the resulting suggestions?

Of course, a suggested search defamation lawsuit would face any number of additional hurdles. Even presuming that a suggested search term constitutes a publication—and it seems to fit the definition—a plaintiff would have to prove that the particular combination of words, presented as an autocomplete or suggested search term, was defamatory. A search engine would argue that suggested search terms aren’t intended, and so can’t be understood, as representations of fact sufficient to support a claim of defamation. A plaintiff also would have to prove that the suggested search term at issue was understood as referencing him and not others, and was seen by others. A search engine defendant could be counted on to raise any number of additional defenses, including the range of constitutional protections afforded by New York Times v. Sullivan and related cases. Especially in an anti-SLAPP jurisdiction like California—where a losing defamation plaintiff may end up paying the defendant’s attorneys’ fees—the risks would be high.

Still, based on the number of inquiries we receive from businesses and business persons victimized by suggested search and autocomplete defamation, the chances of a test case are significant. It only takes one motivated and sufficiently financed plaintiff to file a claim. It’s at least possible to imagine a suggested search defamation case surviving a motion to dismiss.