Volokh et al v. James, No. 1:2022cv10195 - Document 29 (S.D.N.Y. 2023)

Court Description: OPINION AND ORDER re: 8 MOTION for Preliminary Injunction filed by Rumble Canada Inc., Locals Technology Inc., Eugene Volokh. Accordingly, for the aforementioned reasons, the Court finds that Plaintiffs are entitled to a preliminary injunction prohibiting the enforcement of N.Y. Gen. Bus. Law § 394-ccc. The Clerk of Court is respectfully requested to terminate the pending motion at ECF No. 8. (Signed by Judge Andrew L. Carter, Jr. on 2/14/2023) (tg)

UNITED STATES DISTRICT COURT
SOUTHERN DISTRICT OF NEW YORK

EUGENE VOLOKH, LOCALS TECHNOLOGY INC. and RUMBLE CANADA INC., Plaintiffs,

-against-

LETITIA JAMES, in her official capacity as New York Attorney General, Defendant.

22-CV-10195 (ALC)

OPINION AND ORDER

ANDREW L. CARTER, JR., United States District Judge:

“Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’” Matal v. Tam, 137 S. Ct. 1744, 1764 (2017) (citations omitted). With the well-intentioned goal of providing the public with clear policies and mechanisms to facilitate reporting hate speech on social media, the New York State legislature enacted N.Y. Gen. Bus. Law § 394-ccc (“the Hateful Conduct Law” or “the law”). Yet, the First Amendment protects from state regulation speech that may be deemed “hateful” and generally disfavors regulation of speech based on its content unless it is narrowly tailored to serve a compelling governmental interest. The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal. In the face of our national commitment to the free expression of speech, even where that speech is offensive or repugnant, Plaintiffs’ motion for preliminary injunction, prohibiting enforcement of the law, is GRANTED.

BACKGROUND

I. Factual Background

The following facts are drawn from the Complaint, Plaintiffs’ declaration in support of their motion for preliminary injunction, Defendant’s declaration in opposition to the motion, and the documents relied upon therein.

A. The Plaintiffs

Plaintiffs Eugene Volokh (“Volokh”), Rumble Canada Inc. (“Rumble”) and Locals Technology Inc. (“Locals”) operate online platforms that they believe are subject to the law as a “social media network” as defined by the law. Plaintiff Volokh is a California resident and the co-owner and operator of the Volokh Conspiracy, a legal blog. (Compl., ECF No. 1 ¶ 12.) Plaintiff Rumble, headquartered in Toronto, Canada, operates a website “similar to YouTube” which “allows independent creators to upload and share video content”. (Id. ¶ 13.) Rumble has a “pro-free speech purpose” and its “mission [is] ‘to protect a free and open internet’ and to ‘create technologies that are immune to cancel culture.’” (Id.) Plaintiff Locals is a subsidiary of Rumble Inc. and operates a website that allows “creators to communicate and share content directly with unpaid and paid subscribers.” (Id. ¶ 14.) Locals also has a stated “pro-free speech purpose” and a “mission of being ‘committed to fostering a community that is safe, respectful, and dedicated to the free exchange of ideas.’” (Id.)

B. The Buffalo Mass Shooting

On May 14, 2022, an avowed white supremacist used Twitch, a social media platform, to livestream himself perpetrating a racially motivated mass shooting on Black shoppers at a grocery store in Buffalo, New York. (Sawyer Decl., ECF No. 20 ¶ 4; id., Ex. A, ECF No. 20-1 at 34.)
The attack left ten people dead and three wounded. (Id.) Shortly thereafter, a recording of the mass shooting “went viral” and was re-posted on several websites, including 4chan and Reddit. (Id.) A manifesto expressing the shooter’s racist ideology was also shared on social media. (Id. at 15–16, 34.)

In response to the mass shooting, Governor Kathy Hochul issued a referral letter to the Office of the Attorney General (“OAG”), directing it to investigate the events surrounding the shooting, focusing on “the specific online platforms that were used to broadcast and amplify the acts and intentions of the mass shooting[.]” (Id. at 6; Compl., ECF No. 1 ¶ 38.) Governor Hochul also directed the OAG to “investigate various online platforms for ‘civil or criminal liability for their role in promoting, facilitating, or providing a platform to plan or promote violence.’” (Id.) On October 18, 2022, the OAG released a report detailing its findings. (Sawyer Decl., ECF No. 20 ¶ 3.) In the associated press release, Defendant stated that “[o]nline platforms should be held accountable for allowing hateful and dangerous content to spread on their platforms” because an alleged “lack of oversight, transparency, and accountability of these platforms allows hateful and extremist views to proliferate online.” (Compl., ECF No. 1 ¶ 3.)

C. The Hateful Conduct Law

The Hateful Conduct Law, entitled “Social media networks; hateful conduct prohibited”, went into effect on December 3, 2022. The law applies to “Social media network(s)”1 and defines “Hateful conduct” as:

“[T]he use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” N.Y. Gen. Bus. Law § 394-ccc(1)(a).

1 “Social media network” is defined as “service providers, which, for profit-making purposes, operate internet platforms that are designed to enable users to share any content with other users or to make such content available to the public.” N.Y. Gen. Bus. Law § 394-ccc(1)(b). Defendant does not challenge Plaintiffs’ assertion that they have standing to sue, but Defendant reserves the right to do so in the future. (Def.’s Opp’n, ECF No. 21 at 6, n.3.)

Thus, the Hateful Conduct Law requires that social media networks create a complaint mechanism for three types of “conduct”: (1) conduct that vilifies; (2) conduct that humiliates; and (3) conduct that incites violence. (Id.) This “conduct” falls within the law’s definition if it is aimed at an individual or group based on their “race”, “color”, “religion”, “ethnicity”, “national origin”, “disability”, “sex”, “sexual orientation”, “gender identity” or “gender expression”. Id.

The Hateful Conduct Law has two main requirements: (1) a mechanism for social media users to file complaints about instances of “hateful conduct” and (2) disclosure of the social media network’s policy for how it will respond to any such complaints. First, the law requires a social media network to “provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct.” This mechanism must “be clearly accessible to users of such network and easily accessed from both a social media networks’ application and website . . . .”
and must “allow the social media network to provide a direct response to any individual reporting hateful conduct informing them of how the matter is being handled.” N.Y. Gen. Bus. Law § 394-ccc(2). Second, a social media network must “have a clear and concise policy readily available and accessible on their website and application . . . .” N.Y. Gen. Bus. Law § 394-ccc(3). This policy must “include[] how such social media network will respond and address the reports of incidents of hateful conduct on their platform.” N.Y. Gen. Bus. Law § 394-ccc(3).

The law also empowers the Attorney General to investigate violations of the law and provides for civil penalties for social media networks which “knowingly fail[] to comply” with the requirements. N.Y. Gen. Bus. Law § 394-ccc(5).

II. Procedural History

This action was commenced by Plaintiffs on December 1, 2022. (Compl., ECF No. 1.) The Complaint alleges both facial and as-applied challenges to the Hateful Conduct Law, arguing that it violates the First Amendment because it: (1) is a content and viewpoint-based regulation of speech; (2) is overbroad; and (3) is void for vagueness. (See generally id.) Plaintiffs also allege that the law is preempted by the Communications Decency Act, 47 U.S.C. § 230. (Id.)

Plaintiffs filed their motion for preliminary injunction on December 6, 2022, arguing that (1) Plaintiffs have a well-founded fear that the law will be enforced against their online platforms and (2) they are likely to prevail on the merits because the law burdens and compels speech, is overbroad, void for vagueness, and preempted by the Communications Decency Act. (Mot., ECF No. 8; see generally Pl.’s Mem., ECF No. 9.) Defendant filed a memorandum in opposition on December 13, 2022, arguing that Plaintiffs are unlikely to succeed on the merits of their claims because, inter alia, the law does not target protected expression based on content or viewpoint, is not substantially overbroad or vague, and is not preempted. (See generally Def.’s Opp’n, ECF No. 21.) The Court heard oral argument on the motion on December 19, 2022. (See Dec. 19, 2022 “Tr.”, ECF No. 27.)

LEGAL STANDARD

To obtain a preliminary injunction, the movant must show “a likelihood of success on the merits, a likelihood of irreparable harm in the absence of preliminary relief, that the balance of equities tips in the party’s favor, and that an injunction is in the public interest.” ACLU v. Clapper, 804 F.3d 617, 622 (2d Cir. 2015) (citing Winter v. NRDC, 555 U.S. 7, 20 (2008)). However, “a preliminary injunction is an extraordinary and drastic remedy, one that should not be granted unless the movant, by a clear showing, carries the burden of persuasion.” Sussman v. Crawford, 488 F.3d 136, 139 (2d Cir. 2007); see also Anwar v. Fairfield Greenwich Ltd., 728 F. Supp. 2d 462, 472 (S.D.N.Y. 2010) (“Temporary restraining orders and preliminary injunctions are among the most drastic tools in the arsenal of judicial remedies, and must be used with great care.”) (internal citations omitted).

DISCUSSION

I. Irreparable Harm

Although a showing of irreparable harm is typically the “single most important prerequisite for the issuance of a preliminary injunction,” Faiveley Transp. Malmo AB v. Wabtec Corp., 559 F.3d 110, 118 (2d Cir. 2009) (citation omitted),
“[c]onsideration of the merits is virtually indispensable in the First Amendment context, where the likelihood of success on the merits is the dominant, if not the dispositive, factor.” N.Y. Progress and Prot. PAC v. Walsh, 733 F.3d 483, 488 (2d Cir. 2013). Accordingly, the Court’s analysis will focus on the second prong of the preliminary injunction analysis.

II. Likelihood of Success on the Merits

Where, as is the case here, the injunction being sought will provide the plaintiff with substantially all the relief sought in the complaint, the plaintiff must demonstrate a “clear or substantial likelihood of success on the merits.” Yang v. Kosinski, 960 F.3d 119, 127–28 (2d Cir. 2020) (internal citations and quotations omitted). For the reasons set out more fully below, the Court finds that Plaintiffs have demonstrated a substantial likelihood of success on the merits of their First Amendment claims, but not on their preemption claim.

A. Plaintiffs’ As-Applied First Amendment Challenge

i. The Legal Framework for the First Amendment

“The First Amendment generally prevents government from proscribing speech…or even expressive conduct…because of disapproval of the ideas expressed.” R.A.V. v. City of St. Paul, Minn., 505 U.S. 377, 382 (1992) (internal citations and quotations omitted). The First Amendment reflects “a profound national commitment to the principle that debate on public issues should be uninhibited, robust, and wide-open.” Snyder v. Phelps, 562 U.S. 443, 452 (2011). Thus, as is relevant to the current facts, “[a]s a Nation we have chosen . . . to protect even hurtful speech on public issues to ensure that we do not stifle public debate.” Id. at 461. This protection extends to speech which the government may seek to limit because it is offensive or insulting. See R.A.V., 505 U.S. at 391 (“The First Amendment does not permit [the government] to impose special prohibitions on those speakers who express views on disfavored subjects.”). Even regulations that seek to regulate speech “that insult[s], or provoke[s] violence, on the basis of race, color, creed, religion, or gender” have been found to run afoul of the First Amendment because they constitute content and viewpoint-based regulation of protected speech. Id. at 391–92.

In evaluating whether a regulation violates the First Amendment, courts “distinguish between content-based and content-neutral regulations of speech.” Id. “Content-based laws—those that target speech based on its communicative content—are presumptively unconstitutional and may be justified only if the government proves that they are narrowly tailored to serve compelling state interests.” Reed v. Town of Gilbert, Ariz., 576 U.S. 155, 163 (2015). “Government regulation of speech is content based if a law applies to particular speech because of the topic discussed or the idea or message expressed.” Id. Additionally, when “a state compels an individual to speak a particular message, the state alters the content of their speech, and engages in content-based regulation.” CompassCare v. Cuomo, 465 F. Supp. 3d 122, 155 (N.D.N.Y. 2020) (quoting Nat’l Inst. of Fam. & Life Advocs. v. Becerra, 138 S. Ct. 2361, 2371 (2018) (“NIFLA”) (internal quotations and alterations omitted)).
ii. Whether the Hateful Conduct Law Compels Speech

Plaintiffs argue that the law regulates the content of their speech by compelling them to speak on an issue on which they would otherwise remain silent. (Pl.’s Mem., ECF No. 9 at 12; Tr., ECF No. 27 at 47:5–13.) Defendant argues that the law regulates conduct, as opposed to speech, because there is no requirement for how a social media network must respond to any complaints and because the law does not even require the network to specifically respond to a complaint of hateful content. (Def.’s Opp’n, ECF No. 21 at 9.) Instead, the law merely requires that the complaint mechanism allow the network to respond, if that is the social media network’s policy. (Tr., ECF No. 27 at 11:25–12:4.)

Defendant likens the Hateful Conduct Law to the regulation upheld in Restaurant Law Ctr. v. City of New York, which required fast-food employers to set up a mechanism for their employees to donate a portion of their paychecks to a non-profit of that employee’s choosing. 360 F. Supp. 3d 192 (S.D.N.Y. 2019). The court found that this did not constitute “speech”—nor did it constitute “compelled speech”—noting that the “ministerial act” of administering payroll deductions on behalf of their employees did not constitute speech for the employers. Id. at 214. As such, the court applied rational basis review and found that the regulation passed muster. Id. at 221.

However, those facts are not applicable here. The Hateful Conduct Law does not merely require that a social media network provide its users with a mechanism to complain about instances of “hateful conduct”. The law also requires that a social media network make a “policy” available on its website which details how the network will respond to a complaint of hateful content. In other words, the law requires that social media networks devise and implement a written policy—i.e., speech.

For this reason, the Hateful Conduct Law is analogous to the state-mandated notices that were found not to withstand constitutional muster by the Supreme Court and the Second Circuit: NIFLA and Evergreen. In NIFLA, the Supreme Court found that plaintiffs—crisis pregnancy centers opposing abortion—were likely to succeed on the merits of their First Amendment claim challenging a California law requiring them to disseminate notices stating the existence of family-planning services (including abortions and contraception). NIFLA, 138 S. Ct. at 2371. The Court emphasized that “[b]y compelling individuals to speak a particular message, such notices ‘alte[r] the content of [their] speech.’” Id. (quoting Riley v. National Federation of Blind of N.C., Inc., 487 U.S. 781, 795 (1988)). Likewise, in Evergreen, the Second Circuit held that a state-mandated disclosure requirement for crisis pregnancy centers impermissibly burdened the plaintiffs’ First Amendment rights because it required them to “affirmatively espouse the government’s position on a contested public issue….” Evergreen Ass’n, Inc. v. City of New York, 740 F.3d 233, 250 (2d Cir. 2014) (quoting Alliance for Open Soc’y Int’l, Inc. v. U.S. Agency for Int’l Dev., 651 F.3d 218, 236 (2d Cir. 2011), aff’d, 570 U.S. 205 (2013)).

Similarly, the Hateful Conduct Law requires a social media network to endorse the state’s message about “hateful conduct”.
To be in compliance with the law’s requirements, a social media network must make a “concise policy readily available and accessible on their website and application” detailing how the network will “respond and address the reports of incidents of hateful conduct on their platform.” N.Y. Gen. Bus. Law § 394-ccc(3). Implicit in this language is that each social media network’s definition of “hateful conduct” must be at least as inclusive as the definition set forth in the law itself. In other words, the social media network’s policy must define “hateful conduct” as conduct which tends to “vilify, humiliate, or incite violence” “on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” N.Y. Gen. Bus. Law § 394-ccc(1)(a). A social media network that devises its own definition of “hateful conduct” would risk being in violation of the law and thus subject to its enforcement provision.

The gap between the state’s definition of “hateful conduct” and other potential definitions is illustrated by Plaintiffs’ own current content moderation policies. For instance, Rumble reserves the right to unilaterally remove any content that it deems: “a) is illegal; b) is pornographic, obscene, or of an adult or sexual nature; c) is grossly offensive to the online community, including but not limited to, racism, anti-semitism and hatred; d) supports or incites violence or unlawful acts; e) supports groups that support or incite violence or unlawful acts; or f) promotes terrorist organizations.” (Compl., ECF No. 1 ¶ 116 (internal quotations omitted).) The policy does not explicitly pertain to content that vilifies or humiliates, as is defined in the law, and does not explicitly apply to content aimed at a person or group’s “religion”, “disability”, “sexual orientation” or “gender expression”, as is expressly enumerated in the Hateful Conduct Law. For Rumble to be in compliance with the law, it would need to publish a policy expressly indicating that its users have a mechanism to complain about the “hateful conduct” as defined by the Hateful Conduct Law, not removable content as defined by Rumble.

Likewise, Locals’ website identifies a few categories of content that it may remove from its website, including content that “threatens violence against an individual or group of people.” (Id. ¶ 133.) However, this policy also does not encapsulate the classes of groups or persons to whom “hateful conduct” may be directed as is defined by the New York legislature, and it would need to be modified to be brought into compliance with the law by including speech that potentially vilifies or humiliates a group or individual. To be in compliance with the law, Locals would need to publish a policy detailing the types of content its users are entitled to complain about through the new mechanism—i.e., “hateful conduct” as defined by the law—thus compelling Locals to endorse the state’s definition of that term.

Clearly, the law, at a minimum, compels Plaintiffs to speak about “hateful conduct”. As Plaintiffs note, this compulsion is particularly onerous for Plaintiffs, whose websites have dedicated “pro-free speech purpose[s]” (Compl., ECF No. 1 ¶¶ 13, 14), which likely attract users who are “opposed to censorship” (Pl.’s Mem., ECF No. 9 at 24).
Requiring Plaintiffs to endorse the state’s definition of “hateful conduct” forces them to weigh in on the debate about the contours of hate speech when they may otherwise choose not to speak. In other words, the law “deprives Plaintiffs of their right to communicate freely on matters of public concern” without state coercion. Evergreen, 740 F.3d at 250.

Additionally, Plaintiffs have an editorial right to keep certain information off their websites and to make decisions as to the sort of community they would like to foster on their platforms. It is well-established that a private entity has an ability to make “choices about whether, to what extent, and in what manner it will disseminate speech…” NetChoice, LLC v. Att’y Gen., Fla., 34 F.4th 1196, 1210 (11th Cir. 2022). These choices constitute “editorial judgments” which are protected by the First Amendment. Id. (collecting cases). In Pacific Gas & Electric Co. v. Public Utilities Commission of California, the Supreme Court struck down a regulation that would have forced a utility company to include information about a third party in its billing envelopes because the regulation “require[d] appellant to use its property as a vehicle for spreading a message with which it disagrees.” 475 U.S. 1, 17 (1986).

Here, the Hateful Conduct Law requires social media networks to disseminate a message about the definition of “hateful conduct” or hate speech—a fraught and heavily debated topic today. Even though the Hateful Conduct Law ostensibly does not dictate what a social media website’s response to a complaint must be and does not even require that the networks respond to any complaints or take down offensive material, the dissemination of a policy about “hateful conduct” forces Plaintiffs to publish a message with which they disagree. Thus, the Hateful Conduct Law places Plaintiffs in the incongruous position of stating that they promote an explicit “pro-free speech” ethos, but also requires them to enact a policy allowing users to complain about “hateful conduct” as defined by the state.

iii. Whether the Hateful Conduct Law Compels Commercial Speech

In the alternative, Defendant argues that even if the law is found to regulate speech, it only regulates commercial speech and should thus be subject to a lesser standard of review. (Def.’s Opp’n, ECF No. 21 at 9, 12–13.) Defendant characterizes the law’s requirement that social media networks publish a policy as “a truthful disclosure of fact” that is only subject to rational basis review. (Tr., ECF No. 27 at 44:7–8.) Defendant likens the Hateful Conduct Law’s policy requirement to other regulations upheld by the Second Circuit requiring (1) chain restaurants to post calorie content information for their menu items, New York State Rest. Ass’n v. New York City Bd. of Health, 556 F.3d 114, 137 (2d Cir. 2009), and (2) lightbulb manufacturers to disclose the mercury content in their products, Nat’l Elec. Mfrs. Ass’n v. Sorrell, 272 F.3d 104, 116 (2d Cir. 2001).

In general, laws regulating commercial speech are subject to a lesser standard of scrutiny. N.Y. State Rest. Ass’n v. New York City Bd. of Health, No. 08-CV-1000(RJH), 2008 WL 1752455, at *6 (S.D.N.Y. Apr. 16, 2008), aff’d, 556 F.3d 114 (2d Cir. 2009) (citing Nat’l Elec. Mfrs. Ass’n v. Sorrell, 272 F.3d 104, 113 (2d Cir. 2001)). The Supreme Court has articulated two definitions of what constitutes commercial speech.
First, speech is considered to be commercial when it “‘does no more than propose a commercial transaction.’” Conn. Bar Ass’n v. United States, 620 F.3d 81, 93 (2d Cir. 2010) (quoting Bolger v. Youngs Drug Prods. Corp., 463 U.S. 60, 66 (1983)). Second, “commercial speech [constitutes] ‘expression related solely to the economic interests of the speaker and its audience.’” Conn. Bar Ass’n, 620 F.3d at 94 (quoting Cent. Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n of New York, 447 U.S. 557, 561 (1980)). Commercial speech is generally subject to intermediate scrutiny, Safelite Grp., Inc. v. Jepsen, 764 F.3d 258, 261 (2d Cir. 2014); however, where the commercial speech conveys “purely factual and uncontroversial” information, courts apply rational basis review. New York State Rest. Ass’n v. New York City Bd. of Health, 556 F.3d 114, 132 (2d Cir. 2009) (citing Nat’l Elec. Mfrs. Ass’n, 272 F.3d at 114–15).

The policy disclosure at issue here does not constitute commercial speech and conveys more than a “purely factual and uncontroversial” message. The law’s requirement that Plaintiffs publish their policies explaining how they intend to respond to hateful content on their websites does not simply “propose a commercial transaction”. Nor is the policy requirement “related solely to the economic interests of the speaker and its audience.” Rather, the policy requirement compels a social media network to speak about the range of protected speech it will allow its users to engage (or not engage) in. Plaintiffs operate websites that are directly engaged in the proliferation of speech—Volokh operates a legal blog, whereas Rumble and Locals operate platforms where users post video content and comment on other users’ videos.

The “lodestars in deciding what level of scrutiny to apply to a compelled statement must be the nature of the speech taken as a whole and the effect of the compelled statement thereon.” Riley v. Nat’l Fed’n of the Blind of N. Carolina, Inc., 487 U.S. 781, 796 (1988). Where speech is “inextricably intertwined with otherwise fully protected speech”, it does not retain any of its potential commercial character. Id. Here, the law clearly implicates protected speech—namely hate speech—by requiring a disclosure of the Plaintiffs’ policy for responding to complaints of hateful content. This is different in character and kind from commercial speech and amounts to more than mere disclosure of factual information, such as caloric information or mercury content, as Defendant tries to equate.

iv. Whether the Hateful Conduct Law Survives Strict Scrutiny

Because the Hateful Conduct Law regulates speech based on its content, the appropriate level of review is strict scrutiny. See Evergreen, 740 F.3d at 244. To satisfy strict scrutiny, a law must be “narrowly tailored to serve a compelling governmental interest.” Amidon v. Student Ass’n of State Univ. of New York at Albany, 508 F.3d 94, 96 (2d Cir. 2007). A statute is not narrowly tailored if “a less restrictive alternative would serve the Government’s purpose.” United States v. Playboy Entm’t Grp., Inc., 529 U.S. 803, 813 (2000). Plaintiffs argue that limiting the free expression of protected speech is not a compelling state interest and that the law is not narrowly tailored.
While Defendant concedes that the Hateful Conduct Law may not be able to withstand strict scrutiny, she maintains that the state has a compelling interest in preventing mass shootings, such as the one that took place in Buffalo. (Tr., ECF No. 27 at 45:1–15.)

Although preventing and reducing the instances of hate-fueled mass shootings is certainly a compelling governmental interest, the law is not narrowly tailored toward that end.2 Conduct that incites violence is not protected by the First Amendment,3 but this law goes far beyond that. While the OAG Investigative Report does make a link between misinformation on the internet and the radicalization of the Buffalo mass shooter (Sawyer Decl., Ex. A, ECF No. 20-1 at 23–26), even if the law were truly aimed at reducing the instances of hate-fueled mass shootings, the law is not narrowly tailored toward reaching that goal. It is unclear what, if any, effect a mechanism that allows users to report hateful conduct on social media networks would have on reducing mass shootings, especially when the law does not even require that social media networks affirmatively respond to any complaints of “hateful conduct”. In other words, it is hard to see how the law really changes the status quo—where some social media networks choose to identify and remove hateful content and others do not.

2 The memorandum in support of the legislation that was presented to the New York State Assembly ahead of the floor debate lists the justification for the law as “concerns about misinformation that is spread on social media networks.” (Sawyer Decl., Ex. C, ECF No. 20-3.) While the law was enacted in the wake of the Buffalo mass shooting, the original iteration of the bill was drafted in the wake of the events of January 6, 2021 (Compl., ECF No. 1 ¶ 30; Sawyer Decl., Ex. D, ECF No. 20-4 at 182–183), further suggesting that the law is really aimed at misinformation on the internet. However, the First Amendment’s shielding of hate speech from regulation means that a state’s desire to reduce this type of speech from the public discourse cannot be a compelling governmental interest.

3 The Supreme Court has held that speech that consists of “fighting words” and speech that incites violence or lawlessness are not protected by the First Amendment. Chaplinsky v. New Hampshire, 315 U.S. 568, 572 (1942); Brandenburg v. Ohio, 395 U.S. 444, 447 (1969). For speech to incite violence, “there must be ‘evidence or rational inference from the import of the language, that [the words in question] were intended to produce, and likely to produce, imminent’ lawless action.” Am. Freedom Def. Initiative v. Metro. Transp. Auth., 70 F. Supp. 3d 572, 581 (S.D.N.Y. 2015), vacated on other grounds, 109 F. Supp. 3d 626 (S.D.N.Y. 2015) (citations omitted). The Hateful Conduct Law’s ban on speech that incites violence is not limited to speech that is likely to produce imminent lawless action.

Accordingly, for the reasons stated above, the Court finds that Plaintiffs have demonstrated a substantial likelihood of success on their as-applied First Amendment challenges to the Hateful Conduct Law.

B. Plaintiffs’ Facial First Amendment Challenges

Plaintiffs argue that the Hateful Conduct Law “is overbroad because it applies to a substantial amount of protected speech, especially compared to its nonexistent or minimal lawful application” and is unconstitutionally vague.
(Pls.’ Mem., ECF No. 9 at 17–20.) In response, Defendant argues that Plaintiffs have not demonstrated that the law would chill protected speech and that the operative terms of the law are clear and defined. (Def.’s Opp’n, ECF No. 21 at 18–23.)

Both the Supreme Court and the Second Circuit have recognized that “[a]lthough facial challenges are generally disfavored, they are more readily accepted in the First Amendment context.” Picard v. Magliano, 42 F.4th 89, 101 (2d Cir. 2022) (quoting Beal v. Stern, 184 F.3d 117, 125 (2d Cir. 1999)). Although facial invalidation of a statute is a “strong medicine” that should be applied “sparingly and only as a last resort”, Hobbs v. Cnty. of Westchester, 397 F.3d 133, 155 (2d Cir. 2005) (quoting FCC v. Pacifica Foundation, 438 U.S. 726, 743 (1978)); see also Am. Booksellers Found. v. Dean, 342 F.3d 96, 105 (2d Cir. 2003), a court may consider a facial overbreadth claim where a plaintiff has established that there is no set of circumstances under which the challenged statute could be valid or that the challenged statute “lacks any plainly legitimate sweep.” Picard, 42 F.4th at 101 (quoting United States v. Stevens, 559 U.S. 460, 472 (2010)). “It is established that the courts may, as an exception to ordinary standing requirements, entertain a claim that a law, even if constitutional as applied to the claimant, is so broad that it ‘may inhibit the constitutionally protected speech of third parties[.]’” Hobbs, 397 F.3d at 155. “The purpose of an overbreadth challenge is to prevent the chilling of constitutionally protected conduct, as prudent citizens will avoid behavior that may fall within the scope of a prohibition, even if they are not entirely sure whether it does.” Farrell v. Burke, 449 F.3d 470, 499 (2d Cir. 2006). “[T]he overbreadth of a statute must not only be real, but substantial as well, judged in relation to the statute’s plainly legitimate sweep.” Id. (quoting Broadrick v. Oklahoma, 413 U.S. 601, 615 (1973)). “When a court finds that a statute suffers from such substantial overbreadth, all enforcement of the statute is generally precluded.” Am. Booksellers Found., 342 F.3d at 104.

As the Court has already discussed, the law is clearly aimed at regulating speech. Social media websites are publishers and curators of speech, and their users are engaged in speech by writing, posting, and creating content. Although the law ostensibly is aimed at social media networks, it fundamentally implicates the speech of the networks’ users by mandating a policy and mechanism by which users can complain about other users’ protected speech.

Moreover, the Hateful Conduct Law is a content-based regulation. The law requires that social media networks develop policies and procedures with respect to hate speech (or “hateful conduct” as it is recharacterized by Defendant). As discussed, the First Amendment protects individuals’ right to engage in hate speech, and the state cannot try to inhibit that right, no matter how unseemly or offensive that speech may be to the general public or the state. See Matal, 137 S. Ct. at 1764; see also R.A.V., 505 U.S. at 391. Thus, the Hateful Conduct Law’s targeting of speech that “vilifi[es]” or “humili[ates]” a group or individual based on their “race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression,” N.Y. Gen. Bus.
Law § 394-ccc(1)(a), clearly implicates the protected speech of social media users. This could have a profound chilling effect on social media users and their protected freedom of expression. Even though the law does not require social media networks to remove “hateful conduct” from their websites and does not impose liability on users for engaging in “hateful conduct”, the state’s targeting and singling out of this type of speech for special measures certainly could make social media users wary about the types of speech they feel free to engage in without facing consequences from the state. This potential wariness is bolstered by the actual title of the law—“Social media networks; hateful conduct prohibited”—which strongly suggests that the law is really aimed at reducing, or perhaps even penalizing people who engage in, hate speech online. As Plaintiffs noted during oral argument, one can easily imagine the concern that would arise if the government required social media networks to maintain policies and complaint mechanisms for anti-American or pro-American speech. (Tr., ECF No. 27 at 29:2.) Moreover, social media users often gravitate to certain websites based on the kind of community and content that is fostered on that particular website. Some social media websites—including Plaintiffs’—intentionally foster a “pro-free speech” community and ethos that may become less appealing to users who intentionally seek out spaces where they feel like they can express themselves freely.

The potential chilling effect on social media users is exacerbated by the indefiniteness of some of the Hateful Conduct Law’s key terms. It is not clear what terms like “vilify” and “humiliate” mean for the purposes of the law. While it is true that there are readily accessible dictionary definitions of those words, the law does not define what type of “conduct” or “speech” could be encapsulated by them. For example, could a post using the hashtag “BlackLivesMatter” or “BlueLivesMatter” be considered “hateful conduct” under the law? Likewise, could social media posts expressing anti-American views be considered conduct that humiliates or vilifies a group based on national origin? It is not clear from the face of the text, and thus the law does not put social media users on notice of what kinds of speech or content are now the target of government regulation.

Accordingly, because the Hateful Conduct Law appears to “reach[…] a substantial amount of constitutionally protected conduct”, Farrell, 449 F.3d at 496 (quoting Kolender v. Lawson, 461 U.S. 352, 358 n.8 (1983)), the Court finds that Plaintiffs have demonstrated a likelihood of success on their facial challenges under the First Amendment.4

4 The Court also finds that severing parts of the law would not serve to save the entire statute. As stated in footnote three, while speech that incites violence is not protected by the First Amendment, whether speech does in fact incite violence is a fact-based inquiry. See Am. Freedom Def. Initiative, 70 F. Supp. 3d at 581 (For speech to incite violence, “there must be ‘evidence or rational inference from the import of the language, that [the words in question] were intended to produce, and likely to produce, imminent’ lawless action.”). The Supreme Court has rarely applied this standard, “and never explicitly found speech to be on the proscribable side of the standard.” Id. Thus, it is not clear that limiting the Hateful Conduct Law only to speech that incites violence would necessarily pass constitutional muster. In addition, at oral argument, Plaintiffs argued that the law was not severable. (Tr., ECF No. 27 at 62:8–63:7.)

C. Preemption Under Section 230 of the Communications Decency Act

Lastly, Plaintiffs allege that the Hateful Conduct Law is preempted by Section 230 of the Communications Decency Act because it imposes liability on websites by treating them as publishers. (Pl.’s Mem., ECF No. 9 at 20–21.) The Communications Decency Act provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1); see also Ricci v. Teamsters Union Loc. 456, 781 F.3d 25, 27 (2d Cir. 2015). The Act has an express preemption provision which states that “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. § 230(e)(3).

A plain reading of the Hateful Conduct Law shows that Plaintiffs’ argument is without merit. The law imposes liability on social media networks for failing to provide a mechanism for users to complain of “hateful conduct” and for failure to disclose their policy on how they will respond to complaints. N.Y. Gen. Bus. Law § 394-ccc(5). The law does not impose liability on social media networks for failing to respond to an incident of “hateful conduct”, nor does it impose liability on the network for its users’ own “hateful conduct”. The law does not even require that social media networks remove instances of “hateful conduct” from their websites. Therefore, the Hateful Conduct Law does not impose liability on Plaintiffs as publishers in contravention of the Communications Decency Act.

III. Balance of the Equities

Finally, given that Plaintiffs have demonstrated a substantial likelihood of success on the merits of their First Amendment claims, the Court must consider whether “the balance of the equities tips in [Plaintiffs’] favor, and [whether] an injunction is in the public interest.” ACLU v. Clapper, 804 F.3d 617, 622 (2d Cir. 2015). Given that “enjoining enforcement of a statute that potentially violates citizens’ constitutional rights is in the public interest”, and that Defendant can show no harm as a result of being prevented from enforcing an unconstitutional statute, the Court finds that the balance of the equities tips in favor of granting the preliminary injunction. See CompassCare, 465 F. Supp. 3d at 159.

CONCLUSION

Accordingly, for the aforementioned reasons, the Court finds that Plaintiffs are entitled to a preliminary injunction prohibiting the enforcement of N.Y. Gen. Bus. Law § 394-ccc. The Clerk of Court is respectfully requested to terminate the pending motion at ECF No. 8.

Dated: February 14, 2023
New York, New York

ANDREW L. CARTER, JR.
United States District Judge

