enculturation

A Journal of Rhetoric, Writing, and Culture

Reading Mean Comments to Subvert Gendered Hate on YouTube: Toward a Spectrum of Digital Aggression Response

Derek M. Sparby, Illinois State University

(Published March 16, 2021)

In summer 2019, Carlos Maza, a gay Latino video creator and producer for Vox, took to Twitter to call out YouTube for not enforcing its harassment and cyberbullying policies. He linked to a video mashup of Steven Crowder, an extremist conservative YouTuber with a large following, repeatedly calling Maza homophobic and racist slurs across multiple videos. In one tweet, Maza explained that Crowder’s fanbase doxxed[1] him in 2018; he included a screenshot of a small portion of the hundreds of text messages saying “debate steven crowder” (@gaywonk, “Last year, I got doxed . . . ”). However, Maza called out YouTube, not Crowder, arguing that the platform refused to enforce its own rules because doing so would mean banning a popular, monetized channel that brings the site revenue. Specifically, Maza pointed to three of YouTube’s nine criteria for harassing content that Crowder regularly violated:[2] “Content that is deliberately posted in order to humiliate someone[,] . . . make hurtful and negative personal comments/videos about another person’s information[, and] . . . incites others to harass or threaten individuals on or off YouTube” (YouTube, “Harassment and Cyberbullying Policy”). Maza further claimed, “by refusing to enforce its anti-harassment policy, YouTube is helping incredibly powerful cyberbullies organize and target people they disagree with” (@gaywonk, “This”). He also vehemently declared, “YouTube does not give a fuck about diversity or inclusion. YouTube wants clicks” (@gaywonk, “YouTube”).

Seemingly unwilling to give up the revenue that channels such as Crowder’s bring in, YouTube has responded to Maza and others with consistent sluggishness. YouTube demonetized Crowder’s channel in early June 2019, but only after a rise in public pressure from Maza’s widely shared tweets. While Maza’s situation is relatively recent and widely publicized, harassment is not a new complaint about YouTube; many have criticized the platform for not doing enough to support its content creators when they face aggression, from both human and nonhuman agents and from both content creators and viewers (Alexander; Eordogh; Strapagiel). In December 2019, YouTube announced new harassment and cyberbullying policies that specifically include creator-on-creator harassment, as well as a more explicit renunciation of racism, sexism, and homophobia (Wakabayashi). However, many, including Maza (“TL;DR”)[3], are skeptical that these policies will be impactful (Zialcita). Policies mean little without enforcement, and YouTube’s track record of enforcement has not been great.

Maza drew widespread attention to creator-on-creator harassment and, consequently, to YouTube’s subpar rule enforcement and creator protection. However, this article focuses on one of the largest problems of digital aggression on YouTube: commenter-on-creator harassment. Much research has documented who propagates digital hate against whom and what the effects of these behaviors are, but my research looks toward solutions; my aim is to provide concrete tactics for response and to theorize proactive approaches that prevent these kinds of behaviors. After a review of previous research on vitriolic comments sections and aggression on YouTube and a discussion of methodology and method, I rhetorically analyze “reading mean comments” (RMC) to elucidate how it works as a practical tactic for resisting digital aggression. I also forward what I call a spectrum of digital aggression responses ranging from direct to indirect. This model, I argue, can help researchers uncover how certain response tactics are tied to privilege, power, and the affordances and constraints of platforms.

2. Online Comments and YouTube Hate

Scholarship has documented how comments sections can perpetuate injurious discourses online. Clinnin and Manthey point to the many ways in which comments sections can be positive community-building spaces (32) but also to the ways they perpetuate “toxic commenting culture” (31, 33). By now most users are aware of the old adage “don’t read the comments,” and recent Pew surveys on online harassment have solidified this sentiment. In 2017, comments sections were rated the second most common place adults experience harassment online, coming in behind social media more generally (Duggan). Many news organizations, including Reuters, Popular Science, The Verge, and NPR, shut down their comments sections because moderating them became too expensive and time-consuming (Ellis; Jensen; Konnikova). While Joseph M. Reagle’s Reading the Comments shows the valuable discourse comments sections are able to create online, it also points to their tendency to fall easily into vitriol and chaos.

Patricia Lange refers to the people who post aggressive and vitriolic comments as “haters” because their contributions do not “‘offer any [criticism] or any helpful information . . . . [They] insult you and offer no suggestions on [improvements]’” (Skazz, qtd. in Lange 6). “Mean comments” are the remarks haters make: “the phrases haters use are repetitive, unimaginative, and similar to those of other haters. They are unable to offer ‘legitimate’ arguments about why they hate something” (Lange 7). While such a description is useful, the terms “haters” and “mean” insinuate a certain innocence and innocuousness that rarely characterizes these comments. In fact, “mean comments” often align more closely with the legal definition of “hate speech”: “an incitement to hatred primarily against a group of persons defined in terms of race, ethnicity, national origin, gender, religion, sexual orientation, and the like” (“Hate Speech Law”). A large portion of mean comments are based in discourses that support socially hegemonic ideals of racism, ableism, misogyny, and homophobia, and they can be as serious as rape or death threats. YouTube is notorious for these comments. In a recent TED Talk, Dylan Marron said, “with comments sections inevitably comes hate” (“Empathy Is Not Endorsement” 10:12-10:15). Urban Dictionary jokes that YouTube comments sections are “the only place where a polite discussion about kittens can lead to a flame war about government conspiracies” (Extinguisher). While said in jest, this description is not inaccurate.

Even though YouTube is ostensibly trying to keep its vitriol under control, it is not doing enough. Danielle Keats Citron explains that YouTube is actively trying to “prevent abuse from happening and to diminish its impact when it occurs” (226). But with over one billion users—which YouTube contextualizes as “almost one-third of all people on the Internet” (“Press”)—the platform’s scale makes it difficult to effectively enforce its own policies, rules, and values. Jean Burgess and Joshua Green highlight two reasons why YouTube might draw so many negative comments: “anonymity (so that there are few disincentives to behave badly) and scale (so that it becomes difficult to keep up with policing and moderating comments)” (96). Simultaneously, there are no incentives to behave well. YouTube developed the Creators channel (formerly Creator Playbook) and the Creator Hub, which provide strategies for YouTubers to build successful channels. However, neither acknowledges digital aggression (Wotanis and McMillan 915), let alone provides strategies for addressing it. Creators, especially those potentially most vulnerable to haters, must learn to navigate the already difficult subject of digital aggression without any help from the website that houses their content and purports to help them. YouTube could provide stronger rules or enforce its existing ones more firmly and consistently, but its failure even to acknowledge aggression in its creator materials shows that it privileges money, clicks, and views over inclusive platform design.

Further, research has documented that women and people with other marginalized identities receive more aggressive comments than men (Searles, Spencer, and Duru 949-50; Gardiner et al.), especially on YouTube (Wotanis and McMillan). Reagle has pointed out that gendered harassment online is so pervasive that “sexually violent comments, especially toward women, are an established genre of comment . . . characterized by profanity, ad hominem invective, stereotype, and hyperbolic imagery of graphic (and often sexualized) violence that manifests as a threat or wishful thinking” (106). In addition, Lindsey Wotanis and Laurie McMillan found in their quantitative study of two top YouTubers’ videos that women receive more critical/hostile comments and sexist/sexually aggressive comments than men (919). Their examination of the gendered difference between mean comments reveals that women’s presence on YouTube is marked by more hate and sexual aggression, and thus greater objectification and depersonalization, than men receive. Wotanis and McMillan argue that these kinds of abusive comments are so prevalent on YouTube that they “are a part of YouTube culture” (914). Moreover, as Clinnin and Manthey have pointed out, “The discourse in online comments often devolves into misogynistic, racist, homophobic, ableist, and violent hate speech targeted at marked bodies online (including those who identify as women, people of color, LGBTQ, disabled, religious minorities, or other non-normative identities)” (31). In the opening example, Carlos Maza is a queer Latino whose harassment is based explicitly in his identity, and he is far from the only person to experience these kinds of comments.

This gendered and raced aggression dissuades some women, particularly queer women and women of color, from creating and sharing content. Warren, Stoerger, and Kelley found that women tend to participate less frequently online, even when their identities are protected by anonymity, because of their fear of being harassed (11, 22). For some women who have put themselves publicly online and faced a deluge of negative responses, the repercussions have gone far beyond what can be simply ignored or accepted as part of YouTube culture: gendered cyber harassment “‘discourages [women] from writing and earning a living online . . . . The harassment causes considerable emotional distress. Some women have committed suicide’” (Citron, qtd. in Wotanis and McMillan 915). Ultimately, because these women are intimidated out of producing content for YouTube, they are silenced.

Aggressive behaviors are pervasive online (Duggan; Gruwell; Gurak; Phillips; to name only a few), particularly gender-, race-, sexuality-, and disability-based aggression (Clinnin and Manthey; Cloud; Jane; Poland; Sparby; among many others). “Mean comments” have the power to shape reality, shift identities, and alter agency by discouraging and excluding certain voices. As such, harassment in digital spaces requires rhetorical interventions that are able to recognize, critique, and work against this power (Reyman and Sparby, “Introduction,” 1-2, 7). Fortunately, some women YouTubers resist these intimidation tactics and refuse to be silent, and many of them use “reading mean comments” as a rhetorical tactic to subvert aggression and build a strong self-policing community on their channels. This article examines this tactic and looks toward what a spectrum of digital aggression response can illustrate for content creators and digital rhetoricians.

3. Methodology and Method

As a methodological basis for this study, I use a technofeminist framework to rhetorically analyze online comments. Clinnin and Manthey explain that “Rhetorical technofeminism is a theoretical and activist framework that offers flexible applications to analyze and repurpose a variety of social, cultural, political, and technological problems such as the toxic comment section” (36). Dubisar et al. also explain that “we can use digital platforms to apply both rhetorical analysis and a critical feminist lens in order to reveal infrastructures of power and inequity within discourse that occurs in public space” (55). Importantly, this framework enables us to consider both the technological and the sociocultural reasons for digital aggression and develop multidimensional responses that recognize how each target has a unique experience with it. Clinnin and Manthey argue that “a rhetorical technofeminist approach helps us see that the comment is a product of a larger system of identities, bodies, discourses, and technologies that requires change” (37). As such, this analysis takes an intersectional approach to studying technology and discourse to search for socially just and equitable responses to digital aggression on YouTube.

This research also draws on my previous co-authored scholarship with Jessica Reyman, which argues that because it is a complicated, multidimensional issue, digital aggression requires an ecology of rhetorical approaches that are not only reactive (i.e., deleting harmful content/comments after the fact) but also proactive. This ecology of approaches includes:

1. Platform design that offers moderation tools and clear policies;

2. Community leaders articulating and following rules and norms;

3. Moderators enforcing rules and norms while modeling behavior;

4. Community members reinforcing norms and rules and also teaching new members how to behave. (Reyman and Sparby, “Introduction,” 7-8)

The opening example of Maza and Crowder shows how YouTube fails in the first category (or perhaps, more accurately, chooses to fail) by not strictly enforcing its own rules and choosing to focus on brand building and views over cultivating an inclusive platform. As such, while YouTube has the ability to excel in ethical platform design, it instead reinforces digital aggression. Consequently, it falls on individual YouTube creators, as both community leaders (category 2) and moderators (category 3), to monitor their channels and decide how to handle abuse and hate while also encouraging their subscriber communities to self-police (category 4). While a technofeminist framework highlights the ways in which YouTube has failed its users, it also enables us to recognize and promote the successful efforts many YouTubers make to mitigate aggression on their channels. Rhetoricians can intervene in platform failures such as YouTube’s by offering rhetorical analyses that expose how and why platforms have failed while also offering strategies for intervention and resistance.

To learn more about the rhetorical tactic of RMC, I analyzed seven videos from five YouTube celebrities (Table 1)—Jenna Mourey, Grace Helbig, Hannah Hart, Lilly Singh, and Colleen Ballinger—who use RMC both to strip their aggressors of power and to create inclusive communities on their YouTube channels. Jimmy Kimmel first drew attention to this video genre in 2012 during a segment called “Celebrities Read Mean Tweets” on Jimmy Kimmel Live!. There, he explained the genre’s premise in simple terms: a few celebrities read aloud one mean tweet they have received and then respond to it. The goal is to ridicule the hater while simultaneously subverting the power of their mean tweet through a sort of “sticks and stones”[4] approach to digital aggression: the haters can hate, but at the end of the day, the celebrity is still a celebrity. Since then, reading mean tweets has made its way to YouTube, where it has been transformed into RMC by celebrities such as Mourey, Helbig, Hart, Singh, and Ballinger.

Table 1. YouTube Videos Analyzed

YouTuber (Channel): Video Title (Date Posted)

Hannah Hart (MyHarto): “Reading Mean Tweets! #MakeItHappy ft. JennaMarbles, Colleen Ballinger, Lilly Singh, and Mamrie Hart!” (29 Jan 2015)
Jenna Mourey (Jenna Marbles): “Reading Mean Comments” (15 Jan 2015)
Grace Helbig (Daily You): “Pete Holmes Compliments the Sh*t Out of You” (22 Oct 2013)
Grace Helbig (Grace Helbig): “READING MEAN COMMENTS w/JAMES CORDEN” (30 Mar 2015)
Colleen Ballinger (Colleen Ballinger): “READING MEAN COMMENTS” (28 Jul 2015)
Colleen Ballinger (Colleen Ballinger): “MEAN COMMENTS – An Original Song” (7 Sep 2016)
Lilly Singh (||Superwoman||): “What YouTube Comments Really Mean” (19 Sep 2015)

I rhetorically analyzed these five YouTube celebrities’[5] RMC videos for several reasons. First, I have been a fan of their channels for several years and am well-versed in their video styles and fan communities. This insider knowledge is important because it precludes the need for extensive participant-observation or ethnographic research; I understand how the communities function and the role of each YouTuber in building her fanbase. Second, all five YouTubers are leaders in what has been dubbed the “awkward older sister” movement (Peterson; Framke). All five women are friends who rose to popularity around the same time and often appear in each other’s videos, and they have dedicated themselves to supporting young women. Third, I chose these five women because, as I identify above, aggressive online comments are often based in denigrating gender, race, ability, and other intersecting identity factors. Two of these celebrities are multiply marginalized, one at the intersection of race and gender and the other at the intersection of sexuality and gender: Lilly is Punjabi-Canadian and finds her race the target of many mean comments, and Hannah is an out lesbian whose sexuality is often targeted. Jenna, Colleen, and Grace present as white, cisgender, and heterosexual. All five women appear to be able-bodied. Hannah has been open about struggling with mental health and attending therapy, and many of the others have implied similar mental health issues. I highlight these intersecting identity factors because, as the analysis below reveals, they are important to how each YouTuber approaches RMC and to how the ability to use it hinges on issues of identity and sociocultural power. Finally, I also selected these five women because, as John R. Gallagher has noted, the labor of managing aggressive comments is unevenly distributed: “The labor of monitoring and managing comments consequently means that content creators who identify as women have more content moderation and the type of moderation is more emotionally damaging than for content creators who identify as men” (181). As such, this article highlights how, when enacting RMC, these YouTubers must take on some of that extra work because YouTube’s platform design and policy enforcement do not work for them.

4. Reading Mean Comments to Subvert Hate on YouTube

YouTubers are concerned that censoring comments or disallowing certain users to post limits free speech (Lange). Many join YouTube and post videos precisely because they believe anyone should be able to express themselves. Free speech is a core value of the entire user group, not just a defense mechanism used by haters to defend their behaviors. As such, any reactive method for resisting haters and their mean comments must not limit free speech or prohibit abusive comments. Instead, it must find ways to reduce their power and bring fans together, mobilizing them to do some of the work of creating inclusive communities. This is where RMC can serve as a productive tactic for simultaneously subverting gendered hate and empowering content creators and community members.

This tactic is parodic: it “intentionally copies the style of someone famous or copies a particular situation, making the features or qualities of the original more humorous” (Warnick and Heineman 83). Rhetorically, parody helps us “respond evaluatively to what is said to us” (Dentith 3) and “mock[s] that which is being imitated” (Ballard 10). Parody is “playful” (Dentith 11) and “always critical” (Hutcheon 93). In this regard, it has the capacity to initiate sophisticated cultural critique against systems of power by allowing users to “subvert, or critically remix, the power dynamics of mainstream popular culture” (Dubisar et al. 53). This playfully critical and subversive critique is often carried out through a satirical re-invention or imitation of a prior text that Joel Penney explains results in its “active transformation” (226). For example, through an analysis of The Gaythering Storm—a parody video made in response to an anti-LGBTQ+ video by a similar name—Penney shows that censorship risks sending the message that LGBTQ+ people are “passive victims of a bigoted majority” and that parody can be an active process to transform the original message, reducing some of its harm by treating it as an object of ridicule (Penney 227). Parody can strip offensive or abusive representations of their power.

Knowing that parody’s power comes from its capacity to inspire critical analysis, YouTubers who enact RMC often parody multiple mean comments in one video to reveal the comments as hate speech while critiquing the larger cultural assumptions behind them. When a user reads only one mean comment, or even a few, it can be difficult to understand the larger picture of the kinds of online aggression women YouTubers face. However, when users are faced with a cluster of mean comments in a five-minute video, it becomes clear that such vitriol is more than just “mean”: it is hate speech aimed at specific identities and often fixated on critiquing and/or (sometimes violently) sexualizing YouTubers’ appearance. This reveals that YouTube remains a space where women’s bodies are controlled and destroyed, even if only in fantasy. In critiquing such hate speech, RMC thus should be understood as a form of feminist rhetorical critique (see also Dubisar et al.). Dentith explains that “many parodies draw on the authority of precursor texts to attack, satirise, or just playfully refer to elements of the contemporary world” (9). In the case of mean comments and haters, the alleged authority is the haters, their hate speech, and their hierarchical worldview that presumes power over women’s bodies. Through the use of parody in RMC, YouTubers resist and subvert the haters’ presumed authority. Rather than allow the hate speech to silence them, women YouTubers speak out even louder against such oppressive power, actively working against the idea that they are not welcome on YouTube. In performing parody, YouTubers actively transform malicious comments intended to offend or silence into new texts that do just the opposite, thereby opening up their channels as a space for productive discourse.

In the remainder of this section, I analyze how such parodic performances unfold through a variety of techniques to understand how the YouTubers’ power and positionality impact their ability to respond, as well as how direct or indirect each is in order to mitigate further backlash. Grace has men present in both of her videos, Lilly summarizes mean comments and unapologetically derides her haters, Colleen exaggerates the comments while singing them, Hannah ends on a positive note, and Jenna shows the reality of mean comments. These distinctions are important not only because they reveal something about the fanbase toward which each YouTuber directs these parodies, but also because they demonstrate how Penney’s “active transformation” varies across contexts and adapts to new rhetorical situations. No one technique will work for every YouTuber; instead, each must understand their situation juxtaposed with their audience’s needs and expectations and tweak the technique accordingly.

4.1 Grace and the Use of Male Authority

In both of her RMC videos, Grace invites guests to help her read and respond: the first video features Pete Holmes (Helbig, “Pete Holmes”), and the second one features James Corden (Helbig, “Reading”). Grace puts most of the work of responding to the comments on the men. She chooses and reads the comments, but leaves it to Pete and James to initiate the response. Further, when Grace displays the comments she reads, she enlarges and superimposes them over the majority of the screen. This serves the dual purpose of making the comment clearly visible and obscuring Grace’s face so viewers cannot see her reaction when she reads the comments (Figures 1 and 2).[6]

Grace Helbig and Pete Holmes are almost completely obscured by screencap of a YouTube comment that says “You’re a f*cking sl*t Grace; you africa me so much. You are so NOT interesting! BITCH F*** You!”

Figure 1. “You’re a f*cking sl*t Grace; you Africa me so much. You are so NOT interesting! BITCH F*** YOU!”

Grace Helbig and James Corden are almost completely obscured by a screencap of a YouTube comment that says “i h@ you.”

Figure 2. “i h@ you”

Grace cedes her own agency in responding to these comments and instead puts the impetus on her male guests. As a result, Pete and James represent a male authority that deconstructs some of the gendered hate Grace receives, paradoxically both silencing and empowering her. She seems to recognize that—as white, heterosexual men with normalized masculinity—Pete and James carry a level of authority that the haters recognize, which may lend her more perceived authority to subvert them. However, the risk of borrowing their authority is that she could alienate some of her own fans, many of whom are young women who may be disappointed to see Grace respond through these male personas.

4.2 Lilly and the Use of Summary Instead of Direct Quotation

Lilly’s video, “What YouTube comments really mean,” focuses on various types of comments she has received; she does not read or display any of her actual comments. As a multiply marginalized YouTuber, she often receives aggressive comments grounded in both her gender and her race, and these comments can be more violent. As such, she adapts RMC in a way that calls attention to both privilege and vulnerability. Calling out specific commenters by name or even quoting their comments directly could put her in a more precarious position, so Lilly’s decision to summarize is both parodic and protective.

Lilly often responds with sarcastic contempt for her haters. Rather than engage in the same hate speech these comments aim at her (racism, appearance-based sexism, threats, and so on), she instead channels her rage toward the content, mocking the narrow mindset that produces it. Lilly also uses a camera technique to create a boundary between her and her aggressors by appearing on the left side of the screen to summarize (Figure 3) and then on the right to give her response (Figure 4). Finally, she also rewards good comments by thanking fans and explaining that she is going to respond to positive comments as soon as she posts the video. Fans typically respond positively to her efforts to re-engage the community.

Lilly Singh makes an angry face and mimics typing on a keyboard while she says “You’re a terrorist.”

Figure 3. “You’re a terrorist.”

Lilly Singh points to her head and makes a sarcastic face while she says “My brain is controlled by the media and I blindly accept information. Now I’d love to stay and chat, but I have to forward this scary email to 70 people in the next seven minutes or a ghost is going to kill me.”

Figure 4. Response to “You’re a terrorist”: “My brain is controlled by the media and I blindly accept information. Now I’d love to stay and chat, but I have to forward this scary email to 70 people in the next seven minutes or a ghost is going to kill me.”

4.3 Colleen and the Use of Exaggeration through Song

Colleen Ballinger adds a unique twist to RMC by using negative comments as fodder for song lyrics. She shows these comments while she sings them; at times, these videos seem like a dark sing-along. Colleen also demonstrates parodic exaggeration, emphasizing her haters’ grammatical errors and ridiculing their intelligence. For instance, in “Mean Comments” when she over-pronounces the “l” in the comment that calls her “toltoly stupid,” she also sticks out her tongue as she exaggerates the misspelling (Figure 5). Both songs also place strong emphasis on the fact that so many of the mean comments she receives are riddled with errors. In “Mean Comments,” part of the chorus is “So keep the comments coming, but here’s some help: before you go insulting people maybe learn how to spell.”

Colleen Ballinger sings “Toltoly stupid” while rolling her tongue to emphasize the misspelling. A screencap of the comment appears below her.

Figure 5. “Toltoly stupid.” 

A drawback of Colleen’s sing-along method is that the happy tunes of both of her songs belie the serious nature of some of these comments. For instance, one comment references masturbation: “roses are red violets are blue pornhub is down psychosoprano will do” (Ballinger, “Mean Comments,” 00:23-00:27).[7] However, despite this risk of downplaying the mean comments, Colleen also stresses how much power she has even in the face of aggression because she has enabled monetization on her channel. In “Reading Mean Comments,” she sings, “But the joke’s on you, so keep saying ‘I want you killsd,’ cuz your comments make me money and you’re paying my bills” (Ballinger, “Reading Mean Comments,” 01:15-01:21). In “Mean Comments,” she sings, “You might think that I’m hurt or I’m feeling abused, but I’m not. I’m getting paid from your comments and views” (Ballinger, “Mean Comments,” 00:51-00:58). She emphasizes that even hateful comments support her income.

4.4 Hannah and the Use of Positive Endings

Although “Reading Mean Tweets! #MakeItHappy ft. Jenna Marbles, Colleen Ballinger, Lilly Singh, and Mamrie Hart!” is Hannah’s video, she does not read any of her own mean comments. Instead, she asks—as the title implies—Jenna, Colleen, Lilly, and Mamrie Hart[8] to read comments that she has selected. One unique feature of this video is that we see the YouTubers react to the mean comments for the first time. Before she reads her first one, Jenna is visibly nervous; off-camera, Hannah says, “I’m going to have you read some negative comments okay?” (Hart 00:58-01:00). Jenna responds with “‘Kay” and makes funny faces at the camera—including a half-wince, half-smile (Figure 6)—indicating her unease. The other YouTubers have similar reactions, which indicates the performativity of RMC. While the YouTubers may appear cheerful or confident in the polished, edited versions of their own videos, here they are visibly apprehensive and uneasy about mean comments.

Jenna Mourey smiles and winks uncomfortably in response to Hannah Hart’s statement, captioned in white text at the bottom of the screen, “I’m going to have you read some negative comments. Ok?”

Figure 6. Jenna responding to Hannah saying, "I'm going to have you read some negative comments. Ok?"

Around the 1:50 mark, less than halfway through the video, Hannah admits that RMC can be tough. She asks the YouTubers to read and react to positive comments, and the real purpose of the video becomes clearer: Hannah contrasts the vitriol of the mean comments with the community-building of the positive ones. This method allows Hannah to highlight and discourage hate speech. At the same time, Hannah reconstructs the discursive space on YouTube; she shows that it has the potential to be a space that supports instead of denigrates non-hegemonic identities.

4.5 Jenna and the Use of Repetition and Tone

[Content warning for this section: implications of rape and murder]. Those who participate in RMC choose the comments they will read. Jenna’s video is the only one in this study to include sexually violent comments—that is, comments that fantasize about rape and murder—although she is not the only one who regularly receives them. She begins the video by reading and laughing at or responding to comments. One comment reads, “Ugly slut you are,” and Jenna responds by smiling and saying “Thanks, Yoda” (Mourey 01:12-01:14). However, as she goes on, her tone shifts when she begins reading the sexually violent comments:

  • I just hope you die. So your [sic] saying no makeup no sucky cocky and with lip stick swallow spit or gargle. (Mourey 03:50-04:04)
  • I hate that stupid unfunny kunt [cunt] sloot [slut]. I would fuk [fuck] her but only if I can brutally murder her afterwards. (Mourey 05:15-05:22)
  • I would destroy her vag and dump her in the lake back there… (Mourey 05:29-05:33).

With each sexually violent comment, the tone of the video shifts. Jenna looks and sounds tired (Figures 7 and 8), yet she slogs through two more minutes of comments after the third one. Whereas the other YouTubers keep the tone relatively lighthearted and try to focus on making users laugh, Jenna unflinchingly demonstrates the emotional reality of what it can mean to be a woman on YouTube.

Jenna Mourey wearily reads a mean comment. The screencap of the comment at the bottom reads, “I hate that unfunny kunt sloot. I would fuk her but only if I can brutally murder her afterwards.”

Figure 7. “I hate that unfunny kunt sloot. I would fuk her but only if I can brutally murder her afterwards”

Jenna Mourey tiredly reads a mean comment. The screencap of the comment at the bottom reads, “I would destroy her vag and dump her in the lake back there…”

Figure 8. “I would destroy her vag and dump her in the lake back there…”

As Jenna states in the intro and outro of “Reading Mean Comments,” she produced this video because her fans requested it. She explains why she originally resisted it: “So yeah, I’ve been putting off making this video because it’s a downer, it’s fucking sad…. I’ve sorta been hesitant to do this” (Mourey 00:28-00:33; 00:40-00:42). Typically, these kinds of videos are meant to be funny and even fun, but Jenna shows her fanbase what kinds of comments she actually faces. Many users expressed surprise that the video was so serious; one wrote, “Normally I love people reading hate comments, but this is sad. Like I literally started to tear up...” (Forest of Shadows). Perhaps this cognitive dissonance was intentional: it seems many fans learned from Jenna’s video that mean comments on women YouTubers’ videos are not like those on men’s videos.

5. The Problem and Promise of RMC

Even though RMC parodies haters and actively transforms the comments, it is important to note that this tactic has the potential to perpetuate problematic mindsets. As Butler explains, “Parody by itself is not subversive” because it can easily “become domesticated and recirculated as instruments of cultural hegemony” (Gender Trouble 189). If the YouTuber does not draw a sharp line between critique and original text, then their parodic tactic could be mistaken for sincerity, and they could accidentally legitimize the online aggression they seek to resist. Because of the titles, framing devices, and unique techniques of RMC identified above, it seems reasonable to expect that viewers would understand them as parody. However, a recording of one of Colleen’s live shows (uploaded by YouTuber 138riley138, not by Colleen herself) highlights how this particular tactic can be dangerous without careful framing. In this video, Colleen sings “Reading Mean Comments” before a live audience.[9] I noted earlier how the sing-along format may undermine the seriousness of the mean comments that she receives, and in this live segment several audience members can be heard singing along with her. In addition, they often cheer loudly during some of the more unsavory comments. Part of this enthusiasm likely stems from support, but they nonetheless recirculate harmful speech in a new rhetorical context, largely because this video does not contain framing devices to distance the parody from the original. First, the recording does not show if Colleen addressed the nature of mean comments during the performance itself to frame it as parodic. Second, the description (Figure 9) presupposes viewer knowledge of what “Colleen singing Reading Mean Comments” means. As such, any viewers unfamiliar with Colleen or her videos may not understand the song’s parodic nature. As Dietel-McLaughlin explains, audience awareness is a key part of the subversive power of parody (n.p.). If Colleen is not careful to frame these performances in terms of parody, she risks perpetuating the very hate she is trying to subvert through secondary audience members who do not understand her purpose.

Description of 138riley138's video of Colleen singing Reading Mean Comments.

Figure 9. Description of 138riley138's video of Colleen singing Reading Mean Comments. 

In addition, RMC is a risky tactic in that, regardless of audience awareness of the parody or context, repeating these comments even in the service of critique can serve to perpetuate and normalize injurious speech (Butler, Excitable Speech; Milner; Phillips; Sparby). Even though the YouTubers clarify the difference between mean comments and good ones, RMC can still be normalizing. If someone laughs at the parody while recognizing its evaluative function, then the chance of normalizing and valorizing the hate speech is reduced. However, if someone laughs at the parody without an awareness of or appreciation for its critique, the hate speech has a higher risk of being normalized and rewarded in much the same way that trolling can be in spaces like 4chan (Sparby). As such, RMC has the potential to perpetuate the hate its users seek to resist.

Despite this danger of RMC, I argue that its potential as a method for subverting online aggression is still promising. In the face of haters trying to silence their voices in a public space, RMC offers a potent mechanism for simultaneously refusing silence, launching critique, and generating spaces for free and open discourse in order to build and maintain a strong fanbase and community. This is especially important in that YouTube celebrities cannot allow haters to overrun the comments sections on their channels if they want their fans to feel comfortable speaking and expressing themselves. It is also important that community members have a clearer understanding of the rules and norms of the space and feel empowered to enforce them (category 4 of the framework for approaches to aggression). The comments sections of videos enacting RMC are full of supportive remarks from fans. Comments on Colleen’s videos emphasize how happy viewers are to engage with and watch her videos so they can help her make more money: “when she said the comments make her money I just want to spam her with comments!!!” (AlexisHappySnuggle Raby). Comments on Jenna’s videos defend her: “Whoever made those comments, I hope you know you're fucking disgusting…. This makes me so angry” (Miekel). Grace’s, Hannah’s, and Lilly’s videos have similar comments, and many even go so far as to promise that they will downvote negative comments, showing they have taken it upon themselves to help establish and protect the discursive spaces. These positive comments, and others like them, show that RMC can help strengthen each YouTuber’s community. RMC subverts haters by empowering fans to engage with each other in supportive dialogue.

6. Toward a Spectrum of Digital Aggression Response

While RMC is an effective tactic for addressing digital aggression, it is just one available means of response. In fact, as I demonstrate here, a wide range of rhetorical tactics commonly used to respond to hateful comments can be identified along a spectrum of direct to indirect responses. For example, in the summer of 2016, Leslie Jones was bombarded with racist tweets at the prompting of right-wing antagonist Milo Yiannopoulos (Woolf). Her offense? That she dared to be a Black woman starring in the all-woman remake of Ghostbusters. Her initial tactic was to respond to the tweets directly, expanding her presence in the space. As Dieterle, Edwards, and Martin demonstrate, Jones’s direct responses incited an influx of support from her fans and followers (particularly through hashtags like #StandWithLeslie and #BlackMenSupportLeslie) (198). But at their core, her responses were aimed directly at her aggressors, and, unfortunately, they spurred an influx of more aggressive and violent tweets, eventually resulting in her taking a break from Twitter to protect her safety.

On the other hand, Dylan Marron has taken a different approach. He has received a lot of hate for being a queer Latino in digital public spaces, and one way he subverts hateful comments is through his podcast “Conversations with People Who Hate Me” where he puts targets and aggressors in contact for a conversation—sometimes even talking to his own aggressors—to learn from each other. He has developed the mantra “empathy is not endorsement” (Marron, “Empathy Is Not Endorsement” 09:01-09:04) to explain the disconnect he experiences when he feels bad for someone who has put him down, positing that it is possible to recognize each other’s humanity without compromising our own beliefs. He has talked to his own aggressors in at least two episodes, which makes plotting this response on a spectrum complex; on the one hand, he speaks directly with an aggressor, but on the other he broadcasts the conversations to a wide audience.

These are just two tactics, alongside RMC, that exist on what I call a spectrum of digital aggression responses (see Figure 10). These responses range from direct to indirect, with Jones falling closer to the direct end since her main audience is the aggressors, and Marron falling closer to the middle since his audience is both the aggressors and the targets. Both involve talking directly to aggressors, but the first refuses silence and tries to establish power, and the second highlights a human connection with aggressors to understand that, as Marron explains it, “hurt people hurt people” (Marron, “Hurt People”). RMC is closer to the indirect end of the spectrum (Figure 10). Each video opens and closes with framing devices—general salutations and farewells—that establish and re-establish the fans, not the haters, as the primary audience.

A spectrum of digital aggression response from direct to indirect, with Leslie Jones almost all the way to the direct end and Reading Mean Comments in the middle; Dylan Marron is between the two but closer to RMC.

Figure 10. Plotting Jones, Marron, and RMC on a spectrum of digital aggression response.

Identifying RMC and other tactics along a spectrum of digital aggression responses—sometimes alongside the kind of backlash they receive—is useful because it highlights how much power and privilege a target has to respond. For instance, Jones’s refusal to be silent resulted in her aggressors increasing their attacks on her, which was inarguably tied to her race and gender; a white man in her same situation may have had a different experience. Marron makes it clear that his conversational tactic works for him, but that there are some targets who are too vulnerable to talk to their harassers or who are simply out of empathy for them (Marron, “Empathy Is Not Endorsement” 09:30-09:49). He recognizes his own privilege in being able to have these conversations. Further, while RMC typically falls closer to indirect response than direct, some RMC videos may deviate further in one direction or the other. For instance, Lilly’s RMC video—which highlights types of comments instead of quoting them—falls closer to the indirect end since she does not call out specific commenters or comments. As a Punjabi woman, Lilly undoubtedly made this choice as a protective measure to avoid inciting more comments from particular aggressors. Where her responses fall on the spectrum is strongly tied to her race and gender and thus to her sociocultural power and privilege.

7. Conclusion

As scholars and teachers of digital rhetoric and practitioners of digital technologies, it is our ethical responsibility to examine the ways in which digital spaces reinscribe hegemonic ideologies and power structures while uncovering and enacting ways to subvert them (e.g., Selfe and Selfe; DeLuca; Sparby; Reyman and Sparby). Social media companies are not going to make the kinds of interface and infrastructural changes necessary to create more open and inclusive spaces without pressure. Carlos Maza’s Twitter thread and the ensuing public outcry in 2019 resulted in some YouTube policy changes six months later, but since then, attention to the issue has died down. Rhetoricians can help figure out how to collaborate with others to sustain that attention and continue pressuring YouTube and other social media platforms to fight aggression on their sites. We can also, as I show in this article, help identify a spectrum of responses to digital aggression that demonstrates how intersecting identity characteristics tie to targets’ power and privilege and thus to their ability to respond in these spaces.

This study of how five YouTubers address digital aggression is surely limited. Hannah, Grace, Lilly, Colleen, and Jenna are famous YouTubers who make six-figure incomes from their video productions, which leaves us with questions such as: How can other YouTubers with fewer fans and less power and privilege enact resistance against their haters? Without the backing of subscribers and money, what approaches can smaller-scale YouTubers adopt to effectively address digital aggression? Further, while I have briefly identified tactics used on Twitter and professional podcasts, what responses are available to targets across other platforms and with varying degrees of power and privilege? How are content creators responding to aggression on other video platforms such as TikTok, which saw a huge surge of new users when the COVID-19 pandemic began in March 2020? Plotting these tactics on a spectrum of digital aggression response can help us see a fuller picture of what kinds of responses are available to targets and how these options are often tied directly to their identity/ies.

However, any of these responses—even RMC—is inherently reactive and has its limits. Ideally, YouTube and other social media spaces would develop proactive approaches to aggression that could mitigate it before it has a chance to happen. But for now, it is clear that most of the burden has been placed on users, leaving them to pick up YouTube’s slack by creating and enforcing their own rules without violating the free speech that is so heavily valued by the community. RMC has proven useful to YouTubers as they take on the burden of addressing aggression, harassment, and hate speech on the platform. But this tactic has also proved draining over time. After talking back to haters for nearly a decade, Jenna announced in June 2020 that she would no longer be creating YouTube content after the hate she received for problematic videos she posted early in her YouTube career[10] (Dowling). In the announcement video, she looks and sounds exhausted; the video has since been removed from her channel. RMC and talking back to haters can serve as a short-term response to aggression, but it is not a sustainable long-term approach. As we move forward with research on digital aggression, we need to look for and highlight more tactics so we can find ways to support content creators on YouTube.


[1] Doxxing is releasing private information to the public. In Maza’s case, it appears his phone number was circulated widely, but doxxing can also include addresses, account usernames and passwords, family member names and contact information, and any other private data people generally do not want publicized.

[2] The policy was updated in December 2019 and no longer reflects this wording.

[3] The full text of the tweet reads: “TL;DR: YouTube loves to manage PR crises by rolling out vague content policies they don't actually enforce. These policies only work if YouTube is willing to take down its most popular rule-breakers. And there's no reason, so far, to believe that it is.”

[4] From the old childhood adage “sticks and stones may break my bones, but words will never hurt me.”

[5] YouTubers who have made successful careers creating content for YouTube. They do not hold other jobs, often earn six figures annually, sell merchandise, engage in branding, and sometimes hire publicists to manage their public personas.

[6] A note on the data display: throughout this article, I display screenshots from the videos. All of the YouTubers opted to show user information in their own screencaps, and, since these videos are public and already have millions of views, I have not altered mine to occlude it.

[7] Colleen’s channel used to be called PsychoSoprano, but she renamed it.

[8] Another “awkward older sister” YouTuber who is part of the same friend group; no relation to Hannah Hart.

[9] Many YouTubers perform live shows. Colleen often tours as Miranda Sings, her comedic alter ego, and sometimes performs some of her Colleen content, including “Reading Mean Comments.” Grace Helbig, Hannah Hart, and Mamrie Hart have also toured together and separately.

[10] In an older video, Jenna impersonated Nicki Minaj and wore blackface. She has addressed the video numerous times in the last nine years and explained that she kept it and a couple of other problematic videos from her early 20s on her channel so she can show her fans that it’s possible to own up to your mistakes and to learn and grow from them. Responses to the hurtfulness of her past racist—and consequently implicitly violent and hateful—content are absolutely valid; personal attacks that amplify hate and violence against her are not.

Works Cited

Alexander, Julia. “Creators Aren’t Surprised that YouTube Won’t Enforce Its Own Policies Against Harassment.” The Verge, 5 Jun 2019, https://www.theverge.com/2019/6/5/18653598/steven-crowder-carlos-maza-youtube-bullying-harassment-commentary-censorship. Accessed 31 Dec 2019.

AlexisHappySnuggle Raby. Comment on “Reading Mean Comments.” YouTube, 2016, https://www.youtube.com/watch?v=pbYSiLg-0sg.

Ballard, Tom. “YouTube Video Parodies and the Video Ideograph.” Rocky Mountain Review of Language and Literature, vol. 70, no. 1, 2016, pp.10-22.

Ballinger, Colleen. “Mean comments - an original song.” YouTube, uploaded by Colleen Ballinger, 7 Sep 2016, https://www.youtube.com/watch?v=tKAWkcBsqO0. Accessed 8 Jan 2020.

---. “Reading mean comments.” YouTube, uploaded by Colleen Ballinger, 28 Jul 2015, https://www.youtube.com/watch?v=pbYSiLg-0sg. Accessed 8 Jan 2020.

Burgess, Jean and Joshua Green. YouTube. Polity, 2009.

Butler, Judith. Gender Trouble. Routledge, 1990.

---. Excitable Speech: A Politics of the Performative. Routledge, 1997.

Citron, Danielle K. Hate Crimes in Cyberspace. Harvard UP, 2014.

Clinnin, Kaitlin and Katie Manthey. “How Not to be a Troll: Practicing Rhetorical Technofeminism in Online Comments.” Computers and Composition, vol. 51, 2019, pp. 31-42.

Cloud, Dana. “Foiling the Intellectuals: Gender, Identity Framing, and the Rhetoric of the Kill in Conservative Hate Mail.” Communication, Culture & Critique, vol. 2, 2009, pp. 457-479.

DeLuca, Katherine. “‘Can we block these political thingys? I just want to get f*cking recipes:’ Women, Rhetoric, and Politics on Pinterest.” Kairos, vol. 19 no. 3, 2015, http://kairos.technorhetoric.net/19.3/topoi/deluca/index.html. Accessed 8 Jan 2020.

Dentith, Simon. Parody. Routledge, 2000.

Dietel-McLaughlin, Erin. “Remediating Democracy: Irreverent Composition and the Vernacular Rhetorics of web 2.0.” Computers and Composition Online, 2009, http://cconlinejournal.org/Dietel/index.html. Accessed 28 Oct 2018.

Dieterle, Brandy, Dustin Edwards, and Paul “Dan” Martin. “Confronting Digital Aggression with an Ethics of Circulation.” Digital Ethics: Rhetoric and Responsibility in Online Aggression, edited by Jessica Reyman and Erika M. Sparby, Routledge, 2019, pp. 197-213.

Dowling, Amber. “Jenna Marbles Apologizes for Past Racist Videos and quits YouTube.” The Loop, 26 Jun 2020, https://www.theloop.ca/jenna-marbles-apologizes-for-past-racist-videos-and-quits-youtube/. Accessed 11 Sept 2020.

Dubisar, Abby and Jason Palmeri. “Palin/Pathos/Peter Griffin: Political Video Remix and Composition Pedagogy.” Computers and Composition, vol. 27, no. 2, 2010, pp. 77-93.

Dubisar, Amy, Claire Lattimer, Rahemma Mayfield, Makayla McGrew, Joanne Myers, Bethany Russell, and Jessica Thomas. “Haul, Parody, Remix: Mobilizing Feminist Rhetorical Criticism With Video.” Computers and Composition, vol. 44, 2017, pp. 52-66.

Duggan, Maeve. “Online Harassment 2017.” Pew Research Center, 11 Jul 2017, https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/. Accessed 31 Dec 2019.

Edwards, Dustin. “Framing Remix Rhetorically: Toward A Typology of Transformative Work.” Computers and Composition, vol. 39, 2016, pp. 41-54.

Ellis, Justin. “What Happened After 7 News Sites Got Rid of Reader Comments.” NiemanLab, 16 Sep 2015, https://www.niemanlab.org/2015/09/what-happened-after-7-news-sites-got-rid-of-reader-comments/. Accessed 31 Dec 2019.

Eordogh, Fruzsina. “YouTube’s Related Video Algorithm Helpful to Predators.” Forbes, 18 Feb 2019. https://www.forbes.com/sites/fruzsinaeordogh/2019/02/18/youtubes-related-video-algorithm-helpful-to-predators/#51a365ca4872. Accessed 31 Dec 2019.

Extinguisher. “YouTube Comments.” Urban Dictionary, 1 Feb 2014, https://www.urbandictionary.com/define.php?term=Youtube%20comments. Accessed 31 Dec 2019.

Forest of Shadows. Comment on “Reading Mean Comments.” YouTube, 2017, https://www.youtube.com/watch?v=RVB0v963fkE. Accessed, 28 Oct 2018.

Framke, Caroline. “The Ascendancy of the ‘Awkward Older Sister.’” The Atlantic, 11 May 2015, https://www.theatlantic.com/entertainment/archive/2015/05/amy-schumer-grace-helbig-awkward-older-sister/392006/. Accessed 28 Oct 2018.

Gallagher, John R. “The Economy of Online Comments: Attention as Economic Motivation in Digital Public Spheres.” Humans at Work in the Digital Age: Histories of Digital Textual Labor, edited by Shawna Ross and Andrew Pilsch, Routledge, 2019, pp. 172-184.

Gardiner, Becky, Mahana Mansfield, Ian Anderson, Josh Holder, Daan Louter, and Monica Ulmanu. “The Dark Side of Guardian Comments.” The Guardian, 12 Apr 2016, https://www.theguardian.com/technology/2016/apr/12/the-dark-side-of-guardian-comments. Accessed 5 Mar 2020.

Gruwell, Leigh. “Writing Against Harassment: Public Writing Pedagogy and Online Hate.” Composition Forum, vol. 36, 2017, http://compositionforum.com/issue/36/against-harassment.php, accessed 4 Mar 2020.

Gurak, Laura. Persuasion and Privacy in Cyberspace. Yale UP, 1997.

Hart, Hannah. “Reading mean tweets! #MakeItHappy ft. Jenna Marbles, Colleen Ballinger, Lilly Singh, & Mamrie Hart!” YouTube, uploaded by MyHarto, 29 Jan 2015, https://www.youtube.com/watch?v=_sCzo0M03qs. Accessed 8 Jan 2020.

“Harassment and Cyberbullying Policy.” YouTube, 2019, https://support.google.com/youtube/answer/2802268?hl=en. Accessed 31 Oct 2019.

“Hate Speech Law and Legal Definition.” US Legal, 2016, https://definitions.uslegal.com/h/hate-speech/. Accessed 28 Oct 2018.

Helbig, Grace. “Pete Holmes Compliments the Sh*t Out of You.” YouTube, uploaded by DailyYou, 22 Oct 2013, https://www.youtube.com/watch?v=T2DbTjlbkU4. Accessed 8 Jan 2020.

---. “Reading mean comments w/ James Corden.” YouTube, uploaded by Grace Helbig, 30 Mar 2015, https://www.youtube.com/watch?v=jlRiYqwxTlQ. Accessed 8 Jan 2020.

Hutcheon, Linda. The Politics of Postmodernism. Routledge, 1989.

Jane, Emma A. “‘Back to the kitchen, cunt’: Speaking the Unspeakable about Online Misogyny.” Continuum: Journal of Media & Cultural Studies, vol. 28, no. 4, 2014, pp. 558-570.

Jensen, Elizabeth. “NPR Website to Get Rid of Comments.” NPR, 17 Aug 2016, https://www.npr.org/sections/publiceditor/2016/08/17/489516952/npr-website-to-get-rid-of-comments. Accessed 31 Dec 2019.

Konnikova, Maria. “The Psychology of Online Comments.” New Yorker, 23 Oct 2013, https://www.newyorker.com/tech/annals-of-technology/the-psychology-of-online-comments. Accessed 5 Mar 2020.

Lange, Patricia G. “Commenting on Comments: Investigating Responses to Antagonism on YouTube.” Society for Applied Anthropology Conference, 31 Mar 2007, Tampa, FL, pp. 1-26, https://www.researchgate.net/publication/228615792_Commenting_on_Comments_Investigating_Responses_to_Antagonism_on_YouTube. Accessed 28 Oct 2018.

Marron, Dylan. “Empathy is Not Endorsement.” YouTube, uploaded by TED, 18 May 2018, https://www.youtube.com/watch?v=waVUm5bhLbg. Accessed 7 Jan 2020.

Marron, Dylan, host. “Hurt People Hurt People [transcript].” Conversations with People Who Hate Me, episode 2, 6 Aug 2017, https://www.dylanmarron.com/podcast/episode-guide/episode-2

@gaywonk (Carlos Maza). “Last year, I got doxxed, and it scared the fuck out of me. My phone was bombarded with hundreds of texts at the exact same time. The messages?” [screenshot of phone numbers saying “debate steven crowder”]. Twitter, 30 May 2019, 8:06 p.m.

---. “This isn't about ‘silencing conservatives.’ I don't give a flying fuck if conservatives on YouTube disagree with me. But by refusing to enforce its anti-harassment policy, YouTube is helping incredibly powerful cyberbullies organize and target people they disagree with.” Twitter, 30 May 2019, 8:16 p.m.

---. “YouTube does not give a fuck about queer creators. YouTube does not give a fuck about marginalized creators. YouTube does not give a fuck about diversity or inclusion. YouTube wants clicks. YouTube wants clicks. YouTube wants clicks.” Twitter, 30 May 2019, 8:25 p.m.

---. “TL;DR: YouTube loves to manage PR crises by rolling out vague content policies they don't actually enforce. These policies only work if YouTube is willing to take down its most popular rule-breakers. And there's no reason, so far, to believe that it is.” Twitter, 11 Dec 2019, 9:41 a.m.

Miekel. Comment on “Reading mean comments.” YouTube, 2017, https://www.youtube.com/watch?v=RVB0v963fkE.

Milner, Ryan M. “Hacking the Social: Internet Memes, Identity Antagonism, and the Logic of Lulz.” The Fibreculture Journal, vol. 22, pp. 62-92, 2013,  http://twentytwo.fibreculturejournal.org/fcj-156-hacking-the-social-internet-memes-identity-antagonism-and-the-logic-of-lulz/. Accessed 28 Oct 2018.

Mourey, Jenna. “Reading mean comments.” YouTube, uploaded by JennaMarbles, 15 Jan 2015, https://www.youtube.com/watch?v=RVB0v963fkE. Accessed 8 Jan 2020.

Penney, Joel. “Responding to Offending Images in the Digital Age: Censorious and Satirical Discourses in LGBT Media Activism.” Communication, Culture & Critique, vol. 8, 2015, pp. 217-234.

Peterson, Anne Helen. “Why Teens Love YouTube’s Grace Helbig.” Buzzfeed, 9 Feb 2015, https://www.buzzfeed.com/annehelenpetersen/why-teens-love-grace-helbig. Accessed 28 Oct 2018.

Phillips, Whitney. This Is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. MIT P, 2015.

Poland, Bailey. Haters: Harassment, Abuse, and Violence Online. Potomac Books, 2016.

“Press.” YouTube, 2018, https://www.youtube.com/yt/about/press/. Accessed 28 Oct 2018.

“Reading mean comments – Colleen Ballinger (live in Vancouver).” YouTube, uploaded by 138riley138, 20 Feb 2016, https://www.youtube.com/watch?v=6T07r6iNZLM. Accessed on 28 Oct 2018.

Reagle, Joseph M., Jr. Reading the Comments: Likers, Haters, and Manipulators at the Bottom of the Web. MIT Press, 2015.

Reyman, Jessica and Erika M. Sparby. “Introduction: Toward an Ethic of Responsibility in Digital Aggression.” Digital Ethics: Rhetoric and Responsibility in Online Aggression, edited by Jessica Reyman and Erika M. Sparby, Routledge, 2019, pp. 1-14.

Reyman, Jessica and Erika M. Sparby, eds. Digital Ethics: Rhetoric and Responsibility in Online Aggression. Routledge, 2019.

Searles, Kathleen, Sophie Spencer, and Adaobi Duru. “Don’t Read the Comments: The Effects of Abusive Comments on Perceptions of Women Authors’ Credibility.” Information, Communication, & Society, 2018, pp. 1-16.

Selfe, Cynthia L. and Richard Selfe. “Politics of the Interface: Power and Its Exercise in Electronic Contact Zones.” College Composition and Communication, vol. 45, no. 4, 1994, pp. 480-504.

Singh, Lilly. “What YouTube comments really mean.” YouTube, uploaded by ||Superwoman||, 19 Sep 2015, https://www.youtube.com/watch?v=05MzCDKoQ3w. Accessed 8 Jan 2020.

Sparby, Erika M. “Digital Social Media and Aggression: Memetic Rhetoric in 4chan’s Collective Identity.” Computers and Composition, vol. 45, 2017, pp. 85-97.

Strapagiel, Lauren. “LGBT Creators Say YouTube Doesn’t Actually Value Queer and Trans Creators.” Buzzfeed, 5 Jun 2019, https://www.buzzfeednews.com/article/laurenstrapagiel/lgbt-creators-youtube-harassment-carlos-maza. Accessed 31 Dec 2019.

Wakabayashi, Daisuke. “YouTube Takes Tougher Stance on Harassment.” The New York Times, 11 Dec 2019, https://www.nytimes.com/2019/12/11/technology/youtube-harassment-policy.html. Accessed 31 Dec 2019.

Warnick, Barbara and David S. Heineman. Rhetoric Online: The Politics of New Media. Peter Lang, 2012.

Warren, Jonathan, Sharon Stoerger, and Ken Kelley. “Longitudinal Gender and Age Bias in a Prominent Amateur New Media Community.” New Media & Society, vol. 14, no. 1, 2012, pp. 7-27.

Woolf, Nick. “Leslie Jones Bombarded with Racist Tweets After Ghostbusters Opens.” The Guardian, 18 Jul 2016, https://www.theguardian.com/culture/2016/jul/18/leslie-jones-racist-tweets-ghostbusters. Accessed 7 Dec 2020.

Wotanis, Lindsey and Laurie McMillan. “Performing Gender on YouTube: How Jenna Marbles Negotiates a Hostile Online Environment.” Feminist Media Studies, vol. 14, no. 6, 2014, pp. 912-28.

“Welcome to the Creators Hub.” YouTube, 2014, http://www.youtubecreatorshub.com/. Accessed 7 Jan 2020.

“YouTube Creators.” YouTube, 2019, https://www.youtube.com/user/creatoracademy. Accessed 7 Jan 2020.

Zialcita, Paolo. “YouTube Announces New Anti-Harassment Policy To Fight Racial, Gender, LGBTQ Abuse.” NPR, 11 Dec 2019, https://www.npr.org/2019/12/11/787165948/youtube-announces-new-anti-harassment-policy-to-fight-racial-gender-lgbtq-abuse. Accessed 31 Dec 2019.