
Zoom Aid or Zoom Raid?: An Interview with Stephen J. Neville

August 14, 2020
Klara du Plessis

The transition wasn’t gradual; it happened overnight. In March 2020, due to the global COVID-19 pandemic, university classes and seminars, research meetings, supervisory chats, scholarly conferences, and more all moved online. Suddenly the screen became the office, the boardroom, and the classroom. Suddenly the screen became the sole mediator of collective thinking, discussing, and sharing of information. While some might argue for the efficacy of this collective move online, Stephen J. Neville—who would have presented at the SpokenWeb Listening, Sound, Agency symposium this July—published early research on the flip side of online videoconferencing, discovering the racist and misogynist underpinnings of many so-called zoom-bombing attacks (co-authored with Prof. Greg Elmer and Anthony Glyn Burton). Here I ask him a few further questions about his latest, highly relevant research.

The 2020 SpokenWeb symposium, Listening, Sound, Agency, has been postponed due to COVID-19, and your research has been transformed by the associated phenomenon of zoom-bombing. Can you talk a bit more technically about the DH tools you’ve applied to studying zoom-bombing?

We looked at the zoom-bombing phenomenon on three separate platforms (Twitter, Reddit, and YouTube), and the methods differ slightly for each. We began by creating a keyword list to drive our data collection protocol: “Zoom attack,” “Zoom bombed,” “Zoom bomber,” “Zoom raider,” “Zoom raid,” “Zoom raids.”

On Twitter, we used the Twitter Capture and Analysis Toolkit (Borra and Rieder, 2014) to capture copies and metadata of all tweets containing the keyword strings. We collected 42,104 tweets, then isolated a random sample of 1,000 tweets for qualitative coding.

On Reddit, we manually compiled a list of zoom-bombing subreddits from mentions in news media and links in our Twitter dataset. We then used the Pushshift API (Baumgartner, 2017) to collect all posts and metadata from these subreddits. This retrieved 11,944 posts, including some from subreddits that had been banned by moderators amid the controversy around zoom-bombing. From here we isolated a sample of the 300 “top ranked” posts for qualitative coding.

On YouTube, our data collection began with the Video List Module provided through the YouTube Data Tools (Rieder, 2015). We created a video list by running a search query using our keywords as well as “online school trolling,” which we found was a common tag on YouTube videos of zoom-bombing content. Using the Video Info Module (Rieder, 2015) with our video list in hand, we isolated a sample of 60 videos, including all content with over 10,000 views. From there we conducted a qualitative content analysis, using video metrics (view count, like/dislike count, comment count) to inform our analysis and data visualizations.
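The filter-then-sample step described above (match posts against the keyword list, then draw a random subset for qualitative coding) can be sketched in a few lines of Python. This is only an illustration under assumptions: the actual study used TCAT, the Pushshift API, and YouTube Data Tools for collection, and the function names, the tiny stand-in corpus, and the fixed random seed here are all hypothetical.

```python
import random

# Keyword strings from the study's data-collection protocol.
KEYWORDS = [
    "zoom attack", "zoom bombed", "zoom bomber",
    "zoom raider", "zoom raid", "zoom raids",
]

def matches_keywords(text):
    """Return True if the text contains any query keyword (case-insensitive)."""
    lowered = text.lower()
    return any(kw in lowered for kw in KEYWORDS)

def sample_for_coding(posts, n, seed=42):
    """Filter posts by keyword, then draw a reproducible random sample
    of at most n items for qualitative coding."""
    hits = [p for p in posts if matches_keywords(p["text"])]
    rng = random.Random(seed)  # fixed seed so the sample can be re-drawn
    return rng.sample(hits, min(n, len(hits)))

# Hypothetical stand-in corpus (the real Twitter capture held 42,104 tweets).
corpus = [
    {"id": 1, "text": "Our class got Zoom bombed this morning"},
    {"id": 2, "text": "Remote teaching tips for the fall"},
    {"id": 3, "text": "Another Zoom raid reported on campus"},
]
sample = sample_for_coding(corpus, n=2)
```

Seeding the sampler is a small design choice worth noting: it makes the coding sample reproducible, so a second coder can regenerate exactly the same subset from the full capture.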

Zoom felt like such an immediate corrective to the global isolation caused by COVID-19, but your research really discovers the negative subtexts of “bigoted bombing.” Can you see your research on Zoom developing in further directions?

Absolutely, Zoom has served as a corrective to feelings of isolation and to the logistical problems of remote work and learning during the pandemic. The trouble is that Zoom was so rapidly adopted by such an expansive userbase when the platform hadn’t been properly tested for security flaws. As a result, overnight Zoom became a target for trolls and pranksters who really didn’t require much technical expertise to carry out an attack, but simply needed Zoom meeting credentials (i.e., an ID code and sometimes a password). This information is often readily accessible on social media because meeting hosts are trying to promote their events. In other cases, meeting participants, especially high-school-aged students, volunteer access codes on Discord, 4chan, or Reddit to encourage others to “bomb” their own classes. The reality is that as users adapted to Zoom, trolls followed suit by adjusting their tactics and targets to this new online space.

I don’t really see our research on Zoom developing in further directions, but I am generally interested in how new platforms and mechanisms are exploited by malicious actors. This is a recurring theme: for instance, pre-pandemic, users in public spaces reported disturbing or violent images being anonymously sent to their devices via Apple’s AirDrop function. As COVID-19 proceeds, I will certainly watch for any new developments in online pranking and harassment.

What can you say about your experience of listening on Zoom? Or how the technology has mediated your interpersonal interactions?

My experience of listening on Zoom has improved significantly since the early days of social distancing. In March, as I was finishing a graduate seminar online, some people tended to leave their microphones on at all times: the sound of typing, tapping, squeaky chairs, and other annoying background noise had quite the cacophonous effect. Interestingly, these background sounds play a big part in what’s been called “zoom fatigue,” as our ears have great trouble determining what to focus on. However, I’ve noticed that people have learned very quickly and now typically mute themselves during meetings. I’ve also found that Zoom affects my experience of interpersonal interaction by creating an acoustic gap between my embodied voice and the online space of videoconferencing. As a result, I often feel quite detached both sonically and relationally during Zoom conversations. I suspect I’m not alone in this, since the platform doesn’t really support the interpersonal cues (e.g., body language) that allow for the turn-taking, interjections, and exclamations integral to face-to-face conversation.

Could you provide a short audio clip of anything (related to Zoom or not) that you are currently listening to and describe why you chose it?

https://www.youtube.com/watch?v=lK_HijA5StY

[Watch clip from 5:35-5:57] The short clip I’ve provided is from a zoom-bomb video and features the highly aggravating practice of “ear rape,” whereby perpetrators use abrasive sounds (e.g. screams, heavy breathing, loud music, etc.) to assault the ears of meeting participants. Ear rape often includes spamming obscene language to cause disruption or annoyance by overwhelming the audio channel. When applied to Zoom, ear rape weaponizes the minor annoyances associated with “zoom fatigue.” It gives the listener an idea of what kinds of disturbance Zoom hosts face when being zoom-bombed.

 

Stephen J. Neville is a SSHRC-funded PhD student in the joint program in Communication and Culture at York and Ryerson Universities. His master’s research on the privacy and surveillance issues of smart speaker technology was awarded the 2019 Beaverbrook Prize by the Canadian Communication Association and will be published in forthcoming issues of Surveillance & Society and Convergence. He uses qualitative and digital methods to examine how social imaginaries are shaped through new media and online platforms. His dissertation research explores smart voice assistants (e.g., Alexa, Siri) through a genealogy of sonic technology and other assistive platforms.

This article is published as part of the Listening, Sound, Agency Forum which presents profiles, interviews, and other materials featuring the research and interests of future participants in the 2021 SpokenWeb symposium. This series of articles provides a space for dialogical and multimedia exchange on topics from the fields of literature and sound studies, and serves as a prelude to the live conference.

Klara du Plessis

Winner of the 2019 Pat Lowther Memorial Award and shortlisted for the Gerald Lampert Memorial Award, Klara du Plessis’ debut collection, Ekke, was released by Palimpsest Press. Her second book, Hell Light Flesh, is forthcoming in Fall 2020. Klara is a FRQSC-funded PhD student at Concordia University focusing on poetry reading curation in the contemporary CanLit scene, and a researcher for SpokenWeb.