And then, the trolls came.
A torrent of emails, messages, rumbles, squeaks, snapgrams, televars came at us. Mom and I were called clickwhores, paid actresses, grief profiteers. Strangers sent us long, rambling walls of text explaining all the ways Dad was inadequate and unmanly.
Hayley didn’t die, strangers informed us. She was actually living in Sanya, China, off of the millions the UN and their collaborators in the US government had paid her to pretend to die. Her boyfriend—who had also “obviously not died” in the shooting—was ethnically Chinese, and that was proof of the connection.
Hayley’s video was picked apart for evidence of tampering and digital manipulation. Anonymous classmates were quoted to paint her as a habitual liar, a cheat, a drama queen.
Snippets of the video, intercut with “debunking” segments, began to go viral. Some used software to make Hayley spew messages of hate in new clips, quoting Hitler and Stalin as she giggled and waved at the camera.
I deleted my accounts and stayed home, unable to summon the strength to get out of bed. My parents left me to myself; they had their own battles to fight.
Decades into the digital age, the art of trolling has evolved to fill every niche, pushing the boundaries of technology and decency alike.
From afar, I watched the trolls swarm around my brother’s family with uncoordinated precision, with aimless malice, with malevolent glee.
Conspiracy theories blended with deep fakes, and then yielded to memes that turned compassion inside out, abstracted pain into lulz.
“Mommy, the beach in hell is so warm!”
“I love these new holes in me!”
Searches for Hayley’s name began to trend on porn sites. The content producers, many of them AI-driven bot farms, responded with procedurally generated films and VR immersions featuring my niece. The algorithms took publicly available footage of Hayley and wove her face, body, and voice seamlessly into fetish videos.
The news media reported on the development in outrage, perhaps even sincerely. The coverage spurred more searches, which generated more content…
As a researcher, it’s my duty and habit to observe and study phenomena with clinical detachment, perhaps even fascination. It’s simplistic to view trolls as politically motivated; they aren’t, at least not in the sense that term is usually understood. Though Second Amendment absolutists helped spread the memes, the originators often had little conviction in any political cause. Anarchic sites such as 8taku and duangduang, along with the alt-web boards that arose in the wake of the deplatforming wars of the previous decade, are home to these dung beetles of the internet, the id of our collective online unconscious. Taking pleasure in taboo-breaking and transgression, the trolls have no unifying interest other than saying the unspeakable, mocking the sincere, playing with whatever others have declared off-limits. By wallowing in the outrageous and the filthy, they both defile and define the technologically mediated bonds of society.
But as a human being, I found what they were doing with Hayley’s image intolerable. I reached out to my estranged brother and his family.
“Let me help.”
Though machine learning has given us the ability to predict with a fair amount of accuracy which victims will be targeted—trolls are not quite as unpredictable as they’d like you to think—my employer and other major social media platforms are keenly aware that they must walk a delicate line between policing user-generated content and chilling “engagement,” the one metric that drives the stock price and thus governs all decisions. Aggressive moderation, especially when it’s reliant on user reporting and human judgment, is a process easily gamed by all sides, and every company has suffered accusations of censorship. In the end, they threw up their hands and tossed out their byzantine enforcement policy manuals. They have neither the skills nor the interest to become arbiters of truth and decency for society as a whole. How could they be expected to solve the problem that even the organs of democracy couldn’t?
Over time, most companies converged on one solution. Rather than focusing on judging the behavior of speakers, they’ve devoted resources to letting listeners shield themselves. Algorithmically separating legitimate (though impassioned) political speech from coordinated harassment for everyone at once is an intractable problem—content celebrated by some as speaking truth to power is often condemned by others as beyond the pale. It’s much easier to build and train individually tuned neural networks to screen out the content a particular user does not wish to see.
The new defensive neural networks—marketed as “armor”—observe each user’s emotional state in response to their content stream. Capable of operating across every vector, from text and audio to video and AR/VR, the armor teaches itself to recognize content especially upsetting to the user and screen it out, leaving only a tranquil void. As mixed reality and immersion have become more commonplace, the best way to wear armor is through augmented-reality glasses that filter all sources of visual stimuli. Trolling, like the viruses and worms of old, is a technical problem, and now we have a technical solution.
To invoke the most powerful and personalized protection, one has to pay. Social media companies, which also train the armor, argue that this solution gets them out of the content-policing business, excuses them from having to decide what is unacceptable in virtual town squares, frees everyone from the specter of Big Brother–style censorship. That this pro-free-speech ethos happens to align with more profit is no doubt a mere afterthought.
I sent my brother and his family the best, most advanced armor that money could buy.
Imagine yourself in my position. Your daughter’s body had been digitally pressed into hard-core pornography, her voice made to repeat words of hate, her visage mutilated with unspeakable violence. And it happened because of you, because of your inability to imagine the depravity of the human heart. Could you have stopped? Could you have stayed away?
The armor kept the horrors at bay as I continued to post and share, to raise my voice against a tide of lies.
The idea that Hayley hadn’t died but was an actress in an anti-gun government conspiracy was so absurd that it didn’t seem to deserve a response. Yet, as my armor began to filter out headlines, leaving blank spaces on news sites and in multicast streams, I realized that the lies had somehow become a real controversy. Actual journalists began to demand that I produce receipts for how I had spent the crowdfunded money—we hadn’t received a cent! The world had lost its mind.
I released the photographs of Hayley’s corpse. Surely there was still some shred of decency left in this world, I thought. Surely no one could speak against the evidence of their eyes?
It got worse.
For the faceless hordes of the internet, it became a game to see who could get something past my armor, to stab me in the eye with a poisoned video clip that would make me shudder and recoil.
Bots sent me messages in the guise of other parents who had lost their children in mass shootings, and sprang hateful videos on me after I whitelisted them. They sent me tribute slideshows dedicated to the memory of Hayley, which morphed into violent porn once the armor allowed them through. They pooled funds to hire errand gofers and rent delivery drones to deposit fiducial markers near my home, surrounding me with augmented-reality ghosts of Hayley writhing, giggling, moaning, screaming, cursing, mocking.
Worst of all, they animated images of Hayley’s bloody corpse to the accompaniment of jaunty soundtracks. Her death trended as a joke, like the “Hamster Dance” of my youth.