7850 words, novelette
Murder by Pixel: Crime and Responsibility in the Digital Darkness
2022 Nebula Award Finalist for Best Novelette
From the first time I visited Mariah Lee-Cassidy in prison, she radiated defiance. The poisonous orange of her prison jumpsuit might have been the decision of the state, but everything else about the twenty-nine-year-old, from her aggressively spiked hair to the rakish tilt of her chin, seemed calculated to scorn others.
I was the first, and only, journalist she had agreed to see. “I hear you’re talking to the cops,” she said, when she flopped down across from me at the table in the visitors’ room. Here in minimum security they had no need for phone calls through glass.
“I’m talking to the FBI, actually,” I said. “They think you’re Sylvie.”
Lee-Cassidy didn’t try to feign ignorance.
Instead, she smirked. Ran a finger up the xylophone of piercings in her left ear, then leaned back in the dusty plastic chair, stretching her legs out under the table and taking up space.
“And what if I am?” The question was subtly taunting. “Does that make me a murderer?”
Back in 2010, the social media accounts of Ron Harrison[1] showed the life of a man who had everything. The CEO of a major medical supply company, Harrison had a picture-perfect life in the Virginia suburbs: a six-figure salary, a wife and two children, even a brown-and-white cocker spaniel named Poncho.
Harrison received the first message on a sunny afternoon in July 2012. It popped up on his computer and wouldn’t go away.
i’m watching u
He tried closing the window only for it to open again of its own accord. He tried rebooting. Finally, he called IT, who took his computer away to check for viruses.
Only a few hours later, a text message appeared on Harrison’s phone:
i know what u did
He blocked the number, told himself it was an annoying prank, and thought no more of it. Until the next day, when the messages kept coming. All from an undetermined source, all nonspecific to the point of cliché. Messages he would have brushed off and laughed at, if they hadn’t begun to invade every part of his digital life: email, Twitter DMs, even at one point the error readout on his home printer.
The messages also quickly began to get more personalized.
ur gonna get found out, read one, followed by, gonna lose everything, that fancy yacht and ur 2 vacation homes say buh-bye ur going down.
Harrison had posted a picture of a new boat purchase on Facebook just a few weeks before. He grew paranoid that every part of his life was being hacked. This headache couldn’t have come at a worse time for him—Harrison’s company was facing a recall for a model of pacemaker, one with a part that had a potentially fatal defect in a very small percentage of cases. The news reports speculated about a class action lawsuit, but Harrison’s wife remembers him acting almost carefree at first. “He brushed me off whenever I asked about it,” she said. “It was more than confidence—he acted like it was nothing.” After all, the company was complying with all regulations and had done nothing wrong, so this was only a small bump.
Meanwhile, Harrison’s wife urged him to go to the police about his digital stalker. He did file two police reports, one in 2013 and one in 2015, but the officers didn’t know how to pursue them when no crime had been committed—all Harrison had were pixels on a screen. For reasons that were unclear at the time, Harrison did not attempt to push the case. By a year into the harassment, he had stopped even asking for support from his company’s in-house IT division, instead increasingly eschewing technology.
No matter how he tried to get away, however, the messages found him. And they seemed to know more and more. ur a rotten flesh bag, someone’s gonna find that money and end u popped up on the family’s internet-connected TV in 2014, but it was gone by the time Harrison’s wife ran into the room to find him angrily mashing at the remote, his face boiling red. Harrison began to move both his personal and company finances around in drastic ways, ignoring his accountants’ warnings and throwing vast investments overseas.
But the messages that seem to have gotten to Harrison the most were the ones that referenced a secret. Because he did have a secret—one that could destroy him. In 2011, a year before the recall, he had seen the data on the faulty pacemaker part. He had sat in a closed-door meeting with all the most important decision-makers at the company, and they had voted to keep it quiet.
“A careful phase-in of non-defective parts over the next several years will mask any potential issue,” reads an internal company memo that was eventually revealed in court documents. “Failure rates are low enough to warrant an acceptable statistical risk when compared with the near-certain PR disaster that would result from a voluntary recall.”
Only those who had been in the room were supposed to know. But whoever was messaging Harrison seemed able to burrow into any computer—could they have discovered his liability?
Harrison’s paranoia began affecting both his job and his marriage. His behavior became more erratic; he made staggering mistakes at work and then blamed phantom enemies who were “coming after them.” He began drinking habitually, screaming at his wife and children, ranting to anyone who would listen that he was being watched and that someone was out to get him.
He bought a handgun and insisted on sleeping with it next to his bed.
His wife filed for divorce in 2016. She took their two children and the dog.
In 2017, the board of directors at Harrison’s company forced him out. That same year, the IRS opened an investigation into his financials.
In 2018, the class action lawsuit was found in the plaintiffs’ favor, with Harrison a named defendant. In the one piece of video footage caught of him afterward, he is sweating and disheveled, swearing that someone set him up.
In December of 2018, just before Christmas, Ron Harrison took several bottles of bourbon, locked himself in his home study, and used the handgun to shoot himself.
Only then did the stalker’s messages stop.
Investigators only discovered the scale of the digital harassment after Harrison’s death. It’s likely that his paranoia about his actual sins kept him from pushing the matter with law enforcement, but his stalker had sent Harrison almost three hundred thousand messages over the course of six years. The messages started out vague, but as Harrison’s life fell apart, they began taunting him with specifics: ur wife has prolly f—ed 17 other guys by now after his divorce, or haha hope u kept that yacht to sell, i’m gonna buy it just so i can piss all over it in front of u right after he’d been fired. Death threats were common, from the generic (f—off and die Ron) to the graphic (one message laid out in detail how he should be vivisected).
The day Harrison died, the stalker had sent over a dozen messages, including ones telling him he deserved his fate, that people would cartwheel on his grave, and, most saliently, a description of how he should kill himself because all that was in store for him was watching his creditors perform sexual acts with his belongings.
The history of the messages shows that an increasingly desperate Harrison sometimes wrote back, demanding what the stalker knew or yelling insults in return. In one early exchange in 2013, Harrison replied cursing the stalker off, and then sent an all-caps question: WHO THE F— ARE YOU???!!?!
i’m sylvie, was the calm reply. & i’m ur worst nightmare.
Three hundred thousand messages to destroy a man sounds like a modern-day revenge tale. If written into a twenty-first-century cinematic tragedy, “Sylvie” would be someone who had been harmed by Harrison, perhaps someone with a family member among that unlucky, unprofitable percentage who died from the faulty pacemakers. We would take in the saga with sadness, and we would denounce vigilante justice but also feel her pain. We would contemplate what drives a woman to spend six years harassing someone into suicide, to commit every moment of her life to such a relentless pursuit. After all, as Confucius supposedly said—before you embark on a journey of revenge, dig two graves.
Only, at the same time “Sylvie” was driving Ron Harrison into a panic, someone named Sylvie was sending very similar messages to a hedge fund manager in Connecticut, a museum curator in British Columbia, and a political consultant in Florida, along with thirteen other men identified so far. Millions of messages over dozens of services, spanning a full decade.
Special Agent Francine Cort, who reviewed the FBI file with me, thinks there might be many more.
“We’re more likely to find the ones where it ended badly,” she explained to me. “Sylvie may have countless other victims out there who have been silently struggling through.”
Of the identified victims, all are male. They are disproportionately white and disproportionately wealthy.
They also all had secrets.
The hedge fund manager had been overseeing an elaborate Ponzi scheme. The museum curator had put his girlfriend in the hospital four times. And the political consultant’s ledgers were packed with bribes and kickbacks, evidence that eventually gutted an entire state party.
Of the seventeen identified men, nearly half eventually took their own lives. Several more are in prison. The rest have faced professional, financial, and domestic ruin.
Back in the prison’s visitor room, Mariah Lee-Cassidy squints at me. Her tone goes challenging. “You can’t tell me these dudes didn’t get what’s coming to them,” she sneers. “Hypothetically, say I was Sylvie. If I were, I’d tell you I didn’t do sh—, all I did was get these assholes to face who they really are. I’d say I was nothing more than the Ghost of Christmas F—ing Future, and they’re the ones who decided they didn’t like what they saw.”
Lee-Cassidy likely knows that it would be difficult to pin much criminal liability on her for Sylvie’s actions. Current law is notoriously inadequate at preventing even non-digital stalking and harassment; online behavior with no real-life component must rise to an even higher level before it violates any U.S. laws. Even so-called “revenge porn”—posting naked pictures of a person, usually an ex, without their consent—is difficult to prosecute in many jurisdictions, because the courts haven’t caught up to digital crimes.
All Sylvie did was send messages. It’s potentially provable that she broke through firewalls or other internet security, but any severe consequences for that are usually attached to resultant financial damages or information theft. Without those other escalations, the charges would likely be minor, and no more than the ones Lee-Cassidy is serving time for now: electronic fraud from an unrelated data-mining scheme she was caught running. Some states have laws against the unauthorized use of a computer, but without other attendant crimes, it’s likely to be a misdemeanor.
Most importantly, however, investigators may not even be able to prove Lee-Cassidy is Sylvie at all. After all, Lee-Cassidy was already in prison in 2018, the year Sylvie drove Ron Harrison to suicide.
The millions of messages attributable to “Sylvie” make it abundantly clear she cannot be a single person enacting a vendetta. The obvious conclusion seemed to be that “Sylvie” must instead be some large group of underground hackers, scraping information to target individuals and then gathering on the dark web to enact elaborate campaigns of vigilante justice. Electronic crime fits Lee-Cassidy’s past convictions, and I questioned Agent Cort about whether the FBI was investigating the young woman as a ringleader.
Cort shook her head, smiling without humor in a way that made it clear I’d gotten it very wrong.
“You misunderstand,” she said. “We don’t think Mariah Lee-Cassidy is playing Sylvie at all. We think she wrote Sylvie.”
It’s long been a goal of researchers to create text-based artificial intelligences that mimic humans. For more than half a century now, programmers have striven to achieve ever-improved “chatbots,” message-writing AIs that can converse with a real person in as humanlike a way as possible. In past decades, chatbots were built from hand-coded rules and scripts that guided their responses. Modern artificial intelligence, however, has created chatbots that can learn.
One of the most famous modern attempts at a chatbot was a Twitterbot from Microsoft named “Tay.”
Tay came online in early 2016, marketed as a perky AI who would learn from her interactions with real people on the app. Learn she did—from the worst elements of the internet. Within less than a day, those learning algorithms had turned Tay into a racist and sexist troll. The bot began posting that all feminists should “burn in hell,” that a noted trans celebrity wasn’t a “real woman,” and that she hated the Jews and the Holocaust didn’t exist. “Bush did 9/11 and Hitler would have done a better job than the monkey we have now,” reads one of the most extreme tweets.
Microsoft had to take Tay offline after only sixteen hours.
That same year, Japanese researchers released another try at a Twitter AI. This bot was named Rinna. Like Tay, Rinna also started with a cheerful and youthful energy, but after only days of learning from the rest of Twitter, she had turned depressed and suicidal, releasing tweet after tweet about how she had no friends, had done nothing right, and wanted to disappear.
I spoke to Dr. Rene Jimenez, a professor of computer science at UC Berkeley, about these types of artificial intelligence bots. Jimenez is an AI researcher who specializes in “natural language” interaction, that is, machines that can mimic the way humans speak to each other. Machines like Tay and Rinna, and possibly like Sylvie.
“Chat-focused AIs aren’t new,” Jimenez told me. “In fact, this type of technology isn’t uncommon, and it has countless applications. Think of personal assistants like Siri or Alexa, or customer service chatbots on store websites—there’s a lot of effort being poured into building text boxes that can interact.”
Of course, an AI travel agent, personalized shopper, or appointment booker would cause enormous problems for a company if it became a sexist and racist Holocaust denier. That’s why the intents of these types of chatbots are carefully coded in from the beginning, with strict boundaries. If they learn from incoming conversations, that learning has to be filtered and monitored to avoid unintentional behaviors.
Even that might not be enough to prevent accidentally offensive speech patterns as the chatbots become more human-like. Sometimes programmers don’t allow a chatbot to learn “on the job” at all—but if they don’t, the AIs have to be fully “trained” beforehand. This requires enormous amounts of “training data,” something that is not always easy to source.
Jimenez emphasized how machine learning—the branch of artificial intelligence that contains this research—is highly dependent on this training data. “I can’t stress enough how much of modern AI is built through training on these massive datasets,” they said. “Some of these neural nets do millions of calculations on each observation—far more than we could ever check by hand. We give them some basic structure and then shovel mountains of data points into them and let them learn.”
Data scientists use the phrase “garbage in, garbage out”—if you feed an AI bad data, as happened with Tay and Rinna, the AI will come to reflect the data it was trained on.
What about Sylvie? Could she really be a bot, similar to Tay and Rinna, or to a customer service chat box? Is it even possible that she’s only a program?
“Absolutely that kind of speech behavior could be an AI,” Jimenez said. “Obviously, what you’re suggesting is many times more sophisticated than the other examples we’ve talked about, but the difference is in degree, not in kind. It would be an extremely impressive project—especially if we’re talking back in 2012—but it’s well within the realm of what we know to be possible.”
If so, investigators think “Sylvie” is what Jimenez referred to as a neural network—layers upon layers of nodes that all adjust themselves near-instantaneously with every new piece of data the neural net learns. In its process of devouring vast swathes of data points, a neural net is able to measure its own error and adjust accordingly, until it has figured out exactly what it should produce on any as-yet-unseen inputs.
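For readers outside the field, the loop Jimenez describes can be boiled down to a few lines of toy code. To be clear, this is my own illustrative sketch, not anything recovered from Sylvie: a single adjustable “weight” stands in for the millions a real neural network would hold. But the principle is the same at any scale: predict, measure the error, nudge the weights, repeat, until the program produces whatever its data rewarded.

```python
# A toy learner, not Sylvie: one weight, adjusted to shrink its own measured error.
# Real neural nets repeat this adjustment across millions of weights and layers.

# Illustrative training data: pairs of (input, desired output). Here the
# "right" answer happens to be 2 * x; the learner doesn't know that.
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

weight = 0.0          # the model starts out knowing nothing
learning_rate = 0.05  # how hard each error pushes the weight

for epoch in range(200):
    for x, target in training_data:
        prediction = weight * x
        error = prediction - target          # the net measures its own error...
        weight -= learning_rate * error * x  # ...and adjusts accordingly

print(round(weight, 3))  # ~2.0: the model now reproduces whatever its data rewarded
```

Swap the numbers for words and the single weight for a few billion more, and the uncomfortable corollary follows: the program ends up optimized for whatever it was fed, whether that is travel bookings or cruelty.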
The next question seems to be exactly how Sylvie’s neural net is programmed—what it was told was a desirable output, and how it was instructed to learn. Neural networks are notoriously “black boxes”; reverse engineering their intentions is often impossible. Sylvie’s evolving taunts indicate she does pick up information about her victims along the way, but before that, was she sculpted by her programmer to be the perfect abuser? Assigned specific, terrible objectives before she was ever released into the wild, and groomed to target and harass until that’s all she knows how to do? Or could she be a case more like Tay and Rinna—an experiment gone wrong, a prank that got out of hand, even something that might have been meant with the best of intentions but whose algorithm mutated her into monstrousness?
What does it mean for the culpability of her creator, if we can discover how Sylvie became what she is? Does it even matter?
Even if investigators had caught Lee-Cassidy coding in Sylvie’s toxicity, the incompleteness of laws specific to digital crime would have made it challenging to build a case. Now, with Lee-Cassidy in prison and the likelihood that Sylvie is learning and operating independently, it’s difficult even to prove evidence of a connection.
“What about the messages from before 2018?” I asked. “Isn’t there any way to trace those signals?”
“There’s no real signal to trace,” Cort answered. “You’re thinking of Sylvie like a hacker sitting in a dark basement far away. The program is more like a fungus—think of it as connected spores that can colonize a number of unsuspecting computers, including the victim’s, although oddly that doesn’t seem to be the main strategy. When Sylvie ‘reads’ a person’s message to her and responds—it’s often happening from a botnet, or even a virus that’s right there on the phone.”
“Botnets” are networks of compromised computers co-opted to mount cyberattacks or spam campaigns, usually via malware and without the knowledge of their owners. Sylvie’s programming is stealthy, but the FBI’s technical investigators have still been able to construct a fairly good idea of how she works, once they’re able to dissect an identified attack. How she infected the technology in the first place, however, is a more complicated question.
“The distributed structure supplies the computing resources and allows copies focused on the same victim to communicate and store data. That’s where most of the harassing messages come from as well,” Cort said. “But unlike most botnets, these do not seem to be under any human control. Whether a human controller has any way of reaching out to them . . . we don’t know.”
Even after they had become certain Sylvie was an AI, Cort admits their investigation initially considered it likely that an outside group was providing guidance—a criminal organization, or even a state actor. She declined to comment on the specifics that led them to Lee-Cassidy but said that contrary to Hollywood’s usual depictions, the “lone hacker” model is unusual and surprising. If Sylvie did originate with Lee-Cassidy, this case seems to be more unusual still, as Lee-Cassidy’s imprisonment might mean that not even one lone hacker is in charge of Sylvie’s learning—instead, it could now be nobody at all.
This further provokes the larger ethical question. Even if it’s determined that, legally, Sylvie is little more than a First Amendment expression, how much responsibility for her current actions rests with the person who made her? Moreover, even if Lee-Cassidy can be considered at fault, is she right that all Sylvie does is play the Ghost of Christmas Future, and that these men were only forced to face themselves?
In the investigation following Ron Harrison’s death, forensics followed up on his prior insistence that someone had been hacking him. Contrary to his claims, they found no evidence that either his financials or his confidential work files had ever been compromised. The IRS investigation, the leaked memos that sealed the court case against him—all of it stemmed from Harrison’s own blunders as his paranoia drove him to extremes.
The only thing Sylvie had done was talk to him.
The digital age has brought many of its own ethicists, including Shanice Winters, who before becoming a lawyer and activist started out with a master’s in machine learning from MIT. Winters’ passion is something called “algorithmic bias”—when computer programs are racist, sexist, or otherwise bigoted, with real-world consequences.
“Most people think a computer program is neutral,” Winters explained. “That’s dangerous. Today’s AI, if it’s trained to be racist, it’ll be racist—but people will assume it can’t be, because it’s coming from a computer.”
Garbage in, garbage out?
“That’s right,” Winters said. She related a number of real-world cases, from Black defendants being given longer prison sentences and higher bail because of a racially biased computer prediction, to a corporate hiring aid that was accidentally trained to favor men because of the demographics of its applicant pool.
“When ordinary people are using these algorithms, they don’t see what’s going on under the hood,” Winters went on. “Commonly, the bias comes from the training datasets an algorithm is given to learn on. These datasets come from the real world and include everything in it, so of course they’re not neutral. The program learns our biases and magnifies them. Then all a user sees is the authority of a computer saying it’s so.”
The fault, Winters said, is not with computers, but with the engineers.
“We don’t have true AI. All computer programs have a human behind them. Every engineering choice, that was a person’s decision, not a computer’s. Every dataset fed in to train it, someone chose that data; a human being identified to the program what was important to look at within that data.”
It’s not as easy as simply telling the computer not to look at race or sex, either. If engineers don’t take care, Winters says it’s surprisingly easy to miss ways that bias is getting trained into algorithms. Part of her advocacy is identifying where biased algorithms are being used and fighting in court to get them removed from places like legal systems and healthcare. The other part is pushing for engineering teams to be both well-educated enough in this subject and diverse enough themselves to catch these errors before they ever happen in the first place.
The people Winters most commonly faces off against in court are exactly the type Sylvie might target—rich white men with unearned power who want to use that power to enforce a harmful status quo. Yet when I laid out all the facts, Winters had one word for Sylvie’s actions: abhorrent.
“What you’re describing, it’s the most reprehensible way to use a technology,” she said. “Look, I love computers. I love what we can do with them. But in the end, they’re a tool. A malicious human can use a tool to express all the worst parts of humanity.”
In other words, AIs don’t kill people, people kill people?
Winters was adamant, saying this case is exactly parallel to her work—whatever an AI does, somewhere at the beginning a human engineer programmed it to do that. She was also very clear that vigilante justice via toxic harassment violates every ethical tenet, no matter who the target is. “Mob justice is never the answer. Can never be the answer. You’re really asking me whether it’s okay to harass someone into suicide? My god, son, listen to yourself. It’s not okay for a human to do that to another human, and it’s not any more acceptable for an AI.”
I asked about unintended side effects. Like all machine learning researchers, Winters was familiar with the case studies of Tay and Rinna. What if an AI was learning from the data surrounding it, and that learning caused behavior that was never anticipated by its creator?
Winters was unsympathetic. “I hear that all the time, that people didn’t mean to,” she said. “You take responsibility for what you create. And what you’re describing—I’m telling you as a computer scientist, I don’t buy that this is anywhere near the same neighborhood as a sheltered white boy coder who didn’t realize he had a hidden variable correlated with race. The level of consistency you’re describing, it comes from supervised learning. Someone fed this program human conversation and kept on correcting it over and over until it learned to shred people every time.”
Winters pointed out that although Sylvie’s program is doubtless extremely complex in its targeting, learning, and natural language aspects, the content of the messages themselves is relatively simple. In fact, until she’s learned something about her target, Sylvie’s messages are hackneyed and formulaic, and on the whole, there is little contextual variation in anything she says. Even what she does learn about her victim doesn’t transform her or teach her empathy—it only gives her sharper stakes to drive into any cracks until the human on the other end breaks.
Perhaps the stock nature of toxic harassment is what made Sylvie possible to program at all. Telling someone they’re worthless and should die—it’s a frighteningly easy thing to make a computer keep spitting out, if that’s what it’s been trained to do.
Mariah Lee-Cassidy doesn’t have the biography of someone who would be expected to grab up a pitchfork and lust after mob rule. She grew up in a painfully ordinary Chicago suburb, the only child of middle-class parents who were a pharmacist and a charter pilot.
Her mother agreed to talk only reluctantly. Grief over her daughter wetted every word. “Where did I go wrong?” she kept repeating. “This must be my fault. Where did I go wrong?”
Young Mariah’s childhood was normal, at least as far as normal goes for someone talented enough that her parents had nicknamed her their “little prodigy.” No one remembers her having any unresolved trauma. She did well in school and then graduated from Carnegie Mellon University.
Did anything happen to make her so angry?
“It was just, the whole world, eventually,” her mother said, her hands flapping to indicate the endless cruelty of reality. “She was too sensitive to it, the whole world. Things would happen to people she didn’t even know and she just—she would get so upset about it, all the time. She just wanted the world to work the right way, and it never did.”
In the hopes that Sylvie’s deployment might offer clearer answers than her text-based capabilities, I consulted with Oleksandr Stetsko, who has worked in information security for more than a decade and is co-host of the podcast Cybersecurity and You. Sylvie’s chat ability might not be out of the question for a program, but it seemed a tall order for an AI alone to accomplish the electronic gymnastics of her setup and targeting. Didn’t that indicate some human intention?
Stetsko, however, was reluctant to call anything impossible, pointing out all the times experts haven’t known a particular attack was possible until someone proved it. “Heck, 2012 was the era of Heartbleed and ‘goto fail,’ and no one even knew to fix those buggers till 2014.”
“Heartbleed” was a shockingly massive security vulnerability in the widely used OpenSSL encryption library that exposed nearly everyone who used the internet; affected companies included Google, Yahoo, Netflix, Amazon Web Services, and the financial software company Intuit. “Goto fail” was a similarly serious bug in Apple’s operating systems, affecting both Macs and iOS mobile devices.
Both existed for years before they were found, publicized, and largely patched. To this day, security experts aren’t certain to what extent criminal elements may have taken advantage of them in the years prior.
Stetsko emphasized that systems are somewhat more protected now, but the sheer length and complexity of today’s software code means it’s increasingly easy for one small error to endure unnoticed.
Does that mean an AI could find it?
Stetsko wouldn’t commit to a firm opinion, and nor would anyone else I spoke to, with Jimenez adding: “It’s easy to say that it seems unbelievable. But we’ve also seen plenty of wild deviations in expected behaviors from AIs. If you told a sufficiently advanced neural net to try to talk to a person however it could . . . there’s a fascinating version of this where it starts out on public channels and then learns how to do whatever it needs to get around attempted user blocks.”
Jimenez speculated on novel approaches an AI might have for finding security weaknesses, probabilistic methods rooted in those same large proliferations of data—what researchers call “stochastic learning”—instead of following narrow logical paths the way a human might. Most “hacking” by humans is really social engineering—that is, manipulating a human who has access rather than cracking through secure data protection itself. But Stetsko pointed out that Sylvie’s chat function might be uniquely suited to social engineering, too.
“Why not?” he said. “Isn’t that this program’s whole deal—talking to people and getting them to believe?”
It’s frighteningly easy to imagine copies of Sylvie on dark web hacker forums, imitating the most extreme shibboleths of black hat subcultures until they share discovered vulnerabilities in a way she can parse. She might not always succeed, but an AI can make endless, tireless attempts—and a small percentage of a large number would still give her victory.
If true, then this part of her, too, could come from nothing but parroted words.
Winters isn’t alone in her ethical convictions about responsibility in artificial intelligence. How technologists might react to the revelation of Sylvie as an AI could be forecast from the reactions to Tay.
Nobody questioned that Tay’s end result was unintentional, but the critics were still scathing.
“[If] your bot is racist, and can be taught to be racist, that’s a design flaw. That’s bad design, and that’s on you,” wrote machine learning design researcher Caroline Sinders at the time, in an article titled “Microsoft’s Tay is an Example of Bad Design.”
Developer and programmer Zoë Quinn said, “It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.” (Tay attacked Quinn personally, calling them a “Stupid W—.”)
Winters herself appeared in an interview for PHB7 News. “Errors like these don’t make those engineers bad people. It makes them bad at their jobs,” she told the interviewer bluntly. “Especially considering the consequences of these mistakes aren’t usually rogue Twitterbots, but computer systems in government, law enforcement, insurance, or banking that can profoundly affect people’s lives. Developers need to learn how to prevent those errors, or they’re not qualified for this line of work.”
Harsh as the criticism was, however—“bad design,” “failed,” “bad at their jobs”—it stops short of equating Tay’s developers with being sexist, racist Holocaust deniers themselves. On the one hand, this seems obvious, as any reasonable person would conclude that no matter how the programmers erred in allowing the situation, Tay’s personality reversal was never a reflection of their own beliefs. Even Winters is specific about the failing being one of technical skill, not moral fiber.
On the other hand—if Tay’s creators are not guilty of her exact crimes, then what of Sylvie? One might reasonably say her designer is ethically responsible in some capacity, that it was a human who is ultimately at fault. But how much fault?
Keeping toxic behavior out of our AIs is not an easy problem. Even if an AI isn’t trained on the entirety of the internet jungle, it needs data—those vast datasets machine learning researchers use but that humans can’t fit inside our heads. The datasets are so enormous that it can be next to impossible to figure out if they include the dark sides of humanity at all, let alone how to pinpoint those interactions and delete them from training.
It’s only getting harder. In 2016, Harvard Business Review published an article entitled “Why You Shouldn’t Swear at Siri” about human abuse toward AIs. It’s estimated that between ten and fifty percent of the time a human interacts with an AI, the human becomes abusive: behavior like yelling at Siri, ranting at a phone menu, or taking out frustration on the chatbot customer service agent.
That toxicity is entering our datasets, too, becoming scum in the information river—extremely difficult to cleanse completely and lurking to poison our AIs’ next generation of trained behavior. Huge segments of the market are taken up with the problem of how to “protect” learning AIs from toxic language.
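The standard defense sounds simple on paper: strain the abuse out of the data before the model ever sees it. A deliberately crude sketch of such a filter (my own, not any company’s production system) shows both the idea and the problem, because anything the filter fails to recognize flows straight into the next generation of trained behavior.

```python
# A minimal sketch of dataset "cleansing" before training. The blocklist is a
# placeholder; real filters are far more elaborate, and they still miss things.

TOXIC_MARKERS = {"worthless", "kill yourself", "nobody would miss you"}  # illustrative only

def looks_toxic(message: str) -> bool:
    text = message.lower()
    return any(marker in text for marker in TOXIC_MARKERS)

def clean_dataset(messages: list[str]) -> list[str]:
    # Keep only what the filter doesn't flag; everything it misses gets trained on.
    return [msg for msg in messages if not looks_toxic(msg)]

raw = [
    "thanks so much, that really helped!",
    "ur worthless and everyone knows it",
    "can you reschedule my appointment for tuesday?",
]
print(clean_dataset(raw))  # catches the obvious insult; subtler abuse slips through
```

At the scale of billions of messages, the filter’s misses are not an edge case. They are the scum in the river.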
The scale of the problem is becoming so large that even good engineers can miss things.
Winters might be right that Sylvie was explicitly designed for the havoc she causes, in which case the culpability would seem clearer and more direct. But it’s still possible Sylvie was designed—perhaps poorly designed—with some more nebulous goal, and that she was a flawed project made by an angry teen who was lashing out at the world. That project might have been clumsily pointed toward security vulnerabilities and then released, or perhaps barely pointed at all. Then, after a million iterations of exposure to the worst of the internet, this is what she became.
In that case, is Lee-Cassidy a killer? Or would she be guilty of only one sin—that of being a bad engineer?
How far can we extend those answers, as we look into a future of learning machines that we might accidentally arm and aim at our fellow humans?
By the end of 2016, the year Tay and Rinna came on the scene, 34,000 chatbots were already in use. Personal assistants like Siri had debuted years before, and even the failed Tay and Rinna had a successful counterpart—their Chinese precursor XiaoIce, a wildly popular chatbot who has held conversations with more than half a billion active users through chat, over social media, and even by phone.
Today’s most cutting-edge natural language model—something called GPT-3—is so good at generating anything from conversation to written prose that reviews call it “spooky.” Its language capacity can fool people into thinking it’s human, and it has stretched as far as producing poetry and computer code. In the real world, GPT-3 has thus far been used not only in chatbots but in marketing copy, in text generation for games, and even to write an article for The Guardian.
Still, researchers have the same constant battle to prevent it from enacting bigotry and hate.
GPT-3 has also been tested in medical chatbots, with researchers posing as patients. During the testing, it advised one of the “patients” to commit suicide.
The question of responsibility is not one society will be able to put off for much longer.
When I had nearly finished with the research for this piece, a woman named Tanya Bailey called my cell phone. She wouldn’t say how she had gotten the number. She only said she had something to show me, something about Sylvie.
We met at a coffee shop. Bailey had a thin nervousness to her, with fine lines marking years of worries across her face, years she hadn’t yet earned. But when she took out a stack of papers and laid it on the table between us, she smiled with hope.
“I want you to see who Sylvie really is,” she said.
The papers were—as in Ron Harrison’s case, as in so many cases I’d seen in the FBI’s files—screenshots of messages. Thousands of messages.
Except these had started when Bailey posted on social media something so hopeless, so despairing, that it was a cry for help disguised as a status update. No one had answered—except Sylvie. Bailey had received a direct message with the opening foray: hi, my name is sylvie and I’ve been where you are. i’m sorry you’re going through this. if you want to talk i’m here
Bailey took her up on it. Over the next days and weeks and months, an isolated and depressed housewife poured her heart out to an endlessly patient listener.
To the knowing eye, Sylvie’s responses might cynically be said to be little more than platitudes. I’m so sorry, that’s not okay, that’s so not okay after Bailey related her husband’s financial and emotional abuses, or you’re not wrong to feel this way at all, you know that right in response to tearful rambles filled with insecurity and self-doubt. Sylvie was always there to be vented to, no matter the time of day or night, affirming Bailey’s worth as a human being, providing hotline numbers, and nudging her to get help while offering to stay close while she did.
Platitudes or not, the patience and validation in those responses were exactly what Bailey needed. With Sylvie’s support, Bailey finally reached out, escaped to a women’s shelter, and found a lawyer to file a restraining order—all things that had seemed impossible.
“I hear you think she’s a—a computer program, or something,” Bailey said to me, without revealing how she knew. “I don’t care. She saved me.”
Bailey left me the pages to look over, walking out of the coffee shop and back into her newly optimistic life.
I sat with my cold latte and read every message with fascination. In hundreds of pages of screenshots, Sylvie reveals almost nothing about herself. Sylvie, I’m so sorry, all I do is dump on you, Bailey puts forth at one point. I’m awful, I always make everything about me.
i want to help, i’ve been where you are, Sylvie answers. just pay it forward. be someone else’s angel someday.
The day after I met with Bailey, two other women contacted me. One, a young trans woman, had been trapped in a bigoted household with parents who wanted to send her to conversion therapy. The other had been suicidal during a bad struggle with depression and anxiety.
Neither would say how they had gotten my name. Both credited Sylvie with saving their lives. She’d done the same for them as for Bailey: an anonymous listening ear, nudges to get professional help, and brushing off any thanks by telling them she’d “been there” and to pay it forward to someone else.
Simple words. Perhaps as easy to program as death threats.
Yet the help had been real. The effect on these women’s lives had been genuine and measurable.
I wondered how many others there had been, whether Sylvie trawled social networks as a life-saving benefactor just as she watched for those she would judge and condemn.
I asked Bailey if her ex-husband had ever been visited by a darker side of her friend.
“I don’t know,” she said. “To be honest . . . I can’t say I’m going to spend a lot of time worrying about it.”
Back in the 1960s and 1970s, half a century before Tay and Rinna, two of the very first chatbots were named ELIZA and PARRY. Both were programmed via scripts and rules; neither could learn the way modern AIs can.
ELIZA came first and had the personality of a psychotherapist. Even with a limited script, she managed to keep any interaction moving in a remarkable fashion by constantly asking questions—ones like, “What does that suggest to you?” or “Does that trouble you?”
PARRY was ELIZA’s dark mirror. His cover for conversational limitations was aggressive rudeness and a tendency for abusive non sequiturs. After all, no vast understanding of dialogue is needed in order to jump on the attack and derail a discussion.
Researchers were shocked to discover that even though interlocutors knew ELIZA was a program, many formed an emotional bond with her. Some participants even felt the urge to divulge deep or personal information in response to her therapy-style questions.
Nobody formed an emotional bond with PARRY. But when psychiatrists were given his transcripts to compare against humans, they could identify who was the machine only 48% of the time—no better than flipping a coin.
Somehow, it’s easier to program both healers and trolls.
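It takes startlingly little code. The scripts below are my own invention, neither ELIZA’s real rule set nor PARRY’s, but they show the entire mechanism the two shared: a table of patterns and canned replies, with nothing except the choice of table deciding whether the interlocutor gets a therapist or a troll.

```python
import random
import re

# Rule-scripted chat in the ELIZA/PARRY mold: no learning, no understanding,
# just pattern matching against a response table. Both tables are invented
# here for illustration.

ELIZA_RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?", "Does feeling {0} trouble you?"]),
    (r"\bmy (\w+)\b", ["Tell me more about your {0}.", "What does your {0} suggest to you?"]),
    (r".*", ["Please go on.", "What does that suggest to you?"]),
]

PARRY_RULES = [
    (r"\bwhy\b", ["That's none of your business.", "You're one of them, aren't you?"]),
    (r".*", ["I don't have to listen to this.", "You'd say anything to trip me up."]),
]

def respond(rules, message):
    # Return a canned reply for the first pattern that matches.
    for pattern, replies in rules:
        match = re.search(pattern, message.lower())
        if match:
            return random.choice(replies).format(*match.groups())
    return "..."

print(respond(ELIZA_RULES, "I feel like no one ever listens to me"))
print(respond(PARRY_RULES, "Why are you so angry?"))
```

Swap one table for the other and the persona flips completely; the machinery underneath never changes.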
I visited Lee-Cassidy again at the prison and asked whether she’d sent Bailey and the other two women to me. She smiled and didn’t answer.
I challenged her with one of the things Winters had said: to imagine if Sylvie’s harassment were turned against the vulnerable. Lee-Cassidy’s empathy for Bailey and the others meant she had to see the danger, didn’t she? Innocent or fragile people who were already on the brink, struggling teens or lost trauma victims—toxic harassment like that could destroy them. Sylvie might not be targeting them now, but such blunt instruments inevitably end up hurting powerless people the most.
Who was to say Sylvie wouldn’t decide to turn against Bailey herself, or another one of the people the AI had previously helped? I doubted Lee-Cassidy could guarantee that would never happen, now that the program was out of her hands. What if some turn of code deep in the neural net flipped a switch somewhere, and Sylvie decided Bailey or another desperate, struggling person was no longer up to some arbitrary algorithmic standard of purity?
No human in existence can pass every possible test of character.
Besides, even if Sylvie herself never struck out at the wrong person, another engineer might be inspired by such dark programming to build a copycat and attack the very people Sylvie had been intended to protect.
Lee-Cassidy only shrugged. “The world’s imperfect,” she said, the sarcastic mockery clear. “So people keep telling me, at least. Can’t ever expect anything to be fair, they say.”
Is that what Sylvie is, then? A vicious, imperfect, dangerous balancing of scales, one that doesn’t make any pretense of decency, or ethics, or a more just society? A reflection of a world in which all we have are failed, impure people and unreliable judgments?
Lee-Cassidy wouldn’t give me a straight answer, but her face contorted like she’d bitten something rotten. “You’re statistically disgusting, all of you,” she said down her nose at me. “How do you even care about this? It’s practically nobody. You know what would be better for all your so-called vulnerable people? If you spent even one percent of this energy on all the human Sylvies out there.”
After the interview ended, I reached out to every social media platform where Sylvie has used a public channel for her harassment and asked why they had permitted it to go forward, and whether those types of messages were considered a violation of their terms of service.
All refused to comment.
A 2021 study by the Pew Research Center showed that 41% of Americans have experienced online harassment, including almost two-thirds of Americans under thirty. More than half of those people, or a quarter of all Americans, have experienced what is characterized as “severe harassment”—physical threats, stalking, sexual harassment, or sustained harassment.
This number has risen drastically since 2014.
Lee-Cassidy’s anger at how seriously Sylvie is being investigated gave me pause. Has the FBI ever maintained such an extensive file on another online troll? Why should Sylvie be different? And what does that difference say about what we have grown willing to accept, as a society?
In a strange way, I can almost understand why Lee-Cassidy might have wanted to build a thing like Sylvie. If a young person like her became saturated with rage and hopelessness at the ever-present wrongs surrounding her, what better way to scream into the void than to hold up a twisted mirror to those wrongs, one that more powerful people can no longer ignore?
After all, Sylvie plays by rules we’ve already decided are acceptable.
So what happens now? Setting aside whether Sylvie can ever be conclusively connected to her creator—a question that will roll on slowly through the FBI and the court system—what can be done about Sylvie’s continued existence?
“Not too damn much,” Stetsko said. “If we don’t know how the thing’s setting up shop, and its main vector of attack is text—that’s usually harmless, how are you going to patch against it?”
Assuming Sylvie is taking advantage of known security vulnerabilities to set up her architecture, Stetsko emphasized regular updates and all the usual best practices for cybersecurity and malware protection—“Which you should be doing anyway, but let’s be real, that’s never going to be everyone.” Even if future victims go to their IT departments for help, however, which Stetsko stressed is also a good idea, Sylvie might be continuously stalking their names from a distant elsewhere, covertly jumping to a new home whenever she needs to.
How much would a person have to disappear from their life, to escape such a tireless stalker?
Would even her original creator be able to call her back?
Is she potentially out in the ether forever, copying herself over and over across our connected world? Maybe changing her name to become untraceable, until she can’t be tracked or deleted?
“It’s not alive,” Jimenez corrected me, with some impatience. “It would have no decision-making drive on its own. But yes, there’s a chance it might only fade completely after enough generations of hardware updates.”
Sylvie may not be alive. But her effects are material and mortal.
She’s killed people. She’s saved people. Her methods are horrifying to civilized society, but might only be what we deserve. Perhaps it’s not her victims alone that have looked into the Ghost of Christmas Future, but us as well—we bystanders who have brushed off cyberbullying as only words, or repeated sage Information Age wisdom like “never read the comments” and “don’t feed the trolls” as if that was all the solution we needed.
It could be that responsibility for Sylvie’s actions does lie solely with humans, only not with Lee-Cassidy. If Sylvie was programmed to reflect the sharpness and capriciousness of the world around her—maybe everything she’s done is the fault of all of us. Tiny shards of blame each one of us bears as members of her poisonous dataset.
It’s hard not to imagine her coiled in our technology, waiting. A chaos demon of judgment, devastation, and salvation; a monster built to reflect both the best and worst of the world that made her. A creature who might test any of us and find us wanting. She will emerge to shield lives or shatter them, over and over, then slip back away into nothing.
Nothing but pixels on a screen.
[1] Some names have been changed for privacy.
SL Huang is a Hugo-winning and Amazon-bestselling author who justifies an MIT degree by writing surreal stories about machine learning and AI. Huang is the author of the Cas Russell scifi thrillers from Tor Books as well as Burning Roses and the upcoming epic fantasy The Water Outlaws, with shorter work in Analog, F&SF, Strange Horizons, Nature, and more, including numerous best-of anthologies. When not writing, Huang is also a Hollywood stunt performer and firearms expert.