By Shoshana Wodinsky
While millennials might be known as the generation killing off everything from starter homes to beer, we should also be known as the generation of stress.
A series of reports from health insurance giant Blue Cross Blue Shield showed that depression diagnoses among millennials have rocketed up 47 percent over the past few years compared to the 33 percent boost seen in older generations. Meanwhile, a recent survey from the American Psychological Association showed that nearly half of millennials—44 percent—will admit that their mental health is “less than excellent”.
I’m one of those people. I’m also one of thousands who turn to robots to find some sort of solace.
Across Twitter, there are more than a dozen bots specifically programmed, for lack of a better phrase, to “care.” These are chatbots that, at specific intervals throughout the day, will babble and spit out all sorts of affirmations. Once an hour or so, followers are told how much they matter, how they should keep pushing forward, and how they shouldn’t forget that they’re loved. For some anxiety-sufferers—including myself—the occasional bot-prod becomes almost like an ersatz form of therapy.
These self-care bots, for the most part, are created in much the same way people play Mad Libs. A person creates a bot and gives it different self-care-related phrases (like “please,” “don’t forget!” and “you deserve”), along with a set of verbs, nouns and adjectives.
The bot strings together those linguistic parts to create cohesive sentences and then spits those sentences out on a regular schedule. Once an hour seems to be the norm, but some of them might only tweet once a month or so.
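The mad-libs assembly described above can be sketched in a few lines of Python. The phrase banks and function name here are hypothetical, purely for illustration; a real bot would typically be built with a grammar tool like Tracery (the engine behind Cheap Bots Done Quick) and would post its output through the Twitter API on a schedule rather than printing it:

```python
import random

# Hypothetical phrase banks -- the "mad-libs" parts a botmaker supplies.
TEMPLATES = [
    "please {verb} {noun}.",
    "don't forget to {verb} {noun}!",
    "you deserve {noun}.",
]
VERBS = ["drink", "take", "enjoy", "make time for"]
NOUNS = ["some water", "a short break", "a deep breath", "a nice dinner"]

def make_affirmation(rng=random):
    """Fill one randomly chosen template with random parts."""
    template = rng.choice(TEMPLATES)
    # str.format ignores keyword arguments a template doesn't use,
    # so templates without a {verb} slot still work.
    return template.format(verb=rng.choice(VERBS), noun=rng.choice(NOUNS))

# A real bot would wrap this in a scheduler (e.g. once an hour)
# and post the result via the Twitter API instead of printing it.
if __name__ == "__main__":
    print(make_affirmation())
```

The appeal of this design is its simplicity: the botmaker never writes full sentences, only parts, and the combinatorics of a few templates and word lists yield hundreds of distinct reminders.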
While writing this, I received an alert from @5minselfcare, a bot that brands itself as a self-care “clock.” “Today is very hard,” it told me, “but if you can spend just five minutes thinking about things you’re looking forward to, it will help.” Earlier, the bot said that it believed in me (thanks!), then asked me to take some time and “pick out something nice” for dinner (I’ll try!).
Little nudges like these are some of the only things that will remind me to actually look after myself, rather than indulging the workaholic tendencies that I—and others in my generation—have become known for.
Just over 8,200 Twitter users follow this account, which is actually a pretty slim slice of the people eager for bot-delivered affirmations. More than 15,000 follow @everydaycarebot (which tweets out “practical” self-care reminders), and more than 130,000 follow @tinycarebot, whose one purpose is to remind you to “take a break” every once in a while.
Emily Reynolds, the journalist who created @everydaycarebot, said in an interview with Dazed that she wasn’t “expecting it to change people’s lives.” Still, she hopes that “by reminding people to make doctor’s appointments or deal with their bills or have a wash in the sink if they can’t manage a shower” that she’s making their day “a little bit better.”
Though these bots and others like them have been made with the best intentions, that doesn’t change the fact that Twitter is swarming with robo-accounts that are far less wholesome. Countless bot accounts have been used to sow misinformation both stateside and abroad, using the platform to amplify divisive rhetoric that often amounted to sheer political propaganda. At Congress’s behest last year, the company managed to pick off more than 35,000 bot accounts with dubious Russian origins. Around the 2016 presidential election, these accounts alone pumped out 1.4 million tweets that were viewed roughly 288 million times.
According to Joshua Emerson, a self-styled Twitter bot hunter, accounts like these are still popping up in droves, and Twitter’s battle against them is akin to an endless game of digital whack-a-mole. And while the platform has shed hundreds of thousands of these malicious accounts, the process has resulted in some of its more innocuous or even friendly bots getting caught in the crossfire.
That’s what happened this past summer, when Twitter attempted to stymie its spam flow by cracking down on bot developers that wanted to use its platform. Under these new rules, even the most casual botter has to go through a comprehensive vetting process—meaning that now, you have to indicate the country you’re working out of and provide a minimum of 300 words describing what, exactly, your bot intends to do. Twitter doesn’t tell developers about the criteria their bot has to meet (if any), or how long these applications take to process.
At the same time, the company cracked down on Cheap Bots Done Quick (CBDQ)—a third-party app that roughly 7,000 bot accounts, including @tinycarebot, relied on to function—and all of them were swiftly rendered defunct. Though Twitter wouldn’t give a public answer about the suspension, the app’s creator, George Buckenham, mentioned that “spammers” had been abusing the app and using it to hawk products.
Twitter did eventually restore access to the app following a short hiatus. But in the interim, you had users tweeting that they “needed” @tinycarebot back online, and others saying that they “missed that lil bot”.
The suspension didn’t just highlight the small-yet-important role that these bots play in people’s lives, but it also shone light onto the ever-narrowing tightrope that these accounts must walk in order to continue existing.
“There’s actually a really blurry line between stuff that’s artistic or creative or stuff that amuses people,” Buckenham said, “and stuff that ends up being actual spam.” And Twitter is certainly still full of spam.
Even those with bots running on the platform right now—including bots that just remind you to hydrate or take your medication—are now required to step up to the digital plate and explain what purpose these programs serve for the site writ large.
For some of Twitter’s most prolific botters, this is a bridge too far. Darius Kazemi, an internet artist with more than 80 bots to his name, told Quartz that it would be “prohibitive” to justify the existence of his bots. In that same interview, fellow artistic botter Allison Parrish said that she wouldn’t be taking any “proactive measures” to keep her creations running on the platform.
For now, their bots—and thousands of others—are still chugging away on my timeline and those of countless tweeters. Twitter, for its part, has stated that it’s “committed to supporting devs whose projects make Twitter better, including devs who build creative bots.”
But as the platform continues to be inundated with increasingly sophisticated examples of bot-warfare, it’s not hard to imagine that this may not be the case for much longer. In all likelihood, Twitter will eventually tighten the bottleneck and roll out more restrictions and regulations on creators whose worst crime is just trying to make the site a less awful place to be.
As Oscar Schwartz, an author and researcher on the intersection of machines and art, put it:
“Twitter’s new developer policy is part of a broader attempt to rid the platform of spam and malicious bots, making it a cleaner, more sanitized place to spend time. The question, though, is whether this digital gentrification will sacrifice the very essence of what made Twitter a compelling and creative place to begin with.”
If—or when—that happens, it’ll mean losing some of Twitter’s greatest bots. And it’ll mean losing the cheapest therapists I’ve ever had.
Top Image Credit: Andreas Eldh, Flickr.com