Hate is one of the more popular topics on Twitter. It gathers more tweets and retweets than the MLB, the Grammys, the Super Bowl, and Game of Thrones.
Now, Seattle design firm POSSIBLE has given Twitter users spreading hate a hard choice before they retweet a nasty, hateful nugget: don’t retweet – or retweet and send a donation to an anti-hate group.
Using scrappy problem solving, machine learning, and human decency, they've given people who retweet hate a choice: stop spreading hate speech on Twitter, or retweet and commit a $1 donation to a nonprofit that fights for inclusion, diversity, and equality. Donations are pledged by anyone seeking to curb hate speech online; the pledges are managed and distributed by a third-party platform, Public Good.
The current sponsored group is Life After Hate, which helps members of extremist groups leave and transition to mainstream life.
Machine learning first identifies hateful speech on Twitter. A human moderator then selects the most offensive and most dangerous tweets and attaches an undeletable reply. It informs recipients that if they retweet the message, a donation will be committed to Life After Hate.
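The three-step flow described above can be sketched in a few lines of code. This is an illustrative model only; the class, function names, and the 0.9 score threshold are hypothetical stand-ins, not POSSIBLE's actual system:

```python
# Illustrative sketch of the WeCounterHate flow: (1) an ML classifier scores
# tweets, (2) a human moderator reviews and attaches the counter-reply,
# (3) each retweet of a flagged tweet commits a pledge.
# All names and thresholds here are hypothetical.

from dataclasses import dataclass, field

COUNTER_REPLY = (
    "This tweet has been flagged as hate speech. If you retweet it, "
    "a $1 donation will be committed to Life After Hate."
)

@dataclass
class Tweet:
    text: str
    hate_score: float = 0.0              # set by the ML classifier (step 1)
    replies: list = field(default_factory=list)
    pledged: float = 0.0

def flag_and_reply(tweet: Tweet, moderator_approves: bool, threshold: float = 0.9):
    """Step 2: a human moderator reviews high-scoring tweets and
    attaches the counter-reply."""
    if tweet.hate_score >= threshold and moderator_approves:
        tweet.replies.append(COUNTER_REPLY)

def on_retweet(tweet: Tweet, donation: float = 1.0):
    """Step 3: each retweet of a flagged tweet commits a pledge."""
    if COUNTER_REPLY in tweet.replies:
        tweet.pledged += donation

t = Tweet(text="...", hate_score=0.95)
flag_and_reply(t, moderator_approves=True)
on_retweet(t)
on_retweet(t)
print(t.pledged)  # two retweets of a flagged tweet -> 2.0 pledged
```

The key design point the article describes is that the machine only surfaces candidates; a human makes the final call before any reply is attached.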
Twitter was the early focus of WeCounterHate because its user anonymity makes it a hotbed for white supremacy. The platform's openness also makes it easy to listen in on its conversations, which made Twitter an attractive place to intervene.
“I see Twitter as a pivot platform for white supremacist thinking,” says Herron. “It’s more mainstream and people can be groomed into deeper hate ideologies. Many often go from something like Twitter to Gab. We’ll often see a hatefluencer on Twitter with a Gab profile in their account. Gab is an entirely different level of rough.”
It's working. In its pilot launch, WeCounterHate reduced the retweet rate of tagged hate speech by 66%. Herron and his team at POSSIBLE garnered a 2019 Effie Award for their work; Effies are given annually to recognize the most effective marketing strategies worldwide.
While the mechanics of WeCounterHate are elegant and simple, there were two big challenges: training the machine-learning model and finding a beneficiary partner.
“Hate speech is subjective and it scales,” says Herron, so training the machine-learning engine took effort. “We have a killer data team at POSSIBLE.” But first they had to source the data. Then came the tricky step of manually defining and categorizing hate speech.
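The labeling-and-training loop Herron describes can be illustrated with a toy example. Real hate-speech classification is far harder than this; the sketch below only shows the shape of the task, and the example data and function names are invented:

```python
# Toy illustration of the manual-labeling and training step described above.
# The examples and scoring scheme are made up for illustration only.

from collections import Counter

# Step 1: source data and manually label it (the "tricky step" in the text).
# Label 1 = hateful, 0 = benign.
labeled = [
    ("we welcome everyone here", 0),
    ("have a great day friend", 0),
    ("go back where you came from", 1),
    ("those people ruin everything", 1),
]

def train(examples):
    """Count how often each word appears in hateful vs. benign examples."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def score(counts, text):
    """Score a text by whether its words appear more in hateful examples."""
    return sum(counts[1][word] - counts[0][word] for word in text.split())

model = train(labeled)
print(score(model, "those people should go back"))  # positive -> leans hateful
print(score(model, "welcome friend"))               # negative -> leans benign
```

Even this toy makes the subjectivity problem visible: the model is only as good as the human judgments baked into the labels, which is why the partner's expertise (below) mattered so much.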
Finding a nonprofit willing to work with an unproven platform took some time. Life After Hate also proved crucial in training the machine-learning model. "They know more about what to look for than we ever could have discovered just by research."
Life After Hate identified "coded" hate speech – sometimes replacement terms, but more often combinations of emoji and characters that Twitter users recognize. The group also gave critical insight into how to interject WeCounterHate into Twitter conversations.
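Detecting that kind of coded vocabulary is, at its simplest, a lookup problem. The sketch below uses harmless placeholder entries; the real coded terms and emoji combinations came from Life After Hate's expertise, not from anything public:

```python
# Sketch of spotting "coded" hate speech: replacement terms and
# emoji/character combinations. The entries below are harmless
# placeholders, not real coded vocabulary.

CODED_TERMS = {
    "codedword1",              # placeholder for a replacement term
    "\U0001F921\U0001F4A5",    # placeholder emoji combination
}

def contains_coded_term(text: str) -> bool:
    """Return True if any known coded term appears in the text."""
    lowered = text.lower()
    return any(term in lowered for term in CODED_TERMS)

print(contains_coded_term("totally normal tweet"))         # False
print(contains_coded_term("look at this codedword1 lol"))  # True
```

In practice a substring lookup like this is only a first pass; coded language shifts constantly, which is why ongoing human expertise feeds the system.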
WeCounterHate is racking up wins and more partners have been excited to jump in. “I feel like everyone that hears about our project wants to help,” says Herron. “Our vendors we normally work with to produce advertising campaigns have been very gracious in donating their expertise and time for WeCounterHate.”
The POSSIBLE team has also had conversations with prominent groups and individuals committed to eliminating online harassment, including UN Tech Against Terrorism and Google Jigsaw – an incubator that builds technology to counter violent extremism and protect people from online harassment.
There's still a long way to go in stopping the spread of hate online. "It's hard to separate the speed at which hate currently spreads and the role [social media] platforms play in making that possible."
Facebook, for example, uses a combination of machine learning and human moderators to sift through the staggering piles of information and remove offending content. WeCounterHate differs in that it creates an obvious disincentive to spread hate.
Ultimately, WeCounterHate is seeking to change not merely perceptions but behavior. Because retweeting is so easy, most people do it with hardly a thought. By making them pause, the campaign causes them to think about what they are doing and, hopefully, encourages them to act differently.
Herron and his POSSIBLE team are interested in hate crime as well. They’ve begun data analysis that can connect real-world hate incidents with Twitter hate speech at a geographical level. They’re hopeful the connections pan out to be insightful enough to publish.
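An analysis like that boils down to correlating two per-region series: counts of flagged tweets and counts of reported hate incidents. The sketch below uses invented numbers and a hand-rolled Pearson correlation purely to show the shape of the computation; it is not the team's actual methodology or data:

```python
# Hypothetical sketch of the geographic analysis described above:
# correlate per-region counts of flagged tweets with reported hate
# incidents. All numbers are invented for illustration.

from math import sqrt

flagged_tweets = {"region_a": 120, "region_b": 45, "region_c": 200, "region_d": 10}
hate_incidents = {"region_a": 14,  "region_b": 6,  "region_c": 22,  "region_d": 2}

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

regions = sorted(flagged_tweets)
x = [flagged_tweets[r] for r in regions]
y = [hate_incidents[r] for r in regions]

r = pearson(x, y)
print(round(r, 3))  # close to 1.0 in this invented data
```

Of course, a high correlation on real data would show association, not causation; that caveat is exactly why the team wants the connections to "pan out" before publishing.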
Herron also thinks there’s opportunity to use machine-learning to identify and shut down personally targeted hate speech and bullying. “I definitely think that will become a thing here soon, especially in platforms that are used to dox or harass individuals.”
Shawn has built an impressive career in design and creative direction. He racked up wins with household brands like Brooks Shoes, Whole Foods, Microsoft, and SAFECO Insurance. As Creative Director at POSSIBLE, Shawn has developed award-winning work for Bacardi, AT&T, and The Lonely Whale Foundation. His laurels include recognition by SXSW Interactive, the Webby Awards, an ADDY Best in Show, AdFreak, a PRSA Totem Best in Show, The One Show, the Shorty Awards, and the Effies.
Shawn is happy to share credit for his success with the Design Department at WWU. “They expect their students to ‘figure it out’, and as a result I believe many of the graduates from that program have that dig-in mentality. Scrappier than most.”
In addition to WeCounterHate, Shawn has contributed work to Facinghomelessness.org and The Block Project, Seattle-based organizations that work to end homelessness.
“I think as creative leaders we’ve been given a ton of opportunity to learn how to shift people’s opinions throughout our careers – be it through design, writing or advertising. There’s a responsibility in that opportunity. A responsibility to do more than sell things.”