Spreading peace through hate? Robot-backed Muslim recruitment program targets 5 million Facebook users



Written by Tim Cocks, CNN

The internet is an extremely social place. The idea of social media "bubbles," or echo chambers, has been around for years, but the phenomenon of "filter bubbles" has become increasingly evident in both our private and public lives. The term describes the way information on social media tends to be filtered through the prism of the individual viewing it. This collective filter has allowed us to develop views of the world (and of ourselves) we might otherwise not have.

Many people might find this useful; others might argue it’s blinding us to a wider context.

Perhaps it’s this latter group who have used the word “filter bubble” to describe AI — a subject that is on everyone’s lips at the moment.

‘Mute the AI’

Michael Stipe has called "monitors" (those who program artificially intelligent machines to think, feel and act) an "ideological dictatorship." The singer is not alone. The term "filter bubble" has taken on a life of its own in popular culture, appearing in Robin Williams' avatar in "Good Will Hunting," as well as in Andy Rubin's popular children's app "Wooden Splinters" and Shane Carruth's short film "The OA."

But this "filter bubble" isn't just about whether we see or ignore others. The term can also refer to our relationship with another culture: something we share or care about with a relative or friend in any part of the world.

Selfie time: She wears "the veil" on her forehead, aka a "mini-beard," to avoid showing an object of attraction to a non-Muslim man via her smartphone. Credit: @walidesharam

It’s important to see the word “filter bubble” in that context.

Think of the image above, featuring Cuban rapper El Chacal. Would El Chacal’s image, or that of any other artist from another culture, be shown in a media context like this one? I doubt it. Such a scenario would break with the idea of a controlled communication process that has come to define social media.

The image reveals not just human-machine interaction but individual-sensor interaction too, and it raises the question of whether AI and similar technologies are the right approach to cultural engagement.

Hate speech and AI: How to make a bot hate you

The history of computing shows that "computer" has not always been a generic term; it has also been applied to software, including individual machine-learning frameworks. As a society, we have come to rely on these frameworks. Inevitably, as with any new technology, we will see both its great potential and its many pitfalls.

Technology is a powerful tool, but we are drowning in a sea of information, a volume that is expanding exponentially. People often feel overwhelmed, and we must seek to understand what is happening.

Relying on machine-learning frameworks might provide a way to do that, by combining complementary technologies. One such approach is "YassifyBot" (or one of its social media cousins), a program designed to help Muslim individuals engage with other Muslims, and to educate them about Muslims.

“YassifyBot”

The program is designed to verify that a person's information is authentic. It is currently being tested by Muslim individuals to determine the effect of using an automated program to verify the authenticity of a person's accounts.

The program takes several steps: it sends users a text message informing them that it has verified their information, uses that message as a channel to contact them, and then sends a second text confirming that their information is accurate and that they are the owner of the account.

The program does not generate any "ill" or "bad" data; it merely checks whether the data is correct. If it is, the account is verified. If it is not, the account disappears from users' profiles. This format reminds people of their responsibility to provide accurate information, and by using an automated tool to ensure that accounts are accurate, its contributors make a clear commitment to their own accountability.
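The article gives no implementation details, but the flow it describes (check a user's claimed information, text them a verification notice, confirm or hide the account) can be sketched in a few lines. Everything below is an assumption for illustration: the `Account` fields, the `send_text` stand-in, and the function names are hypothetical, not YassifyBot's actual code.

```python
# Hypothetical sketch of the verification flow described above.
# All names and fields here are assumptions, not YassifyBot's real API.

from dataclasses import dataclass, field


@dataclass
class Account:
    owner: str
    phone: str
    claimed_info: dict = field(default_factory=dict)
    verified: bool = False
    visible: bool = True


def send_text(phone: str, message: str) -> None:
    """Stand-in for an SMS gateway call (e.g. a Twilio-style API)."""
    print(f"SMS to {phone}: {message}")


def verify_account(account: Account, trusted_record: dict) -> Account:
    """Mirror the steps in the text: check the claimed info against a
    trusted record, notify the user by text, then either confirm the
    account or hide it if the data is wrong."""
    if account.claimed_info == trusted_record:
        account.verified = True
        send_text(account.phone, "Your information has been verified.")
        send_text(account.phone,
                  "Confirmed: your information is accurate and you own this account.")
    else:
        # The article says unverified accounts "disappear" from profiles.
        account.visible = False
        send_text(account.phone, "Your information could not be verified.")
    return account
```

Note that the sketch treats verification as a simple equality check against a trusted record; the article does not say what data source the real program compares against.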
