Facebook Asking “Hard Questions”, Outlines Efforts to Eliminate Extremist Content

If you were seeking some perspective on the significance of social media in the current communications landscape, you need look no further than the debate around Facebook’s influence over the news cycle, particularly around the most recent elections across Europe and North America.

In varying capacities, social media has been blamed for everything from the rise of extreme political movements to the proliferation of false propaganda – and all of those claims are true to some degree.

Really, the shifting media landscape makes perfect sense – we were once reliant on mainstream media outlets to tell us what was happening in the world, but in the modern, connected age, we’re all able to share news and updates with each other just as fast, and we instinctively place more trust in information shared by those we know. Inevitably, this also means that some beliefs and movements gain more momentum, because they’re able to generate widespread reach, while the increased emphasis on digital news content has put more pressure on traditional outlets to produce sensationalized, divisive content to fuel clicks.

That, in turn, further solidifies and justifies such movements. So yes, Facebook can, and does, empower politicized groups, no question. Now to work out what we do to stop it.

This is one of several key questions Facebook’s looking to examine in a new series they’re calling ‘Hard Questions’.

As explained by Facebook:

“As more and more of our lives extend online, and digital technologies transform how we live, we all face challenging new questions — everything from how best to safeguard personal privacy online to the meaning of free expression to the future of journalism worldwide. We want to broaden that conversation. So today, we’re starting a new effort to talk more openly about some complex subjects.”

Among the topics Facebook’s looking to address with this new series are:

  • How should platforms approach keeping terrorists from spreading propaganda online?
  • After a person dies, what should happen to their online identity?
  • How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Who gets to decide what’s controversial, especially in a global community with a multitude of cultural norms?
  • Who gets to define what’s false news — and what’s simply controversial political speech?
  • Is social media good for democracy?
  • How can we use data for everyone’s benefit, without undermining people’s trust?
  • How should young internet users be introduced to new ways to express themselves in a safe environment?

These are definitely some serious considerations, and it’ll be interesting to see just how much Facebook is willing to probe each, particularly given that some focus on the methods which directly contribute to how the platform generates revenue – most notably, the questions around data collection and usage.

In the first instalment, Facebook has outlined some of the key elements of how they tackle terrorism and extremist content on their platform, including their latest advances in artificial intelligence and machine learning, which have been designed to detect and weed out questionable content.

Facebook’s summary is surprisingly open, providing overviews on the strengths, and limitations, of their systems to detect such behavior.

“We’ve been cautious, in part because we don’t want to suggest there is any easy technical fix. It is an enormous challenge to keep people safe on a platform used by nearly 2 billion every month, posting and commenting in more than 80 languages in every corner of the globe. And there is much more for us to do.”

Facebook notes that they have a team of more than 150 people solely focused on detecting and removing terrorist and extremist-related content, along with their advancing machine learning efforts, which are constantly evolving. Through this, they’re hoping to make Facebook “a hostile place for terrorists” and eliminate misuse. As the platform expands, so too do the challenges, but it’s an interesting insight into Facebook’s perspective on this key area.

At the same time, Twitter has also outlined their efforts to eliminate bots and misinformation on their platform. This comes after reports that huge networks of Twitter bots are being ‘weaponized’ by political candidates to sway public opinion.

“We’re working hard to detect spammy behaviors at source, such as the mass distribution of Tweets or attempts to manipulate trending topics. We also reduce the visibility of potentially spammy Tweets or accounts while we investigate whether a policy violation has occurred. When we do detect duplicative, or suspicious activity, we suspend accounts. We also frequently take action against applications that abuse the public API to automate activity on Twitter, stopping potentially manipulative bots at the source.”

Such efforts could extend beyond just those bots used for political manipulation, with many Twitter users still buying followers, Likes and retweets. There are also apps like Thunderclap, which has gained momentum of late – Thunderclap enables users to sign up to share a specific tweet or post at an assigned time of day, which helps boost promotion and could potentially manipulate Twitter’s Trending Topics: a heap of people tweeting about the same thing all at once indicates a trend, which gets it onto the ‘Trending’ list, further boosting promotion.

It’s difficult to know just how far Twitter’s efforts might extend, but all such uses of their systems could come under increased scrutiny – worth considering for those who are employing such tactics.

While the impacts of both Facebook and Twitter’s efforts won’t be clear for some time, it is interesting to note how the major networks are looking to address such issues, to consider their flow-on effects, and to work out how they can counter misuse. Facebook initially played down their influence over public opinion, but mounting pressure has forced them to act – hopefully, through this, we’ll see new measures which enable all platforms to weed out questionable behaviors and enable free expression without also fueling anti-social and destructive elements.

But really, the right balance is virtually impossible to strike. Every effort on this front should be supported and encouraged, but it’s difficult to have a platform that facilitates global, real-time expression within any set of defined parameters around what that means.

The discussions, however, are important, and are worth putting forward.

Source: SocialMediaToday.com
