Antisocial content can affect social media message


Antisocial media

Antisocial media is the reduction of face-to-face communication that results from using social media. Generations X through Z often prefer antisocial media to actual personal interaction, sharing everything in life online ad nauseam. Do I really need to see a jpeg of your latest meal? Or your kid at the pool? Or your buddy passed out after he shit himself?

Zuckerberg’s challenge with antisocial content

Facebook is tackling antisocial content by hiring 3,000 people to monitor posts, after recent instances of users streaming disturbing acts on its ‘Live’ service.
It may seem a very hands-on approach for an internet giant to rely on humans instead of algorithms to get a job done, but here’s why monitoring content on social media is proving to be such a challenge…

Post overload

A quarter of the world’s population (1.94 billion people) now uses Facebook every month; nearly 1.3 billion use it daily. Observers say the sheer volume of posts may be growing too large for Facebook’s algorithms and current staff to manage.
Most of the company’s current content reviewers are low-paid overseas contractors who spend just a few seconds on each post. A study last year found they apply Facebook’s standards inconsistently.

Censorship and free speech balance

Facebook video is designed for sharing millions of personal stories. It has also been used to document events of public interest, such as a series of police shootings of unarmed black men in the US that sparked a national debate on race and policing. There is an argument that ‘distressing speech is sometimes the most important to a public conversation’, so censorship often proves to be an issue.

Human solution

Experts say hiring more moderators will boost monitoring as ‘humans tend to have more nuance and context than an algorithm’. But Zuckerberg has told investors, ‘no matter how many people we have on the team, we’ll never be able to look at everything’. Facebook feels that artificial intelligence tools will eventually allow reviewers to do a better job of reviewing content.
On Facebook, a video, picture or any other piece of content reaches the review team after it is reported by users for flouting its ‘community guidelines’.

Facebook is not the only one

Google, too, has struggled with content issues involving its YouTube video service. An automated advertising system that sometimes places ads next to questionable content has also proved troublesome.
Google reportedly has ‘a 10,000-strong army of independent contractors to flag offensive or upsetting content’ in search results. One expert estimates that globally, ‘well over 100,000’ people, many of them low-paid workers in countries like the Philippines, are paid to police online content.

Twitter’s case

For years, Twitter has faced criticism for the rampant abuse some of its users inflict on others. In 2015, Twitter said it was tripling the size of its staff handling abuse complaints, but complaints of online abuse persist.
