
Liker, a Facebook Alternative for Liberals, Is Hive of False Claims About Trump

...Now, Rivero wants to take the diehard liberals from Occupy Democrats—and anyone else he can get—onto a social network of his own. In early August, Rivero quietly launched Liker, an upstart that looks very similar to Facebook, if Facebook was only about politics. Like Occupy Democrats, though, Liker has quickly filled with fake information about Trump.

Rivero describes Liker as the answer for Facebook users who feel that their own posts on Facebook are being ignored in favor of posts from popular political-themed pages. In return, Liker offers an obsessive focus on engagement—Liker users are numerically ranked according to the likes they’ve accumulated, with top users receiving gold or silver “stars.” ...

https://www.thedailybeast.com/liker-a-facebook-alternative-for-liberals-is-hive-of-false-claims-about-trump

Comments

  1. For fucks sake, Trump provides plenty of actual outrageous news items without having to fake a single one...

    ReplyDelete
  2. Cindy Brown Let's stick to platform and migration discussions, not politics, Oh Fellow Moderator ;-)

    ReplyDelete
  3. Cindy Brown the Russians are probably behind liberal propaganda too, their aim is to sow division and weaken NATO, both of which are served by candidates on the extreme left or right, and they made Facebook propaganda supporting both extremes in the 2016 election.

    ReplyDelete
  4. Joshua Lee Turning that observation to the question of social and online media: how do you avoid this?

    What mechanisms -- technical, social, epistemic, values, whatever it takes -- can keep media systems from turning into propaganda cesspits?

    ReplyDelete
  5. Edward Morbius have user-programmed filters, so the people with sense can not bother to read the nonsense people pick up on the Internet. Almost anything else is fighting the Internet, or requires a budget bigger than any of us have.

    ReplyDelete
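The user-programmed filters suggested above can be sketched very simply: each user keeps personal blocklists and mute lists, and the feed is filtered client-side against them, killfile-style. A minimal illustration (all names and data here are hypothetical, not any platform's actual API):

```python
# Killfile-style per-user filtering: each user maintains their own
# blocked-author and muted-keyword lists; matching posts are hidden
# from that user's feed only, with no effect on anyone else.

def visible_posts(posts, blocked_authors, muted_keywords):
    """Return only the posts not filtered by this user's own rules."""
    result = []
    for post in posts:
        if post["author"] in blocked_authors:
            continue  # author is in this user's killfile
        text = post["text"].lower()
        if any(kw.lower() in text for kw in muted_keywords):
            continue  # post matches a muted keyword
        result.append(post)
    return result

feed = [
    {"author": "alice", "text": "Platform migration tips"},
    {"author": "troll42", "text": "Outrage bait headline"},
    {"author": "bob", "text": "SHOCKING claim about vaccines"},
]
filtered = visible_posts(feed, blocked_authors={"troll42"},
                         muted_keywords={"shocking"})
```

This is essentially what Usenet killfiles and G+ circles offered; the filtering happens per reader, which is also the weakness raised later in the thread.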
  6. Google+ circles were real useful for that BTW. Facebook also has a mechanism like G+ circles, but they make it so much more trouble to set up that nobody uses it.

    ReplyDelete
  7. Of course, that leaves the rest of society voting for useful idiots like before. I don't know how to fight that. Prayer perhaps.... :)

    ReplyDelete
  9. Joshua Lee user-programmed filters are great for the individual user, but they allow the propaganda engines free rein over the content. (See Usenet)

    ReplyDelete
  9. Jose Pina Coelho Usenet was a darned sight finer, before the eternal September, than Facebook.

    ReplyDelete
  10. Active moderation versus "all that is needed for evil to prevail is for good people to support free speech": left unmoderated, free speech explodes like popcorn into vitriol.

    ReplyDelete
  11. Passive and active moderation tools are a good idea if a defined set of rules is in place. Independent, user-triggered (via reporting) fact checking of posts would be helpful. A reported post could be removed from view until the fact check is complete, or at least flagged publicly to show that a fact-check report has been requested. When the check is complete, the post could be flagged with a rating and, depending on the result, automatically marked as "opinion". This would give the OP an opportunity to correct errors and remove the opinion mark by editing the post, which would trigger another fact check.

    Taking this type of approach honors the spirit of free speech while combating propaganda of all kinds. Of course, the approach has to be part of the terms of service for all users to agree to, and should be clearly posted. Conversely, this also allows tracking of harassment via false reporting (people who constantly flag posts they politically oppose to get them removed from view); such users would be subject to discipline under the same terms. This does not apply to posts that violate other terms, such as those that are criminal or civil violations of jurisdictional law (libel, slander, harassment, defamation, etc.).

    ReplyDelete
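The report-triggered fact-check lifecycle described above is essentially a small state machine: visible → pending fact check (on report) → either restored or marked "opinion" (on completion), with author edits re-entering review. A sketch of that lifecycle (state names and class are illustrative, not any platform's actual implementation):

```python
# State machine for the report-triggered fact-check workflow:
# a reported post is publicly flagged as pending, a completed check
# either clears it or marks it "opinion", and an author edit sends
# it back through review.

class Post:
    def __init__(self, text):
        self.text = text
        self.state = "visible"

    def report(self):
        """A user report publicly flags the post for fact checking."""
        if self.state == "visible":
            self.state = "pending_fact_check"

    def complete_check(self, passed):
        """The fact check either restores the post or marks it opinion."""
        if self.state == "pending_fact_check":
            self.state = "visible" if passed else "marked_opinion"

    def author_edit(self, new_text):
        """Correcting the post removes the mark but triggers a re-check."""
        self.text = new_text
        self.state = "pending_fact_check"

p = Post("Candidate X said Y")
p.report()
p.complete_check(passed=False)
p.author_edit("Candidate X said Y (corrected)")
```

A real system would also need to record who filed each report, so the false-reporting abuse mentioned above can be tracked under the same terms.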
  12. Joshua Lee I set up friend categories long ago in Facebook, but initially they didn't have much functionality. It was only more recently that they allowed you to post to a specific group. And you can't post to more than one group, but you can put individual person in more than one group. Still, given the ever-changing platform and historical propensity to push private posts into public ones, I don't trust that their functionality works as advertised.

    ReplyDelete
  13. Shelenn Ayres Some set of Arbiters of Truth is a role I've suggested elsewhere. I don't know if that would be sufficient.

    To say nothing of acceptable to various parties.

    The role of various fact-check (and alt-fact-check) organisations is along these lines. The dichotomy presents one of the fundamental challenges.

    ReplyDelete
  14. Edward Morbius A good example of how not to have fact checkers is FB hiring right-wing-biased fact checkers. Then again, they are a company and have the right to control the "ambience" of their space. I was referring to applying a best practice to federated social networks in a transparent fashion. Bias in fact checking would be revealed publicly by users without fear of retribution.

    thedailybeast.com - How The Weekly Standard Played Facebook and Screwed Think Progress

    ReplyDelete
  15. Curiously, when I just tried to view the site from Japan, the following error message appeared:

    " We are sorry Liker is not yet available in your country. Please check back in a couple of weeks."

    Apparently, the site employs a script that sniffs which country my IP address is associated with, and depending on the country, refuses to display the site.

    ReplyDelete
  16. Edward Morbius Now you've just converted the problem to one of finding reliable fact-checkers. In the end such a process will IMAO be subverted ultimately by either market or political forces (or both, take your pick). I suspect massaging the dynamics of the whole system somehow is the most viable option, and I barely know how to begin there.

    ReplyDelete
  17. Michael Earl I'm going to try to itemize the various options here. For now I'm not focusing on "this works" / "this doesn't work", but just "what is the available space"? And the focus is truth or validity, not other forms of abuse (stalking, harassment, coercion, blackmail, etc.)

    1. Individual / personal responsibility. Readers are responsible for assessing the validity / veracity of specific claims.

    2. Publisher responsibility. Sites are responsible for vetting, or at least addressing, misinformation / disinformation / propaganda distributed through their networks.

    3. Distributed collective action. Teams of users deploy tools to vet / confirm quality / validity.

    4. Arbiters of truth. One or more agents (organisations, sites, individuals) who assess and rate validity of items.

    5. Reputation systems. Past statements are assessed and carried forward to present pronouncements.

    6. Hierarchical reputation. As with 5., but the reputation cascades up / down from source, author, editor, publisher, site, network, and for social networks, those who post/repost/forward content.

    7. AI/ML systems. This is effectively 4. but transacted by software, not humans.

    8. Arbiters/AI hybrid. What it says on the tin.

    Am I missing anything?

    ReplyDelete
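Options 5 and 6 in the list above (reputation systems, with cascading) lend themselves to a concrete sketch: an author's score could be a moving average of per-post veracity ratings, and a repost could carry a discounted blend of the source's and the sharer's scores. The weights below are arbitrary illustrative choices, not a proposal from the thread:

```python
# Toy reputation system for options 5/6: past statements carry forward
# via an exponential moving average, and reposts inherit a discounted
# mix of source and sharer reputation (the "cascade" of option 6).

def update_reputation(current, post_rating, alpha=0.2):
    """Fold a new veracity rating (0..1) into the author's score."""
    return (1 - alpha) * current + alpha * post_rating

def shared_post_score(source_reputation, sharer_reputation, discount=0.5):
    """A repost's score blends the source's and the sharer's reputations."""
    return discount * source_reputation + (1 - discount) * sharer_reputation
```

Even this toy version exposes the hard design questions: who assigns the per-post ratings (which is options 1-4 all over again), and how quickly old behavior should decay.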
  18. Sadly it often collapses at 1. I found it on the internet = it is true. Just Google it!

    ReplyDelete

Post a Comment

New comments on this blog are moderated. If you do not have a Google identity, you are welcome to post anonymously. Your comments will appear here after they have been reviewed. Comments with vulgarity will be rejected.
