When it comes to misinformation, it's a Herculean task to rein it in once it's bounced around the internet, security experts argued at the RSA Conference this week.
"The overwhelming majority of people who are ever going to see a piece of misinformation on the internet are likely to see it before anybody has a chance to do anything about it," according to Yoel Roth, the former head of Trust and Safety at Twitter.
When he was at Twitter, Roth observed that over 90% of the impressions on posts were generated within the first three hours. That's not much time for an intervention, which is why it's important for the cybersecurity community to develop content moderation technology that "can give truth time to wake up in the morning," he said.
"It's a hacking of people problem," lamented panel moderator Ted Schlein, chairman and general partner at Ballistic Ventures, a cybersecurity venture capital firm. "In my view, if we spend so much time, energy, and dollars fighting to protect our technology and our systems, shouldn't we be doing the same for people?"
The cybersecurity community should focus on creating ways to detect and shut down disinformation while mitigating its effects, Schlein argued. Presumably, this call to action includes targeting misinformation, which differs from disinformation in intent. (Misinformation is defined as "incorrect or misleading information," regardless of intent. Disinformation is a lie told deliberately to influence opinion or cover up a fact.)
Here are some recent examples of disinformation campaigns and misinformation spreaders caught in the act:
Medical professionals have been complaining for years about patients taking dangerous advice from "expert