At the WIRED25 Festival in San Francisco on Friday morning, WIRED editor in chief Nicholas Thompson recalled being interviewed earlier this year for a story on how LinkedIn had managed to avoid becoming a hot mess like so many other social media platforms. He said at the time that it likely had something to do with the way that LinkedIn profiles are tied to users' real-world identities, which disincentivizes bad behavior, but that his opinion on the matter changed after the story was published.
"Because I was quoted in the story, I was tagged in all of the replies," Thompson explained during an on-stage interview with LinkedIn CEO Jeff Weiner. "And the thing that came in the replies is that there are a ton of women who feel like public conversations on LinkedIn may be great, but they are harassed like hell in private messages."
He asked Weiner how LinkedIn has responded to the feedback, and what has changed since. Weiner didn't have a specific answer.
Instead, Weiner reiterated that, for LinkedIn, ensuring conversation health and high user trust on the platform is important to the company, and touted the company's use of technology to identify problematic content as quickly as possible. Weiner said that LinkedIn also relies heavily on users to identify and flag bad behavior so that moderators can take down activity that breaks the rules.
When pressed on the moderation challenges posed by private messages, which are visible only to the two parties in question, Weiner doubled down on the position that the onus is primarily on users, not the company, to spot and report harassment. "Harassment in private messages is just as easily flagged as public harassment," said Weiner. "As long as people understand who to reach out to, we're going to get it in the hands of the right team and take action."
In recent weeks, Facebook has come under fire for its policies regarding misinformation, one of which permits political candidates to lie in paid advertisements without fear of being fact-checked. Like Twitter, the company has generally shied away from policing posts by political figures that break its rules, so long as the posts don't incite violence or have the potential to cause real-world harm.
LinkedIn, Weiner explained, takes that approach even further. "If there is an intention to deceive and do harm, then yes, [we will police content]," but LinkedIn won't police misinformation generally, he said, as the platform doesn't want to insert itself into complicated user debates about the truth.
"What's true to some may not be true to others," said Weiner. "Unfortunately, when facts start getting called into question, it starts to seriously muddy the water in terms of the kinds of quality conversations that all of us can have."