Trump’s latest attack on Section 230 is really about censoring speech - The Verge

One aspect of the 2020 presidential campaign that isn’t much discussed is the fact that both candidates want to end the internet as we know it. Both President Trump and Joe Biden have called for the end of Section 230 of the Communications Decency Act, which protects tech companies in most cases when their users post something illegal on their platforms.

Trump brought the subject up today when a Twitter account with fewer than 200 followers posted an obviously doctored image of Senate Majority Leader Mitch McConnell dressed up in Soviet military garb, with the caption reading “Moscow Mitch.”

“Why does Twitter leave phony pictures like this up, but take down Republican/Conservative pictures and statements that are true?” the president wanted to know. “Mitch must fight back and repeal Section 230, immediately. Stop biased Big Tech before they stop you!”

He then tagged Republican senators Marsha Blackburn and Josh Hawley, who reliably step up to lodge baseless complaints about systematic bias against their party whenever called upon. (In fact, they introduced something called “the Online Freedom and Viewpoint Diversity Act” on Tuesday, the point of which seems to be to stop social networks from doing so much moderating.)

The reason Twitter (usually) leaves phony pictures like that up is that the United States permits its citizens to speak freely about politicians — even to say mean things about them. Repealing Section 230 would likely have no impact on the tweet in question, because the Twitter user’s speech is protected under the First Amendment.

It might, however, make Twitter legally liable for what its users post — which would lead the company to remove more speech, not less. Whatever repealing Section 230 might achieve, it would not be what the president seems to want.

Anyway, all of this is well known to followers of the long-running Section 230 debates and seemingly impenetrable to everyone else. But if there’s one important lesson from 2020, it’s that long-running debates over expression can sometimes result in clumsy but decisive actions — ask TikTok! And so it’s worth spending a few more minutes talking about what smarter people say ought to be done about Section 230.

As it so happens, there’s a sharp new report out today on the subject. Paul Barrett at the NYU Stern Center for Business and Human Rights looks at the origins and evolution of Section 230, evaluates both partisan and nonpartisan critiques, and offers a handful of solutions.

To me there are two key takeaways from the report. One is that there are genuine, good-faith reasons to call for Section 230 reform, even though they’re often drowned out by bad tweets that misunderstand the law. The critique that lands hardest for me is that Section 230 has allowed platforms to under-invest in content moderation in basically every dimension, and the cost of the resulting externalities has been borne by society at large.

Barrett writes (PDF):

Ellen P. Goodman, a law professor at Rutgers University specializing in information policy, approaches the problem from another angle. She suggests that Section 230 asks for too little — nothing, really — in return for the benefit it provides. “Lawmakers,” she writes, “could use Section 230 as leverage to encourage platforms to adopt a broader set of responsibilities.” A 2019 report Goodman co-authored for the Stigler Center for the Study of the Economy and the State at the University of Chicago’s Booth School of Business urges transforming Section 230 into “a quid pro quo benefit.” The idea is that platforms would have a choice: adopt additional duties related to content moderation or forgo some or all of the protections afforded by Section 230.

The Stigler Center report provides examples of quids that larger platforms could offer to receive the quo of continued Section 230 immunity. One, which has been considered in the U.K. as part of that country’s debate over proposed online-harm legislation, would “require platform companies to ensure that their algorithms do not skew toward extreme and unreliable material to boost user engagement.” Under a second, platforms would disclose data on what content is being promoted and to whom, on the process and policies of content moderation, and on advertising practices.

This approach continues to enable lots of speech on the internet — you could keep those Moscow Mitch tweets coming — while forcing companies to disclose what they’re promoting. Recommendation algorithms are the core difference between the big tech platforms and the open web that they have largely supplanted, and the world has a vested interest in understanding how they work and what results from their suggestions. I don’t care much about a bad video with 100 views. But I care very much about a bad video with 10 million.

So whose job will it be to pay attention to all this? Barrett’s other suggestion is a kind of “digital regulatory agency” whose functions would mimic some combination of the Federal Trade Commission, the Federal Communications Commission, and similar agencies in other countries.

It envisions the digital regulatory body — whether governmental or industry-based — as requiring internet companies to clearly disclose their terms of service and how they are enforced, with the possibility of applying consumer protection laws if a platform fails to conform to its own rules. The TWG emphasizes that the new regulatory body would not seek to police content; it would impose disclosure requirements meant to improve indirectly the way content is handled. This is an important distinction, at least in the United States, because a regulator that tried to supervise content would run afoul of the First Amendment. [...]

In a paper written with Professor Goodman, Karen Kornbluh, who heads the Digital Innovation and Democracy Initiative at the German Marshall Fund of the United States, makes the case for a Digital Democracy Agency devoted significantly to transparency. “Drug and airline companies disclose things like ingredients, testing results, and flight data when there is an accident,” Kornbluh and Goodman observe. “Platforms do not disclose, for example, the data they collect, the testing they do, how their algorithms order news feeds and recommendations, political ad information, or moderation rules and actions.” That’s a revealing comparison and one that should help guide reform efforts.

Nothing described here would really resolve the angry debate we have once a week or so in this country about a post that Facebook or Twitter or YouTube left up when they should have taken it down, or took down when they should have left it up. But it could pressure platforms to pay closer attention to what is going viral, what behaviors they are incentivizing, and what harms all of that may be doing to the rest of us.

And over time, the agency’s findings could help lawmakers craft more targeted reforms to Section 230 — which is to say, reforms that are less openly hostile to the idea of free speech. Moscow Mitch will continue to have to take his lumps. But the platforms — at last — will have to take theirs, too.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending down: A video of a man shooting himself with a gun started circulating on TikTok Sunday night, despite the company’s attempts to take it down. Creators warned that the clip was being hidden in innocuous videos and shared across the site, making it harder to avoid. (Julia Alexander / The Verge)

Governing

The Trump campaign is betting on YouTube as a primary way to reach voters ahead of the November election. It appears to be a move away from the Facebook strategy that helped propel Trump to victory in 2016. Alex Thompson at Politico tells the story:

Many digital strategists say YouTube’s algorithm is more likely to recommend to viewers channels that are updated regularly with new content. “The name of the game with algorithms is to flood the zones,” said Eric Wilson, a veteran Republican digital operative. “The Trump campaign is putting on a master class in advertising according to algorithms — it just rewards the side that will produce more content.” [...]

The Trump campaign’s YouTube strategy is also the latest example of it becoming its own news publisher, bypassing the established media. Many of the campaign’s videos are short news clips or snippets of the press secretary’s daily briefing.

The 2020 US election will likely spark violence and a constitutional crisis, according to experts who gamed out possible November scenarios. Unless Biden wins in a landslide, the experts predict significant unrest. Gulp. (Rosa Brooks / The Washington Post)

The Trump campaign launched a series of Facebook ads featuring a manipulated photo of Joe Biden edited to make the former vice president appear older. It’s among the latest examples of Trump sharing content that has been deceptively altered to attack Biden. (Jesselyn Cook / HuffPost)

Joe Biden’s campaign is taking over a popular Instagram account created by a teen supporter. Formerly a fan account, @VoteJoe will now serve as the campaign’s primary point of grassroots outreach on Instagram. (Makena Kelly / The Verge)

Also: Joe Biden is partnering up with the celebrity video platform Cameo to allow celebrities to earmark payments for his campaign. Andy Cohen, Mandy Moore, Tituss Burgess, Dulé Hill, and Melissa Etheridge are lending their support to the campaign on the platform starting this week. (Makena Kelly / The Verge)

Oracle’s closeness with the Trump administration could prove helpful in its bid to buy TikTok. Oracle founder Larry Ellison is a prominent Trump supporter. (David McCabe / The New York Times)

TikTok and WeChat are being lumped together in the Trump administration’s attempt to crack down on national security threats from China. But WeChat, in addition to being a vital communication channel for the Chinese diaspora, is also a global conduit of Chinese state propaganda, surveillance, and intimidation. (Paul Mozur / The New York Times)

Facebook’s ban on political ads the week before the US election will muzzle important political speech and disproportionately burden challenger campaigns, this article argues. That could benefit incumbents who have large organic reach on social media platforms. (Daniel Kreiss and Matt Perault / Slate)

Also: Facebook’s political ad ban could threaten the ability of election officials to spread accurate information about how to vote. (Jeremy B. Merrill / ProPublica)

Facebook’s decision to leave up Trump’s post urging people to vote twice angered employees, who called the move “shameful” and “unconscionable.” (Craig Silverman and Ryan Mac / BuzzFeed)

Facebook took down an image posted by GOP congressional candidate Marjorie Taylor Greene, a QAnon conspiracy theorist, showing her holding a rifle next to a photo of Alexandria Ocasio-Cortez. The company said the post violated its policy on “violence and incitement.” (Eliza Relman / Business Insider)

Misinformation campaigns are likely going to come to online multiplayer games like Animal Crossing. Today, no online multiplayer game has a publicly available policy specifically related to medical or political disinformation in the US. (Daniel Kelley / Slate)

Amazon said it plans to continue protesting the Department of Defense’s decision to award the JEDI contract to Microsoft. The DoD recently affirmed its decision, but Amazon said not all the relevant information about the “politically corrupted contract” has been made public. Can’t wait! (Amazon)

Apple is doubling down on its legal battle against Epic Games. The company filed counterclaims alleging Epic breached its contract and seeking an unspecified amount in damages. (Todd Haselton / CNBC)

Apple didn’t commit to stop processing requests for user data from Hong Kong authorities in the wake of a national security law imposed by Beijing. Now, the company is opening up about what kinds of data requests it receives. (Zack Whittaker / TechCrunch)

The Australian Competition and Consumer Commission opened an investigation into the Apple App Store and Google Play. The commission is looking at competition between the two app stores and how they share data. (Tegan Jones / Gizmodo)

Italy’s competition authority opened an investigation into cloud storage services operated by Apple, Dropbox and Google. The move comes in response to complaints about how the companies collect user data for commercial purposes. (Natasha Lomas / TechCrunch)

Industry

TikTok has been building a vocal contingent of young supporters amid growing uncertainty about the app’s future in the US. The company is working behind the scenes to turn its US creators into superstars, arming them with brand deals and introductions to Hollywood power brokers. Here’s Sarah Frier at Bloomberg:

The effort has given TikTok growing influence over American culture, which is not an accident, says Brett Bruen, who served as the White House director of global engagement in the Obama administration. He believes China and ByteDance are playing the long game. “It’s all a localization strategy, which allows you to not only achieve relevance but respect,” he said. “The most effective advocates for your company and for policy decisions are those local influencers and local partners.”

U.S. President Donald Trump has ordered ByteDance to sell its U.S. TikTok assets and he has threatened to ban the app if a deal doesn’t happen in coming weeks. Embedding the business deeply in society, while providing a livelihood for thousands of rising American stars will make it harder to uproot the app from the country. Creators say they haven’t been asked to make public statements in support of the app, but it comes naturally to some.

ByteDance is giving TikTok employees a half-month’s salary bonus in an attempt to calm the workforce as the company continues to negotiate a sale. The company said the money is meant to reward employees at a time of unprecedented economic and social upheaval. (Zheping Huang / Bloomberg)

Fan armies are harassing gay and trans people on TikTok. Cut it out, fan armies! (Taylor Lorenz / The New York Times)

A Facebook engineer quit today, saying they could “no longer stomach contributing to an organization that is profiting off hate in the US and globally.” It’s the latest resignation to come amid rising discontent within the company. (Read the resignation letter.) (Craig Timberg and Elizabeth Dwoskin / The Washington Post)

Facebook will now notify third-party developers if it finds a security vulnerability in their code. After a third-party developer is notified, they’ll have 21 days to respond and 90 days to fix the issues. (Zack Whittaker and Sarah Perez / TechCrunch)

Facebook gave employees with children extra time off to care for their kids during the pandemic. Some employees without kids thought it was unfair. (Daisuke Wakabayashi and Sheera Frenkel / The New York Times)

Tech companies are changing up their perks to account for remote working conditions. Some are mandating people take time off, and offering childcare support and mental health resources. (Arielle Pardes / Wired)

Amazon announced plans to expand to 25,000 workers in Bellevue, Washington. In a blog post, the company said new leases and office-tower development would increase its projected headcount by 10,000. (Matt Day / Bloomberg)

Twitter reenabled the ability to download archives of “Your Twitter Data,” nearly two months after shutting off the feature as a precaution against hacking. The data could give you insight into what teen hackers could have stolen during the notorious bitcoin scam in July. (Sean Hollister / The Verge)

Brands are paying Twitter users between $20 and $60 to respond to viral tweets with a mention of their company. The move sends people to their sites without having to pay higher fees to advertise on Twitter. (Michael Tobin / Bloomberg)

People are streaming chess games on Twitch. The game might seem like an unlikely contender for the digital era, but it has captured people’s attention. (Kellen Browning / The New York Times)

The Social Dilemma, a docu-drama that debuts on Netflix this week, takes a simplistic view of the evils of social media platforms. It treats social media as a totally unprecedented threat, dismissing comparisons with radio, television, or any previous mass medium. (Adi Robertson / The Verge)

The pandemic is exacerbating discrimination in the school system, particularly as it relates to suspensions and other disciplinary action. Experts are worried about an uptick in Zoom suspensions. (Aaricka Washington / The New York Times)

And finally...

Talk to us

Send us tips, comments, questions, and Section 230 reforms: casey@theverge.com and zoe@theverge.com.
