
Thursday, March 25, 2021

Congressional Hearing on "Disinformation Nation: Social Media's Role in Promoting Extremism and Misinformation"



The House Subcommittee on Communications and Technology and the Subcommittee on Consumer Protection and Commerce will hold a joint public hearing today on "Social Media's Role in Promoting Extremism and Misinformation." Because this topic is of supreme importance to public discourse and free speech, Macon Media has embedded a live video player of the hearing, which begins at noon today. Macon Media has also included several documents released by the subcommittees so that members of the public have more complete information on what is happening. Since not everyone can watch it live, the material will be archived here for members of the public to refer to later.


DAY SPONSOR

Macon Media is being underwritten today by Franklin Health and Fitness, home of #ResultsForEveryone. Try FHF with a FREE 3-Day Guest Pass! To claim your pass, and to learn more about Franklin Health and Fitness, visit franklinhealthandfitness.com.



HEARING NOTICE

Subcommittee on Consumer Protection and Commerce

SUBJECT: Joint Subcommittee Hearing on “Disinformation Nation: Social Media's Role in Promoting Extremism and Misinformation”

The Subcommittee on Communications and Technology and the Subcommittee on Consumer Protection and Commerce of the Committee on Energy and Commerce will hold a virtual, fully remote joint hearing on Thursday, March 25, 2021, at 12 p.m. (EDT). The hearing is entitled, “Disinformation Nation: Social Media's Role in Promoting Extremism and Misinformation.” Witnesses will be by invitation only.

Livestream of Hearing



MEMORANDUM
March 22, 2021

To: Subcommittee on Communications and Technology and Subcommittee on Consumer Protection and Commerce Members and Staff

Fr: Committee on Energy and Commerce Staff

Re: Hearing on “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation”

On Thursday, March 25, 2021, at 12 p.m. via Cisco Webex online video conferencing, the Subcommittee on Communications and Technology and the Subcommittee on Consumer Protection and Commerce will hold a joint hearing entitled, “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation.”

I. THE ROLE OF SOCIAL MEDIA PLATFORMS IN PROMOTING MISINFORMATION AND EXTREMIST CONTENT

A. Background

Facebook, Google, and Twitter operate some of the largest and most influential online social media platforms, reaching billions of users across the globe. As a result, they are among the largest platforms for the dissemination of misinformation and extremist content.1

These platforms maximize their reach—and advertising dollars—by using algorithms or other technologies to promote content and make content recommendations that increase user engagement.2

Users of these platforms often engage more with questionable or provocative content, so the algorithms often elevate or amplify disinformation and extremist content.3

Facebook, Google, and Twitter also have access to vast swaths of user data that allow them to microtarget content to users who would be more susceptible to disinformation and extremist content.4

B. The Spread and Consequences of Misinformation and Extremist Content

Facebook, Google, and Twitter have long come under fire for their role in the dissemination and amplification of misinformation and extremist content. For instance, since the beginning of the coronavirus disease of 2019 (COVID-19) pandemic, all three platforms have spread substantial amounts of misinformation about COVID-19.5

At the outset of the COVID-19 pandemic, disinformation regarding the severity of the virus and the effectiveness of alleged cures for COVID-19 was widespread.6

More recently, COVID-19 disinformation has misrepresented the safety and efficacy of COVID-19 vaccines.7

For years, Facebook, Google, and Twitter have also distributed election disinformation that appeared intended either to improperly influence or to undermine the outcomes of free and fair elections.8

During the November 2016 election, social media platforms were used by foreign governments to disseminate information to manipulate public opinion.9

This trend continued during and after the November 2020 election, often fomented by domestic actors, with rampant disinformation about voter fraud, defective voting machines, and premature declarations of victory.10

Additionally, Facebook executives were repeatedly warned that extremist content was thriving on their platform, and that Facebook’s own algorithms and recommendation tools were responsible for the appeal of extremist groups and divisive content.11

Similarly, since 2015, videos from extremists have proliferated on YouTube, and YouTube’s algorithm often guides users from more innocuous or alternative content to more fringe channels and videos.12

Twitter has been criticized for being slow to stop white nationalists from organizing, fundraising, recruiting, and spreading propaganda on its platform.13

The consequences of disinformation and extremist content on these platforms are apparent. Many experts agree that disinformation about COVID-19 has greatly intensified an already deadly public health crisis.14

Experts also acknowledge that misinformation about the 2020 presidential election and extremist content have further divided the nation and provoked an insurrection.15

C. Facebook, Google, and Twitter’s Response to Misinformation and Extremist Content

All three platforms have a policy against demonstrably false COVID-19 information or COVID-19 misinformation that can cause harm.16

Facebook and Twitter also demote or label certain misinformation, such as misinformation about social and political issues.17

In September 2020, Facebook adopted new policies related to the November election such as banning political advertisements and labeling allegations of election fraud.18 After the January 6, 2021 U.S. Capitol riots, Facebook began removing disinformation that could lead to further violence.19 Last month, Facebook updated its policies on COVID-19 misinformation from initially removing false information “that could lead to imminent physical harm” to banning misinformation about COVID-19 and threatening to ban users, groups, or pages that repeatedly spread misinformation.20

In October 2020, YouTube announced it would remove videos containing misinformation about COVID-19 vaccines, expanding on its prohibition of COVID-19 misinformation that contradicted local health authorities or the World Health Organization.21 In December 2020, YouTube announced it would begin removing content that falsely alleged widespread election fraud, but that policy would not apply to videos uploaded prior to the announcement.22

After the U.S. Capitol riots, YouTube announced that it would suspend accounts that promoted videos of false allegations about the 2020 presidential election.23

In advance of the November 2020 election, Twitter expanded the use of its warning labels to limit the spread of election misinformation.24 In December 2020, Twitter announced it would label misinformation about COVID-19 vaccines.

These platforms often ramp up their efforts against misinformation and extremist content in response to social and political pressure.25

Despite these efforts, recent studies have found that COVID-19 misinformation and extremist content continue to thrive on these platforms.26

II. WITNESSES

The following witnesses have been invited to testify:

Mark Zuckerberg
Chairman and Chief Executive Officer
Facebook

Sundar Pichai
Chief Executive Officer
Google

Jack Dorsey
Chief Executive Officer
Twitter


References

1 See NYU Stern Center for Business and Human Rights, Tackling Domestic Disinformation: What the Social Media Companies Need to Do (Apr. 3, 2019).
2 See id.
3 See id.
4 See How Data Privacy Laws Can Fight Fake News, Just Security (Aug. 15, 2019).
5 […] Misinformation, CNBC (Jan. 25, 2021); Surge of Virus Misinformation Stumps Facebook and Twitter, New York Times (Mar. 8, 2020).
6 Id.
7 ‘We Are Talking About People’s Lives’: Dire Warnings of Public Health Crisis as COVID-19 Misinformation Rages, USA Today (Dec. 9, 2020); Misinformation Messengers Pivot from Election Fraud to Peddling Vaccine Conspiracy Theories, New York Times (Dec. 16, 2020); Normalization of Vaccine Misinformation on Social Media Amid COVID ‘a Huge Problem,’ ABC News (Dec. 10, 2020); COVID Vaccine: Disappearing Needles and Other Rumors Debunked, BBC News (Dec. 20, 2020).
8 Election 2020: Facebook, Twitter And YouTube Wrestle With Misinformation, CNET (Nov. 11, 2020); NYU Stern Center for Business and Human Rights, Disinformation and the 2020 Election: How the Social Media Industry Should Prepare (Sept. 1, 2019).
9 The Propaganda Tools Used by Russians to Influence the 2016 Election, New York Times (Feb. 16, 2018).
10 'Not A Whole Lot Of Innovation': 2020 Election Misinformation Was Quite Predictable, Experts Say, USA Today (Nov. 17, 2020); Did Social Media Actually Counter Election Misinformation?, Associated Press (Nov. 4, 2020).
11 Facebook Executives Shut Down Efforts To Make The Site Less Divisive, Wall Street Journal (May 26, 2020); Facebook Knew Calls For Violence Plagued ‘Groups,’ Now Plans Overhaul, Wall Street Journal (Jan. 31, 2020).
12 Cornell University, Auditing Radicalization Pathways On YouTube (Dec. 4, 2019).
13 Twitter Suspends More Than 50 White Nationalist Accounts, NBC News (July 10, 2020); Twitter Still Has A White Nationalist Problem, HuffPost (May 30, 2019).
14 The Union of Concerned Scientists, We Just Witnessed the Dangers of the Autocratic Disinformation Playbook (Jan. 8, 2021) (blog.ucsusa.org/genna-reed/dangers-of-autocraticdisinformation-playbook).
15 Id.
16 On Social Media, Only Some Lies Are Against The Rules, Consumer Reports (Aug. 13, 2020).
17 Id.
18 Facebook, New Steps to Protect the US Elections (Sept. 3, 2020) (press release).
19 Facebook, Our Preparations Ahead of Inauguration Day (Jan. 11, 2021) (press release).
20 Facebook, An Update on Our Work to Keep People Informed and Limit Misinformation About COVID-19 (Feb. 8, 2021) (press release).
21 YouTube Bans Covid-19 Vaccine Misinformation, Forbes (Oct. 14, 2020); On Social Media, Only Some Lies Are Against the Rules, Consumer Reports (Aug. 13, 2020).
22 YouTube Official Blog, Supporting The 2020 U.S. Election (Dec. 9, 2020) (blog.youtube/news-and-events/supporting-the-2020-us-election/).
23 YouTube Will Start Penalizing Channels That Post Election Misinformation, TechCrunch (Jan. 7, 2021).
24 Twitter Blog, Additional Steps We’re Taking Ahead of the 2020 US Election (Oct. 9, 2020) (blog.twitter.com/en_us/topics/company/2020/2020-election-changes.html).
25 See The Technology 202: Democrats Ratchet Up Pressure on Silicon Valley to Tackle Vaccine Misinformation, Washington Post (Jan. 26, 2021); Social Media Platforms Face A Reckoning Over Hate Speech, Associated Press (June 29, 2020); EU Piles Pressure On Social Media Over Fake News, Reuters (Apr. 26, 2018).
26 Cybersecurity for Democracy, Far-Right News Sources On Facebook More Engaging (Mar. 3, 2021); Pew Research Center, How Americans Navigated the News in 2020: A Tumultuous Year in Review (Feb. 22, 2021); Anti-Defamation League, Exposure to Alternative & Extremist Content on YouTube (Feb. 12, 2021); The German Marshall Fund, Engagement With Deceptive Outlets Higher on Facebook Today than Run-Up to 2016 Election (Oct. 12, 2020).


Witnesses

Mark Zuckerberg
Chairman and Chief Executive Officer
Facebook

Hearing Before the United States House of Representatives
Committee on Energy and Commerce
Subcommittees on Consumer Protection & Commerce and Communications & Technology

March 25, 2021

Testimony of Mark Zuckerberg
Facebook, Inc.

I. Introduction
Chairs Pallone, Schakowsky, and Doyle, Ranking Members McMorris Rodgers, Latta, and Bilirakis, and members of the Committee,

I want to start by extending my deepest condolences to the families of the Capitol police officers who lost their lives in the wake of January 6 and my appreciation to the many officers who put themselves at risk to protect you. Their bravery stands as an example to us all. My heart also goes out to those of you who lived through the awful events of that day. The Capitol attack was a horrific assault on our values and our democracy, and Facebook is committed to assisting law enforcement in bringing the insurrectionists to justice.

I look forward to discussing the role that misinformation and disinformation play in our country’s information ecosystem and the work Facebook is doing to reduce harmful content on our platform. Facebook’s mission is to give people the power to build community and bring the world closer together. Our services enable more than three billion people around the world to stay connected with friends and family, discover what’s going on in the world, and entertain and express themselves. We build products people use to share ideas, have fun, offer support, connect with neighbors, celebrate milestones, promote small businesses and non-profits, and discuss important topics, including family, careers, health, politics, and social issues.

It’s important to note that the vast majority of what people see on Facebook is neither political nor hateful. Political posts make up only about 6 percent of what people in the United States see in their News Feed, and the prevalence of hateful content people see on our service is less than 0.08 percent. While we work hard to prevent abuse of our platform, conversations online will always reflect the conversations taking place in living rooms, on television, and in text messages and phone calls across the country. Our society is deeply divided, and we see that on our services too. We are committed to keeping people safe on our services and to protecting free expression, and we work hard to set and enforce policies that meet those goals. We will continue to invest extraordinary resources into content moderation, enforcement, and transparency.

II. Our Efforts to Combat Misinformation

People want to see accurate information on Facebook, and so do we. That’s why we have made fighting misinformation and providing people with authoritative information a priority for the company. We have recalibrated our products and built global partnerships to combat misinformation on a massive scale.

We created an industry-leading fact-checking program. We work with 80 independent third-party fact-checkers certified through the non-partisan International Fact-Checking Network to curb misinformation on Facebook and Instagram. If content is rated false by one of these third-party fact-checkers, we put a warning label on it. And based on one fact-check, we’re able to kick off similarity detection methods that identify duplicates of debunked stories. When content is rated false, we significantly reduce its distribution; on average, this cuts future views by more than 80 percent. If people do try to share the content, we notify them of additional reporting, and we also notify people if content they have shared in the past is later rated false by a fact-checker. Group admins are notified each time a piece of content rated false by fact-checkers is posted in their Group, and they can see an overview of this in the Group Quality tool. We use information from fact-checkers to improve our technology so we can identify misinformation faster in the future.

We also work to reduce the incentives for people to share misinformation to begin with. Since a lot of the misinformation that spreads online is financially motivated spam, we focus on disrupting the business model behind it. We take action against Pages that repeatedly share or publish content rated false, including reducing their distribution and, if necessary, removing their ability to monetize. And we’ve enhanced our recidivism policies to make it harder to evade our enforcement. We’ve also taken steps to reduce clickbait and updated our products so people see fewer posts and ads in News Feed that link to low-quality websites.
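
The testimony above describes a pipeline: a fact-checker rates a story false, similarity detection finds near-duplicate posts, and matching posts are labeled and demoted in distribution. The sketch below is a purely illustrative, simplified rendering of that flow; it is not Facebook's code, and the similarity threshold, the demotion factor standing in for the "more than 80 percent" reduction, and all function and field names are assumptions for the example.

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.9   # assumed cutoff for treating a post as a near-duplicate
DEMOTION_FACTOR = 0.2        # assumed stand-in for "cuts future views by more than 80 percent"

def is_near_duplicate(post_text: str, debunked_text: str) -> bool:
    """Rough stand-in for production similarity detection."""
    ratio = SequenceMatcher(None, post_text.lower(), debunked_text.lower()).ratio()
    return ratio >= SIMILARITY_THRESHOLD

def apply_fact_check(posts, debunked_story):
    """Label and demote posts that match a story rated false by a fact-checker."""
    for post in posts:
        if is_near_duplicate(post["text"], debunked_story):
            post["label"] = "False information, checked by independent fact-checkers"
            post["rank_score"] *= DEMOTION_FACTOR  # reduced distribution in ranking
    return posts

# Toy usage with made-up posts
posts = [
    {"text": "Miracle cure X eliminates the virus overnight!", "rank_score": 1.0},
    {"text": "Local clinic extends weekend vaccination hours.", "rank_score": 1.0},
]
for p in apply_fact_check(posts, "Miracle cure X eliminates the virus overnight"):
    print(p)
```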

As well as taking steps to fight misinformation, we also use our platform to proactively connect people to authoritative information. We have directed over 2 billion people to our Covid-19 Information Center, and over 140 million people to our Voting Information Center. This is an important component of our work to build a healthier information ecosystem.

As one of the leading platforms where people share information and express themselves, we face an ongoing challenge from misinformation. With millions of Americans using our services every day, there will always be things we miss. However, I believe we do more to address misinformation than any other company, and I am proud of the teams and systems we have built. Below is an overview of this work in two important contexts: Covid-19 and the 2020 presidential election.

A. Covid-19 and Vaccine Misinformation

Since Covid-19 was declared a global public health emergency, Facebook has been working to connect people to authoritative information from health experts and keep harmful misinformation about Covid-19 from spreading on our apps. As part of our efforts, we have focused on:

• Promoting reliable information by launching a Covid-19 Information Center which we showed at the top of the Facebook News Feed and on Instagram, and that we direct people to when they search for information about Covid-19. We have connected over 2 billion people to authoritative information through this resource.

• Combating Covid-19 misinformation by removing over 12 million pieces of false content, including from foreign leaders; barring entities that have repeatedly shared false information; removing exploitative ads spreading panic about the virus or mistruths about cures for financial gain; and promoting authoritative and science-based search results.

• Providing aggregated data on symptoms and travel patterns to public health officials, researchers, and nonprofits to help them calibrate the public health response.

• Supporting newsgathering by investing $100 million to assist local news and journalists and funding a $1 million grant program to support fact-checkers covering the virus.

In April 2020, we started showing messages in News Feed to people who liked, commented on, or reacted to posts with Covid-19 misinformation that we later removed for violating our policy. We’ve redesigned these notifications to make them more personalized and to more clearly connect people with authoritative information. Now people will see a thumbnail of the post and more information about where they saw it, how they engaged with it, why it was false, and why we removed it. People will then be able to see more facts in our Covid-19 Information Center and take other actions such as unfollowing the Page or Groups that shared this content. We are also continuing to improve search results on our platforms. When people search for vaccine or Covid-19 related content on Facebook, we promote relevant, authoritative results and provide third-party resources to connect people to expert information about vaccines.

In the Appendix are some of the alerts people see on Facebook that are designed to keep them informed and limit misinformation about Covid-19.

1. Covid-19 Vaccines

In addition to our work to combat misinformation about Covid-19 generally, we’re running the largest worldwide campaign to promote authoritative information about Covid-19 vaccines specifically by:

• Providing $120 million in ad credits to help health ministries, non-profits, and UN agencies reach billions of people around the world with Covid-19 vaccine and preventive health information.

• Providing training and marketing support to help governments and health organizations move quickly and reach the right people with the latest vaccine information.

• Providing data to inform effective vaccine delivery and educational efforts to build trust in Covid-19 vaccines.

• Helping people find where and when they can get vaccinated, similarly to how we helped people find information about voting during elections.

We’re also focused on removing misinformation on Facebook and Instagram about Covid-19 and vaccines. In December, we began removing false claims about Covid-19 vaccines that could lead to imminent harm, including false claims about the safety, efficacy, ingredients, or side effects of the vaccines. Last month, we expanded the list of false claims we will remove to include additional debunked claims about Covid-19 and vaccines following consultations with leading health organizations, including the World Health Organization. We already reject these claims in advertisements and prohibit any ads that discourage vaccines.

Groups, Pages, and accounts on Facebook and Instagram that repeatedly share these debunked claims may be removed altogether. In some instances, we are also requiring Group admins to temporarily approve all posts from other admins or members who have violated our Covid-19 policies. Claims about Covid-19 or vaccines that do not violate these policies may remain eligible for review by our independent third-party fact-checkers. If a claim is then identified as false, it will be labeled and will be demoted in News Feed.

B. Election Misinformation and Support for the Democratic Process

Facebook stands for giving people a voice, and it was important to us that everyone could make their voice heard during the election. While we were only a small piece of the broader election ecosystem, we announced a series of policies in advance to help protect the integrity of the election and support our democratic process.

As part of this effort, we worked hard to combat misinformation and voter suppression. We partnered with election officials to remove false claims about polling conditions and displayed warnings on more than 150 million pieces of content after review by our independent third-party fact-checkers. We put in place strong voter suppression policies prohibiting explicit or implicit misrepresentations about how or when to vote as well as attempts to use threats related to Covid-19 to scare people into not voting. We also removed calls for people to engage in poll watching that used militarized language or suggested that the goal was to intimidate, exert control, or display power over election officials or voters, and we stopped recommending civic Groups.

As the ballots were counted, we deployed additional measures that we announced in advance of the election to help people stay informed:

• We partnered with Reuters and the National Election Pool to provide reliable information about election results in the Voting Information Center and notified people proactively as results became available. We added labels to posts about voting by candidates from both parties and directed people to reliable information about results.

• We attached an informational label to content that sought to delegitimize the outcome of the election or discuss the legitimacy of voting methods.

• We strengthened our enforcement against militias, conspiracy networks, and other groups to help prevent them from using our platform to organize violence or civil unrest in the period after the election.

Based on what we learned in 2016 about the risk of coordinated online efforts by foreign governments and individuals to interfere in our elections, we invested heavily in our security systems and monitored closely for any threats to the integrity of elections, at home or abroad. We invested in combating influence operations on our platforms, and since 2017, we have found and removed over 100 networks of accounts for engaging in coordinated inauthentic behavior. We also blocked ads from state-controlled media outlets in the US to provide an extra layer of protection against various types of foreign influence in the public debate ahead of the election.

Finally, we proactively supported civic engagement on our platform. We ran the largest voting information campaign in American history. Based on conversion rates we calculated from a few states we partnered with, we estimate that we helped 4.5 million people register to vote across Facebook, Instagram, and Messenger—and helped about 100,000 people sign up to be poll workers. We launched a Voting Information Center to connect people with reliable information on deadlines for registering and voting and details about how to vote by mail or vote early in person, and we displayed links to the Voting Information Center when people posted about voting on Facebook. Since it launched, 140 million people have visited the Voting Information Center on Facebook and Instagram. We are encouraged that more Americans voted in 2020 than ever before and that our platform helped people take part in the democratic process.

III. Our Efforts to Address Polarization and Divisive Content

Facebook’s mission is to bring people together, and we stand firmly against hate and the incitement of violence. We have industry-leading policies that prohibit such content on our platforms, and we invest billions of dollars and work tirelessly to improve and enforce these policies. We are proud of the work we have undertaken to address harmful content on Facebook, from our robust content review and enforcement program to our industry-leading Community Standards Enforcement Report, which includes hard data that we hope can inform public discourse and policymaking about these issues.

A. Efforts to Keep Hate and Violence Off Our Platform

We have taken major steps to keep our community safe. While our enforcement efforts are not perfect and there is always more work to be done, we have built industry-leading policies, teams and systems to keep hate and violence off our platform.

Our Dangerous Organizations and Individuals policy prohibits content calling for or advocating violence, and we ban organizations and individuals that proclaim a violent mission. We remove language that incites or facilitates violence, and we ban Groups that proclaim a hateful and violent mission from having a presence on our apps. We also remove content that represents, praises, or supports them. We believe this policy has long been the broadest and most aggressive in the industry.

In August 2020, we expanded this policy further to address militarized social movements and violence-inducing conspiracy networks such as QAnon. To date, we have banned over 250 white supremacist groups and 890 militarized social movements, and we have been enforcing our rules that prohibit QAnon and militia groups from organizing on our platform. We have also continued to enforce our ban on hate groups, including the Proud Boys and many others.

Moving quickly to find and remove dangerous organizations such as terrorist and hate groups takes significant investment in both people and technology. That’s why we have tripled the size of our teams working in safety and security since 2016 to over 35,000 people. Our team of experts includes 300 professionals who work exclusively or primarily on preventing terrorist and violent content from appearing on our platform and quickly identifying and removing it if it does. These professionals possess expertise ranging from law enforcement and national security to counterterrorism intelligence and radicalization.

Four years ago, we developed automated techniques to detect content related to terrorist organizations such as ISIS, al Qaeda, and their affiliates. We’ve since expanded these techniques to detect and remove content related to other terrorist and hate groups. We are now able to detect and review text embedded in images and videos, and we’ve built media-matching technology to find content that’s identical or near-identical to photos, videos, text, and audio that we’ve already removed. Our work on hate groups focused initially on those that posed the greatest threat of violence at the time; we’ve now expanded this to detect more groups tied to different hate-based and violent extremist ideologies. In addition to building new tools, we’ve also adapted strategies from our counterterrorism work, such as leveraging off-platform signals to identify dangerous content on Facebook and implementing procedures to audit the accuracy of our AI’s decisions over time.
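
As a rough illustration of the media-matching idea mentioned above (finding content identical or near-identical to material already removed), the sketch below compares a digest of an incoming item against digests of previously removed items. This is a simplified assumption for exposition only; production systems rely on perceptual hashes and learned representations rather than exact digests, and every name here is hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match digest; real media matching would use perceptual or learned hashes."""
    return hashlib.sha256(data).hexdigest()

# Digests of items previously removed for policy violations (toy examples)
removed_fingerprints = {
    fingerprint(b"previously removed extremist image bytes"),
    fingerprint(b"previously removed propaganda audio bytes"),
}

def matches_removed_content(item: bytes) -> bool:
    """Flag an upload whose digest matches already-removed material."""
    return fingerprint(item) in removed_fingerprints

print(matches_removed_content(b"previously removed extremist image bytes"))  # True
print(matches_removed_content(b"an original, unrelated upload"))             # False
```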

B. Actions to Address Content That Violates Community Standards in Groups

People turn to Facebook Groups to connect with others who share their interests and to build community. This has become especially valuable during the Covid-19 crisis, which has made connecting with others both more necessary and more challenging than ever. However, we recognize the importance of keeping violent and hateful content out of Groups and have taken significant steps toward that goal.

We remove Groups that represent QAnon, even if they contain no violent content. And we do not allow militarized social movements—such as militias or groups that support and organize violent acts amid protests—to have a presence on our platform. In addition, last year we temporarily stopped recommending US civic or political Groups, and earlier this year we announced that policy would be kept in place and expanded globally. We’ve instituted a recommendation waiting period for new Groups so that our systems can monitor the quality of the content in the Group before determining whether the Group should be recommended to people. And we limit the number of Group invites a person can send in a single day, which can help reduce the spread of harmful content from violating Groups.

We also take action to prevent people who repeatedly violate our Community Standards from creating new Groups. Our recidivism policy stops the administrators of a previously removed Group from creating another Group similar to the one removed, and an administrator or moderator who has had Groups taken down for policy violations cannot create any new Groups for a period of time. Posts from members who have violated any Community Standards in a Group must be approved by an administrator or moderator for 30 days following the violation. If administrators or moderators repeatedly approve posts that violate our Community Standards, we’ll remove the Group.

Our enforcement effort in Groups demonstrates our commitment to keeping content that violates these policies off the platform. In September, we shared that over the previous year we removed about 1.5 million pieces of content in Groups for violating our policies on organized hate, 91 percent of which we found proactively. We also removed about 12 million pieces of content in Groups for violating our policies on hate speech, 87 percent of which we found proactively. When it comes to Groups themselves, we will remove an entire Group if it repeatedly breaks our rules or if it was set up with the intent to violate our standards. We took down more than one million Groups for violating our policies in that same time period.

IV. Updating the Rules of the Internet

In my testimony above, I laid out many of the steps we have taken to balance important values including safety and free expression in democratic societies. We invest significant time and resources in thinking through these issues, but we also support updated Internet regulation to set the rules of the road. One area that I hope Congress will take on is thoughtful reform of Section 230 of the Communications Decency Act.

Over the past quarter-century, Section 230 has created the conditions for the Internet to thrive, for platforms to empower billions of people to express themselves online, and for the United States to become a global leader in innovation. The principles of Section 230 are as relevant today as they were in 1996, but the Internet has changed dramatically. I believe that Section 230 would benefit from thoughtful changes to make it work better for people, but identifying a way forward is challenging given the chorus of people arguing—sometimes for contradictory reasons—that the law is doing more harm than good.

Although they may have very different reasons for wanting reform, people of all political persuasions want to know that companies are taking responsibility for combatting unlawful content and activity on their platforms. And they want to know that when platforms remove harmful content, they are doing so fairly and transparently.

We believe Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content. Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades their detection—that would be impractical for platforms with billions of posts per day—but they should be required to have adequate systems in place to address unlawful content.

Definitions of an adequate system could be proportionate to platform size and set by a third party. That body should work to ensure that the practices are fair and clear for companies to understand and implement, and that best practices don’t include unrelated issues like encryption or privacy changes that deserve a full debate in their own right.

In addition to concerns about unlawful content, Congress should act to bring more transparency, accountability, and oversight to the processes by which companies make and enforce their rules about content that is harmful but legal. While this approach would not provide a clear answer to where to draw the line on difficult questions of harmful content, it would improve trust in and accountability of the systems and address concerns about the opacity of process and decision-making within companies.

Ultimately it is up to Congress and the new Administration to chart the path forward. Facebook stands ready to be a productive partner in the discussion about Section 230 reform—as well as in important and urgent conversations about updating the rules for privacy, elections, and data portability.

V. Conclusion

Every day we see people using our services to come together and do good—forming supportive communities, raising money for good causes, drawing attention to important issues, creating opportunities for themselves, or simply being there for one another in times of need. Facebook is successful because people around the world have a deep desire to connect and share, not to stand apart and fight. This reaffirms our belief that connectivity and togetherness are ultimately more powerful ideals than division and discord—and that technology can be part of the solution to the deep-seated challenges in our society. We will continue working to ensure our products and policies support this ambition.

APPENDIX: Facebook Efforts to Combat Covid-19 Misinformation


Sundar Pichai
Chief Executive Officer
Google

Written Testimony of Sundar Pichai
Chief Executive Officer, Alphabet
United States House Committee on Energy and Commerce
“Disinformation Nation: Social Media's Role in Promoting Extremism and Misinformation”
March 25, 2021

Chairman Doyle, Ranking Member Latta, Chairwoman Schakowsky, Ranking Member Bilirakis, Full Committee Chair Pallone and Full Committee Ranking Member McMorris Rodgers, thank you for the opportunity to appear before you today.

This month, the World Wide Web turned 32. Over the past three decades, we’ve seen the web inspire the best in society by expanding knowledge, powering businesses, and providing opportunities for expression, discovery, and connection—no matter who you are, or where you live.

I joined Google in 2004 because I believed the internet was the best way to bring the benefits of technology to more people, and I believe that still today. I am proud that Americans can turn to Google for help in moments that matter, whether they’re looking for COVID vaccine information on Search and Maps, working and learning from home using Google Workspace or Google Classroom, learning new skills on YouTube, or using our digital tools to grow their businesses. In 2020, our products helped 2 million US businesses, publishers, and others generate $426 billion in economic activity. And we helped billions of people find comfort and connection in an otherwise awful year.

Beyond our products, we were proud to announce last week our plans to invest over $7 billion in data centers and offices across 19 states, and create at least 10,000 full-time Google jobs in the U.S. That’s in addition to the 84,000 employees we currently employ across the country. And according to an Oxford Economics report, YouTube's creative ecosystem supported the equivalent of 345,000 full time jobs in 2019.

We are energized by the opportunity to help people at scale, and we are humbled by the responsibility that comes with it. We have thousands of people focused on everything from cyber attacks, to data privacy, to today’s topic: misinformation. Our mission at Google is to organize the world’s information and make it universally accessible and useful. Core to that mission is providing trustworthy content and opportunities for free expression across our platforms, while limiting the reach of harmful misinformation.

It’s a large, dynamic challenge without easy answers. More than 500 hours of video are uploaded to YouTube every minute, and approximately 15% of the searches on Google each day are new to us. Eighteen months ago most people hadn’t heard of COVID-19; sadly, coronavirus was the top trending search of 2020.

Responding to the events of January 6th

Staying ahead of these challenges and keeping users safe and secure on our platforms is a top priority. We saw how high those stakes can be on January 6, 2021, when a mob stormed the U.S. Capitol. This was an unprecedented and tragic event, and Google strongly condemns these violent attacks on our democracy, and mourns the lives lost.

In response, our teams worked to raise up authoritative news sources across our products. Teams at YouTube quickly took down any live streams or videos that violated our incitement to violence policies, and on January 7th, we began issuing strikes to those in violation of our presidential election integrity policy. In the Play Store, we removed apps for violating our policies on inciting violence. We also prohibited advertisers from running ads that referenced the 2020 election or topics related to the Capitol riots in the scope of our Sensitive Events policy.

Doing our part to contribute to the integrity of the US 2020 election

We were able to act quickly because of the investments we made to prepare for the 2020 elections. Last year, teams across Google and YouTube worked around the clock to contribute to election preparedness, by helping voters find authoritative information about the election; by working with campaigns to equip them with best-in-class security features and helping them connect with voters; and by protecting our platforms from abuse.

Helping voters find authoritative information on our services

This U.S. election cycle saw all-time highs in searches on Google for civics-related topics. Anticipating that need, we worked to launch features that would help people find the information to participate in the democratic process, including how to register and how to vote in their states.

Consistent with our approach to prior election cycles, we showed “how to register” and “how to vote” reminders to all our U.S. users directly on Google Search, Maps and YouTube. These reminders were seen over 2 billion times across our products. As the election neared, we helped people find polling and ballot drop off locations: from mid-October through Election Day, we added more than 125,000 voting locations in Google Maps. Across our products, these features were seen nearly 500 million times. Finally, starting on Election Day, we worked with the Associated Press to provide real-time election results for relevant searches on Google. These results had over six times more views in 2020 than in 2016.

Similarly, on YouTube, we launched an election results information panel that showed on top of search results and under videos with election-related content. It pointed to our election results page on Google, and over time, we expanded it to include an additional link pointing to a page on the US Cybersecurity and Infrastructure Security Agency (CISA) website that debunked incorrect claims made about the integrity of the elections. Once the safe harbor deadline for state certification passed, we updated this YouTube Election Results Information Panel again to point to the National Archives Office of the Federal Register page of record for the 2020 electoral college vote. Collectively, our election information panels on YouTube have been shown over 8 billion times.

Working with campaigns

We also helped campaigns and elected officials effectively use Google and YouTube products to reach voters and enhance their election security. As part of our Civics Outreach Virtual Training Series, Google held 21 training sessions for over 900 candidates, campaigns, public officials, and nonprofit leaders. Overall, we held 45 group and individual trainings to help more than 2,900 election workers learn to use Google tools to amplify their message and better connect with voters through events like digital town halls, debates and virtual campaign rallies.

In addition, as a part of our Election Cybersecurity Initiative with the University of Southern California’s Annenberg School, nearly 4,000 elected officials, secretaries of state, campaign staffers, political party representatives, and state election directors in all 50 states received training on ways to secure their information and protect their campaigns against cyberattacks.

At the start of the 2020 election season, we partnered with Defending Digital Campaigns (DDC), a nonprofit and nonpartisan organization, to give any eligible federal campaign access to free Titan Security Keys—the strongest form of two-factor authentication. This collaboration is a part of our Advanced Protection Program, which protects high-risk individuals, such as election officials, campaigns, and journalists, who have access to high-visibility and sensitive information. In the lead-up to the 2020 elections, DDC distributed more than 10,000 Titan Security Key bundles to more than 140 U.S. federal campaigns. We recently expanded our support for DDC to provide eligible campaigns and political parties, committees, and related organizations, at both the federal and state levels, with knowledge, training and resources to defend themselves from security threats.

Protecting our platforms from abuse

In the years leading up to the 2020 election, we made numerous enhancements to protect the integrity of elections around the world and better secure our platforms. Among them, we introduced strict policies and processes for identity verification for advertisers who run election-related advertising on our platform; we launched comprehensive political ad libraries in the U.S. and in other countries around the world; we developed and implemented policies to prohibit election-related abuse such as voter suppression and deceptive practices on platforms like YouTube, Google Ads, Google Maps and Google Play; our Threat Analysis Group (TAG) launched a quarterly bulletin to provide regular updates on our work to combat coordinated influence operations across our platforms and flagged phishing attempts against the presidential campaigns; and we worked closely with government agencies, including the FBI’s Foreign Influence Task Force, and other companies to share information around suspected election interference campaigns.

On YouTube, throughout 2020, we identified and removed content that was misleading voters about where or how to vote, to help ensure viewers saw accurate information about the upcoming election. After December 8th, which marked the "safe harbor" deadline for states to certify their election results, we began, in accordance with our Presidential Election Integrity policy, to remove content uploaded on or after December 9th that misled people by alleging that widespread fraud or errors changed the outcome of the 2020 U.S. presidential election. In addition, we continued to enforce our broader policies – for instance, from October to December 2020, we removed 13,000 YouTube channels for promoting violence and violent extremism; 89% of videos removed for violating our violent extremism policy were taken down before they had 10 views.

This work was in addition to improvements in the ranking systems we use to reduce the spread of harmful misinformation on YouTube: in January 2019, we announced that we would begin reducing recommendations of borderline content or videos that could misinform viewers in harmful ways but that do not violate YouTube Community Guidelines. Since then, we've launched numerous changes to reduce recommendations of borderline content and harmful misinformation, and we continue to invest in this work: our models review more than 100,000 hours of videos every day to find and limit the spread of borderline content.

Our work is never done; we continue to learn, improve, and evolve our policies from one election cycle to the next. That principle has guided our approach to new and evolving challenges, including COVID-19 misinformation.

Addressing the challenge of COVID-19 misinformation

This past year we’ve also focused on providing quality information during the pandemic. Since the outbreak of COVID-19, teams across Google have worked to provide quality information and resources to help keep people safe, and to provide public health officials, scientists, and medical professionals with tools to combat the pandemic. We’ve launched more than 200 new products, features and initiatives—including the Exposure Notification API to assist contact tracing—and have pledged over $1 billion to assist our users, customers and partners around the world.

Today, when people search on Google for information about COVID-19 vaccines in the United States, we present them with a list of authorized vaccines in their location, with information on each individual vaccine from the FDA or CDC, as relevant. We also provide them with information about vaccination locations near them in Google Search and Google Maps, when that information is available. On YouTube, we launched COVID-19 information panels directing viewers to the CDC’s information about the virus and, later on, about vaccines. These information panels are featured on the YouTube homepage, and on videos and in search results about the pandemic. Since March 2020, they have been viewed over 400 billion times. And we continue to work with YouTube creators to pair them with health experts who can get the facts to a wide range of audiences – we promote this content in our “ask the experts” feature.

Another way we’ve been helping is by offering over $350 million in Ad Grants to help more than 100 government agencies and non-profit organizations around the world run critical public service announcements (PSAs) about COVID-19. Grantees can use these funds throughout 2021 for things like vaccine education and outreach campaigns.

In parallel to our efforts to elevate authoritative information about the pandemic and vaccines, we have worked across our services to combat harmful misinformation about these topics. Across our products, we’ve had long-standing policies prohibiting harmful and misleading medical or health-related content. When COVID-19 hit, our Trust and Safety team worked to stop a variety of abuses stemming from the pandemic, including phishing attempts, malware, dangerous conspiracy theories, and fraud schemes. We took quick action to remove content that promoted inaccurate or misleading claims about cures, masks, and vaccines; our teams have removed 850,000 videos related to dangerous or misleading COVID-19 medical information, and in total, we blocked nearly 100 million COVID-related ads throughout 2020. Our teams have also been planning for new threats and abuse patterns related specifically to COVID-19 vaccines. For example, in October, we expanded our COVID-19 medical misinformation policy on YouTube to remove content about vaccines that contradicts consensus from health authorities, such as the Centers for Disease Control or the World Health Organization (WHO).

Developing Clear and Transparent Policies

We were able to act quickly and decisively because of the significant investments we have made over years, not only to make information useful and accessible, but also to remove and reduce the spread of harmful misinformation. Across all of this work, we strive to have clear and transparent policies and enforce them without regard to political party or point of view. We work to raise up authoritative sources, and reduce the spread of misinformation in recommendations and elsewhere. Teams across the company work in a variety of roles to help develop and enforce our policies, monitor our platforms for abuse, and protect users from everything from account hijackings and disinformation campaigns to misleading content and inauthentic activity. And we don’t do this work alone; we work closely with experts to stay ahead of emerging threats.

Supporting innovation in journalism and the development of new business models

At Google, we believe that a vibrant news industry is vital to tackling misinformation on a societal scale. We invested millions to support COVID-19-related fact-checking initiatives, providing training or resources to nearly 10,000 journalists. In addition to helping journalists tackle misinformation, we have long been committed to supporting newsrooms and journalists in the United States and abroad. Over the past 20 years, we have collaborated closely with the news industry and provided billions of dollars to support the creation of quality journalism in the digital age.

We share a strong interest in supporting a diverse and sustainable ecosystem of quality news providers. Our products are designed to elevate high quality journalism and connect consumers to diverse news sites — from global media companies to smaller digital startups.

We are proud that our services help people all over the world find relevant, authoritative news about issues that matter to them. Each month, people click through from Google Search and Google News results to publishers' websites more than 24 billion times — that’s over 9,000 clicks per second. This free traffic helps news publishers increase their readership, build trust with readers and earn money through advertising and subscriptions. We also recently announced a new investment in Google News Showcase and committed $1 billion over the next three years to pay publishers to produce editorially curated content experiences and for limited free user access to paywalled content. In less than one year, we have been able to partner with over 500 publications across more than a dozen countries, spanning global, national, regional, metro and local publications.

Our commitment to the future of news extends beyond our products and services. We launched the Google News Initiative to support journalistic innovation and the emergence of new business models. Since 2018, we have committed $61 million in funding to support more than two thousand news partners across the United States and Canada. As part of this initiative, we have also helped more than 447,200 journalists develop knowledge and skills in digital journalism through in person and online trainings through the Google News Lab. And when the pandemic hit, we turned our resources to support local news organizations and fact-checkers — contributing $10.6 million to over 1,800 local newsrooms across the U.S. and Canada through our Journalism Emergency Relief Fund and committing $6.5 million to combat Covid-19 misinformation. We look forward to continuing this work with our partners in the news industry to ensure a thriving and healthy future for journalism.

The role of Section 230 in fighting misinformation

These are just some of the tangible steps we’ve taken to support high quality journalism and protect our users online, while preserving people’s right to express themselves freely. Our ability to provide access to a wide range of information and viewpoints, while also being able to remove harmful content like misinformation, is made possible because of legal frameworks like Section 230 of the Communications Decency Act.

Section 230 is foundational to the open web: it allows platforms and websites, big and small, across the entire internet, to responsibly manage content to keep users safe and promote access to information and free expression. Without Section 230, platforms would either over-filter content or not be able to filter content at all. In the fight against misinformation, Section 230 allows companies to take decisive action on harmful misinformation and keep up with bad actors who work hard to circumvent their policies.

Thanks to Section 230, consumers and businesses of all kinds benefit from unprecedented access to information and a vibrant digital economy. Today, more people have the opportunity to create content, start a business online, and have a voice than ever before. At the same time, it is clear that there is so much more work to be done to address harmful content and behavior, both online and offline.

Regulation has an important role to play in ensuring that we protect what is great about the open web, while addressing harm and improving accountability. We are, however, concerned that many recent proposals to change Section 230—including calls to repeal it altogether—would not serve that objective well. In fact, they would have unintended consequences—harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.

We might better achieve our shared objectives by focusing on ensuring transparent, fair, and effective processes for addressing harmful content and behavior. Solutions might include developing content policies that are clear and accessible, notifying people when their content is removed and giving them ways to appeal content decisions, and sharing how systems designed for addressing harmful content are working over time. With this in mind, we are committed not only to doing our part on our services, but also to improving transparency across our industry. I look forward to sharing more about our approach with you today, and working together to create a path forward for the web’s next three decades.



Jack Dorsey
Chief Executive Officer
Twitter

WRITTEN TESTIMONY OF TWITTER CEO JACK DORSEY (@JACK)
U.S. HOUSE COMMITTEE ON ENERGY & COMMERCE
MARCH 25, 2021

Twitter’s purpose is to serve the public conversation. While much has changed in the world since we started fifteen years ago, we believe our mission is more important than ever.

Every day Twitter grapples with complex considerations on how to address extremism and misinformation. How do we prevent harm, while also safeguarding free expression and the right of diverse individuals to express a range of views? How do we develop policies that can be built at scale and adapt rapidly, especially given diverse regulatory models around the world? What role should our company play in determining these pivotal questions? What information should we rely on when making decisions? How do we earn the trust of those who use our service? These are even harder questions in an increasingly polarized world, which has consequently heightened concerns about information sources. Quite simply, a trust deficit has been building over the last several years, and it has created uncertainty — here in the United States and globally. That deficit does not just impact the companies sitting at the table today but exists across the information ecosystem and, indeed, across many of our institutions.

This Committee has expressed interest in what we are doing to combat “falsehoods about the COVID-19 vaccine” and “debunked claims of election fraud.” We have COVID-19 and vaccine misinformation policies, as well as a COVID information hub. Our civic integrity and platform manipulation policies are available on our Help Center, along with information on our bans on state-controlled media advertising and political advertising. As a follow-up to our preliminary post-election update, we are conducting a review of the 2020 U.S. election, the findings of which we intend to share.

Our efforts to combat misinformation, however, must be linked to earning trust. Without trust, we know the public will continue to question our enforcement actions. I believe we can earn trust by focusing on the following: enhancing transparency, ensuring procedural fairness, enabling algorithmic choice, and strengthening privacy.

Building & Earning Trust

Every day, millions of people around the world Tweet hundreds of millions of Tweets, with one set of rules that applies to everyone and every Tweet. We strive to implement policies impartially and at scale. We built our policies primarily around the promotion and protection of three fundamental human rights — freedom of expression, safety, and privacy.

At times, these rights can conflict with one another. As we develop, implement, and enforce our policies, we must balance these rights. Additionally, our policies must be adaptable to changes in behavior and evolving circumstances. This is why we must be transparent, embrace procedural fairness and choice, and protect privacy.

Transparency

While Twitter has made significant progress with respect to transparency, we know that we can do more to strengthen our efforts. People who use our service should understand our processes — how potential violations of our rules are reported and reviewed, how content-related decisions are made, and what tools are used to enforce these decisions. Publishing answers to questions like these will continue to make our internal processes both more robust and more accountable to the people we serve.

Twitter’s open nature means our enforcement actions are plainly visible to the public, even when we cannot reveal the private details behind individual accounts that have violated our rules. We use a combination of machine learning and human review to assess potential violations of the Twitter Rules, and we take a behavior-first approach, meaning we look at how accounts behave before we review the content they are posting. When an account owner breaks our Rules and is required to delete a Tweet, we have built better in-app notices that give both the account that reported the Tweet and the account that posted it additional information about our actions.

In January, we published our biannual update to the Twitter Transparency Center, with additional data about actions we have taken to disrupt state-backed information operations, to enforce our COVID-19 policy, and to take action on Tweets that violate our Rules.

In addition to ensuring transparency around our decisions, we are seeking ways to enhance transparency around how we develop our content moderation policies. In recent months, for example, there have been increased questions about how we should address policy violations by world leaders. As a result, we are re-examining our approach to world leaders and soliciting feedback from the public. The feedback period is currently open, and our survey will be available in more than a dozen languages to ensure a global perspective is reflected.
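
To make the behavior-first idea concrete, here is a minimal, purely illustrative Python sketch. The signals, weights, and thresholds are hypothetical and this is not Twitter's actual enforcement pipeline; it only shows how behavioral signals about an account might be scored before any content is reviewed.

# Illustrative sketch only -- not Twitter's actual enforcement pipeline.
# It shows the general idea of a "behavior-first" approach: score an
# account's recent behavior before deciding whether its content goes to
# automated screening, human review, or no action.

from dataclasses import dataclass

@dataclass
class AccountBehavior:
    reports_received: int      # user reports against the account (hypothetical signal)
    prior_violations: int      # past confirmed rule violations (hypothetical signal)
    burst_posting_rate: float  # posts per minute over a recent window (hypothetical signal)

def behavior_risk_score(b: AccountBehavior) -> float:
    """Combine behavioral signals into a rough 0..1 risk score (made-up weights)."""
    score = 0.0
    score += min(b.reports_received, 10) * 0.05
    score += min(b.prior_violations, 5) * 0.1
    score += 0.25 if b.burst_posting_rate > 5.0 else 0.0
    return min(score, 1.0)

def route_for_review(b: AccountBehavior) -> str:
    """Route content based on account behavior, before looking at the content itself."""
    risk = behavior_risk_score(b)
    if risk >= 0.7:
        return "human_review"      # highest-risk accounts get a person in the loop
    if risk >= 0.3:
        return "automated_screen"  # medium risk: machine-learning classifiers first
    return "no_action"

if __name__ == "__main__":
    suspicious = AccountBehavior(reports_received=8, prior_violations=2, burst_posting_rate=12.0)
    print(route_for_review(suspicious))  # -> human_review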

Procedural Fairness (Accountability & Reliability)

Twitter is focused on advancing procedural fairness in our decision-making. We strive to give people an easy, clear way to appeal decisions we make that they think are not right. Mistakes in enforcement, whether made by a human or an automated system, are inevitable, which is why we strive to make appeals easier. We believe that all companies should be required to provide those who use their service with straightforward processes to appeal decisions that impact them.
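
As a rough illustration of the kind of appeals process described here (not Twitter's actual system), the following Python sketch models an enforcement decision that can be appealed and then upheld or overturned, with every step recorded for accountability. The rule label and state names are hypothetical.

# Illustrative sketch only -- a minimal model of an appeals workflow,
# not Twitter's actual system. A decision starts as "enforced"; the
# affected person can appeal; a reviewer then upholds or overturns it.

from dataclasses import dataclass, field
from typing import List

VALID_TRANSITIONS = {
    "enforced": {"appealed"},
    "appealed": {"upheld", "overturned"},
}

@dataclass
class EnforcementDecision:
    tweet_id: int
    rule: str                        # e.g. "covid-19-misinformation" (hypothetical label)
    state: str = "enforced"
    history: List[str] = field(default_factory=list)

    def transition(self, new_state: str, reason: str) -> None:
        """Move the case forward, recording every step for accountability."""
        if new_state not in VALID_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.history.append(f"{self.state} -> {new_state}: {reason}")
        self.state = new_state

if __name__ == "__main__":
    case = EnforcementDecision(tweet_id=42, rule="covid-19-misinformation")
    case.transition("appealed", "account owner disputes the label")
    case.transition("overturned", "human reviewer found no violation")
    print(case.state, case.history)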

Algorithmic Choice

We believe that people should have transparency or meaningful control over the algorithms that affect them. We recognize that we can do more to provide algorithmic transparency, fair machine learning, and controls that empower people. The machine learning teams at Twitter are studying techniques and developing a roadmap to ensure our present and future algorithmic models uphold a high standard when it comes to transparency and fairness.

We also provide people with control over the algorithms that shape their core experience on Twitter. We have invested heavily in building systems that organize content to show individuals relevant information that improves their experience. With 192 million people using Twitter daily last quarter, across dozens of languages and countless cultural contexts, we rely on machine learning algorithms to help organize content by relevance and provide a better experience for the people who use our service.
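
A minimal sketch of what algorithmic choice can look like in practice follows. The relevance signals and weights are hypothetical and this is not Twitter's production ranking system; it only shows how the same set of Tweets can be ordered reverse-chronologically or by relevance, at the user's request.

# Illustrative sketch only -- not Twitter's production ranking system.
# The same timeline can be ordered reverse-chronologically or by a
# simple relevance score, and the person using the service picks which.

from dataclasses import dataclass
from typing import List

@dataclass
class Tweet:
    text: str
    posted_at: float       # Unix timestamp
    likes: int
    author_followed: bool  # hypothetical relevance signal

def reverse_chronological(timeline: List[Tweet]) -> List[Tweet]:
    """Show the latest Tweets first, with no ranking model involved."""
    return sorted(timeline, key=lambda t: t.posted_at, reverse=True)

def relevance_ranked(timeline: List[Tweet]) -> List[Tweet]:
    """Order by a simple, transparent relevance score (hypothetical weights)."""
    def score(t: Tweet) -> float:
        return t.likes * 1.0 + (50.0 if t.author_followed else 0.0)
    return sorted(timeline, key=score, reverse=True)

def build_timeline(timeline: List[Tweet], mode: str) -> List[Tweet]:
    """'mode' is the user's choice -- the point of algorithmic choice."""
    return relevance_ranked(timeline) if mode == "relevance" else reverse_chronological(timeline)

if __name__ == "__main__":
    tweets = [
        Tweet("older but popular", posted_at=1000.0, likes=400, author_followed=False),
        Tweet("newest", posted_at=3000.0, likes=2, author_followed=False),
        Tweet("from someone I follow", posted_at=2000.0, likes=10, author_followed=True),
    ]
    print([t.text for t in build_timeline(tweets, "latest")])
    print([t.text for t in build_timeline(tweets, "relevance")])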

Privacy

We have always believed that privacy is a fundamental human right. We believe that individuals should understand the personal data that is shared with companies and have the tools to help them control their information. To help people better understand their options, we have created the Twitter Privacy Center, which acts as a hub for information about our privacy and data protection work.

We are constantly working to improve the controls people have to manage their personal data and experience on Twitter. In addition, we continue to support efforts to pass strong federal privacy legislation to safeguard important privacy rights.

Innovations to Address Misinformation

We also recognize that addressing harms associated with misinformation requires innovative solutions. Content moderation in isolation is not scalable, and simply removing content fails to meet the challenges of the modern Internet. This is why we are investing in two experiments — Birdwatch and Bluesky. Both are aimed at improving our efforts to counter harmful misinformation.

Birdwatch

In January, we launched the “Birdwatch” pilot, a community-based approach to misinformation. Birdwatch is expected to broaden the range of voices involved in tackling misinformation and to streamline the real-time feedback people already add to Tweets. We hope that engaging diverse communities in this way will help address current deficits in trust for all. We expect data related to Birdwatch to be publicly available in the Birdwatch Guide, including the code for the algorithms that power it.
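
To illustrate the community-notes idea, here is a minimal Python sketch. This is not the actual Birdwatch scoring algorithm, which Twitter expects to publish in the Birdwatch Guide; the thresholds here are invented. It simply shows contributors rating notes attached to a Tweet, with a note surfaced only once enough raters find it helpful.

# Illustrative sketch only -- not the actual Birdwatch scoring algorithm.
# Contributors attach notes to a Tweet and rate each other's notes; a note
# is surfaced alongside the Tweet once enough raters find it helpful.

from collections import defaultdict
from typing import Dict, List, Tuple

# (tweet_id, note_text) -> list of helpfulness ratings (True = helpful)
ratings: Dict[Tuple[int, str], List[bool]] = defaultdict(list)

def add_rating(tweet_id: int, note: str, helpful: bool) -> None:
    """A community contributor rates a note on a Tweet."""
    ratings[(tweet_id, note)].append(helpful)

def surfaced_notes(tweet_id: int, min_ratings: int = 5, min_helpful_ratio: float = 0.8) -> List[str]:
    """Return notes shown alongside the Tweet (hypothetical thresholds)."""
    surfaced = []
    for (tid, note), votes in ratings.items():
        if tid != tweet_id or len(votes) < min_ratings:
            continue
        if sum(votes) / len(votes) >= min_helpful_ratio:
            surfaced.append(note)
    return surfaced

if __name__ == "__main__":
    for helpful in (True, True, True, True, True, False):
        add_rating(42, "This claim is missing context; see the linked source.", helpful)
    print(surfaced_notes(42))  # 5 of 6 raters found it helpful -> note is surfaced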

Bluesky

Twitter is also funding Bluesky, an independent team of open source architects, engineers, and designers, to develop open and decentralized standards for social media. This team has already created an initial review of the ecosystem around protocols for social media to aid this effort. Bluesky will eventually allow Twitter and other companies to contribute to and access open recommendation algorithms that promote healthy conversation and ultimately provide individuals greater choice. These standards will support innovation, making it easier for startups to address issues like abuse and hate speech at a lower cost. Since these standards will be open and transparent, our hope is that they will contribute to greater trust on the part of the individuals who use our service. This effort is emergent, complex, and unprecedented, and therefore it will take time. However, we are excited by its potential and will continue to provide the necessary exploratory resources to push this project forward.
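
The following sketch, with entirely hypothetical names, suggests the general shape such an open recommendation-algorithm standard could take: a published contract that any provider, Twitter or a third party, could implement, and that a client could swap at the person's request. Bluesky's actual standards were still being designed at the time of this testimony.

# Illustrative sketch only -- hypothetical names, not a Bluesky specification.
# An open, published interface for recommendation algorithms that any
# provider can implement and any client can choose to apply.

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    uri: str
    text: str
    replies: int

class RecommendationAlgorithm(ABC):
    """Open contract any provider can implement and publish."""
    @abstractmethod
    def rank(self, posts: List[Post]) -> List[Post]:
        ...

class HealthyConversationRanker(RecommendationAlgorithm):
    """Hypothetical third-party ranker that favors posts drawing replies."""
    def rank(self, posts: List[Post]) -> List[Post]:
        return sorted(posts, key=lambda p: p.replies, reverse=True)

def render_feed(posts: List[Post], algorithm: RecommendationAlgorithm) -> List[str]:
    """The client chooses which published algorithm to apply."""
    return [p.uri for p in algorithm.rank(posts)]

if __name__ == "__main__":
    posts = [Post("post/1", "hot take", replies=0), Post("post/2", "good question", replies=12)]
    print(render_feed(posts, HealthyConversationRanker()))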

Conclusion

As we look to the future, I agree with this Committee that technology companies have work to do to earn trust from those who use our services. For Twitter, that means tackling transparency, procedural fairness, algorithmic choice, and privacy. I think that this approach will be a growing trend across all companies and organizations, both big and small. I look forward to your questions.




CROWDFUNDING OR DAY SPONSORSHIP OPPORTUNITIES


If you receive value from what Macon Media provides to the community, please consider becoming a supporter and contribute at least a dollar a month. Those who support Macon Media with at least a dollar a month receive early access to video of some events and meetings before they are made public on the website. Videos and news involving public safety are not subject to early access.



Become a Patron!



Or, if you prefer PayPal, try PayPal.me/MaconMedia


Published at 11:00am on Thursday, March 25, 2021
