Title: Can Lawmakers Save Democracy from Big Tech?
The American public is gaining a greater understanding of the deleterious effects social media is having on society and of the unhealthy concentration of power that rests with tech giants like Facebook and Google. Yet it is unclear at this point whether our deeply polarized and often dysfunctional Congress has the skill to address the challenges that Big Tech presents to American democracy. As with many other vitally important issues lawmakers are currently grappling with, a failure to act promptly may have irreversible consequences that reverberate for decades.
The Evolution of Social Media
When social media began to explode just over a decade ago, the principal societal concern was how the collection and sharing of personal information over the internet would erode and possibly destroy the concept of privacy. Many have argued that privacy is a fundamental building block for human flourishing—allowing for the creation of trusted relationships, enabling creativity, and protecting a zone of autonomy where individuals can live their lives free from the scrutiny of outsiders. The advent of social media threatened these values by encouraging the voluntary sharing of mass quantities of information, photos, and videos over the internet and enabling behemoths like Facebook and Google to use, store, and distribute an unprecedented quantity of personal data to advance their corporate interests. The government had few powers to restrain social media, as laws developed in the 1970s to protect data privacy were generally inapplicable to it. Indeed, legislation in recent decades has been directed towards stimulating the growth of the industry, by restricting taxation on internet transactions and providing Big Tech companies immunity from liability for information posted on their platforms.
Social media has evolved far from its origins as a place to share photos or post about what you cooked for dinner. Most critically, these platforms have become the main vehicle for many people to receive news and engage in political discourse. Companies are no longer merely using the personal information they collect to sell targeted advertisements, but using sophisticated algorithms fueled by artificial intelligence to control information flows and impact human behavior.
The starkest example of this phenomenon came when Facebook tweaked the algorithm controlling its news feed after the 2020 election to promote more reliable news and tamp down misinformation about the election. While many focused on why Facebook acted at this time and on its effectiveness at combating election misinformation, it is far more remarkable that a private company has the power to make decisions, in secret, that can so strongly affect which information reaches the public. Even in the glory days of newspapers and television, no single media company had this much power to mold public opinion. And when the so-called mainstream media weighed in on political matters, it did so transparently, publishing stories that all could see and evaluate for bias, not by secretly adjusting a hidden algorithm. Facebook adjusted its algorithm after the 2020 election, but it also has the power to adjust its algorithm before an election in a way that might favor one candidate over another. No one outside the company would know. This is anti-democratic.
Big Tech has also accrued more power in recent years because the amount and type of data they collect has multiplied exponentially. In the early days of social media, companies had access to the data that their users voluntarily consented to provide them. Now, companies collect location data that emanates constantly from our devices, whether we are using them or not. They know everything we do while on their sites—every click, every hover over a post, and every keystroke.
Data also used to be collected primarily from our phones and computers. Now, many of us have dozens of devices that create data—smart TVs, home security systems, and voice-activated personal assistants—which also collect data even when we are not using them. It is estimated that by the end of 2021, there will be 46 billion devices connected to the global Internet of Things, with the expectation that this number will grow to 125 billion by 2031. Companies use these masses of data to understand individual consumers’ preferences even more accurately than individuals understand themselves. Their algorithms also develop “suggestions” of content to keep users active on a platform: the more viewing time, the more money that can be made.
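The engagement loop described above can be sketched in a few lines of Python. This is a purely hypothetical illustration of the incentive structure, not any platform's actual code; every function name, signal, and score here is invented for the example.

```python
# Hypothetical sketch of an engagement-maximizing feed ranker.
# Real platforms use vastly richer signals; the point is only that
# ranking by predicted viewing time rewards the most gripping content.

def predicted_watch_time(user_interests, post):
    """Estimate how long a user lingers on a post from the overlap
    between the user's inferred interests and the post's topics."""
    overlap = len(user_interests & post["topics"])
    return post["base_minutes"] * (1 + overlap)

def rank_feed(user_interests, posts):
    # The platform's incentive: order posts by expected viewing time,
    # since more viewing time means more advertising revenue.
    return sorted(posts,
                  key=lambda p: predicted_watch_time(user_interests, p),
                  reverse=True)

posts = [
    {"id": "news",    "topics": {"politics"},               "base_minutes": 1.0},
    {"id": "outrage", "topics": {"politics", "conspiracy"}, "base_minutes": 2.0},
    {"id": "recipe",  "topics": {"cooking"},                "base_minutes": 0.5},
]
user = {"politics", "conspiracy"}  # interests inferred from past behavior
feed = rank_feed(user, posts)
print([p["id"] for p in feed])  # most engaging content first
```

Note the design consequence: the sober news item and the conspiratorial post compete on predicted engagement alone, and the latter wins.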
Harvard professor Shoshana Zuboff has aptly described this business model as “surveillance capitalism” – a form of unrestrained capitalism that is far more dangerous than the massive trusts of the early twentieth century famously corralled by Theodore Roosevelt. Those trusts asserted economic and political power directly, but they had limited means to sway mass public opinion. Big Tech companies wield all three: economic power, political power, and the power to shape what the public believes.
Finally, the use of mass platforms to spread misinformation is causing disruption and political instability both in the United States and abroad. Whereas social media was once seen as a great equalizer enabling truth to confront political power, authoritarian regimes have learned to use these tools to spread propaganda, crush opposition, and assert greater control over their populations. Modern democracies, especially those with legal protections for free speech, are proving to be deeply vulnerable to false information and conspiracy theories that proliferate across social media. Politically polarized populations that endorse different versions of factual reality are finding it increasingly difficult to achieve consensus to address pressing problems. Support for the core principle of democracy is declining around the world.
Baby Steps on Regulating Big Tech
Legislators are just beginning to address the problems created in the early days of social media – let alone grapple with the myriad impacts these platforms are having on our democracy. Recent legislative activity has focused on protecting personal privacy by requiring platforms to provide users with more control over their personal data. The European Union led the way with the General Data Protection Regulation (GDPR), which went into effect in 2018. Comprehensive privacy laws have also been enacted in California and some other U.S. states. These frameworks all share common features: requiring companies to tell consumers how their information is being collected and used, providing them an opportunity to correct or delete this information, and giving consumers the right to prohibit the sale of their information to third parties. Congress has come close to enacting a comprehensive national privacy law in recent years, but partisan divisions have arisen over the law’s preemptive scope and enforcement mechanisms, so privacy regulation in the United States remains a patchwork.
While new privacy protections are much needed and almost universally supported, they are inadequate for the present situation. First, the evidence shows that the GDPR has had virtually no impact on the core business model of tech giants like Facebook, Google, and Amazon. While these companies have had to give consumers more power to control their data, the burden remains on the consumer to actively dive into each platform’s complicated privacy and security settings – time-consuming steps that many consumers simply do not bother with. Many observers believe any legislation passed by Congress will be even less restrictive than California’s privacy law, so new federal legislation is unlikely to curb Big Tech’s business model significantly. Second, tech giants are finding ways to subvert the regulations and collect personal information anyway. For example, when many Android users opted out of having their location data collected, Google made these settings more difficult to find and started collecting location data via non-Google apps.
But more importantly, greater consumer privacy protection only begins to scratch the surface relating to the outsized power that Big Tech is exercising in our modern society. It is legitimate to worry that if Congress goes through the arduous task of enacting comprehensive privacy legislation, its appetite for taking on Big Tech will be exhausted, and the other substantial problems Big Tech is causing will be left to fester.
Robust Action Is Needed
Comprehensive data privacy legislation needs to be the beginning, not the end, of an intensive process to regulate Big Tech and social media. There are signs of bipartisan support for such efforts – these ideas need to be developed, incorporated into legislation, and enacted.
First, Congress must address the inordinate economic and social power that many Big Tech companies have amassed. The U.S.’s century-old antitrust laws are intended to limit corporate conglomerates that harm consumers. Tech companies, however, give away services for free, so applying classic antitrust concepts to them is awkward. A suite of bills recently passed by the House Judiciary Committee is designed to curb the anti-competitive practices of Big Tech. The bills give regulators the power to break up companies when they grow so large that their business lines create an “irreconcilable conflict of interest.” These bills, which Big Tech is aggressively lobbying against, are a good start.
Second, Congress should require social media companies to be far more transparent with the algorithms that determine how news and other information is distributed. Legislative interference with editorial decisions of this nature is generally barred by the First Amendment. But under current law, social media platforms are not “publishers” of information and, as such, they do not enjoy the same First Amendment rights as content producers, like newspapers. Congress could require companies to disclose the nature of their algorithms and give the Federal Trade Commission (FTC) the power to prohibit algorithms that spread misinformation.
Congress should also consider legislation that would provide consumers with the ability to control the algorithms that determine the content they see in their news feeds. This year, Facebook introduced a feature that enabled users to filter the content that appears on their feeds – allowing them to identify “favorite” sites or sort their feed chronologically. Congress could mandate that news aggregator sites give consumers even greater ability to control the content to which they are exposed. Weakening Big Tech’s power to control information flows would enhance and protect democratic norms, especially during elections.
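The kind of user control described above can be made concrete with a short sketch. This is a hypothetical illustration in the spirit of the chronological and "favorites" controls mentioned in the text; the function names, modes, and sample sources are all invented for the example.

```python
# Hypothetical sketch of user-controlled feed ordering: the user, not
# an opaque engagement algorithm, chooses how posts are sorted.
from datetime import datetime

def order_feed(posts, mode="chronological", favorites=None):
    """Order a feed according to an explicit, user-selected rule."""
    favorites = favorites or set()
    if mode == "chronological":
        # Newest posts first, regardless of predicted engagement.
        return sorted(posts, key=lambda p: p["posted_at"], reverse=True)
    if mode == "favorites_first":
        # User-favorited sources first; newest first within each group.
        return sorted(posts, key=lambda p: (p["source"] not in favorites,
                                            -p["posted_at"].timestamp()))
    raise ValueError(f"unknown mode: {mode}")

posts = [
    {"source": "local_paper", "posted_at": datetime(2021, 6, 1, 9)},
    {"source": "meme_page",   "posted_at": datetime(2021, 6, 1, 12)},
    {"source": "local_paper", "posted_at": datetime(2021, 6, 1, 8)},
]
chrono = order_feed(posts, "chronological")
favs = order_feed(posts, "favorites_first", favorites={"local_paper"})
```

The design point is transparency: each ordering rule is simple enough to state in a sentence, which is exactly what a mandated user-control regime would require of platforms.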
Third, regulating the distribution of misinformation is a difficult problem, but Congress could strengthen existing regulatory frameworks. The FTC has the power to prohibit “unfair or deceptive acts or practices in or affecting commerce.” More robust FTC enforcement against social media companies on issues that go beyond commercial advertising, such as false information about COVID-19 or vaccinations, is one option. Congress could also require social media companies to maintain systems for labeling information that the companies determine to be blatantly false and harmful to an important public interest.
Another promising solution is for platforms to serve users who choose to view misinformation with targeted advertisements that “redirect” them to healthier content. These techniques have been used voluntarily by platforms to curb recruitment to violent extremist organizations. Congress may not be able to mandate that companies deploy these tools without running afoul of the First Amendment, but it could fund research and provide tax benefits to companies with robust anti-misinformation programs. Congress could also get at the root of the problem by mandating that public schools develop media literacy programs to educate citizens on how to distinguish truth from falsehood on the internet—something the public is currently ill-equipped to do.
Congress has been asleep at the wheel for far too long when it comes to protecting society from the harms flowing from social media and curbing the power of Big Tech. Enacting comprehensive privacy legislation would be an important, but insufficient, regulatory step. In light of the substantial problems caused by Big Tech, Congress needs to start thinking “Big” as well.
…
David Schanzer is a professor at the Duke Sanford School of Public Policy and the Director of the Triangle Center on Terrorism and Homeland Security.