The American public is gaining a greater understanding of the deleterious effects that social media is having on society and of the unhealthy concentration of power that rests with tech giants like Facebook and Google. Yet it is unclear at this point whether our deeply polarized and often dysfunctional Congress has the necessary will and skill to address the challenges that Big Tech presents to American democracy. As with many other vitally important issues lawmakers are currently grappling with, a failure to act promptly may have irreversible consequences that will reverberate for decades.
The starkest example of this phenomenon came when Facebook tweaked the algorithm controlling its news feed after the 2020 election to promote more reliable news and tamp down misinformation about the election. While many focused on why Facebook acted at this moment and on its effectiveness at combating election misinformation, it is far more remarkable that a private company has the power to make decisions, in secret, that can so strongly shape which information reaches the public. Even in the glory days of newspapers and television, no single media company had this much power to mold public opinion. And when the so-called mainstream media weighed in on political matters, it did so transparently, publishing stories that all could see and evaluate for bias, not by secretly adjusting a hidden algorithm. Facebook adjusted its algorithm after the 2020 election, but it also has the power to adjust its algorithm before an election in a way that might favor one candidate over another. No one outside the company would know. This is anti-democratic.
Big Tech has also accrued more power in recent years because the amount and variety of data these companies collect have multiplied exponentially. In the early days of social media, companies had access only to the data that their users voluntarily consented to provide. Now, companies collect location data that emanates constantly from our devices, whether we are using them or not. They know everything we do while on their sites: every click, every hover over a post, and every keystroke.
Data also used to be collected primarily from our phones and computers. Now, many of us have dozens of devices that create data, such as smart TVs, home security systems, and voice-activated personal assistants, which collect data even when we are not using them. It is estimated that by the end of 2021, there will be 46 billion devices connected to the global Internet of Things, with the expectation that this number will grow to 125 billion by 2031. Companies use these masses of data to understand individual consumers' preferences even more accurately than individuals understand themselves. Their algorithms also develop "suggestions" of content to keep users active on a platform: the more viewing time, the more money that can be made.
Harvard professor Shoshana Zuboff has aptly described this business model as "surveillance capitalism," a form of unrestrained capitalism that is far more dangerous than the massive trusts of the early twentieth century famously corralled by Theodore Roosevelt. Those trusts asserted economic and political power directly, but they had limited means to sway mass public opinion. Big Tech companies wield all three: economic power, political power, and the power to shape what the public sees and believes.
Legislators are just beginning to address the problems created in the early days of social media, let alone grapple with the myriad effects these platforms are having on our democracy. Recent legislative activity has focused on protecting personal privacy by requiring platforms to give users more control over their personal data. The European Union led the way with the General Data Protection Regulation (GDPR), which went into effect in 2018. Comprehensive privacy laws have also been enacted in California and some other U.S. states. These frameworks share common features: requiring companies to tell consumers how their information is being collected and used, providing consumers an opportunity to correct or delete this information, and giving consumers the right to prohibit the sale of their information to third parties. Congress has come close to enacting a national comprehensive privacy law in recent years, but partisan divisions have arisen over the law's preemptive scope and enforcement mechanisms, so privacy regulation in the United States remains a spotty patchwork.
More importantly, greater consumer privacy protection only begins to scratch the surface of the outsized power that Big Tech exercises in modern society. It is legitimate to worry that if Congress goes through the arduous task of enacting comprehensive privacy legislation, its appetite for taking on Big Tech will be exhausted, and the other substantial problems Big Tech is causing will be left to fester.
Robust Action Is Needed
Comprehensive data privacy legislation needs to be the beginning, not the end, of an intensive process to regulate Big Tech and social media. There are signs of bipartisan support for such efforts; these ideas need to be developed, incorporated into legislation, and enacted.
First, Congress must address the inordinate economic and social power that many Big Tech companies have amassed. The United States' century-old antitrust laws are intended to limit corporate conglomerates that harm consumers. Tech companies, however, give away services for free, so applying classic antitrust concepts to them is awkward. A suite of bills recently passed by the House Judiciary Committee is designed to curb the anti-competitive practices of Big Tech. The bills give regulators the power to break up companies that grow so large that their business lines create an "irreconcilable conflict of interest." These bills, which Big Tech is aggressively lobbying against, are a good start.
Second, Congress should require social media companies to be far more transparent with the algorithms that determine how news and other information is distributed. Legislative interference with editorial decisions of this nature is generally barred by the First Amendment. But under current law, social media platforms are not “publishers” of information and, as such, they do not enjoy the same First Amendment rights as content producers, like newspapers. Congress could require companies to disclose the nature of their algorithms and give the Federal Trade Commission (FTC) the power to prohibit algorithms that spread misinformation.
Congress should also consider legislation that would provide consumers with the ability to control the algorithms that determine the content they see in their news feeds. This year, Facebook introduced a feature that enabled users to filter the content that appears on their feeds – allowing them to identify “favorite” sites or sort their feed chronologically. Congress could mandate that news aggregator sites give consumers even greater ability to control the content to which they are exposed. Weakening Big Tech’s power to control information flows would enhance and protect democratic norms, especially during elections.
Third, regulating the distribution of misinformation is a difficult problem, but Congress could strengthen existing regulatory frameworks. The FTC has the power to prohibit "unfair or deceptive acts or practices in or affecting commerce." More robust FTC enforcement against social media companies on issues that go beyond commercial advertising, such as false information about COVID-19 or vaccinations, is one option. Congress could also require that social media companies maintain systems for labeling information that the companies determine to be blatantly false and harmful to an important public interest.
Congress has been asleep at the wheel for far too long when it comes to protecting society from the harms flowing from social media and curbing the power of Big Tech. Enacting comprehensive privacy legislation would be an important, but insufficient, regulatory step. In light of the substantial problems caused by Big Tech, Congress needs to start thinking “Big” as well.
David Schanzer is a professor at the Duke Sanford School of Public Policy and the Director of the Triangle Center on Terrorism and Homeland Security.
Image Credit: Creative Commons Attribution 4.0 International