Congress has had a hands-off approach to Big Tech. Will the AI arms race be any different?

Washington has long balked at regulating Silicon Valley. But as ChatGPT gains popularity, tech-savvy lawmakers are pushing Congress to put limits on artificial intelligence.

WASHINGTON — Senate Majority Whip Dick Durbin, D-Ill., acknowledged he’s “got a lot to learn about what’s going on” with artificial intelligence, saying it’s “very worrisome.”

Sen. Richard Blumenthal, D-Conn., a member of the Commerce, Science and Transportation Committee, called AI “new terrain and uncharted territory.”

And Sen. John Cornyn, R-Texas, said that while he gets classified briefings about emerging technology on the Intelligence Committee, he has just an “elementary understanding” of AI.

During the past two decades, Washington balked at regulating Big Tech companies as they grew from tiny startups to global powerhouses, from Google and Amazon to the social media giants Facebook and Twitter.

Lawmakers have long been wary of appearing to stifle innovation, but when they have stepped in, some have shown little understanding of the very technology they were seeking to regulate.

Now, artificial intelligence has burst on the scene, threatening to disrupt the American education system and economy. After last fall’s surprise launch of OpenAI’s ChatGPT, millions of curious U.S. users experimented with the budding technology, asking the chatbot to write poetry, rap songs, recipes, résumés, essays, computer code and marketing plans, as well as take an MBA exam and offer therapy advice.


ChatGPT’s seemingly unlimited potential has spurred what some technology watchers call an “AI arms race.” Microsoft just invested $10 billion in OpenAI. Alphabet, the parent company of Google, and the Chinese search giant Baidu are rushing out their own chatbot competitors. And a phalanx of new apps, including Lensa, is hitting the market, allowing users to create hundreds of AI-generated art pieces or images with the click of a button.

Leaders of OpenAI, based in San Francisco, have openly encouraged government regulators to get involved. But Congress has maintained a hands-off approach to Silicon Valley — the last meaningful legislation enacted to regulate technology was the Children’s Online Privacy Protection Act of 1998 — and lawmakers are once again playing catch-up to an industry that is moving at warp speed.

“The rapid escalation of the AI arms race that ChatGPT has catalyzed really underscores how far behind Congress is when it comes to regulating technology and the cost of their failure,” said Jesse Lehrich, a co-founder of the left-leaning watchdog Accountable Tech and a former aide to Hillary Clinton.

“We don’t even have a federal privacy law. We haven’t done anything to mitigate the myriad societal harms of Big Tech’s existing products,” Lehrich added. “And now, without having ever faced a reckoning and with zero oversight, these same companies are rushing out half-baked AI tools to try to capture the next market. It’s shameful, and the risks are monumental.”

'Enormous disruption'

Congress isn’t completely in the dark when it comes to AI. A handful of lawmakers — Democrats and Republicans alike — want Washington to play a greater role in the tech debate as experts predict that AI and automation soon could displace tens of millions of jobs in the U.S. and change how students are evaluated in the classroom.

And they are getting creative in communicating that message to Hill colleagues and constituents back home. In January, Rep. Jake Auchincloss, a millennial Democrat from Massachusetts, delivered what was believed to be the first floor speech written by AI, in this case, ChatGPT. The topic: his bill to create a U.S.-Israel artificial intelligence center.

The same month, Rep. Ted Lieu, D-Calif., one of four lawmakers with computer science or AI degrees, had artificial intelligence write a House resolution calling on Congress to regulate AI.

Rep. Ted Lieu, D-Calif., at the Capitol on Jan. 25, 2023. Michael Brochstein / Sipa USA via AP

“Let me just first say no staff members lost their jobs and no members of Congress lost their jobs when AI wrote this resolution,” Lieu joked in an interview. But he conceded: “There’s going to be enormous disruption from job losses. There’ll be jobs that will be eliminated, and then new ones will be created.

"Artificial intelligence to me is like the steam engine right now, which was really disruptive to society," Lieu added. "And in a few years, it’s going to be a rocket engine with a personality, and we need to be prepared for enormous disruptions that society is going to experience."

One lawmaker is heeding the call from colleagues to educate himself about fast-advancing technology: 72-year-old Rep. Don Beyer, D-Va. When he’s not attending committee hearings, voting on bills or meeting with constituents, Beyer has been using whatever free time he has to pursue a master’s degree in machine learning from George Mason University.

“The explosion of the availability of all knowledge to everybody on the planet is going to be a very good thing — and a very dangerous thing,” Beyer said in a joint interview with Lieu and Rep. Jay Obernolte, R-Calif., in the House Science, Space and Technology Committee hearing room.

Threats to national security and society

The danger with AI isn’t what has been portrayed in Hollywood, lawmakers said.

“What artificial intelligence is not is evil robots with red laser eyes, à la the Terminator,” said Obernolte, who earned a master’s degree in artificial intelligence from UCLA and founded the video game developer FarSight Studios. 

Instead, AI poses threats to national security and to society alike: deepfakes that could influence U.S. elections, facial recognition surveillance and the erosion of digital privacy.

“AI has this uncanny ability to think the same way that we do and to make some very eerie predictions about human behavior,” Obernolte said. “It has the potential to unlock surveillance states, like what China has been doing with it, and has the potential to expand social inequities in ways that are very damaging to us, to the fabric of our society.

“So those are the things that we’re focused on stopping.”

With the security threat from China growing, TikTok is also in Congress’ sights. Lawmakers banned the viral video-based app, owned by China’s ByteDance, from government devices in December. Sen. Josh Hawley, R-Mo., and other China hawks have pushed legislation that would ban TikTok entirely in the U.S., saying it could give the Chinese Communist Party access to Americans’ digital data.

But the bill hasn’t picked up sufficient support. On Tuesday, Hawley also introduced legislation that would bar children under 16 from using social media, along with a separate bill to commission a report on the harms social media imposes on kids.

House Speaker Kevin McCarthy, R-Calif., once a darling of Silicon Valley, has become one of the most vocal critics of Big Tech. He's working to have all House Intelligence Committee members, Republicans and Democrats, take a specially designed course at MIT focused on AI and quantum computing.

Some AI can “help us find cures and medicine,” McCarthy told reporters. But he said: “There’s also some threats out there. We’ve got to be able to work together and have all the knowledge.”

Lieu, an Air Force veteran, doesn’t think AI will ever gain consciousness: “No matter how smart your smart toaster is, at the end of the day it’s still a toaster.”

But Lieu warns that AI is being built into systems that could kill human beings. 

“You’ve got AI running in vehicles, they can go over 100 miles per hour, and if it malfunctions it could cause traffic accidents and kill people,” he said.

“You have AI in all sorts of different systems that if it goes wrong, it could affect our lives. And we need to make sure that there are certain limits or safety measures to make sure that AI, in fact, doesn’t do great harm.”