Congress Warned Of Possible “Bomb In A China Shop” With AI

On Tuesday, the Senate Subcommittee on Privacy, Technology, and the Law held its first hearing on the proliferation of artificial intelligence technology, which Chairman Richard Blumenthal referred to as a “bomb in a china shop,” The Epoch Times reported.

In his opening statement, Senator Blumenthal warned that the dangers from artificial intelligence are “real” and “present,” and called for “sensible safeguards” that would not obstruct innovation in AI technology. Blumenthal said the purpose of the hearing was to “demystify” artificial intelligence technology and hold it accountable, adding that Congress must act to “write the rules of AI” before it is too late.

Testifying before the subcommittee were OpenAI CEO Sam Altman; IBM chief privacy and trust officer Christina Montgomery; and Gary Marcus, professor emeritus of psychology and neural science at New York University. The three witnesses answered questions on possible federal oversight of artificial intelligence.

In his testimony, Altman suggested that the federal government consider “licensing and testing requirements” for companies looking to develop and release AI models that exceed a certain “threshold of capabilities,” according to NBC News.

The OpenAI CEO also said that the public may eventually adapt to the onslaught of false media and information generated by artificial intelligence, noting that when Photoshopped images first “came onto the scene,” there was a time when people were fooled, but they “pretty quickly developed an understanding” that the images were not authentic. He said the same would be true of AI-generated information, “but on steroids.”

Gary Marcus told the subcommittee that the artificial intelligence industry is close to leveraging massive amounts of personal data to develop “hyper-targeting of advertising.” He said that, at this point, AI technology is “partway there” and will eventually “certainly get there.”

Marcus called for the creation of a Cabinet-level department to oversee artificial intelligence, warning that the risks from AI are large.