In a document released Monday, the Microsoft-backed tech advocacy group BSA is pushing for rules governing the use of artificial intelligence to be included in national privacy legislation.
BSA's membership consists of enterprise software companies, including Adobe, IBM, and Oracle. Microsoft has become a leading player in AI through its recent investment in OpenAI, the maker of the generative AI chatbot ChatGPT. Google, the other major U.S. player in AI, is not a member.
The push comes as many members of Congress, including Senate Majority Leader Chuck Schumer, D-N.Y., have stressed the importance of ensuring that regulation keeps pace with rapidly advancing AI technology.
The group is advocating for four key protections:
- Congress should make clear requirements for when companies must evaluate the designs or impact of AI.
- Those requirements should kick in when AI is used to make “consequential decisions,” which Congress should also define.
- Congress should designate an existing federal agency to review company certifications of compliance with the rules.
- Companies should be required to develop risk-management programs for high-risk AI.
“We're an industry group that wants Congress to pass this legislation,” said Craig Albright, BSA's vice president of U.S. government relations. “So we're trying to bring more attention to this opportunity. We think it hasn't received as much attention as it could or should.”
“It's not meant to answer every question about AI, but it's an important answer to an important AI question, and one that Congress can act on,” Albright said.
The rise of easy-to-use, advanced AI tools such as ChatGPT has accelerated the push to place limits on the technology. While the U.S. has introduced a voluntary risk management framework, many have called for stronger protections. Meanwhile, Europe is working to finalize its AI Act, which would impose safety requirements on high-risk AI.
Albright said that as Europe and China press ahead with plans to regulate and promote new technologies, U.S. officials need to ask themselves whether digital transformation is “an important part of an economic agenda.”
“If it is, we should have a national agenda for digital transformation,” he said. That agenda would include rules for AI, national privacy standards, and a strong cybersecurity policy.
In a message to Congress shared with CNBC, the group suggested that the American Data Privacy and Protection Act, a bipartisan privacy bill that advanced out of the House Energy and Commerce Committee last Congress, is the right vehicle for new AI rules.
Though the bill still has a long road to becoming law, BSA said it already contains the right framework for the kind of national AI guardrails the government should put in place.
BSA hopes that when the ADPPA is reintroduced, as many expect, it will include new provisions on AI. Albright said the group has discussed its ideas with the House Energy and Commerce Committee, which he said maintains an “open door” policy toward a wide range of stakeholders.
A spokesperson for the House Energy and Commerce Committee did not immediately respond to a request for comment.
Even though the ADPPA still faces hurdles on its path to becoming law, Albright noted that passing any bill is hard work.
“What we're saying is, you can get this done,” Albright said. “This is something where people from both parties can agree. So our hope is that this will be included in whatever legislation they pass.”