
New Washington AI Network Podcast Hosted by Tammy Haddad Features
Linda Moore, President and CEO of TechNet
Moore Says the European Union’s AI Legislation Is “Very Vague” and Argues for a Cohesive National AI Policy in the United States
“The EU has tended to … want to get there first rather than taking their time to get it right,” says Moore. “I do feel that the EU AI Act is still very broad and, in some ways, very vague.”
“While D.C. is moving slowly, the states are filling the void … they’re passing privacy laws at a fast clip,” says Moore about federal versus state regulations. “Having vastly different privacy laws from state to state is not only confusing consumers, but it also is confusing businesses.”
Washington, DC – In a new episode of the Washington AI Network Podcast, host Tammy Haddad interviews Linda Moore, the president and CEO of TechNet, an advocacy group for technology companies and executives promoting the growth of the innovation economy. TechNet recently launched the AI for America program – a $25 million initiative to educate the public about how artificial intelligence is being used to improve lives, grow the economy, and keep us safe.
Moore discusses the evolving landscape of AI regulation, criticizes the EU’s AI legislation as “very vague,” and argues for balanced federal and state policies in the U.S.
Linda Moore on the EU’s AI Act:
“The EU has tended to – on a lot of things, including privacy as well – to want to get there first rather than taking their time to get it right, I would say. So, tech companies and also academia are working with the EU just like they’re working with U.S. policymakers to try to put in place really good regulations. But I do feel that the EU AI Act is still very broad and, in some ways, very vague. And so large companies who have a lot of resources will probably be able to navigate that easier and better than smaller and medium-sized companies. We really need to think about the smaller and medium-sized companies and the impact on them.”
Linda Moore on whether there is a clear path forward on AI policy in the U.S.:
“No, but I think that it’s really heartening to see that U.S. policymakers are really taking their time to understand the technology before regulating it. They have really taken their time at the White House in working with the companies and securing voluntary commitments while also then unveiling the executive order last fall.”
Linda Moore on President Biden’s AI Executive Order:
“Wide-reaching. The most sweeping executive order in recent memory. Almost every agency was tasked with a lot of work and really tight deadlines. And so, we’re engaging with the agencies now, as they send us requests for comment on the best practices and policies to put in place. And I find that they’re being incredibly thoughtful. And it’s very impressive how much they’re undertaking and how quickly they’re turning it around.”
“President Biden and his team really deserve a lot of credit for stepping forward and understanding they needed to take the reins of putting in place AI regulations and securing commitments from the companies while Congress took a longer time to come to a point when they can really pass legislation. There are a couple of pieces of legislation that might pass this year … But in the meantime, the agencies in the White House are making terrific progress on a lot of areas.”
Linda Moore on what states are doing in terms of AI legislation:
“At TechNet, we do 50-state advocacy as well. And so, we are right now tracking today, at this moment, 297 AI bills. 297. It’ll eclipse 300 by the end of today, I’m confident of that. Last year, we tracked 69 AI bills, okay? A lot of the legislative sessions haven’t even convened yet. Does that give you a sense of the volume?
“So, while D.C. is moving slowly, the states are filling the void. And a lot of the legislation they’re looking at addresses the use of AI in government and creates task forces to study the impact and the risks of AI. Some of them, though, are dealing with deepfakes, visual and voice, especially in the election context, and with assigning liability for that sort of thing. And our belief is that it should be on the creator of the content. And then a few bills have been introduced in just a few states that are further-reaching, more in line with the enormity of the task that NIST within the Department of Commerce is taking on.”
Linda Moore on states passing privacy laws in the absence of federal legislation:
“Fifteen states have passed comprehensive data privacy laws since 2018. Forty-six states have considered their own privacy laws. There will definitely be more states who pass their own privacy laws this year. So that’s an example of states seeing the void that Washington has left for them on not dealing with a really important issue, and I don’t expect there to be a federal privacy law this year.
“We have a coalition [United for Privacy] that we’ve stood up across industry. You know, a lot of people think that privacy affects only tech companies. That’s not true. Think about how many people deal with your data every single day. It’s not just tech companies. It’s all kinds of companies. And so, we’re pushing for a federal privacy law, and AI has given us a moment of people understanding, ‘Oh, you know, these models are trained on a lot of data. It would be good to have rules of the road on handling data.’
“Meanwhile, back in the states, they’re passing privacy laws at a fast clip. And so, we’re working with policymakers to try to find at least common themes among the privacy laws so that they’re more interoperable, because having vastly different privacy laws from state to state is not only confusing consumers, but it is also confusing businesses. And small and medium-sized businesses cannot keep up with this ever-changing, ever-growing, ‘Hey, did you see the latest today?’ kind of privacy landscape that keeps handing them new regulations on how to handle data. Larger companies would have an easier time with it. They have more resources. But smaller and medium-sized companies are our main concern.”
Linda Moore on TechNet’s AI for America initiative:
“One of the things that we found when we were doing the polling for AI for America, our initiative that is set up to demystify and familiarize people with AI, is that a lot of people feel that AI just burst upon the scene with ChatGPT.
“That’s not the case. We’ve been using AI for decades. People use it every single day when they navigate traffic, when they ask Siri or Alexa to do a task for them, when they’re surfing the web to do research. And so, familiarizing people with the fact that this is something they’ve been using for a long time is a real aha moment. We also found that people had this feeling that there were no rules of the road and no laws at all that apply to AI. That’s not true. Anything you do with the assistance of AI that’s against the law is still against the law. You can and will be prosecuted.”
Tammy Haddad: “But don’t you think that’s because social media hasn’t really been regulated, and because people keep saying it’s the Wild West, that that’s a real issue, right? How do you get people to feel like this is something that helps them and isn’t going to hurt them?”
Linda Moore: “That is the whole point of AI for America, and we’re very clear-eyed about it. Recognizing the genuine risks associated with AI is a very important part of responsible development and deployment. And I think it’s really important that U.S. tech companies have made these commitments with the U.S. government on secure and safe development and deployment of AI. They’re working with Congress to put in place good laws and regulations, and they’re also setting up their own bodies and working with outside organizations on safety and evaluations, benchmarking, red teaming, and all of the things that they need to do to make sure that they win people’s confidence and support for AI and that it is put together and rolled out in a way that is beneficial to all Americans.”
–
About the Washington AI Network Podcast
The Washington AI Network Podcast is hosted by media veteran and Washington AI Network founder Tammy Haddad and produced and recorded by Haddad Media. It is available on Apple Podcasts, Google Podcasts, Spotify, Amazon Music, and Audioboom.
To hear more thought-provoking AI conversations with key stakeholders, subscribe to the Washington AI Network podcast and visit https://washingtonainetwork.com/.
About Washington AI Network
Launched in July 2023 in response to the increasing public attention on artificial intelligence (AI), the Washington AI Network is a dynamic bipartisan forum that brings together diverse stakeholders from industry, government, civil society, and academia to foster collaboration, knowledge sharing, and responsible development and deployment of AI technologies.
Our mission is to serve as an inclusive hub for thought leadership, innovation, and ethical discourse, hosting meaningful conversations and fostering partnerships and actionable initiatives that address the challenges and opportunities presented by AI. The Washington AI Network is powered by Haddad Media.
About Tammy Haddad
Tammy Haddad is the founder of the Washington AI Network. She is a media innovator whose company, Haddad Media, devises winning strategies for some of the world’s top media brands, technology disruptors, innovative startups, and nonprofit organizations. A veteran television executive, she is the former Vice President and political director of MSNBC, executive producer of Hardball with Chris Matthews and Larry King Live, and senior broadcast producer of The Today Show and The Late Late Show. Haddad is the recipient of two George Foster Peabody Awards and a Gracie Award. Haddad is also the founder of WHCInsider, a website covering the political and media cultures in the nation’s capital, and founder of the Washington Women Technology Network, a forum that connects women leaders in business, government, and media. She previously hosted the Cone of Silence podcast and the cable television show The First Producer’s Club, and co-hosted Bloomberg’s Masters in Politics podcast.
