
AI Regulation is on the Horizon – Part 3

The path to ethical AI leads through an ever-changing landscape of government policy.
Are you ready for the changes ahead? 

The American Approach to AI Regulation and Bridging the Transatlantic Gap

Although the European Union is currently the trailblazer in setting broad tech and AI policy, the United States isn’t too far behind. The US has been decidedly more cautious and slower moving, likely because it is home to several large AI companies and there are concerns that regulation may stifle innovation. Nonetheless, American regulators and lawmakers have signaled an intent to bring regulation to AI, giving fair warning that it’s on the horizon. 

The US government’s approach thus far has taken a different tack from the European Commission, which seeks to set horizontal guidelines for all of Europe. In the US, AI policy is being developed primarily by individual regulatory agencies within their specific purviews, the most notable of which are: 

  • The Federal Trade Commission (FTC), which has set itself the general, if daunting, mission of regulating the use of biased algorithms. Under the authority of Section 5 of the FTC Act, the Fair Credit Reporting Act (FCRA), and the Equal Credit Opportunity Act (ECOA), the FTC has signaled that AI-powered businesses should:  
      • train their systems on inclusive datasets,  
      • test their underlying algorithms thoroughly before and after deployment to ensure bias doesn’t emerge,  
      • be transparent in disclosing to customers how their data is used,  
      • and refrain from exaggerating what their algorithms can deliver.  

Perhaps most notable is the FTC’s message in announcing this new undertaking: “hold yourself accountable—or be ready for the FTC to do it for you.” 

  • The US Department of Commerce (DoC), which has commissioned NIST (the National Institute of Standards and Technology) to develop a risk management framework not unlike the EU’s AI Act (AIA). Taking public commentary into consideration, NIST’s framework will likely set the stage for an American understanding of the risks of AI bias and of how to promote accuracy, security, and privacy. The DoC also established the National Artificial Intelligence Advisory Committee (NAIAC), which advises the President and federal agencies on AI, the state of US AI competitiveness, the state of US AI science, and issues related to the AI workforce, with a particular focus on bolstering opportunities for historically underrepresented populations in the AI space.  

Other federal entities are developing AI policy relevant to their own purviews, but broader initiatives and guidelines have also been developed by the White House. 

[Image: The National Artificial Intelligence Initiative Office (NAIIO) seal. As a whole, it symbolizes the Office’s commitment to promote scientific and educational advancement in the Federal government and the private sector and to drive U.S. leadership in AI.]

Of note is the National Artificial Intelligence Initiative Act, a component of the National Defense Authorization Act of 2021, which supports research and development, education, and training programs in the AI space. Thus far, the AI Initiative has spawned the National Artificial Intelligence Initiative Office, which coordinates and implements the broader US AI strategy and has helped oversee the launch of a National AI Research Resource, a critical government resource intended to bring high-powered computing and the expansive stores of government data to US research institutions and the public sector, broadening American access to innovation in the AI space. Alongside the US Government Accountability Office (GAO), which through its own reporting has identified practices to help ensure responsible AI use by government agencies involved in the development, deployment, and monitoring of AI systems, the White House has also called for an AI Bill of Rights to lay out principles that will safeguard citizens from potential AI harms. 

These are the broad strokes of the US’ current approach to AI policy, and they do not take into account laws being developed at the state level, such as California’s Consumer Privacy Act of 2018 or Virginia’s Consumer Data Protection Act of 2021. While the US is for the moment a step behind its European peers, regulation on both sides of the Atlantic likely means an end to the freewheeling days of AI development without serious state oversight, oversight that until now was left exclusively to the companies and researchers developing AI. The clock is ticking, particularly as coordination and cooperation to iron out policy differences and streamline agreement will undoubtedly follow implementation on either side of the Atlantic. 

Bridging the Gap 

Both the EU’s three (and growing) technology acts and the US’ agency-by-agency policies are largely united in promoting broader, overarching values for tech and AI: 

  • trustworthiness,  
  • fairness,  
  • inclusivity,  
  • diversity,  
  • privacy,  
  • and the upholding of democratic values and human rights.  

However, in addition to these values—most of which are foundational to the ethical AI space—both Western powers also seek to promote innovation and economic growth by providing a set of standards and legal certainty for tech businesses developing and using AI. 

As implementation will differ dramatically between the EU, with its 27 member states, and the US, there will undoubtedly be differences between these policies that need ironing out, particularly given their potential to affect companies with an interest in doing business across borders. It would, after all, be cumbersome and disruptive for businesses to navigate one set of rules in one region and a different set in another. Now that the regulatory ball has begun rolling in both Western powers, moves have also been made toward transatlantic cooperation, a vital step in ensuring that these regulations have the least disruptive impact possible on global technology businesses. 

In its inaugural statement on September 29, 2021, the US-EU Trade and Technology Council (TTC) announced the US and the EU’s commitment to the aforementioned principles in promoting AI technology together, the first official statement of bilateral policy cooperation. Interest in and calls for transatlantic cooperation have also come from the private sector, notably from Defined.ai’s very own CEO and founder, Daniela Braga, who recently participated in a panel discussing the shape and measures of success for transatlantic cooperation on technology governance policy. 

While it is still “early days” for tech policy, it’s even earlier for the transatlantic interoperability of tech regulation. Nonetheless, it behooves us to consider what shape ongoing cooperation will take and, most importantly, what the tech industry will need to ensure these regulations are successful in their proposed aims. In the words of Defined.ai’s Braga: “we need more auditors, legal clarification, and global certifications” across international markets to make implementation and compliance as easy as possible, and “data and model sharing across the two sides of the Atlantic in areas that are human-relevant, otherwise we will move at very different speeds.” 

Are regulation and innovation at odds, or are both integral? Check back soon for the final piece in this series as Defined.ai explores the arguments for regulation, what it implies for the future of AI-powered businesses, and how we can help you stay ahead of the curve.
