AI Regulation is on the Horizon – Part 2

The path to ethical AI leads through an ever-changing landscape of government policy.
Are you ready for the changes ahead? 

Europe’s Trailblazing Tech Regulation 

The Digital Services Act aims to protect users and businesses in the online space by safeguarding and facilitating the trade of content, goods, and services, banning illegal content, and bringing transparency to the algorithms that power online trade and services. The latter point is of particular note for those involved with AI, given that algorithms can encode bias and influence behavior and market conditions.

A pertinent and widely used example is recommendation algorithms. On an e-commerce platform, these can manifest bias in two ways: by recommending (or not recommending) certain products to certain demographics, and by promoting certain products or services via recommendation ranking at the expense of competitors. Consider a search result that ranks a particular set of banks highly for certain users based on their browsing history. This becomes problematic if one of these personalized recommendations offers financial services at a less favorable rate than competitors that weren't ranked as highly, again because the user's browsing history placed them in a particular demographic or regional group. In an everyday retail setting, cookie data could lead some users identified as women in a particular age range and demographic group to see more ads for maternity wear, regardless of their actual life circumstances, with no clear indication of why they're seeing these ads and no ability to stop such recommendations.

By pushing new standards of AI interpretability alongside regular bias audits, the DSA aims to give users subject to algorithmic decisions the transparency to know why they're being targeted by advertisements, as well as an understanding of how businesses, products, and services are ranked and displayed to consumers on a given platform. For businesses, this means insight into how they are ranked in search results.
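To make the idea of a regular bias audit concrete, here is a minimal sketch of the kind of disparity check an auditor might run against recommendation logs. The data, group labels, and disparity threshold are all hypothetical; real audits under the DSA would follow whatever methodology regulators and platforms agree on.

```python
# Illustrative bias-audit sketch: compare how often an item is
# recommended to each demographic group and flag large disparities.
# Groups, data, and the 1.5x threshold are hypothetical.

from collections import defaultdict

def recommendation_rates(impressions):
    """impressions: list of (group, was_recommended) pairs."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for group, recommended in impressions:
        total[group] += 1
        if recommended:
            shown[group] += 1
    return {g: shown[g] / total[g] for g in total}

def flag_disparity(rates, max_ratio=1.5):
    """Flag if one group sees the item more than max_ratio times
    as often as another group does."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo > 0 and hi / lo > max_ratio

# Hypothetical logs: group_a sees the item 60% of the time, group_b 20%.
impressions = ([("group_a", True)] * 60 + [("group_a", False)] * 40
             + [("group_b", True)] * 20 + [("group_b", False)] * 80)
rates = recommendation_rates(impressions)
print(rates)                  # {'group_a': 0.6, 'group_b': 0.2}
print(flag_disparity(rates))  # True: the ratio is roughly 3, above 1.5
```

A real audit would of course control for confounders and use proper statistical tests, but the core question is the same: do outcomes differ across groups in ways the platform cannot justify?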

Though less explicitly concerned with AI, the Digital Markets Act similarly aims to protect online commerce by creating a “level playing field” for digital businesses, regardless of their size and reach. This entails protecting smaller businesses from the gravity of larger ones, here dubbed “gatekeepers” for their ability to dictate the terms by which other businesses and consumers access their services and even operate in the online space. Large search engines and social media platforms are good examples: because these major tech platforms are how many people interface with or do business on the internet, the search results they see, the news stories that show up in their feeds, and the products they're advertised shape their everyday internet experience. From a business perspective, how a company is ranked and advertised in search results can determine its failure or success.

Expounding further: although people can now consume more information and services than ever before thanks to the modern internet, these same users would be hard-pressed to dictate the shape and terms of those services without regulation. Consider the privacy notifications and options now made available to users via the GDPR: despite criticism over platforms' use (and misuse) of cookie data, the ability to opt out of tracking cookies became law and common practice thanks to GDPR compliance. This imbalanced power dynamic makes clear that users engage with these platforms and services on the tech industry's terms, rather than their own, and therefore need a powerful third party, namely states, to push for change on their behalf.

With the EU's new and expanded policies, it's not a stretch to envision that the data practices of familiar platforms, big and small, could be affected. For example, the generation, collection, storage, and use of data on large platforms may confer or perpetuate unfair market advantages that the proposal seeks to mitigate. With data being the lifeblood of AI, the market-protecting intent of the DMA could thus affect how AI is trained on and uses data.

Unlike the DMA, however, the recently proposed AI Act explicitly deals with artificial intelligence and is the most detailed of the three in its approach to regulating the technology. In particular, the AIA seeks to make AI safe and fair by framing its development and use as a question of risk assessment.

The AI risk pyramid, via Digital EU 

This risk framework consists of four categories based on severity, with the top-most representing AI systems that pose unacceptable risks: systems that, for example, conduct real-time biometric identification for law enforcement purposes (currently allowed in the EU only under limited circumstances), engage in social scoring schemes, or otherwise manipulate users, infringe on their rights, or cause social harm. These systems, by default, are banned outright.

The next risk category covers high-risk systems, or AI systems that deal in: 

  • Consumer and public sector services like those determining creditworthiness for loans, or assessing/scoring examinations which may determine access to education or professional opportunities  
  • Recruiting or worker management, such as applicant tracking systems and resume/CV-sorting and recommending systems 
  • Nonpublic biometric identification, such as ID systems used in a work environment  
  • Safety, such as in transportation, infrastructure, or procedures requiring a high degree of safety (e.g., medicine and surgery) 
  • Law enforcement, the administration of justice, and migration, where the reliability of evidence, the application of law, and judicial decisions are critical to due process and border crossing 

Much of the AIA's risk framework is dedicated to securing and auditing these high-risk systems, even though they theoretically comprise a minority of the AI systems being developed or currently active in the field. As a result, high-risk systems are subject to the highest level of scrutiny before and after deployment. Entities that employ high-risk AI systems are obligated to monitor them for transparency, cybersecurity, risk management and mitigation, and data quality. They will also be held responsible for regularly reporting detailed activity logs, providing ample information to users about a system's operation and risks, and providing documentation on operation and purpose so authorities can assess compliance.

The next risk category, limited risk, makes up a larger proportion of AI systems in use in the EU and contains systems such as chatbots and virtual assistants. When these systems are employed, users should be notified that they are interacting with a machine, giving them the ability to proceed with informed consent or to decline the automated service altogether.

Rounding out the risk categories is what the AIA proposes as the largest category of AI-based systems: minimal risk. These applications include video games with AI difficulty scaling, spam filters, inventory management systems, and various other AI that the European Commission believes poses minimal or no risk to safety or human rights.
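The four tiers described above can be summarized as a simple lookup. The sketch below uses the proposal's category names and example use cases paraphrased from this post, but the mapping itself is an illustrative simplification, not an official taxonomy; real classification under the AIA would depend on detailed legal criteria.

```python
# Hypothetical mapping of the AIA's four risk tiers to example use
# cases discussed in this post. Illustrative only, not legal guidance.

RISK_TIERS = {
    "unacceptable": {"social scoring", "real-time public biometric ID"},
    "high": {"credit scoring", "exam scoring", "cv sorting",
             "workplace biometric ID", "medical devices"},
    "limited": {"chatbot", "virtual assistant"},
    "minimal": {"spam filter", "game ai", "inventory management"},
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a known example use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    # The Commission expects to refine these categories over time.
    return "unclassified"

print(risk_tier("credit scoring"))  # high
print(risk_tier("spam filter"))     # minimal
```

The point of the structure is the asymmetry: most obligations in the proposal attach to the "high" tier, while "minimal" systems face essentially none.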

The risk framework is only a starting point for the EU's marquee AI regulation: the European Commission has noted that it will exercise its power to expand and amend the rules as the uses and risks of AI evolve alongside the technology's development. As seen above, the bulk of the regulation targets the risks and harms of what the Commission defines as high-risk systems, though these definitions will likely be refined in the years ahead.

Insofar as AI is concerned, these proposed rules have overlapping implications. Interested in how the United States' regulatory efforts stack up against the European Union's proposed policies?

Stay tuned for the next part of this series on AI regulation as we explore US AI policy, the role of transatlantic cooperation, its implications for the tech industry, and how our CEO has gotten involved in bridging the US-Europe gap.

