
Is my Startup regulated by the AI Act?

August 1, 2024

Silvia Urgas

Counsel, Senior Associate and Co-Head of the IP/IT Practice Group at TGS Baltic. Silvia is a member of the dispute resolution practice group and has represented clients in several civil matters. In addition, Silvia specialises in IP and has advised clients on many copyright issues and trademark disputes.


The European Union Artificial Intelligence (AI) Act was published on 12 July 2024. Given the sheer scale of the text, it is the first legal act of its kind in the world and the first such wide-ranging attempt to regulate AI systems in general.

An AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Should you care about the EU AI Act?

Well, yes, if your startup is active in any of the following fields: employment, education, critical infrastructure, health services, banking, insurance, and justice. All fintech, legaltech, healthtech, banking and insurance startups should therefore dive into the AI Act or at least read the following short overview before they find themselves faced with a hefty fine.

Even if none of the above applies to you and your startup, then as a resident of an EU member state you might be wondering whether the AI Act increases or decreases the chances that one day you will wake up in an Orwellian surveillance society.

Hopefully the latter will not become a reality, as the AI Act’s main aim is to ensure AI is used in accordance with European values, protecting health, safety, fundamental rights, democracy, the rule of law and the environment. At the same time, the AI Act also aims to support innovation and free cross-border movement of AI-based goods and services, so the economic benefit of using AI could help the EU single market.

The AI Act applies to a wide range of persons, including but not limited to:

  1. providers of AI systems or general-purpose AI models, such as companies that create AI solutions;
  2. deployers (users) of AI systems that have their establishment or are located within the EU, such as companies that integrate AI developed by others into their products or services.

The AI Act has established a risk-based approach and divided AI systems into four different groups, based on the risk they create:

  1. unacceptable risk
  2. high risk
  3. limited risk
  4. minimal risk

Unacceptable risk

AI systems that qualify under unacceptable risk will be banned. Such AI systems include:

  • social scoring and classification;
  • making risk assessments of natural persons to predict or assess risk of criminal behaviour;
  • biometric categorisation based on sensitive characteristics;
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • inferring emotions of a natural person in the workplace and in educational institutions;
  • manipulating human behaviour or exploiting people’s vulnerabilities;
  • categorising persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation;
  • ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement.

Use of biometric identification systems by law enforcement is still allowed in exhaustively listed and narrowly defined situations, such as a targeted search of a missing person or preventing a terrorist attack.

High risk

Startups that use or develop AI and are active in fields such as fintech, legaltech, insurance, surveillance, education and employment must ensure that the AI used or created by them is in accordance with the requirements for AI systems with a high level of risk.

Such AI systems may negatively affect safety or fundamental rights. Classification as high-risk depends not only on the function performed by the AI system, but also on the specific purpose for which that system is used.

Such systems are, for example, used in the following areas:

  • biometrics, such as remote biometric identification systems, biometric categorisation systems, systems used for emotion recognition;
  • critical infrastructure (digital infrastructure, transport, supply of water, gas, heating, electricity);
  • educational or vocational training;
  • employment, workers management, and access to self-employment;
  • access to and enjoyment of essential private services and essential public services and benefits (e.g. evaluating the eligibility of natural persons for essential public assistance benefits and services, including healthcare services);
  • evaluating the creditworthiness of natural persons or establishing their credit score;
  • risk assessment and pricing in relation to natural persons in the case of life and health insurance;
  • evaluating and classifying emergency calls by natural persons, or dispatching, or establishing priority in the dispatching of, emergency first response services;
  • law enforcement;
  • migration, asylum, and border control management;
  • administration of justice and democratic processes;
  • AI used as a safety component of certain products (e.g. AI applications in robot-assisted surgery), i.e. products that are required to undergo a third-party conformity assessment before being placed on the EU market (CE marking).

High-risk AI systems are not banned, but the following requirements must be met:

  • adequate risk assessment and mitigation systems;
  • high quality of the datasets feeding the system to minimise risks and discriminatory outcome;
  • detailed technical documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • logging of activity to ensure traceability of results;
  • clear and adequate information to the users;
  • appropriate human oversight measures to minimise risks;
  • high level of robustness, security and accuracy.

If your startup deals with any of the above issues, it is absolutely necessary to familiarise yourself with the AI Act.

For example, if your fintech uses AI to assess the credit risk of prospective clients who are natural persons, final human oversight must be ensured. AI use must also be adequately logged so that every result can be traced back to the initial input. As always, cybersecurity measures, including measures to protect personal data, must be adopted.
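
The logging and human-oversight duties above can be sketched in code. This is a purely illustrative example, not a compliance recipe: the names (`CreditAssessment`, `final_decision`) and the approval flow are hypothetical, and a real system would need far more (model governance, data protection, retention policies).

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class CreditAssessment:
    """One AI credit assessment: the inputs the model saw and its raw output."""
    applicant_id: str
    input_features: dict   # the data fed to the model
    model_score: float     # raw AI output, e.g. probability of repayment
    model_version: str     # which model produced the score

def log_assessment(assessment: CreditAssessment) -> str:
    """Serialise the full input/output pair so the result can later be
    traced back to the initial input (the traceability requirement)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **asdict(assessment),
    }
    line = json.dumps(record, sort_keys=True)
    logging.getLogger("credit_ai").info(line)
    return line

def final_decision(assessment: CreditAssessment, human_approved: bool) -> str:
    """The AI score alone never decides: a human reviewer has the last word
    (the human-oversight requirement)."""
    log_assessment(assessment)
    if not human_approved:
        return "referred"  # human withholds approval, case goes to manual review
    return "approved" if assessment.model_score >= 0.5 else "declined"
```

The key design point is that every call path writes a complete log record before any decision is returned, and the AI output is only ever advisory input to a human decision.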

Limited and minimal risk

Limited risks are mostly associated with a lack of transparency in AI usage. When AI systems such as chatbots are used, humans should be made aware that they are interacting with a machine, unless this is obvious from the circumstances. The use of deepfakes must likewise be transparent, so that people know the content has been artificially generated or manipulated.

The AI Act allows free use of minimal-risk AI such as AI-enabled video games or spam filters. GDPR rules for personal data use must still be followed: as little personal data as possible must be processed, and the principles of transparency, lawfulness and confidentiality observed.

The full text of the General Data Protection Regulation (GDPR) can be found on EUR-Lex.

For more information on the AI Act

  • The full text of the AI Act is available on EUR-Lex
  • European Commission’s website includes several short articles and relevant links
  • The AI Act Explorer website by the Future of Life Institute offers a great high-level overview of the AI Act
  • Law firm TGS Baltic has prepared a series of articles on the AI Act, including an AI checklist for businesses, GDPR requirements for using AI, and the status of text and data mining exceptions in the Baltic states.

Are you an early-stage startup with questions or challenges you're facing?

At Tenity, we have programs designed just for you! Dive in and discover the support and inspiration you might be missing to take your startup to the next level. Don't miss the opportunity to elevate your startup—learn more and apply to Tenity Programs today!