AI Act: Five recommendations on how companies should react now

The first comprehensive law on AI has been approved and gives providers of AI applications between six months and three years to adapt to the new rules. Anyone who wants to use AI in sensitive areas will have to strictly control the AI's data and its quality and create transparency.

The new EU law regulates the use of artificial intelligence (AI).

With the AI Act, the EU has regulated what is currently the most dynamic and important branch of the data industry, as it did with the GDPR in April 2016 and the Digital Operational Resilience Act (DORA) in January of this year. Many of the new tasks under the AI Act will be familiar to data protection officers and compliance officers from the GDPR.

The law sets out a definition of AI and defines three risk levels: minimal, high and unacceptable. AI applications that companies want to use in healthcare, education and critical infrastructure fall into the "high-risk" category. Those in the "unacceptable" category are banned because they could threaten people's safety, livelihoods and rights, for example.

High-risk AI systems must be trustworthy, transparent and accountable. Operators must carry out risk assessments, use high-quality data and document their technical and ethical decisions. They must also log how the performance of their systems develops over time and inform users about the nature and purpose of their systems. In addition, these systems must allow human oversight and intervention, be highly robust and achieve a high level of cyber security.

Companies now need clear guidance. Swiss companies in particular are currently uncertain about how the new law will affect them specifically: they want to exploit the great potential of this technology while remaining flexible enough to implement the upcoming details of the regulation. The following five recommendations show how companies can approach AI without creating legal risks or getting in users' way, and how they can position themselves so that the AI Act can be fully implemented without turning their IT upside down:

  • Let AI act trustworthily: Achieving this means keeping the AI fully under control, and the only way to do so is to tightly govern the data and data flows into and out of the AI. This tight control resembles the GDPR's requirements for personal data. Companies should keep this compliance obligation in mind whenever they use AI or develop it themselves. Those who want to use AI in compliance with the GDPR and the AI Act should seek advice from a data protection expert before introducing it.
  • Know the exact data: A large part of the law focuses on disclosing the content used to train the AI, i.e. the data sets that gave it the knowledge to perform. Companies and their employees need to know exactly what data they are feeding the AI and what value this data has for the company. Some AI providers deliberately delegate this decision to the data owners because they know the data best. Data owners must train the AI responsibly, and data access should only be enabled for authorized persons.
  • The question of copyrights: Earlier AI models were trained on crawled internet and book content, some of which contained copyrighted material - one of the areas the AI Act aims to clean up. If companies have used such data sets without labeling them accurately, they may have to start all over again.
  • Understand the content of the data: This is an essential task. For data owners to make the right decisions, the value and content of the data must be clear. In day-to-day practice this task is huge, and most companies have piled up mountains of information they know nothing about. AI and machine learning can help massively here and alleviate one of the most complex problems by automatically identifying and classifying a company's data according to its own records strategy. Predefined filters immediately fish compliance-relevant data such as credit card numbers, mortgage data or construction plans out of the data pond and flag them. This analysis can also clarify some security parameters and detect unsecured data, for example. As this AI examines the company data, it develops a company-specific language, and the longer it works and the more company data it examines, the more accurate its results become. The charm of AI-driven classification becomes particularly evident when new requirements need to be met: whatever the AI Act brings up in the long term, ML- and AI-driven classification will be able to search for these additional attributes and give the company a degree of future security. A minimal sketch of such filter-based classification follows after this list.
  • Control data flows: Once the data has been categorized and classified with the correct attributes, the underlying data management platform can automatically enforce rules without the data owner having to intervene, which reduces the chance of human error and risk. For example, a company could enforce that certain data, such as intellectual property or financial data, may never be passed on to other storage locations or external AI modules; a sketch of such a rule appears after this list. Modern data management platforms control access to this data by automatically encrypting it and requiring users to authenticate via access controls and multi-factor authentication.
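
To make the filter idea concrete, here is a minimal Python sketch of rule-based classification. The labels and regex patterns are illustrative assumptions, not those of any specific product; real platforms combine such predefined filters with trained ML models.

```python
import re

# Hypothetical compliance filters: each label maps to a regex pattern.
# Production tools ship far richer pattern sets plus ML-based classifiers.
FILTERS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def classify(document: str) -> set[str]:
    """Return the compliance labels whose pattern matches the document."""
    return {label for label, pattern in FILTERS.items() if pattern.search(document)}

print(classify("Customer paid with card 4111 1111 1111 1111."))  # {'credit_card'}
```

New requirements can then be met by adding another entry to the filter set rather than rebuilding the pipeline, which is what gives this approach its future-proofing.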
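And here is a matching sketch of automated rule enforcement on the resulting labels. The restricted labels and destination names are hypothetical; a production platform would enforce such a policy in the storage layer, backed by encryption and multi-factor authentication.

```python
# Hypothetical policy: data labeled as intellectual property or financial
# must never leave internal storage locations or reach external AI modules.
RESTRICTED_LABELS = {"intellectual_property", "financial"}
INTERNAL_DESTINATIONS = {"archive", "backup"}

def transfer_allowed(labels: set[str], destination: str) -> bool:
    """Allow a transfer only if no restricted label leaves internal storage."""
    if labels & RESTRICTED_LABELS:
        return destination in INTERNAL_DESTINATIONS
    return True

assert not transfer_allowed({"financial"}, "external_ai_module")
assert transfer_allowed({"financial"}, "archive")
assert transfer_allowed({"marketing"}, "external_ai_module")
```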

Swiss companies unsettled

The AI Act will also have an impact on Switzerland, as the regulation applies whenever an AI system is used within the EU or its output is "used" in the EU. This means, for example, that Swiss companies may be affected if they make their systems accessible to other companies, public bodies or individuals within the EU. Similarly, predictions, recommendations or decisions made by AI-based systems in Switzerland may fall under the regulation if they are "used" within the EU.

The new EU law affects companies that, for example, develop software for educational institutions that automatically grades exams or decides on scholarships. According to experts, companies that analyze EU health data using AI are also covered by the law. The same applies to Swiss banks that use AI to check the creditworthiness of EU citizens.

Swiss companies are currently uncertain. Small and medium-sized enterprises and start-ups in particular often do not have the human and financial resources to carry out extensive regulatory clarifications.

What sanctions are possible?

The AI Act shares another similarity with the GDPR and DORA: once it is in force, non-compliance is sanctioned. Anyone who violates key requirements of the AI Act faces penalties of up to 35 million euros or 7 percent of global annual turnover. The AI Act is likely to be published this summer and will enter into force 20 days after publication in the Official Journal of the EU. Most of its provisions will apply after 24 months; the rules for prohibited AI systems apply after six months, the rules for general-purpose AI (GPAI) after twelve months and the rules for high-risk AI systems after 36 months.

Source: www.cohesity.com / www.srf.ch / www.infosec.ch
