By Robin Röhm, Tech EU
The “first-of-its-kind” agreement between the US and European Union to enhance cooperation in artificial intelligence (AI) is a major step forward in the development of AI – and has significant consequences for startups as well as governments. Far from the consumer-driven world of ChatGPT, the agreement will see collaboration on research that addresses global issues such as climate change, healthcare, and emergency response.
However, with significant concerns around trust and ethics in AI, this announcement is also likely to fuel growing calls for AI regulation. The adoption of AI by organisations has more than doubled since 2017, so the impact of increased regulation will be far-reaching for Europe’s startup ecosystem.
AI regulation is coming
While previous AI collaborations between the US and Europe focused mainly on privacy, this first ‘sweeping’ AI agreement is designed to speed up and improve the efficiency of government operations and services. Energy grids are just one example of where citizens could notice a difference. As data is collected on how electricity is used and where it is generated, proactive steps could be taken to redirect energy and balance the grid, so that freak weather conditions or surges in demand don’t result in power failures. This is particularly relevant during winter, with energy costs high across Europe.
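To make the idea concrete, the sketch below shows what such proactive rebalancing could look like in code. It is a toy illustration only: the region names, readings, and threshold are hypothetical, not drawn from any real grid system.

```python
# Illustrative sketch of proactive grid balancing. All region names,
# readings, and thresholds here are hypothetical.

REGIONS = {
    "north": {"generation_mw": 420.0, "demand_mw": 510.0},
    "south": {"generation_mw": 680.0, "demand_mw": 560.0},
}

IMBALANCE_THRESHOLD_MW = 50.0  # hypothetical tolerance before redirecting

def rebalance(regions: dict) -> list[str]:
    """Flag regions whose demand outstrips generation and suggest transfers."""
    actions = []
    surpluses = {r: d["generation_mw"] - d["demand_mw"] for r, d in regions.items()}
    for region, surplus in sorted(surpluses.items(), key=lambda kv: kv[1]):
        if surplus < -IMBALANCE_THRESHOLD_MW:
            donors = [r for r, s in surpluses.items() if s > IMBALANCE_THRESHOLD_MW]
            actions.append(f"redirect {-surplus:.0f} MW to '{region}' from {donors}")
    return actions

if __name__ == "__main__":
    for action in rebalance(REGIONS):
        print(action)
```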
With businesses racing to put their AI developments into practice and roll out applications into the market, calls are growing louder for clearer rules about how AI is controlled. The European Union is leading the charge in drafting a regulatory framework and its AI Act is now making its way through the European Parliament. As with the EU’s introduction of GDPR in 2018, the EU AI Act could become a global standard that determines the role AI is allowed to play in our everyday lives.
What will regulation look like? At this stage it is still hard to tell, but the indications from the EU AI Act are that different levels of risk will be assigned to different AI applications. Assessments will very likely cover requirements such as human oversight, transparency, risk management, cybersecurity, data quality, and monitoring and reporting obligations. Guidance on how companies can comply will start to become available, so it is essential that businesses – from start-ups to multinationals – understand the coming regulation and act to avoid violations and penalties.
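The draft Act groups systems into four tiers – unacceptable, high, limited, and minimal risk – each carrying different obligations. The sketch below illustrates how a company might map its own applications onto those tiers; the example systems and their assignments are hypothetical and, of course, not legal advice.

```python
# Illustrative mapping of applications onto the four risk tiers in the
# draft EU AI Act. The example systems and assignments are hypothetical.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: oversight, documentation, monitoring"
    LIMITED = "transparency obligations, e.g. disclosing that AI is in use"
    MINIMAL = "no additional obligations under the draft"

# Hypothetical classification for a company's internal review.
portfolio = {
    "CV-screening model for hiring": RiskTier.HIGH,
    "Customer-support chatbot": RiskTier.LIMITED,
    "Spam filter": RiskTier.MINIMAL,
}

for system, tier in portfolio.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```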
With the regulation not yet in force, now is the time for companies to manage the risks associated with AI. One way to do this is to audit every AI system used in the company: assign a team to assess data risk and establish an AI governance structure, so that business operations are not threatened if systems have to be withdrawn once the regulation takes effect.
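As a starting point, such an audit can be as simple as a structured register of every AI system, its owner, and whether the business could run without it. The sketch below is illustrative; the field names and example systems are hypothetical.

```python
# Illustrative sketch of an AI-system audit register. Field names, systems,
# and risk labels are hypothetical; the point is to inventory every system
# with an accountable owner and a fallback before regulation lands.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    owner: str       # team accountable under the governance structure
    data_risk: str   # e.g. "personal data", "none"
    removable: bool  # can the business keep running if this system is pulled?

inventory = [
    AISystem("churn-prediction model", "analytics", "personal data", True),
    AISystem("automated credit scoring", "risk", "personal data", False),
]

# Flag systems whose removal would threaten operations: these need a
# documented fallback plan first.
for system in inventory:
    if not system.removable:
        print(f"ACTION: define fallback for '{system.name}' (owner: {system.owner})")
```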