Written by Mihalis Kritikos.
Positive, reliable and human-centric artificial intelligence (AI) relies on the willingness of Europe as a whole to design a balanced and inclusive governance framework that would allow it to become a world leader in the development of trustworthy AI technologies. That was the main conclusion of the high-level workshop organised by the Panel for the Future of Science and Technology (STOA) on 29 January 2020 at the European Parliament in Brussels. The first STOA event of this parliamentary term (2019-2024) drew a full house, with Members of the European Parliament, European Commission leaders, academic experts and representatives of international organisations debating how to strike the right balance on AI. Harnessing the numerous benefits that the transformative power of AI can bring must also take account of the need to mitigate a number of potential risks, from infringing people's fundamental rights, such as privacy and non-discrimination, to undermining European values such as democracy, human dignity and freedom of assembly.
The event proved to be a timely occasion to discuss how Europe could maximise the benefits and address the challenges of AI in a human-centric way, coming only a few days before the publication of the European Commission's legislative plans on AI in the form of a White Paper on 19 February 2020. The trust and security of EU citizens will be at the centre of the EU's strategy. There was a consensus that AI poses a wide range of new risks that need to be addressed in a proactive and step-wise manner, by putting in place the necessary safeguards and standards to ensure that European citizens remain protected. The panellists also agreed that AI regulation should be pursued on the basis of a thorough risk assessment and an evaluation, and potential adaptation, of the EU regulatory framework within clearly defined ethical boundaries.
The event was opened by the STOA Chair and moderator of the event, Eva Kaili (S&D, Greece), who argued that the development of AI is a battlefield between those in favour of transnational regulatory control of its applications and those supporting digital protectionism and localised solutions for its governance. Her opening question about how to achieve digital sovereignty without protectionism set the ground for a highly engaging discussion between Members and the keynote speaker, Margrethe Vestager, Executive Vice-President of the European Commission for 'a Europe fit for the digital age'. The Commission work programme for 2020 contains a series of legislative initiatives in the field of AI, aimed at setting the terms for the best possible use of the potential of digital data and the development and uptake of artificial intelligence that respects our European values and fundamental rights. Speaking directly after the announcement of the work programme, Vestager highlighted that positive, reliable and human-centric AI relies on two ecosystems, one of trust and one of excellence, for the EU to reap the benefits while protecting our freedoms and values. In an attention-grabbing speech, Vestager emphasised the need to make sure that the deployment of AI respects all EU values of an open and free society, by strengthening stakeholder engagement and enhancing the transparency and explainability of algorithmic decision-making. She argued in favour of a cautious regulatory approach when it comes to high-risk AI applications, especially in the domains of healthcare and transport, and noted that the Commission's efforts should also focus on enhancing transparency about the capabilities, but also the limitations, of artificial intelligence systems.
Several Members posed a series of demanding questions on AI and data ownership, privacy, access and the EU's General Data Protection Regulation (GDPR), the quality and traceability of data, attracting talent and teaching the skills necessary for the EU's AI industry, cybersecurity, and fostering trust. Responding, Vestager highlighted that 'AI is a race, not just in a geopolitical sense, but rather in our ability to serve our citizens', by adopting norms that are innovation-friendly and respectful of the European socio-ethical acquis. She noted that not all innovation is equal; we therefore need the kind of AI innovation that is shaped by simple but firm priorities that are enforceable, focused on social good and place citizens in the driver's seat. Given the pervasive character of the technology and its data-driven nature, she emphasised the need for a common European data ethics framework and the development of common data spaces.
Following the discussion with Commissioner Vestager, Professor Andrea Renda, Senior Research Fellow at the Centre for European Policy Studies (CEPS), compared AI to a 'beast we need to tame' for a purpose, because AI should be seen as a means, not an end in itself. He noted that AI should be built with speed and control, based on European principles, and should take man, machine and the planet into consideration. Highlighting the need to make sure that those who create value are also those who profit from it, and calling for a reconsideration of open data policies in the field of AI, Renda argued in favour of smart regulation, centred on the need to adopt rules and standards that are flexible, proportionate and based on a risk-assessment approach. Anthony Gooch, Director of Public Affairs and Communications at the OECD, then shared his experience of working on technological trends and noted that it is not the technology itself, but its use, that matters, arguing for the need for more reflective oversight structures. Gooch noted that AI is a general-purpose technology that will affect everybody and everything, so it is essential to ensure that the process of embedding ethical principles and values in AI-based decision-making systems is transparent and inclusive.
In her closing remarks, Eva Kaili argued for smart regulatory solutions that are enforceable and principle-based, and ensure protection of privacy, fundamental human rights and democracy. Such norms should contain positive obligations that could facilitate the embedding of values such as transparency and explainability in AI development. She also emphasised the need to assess the capacity of the current EU ethical and legal framework to confront the governance challenges that are associated with the deployment and application of a disruptive and transformative technology, to respond to European needs. She further argued that Europe has a unique opportunity to shape the direction of AI, at least from a socio-ethical perspective.
Finally, as STOA Chair, Eva Kaili announced that the event marked the establishment of a specialised Centre for AI, under STOA responsibility. The Centre will coordinate efforts in the domain of AI; set global standards and provide the necessary public space for debating the terms of the development, design and deployment of AI applications in a wide range of policy areas; and provide expertise on the possibilities and limitations of AI and its implications from an ethical, legal and societal perspective.
If you missed out this time, you can access the presentations and watch the webstream of the workshop via the STOA events page.
Source: https://epthinktank.eu/2020/02/11/artificial-intelligence-made-in-europe/