
Law - the cornerstone between AI and trust

Where do we stand?

Where do we, as humans, stand in a world where you can buy your groceries online, meet your soulmate online or build a successful business from your PC without ever leaving the house? Maybe soon you’ll even get legal advice on how to get a divorce from your friendly neighborhood AI, which is secretly listening in from your mobile phone on all your conversations with your exes. Who knows?

Imagine how easy it is to execute a contract online simply by answering a chatbot’s questions and finally signing by drawing your signature on the designated screen. Oh, but you don’t have to imagine it, since this is how many contracts are concluded nowadays. If you find this too modern for your taste, just wait until smart contracts fully step in and all your well-refined clauses are reduced to computer code, stored and replicated on the system and supervised by the network of computers that operates the blockchain.
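The “clauses reduced to computer code” idea can be illustrated with a toy sketch. The example below is plain Python rather than an actual blockchain language such as Solidity, and the escrow clause, parties and amount are invented for illustration:

```python
# Toy illustration of a "smart contract": the clause
# "release the deposit to the seller once the buyer confirms delivery"
# expressed as code instead of legal prose. Names and amounts are
# invented; real smart contracts run on a blockchain, not locally.

class EscrowContract:
    def __init__(self, buyer: str, seller: str, deposit: int):
        self.buyer = buyer
        self.seller = seller
        self.deposit = deposit
        self.delivered = False
        self.balances = {buyer: 0, seller: 0}

    def confirm_delivery(self, caller: str) -> None:
        # Only the buyer may trigger this clause.
        if caller != self.buyer:
            raise PermissionError("only the buyer may confirm delivery")
        self.delivered = True
        # The clause executes automatically: the funds move to the seller.
        self.balances[self.seller] += self.deposit
        self.deposit = 0

contract = EscrowContract("buyer", "seller", deposit=100)
contract.confirm_delivery("buyer")
print(contract.balances["seller"])  # 100
```

On a real blockchain the same logic would be deployed as immutable code and executed by the network’s nodes, which is precisely what removes the human intermediary (and, with it, the familiar avenues of legal recourse).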

What could happen?

But imagine if your AI-powered chatbot mistakes your identity for that of a non-existent person [1], or if Siri fails to call the police when you ask and instead plays very cool tunes by The Police while you are robbed of all your possessions?

These scenarios could happen anywhere in the world. Just think of a situation in which the robots currently cleaning an airport in Singapore [2] mix up the quantity of detergent, and a dangerous substance leaks and harms children. Or think about what could happen if a child suffers emotional trauma while witnessing tests of a highly advanced robot dog [3] alongside US police officers, and the parents request compensation for the harm caused, because they were not informed of such tests and the impact of such robot dogs on citizens was not assessed beforehand?

Who shall be responsible?

The producer, for not performing trial tests on the impact of the new technology before releasing it for actual viability tests?

The programmer, for not anticipating this scenario and including it in the AI program?

The AI, for not learning as fast as it should, for failing to identify the actual situation and for not acting as the user intended?

Or the user, for not properly conveying the intended instruction to the AI?

For the moment, the current legal framework provides no certain answer to these questions.

What should attorneys do?

We attorneys, who do not know how to program a bot or wire an AI system so that it delivers a response and learns from its ever-changing environment, may face a very high risk when taking on a new client who has possibly evaded legal provisions and committed money laundering, for example. This may not be a problem in certain jurisdictions, but in most European countries [4] a lawyer is obliged to declare whether their client has committed money laundering. Otherwise, the lawyer is held criminally liable as well, for not disclosing such information to the competent authorities.

Of course, the consequences of not knowing what AI entails have been analyzed, and certain measures have been taken to protect the interested parties. Finland, for example, has proposed increasing digital literacy among European citizens. During its EU Presidency, Finland promoted a free online course called Elements of AI [5], designed to guide even the most reluctant individual through the tangled jungle of artificial intelligence. Lawyers can also benefit from such a course, but is this actually the proper solution?

What about citizens who are not at all connected to the legal system? Should they also take courses to learn how AI works? Or should they simply trust the AI process and the system, and accept its consequences without question?

Reforming the legal system

The key feature of any successful system is the trust it generates among its stakeholders. In the case of an AI-based system, trust can be achieved with mutual support from both lawyers and IT experts, by creating a solid legal framework regulating the liability and risks of AI products.

Considering the large investments made by private parties in technological applications, the European Union has already taken the first steps towards creating a functional Digital Single Market.

On 10 April 2018, 25 European countries signed a Declaration of cooperation on Artificial Intelligence.

On 8 April 2019, the High-Level Expert Group on Artificial Intelligence (AI HLEG) published its Ethics Guidelines, which set out three components of Trustworthy AI: lawful AI, ethical AI and robust AI.

Also, in June 2019, the AI HLEG presented its Policy and Investment Recommendations for Trustworthy AI (the Recommendations) during the first European AI Alliance Assembly. With respect to the legal policies envisaged, the AI HLEG emphasized a risk-based approach to regulation, to ensure that AI risks are assessed and properly dealt with. In the words of the AI HLEG, “the higher the impact and/or probability of an AI-created risk, the stronger the appropriate regulatory response should be”.
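The AI HLEG’s principle reads like a classic risk-matrix rule: the regulatory response scales with impact and probability. A hypothetical sketch of such a tiering follows; the thresholds and tier names are invented for illustration and are not taken from the Recommendations:

```python
# Hypothetical risk-based tiering: regulatory response scales with
# risk = impact x probability. The thresholds and tier names below
# are invented for illustration only.

def regulatory_tier(impact: float, probability: float) -> str:
    """Both inputs are on a 0..1 scale."""
    risk = impact * probability
    if risk >= 0.5:
        return "strict ex-ante regulation"
    if risk >= 0.2:
        return "mandatory risk assessment"
    return "light-touch monitoring"

print(regulatory_tier(0.9, 0.8))  # high impact, likely: strict ex-ante regulation
print(regulatory_tier(0.3, 0.2))  # low risk: light-touch monitoring
```

The point of such a rule is proportionality: a chatbot recommending playlists and an AI triaging emergency calls would not face the same regulatory burden.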

Apart from the focus on risks, another important point made by the Recommendations was that the current legal framework needs to be re-evaluated in order to adapt it to the needs of newly created AI systems.

Future steps

The bridge between a healthy AI and an adequate legal system is built on trust. Trust is formed by respecting society’s rules. These rules should encompass the mitigation of risks, liability and fairness of decision-making. Lawmakers should convey a message of cross-border collaboration in order for this legal framework to be effective and to provide equal guarantees for citizens worldwide.

But for the moment, technological progress is far ahead of the creation of any enforceable legal rules. Until humanity establishes a proper, functioning and ethical legal framework governing AI products, stakeholders’ reluctance towards AI systems will prevent their large-scale use.

The role of a legal professional like myself is to adapt and adjust the current legal framework so that the final beneficiary, the citizen, is protected and ethical values are not sacrificed on the altar of developing technology. Attorneys are thus challenged not only to contribute suggestions for drafting new legal provisions but also to apply the currently available law efficiently, in order to foster the safe use of emerging digital solutions.

About the Author: Roxana-Mihaela Catea is an experienced litigation and commercial lawyer affiliated to the Bucharest Bar, Romania. Roxana is a PhD student and assistant professor in Commercial Law at Nicolae Titulescu University in Bucharest, Romania. She is also a passionate technology lecturer and writer who integrates legal technological progress into her academic research and professional practice.
