Algorithmic consent and liability – When AI acts: How the law interprets consent and allocates liability
Part 2 of the series: Algorithmic Consent, Liability and Corporate Governance – How Kuwait Financial Institutions Must Make AI Work with the Law
By Emile Khoury Hélou (Of Counsel)
As AI systems perform increasingly autonomous functions in the banking and finance sectors, such as executing trades, accepting offers, and adjusting terms, the boundaries of consent and liability become central issues. This touches the core of contract formation and responsibility for automated decision-making.
Understanding Algorithmic Consent
This evolution toward autonomous decision-making brings the concept of “algorithmic consent” to the forefront, challenging traditional notions of how consent is expressed and attributed in a legal context. “Algorithmic consent” describes situations where an AI system performs actions traditionally associated with legal personality, without itself possessing legal personality.
Some foreign civil law jurisdictions have begun examining whether autonomous AI negotiation systems can independently produce legally binding outcomes.
Furthermore, certain foreign civil law jurisdictions are actively examining whether liability arising from actions performed by an automated AI system may be reassigned to the system itself, allowing the deploying party to invoke the system's autonomy as a defence to disclaim liability. Such a reallocation of liability would, by implication, confer a limited form of legal capacity on the AI system.
However, under Kuwaiti law, AI cannot possess legal personality. The Kuwaiti Civil Code recognises only natural persons and legally established entities, with no third category. Contract formation requires offer, acceptance, and clear consent by competent parties—conditions AI cannot satisfy alone.
CMA Perspective
Article 3-2-2 of Book 19 of the CMA Bylaws identifies two Digital Financial Advisory models:
- Fully Digital, with minimal human interaction; and
- Hybrid, where clients may discuss automated advice with staff.
Even in fully digital models, Article 3-2-1 clarifies that algorithms analyse client data and provide recommendations, but responsibility rests entirely with the licensed service provider, not the algorithm. AI remains a tool; consent and liability attach to the deploying entity.
Implications
AI cannot assume rights or liabilities. Its actions expose the deploying institution to legal and operational risk. How those institutions design the right governance frameworks—and how boards manage AI risk—is explored in the next part of the series.
Next in the series: AI Governance: Board Duties, Risk Controls, and Oversight.