Noah Nzuki (Cognizant) on the EU AI Act and Responsible AI

09 July 2024

While artificial intelligence (AI) and its younger sibling, generative AI (GenAI), offer unprecedented opportunities for enhancing services, customer experience, efficiency and more, staying within regulatory and ethical boundaries is becoming increasingly important, writes Noah Nzuki from Cognizant.

GenAI is distinguished from other AI systems by its ability to leverage vast amounts of available data, advanced computational power and machine learning to generate new content – text, images, music or other types of data – with varying levels of autonomy and at unprecedented scale.

It promises opportunities for enhancing human experience and unlocking capabilities across many businesses and industries through outputs such as predictions, recommendations, or decisions that influence social, physical or virtual environments.


However, if left unchecked, GenAI can produce undesirable results such as bias, model drift, data leakage and harmful influence on cognition and behaviour. To achieve results in terms of adoption, business goals and user acceptance, businesses will therefore need to deploy GenAI in a regulated and responsible way.

The European Union (EU) Artificial Intelligence Act is the world’s first comprehensive AI law, putting common regulatory checks and balances in place for the use and supply of AI systems in the EU. It seeks to ensure that AI systems placed on the European market and used in the EU are safe with regard to health, safety (including the environment) and the fundamental rights of individuals.

Responsible AI

Responsible AI is about implementing principles and best practices for building safe, secure, and trustworthy AI systems. At Cognizant, we use established frameworks to achieve this by focusing on identifiable risks and operationalizing guardrails for trustworthy GenAI.

Focusing on features that are unique to GenAI – such as its ability to learn continuously and to produce outputs that are indistinguishable from human-created content, like deepfakes – we help put in place data and AI governance systems that meet the requirements for explainability, transparency and information provision to users, as outlined in the EU Ethics Guidelines for Trustworthy Artificial Intelligence.

Our activities in this scope seek to ensure that GenAI models are reflective of fundamental human rights and values – as enshrined in the EU AI Act. In other words, the outcomes of such models should be legal, moral and reflective of the applicable ethical codes like the EU Ethics Guidelines for Trustworthy Artificial Intelligence.


A key requirement for High-Risk AI Systems (HRAIS), under Article 14 of the EU AI Act, is human oversight. A common way of operationalizing human oversight is Human-In-The-Loop (H-I-T-L), which keeps humans continuously involved in, and overseeing, the development and deployment of AI systems.

Effectively, it facilitates continuous human input and monitoring, ensuring that AI systems produce accurate and fair results. Where necessary, a human can intervene to alter or stop the outcome of the AI system. This is especially important for mitigating risks associated with prompt engineering, where the outputs of foundation models can evolve over time and are hard to predict at creation.
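The “alter or stop” intervention above can be sketched as a simple approval gate. This is a minimal illustration, assuming a text-generation pipeline; the names (`Review`, `hitl_gate`, `reviewer`) are hypothetical and not part of any specific product or of the AI Act itself.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Review:
    """A human reviewer's verdict on one model output."""
    approved: bool
    revised_output: Optional[str] = None  # reviewer may alter the output
    reason: str = ""                      # why the output was blocked

def hitl_gate(model_output: str,
              reviewer: Callable[[str], Review]) -> Optional[str]:
    """Release a model output only after a human reviewer signs off.

    The reviewer can approve, revise, or block the output entirely,
    operationalizing human intervention before anything reaches users.
    """
    review = reviewer(model_output)
    if not review.approved:
        # Human intervention: stop the outcome of the AI system.
        return None
    # Human intervention: alter the outcome if the reviewer revised it.
    return review.revised_output or model_output
```

In practice the `reviewer` callable would be backed by a review queue and a Subject Matter Expert rather than an inline function, but the control point is the same: no output leaves the system without a human decision attached to it.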

We have a track record of successfully deploying systems with H-I-T-L, where Subject Matter Experts in privacy and security deploy, monitor and evaluate Privacy Enhancing Technologies (PETs) that orchestrate and automate data protection and privacy requirements, for instance under the GDPR.

We leverage this expertise in assessing, designing, validating and automating data and AI governance measures, as applicable, to ensure appropriate levels of performance, explainability, interpretability, corrigibility, safety, and cybersecurity. As such, we help businesses combine human expertise with AI systems to realise the benefits of GenAI whilst protecting their bottom line by meeting compliance obligations and preventing harm to society.

Looking over the horizon…

My observation is that many businesses embarked swiftly on the GenAI journey but remain at a tactical level. They enjoy executive sponsorship, demonstrate considerable understanding of applicable regulations and available frameworks for responsible AI, and are experimenting widely with GenAI use cases, including identifying high-risk ones.

However, enterprise-wide standards and controls – for monitoring, measurement, analysis and model evaluation to ensure valid results, and for meeting the Act’s strict data quality and governance criteria for the training, validation and testing data sets of high-risk AI systems – are still embryonic.
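To make the idea of baseline data-quality controls concrete, here is one possible sketch of automated checks on a training set – completeness, duplication and class balance – in the spirit of the Act’s data-governance requirements (Article 10). The function name, field names and thresholds are assumptions for illustration; the Act prescribes criteria, not specific code.

```python
def dataset_checks(records, label_key="label",
                   max_null_rate=0.01, min_class_share=0.05):
    """Run baseline quality checks on a list of dict records.

    Returns a dict mapping check name -> pass/fail. Thresholds are
    illustrative defaults, not values mandated by any regulation.
    """
    n = len(records)
    # Completeness: share of records containing any missing value.
    nulls = sum(1 for r in records if any(v is None for v in r.values()))
    # Duplication: count exact duplicate records.
    seen, dups = set(), 0
    for r in records:
        key = tuple(sorted((k, str(v)) for k, v in r.items()))
        dups += key in seen
        seen.add(key)
    # Class balance: smallest class's share of the data set.
    labels = [r[label_key] for r in records]
    counts = {c: labels.count(c) for c in set(labels)}
    return {
        "completeness": nulls / n <= max_null_rate,
        "no_duplicates": dups == 0,
        "class_balance": min(counts.values()) / n >= min_class_share,
    }
```

Wiring checks like these into the pipeline for every training, validation and testing set is one way to turn a regulatory baseline into a repeatable, auditable control rather than a one-off review.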

With the political agreement between the European Parliament and the Council on the Artificial Intelligence Act (AI Act) reached in December 2023 and the European Parliament’s plenary vote now passed, the demand for Responsible AI will accelerate.

Cognizant is committed to building solutions that meet this growing demand for safe, transparent, unbiased, and ethical GenAI. We believe that responsible generative AI, and the governance around it, is not just a legal or compliance obligation but a strategic imperative for businesses.

Embracing ethical AI practices and embedding governance throughout the AI lifecycle fosters innovation, enhances trust with customers, and mitigates risks associated with AI deployment.

Further, embedding those features of responsible AI that can be automated throughout the AI value chain unlocks AI’s true potential and can help people make informed decisions and tackle complex business objectives at scale. So, as you embark on your exciting GenAI journey – whether you are in the assessment, design or governance operationalization phase – we are well placed to help you ‘tame the beast’, as we say in industry-speak.

About the author: Noah Nzuki is ESG Governance Lead for EMEA at Cognizant.