It argues that although many AI-related “harm taxonomies” and incident databases exist, they rarely draw explicit connections between AI-inflicted harms and violations of human rights. The report fills this gap by mapping ten core human rights against concrete examples of AI harms — such as biased algorithms in hiring or healthcare, invasive uses of facial recognition, or deepfakes that undermine freedom of expression.
By doing so, the report provides public and private actors — companies, governments, watchdogs — with a practical tool to evaluate whether a given AI system or deployment could infringe basic freedoms. The mapping table becomes a first step for organizations to identify risky AI uses.
Crucially, the report warns that many businesses still treat human-rights concerns as irrelevant to their AI deployments — even in the face of growing evidence that AI can cause real, widespread harms.
Find the report at: