Algorithmic Transparency

Algorithmic transparency is the principle that the factors influencing the decisions an algorithm makes should be visible, or transparent, to the people who use, regulate, and are affected by the systems that employ that algorithm. Although the phrase was coined in 2016 by Nicholas Diakopoulos and Michael Koliska in the context of how algorithms decide the content of digital journalism services, the underlying principle dates back to the 1970s and the rise of automated systems for scoring consumer credit.

The phrases "algorithmic transparency" and "algorithmic accountability" are sometimes used interchangeably – especially since they were coined by the same people – but they have subtly different meanings. "Algorithmic transparency" requires that the inputs to the algorithm and the fact of the algorithm's use be known, but it does not require that they be fair. "Algorithmic accountability" implies that the organizations that use algorithms must be accountable for the decisions those algorithms make, even though the decisions are being made by a machine rather than by a human being.

Position on the Adoption Curve

Presentations about Algorithmic Transparency

Krishnaram Kenthapadi (Tech Lead, Fairness, Transparency, Explainability & Privacy Efforts @LinkedIn): "Fairness, Transparency, and Privacy in AI @LinkedIn"

Christian Kadner (Software developer @IBM, committer to Apache Bahir and contributor to Jupyter Enterprise Gateway): "Create a Fair & Transparent AI Pipeline with AI Fairness 360"