Discussion about this post

A.J. Sutter:

The final sentence in this piece doesn't follow. Consider the use of an algorithm to make an interview/don't interview, hire/don't hire, accept/reject loan application, etc. decision - that decision is the output. Suppose that the training set is based on historical outcomes rather than on samples of an idealized latent set or other synthetic data. If there is actual ethical bias in the training data, then the *better* the data accuracy, the more consistently the ethical bias will be exhibited in the output.
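A minimal sketch of that point, using purely synthetic data and a hypothetical hiring scenario (nothing here comes from the post itself): a classifier that accurately fits historically biased hire/no-hire labels reproduces the group disparity in its own predictions.

```python
# Hypothetical, synthetic illustration: a model that fits biased historical
# hiring labels accurately also reproduces the bias in its outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)         # 0 = group A, 1 = group B (protected attribute)
skill = rng.normal(0, 1, n)           # true qualification signal
# Historical label: past decisions penalized group B regardless of skill.
hired = (skill - 1.0 * group + rng.normal(0, 0.3, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
print(f"accuracy vs. historical labels: {(pred == hired).mean():.2f}")
print(f"predicted hire rate, group A:   {pred[group == 0].mean():.2f}")
print(f"predicted hire rate, group B:   {pred[group == 1].mean():.2f}")
# The more faithfully the model reproduces the historical labels (higher
# accuracy), the more faithfully it reproduces the disparity between groups.
```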

The piece also rests on a too-literal interpretation of what people usually mean when they say things like "The algorithm is biased against [societal group]." The term 'algorithm' in the latter statement is usually a synecdoche for the entire process of using AI/ML to make economically and socially significant decisions. Whether the problem is in the training data or in the algorithmic programming per se isn't material in most non-expert colloquial contexts.

Finally, the suggestion that "the term bias needs to be divorced from ethical considerations and fully focused on accuracy" isn't going to be realized: at least in the US, its ethical meaning is deeply embedded in law and legal culture.

Bill Buppert:

Bias is the human condition, and the conscious identification and acknowledgment of bias everywhere lets you get a single, more accurate picture.

