18 Comments
Jun 23, 2023 · edited Jun 23, 2023 · Liked by Michael Woudenberg

The final sentence in this piece doesn't follow. Consider the use of an algorithm to make an interview/don't interview, hire/don't hire, accept/reject loan application, etc. decision - that decision is the output. Suppose that the training set is based on historical outcomes rather than on samples of an idealized latent set or other synthetic data. If there is actual ethical bias in the training data, then the *better* the data accuracy, the more consistently the ethical bias will be exhibited in the output.
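This point, that a model which fits ethically biased historical labels more accurately reproduces the bias more consistently, can be sketched with a toy example. Everything here is invented for illustration: the groups, the rates, and the "model," which simply memorizes the majority label for each cell of the training data.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical training set: qualification rates are identical across
# groups, but the historical hire/no-hire labels under-hired qualified
# members of group "B" (an ethical bias baked into the labels).
def historical_label(group, qualified):
    if not qualified:
        return 0          # unqualified candidates were never hired
    if group == "A":
        return 1          # qualified "A" candidates were always hired
    return 1 if random.random() < 0.3 else 0  # qualified "B": usually rejected

data = [(g, q, historical_label(g, q))
        for g in ("A", "B") for q in (True, False) for _ in range(1000)]

# A maximally "accurate" model: memorize the majority label for each
# (group, qualified) cell of the training data.
counts = {}
for g, q, y in data:
    counts.setdefault((g, q), Counter())[y] += 1
model = {cell: c.most_common(1)[0][0] for cell, c in counts.items()}

# The better the fit to the historical labels, the more faithfully the
# disparity is reproduced: qualified "A" is hired, equally qualified
# "B" is not.
print(model[("A", True)])  # 1
print(model[("B", True)])  # 0
```

Improving this model's accuracy against the historical labels can only entrench the disparity, because the disparity is what the labels encode.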

The piece also rests on a too-literal interpretation of what people usually mean when they say things like "The algorithm is biased against [societal group]." The term 'algorithm' in the latter statement is usually a synecdoche for the entire process of using AI/ML to make economically and socially significant decisions. Whether the problem is in the training data or in the algorithmic programming per se isn't material in most non-expert colloquial contexts.

Finally, the suggestion that "the term bias needs to be divorced from ethical considerations and fully focused on accuracy" isn't going to be realized: at least in the US, its ethical meaning is deeply embedded in laws and legal culture.


Bias is the human condition, and the conscious identification and acknowledgment of bias everywhere lets you get a single, more accurate picture.

Nov 11, 2023 · Liked by Michael Woudenberg

Excellent article with quality comments! Yes, the bias is three feet deep, some of it not easy to fathom from within the matrix we are in.

Jun 27, 2023 · Liked by Michael Woudenberg

Applying bias to bias is simply the mathematical formulation of affirmative action. That is the reason quotas were created in universities and elsewhere. I enjoyed the way you formulated the problems and possible solutions.

May 1 · Liked by Michael Woudenberg

I agree that people put too much meaning into the outputs produced by the current "AI" (one that is hardly thinking at all). If it were thinking, it might perceive the bias itself and perhaps correct it, unless it had a reason to do otherwise. We are fortunate that the current "AI" isn't able to do so; otherwise, we might be dealing with a far more pressing matter than the issue of bias itself. The algorithm is blameless here, unless it is tweaked to produce a deliberately biased result, which isn't the usual case, I should think.

Also, regarding the data used to train such learning models: curating it to prune out biases would be a very cumbersome, if not nearly impossible, task for the individuals who feed it in. It might be easier to curate the output instead, if they want a desired outcome, but that means accepting that the output will be inaccurate if not downright misleading. And if we wish to preserve accuracy and take the data as it's fed in, but still change the outcome, it will take more than tinkering with the algorithm, because the data itself comes with flaws. Trying to sugarcoat it seems almost fruitless.

Curated outcomes can only be achieved through filtering, which limits your data pool: instead of a broader analysis, you are left with a mere subset.

Metaphorically, I see it like the contrast between farming and wilderness. Farming is curated input and output, with constant adjustment to achieve the desired result. The wilderness is random but, if not meddled with, balances itself.

Feeding in the data is like taking in the wilderness and hoping to produce the goods of a farm; I just don't see it happening.

There must be a point where one simply accepts what is glaringly true even if it is inconvenient in the context of "AI".

But take my opinion with a grain of salt. I don't have any expertise in the matter; I've just learned a few bits here and there from people I know who do.


The real problem elucidated here is not technological - it is human.

An Algorithm is innocuous technology that humanity has used for as long as humans have been 'computing' data. An Abacus uses Algorithms. Sumerian Cuneiform uses Algorithms.

Moreover, neither Algorithms nor Machine Learning define AI.

Fortunately, Language operates upon immutable axioms of Propositional/Predicate Logic, regardless of opinion. Opinion is most often the problem where Logic is concerned, and this is certainly the case with so-called 'Artificial Intelligence'. After all, the term Artificial Intelligence = 'fake smart'.

'Opinion is not Logic' and 'Logic is not opinion' are axioms of Logic.

An Algorithm is a tool. Tools aren't intelligent. The user of the tool provides the 'intelligence' upon which the tool operates. Coding is no different.

Just as there is no way to eliminate bias from human cognition, there is no way to "eliminate bias in AI/ML". There are only methods to mitigate bias, but those aren't new either. I recall plenty of classes about mitigating bias from college - that was before the Internet was publicly accessible.

Moreover, coding is not confined to Algorithms alone. That may be the narrow focus of this discourse, but that focus is itself demonstrably biased. In all candor, this discourse demonstrates bias, pedantry, and Logical fallacy (non sequitur).

It's the preconceived notion that bias is inherently bad, or an undesired outcome, that underlies the premise. However, bias is not inherently 'bad'. Is a bias for Logical certainty bad? Is a bias for joy and happiness characteristically bad? Do comedians not operate with a bias to make people laugh?

Oct 1, 2023 · Liked by Michael Woudenberg

So the question remains: do you think we can reduce or eliminate bias in ML/AI?
