Interrogating an algorithm is a human right

Being able to interrogate an AI system about how it reached its conclusions is a human right

Who watches the algorithms that are watching us?

  • What if we demand that companies give users an explanation for every decision an automated system reaches?
  • What if the algorithms making all those decisions have programmed themselves in ways we cannot understand?

Early iterations of AI were built on the premise that machines ought to reason according to explicit rules and logic, making their inner workings transparent.

Now, machines are constructed more like biological systems: they learn by observing and experiencing.

The program can generate its own algorithm based on example data and a desired output.

The machine programs itself.
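A minimal sketch of that idea, using a hypothetical toy task: instead of hand-coding the rule for logical AND, we supply example inputs and desired outputs, and a single simulated neuron fits its own weights from the data (the perceptron learning rule).

```python
# Toy example: a single simulated neuron "programs itself" from data.
# Nothing about AND is hand-coded; only examples and desired outputs are given.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, learned from data rather than written by a programmer
b = 0.0         # bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron rule: nudge each weight toward the desired output.
for _ in range(10):
    for x, target in examples:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in examples])  # learned behavior: [0, 0, 0, 1]
```

Even in this four-parameter toy, the "reason" for a decision lives in numeric weights the program chose for itself, not in any rule a human wrote down.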

This creates a huge problem: you can’t look inside a neural network to see how it works.

A network’s reasoning is embedded in the behavior of simulated neurons arranged in hundreds of intricately interconnected layers.

  1. The neurons in the first layer each receive an input, perform a calculation, and output a new signal.
  2. These outputs are fed, in a complex web, to the neurons in the next layer.
  3. An overall output is produced.
  4. Back-propagation tweaks the calculations of individual neurons so that the network learns to produce the desired output.
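The four steps above can be sketched in miniature. This is an illustrative toy only: two inputs, a hidden layer of two sigmoid neurons, one output neuron, trained by back-propagation on XOR; the starting weights, learning rate, and epoch count are arbitrary choices, not anything from a real system.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Arbitrary asymmetric starting weights: [w1, w2, bias] per neuron.
w_hidden = [[0.5, -0.4, 0.1], [-0.3, 0.6, -0.2]]
w_out = [0.3, -0.5, 0.2]

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
lr = 0.5

def forward(x):
    # Steps 1-3: the first layer computes, feeds the next, an output emerges.
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Step 4: back-propagation tweaks individual neurons' weights.
        d_y = (y - t) * y * (1 - y)            # gradient at the output neuron
        for j in range(2):                     # propagate back to hidden layer
            d_h = d_y * w_out[j] * h[j] * (1 - h[j])
            w_hidden[j][0] -= lr * d_h * x[0]
            w_hidden[j][1] -= lr * d_h * x[1]
            w_hidden[j][2] -= lr * d_h
        w_out[0] -= lr * d_y * h[0]
        w_out[1] -= lr * d_y * h[1]
        w_out[2] -= lr * d_y

print(f"squared error: {before:.3f} -> {loss():.3f}")
```

Notice that after training, the network's "reasoning" is just nine floating-point weights. In a deep network those nine become millions, spread across many layers, which is exactly why nobody can point to the line of code that explains a given answer.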

The many layers in a deep network enable it to recognize things at different levels of abstraction.

In a system designed to recognize dogs:

  • Lower layers will recognize simple things like outlines or color
  • Higher layers recognize more complex stuff like fur or eyes
  • Topmost layer identifies it all as ‘dog’

Just as many aspects of human behavior are impossible to explain in detail, maybe it isn’t possible for AI to explain everything it does to our inquiring feeble minds.

I think if we’re going to use these things and rely on them, then let’s get a grip on how and why they’re giving us ‘answers’. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s.

Or be FUBAR’d