AI responsibility in a hyped-up world
Per Axbom has an interesting essay on the ethics of AI, an important issue given the onrushing ubiquity of AI in our daily life. He writes:
It’s never easier to get scammed than during an ongoing hype. It’s March 2023 and we’re in the middle of one. Rarely have I seen so many people embrace a brand-new experimental solution with so little questioning. Right now, it’s important to shake off any mass hypnosis and examine the contents of this new bottle of AI that many have started sipping, or have already started refueling their business computers with — sometimes without the knowledge of management.
AI, a term coined in academia in 1956, has today mostly morphed into a marketing term for technology companies. The research field is still grounded in the theory that human intelligence can be described so precisely that a machine can be built to simulate it completely. But the word AI, as we read it in the news today, usually describes various computational models that, when applied to large amounts of information, are meant to compute results that serve as the basis for predictions, decisions and recommendations.
Clear weak points in these computational models, then, include:
- how questions are asked of the computational model (you may need to have very specific wording to get the results you want),
- the information it relies on to make its calculation (often biased or insufficient),
- how the computational model actually performs its calculation (we rarely get to know, because companies regard it as their proprietary secret sauce; hence the term black box), and
- how the result is presented to the operator* (increasingly as if the machine were a thinking being, or as if it could tell a correct answer from a wrong one).
* The operator is the one who uses, or runs, the tool.
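The first weak point — that you may need very specific wording to get the results you want — can be illustrated with a deliberately simple sketch. The FAQ data and keyword-matching logic below are invented for illustration; real systems are far more sophisticated, but they share this sensitivity to phrasing:

```python
# Hypothetical sketch: a brittle keyword-matching "assistant" whose answers
# depend entirely on how the question is worded. All data here is invented.

FAQ = {
    frozenset({"refund", "policy"}): "Refunds are issued within 14 days.",
    frozenset({"return", "item"}): "Items can be returned in store only.",
}

def answer(question: str) -> str:
    words = set(question.lower().strip("?").split())
    for keywords, reply in FAQ.items():
        if keywords <= words:  # all keywords must appear in the question
            return reply
    return "Sorry, I don't understand the question."

# Two wordings of the same request get different results:
print(answer("What is your refund policy?"))   # matches the first entry
print(answer("How do I get my money back?"))   # no keywords match: fallback
```

The operator asking the second question is no less clear than the one asking the first, yet only one of them gets an answer — the difference lives entirely in how the question was phrased relative to what the model expects.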

What we colloquially call AI today is still very far from something that ‘thinks’ on its own. Even if the texts these tools generate can resemble texts written by humans, that is no stranger than the fact that the large body of information the computational model draws on was written by humans. The tools are built to deliver answers that look like human answers, not to actually think like humans.
Or even deliver a correct answer.
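This point — human-looking output without any thinking — can be made concrete with a toy bigram model. The tiny corpus below is invented; the sketch just shows that replaying word-to-word transitions observed in human-written text is enough to produce text that reads like its source:

```python
# Minimal sketch: a bigram model that generates "human-looking" text purely
# by replaying transitions seen in a (tiny, invented) human-written corpus.
import random
from collections import defaultdict

corpus = ("the model predicts the next word . "
          "the next word follows the previous word .").split()

# Count which word follows which in the corpus.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start: str, length: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(rng.choice(followers))
    return " ".join(words)

print(generate("the", 8))
```

Every word pair the generator emits already occurred somewhere in the human-written input; nothing is understood, and nothing guarantees the output is true. Scaled up by many orders of magnitude, the same principle explains why model output resembles its human-authored training text.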
It is exciting and titillating to talk about AI as self-determining. But it is also dangerous. Add to this the fact that much of what is marketed and sold as AI today is simply not AI. The term is extremely ambiguous and has a variety of definitions that have also changed over time. This means very favorable conditions for those who want to mislead.
Problems often arise when . . .