The Real Problem With AI: Lack of Accountability
If you’ve ever been in a car accident (at least in the US), there’s one piece of advice lawyers always repeat: never admit fault. This kind of legalese has slowly crept into our everyday lives, and it amounts to an attempt to dodge accountability for potential consequences. Our current world leaves little room for adequate accountability, and that carries straight into the workplace. The modern worker’s fear is not that they will be replaced by AI, but that AI has no true accountability: it is a nebulous, poorly understood technology, and its error margin gives its implementers, and the ones who control AI, something to hide behind. The true issue of AI lies with the displacement of blame.
The greatest question anyone can ask the detractors of AI technology is: what margin of error are you comfortable with for a particular task? An error margin for high-value vendor accounting? For customer service emails? Database management? Self-driving cars? The answers will vary by person and by task, but the deciding factor rests on the severity of the consequences should a fuckup occur.
While many can point to examples as simple as ChatGPT inventing history, that doesn’t get to the crux of the issue: what error rate is acceptable for a machine? Our conception of the tasks a robot performs is that it should be 100% accurate, or at least 99.99%, with the remainder reserved for acts of god (look at how wild some of your insurance policies are). Radiologists have said that AI should have an average error rate of 6.8%, compared to 11.3% for humans. We expect AI to be nearly twice as good as we are.
We do not hold humans to the same standard, and that is because there is no retribution one can enact that would affect AI. AI feels no remorse, no regret, and suffers no repercussion for a mistake, so punishment can never truly be realized, and this is the main issue.
The US prison system is often criticized for failing to rehabilitate its inmates; it is simply a holding cell where the passage of time serves as the toll paid to those the criminal has aggrieved. This model is actually quite common around most of the world: a system based on punishment, designed to keep citizens from enacting personal revenge. The individual feels substantially aggrieved if punishment isn’t delivered, and the same is true of AI whenever it produces an error that cannot be traced back to a person.
The core issue with AI shouldn’t be the replacement of jobs, but that its lack of accountability may create an unequal society: one where humans are punished while machines (and their human owners) dodge blame and attribute errors to nebulous algorithms. Until we can pinpoint errors and faults to a human, AI will never have a place in a society that demands someone be directly responsible for poor outcomes.
Notes about the author: traditionally an esports writer, with over 100 articles, more than 300 YouTube videos, multiple documentaries, and several investigative pieces.
** AI-generated thumbnail, but not AI-generated written words; we’re too proud for that.

