Nearest human pays the price when algorithms goof up

May 29, 2019, 8:27 AM EDT
(Source: Marc van der Chijs/flickr)

Recently, an automated investment system drew the wrath of a Hong Kong tycoon after losing $20 million of his fortune in trades. Since the algorithm itself could not be sued, the blame fell upon the nearest human: the salesman who persuaded the businessman to entrust his money to the technology.

The resulting lawsuit has once again ignited the long-standing debate over who should bear liability when algorithms mess up, writes MIT Technology Review. A look at history shows that when automated systems fail, the humans around them end up being treated as "liability sponges."

Last year, when an Uber self-driving car was involved in a fatal crash in Arizona, the company escaped criminal liability, but the safety driver remains exposed to charges of vehicular manslaughter. These cases raise another critical question: who takes the blame for tech companies' experiments with algorithms? Why should a human operator or safety driver absorb all moral and legal responsibility despite having limited control over the systems they interact with?
