The pandemic brought chaos around the world and spared no one. Among those hit was a school bus driver working in a small rural town. As the number of infections grew, the school closed and his steady work suddenly vanished. An unexpected chain of events had changed his life overnight. With children depending on him, the problem quickly shifted to meeting basic needs: how to put food on the table?
Fortunately for him, the pandemic also opened up new opportunities. Because people were spending more time at home and could not go out, they desperately needed supplies delivered. This boosted the small-goods delivery business overnight. He had already made a few deliveries before the pandemic as part-time gigs, but after he lost his main job, it quickly became his only source of income.
The task was not easy: driving around and delivering thousands of packages within tight time limits. But the hardest part was not the workload; it was answering to a boss made of artificial intelligence.
The AI boss had no face. It was not someone you could share a joke with, get angry at, or even raise concerns with. A manager app became the main communication channel; it tracked the vehicle's movements and sometimes demanded the impossible. These demands stem from the fact that some companies make very bold promises to their customers: instead of making customers wait, they offer same-day delivery. So the algorithm monitors when drivers check in at the distribution station, whether they complete the route within a predefined time window, whether they leave the package on the porch hidden from thieves, and so on. The algorithms scan the incoming data, analyze it, and decide whether or not the driver should be offered more routes. As simple as flipping a switch on or off.
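To make the bluntness of this kind of system concrete, here is a minimal sketch of what such binary scoring logic might look like. Everything here is invented for illustration: the function name, the input metrics, the weights, and the thresholds are assumptions, not the actual system.

```python
# Hypothetical sketch of the blunt, switch-like scoring logic described
# above. All names, weights, and thresholds are invented for illustration.

def evaluate_driver(on_time_rate: float, safe_dropoff_rate: float) -> str:
    """Reduce a driver's livelihood to a single threshold check."""
    # Weighted score from tracked metrics (weights are arbitrary here).
    score = 0.7 * on_time_rate + 0.3 * safe_dropoff_rate
    if score >= 0.95:
        return "great"        # keep offering routes
    elif score >= 0.85:
        return "at risk"      # offer fewer routes
    else:
        return "deactivated"  # block the app -- no human in the loop

# One bad day (a flat tire, a crowded station) moves the needle:
print(evaluate_driver(0.99, 0.98))  # a strong record scores "great"
print(evaluate_driver(0.84, 0.98))  # a few late routes: "at risk"
print(evaluate_driver(0.50, 0.50))  # and below the floor: "deactivated"
```

Note what is absent from this sketch, exactly as in the story: no input for road conditions, no input for warehouse queues, and no branch that ever consults a human being.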
However, the algorithm does not seem to give much weight to factors beyond the driver's control, such as driving miles of winding dirt road in the snow, or waiting an hour to pick up packages because the distribution point is crowded with other drivers. These problems and many others throw drivers off schedule, which hurts their delivery ratings.
In one particular incident, the driver got a flat tire. When he reported his situation, the company asked him to return the package, which he did, even though he had nearly finished the route. His rating dropped from "great" to "at risk" almost immediately because, technically, he had abandoned a delivery. After the incident, he received emails from the company assuring him that he was still one of their best drivers. It was probably just a two-faced AI trying hard to sound empathetic, because the next day the algorithm re-evaluated his score and cut him off cold, simply by blocking his app.
He was stunned. He had delivered over 8,000 packages, was rated one of the best drivers, and because of a flat tire they fired him on the spot! Fortunately, the system allows appeals, so he filed one. Once again, the empathetic two-faced AI sent him an email a few days later saying it regretted the delays and was processing his petition. I am sure the AI lost many nights of sleep agonizing over his precarious situation and his children. But a few days later, the two-faced AI sent him another email stating that the company's decision had not changed after reviewing the petition, and that his services would no longer be needed. At the end of the email, it "sincerely" wished him success in his future endeavors.
As a result of this decision, he began to struggle financially. He fell behind on his mortgage, the bank repossessed his car, he nearly lost his house, he became dependent on state aid, and his children went through the worst Christmas of their lives.
This episode is not a horror story set in a distant dystopian future dominated by artificial intelligence. It happened last year to a 42-year-old single parent in the United States.
There are many flaws in such a system. We cannot build algorithms and let them demand unrealistic goals; it makes no sense. While we should strive for productivity, we cannot treat people as machines. If there is something wrong with a person's performance, it should be discussed humanely, taking his or her circumstances into account. If an algorithm evaluates a person, that person must have the right to examine the evaluation, object to its claims, and appeal it. Finally, we must never remove human judgment from the loop. Algorithms are not infallible, they are subject to bias, and they cannot be allowed to decide the course of a human life on the basis of a detached mathematical formula.
This story is not a one-off mistake; similar cases sprout every day. Companies are automating their staffing operations, banks are approving loans through computerized systems, and courts are relying on algorithms for parole decisions. If we want to fix this situation, algorithms that affect people must be transparent in their decisions, must justify them, and must provide people with the information they need. People should have channels for reporting mistakes and simple mechanisms for correcting them. Legislators must enact rules to prevent harm before it is too late. Only then can we hope to create a just society in which artificial intelligence helps us all improve our lives instead of acting as a digital executioner of our livelihoods.