Governments all over the world increasingly encourage citizens to interact with automated bureaucratic processing systems rather than human representatives when filing tax returns, applying for welfare, applying for a passport, and so on. Human interaction is becoming an exception reserved for when the computer fails. This trend is not limited to governments. Private companies routinely deploy automated response systems as the first line of defence against customer engagement, granting only their most tenacious callers access to a human being.
This automation of customer service and bureaucracy can reduce queues and processing times, but, as with sex robots, the total automation of bureaucracy threatens to concentrate power in the hands of a few controllers. The potential for these controllers to abuse such systems is tremendous.
The film Elysium depicts the extensive use of robots in confrontational roles, such as police and parole officers. Machines have no empathy and unlimited patience. These traits may be desirable for some roles. A human benefits officer dealing with difficult applicants may eventually resort to bending the rules to help them; an automated system can say “no” all day (even all week). Human officials have high salaries that must be covered, so they feel pressure to process people quickly. They are also afraid that, if a client complains about them, they might lose their jobs. An automated system (especially one running on your PC) has effectively zero hourly running costs, and if an applicant whose housing benefit gets cut off by the automated system commits suicide, there is no one to blame. Perhaps in some cases troublesome applicants should not be prioritised, but in other cases they are troublesome precisely because of their desperate situation.
Concerns over who’s responsible for driverless car accidents are just the tip of a much larger iceberg. Who’s responsible when an algorithm blocks your credit card payment? Who’s responsible when an automated welfare system accidentally cuts off your unemployment benefits? Or wrongfully cancels your immigration visa? Or mistakenly cancels someone’s health insurance without informing them? Or fails to pay your salary that week? Or calls in your loan after mistakenly finding you in breach of its terms? Or delists your business, or reduces your company’s ranking, costing you tens of thousands in sales? For those of you who are weird like me and read the full terms and conditions of the various automated services you subscribe to, the answer is clear: “Company X accepts no responsibility whatsoever for any damage or harm caused by the failure of our software.” Clauses like this appear almost ubiquitously across the terms and conditions of software services.
Furthermore, what about tenants evicted by robo-bailiffs for not paying rent? Or robot police and security guards? If a human bailiff, security guard, or police officer inappropriately physically assaults someone, they could lose their job or be sent to prison. But what happens if a RoboCop, robo-bailiff, or robot security guard does the same? The corporation that made it would be liable, but fining a large corporation is a far smaller deterrent than imprisoning a human worker and thereby destroying their career. Programmers may calculate that the legal liability for the harm caused by a particular decision tree is less than the value of the time it saves their clients, or the money it makes them. Security robocrats running such decision trees could do more harm than human employees, who bear direct criminal responsibility for their actions. And if the final software emerges from a long supply chain, in which one company uses a software package supplied by another, then sells the final program on to a third, which uses it in a slightly different manner from the supplier’s original specifications, it might be impossible to pinpoint the source of the blame. This could create a moral hazard: in many cases, bosses might prefer robocrats, unconcerned with criminal responsibility, to make certain decisions: refusing to pay out insurance, issuing fines to raise money for a municipal government, overestimating tax liability, cutting benefits, overcharging on bills, and so on.
People harmed by automated robocratic decisions may also be less motivated to pursue them in court. Court cases demand evidence, time, and legal fees. When another person has consciously wronged or mistreated us, we often feel compelled to seek justice despite the cost and inconvenience; but when the decision of a mere thing harms us, pursuing it no longer seems worth the effort. Algorithm designers may take this into account when programming decision-making strategies to maximise their clients’ profits.
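To make that incentive concrete, here is a deliberately crude sketch of the expected-value logic the two previous paragraphs describe. Everything in it (the function names, the probabilities, the figures) is invented for illustration; it is not a description of how any real system works.

```python
# A hypothetical model of the moral hazard described above: an automated
# claims system denies a payout whenever the expected legal cost of a
# wrongful denial is lower than the money saved. All names and numbers
# here are invented for illustration.

def expected_cost_of_denial(payout: float,
                            p_wrongful: float,
                            p_challenged: float,
                            legal_costs: float) -> float:
    """Expected cost to the firm of denying this claim."""
    # The firm only pays if the denial is wrongful AND the claimant
    # actually takes it to court -- which, as argued above, people
    # rarely bother to do when a machine made the decision.
    return p_wrongful * p_challenged * (payout + legal_costs)

def decide(payout: float, p_wrongful: float,
           p_challenged: float, legal_costs: float = 20_000.0) -> str:
    savings = payout  # money kept by denying the claim
    if savings > expected_cost_of_denial(payout, p_wrongful,
                                         p_challenged, legal_costs):
        return "deny"
    return "pay"

# A denial the system itself rates as 30% likely to be wrongful still
# goes out, because only 5% of claimants are expected to sue:
# 0.30 * 0.05 * (10,000 + 20,000) = 450 expected cost vs 10,000 saved.
print(decide(payout=10_000.0, p_wrongful=0.30, p_challenged=0.05))  # "deny"
```

Nothing forces the rule to be this cynical; the point is that, once the decision is encoded, the cynicism is just a parameter, and no individual employee ever has to say “no” to anyone’s face.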
Robocracy also contributes to the growth of the unpaid work that Guy Standing has drawn attention to. Frequent job changes mean more time spent applying for jobs, reskilling, and networking. Beyond that, there are self-assessed tax returns and work visas (for those who find work abroad), along with registering (and perhaps later deregistering, which is sometimes even harder) with other nations’ tax systems. Today we must also check out our own food at the supermarket and act as our own travel agents, booking hotels and planes and organising our itineraries. This is largely because an automated system’s time is free while an employee’s time is expensive. A customer or job applicant’s time may be valuable to them, but it costs companies and government bureaucracies nothing, so institutions are increasingly dumping work onto customers and applicants at every available opportunity. Once upon a time, if a company or a government asked a customer or a taxpayer to fill out a form, it had to pay a bureaucrat to read that form. Today robocratic algorithms can process it, with humans only looking at a small sample of flagged forms, or at metadata generated by statistically analysing thousands of forms. This creates a moral hazard for the designers of forms and applications to make them lengthier, effectively imposing unpaid work on the people who have to fill them in.
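As a hypothetical illustration of that triage pattern, consider the sketch below, in which thousands of submitted forms are processed automatically and only statistical outliers ever reach a human reviewer. The scoring rule and threshold are my own invented stand-ins, not a description of any real bureaucracy’s system.

```python
# Hypothetical illustration of automated form triage: humans only ever
# see the tiny fraction of submissions the algorithm flags as anomalous.
# The z-score rule and the threshold are invented for illustration.

import statistics

def flag_for_human_review(forms: list[dict], z_threshold: float = 3.0) -> list[dict]:
    """Return only the forms whose claimed amount is a statistical outlier."""
    amounts = [f["claimed_amount"] for f in forms]
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    flagged = []
    for form in forms:
        z = (form["claimed_amount"] - mean) / stdev
        if abs(z) > z_threshold:
            flagged.append(form)  # only these ever reach a human bureaucrat
    return flagged

# 10,000 ordinary forms plus one extreme outlier:
forms = [{"id": i, "claimed_amount": 500 + (i % 7) * 10} for i in range(10_000)]
forms.append({"id": 10_000, "claimed_amount": 50_000})
print(len(flag_for_human_review(forms)))  # 1 -- the other 10,000 are never read
```

Under this regime, adding ten more pages to the form costs the institution nothing at all; the entire burden of the extra length falls on the applicant.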
Beyond that, as AI advances, it will be capable of processing ever more complicated laws. There is a danger that laws may someday become too complex for human lawyers or judges to comprehend. At that point, it will be necessary to fully automate the court system. Past civilizations collapsed under the weight of their own bureaucracy. Today, however, intelligence is so cheap that the legal system might sustain itself even as it grows exponentially more complex. If it becomes too complex for humans to handle, the time may come when robot police bring human beings before robocrat judges and robot juries, which send them to fully automated prisons.
The potential of technology to serve the interests of its designers is massive. But what if the designers’ interests clash with those of other people? From the perspective of those at the receiving end, certain technologies may reduce quality of life and diminish autonomy. The effects of automating decisions that may harm people who never consented to let robots determine their destiny deserve our intense scrutiny.
John