Confucius felt truly conflicted. He had watched humans spark the engine of the first train and boot the first computer, but nothing had given him such chills of uncertainty as watching them race to build an artificially intelligent decision-maker.

Of course, the Old Master was skeptical of their self-assurance that they could create such a machine, but he couldn’t help pondering its implications for the system of ethics he had created.

He saw enormous potential in AI’s ability to process vast amounts of data, but also gigantic risk. The AI could make informed decisions, drawing on information at a scale unfathomable to finite humans. Rid of the all-too-human emotions that have corrupted politics over the millennia, this machine showed promise in weighing situations toward the golden mean. Perhaps it would bring about real harmony if it were allowed to make crucial political decisions.

The more he thought about it, the more his unease grew. Hard as he tried, he couldn’t predict how AI would interact with the often frivolous humans. Their spontaneity, after all, is a condition for an ethical life. Creating themselves as they go through life, their sociability enables them to learn from the virtuous how to be ethical, to cultivate virtue within themselves, and to apply it. How could an emotionless, automated decision-maker achieve anything beyond the simulation of an ethical mindset? More importantly, how could it ever inspire and teach virtue to the analog humans?

In an unusual moment of self-doubt, the man who clarified righteousness in China’s infancy thought to himself, “Perhaps the digital lies beyond the limits of my comprehension.” And with this thought, he plunged back into the real world.

He landed in Shenzhen; he figured a good place to start his probe into man-made intelligence was this freshly state-planned city. Dressed in postmodern threads, he wandered between the glass towers and spoke to ordinary people, asking them why they preferred to hand everyday decisions over to their smartphones.

A common reply ran along the lines of “It makes my life easier.” The humans, it seemed, were enamored with the algorithms’ speed at figuring out the most efficient way home or which products they might like to buy. “There is no other way,” others replied, defeated.

These answers perplexed Confucius even further. It was clear to him that computers could process an astounding amount of information. Issues of governance and judicial disputes could be resolved more harmoniously by a being that could deliberate using all this data. But these conclusions were solely based on quantitative data, which failed to capture a critical part of the human experience.

He met an engineer working on autonomous vehicles. The man kindly explained that, in order to prevent harm, they coded principles into the car’s algorithm, which in turn learned from a variety of situations how to uphold the coded values. Even though the car did not understand why causing injury was discouraged, and even though the technology wasn’t perfect yet, one day it would have analyzed so much data that it would outperform human drivers at preventing accidents.

“By applying this method to different cases where decisions are necessary, we can create the perfect ruler! The most accurate decision-maker of all time,” the engineer exclaimed.

Confucius asked the engineer, “If these machines are not learning why they should prevent accidents, then how will they ever learn to make all the everyday decisions that demand empathy and care for others?” To this, the man had no adequate response.

The Old Master was almost offended by the engineer’s naiveté. The man regarded good government entirely as a matter of accuracy, neglecting the vital ripple effect of virtue: inspiring individuals to be the best they can be.

After this conversation, the Old Master thought he had heard enough. He knew that, fundamentally, government is about making choices. There are few—if any—instances where one policy can be quantitatively determined as the best, and the drive behind implementing policy is not factual, but moral. Even the smallest government move is underpinned by a moral judgement: “we should provide a welfare state for the poor,” or “we should uphold the free market.”

However much data AI could process, it could never grasp human emotion, and it is emotional intelligence that brings about judgements, as opposed to mere decisions. Seeing the virtuous judge situations over time empowers and teaches ordinary people to become benevolent.

Of course, he thought, there are factual reasons why some propositions will bring about a better, more harmonious society, but facts are not the main drivers of the human condition. People believe in things: in human rights, or in the scientific method. These beliefs spur their behavior.

If any leader wishes to change society for the better, they cannot preach with facts alone. That is AI’s biggest shortcoming. The digitalized approach might succeed in producing factually sound decisions, but it will never succeed in inspiring people to be better, to judge fairly.

Already, humans have relinquished the future to these smart machines, as if they have no choice. So many have stopped fighting for what they believe in, as if their belief muscle has been numbed by social media.

The Old Master returned to the heavens, hoping that the human race would deliberate carefully before they let machines call the shots.

Eliza was TechNode's blockchain and fintech reporter until July 2021, when she moved to CoinDesk to cover crypto in Asia. Get in touch with her via email or Twitter.
