Bias in Artificial Intelligence? The irreplaceable riddle of man.
Because I run a website and depend on computers, I keep up with basic technology news. That’s how I know, for example, that so-called “smart” watches have provided data to help convict killers (see Fitbit Data Ties 90-Year-Old Man to Murder). A man visited his daughter-in-law and reported that she was fine when he left. But the woman’s Fitbit data, combined with neighborhood video surveillance, showed that her heart rate had spiked dramatically and then stopped entirely while his car was still parked outside her home.
I find such things interesting, but I did not need the technology news to alert me to the existence of prejudice in artificial intelligence (for example, see this story: Artificial Intelligence Has a Bias Problem, and It’s Our Fault). This is not real prejudice, of course, because machines cannot make judgments of any kind, whether based on extraneous factors or relevant information. Nor can they exhibit a morally culpable bias because they are not moral agents. The same deficiency applies to animals and plants, which are at least alive.
In the end, even “artificially intelligent” machines cannot be prejudiced because they have no intellect with which to form good or bad judgments for themselves, and they have no wills to interfere with such judgments, or to act upon them. Machines, with a complete lack of personal responsibility, do exactly what they are told unless they break. Their behavior has unintended consequences only because their designers and programmers are human. The programmers may write faulty code, causing the machine to do things that were not intended but are nonetheless what the code instructs it to do.
At the deeper levels of artificial intelligence, computing machines make all kinds of correlations among whatever data is available to them based on the coded toolset they have been given to process that data. But the idea of “intelligence” comes into play only through a fuzzy use of the term to describe data-and-response behaviors. These require not intellect but simple processing in accordance with rules the machine has no choice but to obey.
One of the unexpected side-effects of this process—at least for those who do not understand the fundamental, unbridgeable difference between the human person and a machine—is that human prejudices will always be reflected in the way artificial intelligence responds to the data it is given to process. A good example is provided in the article linked at the beginning of my second paragraph. Here “artificial intelligence” is generated from the books and articles fed into the machine, and the code which runs the machine makes it catalog “connections” based on things like word frequency and the proximity of some words to others.
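The kind of “connection” cataloguing described above can be sketched in a few lines of code. What follows is a minimal, hypothetical illustration (the tiny corpus and word pairs are invented for the example): a program that merely tallies which words appear together will reproduce whatever associations its source texts happen to contain.

```python
from collections import Counter
from itertools import combinations

# A hypothetical miniature corpus; any real body of books and articles
# would carry its authors' associations in exactly the same way.
corpus = [
    "the doctor examined his patient",
    "the nurse checked her patient",
    "the doctor reviewed his chart",
    "the nurse updated her chart",
]

# Count how often each pair of words appears in the same sentence.
cooccurrence = Counter()
for sentence in corpus:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        cooccurrence[(a, b)] += 1

# The machine "learns" nothing but these tallies: here "doctor" ends up
# linked to "his" and "nurse" to "her" purely because the text says so.
print(cooccurrence[("doctor", "his")])  # 2
print(cooccurrence[("her", "nurse")])   # 2
```

There is no judgment anywhere in this process, only counting; the pattern in the output is entirely a reflection of the pattern in the input.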
Another example can be found in the way robotic assistants and “fuzzy friends” can be programmed to respond to questions or even to moods, based on measurements of vocal tone, words used, or other physical symptoms (such as heart rate). More interesting applications for “smart” machines are being devised daily.
The Glory of Being Human
It may be nice to have a machine that sometimes responds in positive ways to our needs. But it can never be a substitute for somebody who truly knows and understands us in a human way. From machines we will get what the programmers think we are most likely to want, within the limits of the data-response patterns they believe to be appropriate—given their own human limitations and errors, their fully human prejudices and their characteristic human blindness.
There may also be immense utility in such developments. I already use Amazon’s Alexa to make my life easier in small ways through voice commands. This is possible because of the vast array of “skills” available for Alexa, and the ways in which the Alexa-powered devices can interact with other devices through WiFi, Bluetooth and even old infrared signals (when paired with a device that can translate commands into traditional remote control impulses). But this is no substitute for human interaction, and our society will grow psychologically, morally and spiritually sicker and increasingly desperate if we end up leaving more and more care of ourselves and others to machines.
We may hope that these machines will be as free of prejudice as possible. I expect that a car guided by artificial intelligence will stop automatically when persons of any color cross its path. But any given artificial intelligence may have more difficulty “recognizing” people with different skin colors in different situations, and who can say whether a car will ever be taught not to stop for someone whose facial characteristics, bodily shape, and/or dress are associated with a hated group? Might there already be a product in testing somewhere that will refuse to interact positively with...Catholics?
I am being whimsical, at least for now. My point is that there will be apparently prejudicial responses in artificial intelligence because human beings can do next to nothing without reflecting their own prejudices in what they do. The whole data set for artificial “learning” (correlation) is a mass of both human intelligence and human prejudice, and I am not even suggesting that this is a bad thing. One man’s prejudice is another’s patient and learned discrimination, and if the human person stops “discriminating”—stops separating right from wrong, stops distinguishing good from best, and stops striving for an awareness of when he needs to learn more to understand properly—then we might as well all be machines, singing in perfect harmony to a tune composed by Those In Charge.
The essence of what it means to be human in the world is perhaps best summarized in Pope St. John Paul II’s description of “man as a moral actor”. There is never any substitute for an informed moral judgment which does its best to take advantage of God’s grace so that it might not be influenced by falsehood or prejudice. It is just this that constitutes the glory of the human person, whom Alexander Pope described as “Sole judge of truth, in endless error hurl’d; / The glory, jest and riddle of the world” (Essay on Man, Epistle II, first stanza).
Artificial intelligence is useful, but it is not intelligence. We depend on our own intellect and will and moral judgment, and we benefit immensely from the intellect and will and moral judgment of others. These cannot be replaced by machines.
To put the matter succinctly: We need the riddle, because we need the love.
Posted by: garedawg -
Oct. 09, 2018 11:16 AM ET USA
If you've taken a statistics class, you may remember the procedure for drawing the best straight line through a graph of data. So-called Artificial Intelligence is just a glorified version of this. Certainly it is very useful for many things, but it depends completely on good data that is evenly spread throughout the domain of interest. "Machine Learning" is a better description, I think.
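The "best straight line" the commenter mentions is ordinary least-squares fitting, which can be computed in closed form. A minimal sketch (the sample points are invented for illustration):

```python
# Ordinary least-squares fit of a line y = m*x + b to data points.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x.
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    # Intercept: the line passes through the mean point.
    b = mean_y - m * mean_x
    return m, b

# Points lying exactly on y = 2x + 1; the fit recovers that line,
# but only because the data actually covers the region asked about.
m, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(m, b)  # 2.0 1.0
```

As the comment notes, the result is only as good as the data: a fit extrapolated beyond the points it was given says nothing reliable at all.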