The human paradigm of knowing, how it is different, why it is not replicable by AI

By Thomas B. Fowler | Jul 01, 2024

[Part 3 of Technology and the limitations of artificial intelligence]

Human Knowing and how it differs from AI

Most AI researchers concede that computers are not and cannot be sentient in the sense of perceiving things as real. “Sentience” means awareness of the world as something real and existing independently of us, but perhaps more importantly, it means awareness of other people as humans, and at least some understanding of how they view the world and perceive others.

Human knowing operates on a radically different principle from AI: it is a thoroughly integrated system of senses, motor skills, and brain, through which it makes direct contact with reality at a very basic level and uses that contact to build its knowledge of reality at higher levels. This direct contact also underlies the supremely creative way human knowing works: it is the basis for the human ability to deal with situations never encountered before and to generate new theories about reality, often in highly innovative ways. Humans can “think outside the box”; AI cannot.

AI can of course generate random “ideas”, understood in the rather limited sense of data structures or random chatbot statements; but that is not how humans develop new theories or deal with unexpected situations, as anyone who has done either can attest. This means that our impression of reality differs from anything achievable by a paradigm based on separation of functions:

The impression of reality is not impression of what is transcendent, but rather transcendental impression. Therefore “trans” does not mean being outside of or beyond apprehension itself but being “in the apprehension”, yet “going beyond” its fixed content. In other words, that which is apprehended in the impression of reality is, by being real, and inasmuch as it is reality, “more” than what it is as colored, sonorous, warm, etc. [Xavier Zubiri, Sentient Intelligence, p. 44.]

It is in this “more” that human knowing’s capabilities beyond the AI paradigm come into play. That paradigm, by design, can only ape what human intelligence does. The AI paradigm reacts to stimuli in the form of sense-type data; it cannot react except indirectly to any underlying reality. It does not “know” what it is doing because it has no contact with reality. Raj Rajkumar, a professor of engineering at Carnegie Mellon University who collaborates with General Motors Company, has admitted the fundamental difference between machines and humans:

We are sentient beings, and we have the ability to reason from first principles, from scratch if you will, while AI on the other hand is not conscious, and doesn’t even understand what it means that there’s a physical world out there. [“Self-Driving Cars Have a Problem: Safer Human-Driven Ones”, Wall Street Journal, 20 June 2019.]

Humanoid robots

Efforts to design humanoid robots—an acid test of AI’s ability to mimic human intelligence—have foundered. Rodney Brooks, co-founder of the iRobot Corporation, founded another company in 2008 to create “cobots”, “collaborative robots” designed to work alongside humans—already a giant step away from humanoid robots. The company folded because, as it turned out, “… building robots with human-like capability is really, really hard. There are many things humans can do easily that are almost impossible for robots to replicate.” So, even with all the astounding advances in computational power—probably far beyond anything Alan Turing dreamt of—robots (and AI) have advanced little beyond where they stood 70 years ago.

The Category Mistake

AI systems, or at least those who theorize about their capabilities, suffer from another problem: the category mistake. A category mistake occurs when one tries to talk about something using an inappropriate description or “category”; an example is “my feelings are green”. This is what occurs in many discussions of AI capabilities. In order to explain—or explain away—our ordinary experience of the world and other people, not to mention religious experience, any type of physicalist theory must show:

…what physical configuration in the brain corresponds, for instance, to concepts like fourth dimension, n-dimensional manifold, and the like…[They] will have to explain what well-determined pattern in the brain is the equivalent of the indeterminacy principle and of indeterminacy itself. They will have to show what molecular fullness corresponds to the concept of vacuum or empty space…They will be beset…with the problem of finding the physical force…that will adequately translate the feeling of love, hatred, and curiosity into the categories of physics. [Stanley Jaki, Brain, Mind, and Computers, p. 130.]

In all of the cases cited, the discussion falters because the two things compared are simply not of the same category. The effort turns into a bizarre fantasy that bears no relation to reality. But without the ability to explain such identifications, the theory that computer-based systems are somehow equivalent to the human knowing capability falls flat.

AI and Ethics

The philosophical ramifications of AI appear in another context, namely ethical theory. Faith in technology alone quickly leads to conundrums. For example, many today are concerned that AI will spin out of control and threaten humanity; as Elon Musk puts it:

It has the potential—however small one may regard that probability, but it is non-trivial—it has the potential of civilizational destruction. [“Elon Musk says his AI project will seek to understand the nature of the universe”]

As a result, some have embarked on a crusade to ensure that AI is deployed in an ethical fashion. This has become known as “effective altruism”.

[Effective altruism] believes that carefully crafted artificial-intelligence systems, imbued with the correct human values, will yield a Golden Age—and failure to do so could have apocalyptic consequences. [“How a Fervent Belief Split Silicon Valley—and Fueled the Blowup at OpenAI”, Wall Street Journal, 22 November 2023.]

The problem is that those in the technology community do not understand a key fact about ethics, viz. that there are no free-floating ethical theories. Any theory of ethics—any moral code—must be based on an antecedent theory about what is real. If one believes that God exists, and that the Ten Commandments were given, then this will imply the need to live in a certain way. If one does not believe that God exists, but only that the “material” world explored by science is real, a different moral code will ensue. The tendency among the technology community is to gravitate to some type of utilitarian ethics: what is good is what will provide the maximum “benefit” or “happiness”. As is well known, utilitarianism is unworkable because of the difficulty of defining benefit or happiness, and because actions never stop having consequences. Moreover, if one believes that there are truly binding moral imperatives, and unassailable knowledge of right and wrong, even in a few cases, he or she is committed to belief in something real that lies outside of science. The debate over so-called “effective altruism” reveals the kind of thinking involved:

The turmoil at OpenAI exposes the behind-the-scenes contest in Silicon Valley between people who put their faith in markets and effective altruists who believe ethics, reason, mathematics and finely tuned machines should guide the future. [“How a Fervent Belief Split Silicon Valley—and Fueled the Blowup at OpenAI”, Wall Street Journal, 22 November 2023.]

No, they can’t and they won’t. No binding moral injunctions are possible on such a basis, only pragmatic suggestions, because there is no metaphysical ground. Moral judgments about AI (or any other technology) cannot be made from within the ambit of technology itself; they must be made on a higher plane, outside the limited realm of science and technology, where a holistic view of knowledge and of the place of humans in the world order can be discerned. That is, they must be made in a viable faith-oriented context, and not on the basis of a surrogate religion such as those in the scientific and technological community often proffer.

Why AI Will Achieve but a Fraction of Its Goals

Several issues illustrate the key difference between AI “intelligence” and human intelligence:

a) Symbol manipulation vs. interaction with reality

The goal of human knowing is always to know something about reality, irrespective of any operational value. Neither an animal nor AI seeks the reality of the real. AI and computers must work with symbols, which function as signs for response, and in the case of computers and AI the responses are programmed:

A digital computer is a device which manipulates symbols, without any reference to their meaning or interpretation. Human beings, on the other hand, when they think, do something much more than that. A human mind has meaningful thoughts, feelings, and mental contents generally. Formal symbols by themselves can never be enough for mental contents, because the symbols, by definition, have no meaning (or interpretation, or semantics) except insofar as someone outside the system gives it to them. [“Artificial Intelligence and the Chinese Room: An Exchange”, New York Review of Books, February 16, 1989.]

The machines have no connection to what things are in reality; they can only manipulate symbols and then take some sort of programmed action, such as opening a valve or scanning a scene for obstacles:

As philosopher Charles Peirce observed more than a century ago, the links between computational symbols and their objects are indefinite and changing. The map is not the same as the territory. The links between symbols and objects have to be created by human minds. Therefore, computations at the map level do not translate to reliable outcomes on the territorial level. [“Mind Over Matter: Setting the Record Straight on AI”, Gilder’s Daily Prophecy, August 20, 2019, italics added.]
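To make the point concrete, here is a minimal, purely illustrative sketch in Python (the rule table and symbols are invented for this example and not taken from any actual system): the program maps input strings to output strings by lookup alone, and would behave identically if every symbol were swapped for a meaningless token.

    # Toy "Chinese Room": transform symbol strings by rule lookup alone.
    # The rules below are invented for illustration; the program has no
    # access to what (if anything) the symbols mean.
    RULES = {
        "SQUIGGLE SQUOGGLE": "SQUAGGLE",
        "SQUAGGLE SQUIGGLE": "SQUOGGLE",
    }

    def respond(symbols: str) -> str:
        # Pure lookup: replace one uninterpreted string with another.
        return RULES.get(symbols, "UNKNOWN SYMBOL STRING")

    print(respond("SQUIGGLE SQUOGGLE"))   # SQUAGGLE
    print(respond("HELLO"))               # UNKNOWN SYMBOL STRING

Any semantics here lives entirely in the heads of the people who wrote the rules and who read the output; the program itself only shuffles tokens.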

b) Difference between knowing what things are, and how things behave

There is a profound difference between knowing what things are and knowing how things behave. Though historically many have thought the two the same, or at least that one immediately leads to the other, in fact they are distinct. Knowing what something is engages the transcendental order of human understanding: how the thing relates to other things, and the fact that it exists in reality as a thing. Knowing how something behaves enables us to control it, or to make other things that behave in similar ways. Knowing how something behaves, or equivalently knowing how to make something that behaves in a particular way, does not operate at the most fundamental level of human knowing. AI systems perforce act only on the basis of how things behave, or nominalistically on the basis of names, but never on the basis of reality.
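A hedged illustration of the difference (an invented thermostat loop, not drawn from the article): the controller below needs to know only how the temperature behaves when the heater is switched on or off, and that is enough to control it; nothing in the code involves knowing what heat or temperature are.

    # Illustrative bang-bang thermostat: control based purely on observed
    # behavior (readings rise when the heater is on, fall when it is off).
    # Nothing here represents what temperature or heat *is*.
    def control_step(reading: float, setpoint: float) -> bool:
        """Return True to switch the heater on, False to switch it off."""
        return reading < setpoint

    # Crude simulation of the controlled system's behavior (assumed dynamics).
    temp = 15.0
    for _ in range(10):
        heater_on = control_step(temp, setpoint=20.0)
        temp += 0.8 if heater_on else -0.5
        print(f"temp={temp:.1f}  heater={'on' if heater_on else 'off'}")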

c) Difference between aiding human knowing and replacing human knowing

There is a distinction between aiding human knowing and replacing human knowing. It is clear that computers have been doing more and more of the first for decades. Our modern technological society could not exist in anything like its present form without computer-based automation of functions at one time done by humans. But computers are unable to do the second.

d) Creative thinking and understanding vs. rote or algorithmic manipulation

AI tools such as ChatGPT can scan the Internet and assemble a great deal of information, even invent “facts”, but they are not creative in the true sense. Human knowing, on the other hand, is nothing if not radically creative, even in simple everyday tasks such as driving a car, and especially so in science, mathematics, literature, music, art, and many other fields. The great advances in science always come when someone breaks with established tradition, as Einstein broke with establishment science in his theory of relativity. AI algorithms cannot creatively and analytically think through a question, using information learned from reading and research, where a critical eye is needed to discern what is valuable and a view of reality is needed to synthesize new ideas.

e) The effect of utilizing the wrong paradigm for human knowing in AI

The main effect is that AI will be expected to do things that it will never be able to do. This will cause expenditures of money and time that can never come to fruition, and attempts to substitute AI devices for people. In fact, the latter has been occurring for some time, with extremely frustrating results. The automated response systems used by many banks and other commercial entities are a case in point.

f) Locked into the past vs. looking to the future

With respect to human knowing, all categories of AI are backward-looking rather than forward-looking, because they are based on existing knowledge. None has the ability to create new visions of reality or new theories. This does not mean that they cannot be used to make predictions or forecasts about the future; even simple regression analysis can do that. And they can be used to enable us to see things that we otherwise could not see, such as simulations of the evolution of the universe. But these simulations are based on current theories, e.g., about the constitution of the universe and the laws governing it. AI cannot advance human knowledge in any theoretical sense.
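As a purely illustrative sketch (the years and values below are invented), even a one-line regression “forecasts” the future, yet the forecast is entirely a function of the past observations and an assumed model; nothing in it amounts to a new theory of what is being measured.

    import numpy as np

    # Invented historical observations (e.g., yearly measurements).
    years = np.array([2019, 2020, 2021, 2022, 2023], dtype=float)
    values = np.array([2.0, 2.4, 2.9, 3.5, 3.9])

    # Fit a straight line to the past; the model form is assumed, not discovered.
    slope, intercept = np.polyfit(years, values, deg=1)

    # "Forecast" by extrapolating the same line: backward-looking by construction.
    forecast_2025 = slope * 2025 + intercept
    print(f"forecast for 2025: {forecast_2025:.2f}")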


Previous in series: The AI paradigm of knowing and its problems (Part 2 of Technology and the limitations of artificial intelligence)
Next in series: Artificial Intelligence: Summary and Conclusions (Part 4 of Technology and the limitations of artificial intelligence)

Thomas Fowler Sc.D. has been analyzing data and programs for 50 years, serving as a consultant to government agencies. He has also been a professor of mathematics, physics, and engineering, and is the author of four books and 145 articles, several dealing with the climate change issue. He is especially concerned about the increasing politicization of science and engineering, and its effect on student education and the ability of elected officials to make accurate judgments. See full bio.

Sound Off! CatholicCulture.org supporters weigh in.


  • Posted by: djpbrennan4960 - Jul. 13, 2024 10:36 AM ET USA

    In a 1963 work by Neville Moray entitled "Cybernetics", Moray made a similar point to this article, contrasting "experience" and "behaviour". As an example, Moray notes that we can map the areas of the brain reacting when a light bulb is illuminated, but cannot program the experience of seeing a bulb illuminate. Moray also takes a page out of Turing's theses to indicate that since this is true of any machine it will always be true of all machines so "future" machines will not cross this line.