Artificial Intelligence: Summary and Conclusions
By Thomas B. Fowler | Aug 05, 2024
[Part 4 of Technology and the limitations of artificial intelligence]
Artificial Intelligence (AI) has become a trendy label for a class of computer-based technologies that seek to replicate or replace human knowledge and ways of knowing. Extravagant claims have been made for AI, leading to fears of AI “taking over” or causing catastrophes of various sorts. Because AI deals with notions of “intelligence,” “thinking,” and “knowledge,” it directly connects with philosophy. The limits of AI ultimately have to do with its fundamental inability to perceive reality. A brief foray into philosophy reveals that ideas about AI are based on erroneous notions of human knowing, stemming from the English empiricist tradition that culminated in David Hume.
AI is also grounded in certain standard engineering practices that are solidly based on our understanding of how to create reliable systems, but that were never intended to replace or replicate human ways of knowing. The danger associated with AI is not that it will take over the world or become sentient, but that, given the ongoing complexification of society, AI will be used to direct and control large-scale systems independently of humans, and therefore without the connection to reality that such control needs in order to stave off catastrophic errors.
The early dreams of computers becoming sentient and “taking over” have not materialized; scaling has not brought qualitative changes; and AI has gone on to solve important problems while becoming neither conscious nor capable of many simple human tasks. All of this points to the conclusion that humans are a different kind of reality, and that AI will continue along the path of supplementing human reason, replacing it only in narrowly focused applications. There may, of course, be many such applications, but they do not include any sort of “taking control.”
Difference between AI and human knowing
The clear implication is that the paradigm of knowing employed by AI is completely different from the way that human knowing operates. This will not be affected by changes in the scale of computer hardware, because it pertains to basic architectural and functional differences, and it ultimately limits the capabilities of any AI system, however implemented. The salient characteristics of human knowing not realizable with the AI model include:
- Creative judgement and problem solving
- Seeking underlying truths about reality
- Formulation of new scientific theories and mathematical theorems and new fields of science, mathematics, and other disciplines
- Understanding of things as real in a transcendental sense
- Creation of significant works of art and literature
- Critical discernment of the value of a text
- Ability to synthesize information in creative, holistic ways based on critical evaluation of sources
In order to be a “threat” to anything beyond certain types of jobs, AI would have to be able to do these things, which it cannot. It will forever be restricted to matters that can be handled in some algorithmic fashion. Therefore, AI does not show or prove that humans are just another material object, a conclusion that would render all forms of religion superfluous. It shows quite the opposite: humans are unique and not reducible to physical computing machinery.
Reasonable expectations for AI
What can we expect from AI and related fields, based on our past experience and our understanding of the paradigms of human knowing and the AI model? AI will:
- Be able to automate some jobs, and displace some workers (though at the same time creating new jobs)
- Be able to supplement and assist human research and development activities
- Be able to aid humans in many other fields and actions
- Be essentially an extension of current computer capabilities
- Never be able to automate most jobs, since most require creative action on the part of the job holder
- Never be able to “take control” because it does not perceive reality
Threats, roles, expectations
The real threat from AI comes not from any possibility that it will become sentient and smarter than humans, but from complexification issues associated with the use of AI-based systems to control critical systems such as the power grid, military decision-making, and economic systems. The threat is that programming errors, encounters with unanticipated situations, or hackers will disrupt the AI system and cause serious malfunctions. AI will also pose ethical problems in many areas, as well as important legal and societal issues, all beyond the scope of this article.
Referring back to the list of AI implementations in Part 1, we may summarize the role and expectations for each:
- Robots and robotic systems: Will never become sentient or able to take over functions requiring the ability to perceive reality, e.g., interaction with people on a personal level. They can be programmed for specific tasks and do them better or faster than humans.
- Neural networks and pattern recognition: Will fulfill specific functions that aid humans, but will never replace human abilities.
- Generative AI, including ChatGPT and similar applications: Their inability to do real research and make critical judgements requiring perception of reality dooms them to low-level roles, since no serious decision-making can be done on the basis of their “hallucinations”.
- Symbolic manipulation programs such as Mathematica: Extremely useful in the hands of those who understand the mathematics involved and just need calculation assistance; they are not mathematical “superintelligence” (see the sketch after this list).
- Autonomous cars and other autonomous systems: Inability to perceive reality will restrict their ability to replace human drivers, though AI systems can aid drivers in many ways.
- Large-scale control programs that use some combination of the above: Dangerous if implemented without some type of human oversight or in situations where human perception of reality is essential.
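To make the point about calculation assistance concrete, here is a minimal sketch using the open-source SymPy library as a stand-in for Mathematica; the library choice and the specific example are illustrative assumptions on my part, not drawn from this series. The program mechanically applies formal rewrite rules; it neither chooses the problem nor judges whether the result is meaningful.

```python
# A minimal sketch of symbolic "calculation assistance", using the open-source
# SymPy library as a stand-in for Mathematica (illustrative example only).
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x) * sp.exp(x)

# The system mechanically applies formal rewrite rules (the product rule, a
# table of antiderivatives); it has no understanding of what the results mean.
derivative = sp.diff(f, x)           # exp(x)*sin(x) + exp(x)*cos(x)
antiderivative = sp.integrate(f, x)  # exp(x)*sin(x)/2 - exp(x)*cos(x)/2

print(derivative)
print(antiderivative)
```

The human mathematician supplies the problem, interprets the output, and decides what to do with it; the program executes only the algorithmic step in between, which is precisely the supplementary role described above.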
It may well be the case that utilizing Hume’s theory of knowing, with all of its limitations, is the only real avenue open to AI systems. All evidence points to the conclusion that sentience and perception of reality will be forever out of reach.
Should AI work be shut down as an imminent danger to society? No, but at this juncture of history we face a dilemma: we need the capabilities of AI-type systems to sustain our ever-increasing societal complexity and our ever-growing repositories of knowledge. At the same time, we need to find ways to ensure that any system to which we entrust control will be safe, capable of effective human oversight, and on balance beneficial to society.
[For a more detailed version of this series, with complete references, please refer to my fuller treatment of artificial intelligence.]
Posted by: bkmajer3729 | Aug. 06, 2024 10:50 PM ET USA
Thank you, Dr. Fowler! Your set of 4 articles puts AI in proper perspective. The need for AI is real, but the need to understand its limitations and capabilities is greater. The last thing we need is for people to be afraid of AI. But we do need folks to maintain a healthy respect for technology and recognize the tremendous benefits when it is properly used and applied.
Posted by: ifomis2828 | Aug. 06, 2024 4:23 PM ET USA
Details of the arguments presented here are outlined in Jobst Landgrebe and Barry Smith's book Why Machines Will Never Rule the World. https://www.routledge.com/9781032309934
Posted by: feedback | Aug. 06, 2024 8:01 AM ET USA
Very informative! My fascination with AI had reached the same levels as my fear of Covid when, in February of this year, Google's Gemini and Meta AI became famous for creating images of "diverse" Founding Fathers and Popes.