Introduction to “AI Does Not Pose an Existential Risk to Humanity”
This work is licensed under a Creative Commons Attribution 4.0 License. You can use/share it any way you want, provided you attribute it to me (Blair Fix) and link to Economics from the Top Down.
ABOUT THIS ARTICLE – Editor
I was first introduced to Blair Fix’s blog a little more than a year ago, when the scandal about big grocery chains profiting from the COVID crisis first hit the news media.
His article “Mapping the Ownership Network of Canada’s Billionaire Families” helped explain the concentration of corporate control in Canada. It was far more helpful than listening to the predictable analyses that I found in traditional media. CAPITALISM versus FEUDALISM arose after I found myself particularly annoyed at the pitiful excuse for an analysis by a talking head consulted by interviewers at the CBC.
While I might not always completely agree with Dr. Fix’s take, his analysis shows more of what universities call ‘critical thinking’ than I’ve come to expect elsewhere.
“Competence, not intelligence
Let’s start with the elephant in the room, which is ‘intelligence’. Humans love to talk about ‘intelligence’ because we’re convinced we possess more of it than any other species. And that may be true. But in evolutionary terms, it’s also irrelevant. You see, evolution does not care about ‘intelligence’. It cares about competence — the ability to survive and reproduce.
Looking at the history of evolution, philosopher Daniel Dennett argues that the norm is for evolution to produce what he calls ‘competence without comprehension’. Viruses commandeer the replication machinery of their hosts without understanding the genetic code. Bacteria survive in extreme environments without comprehending biochemistry. Birds fly without theorizing aerodynamics. Animals reproduce without grasping the details of meiosis. And so on. In short, most of what makes an organism ‘competent’ is completely hidden from conscious understanding.”
(See also “Good Enough” by Israeli philosopher Daniel Milo)
“Tech moguls like Sam Altman (the CEO of OpenAI) likely have a cynical agenda. When Altman hypes the risks of AI and calls for government regulation, what he really wants is to build a moat around his technology. If AI is tightly regulated, it means that the big players will have a built-in advantage. They can pay for the various ‘certifications’ that ensure their technology is ‘safe’. The small companies won’t be able to compete. In short, when the titans of industry call for government intervention, it’s almost surely self-serving.”
“A truly great business must have an enduring ‘moat’ that protects excellent returns on invested capital,” Buffett said. The advantage of a ‘moat’ is that it protects a business from challengers.
Warren Buffett