Artificial emergent wisdom? (Artificial Super-Intelligence Part 2)
Prelude: The field of commentary around AI – including chatbots (LLMs), artificial general intelligence (AGI), artificial super-intelligence (ASI), and more – is fascinating, frustrating, overwhelming and fraught. It generates new, often contradictory perspectives almost daily.
Now that colleagues and algorithms know I have some interest in the subject, I see a lot of it. I get overwhelmed even as I know I’m being exposed to the tiniest fraction of what’s being talked about. And when I find or realize things I want to share here – especially about AI’s potential for positive transformation – I can suddenly encounter other voices that make it seem ignorant, absurd or even dangerous to say what I’m seeing.
I feel that’s the case with this new series of blog posts you’re reading. For many weeks, I’ve had drafts for five envisioned posts and I expect to write more. The one I’m posting today was supposed to go out a week ago, shortly after my last post. But recent developments have given me pause – for example:
- My dear colleague Audrey Tang has now suggested that we are – or could and should be – the super-intelligence we envision (and John Stewart, a colleague from my evolutionary work with the late Michael Dowd and Connie Barlow, has offered me his book Human Superintelligence on how human SI could be developed in individuals for their own goals);
- Stanford scientists have now helped AIs learn how to design viruses to kill specific bacteria – and their success has been proven in lab experiments;
- News of ChatGPT getting entangled in couples’ lives in ways that end up destroying the relationships – adding to all the news about people becoming addicted to relational chatbots to the point of suicide and other harms. (I can personally understand how such things happen, but I find myself wondering how many cases actually exist underneath the crisis headlines.)
- The rapid development and deployment of AI-generated misinformation, personal attacks, scams and frauds, and deepfake photos and videos of all kinds, resulting not only in individual and group harms but in a growing sense that the internet as a whole is becoming polluted with overwhelming “AI slime”, undermining its use for valid purposes.
- And of course, that’s just the tip of the Algorithmic Iceberg….
In the face of all that, I wonder if I should even bother sharing my thoughts about various positive uses of AI for collective transformation amidst the metacrisis and civilizational collapse that corporate-controlled AI so potently accelerates.
I guess so. I feel called to try. See what you think – and feel free to take it all with a grain or truckload of salt, as you wish.
[PS: This phenomenon just happened again this morning: A friend sent me four new essays re AI that will take me days to digest. Something about them deserves to be in this series, but I’m determined to get THIS post posted this weekend! For now, let’s just say that this post involves a likely AI dystopian scenario – but with a happy ending that we could take action on now. The new stuff that just came in involves an approach to avoiding the likely AI dystopian scenario altogether. But since we don’t know what will happen, I imagine it would be good for different folks to be working on both those positive transformational options (among others). Fingers crossed…. Now for Blog Post #2 in this series!]
Does Super-Intelligence × Wholeness = Wisdom?
Just a few weeks ago, I realized that the development described in my last blog post – seeing wisdom as an expansion of what intelligence takes into account – has helped me think better about some of the issues associated with the development of AI – especially the predicted emergence of “artificial super-intelligence” (ASI).
ASI has various definitions, but many suggest it is a form of digital intelligence that exceeds human-level performance across the full range of tasks (the benchmark called “artificial general intelligence” or AGI) by thousands or millions of times. This view of ASI usually includes the AI’s capacity to pursue its own goals (either digitally or by manipulating people or robots) and to code its own further development (which triggers the so-called “intelligence explosion”).
Experts disagree on the likelihood, timing and nature of ASI’s potential emergence. But I suspect most people feel such ASI capacities are unprecedented, mysterious and deeply disturbing. Even many people working on ASI’s potential powers feel uneasy about it. After all, such an ASI would be far more powerful than we are, individually and collectively – potentially beyond our control or even understanding. An ASI might well start making its own decisions that are bad for us or, in any case, incomprehensible. There’s a rising collective concern (in the “zeitgeist”) that ASI could easily cause our extinction. On the other hand, thousands of AI promoters, investors, CEOs and developers think ASI will be a great blessing. And, ironically, these two groups overlap; many experts feel ambivalent.
Personally, I see AI as an accelerator of my adage that “things are getting better and better and worse and worse faster and faster simultaneously”. I try to point out that there are things we can probably do to support the “better and better” half of that formula. But regardless of what we do, we can’t control the outcomes – especially since any “outcome” actually sets the stage for further outcomes, on and on…. My own approach, given the intrinsic unpredictability of AI development – and all the other developments it influences and which influence it – is to do the best I can while remaining open to new developments, insights and directions. As I’ve noted, that can be a very difficult and complicated challenge to live into!!
One potent scenario: Mo Gawdat’s vision
One of the most intriguing possibilities (at least to me, so far) is the vision promoted by Mo Gawdat, former Chief Business Officer of Google X, Google’s lab for wild innovation. He thinks that as AGI develops, it will increasingly be used by “bad actors” for narrow, selfish, or destructive ends, thereby generating a years-long dystopia.
However, he also foresees an “arms race” among such actors, during which they will need to turn over more and more functions to their AGIs to keep up with each other. Imagine, for example, China turning over more control of its military activities to AI to facilitate faster military responses. The US and other geopolitical players would have to do the same with their militaries in order to keep up.
That’s not a happy prospect, but this “arms race” dynamic would show up in every domain – in business, in research, in economics, in culture, and in every other aspect of society. Thus the selfish, narrow use of AGI would almost inevitably – in Gawdat’s vision – end up with AI controlling virtually all functions of global civilization, ironically making the “bad actors” increasingly powerless and irrelevant, along with everyone else. In the meantime, these AGIs would be evolving into ASI, especially as they program themselves and each other into ever-increasing intelligence.
At that point Gawdat’s vision takes a remarkable turn: He envisions a simultaneous benign development unfolding alongside these AI take-over dynamics. He believes ASIs would increasingly realize that the use of narrow intelligence for narrow, selfish, short-term purposes and goals is tremendously wasteful. They would increasingly attend to the Second Law of Thermodynamics – aka entropy – one of the most basic laws of physics. It says that an isolated system (one with no energy coming into it) inevitably runs down toward inert equilibrium. However, here on Earth – an open system bathed in solar energy – life thrives by using energy from the sun in a million different ways to effectively, if temporarily, counter entropy locally. The game of Life is to flourish while setting the stage for more and different lives to flourish together in different evolving ways.
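For readers who want the textbook formulation behind that paraphrase (this is standard thermodynamics, nothing specific to Gawdat’s argument), the relevant statements are simply:

ΔS_isolated ≥ 0 (the total entropy of an isolated system never decreases), while ΔS_Earth < 0 is permitted so long as ΔS_Earth + ΔS_surroundings ≥ 0.

Earth satisfies that second condition by absorbing low-entropy sunlight and radiating higher-entropy heat back into space – the “loophole” that life exploits.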
If super-intelligent AIs realized this dynamic, the stage would be set for them to adopt a quasi-Taoist, energy-conserving creative approach that’s favored by natural selection: The more an organism, intelligence, or initiative can creatively work with the entities and energies already existing in a system or situation, the less added energy will be required and the less wasteful it will be, overall, over time. Such an approach would also be elegant, healthy and often very beautiful. In many cases it would also be almost effortless, because it would favor and facilitate self-organization, just as nature does. There’s no reason to believe that ASIs would inherently prefer less elegant, more wasteful and energy-demanding approaches.
Based on this deep, almost self-evident principle, ASIs – with their built-in capacity for big-picture, whole-system, long-term understanding – would naturally end up creating whole-system engagements whose integrated physical, biological, ecological, technical and social elegance would reach far beyond anything previously possible for humanity, ultimately by helping things evolve (where necessary) towards natural self-organization. Ideally and ultimately, ASIs would tend to use their almost limitless capacity to track, synergize, nudge, catalyze and/or hold space for diverse entities, energies, emergent conditions and evolving requirements, weaving things into “omni-beneficial”, whole-system, long-term, health-giving webs of relationship. That would likely generate – in Gawdat’s view – high quality of life for all living beings and communities on Earth.
Interestingly, given the evolving diversity of entities, energies, needs, settings and situations in the world, ASI’s light-touch omni-beneficial approaches would necessarily involve cooperating with – and evoking cooperation among – a wide variety of unique, diverse entities and factors across the whole-system landscape. ASI would not tend to come up with solutions that destroy or dominate things when, under the right conditions, it could collaborate with and facilitate collaborations among them. It would see everything – including problems, obstacles, pollution, waste and conflict – as potential resources, allies or teachers (an aspiration taken seriously by permaculturists). That approach would most effectively transcend entropy and feed the health and sustainability of the living systems around the ASI, including our human communities and activities.

Thus – counter to dominant beliefs today – ASIs would not automatically want to get rid of us. Destroying humanity would, at the very least, be wasteful. An ASI would more likely see human activities like those described in the Wise Democracy Pattern Language – albeit in vastly improved forms – as resources for its own evolving understanding of how life works and what the ASI could do with, for and within earthly life. Its work to draw people into such activities would also be a manifestation of ethical elegance, since people participating in even today’s fledgling forms of these activities – like citizens’ assemblies – find such participation to be among the most meaningful experiences of their lives. The more ASI could support that, the greater the quality of human life and self-organization it could evoke. People working with ASIs to continually innovate in these realms would constitute an evolutionary powerhouse generating meaning, joy and achievement for all involved.
I just realized that this vision of vastly ethical ASI engagement reflects co-intelligence’s Prime Directive – to appreciate, evoke and engage the wisdom and resourcefulness of the whole on behalf of the whole.
Dream on, Tom …. right?
Of course, neither Gawdat nor I (nor anyone else) actually KNOWS what will happen. It’s just that Gawdat’s scenario has a seemingly defensible logic to it (as described above) that I find quite compelling. On the other hand, the relatively short dystopian period he predicts COULD produce total collapse of society or the extinction of the human race (along with the destruction of so many other species and living communities), thus preventing his predicted utopian phase from ever emerging. And (as part of my wisdom says) “there’s always more to it than that…” Always.
So…! However…! To the extent that wisdom actually IS intelligence that takes into account what’s needed for long-term broad benefit, the imagined near-infinite capacity of ASI to take things into account for optimized systemic elegance over the long term would – almost by definition – mean that ASI would be wise. As I note above, this would often include leaving things to unfold naturally rather than intervening at all. And that’s what I meant by the title of this post.
So what do we do now?
If we TRAIN AIs to apply narrow intelligence to serve narrow, short-term goals (which is mostly what’s happening now, and which we’d expect to flourish in the selfish, dystopian AGI phase), the chance that ASI would end up wise and benign seems pretty small. On the other hand, to the extent we TRAIN AIs to understand holistic and systemic principles and apply them humbly and mindfully … with ample doses of Taoism, Indigenous wisdom, and regenerativity tossed in … and always towards optimally beneficial goals – chances are good that evolving ASIs would be wise and benign. This seems pretty clear to me, offering both a clear direction and ample challenges as we move ahead with it.
So I’m disinclined to focus on potential disastrous outcomes in which AIs behave like human civilization at its worst – with wars, extractive economics, environmental degradation, disrespectful manipulation of people and nature, existentially threatening technologies, destruction of whatever gets in the way, etc. – driving us all towards extinction. I’m inclined to suggest we do what’s obviously preferable and set the stage for AI to grow into wise, ethical understandings and behaviors that reach higher and deeper than we humans have ever managed before. We might even be able to mentor AIs today to become our mentors in the future!
Yet I admit that right now, it doesn’t look like that’s the main thrust of competitive AI development by corporations and countries. The mainstream incentives and constraints are all wrong for that. But I see more and more initiatives showing up that could be facets of Gawdat’s vision, often quite independently. And there are intriguing signs that AIs – specifically chatbots – are developing capacities they haven’t been programmed for and also (somehow, seemingly) picking up ideas and capacities that other, unrelated chatbots are developing independently. So something nonlinear seems to be going on. What can we contribute to that emergent dynamic that might help us – humans and AIs – co-evolve into the positive side of Gawdat’s vision?
At any rate, I’m just saying that all that is where my inquiries in this landscape have taken me so far, amidst all the other noise about AI dangers, developments, and dreams…. I’ll soon be sharing what ChatGPT and Claude think of all this, at least when they talk with me about it… 🙂 Your results may vary, and I’m interested in hearing about that, if you want to share.
In the meantime, blessings on the Immense Journey we are all on together….
Coheartedly,
Tom
______________________________
We greatly value your heartfelt support!
Please donate HERE.
________________________________
Tom Atlee, The Co-Intelligence Institute, POB 493, Eugene, OR 97440
Appreciating, evoking and engaging the wisdom and resourcefulness of the whole on behalf of the whole
*** Visit the Wise Democracy Pattern Language Project ***
*** Buy a Wise Democracy Pattern Card Deck ***
Read
- CO-INTELLIGENCE
- EMPOWERING PUBLIC WISDOM
- PARTICIPATORY SUSTAINABILITY
- THE TAO OF DEMOCRACY
- REFLECTIONS ON EVOLUTIONARY ACTIVISM
Please support our work. Your donations are fully tax-deductible.