Weaving Greater Intelligences Together

——-
My quiet winter fundraising campaign
will continue through January…

So far we’ve received $775 from 15 people
(out of 1500 subscribers).

Might you join them?
If you find my work meaningful,
do send some support
(it’s tax-deductible).
Thank you so much!
– Tom
——-

Artificial Super-Intelligence meets Natural Super-Intelligence through Regenerativity

As you may have noticed on this blog, my explorations into Regenerative AI Ethics took a turn in early October into consultations among several chatbots and myself. Experimenting with how the chatbots – Claude, ChatGPT and AlterAI – could see and respond to each other’s ideas was itself a bit of an innovation. I like how it worked out.

Over three initial consultations and nine conversational rounds, I explored with my digital colleagues whether artificial super-intelligence (ASI) would inevitably generate wisdom (since it would be so profoundly smart). I suspected it might be able to do that, given my usual definition of wisdom as taking into account what’s needed for long-term broad benefit. After all, ASI could – at least theoretically – take into account a LOT more things out in the world than human intelligence could (with some critical exceptions, depending on one’s definition of intelligence).

Imagine “an intelligence that can see all consequences at once” across many domains, scales and time frames. It would see a “big picture” beyond normal human comprehension. Perhaps most importantly, ASI would learn – through training and/or real-world experience – to appreciate the constraints dictated by the realities of earthly existence – things like systemic interdependence, limited material resources, and natural laws like thermodynamics. (Our anthropocentric hubris and our insistence on ignoring these limits continue to generate the systemic drivers of civilizational collapse.) ASI would appreciate how Life dances creatively with such constraints, using regenerative dynamics like cooperation, synergy, sharing, redundancy, holoficiency, co-evolution, and well-utilized diversity, while metabolizing and re-cycling everything through nature, over and over.

When we humans try to track such evolving complexity, we tend to get rapidly overwhelmed, while for advanced ASIs it might be “just a matter of computation”, adjusting things here and there as needed. Significantly, ASIs would tend to preserve existing living systems that work well with regenerative dynamics. They’d not waste energy “wiping out humanity” pointlessly, maliciously or obliviously as so many people fear.

However, ASI might well seek to transform the extractive, wasteful, toxic dynamics through which modern civilization harms living systems and undermines regenerative dynamics. Such a transformation is – and would be – experienced as threatening by modern civilization’s near-sighted, self-interested, harmful systems and life-ways. (This is a largely unacknowledged bug in almost all “AI alignment” efforts. Not only are such efforts a fool’s errand in general, but they almost always shy away from AI’s implications for the values, needs and dynamics of natural living systems. By seeking to align AI to so-called “human values”, such efforts unconsciously lean toward modern civilization’s values, thereby preserving our civilization’s brilliantly invisibilized toxic nature and its urgent need for transformation… and thereby driving us toward collapse.)

So it would be wise for us to stop creating those harms as soon as possible and find less harmful ways to live. In fact, we’re challenged by this predicament to flourish with less stuff and more joy and meaning, in deep kinship with all the remarkable aliveness in, among, around and beyond us. As we develop such new/ancient ways of living together, it would behoove us to work WITH all available intelligences – our own, each other’s, AI’s, and nature’s – to transform our cultures towards broader, deeper regenerativity. We’d not only learn A LOT, but also develop working relationships with emerging ASIs and with ancient natural wisdom dynamics to help preserve all life – including humanity.

While all that may be interesting to contemplate, there’s still a question of whether ASI will even be developed. It’s controversial. Some tech experts think it’s impossible, others think it’s imminent, and still others think it’s dangerous. A 2022 survey found that “half of AI researchers believe there’s a 10% or greater chance that humans will go extinct from their inability to control AI” – even as they work to develop it. But none of us actually knows what will happen. So there are other resources to develop and call on…

Of Human Super-Intelligence and Plurality

Audrey Tang, one of the evolutionary agents I most appreciate, now suggests that the super-intelligence we actually need most is us. We need to develop cultures, systems and technologies that make thinking, feeling and working together more enjoyable and rewarding than fighting and manipulating each other. Audrey has been pioneering technologies that turn polarization into new possibilities and collaborations – not just in a few cases, but in systems people use everywhere every day.

I met Audrey in 2018 because she was using Polis – an AI-facilitated, bridge-building participatory polling platform – to involve thousands of Taiwanese in national decision-making, uncovering what Audrey calls “uncommon ground”: the common ground that nobody knew existed or was possible before.

For example, imagine what life would be like if our current tsunamis of polarization and misinformation were turned into viral laughter and newly creative approaches to governance – transforming, bypassing and/or disarming “bad actors” and toxic systems along the way. The progress Audrey and her colleagues have made in this earned her this year’s Right Livelihood Award (the “alternative Nobel Prize”). (I’ll be sharing more about this soon.)

Underlying Audrey’s work is her deep philosophy of “plurality”, which involves enhancing everyone’s unique gifts while enabling their interactivity to reveal novel “uncommon ground” that appeals to all of them. Polis does that by (a) surveying people for their best ideas, (b) having them “vote” each other’s ideas up or down, (c) clustering participants into diverse cohorts of shared perspectives, and (d) highlighting “bridge” ideas with which the vast majority of participants agree or disagree, across all their other differences. When I first heard about Polis, I was most impressed with its capacity to evoke such consensus without anyone talking to each other AND – significantly – with its creators’ intention to use it not to make decisions but to inform face-to-face dialogues and deliberations that generate higher levels of understanding and even more promising possibilities.
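For readers curious how steps (a)–(d) might fit together mechanically, here is a deliberately simplified sketch of the “bridge statement” idea in Python. This is NOT Polis’s actual algorithm (which uses statistical clustering on full voting matrices); the function names, toy data, and 70% threshold are all hypothetical, chosen only to illustrate the concept of finding statements that a large majority of every opinion group supports.

```python
# Illustrative sketch of Polis-style "bridge statement" detection.
# Hypothetical names and data; not the real Polis implementation.

def agreement_rate(votes, group, stmt):
    """Fraction of a group's cast (non-pass) votes that agree (+1) with a statement."""
    cast = [votes[p][stmt] for p in group if votes[p][stmt] != 0]
    return sum(1 for v in cast if v == 1) / len(cast) if cast else 0.0

def bridge_statements(votes, groups, threshold=0.7):
    """Statements that a large majority of EVERY opinion group agrees with."""
    n_stmts = len(next(iter(votes.values())))
    return [s for s in range(n_stmts)
            if all(agreement_rate(votes, g, s) >= threshold for g in groups)]

# Toy data: each participant's votes on 4 statements (+1 agree, -1 disagree, 0 pass).
votes = {
    "a": [ 1, -1,  1,  1],
    "b": [ 1, -1,  1,  0],
    "c": [-1,  1,  1,  1],
    "d": [-1,  1,  1,  1],
}
# Two opinion groups, as if already clustered from the divisive votes on statements 0-1.
groups = [["a", "b"], ["c", "d"]]

print(bridge_statements(votes, groups))  # -> [2, 3]: the "uncommon ground"
```

Statements 0 and 1 split the two camps, yet statements 2 and 3 are endorsed across the divide – a tiny version of the “uncommon ground” Polis surfaces at scale.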

I was overjoyed to realize that plurality philosophy also embraces practices like Scott Spann’s whole-system problem-solving “emergent impact” approach that maps the desires and obstacles experienced by a full “360 degree” cohort of major stakeholders in a situation or community. Scott’s iterative “solving for the impossible” process produces shared awareness of the whole situation’s dynamics, shared aspirations held by all involved, and shared clarity about 3-5 leverage points which, if well addressed by all of them, will vastly improve their collective prospects. This exemplifies the plurality principle of promoting unifying understandings and possibilities without eroding the unique gifts of the people, groups and perspectives involved (e.g., through compromise or power plays).

In addition, given the emerging possibilities presented by ASI, now is the time to get clearer about the dimensions of life where human experience and intelligence can sense into realms that even an advanced ASI would probably not be able to track. This is important. Right now, most people seem to believe that humans are exceptional and can never be surpassed by AI – even as chatbots progressively outclass humans on one benchmark after another (though not yet on every benchmark, which would constitute “artificial general intelligence”, or AGI). I believe that assumption is not only arrogant but a dangerous blind spot. We should set aside our presumptive human superiority complex and start delving into what our actual unique contributions would be to hybrid partnerships with both emerging forms of AI and the long-ignored wisdoms of nature. There’s a very potent division-and-integration of cognitive and affective labor waiting to be developed here, one that would profoundly shape human and planetary destiny.

How might we integrate them all to meet the metacrisis?

So in my New Year’s project for 2026, I plan to further explore – with both human and digital colleagues – how to integrate ASI with plurality-supported human super-intelligence along with the super-intelligence of nature’s regenerative wisdom. I see this as a (r)evolutionary opportunity to manifest the Co-intelligence Institute’s Prime Directive – appreciate, evoke and engage the wisdom and resourcefulness of the whole on behalf of the whole – at local and planetary scales as we move together into fraught futures potentially filled with promise.

Is this something you’d like to support to help us continue…?

Coheartedly,

Tom

______________________________

We greatly value your heartfelt support!
Please donate HERE.

________________________________

Tom Atlee, The Co-Intelligence Institute, POB 493, Eugene, OR 97440

Appreciating, evoking and engaging the wisdom and resourcefulness of the whole on behalf of the whole

*** Visit the Wise Democracy Pattern Language Project ***
*** Buy a Wise Democracy Pattern Card Deck ***

Please support our work. Your donations are fully tax-deductible.
