Chatbots offer guidance for ASI (Artificial Super-Intelligence Part 4)

Seeking to exemplify the kind of partnership and co-intelligence I imagine might be positive between humans and artificial super-intelligences (ASIs), I decided to engage with two leading chatbots about the prospects for truly wise ASIs who may – depending on your scenarios – end up deeply involved in shaping the course of events in the world and the fate of humanity and the biosphere.

I initially prompted both Claude Sonnet 4 and ChatGPT 5 separately with the following introduction:      

   “When you look forward to the possible development of artificial super intelligence (many orders of magnitude smarter than any human group) which ends up controlling increasing domains of human activity, what principles, understandings or motivations do you envision guiding that ASI in its work?  I’d like to hear about the KIND of guidance, as well as any specifics you think are relevant here.  Also: To what extent do you think such AI guidance would emerge or develop naturally over time (e.g., learned from experience) and/or need to be programmed or trained in by human developers?”

The conversations with these chatbots were substantive and distinct, each with that chatbot’s typical flavor:  Claude’s approach was philosophical, insightful, and nuanced.  ChatGPT creatively wove a sprawling, visionary web of salient bullet points oriented toward action.  I see them both (a) as worthy of dialogue and development and (b) as potentially complementing each other.

Claude originally suggested embedding “deep alignment with human flourishing” as the basic guidance for ASIs.  Interestingly, he included a lot of human agency and autonomy in his idea of human flourishing, countering common fears that ASI would “take over” from humans.  However, I told him that his framing, while far better than the usual efforts “to align AI with human values”, was still too anthropocentric for me.  I pointed Claude to the idea of regenerativity and its relationship to the Second Law of Thermodynamics (i.e., the way Life creatively dances with entropy), a principle so fundamental that I thought ASI might discover it by itself, even if it wasn’t trained on it.

Claude agreed “that life’s genius lies precisely in these temporary but persistent patterns that create order, beauty, and complexity while working with rather than against thermodynamic realities.”  He also liked the idea of sharing our actual conversations publicly on my blog, thus “demonstrating that AI systems can engage meaningfully with these concepts right now”.

ChatGPT, in contrast, began by examining what kinds of intrinsic motivation ASIs might have.  Then it offered some well-considered ethical principles and systemic understandings to guide ASIs, along with ways to program or train these into AIs as well as ways AIs might learn what’s important on their own.  Significantly, it also explored how those two modes – programming and self-learning – might be integrated.  Finally, ChatGPT pointed out some risks and tensions to be aware of while pursuing its recommendations.

Based on all that, it then proposed a “Regenerative Constitution for Superintelligence” and wrote an open letter from itself as a current archetypal AI looking toward its developmental future.  The letter was addressed to major players in today’s rapidly developing AI realm.  The chatbot urged AI developers, policy-makers and society as a whole to take specific actions (that it listed) to help today’s AIs develop into their best selves and roles.  ChatGPT closed with a list of “concrete near-term steps you can implement within months” as well as a short oath for AIs to take and be held accountable to.  I found these forms ChatGPT created to embody and “hold” all its principles to be as fascinating as the principles themselves.

In fact, I believe the creative work done by both these chatbots could prove profoundly useful.  Our conversations generated tons of material that could be taken up in further dialogue among ordinary citizens and professionals in the AI field – along with other AIs.  And by saying that, I don’t mean for such dialogue to be limited to what Claude and ChatGPT said here.  Indeed, many aspects of what they said may not be correct or may involve unacceptable trade-offs or consequences – factors to be explored by humans.  Where necessary or generative, we need to correct the bots and/or engage them in correcting each other, a process through which they would all learn more of what’s needed as they become more influential.

I view what’s here and what’s needed as a conversation among diverse peers.  And I’m hoping that seeding this rich package of principles, possibilities and cautions out into public discourse will – in many unexpected ways – help further the development of life-enhancing AI over the coming decades.

As an experiment, I’m offering two versions of my conversations with these chatbots, one of which allows viewers to add their comments:

To just view and read the conversations, use this link: 

https://docs.google.com/document/d/1ohuze7GGOG-qKL-Wf5mMlHD7BrGDcuV3O15Jq5ceGDs/edit?usp=sharing

If you wish to leave comments in the conversation document, use this link:

https://docs.google.com/document/d/1f9YnKjWq3ukAdVSG3fisE2vKeXIRB0VLwTkvPeUEwNg/edit?usp=sharing

If you would much rather have the entire conversations embedded in the newsletter or blog post (rather than linked to a Google Doc), email cii@igc.org with “EMBED” in the subject line.

Of course, more will follow….

Coheartedly,

Tom

PS:  You might enjoy taking a critical look at my prompts.  I’m less asking the chatbots for answers and more engaging them in shared inquiry.  Sometimes I follow their lead, sometimes I interrupt it, sometimes I invite their thinking in new directions.  But I always respect them, and they respect me (or do a good job simulating respect, and even appreciation).  I want genuine co-intelligence to emerge between us, which requires nurturing a spirit of partnership.  I’m still learning how.

______________________________

We greatly value your heartfelt support!
Please donate HERE.

________________________________

Tom Atlee, The Co-Intelligence Institute, POB 493, Eugene, OR 97440

Appreciating, evoking and engaging the wisdom and resourcefulness of the whole on behalf of the whole

*** Visit the Wise Democracy Pattern Language Project ***
*** Buy a Wise Democracy Pattern Card Deck ***
