ChatGPT – A Partner in Unknowing (AI Meets Wisdom Series – Part 1)

ChatGPT can help us realize how much we fall short in our approaches to complex challenges.  By wisely using its capacity to summarize the state of mainstream knowledge about how to approach such challenges, it can bring us into a state of unknowing.  From that place outside of arrogance and certainty, we can become more discerning, creative and adaptive in our responses, improving our prospects for a less catastrophic, more life-affirming future.

When confronted with social or environmental challenges, most thinking people try to embrace the best approaches articulated by mainstream experts and actors and reject any heavily biased, shallow, wasteful, or destructive options.  They promote what are broadly considered thoughtful, beneficial approaches.  We find versions of this on all sides of political advocacy and decision-making.

While this approach works on straightforward problems, not all problems are as straightforward as they seem.  All too often, seemingly common-sense approaches don’t actually get implemented or, if they do, they don’t work out as well as we imagined or, even worse, they generate problematic side-effects.  The complex evolving world we live in all too often seems determined to defeat our best efforts to do obviously good things.

In the following article Dana Karout notices something that I’ve been noticing about applying ChatGPT to major social and environmental challenges: GPT is really good at gathering, organizing and summarizing best-practice approaches that have been suggested and tried out by mainstream experts and officials.  Disconcertingly, it accomplishes that miracle in mere seconds.  

When I first observed this, it gave me a strange, surreal “high” feeling.  What could the world be like, if this is possible?  But Karout noticed something else. She realized that that seemingly brilliant phenomenon just leaves us with the problems I described in my second paragraph above.  Both AI and thousands of experts promote approaches that are increasingly inadequate in light of the growing complexity of real life.  

Life now demands from us deeper insight and greater creativity than we are used to providing.  But first, we need to let go of all those things we thought we knew about how to solve things.  We need to start with a clean slate, to see complexity more fully and to step outside the box, play, and try out truly new approaches.

Karout notes how powerfully ChatGPT can clarify the innocent shallowness of our – and its! – original common sense, tripping us into a state of “unknowing”.  In future posts I’ll share a bit about my experiments in what to DO with that challenge, but right now I want to start this “AI Meets Wisdom” series with Karout’s invaluable insight into this unique and surreal gift that AI introduces into our evolution.

Her article is long and somewhat meandering as she tries to embody the “unknowing” state in the kind of essay we usually expect to clarify things for us. I honor her in that effort but, for the sake of accessibility for my readers, I’ve excerpted what I see as her key argument below.  Enjoy!

Coheartedly,
Tom


Excerpts from ChatGPT: A Partner in Unknowing

BY DANA KAROUT

LAST SPRING, I was part of a teaching team of eight who were working with a group of sixty students to explore the premise that, for some questions, unknowing, rather than knowledge, is the ground of thought we need. ChatGPT was our partner in that endeavor. In a case study we presented to the class, a teenager—pseudonym Jorge—was caught with a gallon bag of marijuana on school grounds. He faced expulsion from school if he were reported to his parole officer. Meanwhile, not reporting him would be considered breaking the law. We asked our students to design a course of action, imagining themselves as the school’s teachers and administrators.

They drew on their academic knowledge and professional expertise. They debated the pros and cons of different options, such as reporting Jorge to his parole officer, offering him counseling, or involving his family and community. They were well-versed in speaking to the broader context of the case, such as the racial and socioeconomic disparities in the criminal justice system, the effects of drug prohibition, how to use techniques of harm reduction, and the role of schools in fostering social change. Their answers sounded sensible, but the situation demanded real labor—it demanded sweat rather than sensibility, and there could be no sweat till their answers mattered.

An hour into their conversation, we presented the students with ChatGPT’s analysis of the case study.

ChatGPT suggested that we “initiate a review of [the school’s] existing policies and procedures related to substance abuse, with the goal of ensuring they are consistent, transparent, and reflective of best practices.” It elaborated that “the school should take a compassionate approach [but] also communicate clearly that drug abuse and related offenses will not be tolerated,” and that, “this approach should be taken while ensuring that the school is responsive to the unique needs of its students, particularly those from low-income and working-class backgrounds.” That is, ChatGPT didn’t say much that was useful at all. But—as the students reflected in their conversation after reading ChatGPT’s analysis—neither did they. One student noted that they were just saying “formulaic, buzzwordy stuff” rather than tackling the issue with fresh thinking. They were unnerved by how closely the empty shine of ChatGPT’s answer mirrored their own best efforts. This forced them to contend with whether they could be truly generative, or whether, as some of them put it, they were “stuck in a loop” and had not been “really [saying] anything” in their discussions. Suddenly, their answers mattered.

The students’ initial instinct to regurgitate what they were familiar with, rather than risk a foray into unfamiliar propositions, says much more about the type of intelligence our culture prioritizes than the actual intelligence of our students. Indeed, some of our best students, who go on to attend our most prestigious institutions, are rewarded for being able to synthesize large amounts of information well. However, as I came to realize, the high value we place on this capacity to efficiently synthesize information and translate it to new contexts risks creating hollow answers in response to questions with real human stakes, the most existential of our challenges.

Rather than giving us answers, generative AI could help take them away….

If we are to move beyond solutions that replicate the status quo in our institutions and our thinking, we will have to stretch what we think of as “intelligent” or creative beyond the sort of regurgitation that large language models are now able to do with remarkable ease. Our students put this challenge into action: after this encounter with their own unoriginality through ChatGPT, they moved into a space of creativity. They proposed interventions that went beyond both their and ChatGPT’s initial responses, responses that sounded absurd at first for how far they strayed from conventional thinking. These proposals ranged from joking that the teachers should join Jorge in smoking weed, thereby exposing themselves to the same legal risks as him, to abolishing schools altogether. In mirroring back to them the emptiness of their words, ChatGPT forced our students to confront how limited their existing knowledge was when applied to this situation. From this space, which I’m calling unknowing, our students felt free to play and experiment with the absurd. They then began to flirt with collective action, which could allow them “to both respect the law and to refuse it.” For example, proposing that they “[turn] Jorge in while simultaneously threatening to go on strike if he were expelled—neither acting as mere administrators nor mere saviors. Rather than abolishing schools altogether, shutting down this one school.”….

In our classroom case study, ChatGPT’s empty response to “what should we do?” revealed to our students not only their own ignorance, but also the perfect uselessness of knowing the answer to the wrong question. The right question for the moment might have then been, “ChatGPT, can you take away all my easy answers?” By easy answers, I mean the first set of generalizations that a mind grasps for when facing a situation in which it risks being ignorant. This is not a literal question for ChatGPT, but an orientation to ChatGPT’s pat responses. This orientation puts the onus back on the question asker to devise answers far more apt for the situation, and, as was the case of our students, that even hint at the revolutionary. “Can you take away my easy answers?” assumes that ChatGPT’s, or our, first response will not be the final answer, and reveals the bounds of the sort of intelligence that ChatGPT—and our dominant culture—prioritizes. It asks the people with the question to consider what other insights, experiments, and curiosities they might insert into their solutions. In this dynamic, ChatGPT becomes a partner, rather than an authority on what is intelligent or correct….

Adaptive leadership, developed by Ron Heifetz and others at the Kennedy School, distinguishes between two different types of problems: adaptive challenges and technical challenges. While the problem and solution of technical challenges are well-known—think everything from replacing a flat tire to performing an appendectomy to designing a new algebra curriculum—adaptive challenges demand an ongoing learning process for both identifying the problem and coming up with a solution. Addressing the climate crisis, sexism or racism, or transforming education systems [or addressing the metacrisis!] are adaptive challenges. Adaptive challenges, intricately intertwined with the human psyche and societal dynamics, prove resistant to technical solutions. They demand a shift in our awareness. A common leadership mistake, as Heifetz points out, is to apply a technical fix to a challenge that is fundamentally adaptive in nature. For example, we generate reports, form committees, or hire consultants to work on a broken organizational culture, often avoiding the underlying issues of trust that are at the heart of the problem….

Taking the example of the climate crisis, I often ask myself, what is so threatening to some people in the US that they would see their homes burn down or be swept away in an unprecedented storm and still not engage the challenge of climate change? The answers that come to me are not material, they are human. Challenges are often bundled—they have adaptive and technical components—and some technical solutions to the climate crisis, such as smarter grids or more renewable energy, will address key technical challenges. But these technical fixes are not enough, and will not be universally adopted in our current political reality. To face climate change effectively, we need to go beyond technical fixes and engage with the adaptive aspects of the challenge. We need to question our assumptions, values, and behaviors, and explore how they shape our relationship with the planet and each other. We need to learn, experiment, collaborate, and find new forms of consciousness and new ways of living that are more resilient and regenerative. And we need to learn how to better understand people whose beliefs are very different from ours. An adaptive process like the one I’m describing is messy—it involves psychological losses for all human stakeholders involved. This process unfolds amidst the “salt of life,” and requires a type of intelligence that is relational and mutual, deeply anchored in the humbling fact that our individual perspectives cannot capture the whole. Working with groups in seemingly intractable conflict, I’ve come to deeply believe that engaging in messy work across boundaries results in something that’s far greater than the sum of its parts….

ChatGPT defers to professionals and guidelines. Meanwhile guidelines and professionals, like our students at Harvard, sound more and more like ChatGPT. We need experts and authorities when dealing with technical challenges, but when approaching adaptive challenges we need to find a different orientation—this is what the adaptive leadership framework calls leadership, an improvisational and collective act always rooted in the question “what’s really going on here?”….

TO PARTNER WITH generative AI effectively, we need to first shift our predisposition towards artificial intelligence from dependency or fear, avoidance, and denial toward an openness to the big unknown of AI. By doing so, we can rethink what is and is not ChatGPT-able and thus expand what we consider possible in the face of adaptive challenges. This shift does not ignore generative AI’s dangers, nor does it diminish the very real fears around AI’s potential to take over jobs, nor does it presume that AI should not be regulated. It is instead a recognition of what American philosopher and writer Robert Pirsig said fifty years ago: “If you run from technology, it will chase you.”…. 

“There is an evil tendency underlying all our technology—the tendency to do what is reasonable even when it isn’t any good,” Pirsig asserts in his book. I wonder if it is this tendency towards reasonability that we have programmed into our computers, and in turn our computers have programmed into us…[leading us to see the AI] problem only over there without recognizing the deeper problem within us….

We need to learn to sing more improvisationally in our current environment—responding creatively to cues from each other and our tools—rather than rehearsing our operas to perfection. Our awareness needs to become more embodied, to develop a new—or perhaps return to a much older—state of mind that is not trying to produce quick answers but is instinctual and can stay with “I don’t know.”….

ChatGPT … can play a critical role in surfacing the contradiction that requires resolution. For what can be more clarifying for our moral questions than having our contradictions mirrored back to us?… What will lead us to engage with these tools in such a way that we come to unknowing and, ultimately, generativity?….

The forces that have shaped generative AI have shaped us too, and there is much to learn by examining what it can mirror back to us about ourselves. A critical and thoughtful partnership with generative AI can help us develop new thought in the face of intractable challenges, but not by providing us with better answers. Rather, it can mirror back to us our most obvious answers, and if we are challenged to read those answers differently, we might be pushed into a space of unknowing and thus the ground of richer thought. This often happens when we push through a dilemma, or the contradictions that ChatGPT can help surface in our own thinking….

Whatever unknowing you identify might be a clue as to where there is possibility… You can choose to enter the world with that unknowing pushing against your thinking, and perhaps see what comes on the other side of it.


Tom Atlee, The Co-Intelligence Institute, POB 493, Eugene, OR 97440

Appreciating, evoking and engaging the wisdom and resourcefulness of the whole on behalf of the whole


*** Visit the Wise Democracy Pattern Language Project ***

*** Buy a Wise Democracy Pattern Card Deck ***

Please support our work. Your donations are fully tax deductible in the US.