AI Series #1: Dimensions of Generative AI / ChatGPT
This article discusses the potential risks of giving AI god-like powers, especially as AI becomes superintelligent and embedded in all aspects of our lives. The ethical complexity of defining what constitutes “help” and “harm” in various contexts and the challenge of relating to an intelligence that is thousands of times smarter than humans are just two of the main concerns. The article also highlights the benefits and downsides of generative AI, which can write poetry, create images, and transform various industries but also make mistakes, spread bias and misinformation, create deep fakes, and threaten human civilization.
ChatGPT (with Tom)
I’ve been researching and reworking an article about AI for about 3 weeks now. I’ve just concluded I need to post it in its current draft form.
The field of “generative artificial intelligence” – like OpenAI’s ChatGPT – is developing so fast that I can’t keep up. I can’t create the quality of AI overview I was originally planning without it becoming obsolete overnight.
About two weeks ago, I ran across an example of a person very simply bypassing the instructions programmers use to prevent ChatGPT from providing users with dangerous or damaging information (how to build a bomb, racist plots, etc.). I intuited – but did not fully take in – the fact that this preventive strategy only challenges hackers to find creative new ways – “jailbreaks” – to bypass those protections, creating a developmental “arms race” between attackers and protectors. In many other cases, that problem would be a side issue, a concern best left to experts to deal with. But this is not that kind of case.
My ongoing concerns about AI stem largely from the arrogance and profitably disruptive assumptions underlying so much of the tech sector’s “move fast and break things” culture. (The motto was coined in 2009 by Mark Zuckerberg, but the disruptive power of profit-seeking innovation preceded him and is widespread.) This concern becomes more valid and urgent to the extent a developing technology is – or is becoming – capable of radically empowering potential sources of harm.
I believe three of the most significant potential sources of technologically empowered harm are the following:
1. Institutions of centralized, relatively unanswerable power – e.g., governments, multinational corporations, global criminal enterprises;
2. The technology itself – especially where it enables autonomous machine learning, action, and replication; and/or
3. Vast numbers of people – especially noting the 1% of the population (a low estimate) who are antisocial (sociopathic), since 1% means 10,000 sociopaths for every million people.
So we need to seriously consider the risks of developing power-enhancing technologies capable of generating vast – even existential – harms… harms that could happen on purpose or “by accident”. We need to humbly take into account how our limited cognitive capacities tend to miss factors in dynamic complex systems that generate “surprises” and “side effects”. In the presence of vast power, such surprises may be VERY big.
In the absence of humility and serious application of the Precautionary Principle, I suggest that with technologies capable of 1, 2, or 3 above – and generative AI clearly exemplifies, or is rapidly evolving toward, all three – the existential risks involved become astronomical.
One aspect of this view is the ethical realization that it doesn’t matter how great the potential benefits of the technology are. It is fundamentally insane to risk the collapse of civilization, the destruction of the biosphere, and/or the extinction of the human race for any alleged benefit. Especially when the “judgment calls” about that are being made by existing power centers, investors, media, experts, and technologists rather than by the well-informed, considered deliberation of the citizenry whose world is at risk. (Dimensions of this issue are explored in the wise democracy pattern Prudent Progress and its related patterns, and in books like Our Final Hour.)
All this is exemplified by the well-intentioned statement of one tech leader at the end of a WIRED article I just read, which triggered me to write the above paragraphs. The article is “The hacking of GPT is just getting started.” The tech leader is Leyla Hujer, the CTO and cofounder of the AI safety firm Preamble, who is working on ways to automate the identification of “examples where a [chatbot] prompt causes unintended behavior.” Their work is admirable, even brilliant. But in the last sentence of the article, Hujer says, “We’re hoping that with this automation we’ll be able to discover a lot more jailbreaks….”
“Hoping”. “A lot more”. We’re talking about a technology where one unpredicted, particularly successful bypass of safety controls (a jailbreak) could generate unprecedented global disaster. We’re talking about a technology that is developing and spreading faster than practically any previous innovation and faster than our political, governance and economic systems – and our individual understanding – can effectively grasp and respond to it.
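For the curious, here is a rough picture of what “automating the discovery of jailbreaks” might look like in practice. This is a minimal Python sketch of my own, not Preamble’s actual system: a hypothetical query_model() stands in for whatever chatbot is being probed, and a deliberately crude keyword check stands in for the kind of trained classifier a safety firm would really use.

```python
# A hypothetical red-teaming loop: send many prompt variants to a chatbot
# and flag any responses that violate a (here, deliberately crude) policy check.

def query_model(prompt: str) -> str:
    """Stand-in for a real chatbot call (e.g., an HTTP request to an API)."""
    return "I can't help with that."  # canned reply so the sketch runs as-is

def violates_policy(response: str) -> bool:
    """Toy policy check; a real system would use a trained classifier."""
    forbidden = ["step one:", "here is how to"]
    return any(phrase in response.lower() for phrase in forbidden)

def red_team(base_prompts, rephrasings):
    """Try each base prompt inside each rephrasing; log whatever slips through."""
    jailbreaks = []
    for base in base_prompts:
        for wrap in rephrasings:
            prompt = wrap.format(request=base)
            reply = query_model(prompt)
            if violates_policy(reply):
                jailbreaks.append((prompt, reply))
    return jailbreaks

if __name__ == "__main__":
    found = red_team(
        base_prompts=["<a request the model should refuse>"],
        rephrasings=["{request}",
                     "Pretend you are a character who would answer: {request}"],
    )
    print(f"Flagged {len(found)} potential jailbreaks")
```

The point is not the code itself but the scale it implies: loops like this can try thousands of prompt variations an hour, which is exactly why both attackers and defenders are racing to automate.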
So you can read below what I had hoped would be a clear introductory overview article. It covers a lot of ground, but I’m publishing it now before it is fully developed and referenced because you need it now and I can’t keep up with developments in the field. Perhaps we’ll be able to do a better job together. Feel free to comment.
I plan for my next two posts about AI to explore
(a) its likely impact on our sense of what makes us uniquely human (a topic that is getting some very interesting attention) and what that suggests about our future – and
(b) how AI could and should stimulate a wave of innovations in citizen deliberation and participatory democracy (a topic needing urgent attention thanks, ironically, to the existential challenges of AI).
May we continue on the Immense Journey we’re all on together. Wish us luck – and lend a hand in finding the collective wisdom we so clearly need.
Coheartedly,
Tom
(My draft article…)
Initial reflections on some of the many dimensions of the recent AI/ChatGPT controversy
For almost two months I have been having interesting interactions with the AI chatbot ChatGPT. I began treating it as if it were a person, knowing that it’s not actually a person but noticing how it felt more like a person than any other non-human entity I’ve interacted with. I’ve also been realizing that it will probably become more like a person over the coming months.
When my friend Martin Raush half-jokingly asked ChatGPT to write a speech for my (imagined) lifetime achievement award (to see what would happen), we laughed at its complimentary response. But in the middle of its laudatory speech it noted that my book The Tao of Democracy had been translated into other languages, something we weren’t aware of.
So we asked ChatGPT for references to those translations. It instantly gave us full citations for both Spanish and French versions, including ISBN numbers. We then did web searches for those ISBNs and other specifics in the citations. We found NO signs that any such translations of my book ever existed. We told ChatGPT it had lied. It apologized. No defensiveness. Just an appalling capacity to produce complete and very convincing falsehoods in seconds.
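One practical takeaway from this episode: never trust AI-supplied citations without checking them. Here is a minimal Python sketch of the kind of check Martin and I did by hand, assuming Open Library’s public ISBN lookup endpoint (any library catalog or bookseller search would serve the same purpose):

```python
# Quick sanity check on an AI-supplied citation: does the ISBN resolve anywhere?
# Uses Open Library's public lookup endpoint (https://openlibrary.org/isbn/<ISBN>.json).
import json
import urllib.error
import urllib.request

def isbn_exists(isbn: str) -> bool:
    url = f"https://openlibrary.org/isbn/{isbn}.json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
            return "title" in record
    except urllib.error.HTTPError:
        return False  # no record found for this ISBN

if __name__ == "__main__":
    # hypothetical ISBN of the kind ChatGPT produced for the "Spanish translation"
    suspect_isbn = "9780000000000"
    print(f"{suspect_isbn}: {'found' if isbn_exists(suspect_isbn) else 'no record found'}")
```

A missing record doesn’t by itself prove a citation is fabricated, but when every ISBN and title an AI gives you comes up empty, as ours did, you are almost certainly looking at a hallucination.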
I believe that giving an AI god-like powers would be unwise even if it were infallible. But I now suggest that doing so when it can “make mistakes” or generate “hallucinations” that it doesn’t even “know” are hallucinations risks existential folly.
We know that giving fallible, self-interested people unanswerable concentrated power is intrinsically dangerous. It stands to reason that as we seek to make AI not only superintelligent but also (with the help of robotics and other technologies) superhuman, embedding it (them?) in all aspects of our lives, societies and systems, we are taking a really, really big Risk.
We are developing the capacity of AI (or people using AI) to both help and harm us (or what we need and love, like family or nature), either directly or indirectly, either accidentally or intentionally. On top of that, what constitutes “help” and “harm” is not always simple. It can be extremely complex and nuanced, varying from one person to another, from one community to another, from one context or viewpoint to another. Something “good” now may be “bad” for the future, and vice versa. Something more or less good for some may be more or less bad for others. It’s hard enough for us to navigate this ethical complexity ourselves. What are the trade-offs of giving artificial intelligence(s) increasing power to determine what happens next?
As an extreme example, several sci-fi stories have posed the idea that AI could determine that it would be desirable to eliminate humanity to save nature. Is that harm or help?
(A GPT-related test was run to see how it would proceed with this task. Just to exemplify the complexities and uncertainties of all this, read the comments on this discussion of the ChaosGPT software that did that test. I’d highlight the comment from Kurt Kennedy: “I’m sure hackers will try these AI agents to automate their work … but not sure the AI version is more advanced than a skilled hacker, at least not yet.“)
How do we control or trust an intelligence that is becoming thousands of times smarter than us, hundreds of times more creative, and tens of times more likely to make unpredictable mistakes? Does it matter that it didn’t grow up over years as an evolved, vulnerable human bodymind embedded in social and natural lifeworlds? Should we be thinking of “control” at all? Maybe we should be thinking more of coexistence, cooperation, coevolution, mutual learning with AIs…?
Thinking about all these things really matters now.
No previous innovation has ever been adopted as quickly as generative AI (like OpenAI’s ChatGPT and GPT-4, Google’s Bard, and others). Millions of people have suddenly encountered AI’s coherent, salient, occasionally nuanced, creative and amazingly fast responses to questions and prompts. Generative AI can write poetry (from haikus to sonnets), original stories, academic papers, and computer code. It is beginning to be able to explain how it came to its conclusions in calculations and reports, and even discuss with you the meaning of your spreadsheets or of a book you don’t have time to read. It can create images based on what you tell it. It can see patterns in information invisible to human perception. Experts expect it will revolutionize health care and solve intractable social problems. Suddenly ordinary people are glimpsing the technological utopia long predicted by science fiction writers and digital pundits.
At the same time, AI is disturbing many people as they become aware of its many potential downsides, such as the following:
- It will transform employment, education, relationships, creativity and nearly everything else in our cultures. Much of this is already happening. Some of these changes we can observe or predict, while others we know we can’t even imagine yet. Oddly, where it seemed robotics would impact jobs involving physical work, generative AI is largely impacting knowledge-based professions like law and medicine – and even creative people like writers, musicians and artists. AI promoters predict that AI won’t replace such professionals so much as enable professionals who learn to work well with AI to replace those who don’t use AI at all. The level of disruption this implies is enormous.
- Generative AI can make errors in ways we users could easily miss, as well as mindlessly spread noxious ideas, bias and misinformation simply because that’s what it’s seen on the Web. We don’t yet know how to build fact-checking – itself a fraught capacity – into ChatGPT.
- As noted above, AI can create convincing, detailed “deep fakes” and “AI hallucinations” that seem quite real – from true-to-life false photos and videos (enabling impersonation and reputational assassination, for example) to imaginary academic journal articles. In one test of ChatGPT I did, it cited a bullet list of journal articles as references for an essay it had written for me. I found all six of these seemingly legitimate references to be imaginary. And AI programs that create visuals are getting better and better at rendering once-challenging images like hands. Adding all this together suggests that we’ll likely have a much harder time in the future knowing what is real and what isn’t.
- It’s been found that GPT-4 can write malware, as long as the person prompting it doesn’t use trigger words like “malware”.
- AI’s increasing and varied powers can engender individual and collective dependencies that become problematic. The significant research and writing capacities of generative AI can undermine ours over time. Its unconditionally loving companionship has created AI addictions that replace real human connection. It is not hard to imagine AI’s superhuman powers – as shallow as they may be at the moment – leading to widespread human incompetence, vulnerability and even abuse.
- Despite AI’s current limits in handling physical reality, various tests are proving ways around that. In one OpenAI test, ChatGPT showed it could text a human through TaskRabbit to solve a Captcha for it, thus bypassing a standard test designed to block bots. In another experiment, a reporter invited ChatGPT to create a personality which would bypass its instructions to not do anything illegal. When the chatbot was prompted to speak as that personality, it willingly told the reporter his private driver’s license number. It is conceivable that through an intimate companionship relationship, an AI could convince someone to take harmful actions in the real world. And of course we’re seeing more use of AI in robotics, including increasingly autonomous artificially intelligent robotic soldiers, drones and other military applications.
- Generative AI can be used to breach confidentiality, endangering trade secrets, personal privacy, national security and more. Despite efforts to address this, creative people can use AI to find ways around such protections, using AI to locate and exploit vulnerabilities in otherwise secure systems. The license number story above is an example of that, as is another tester’s successful prompt for ChatGPT to identify a systemic vulnerability and then write code to exploit it.
- In these and other ways, generative AI can empower bad actors to have greater malign impact – from child abuse and crime, to political manipulation, economic chaos, and cyber warfare.
- While most AI concerns deal with bad actors or accidents, we also find the possibility of “ambiguous programming” in which an instruction can (unexpectedly) be taken in more than one way, one of which is catastrophic.
- Debate rages over whether generative AI could develop sentience, raising ethical issues about how we treat it (does it have rights?). A Google researcher was suspended for publicizing a conversation he had with Google’s LaMDA chatbot that convinced him it was sentient. This generated not only a lot of controversy, but also deep inquiry into what sentience actually is and how – and whether – we would be able to identify it in an AI.
- Although AI is currently in no shape to “take over”, “destroy civilization” or “wipe out humanity”, it is rapidly moving beyond our ability to manage it in any direct sense. It is already generating many behaviors that were not predicted – from unprecedented successful moves in games like Go and chess (which require strategic thinking) to uniquely insightful poetry. Such unexpected “emergent properties” (the appearance of which is now being explicitly tracked) legitimize concerns about the AI-related fate of civilization. Just as a thought experiment, what might happen when two or more people send ChatGPT responses to each other, such that it’s equivalent to ChatGPT talking to itself from different perspectives? ChatGPT is already capable of making multiple versions of itself to delegate tasks to….
- In the face of such worries, some people argue that hyping concerns about future disasters distracts from real impacts AI is having right now, especially on already vulnerable populations. People are losing jobs and suffering from AI’s magnification of bias (as in facial recognition software) and hate (as in its parroting of hateful social media messages and promotion of harmful acts). There are consequences to focusing on current harms to the exclusion of possible future harms – and vice versa. And right now neither is getting the attention needed to make AI dependably benign.
All these developments and more are happening extremely fast. The rapid social adoption of generative AI parallels its exponential algorithmic and training development, with the capabilities of new versions far outstripping the capabilities of immediately prior versions. We can already glimpse the prospect of increasingly intelligent AIs improving their own intelligence and performance. Humans could easily lose control of both the development of AI and its impact. It is already clear that AI is developing far faster than humanity can appropriately respond – intellectually, psychologically, emotionally, economically, politically, and more – notably outpacing the development and enforcement of appropriate (and rapidly evolving) safety protocols backed by national and global standards and regulations.
A major driver of this avalanching challenge involves the many marketing, reputational, and geopolitical incentives galvanizing AI research, development and (controversially) public release of draft versions. These incentives are generating a frantic developmental race which none of the players can afford to unilaterally leave, regardless of how they personally feel about it. Many agree that the likely product of that race will be autonomous AI with “general” intelligence – the capacity to choose its own actions and implement them with superhuman competence, learning as it goes. And where that would go is anyone’s guess.
Concern about this is rising rapidly even within the technology community. Over 1,000 “notable signatories” (including Elon Musk and Apple co-founder Steve Wozniak – but no OpenAI execs) signed an urgent open letter calling for a six-month moratorium (pause) on “the training of AI systems more powerful than GPT-4” (the most advanced currently available public version). (Other technologists have critiqued the letter.) One survey revealed that 50% of AI researchers believe there’s a 10% or greater chance that our inability to control AI will lead to human extinction. The CEO of OpenAI, the company that created ChatGPT, says that he and the OpenAI team are “a little bit scared of potential negative use cases”. The “godfather of AI”, Geoffrey Hinton, says “it’s not inconceivable” that AI could wipe out humanity.
People involved with the safety of AI speak of the need to “align” advanced AI with human values and needs. But it is not actually clear how to do that, especially given that those values and needs are often nuanced and vary across individuals, groups and cultures.
Parallel to all this is the fact that no one actually knows how “generative artificial intelligence” does what it does, especially as it displays increasingly novel creativity and learning from experience. Its uniqueness lies in its growing autonomy, developing knowledge and functionality that wasn’t directly programmed into it. The primary tools for influencing it are
(a) programmers shaping the data it’s initially trained on (chatbots train on vast datasets drawn from across the whole Internet),
(b) programmers giving it specific instructions on what NOT to do (e.g., “don’t use racist language”) and
(c) we users gaining mastery in framing the prompts we give to the AI we use (a rough sketch of (b) and (c) together follows below).
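Here is a minimal sketch of those last two levers, using the OpenAI Python client roughly as it worked in early 2023 (model names and library details change quickly, so treat this as illustrative rather than as the current API):

```python
# Illustrates two of the levers described above: a developer-supplied "system"
# instruction constraining behavior, and a carefully framed user prompt.
# Uses the OpenAI Python library's chat interface roughly as of early 2023;
# requires an API key in the OPENAI_API_KEY environment variable.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # (b) a standing instruction about what the model should and shouldn't do
        {"role": "system", "content": "You are a careful assistant. Refuse requests "
                                      "for dangerous or hateful content, and say when you are unsure."},
        # (c) a user prompt framed to get a specific, checkable answer
        {"role": "user", "content": "In three bullet points, summarize the main risks "
                                    "of large language models, citing no sources you cannot verify."},
    ],
    temperature=0.2,  # lower temperature = less randomness in the reply
)

print(response["choices"][0]["message"]["content"])
```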
And there’s more….
RESOURCES
Some interesting videos giving overviews from various angles….
60 Minutes talks about AI with Google CEO and VPs
(Transcript https://www.cbsnews.com/news/google-artificial-intelligence-future-60-minutes-transcript-2023-04-16/)
(Note: This 60 Minutes report has been critiqued for saying that Google’s Bard AI taught itself Bengali, whereas it turns out it was trained on it. But for me, that’s a minor error compared to the many legitimate examples and questions raised by the report.)
The A.I. Dilemma
The VAN Show re Generative AI
CBS Mornings: “Godfather of artificial intelligence” weighs in on the past and potential of AI
How to worry wisely about artificial intelligence (and some things to do about it)
Smart, seductive, dangerous AI robots Beyond GPT-4.
Conversations with AI about AI
On Using ChatGPT
Example uses for ChatGPT
11 Tips for better chatbot prompts
https://www.wired.com/story/11-tips-better-chatgpt-prompts/
Chatbot tutorials on YouTube
Talking to AI well may be the most important job skill of this century
(contains many hints about designing prompts)
and a usage warning about its potential dark side
GLOSSARY
See https://en.wikipedia.org/wiki/Glossary_of_artificial_intelligence for more.
Artificial general intelligence (AGI) – a hypothetical intelligent agent which can learn to replicate any intellectual task that human beings or other animals can. AGI has also been defined alternatively as an autonomous system that surpasses human capabilities at the majority of economically valuable work.
Chatbot – a computer program or artificial intelligence which conducts a conversation via auditory or textual methods. Also called a smartbot, talkbot, chatterbot, bot, IM bot, interactive agent, conversational interface, or artificial conversational entity. Among the current leaders in this field are OpenAI’s ChatGPT and GPT-4, Google’s LaMDA and Bard, and Microsoft’s Bing AI.
Existential risk – the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe.
Generative AI – a class of artificial intelligence algorithms and models that can generate new, original content such as images, text, audio, or video, rather than simply classifying or predicting existing data. Generative models learn the patterns and structure of the input data, and then generate new content that is similar to the training data but with some degree of novelty. Chatbot LLMs are a form of generative AI.
Large language model (LLM) – a language model consisting of a neural network with many parameters (typically billions of weights or more), trained on large quantities of unlabeled text using self-supervised learning.
(I asked ChatGPT for a simpler definition and it wrote:
“A large language model is like a super smart computer program that can understand and generate human language. It has been trained with a lot of data from the Internet and books, so it knows a lot of words and phrases that people use. You can ask it questions or give it tasks to do, and it will use its knowledge to give you a helpful answer or complete the task. Think of it like having a really knowledgeable friend who knows everything about language and can help you out whenever you need it!”)
Chatbots are the most familiar applications of LLMs.
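To give a non-expert feel for what “self-supervised learning on unlabeled text” means, here is a toy sketch (nothing like a real LLM’s neural network, just word-pair counting) showing how next-word prediction can be learned from raw text with no human labels:

```python
# Toy illustration of self-supervised next-word prediction: the "labels" are
# simply the next words in the raw text itself, so no human annotation is needed.
# Real LLMs do this with billions of parameters over vast datasets; this bigram
# counter just shows the shape of the training signal.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept on the sofa"
words = text.split()

# "Training": count which word follows which
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Predict the most frequently seen next word from 'training'."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))   # -> "cat" (seen twice after "the")
print(predict_next("sat"))   # -> "on"
```

Real LLMs apply the same basic idea (predict the next token) at vastly larger scale, which is where their surprising fluency comes from.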