Holoficiency = efficiency of, by and for the Whole

Most forms of efficiency depend on narrow intelligence promoting narrow, short-term goals.  This is not healthy for the world or our civilization’s longevity.  We need a word for efficiency of, by and for the Whole.  Holoficiency is already a real phenomenon – embracing everything from permaculture and crowdfunding for the common good to the way a healthy forest operates.  Our civilization needs to become more holoficient, starting this year.

A 36-hour winter storm with freezing rain, snow, and ice is predicted to arrive here in Eugene, Oregon, in a few hours, with probable power outages, etc.  So I want to get this out quickly…

I was in the midst of a conversation with CII’s Andy Paice this morning about the nature of the EFFECTIVENESS of wholeness (and interconnectedness and co-creativity… essentially, of co-intelligence).  I want to share the essence of that conversation, which intersected substantially with Daniel Schmachtenberger’s long, intense, and transformational conversation with Nate Hagens back in June.  At least half of what follows is lightly edited or paraphrased from Daniel’s comments in that interview.

Due to my rush, the ideas here are not thoroughly developed, but they are well worth thinking about nevertheless.  Here goes:

Normally, it is not considered effective – or realistic or even possible – to engage with “the whole” or even take it into account.  From the perspective of narrow, short-term goals and the narrow intelligence that effectively pursues those goals, the whole is just too big to think about, and definitely inconvenient.

On the other hand, broad intelligence – co-intelligence and wisdom, when they’re used – REQUIRES attending to long-term, broadly beneficial goals.  

So there’s often some tension between those two frames.  Here are some examples:

A consensus process that seeks to take into account all the concerns, needs, and potential consequences involved in a decision naturally takes a lot of time.  But it usually addresses factors that would come back to haunt the group later if they weren’t dealt with, so its defenders consider it wiser and more effective from a long-term perspective.

EFFICIENCY

Efficiency usually entails getting rid of “unnecessary things” – steps in a process, redundant functions, non-essential expenses and people, and so on.  But the holistic concept of RESILIENCE often requires exactly those things to be there.  If you have at hand several sources of – or resources for – a vital function (like energy or food or communication), then if one of them goes down, you can call on another one.

If efficient just-in-time production makes it unnecessary to keep a warehouse full of parts for your assembly line, it saves your company a lot of money and manpower.  But if the supply line for a part breaks, your whole assembly line stops.  

If a new technology makes a process much more convenient but opens up new avenues for hack attacks, then we can and should ask if that is really more efficient in the long run.
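To make that resilience trade-off concrete, here is a minimal sketch in Python (entirely illustrative – the supplier names and failure scenario are my own invention, not from any real system): the redundant source that efficiency would eliminate is exactly what keeps things running when a supply line breaks.

# Toy illustration (hypothetical names and data): a "lean" single-supplier
# setup versus a "resilient" setup with a redundant backup supplier.
def get_part(suppliers):
    """Return a part from the first supplier that is still working."""
    for supplier in suppliers:
        if supplier["working"]:
            return "part from " + supplier["name"]
    raise RuntimeError("Assembly line stops: no supplier can deliver.")

lean = [{"name": "SoleSource Inc.", "working": False}]
resilient = [
    {"name": "SoleSource Inc.", "working": False},  # same failure...
    {"name": "Backup Co.", "working": True},        # ...but a fallback exists
]

print(get_part(resilient))       # the backup keeps the line moving
try:
    print(get_part(lean))
except RuntimeError as problem:
    print(problem)               # the "efficient" system halts entirely

Carrying the backup supplier looks like pure waste on a quarterly spreadsheet; it only shows its value on the day the sole source fails.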

HOLOFICIENCY

The new concept of holoficiency opens up space to consider how to achieve the goal of efficiency – i.e., to do more with less – without risking the damage that so often accompanies increased efficiency.  One way to define holoficiency would be “the optimum engagement of all aspects of a whole for the long-term wellbeing of the whole”.

So we’re talking about trying to rally and invest all relevant entities, relationships, resources, elements, capacities, qualities, and dynamics of wholeness (like synergy and vitality) – and even disturbances, waste, and excess – in the ongoing healthy co-creativity of the whole.  It’s pretty obvious that this doesn’t come near describing how we usually try to get more with less.  But it points us in the right direction.

Schmachtenberger points out that short-term efficiency thinking and action are major drivers of the dynamics of civilizational collapse. That thinking’s ability to do massive damage grows each time we increase people’s capacity to act on it at larger scales.

Preferencing long-term holoficiency offers an alternative that could slow or reverse collapse – although it would require major changes in how we manage our civilization and our lives.

To lightly paraphrase Daniel Schmachtenberger: we tend to think about narrowly defined problems that affect specific people or situations, using specific metrics to demonstrate certain direct effects. It’s easier to “make progress” on narrowly defined “virtuous goals” than it is to take into account the long-term wellbeing of the whole.

But that same virtuous thing we did may polarize some people, who then do problematic things that show up as second- or third-order effects.  Maybe what we did affected supply chains or climate or future generations.  We may be generating second-, third-, or nth-order effects on a wide range of metrics (which we don’t even know to measure) for a very wide range of stakeholders (whom we don’t even know how to factor in).

So it is clear that factoring all that in is MUCH harder than ignoring it all in the name of efficiency.  But holoficiency is, in fact, a different KIND of thing to think about.  

So we face a serious challenge: it is cognitively easier – in terms of the intelligence involved – to figure out how to achieve a direct, narrow goal than it is to ensure that goal doesn’t fuck other stuff up, sooner or later.

Thus efficiency, too, can be viewed through either a narrow-boundary lens or a wide-boundary lens.  The idea of holoficiency creates a space to explore how we might do that bigger, more demanding job with limited resources.

TECHNOLOGY

Again I want to slightly paraphrase Schmachtenberger:  Technology is innovation toward goal achievement.  We’re creating technologies to achieve direct first-order effects for a defined goal for a defined population. This is the Progress Narrative – the Technology Narrative, the GDP Narrative, and all the way down to the Science Narrative.  And we’re always working to optimize those technologies and their problem-solving powers.

Most technologies will serve ANY short-term goal.  So their power is a very big double-edged sword – they enable both “good” goals and “bad” goals.  And AI promises to be even more effective at that, accelerating all efforts to achieve all goals – the vast majority of which are not well designed (given our civilization’s biases) to generate broad long-term benefits.

When you think about fostering EVERYTHING THAT MATTERS, you’re not thinking about optimization; you’re thinking about a different thing – a thing I’m calling holoficiency – which is doing more with less from a long-term holistic perspective.

MODELS

Intelligence is the ability to achieve goals. Toward that end, all forms of intelligence have something to do with modeling – with trying to create a proxy of some limited section of reality that our intelligence can then use to inform our forecasting, our choice-making, and our goal-achieving efforts.  

But models tend to optimize for a narrow slice of sensemaking, not the whole picture, and so they tend to blind us to what’s outside our models.  Taoism’s founder, Lao Tsu, was telling us to keep our sensing of base reality open and not mediated by our models.  We don’t want our sensing to be limited to our previous understanding, because our previous understanding is always smaller than the evolving Totality of What Is – and that evolving Totality of What Is will be telling us how to change our models as we go along.

In other words, the totality of the story is ALWAYS more complex than however we talk about it or think about it.  There’s something very important and sacred about that fact.  Because when I make a model that separates everything out and then improve that model’s effectiveness, the chances are high that I’m going to start harming something that I don’t even realize I’m harming because it isn’t in my model.  

What I’m optimizing for is not the Whole Thing. This whole process is not helping me be wise.  Models that win in the short term can – and so often do – generate long-term, broadly horrible outcomes, or even extinction or other evolutionary cul-de-sacs.
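As a toy numerical sketch of that danger (the dynamics and numbers here are invented purely for illustration, not drawn from Schmachtenberger or anyone else): an optimizer that keeps improving the only metric inside its model can report steady progress while an unmodeled variable quietly collapses.

# Toy illustration (made-up dynamics): "yield" is the metric in the model;
# "soil health" is real but absent from the model, so its decline is invisible.
yield_per_season = 100.0   # the metric the model optimizes
soil_health = 1.0          # real, but outside the model entirely

for season in range(10):
    yield_per_season *= 1.10    # "improving the model's effectiveness" by 10%
    soil_health *= 0.85         # hidden cost the model never measures

actual_yield = yield_per_season * soil_health

print(f"Modeled yield: {yield_per_season:.0f}")   # ~259: looks like steady progress
print(f"Actual yield:  {actual_yield:.0f}")       # ~51: worse than we started
print(f"Soil health:   {soil_health:.2f}")        # ~0.20: the harm outside the model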

Wisdom is related to wholes and wholeness, while intelligence is related to a usually narrow sense of relevance applied to achieving a usually narrow short-term goal.  We need civilizational systems where the intelligence in and of the system is bound and directed by wisdom.

If our civilization needs a New Year’s resolution, I’d suggest it resolve to transform itself to operate with holoficiency. We would collectively resolve to set things up – especially our decision-making and technologies – so that long-term, broadly beneficial goals and principles constrain and guide all our narrow short-term goals and our humanly AND artificially intelligent efforts to achieve them.

That seems to me to be one very powerful – holoficient – resolution that would make the most sense for all of us, given our growing global predicament.

________________________________

We greatly value your heartfelt support!
Donate HERE.

________________________________

So far we have received $3313 of the $15,000 I hope to raise on this blog for my part of the Co-Intelligence Institute’s December 2023-January 2024 fundraising goal of $222,000. Please join the 49 people who have helped us out this month with a donation of any amount.

________________________________

Tom Atlee, The Co-Intelligence Institute, POB 493, Eugene, OR 97440

Appreciating, evoking and engaging the wisdom and resourcefulness of the whole on behalf of the whole

*** Visit the Wise Democracy Pattern Language Project ***

*** Buy a Wise Democracy Pattern Card Deck ***

Please support our work. Your donations are fully tax deductible in the US.
