New paper: From a False Sense of Safety to Resilience Under Uncertainty

Understanding how people act in crises and how to manage risk is crucial for decision-makers in health, social, and security policy. In a new paper published in the journal Frontiers in Psychology, we outline ways to navigate uncertainty and prepare for effective crisis responses.

The paper is part of a special issue called From Safety to Sense of Safety. The title is a play on this topic, which, superficially interpreted, can lead to a dangerous false impression: that we ought to intervene on people’s feelings instead of the substrate from which they emerge.

Nota bene: In June 2024, this topic is part of an online course for the New England Complex Systems Institute, and I have some discount codes for friends of this blog. Do reach out!

The Pitfall of a False Sense of Safety

In the paper, we first argue that we should understand so-called disaster myths, a prominent one being the myth of mass panic. This refers to the idea that people tend to lose control and go crazy during crises when they worry or fear too much, implying that we need to intervene on risk perceptions. But in fact, no matter what disaster movies or news reports show you, actual panic is rare: during crises, people tend to act prosocially. Hence, decision-makers should shift their focus from mitigating fear and worry – potentially creating a false sense of safety – towards empowering communities to autonomously launch effective responses. This approach fosters resilience rather than complacency.

Decision Making Under Uncertainty: Attractor Landscapes

Secondly, we present some basic ideas of decision making under uncertainty via the concept of attractor landscapes. (In hindsight, I wish we had talked about stability landscapes instead, but that ship has sailed.) The idea can be understood like this: say your society is the red ball, and each tile is a state it can be in (e.g. “revolt”, “thriving”, “peace”). The society moves through a path of states.

These states are not equally probable; some are more “sticky” and harder to escape, like valleys in a landscape. These collections of states are called attractors. The area between two attractors is a tipping point (or here, a kind of “tipping ridge”).

I wholeheartedly encourage you to spend five minutes on Nicky Case’s interactive introduction to attractor landscapes here. It’s truly enlightening. The main thing to know about tipping points: as you cross them, nothing happens for a long time… Until everything happens at once.
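The tipping-point dynamic can be sketched in code. Below is a minimal toy model (my own construction, not a figure or model from the paper): a ball rolling in a two-valley quartic landscape that is slowly tilted. For a long stretch of the tilting, nothing visible happens; then the left valley vanishes and the state flips at once.

```python
# Toy model (assumption: a quartic potential, not from the paper).
# V(x) = x^4/4 - x^2/2 - tilt*x has two valleys (attractors) near
# x = -1 and x = +1; "tilt" slowly favours the right one.

def slope(x, tilt):
    # Gradient of the landscape: V'(x) = x^3 - x - tilt
    return x**3 - x - tilt

x = -1.0                 # society starts in the left valley
dt = 0.01
trajectory = []
for step in range(60_000):
    tilt = step / 100_000            # tilt creeps up from 0.0 to ~0.6
    x -= slope(x, tilt) * dt         # ball rolls downhill (overdamped)
    trajectory.append((tilt, x))

# The fold bifurcation sits at tilt = 2/(3*sqrt(3)) ~ 0.385. Before it,
# the ball barely drifts; just past it, the left valley disappears and
# the state jumps abruptly to the right attractor.
final_tilt, final_x = trajectory[-1]
print(final_tilt, final_x)           # final_x has flipped to ~ +1.2
```

The design choice here is the slow tilt: the control parameter changes gradually and smoothly, yet the system's state changes discontinuously, which is exactly the "nothing for a long time, then everything at once" signature of crossing a tipping point.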

The Dangers of Ruin Risks

Not all attractors are made equal, though. Some, once entered, can never be escaped. These are called “ruin risks” (orange tile). If there is a possibility of ruin in your landscape, probability dictates that you will eventually reach it, obliterating all future aspirations.

As a basic principle, it does not make sense to see how close to the ledge you can walk and not fall. In personal life, you can take ruin risks to impress your friends or shoot for a Darwin Award. But keep your society away from the damned cliff.

As Nassim Nicholas Taleb teaches us: Risk is ok, ruin is not.
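Taleb's point can be made concrete with toy numbers (my own illustration, not from the paper): if each round of exposure carries even a small probability of ruin, survival requires dodging it every single round, and the odds compound relentlessly against you.

```python
# Toy illustration (assumed numbers, not from the paper): a 1% chance of
# ruin per round seems negligible -- until it compounds over many rounds.
p_ruin_per_round = 0.01

def p_survival(rounds):
    # Surviving means avoiding the absorbing "ruin" state every round.
    return (1 - p_ruin_per_round) ** rounds

for rounds in (10, 100, 1000):
    print(rounds, round(p_survival(rounds), 3))
# → 10 0.904
#   100 0.366
#   1000 0.0
```

This is why a ruin state differs in kind, not merely in degree, from an ordinary bad outcome: repeated exposure makes eventually hitting it a near-certainty, and there is no recovering afterwards.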

Navigating the Fog of War

In reality, not all states are visible from the start. Policymakers often face a “fog of war” (grey areas). Science can sometimes highlight where the major threats lie (“Here be Dragons”), but the future often remains opaque.

To make things worse for traditional planning, as you move a step from the starting position, the tiles may change. So you defined an ideal state, a Grand Vision (yellow), and set the milestones to reach it? If you remain steadfast, you could now be heading toward a dead end or worse. Uh-oh.

(nb. due to space constraints, this image didn’t make it to the paper)

This situation, described in Dave Snowden’s Cynefin framework, is “complex.” Here, yesterday’s goals are today’s stray paths, so when complexity is high, you focus on the present – not some imaginary future. The strategy should be to take ONE step in a favourable direction, observe the now-unfolded landscape, and proceed accordingly.
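The “one step, observe, proceed” heuristic can be contrasted with steadfast planning in a toy drifting landscape (my own construction, purely illustrative; not part of the Cynefin framework itself):

```python
# Toy drifting landscape (my own construction, not from Snowden): compare
# a fixed Grand Plan against "one step, observe, proceed" when the
# landscape shifts under you.

def value(x, t):
    # Fitness of state x at time t; the peak drifts right one tile
    # every five time steps, so yesterday's optimum decays.
    peak = t // 5
    return -abs(x - peak)

steps = 50
planner = 0          # plans at t=0: the peak is at 0, so stay there
adaptive = 0         # re-observes the landscape before every step
for t in range(steps):
    # take ONE step in the locally most favourable direction
    adaptive = max((adaptive - 1, adaptive, adaptive + 1),
                   key=lambda x: value(x, t))

final_peak = (steps - 1) // 5
print(planner, adaptive, final_peak)    # → 0 9 9
```

The adaptive walker ends on the current peak while the planner sits at a goal that stopped being optimal long ago: in a shifting landscape, re-observation after each step beats commitment to a pre-computed destination.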

The Cynefin Framework and Complex Systems

Sensemaking is a motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively.

Gary Klein

Sensemaking (or sense-making; Dave Snowden hyphenates it to stress that it is something you do, a verb) refers to the attempt or capacity to make sense of an ambiguous situation in order to act in it. This is what we must do in complex situations, where excessive analysis can lead to paralysis instead of clarity.

Cynefin is a sense-making framework designed to enable conversations about such situations, and it offers heuristics for navigating the context. In the paper, we propose some tentative mappings of attractor landscape types to the Cynefin framework.

In general, our paper offers proposals for good governance, drawing from the science of sudden transitions in complex systems. Many examples pertain to pandemics, as they represent one of the most severe ruin risks we face (good contenders are of course wars and climate change).

By understanding the concepts illustrated here, policymakers could better navigate crises and build resilient societies capable of adapting to sudden changes.

If you want a deeper dive, please see the paper discussed in this post: At-tractor what-tractor? Tipping points in behaviour change science, with applications to risk management

NOTE: There’s another fresh paper out, this one in Translational Behavioral Medicine: How do behavioral public policy experts see the role of complex systems perspectives? An expert interview study. Could be of interest, too!

What Behaviour Change Science is Not

Due to frequent misconceptions about the topic, I wanted to outline a via negativa description of this thing called behaviour change science: in other words, what is it not? This is part of a series of posts clarifying the perspective I take in instructing a virtual course on behaviour change in complex systems at the New England Complex Systems Institute (NECSI). The course mixes behaviour change with complex systems science along with practical collaboration tools for making sense of the world in order to act in it.

Behaviour change science refers to an interdisciplinary approach, often hailing from social psychology, to studying and changing human behaviour. The approach is motivated by the fact that many large problems we face today – be they spreading misinformation, preventing non-communicable diseases, taking climate action, or preparing for pandemics – contain human action as a major part of both the problem and its solution.

Based on many conversations regarding confusions around the topic, there is a need to clarify five points.

First, “behaviour change” in the current context is understood in a broad sense of the term, synonymous with human action, not as e.g. behaviourism. As such, it encompasses not only individuals, but also other scales of observation, from dyads to small groups, communities, and society at large. Social ecological models, for example, encourage us to think in such a multiscale manner, considering how individuals are embedded within larger systems. Methods for achieving change tend to differ for each scale; e.g. impacting communities entails different tools than impacting individuals (though we can also unify these scales). And the people I talk to in behaviour change understand that action arises from interaction (albeit they may lack the specific terminology).

Second, in the behaviour change context the term intervention is understood in a broader sense than “nudges” that mess with people’s lives. A behaviour change intervention denotes any intentional change effort in a system, from communication campaigns to community development workshops and structural measures such as regulation and taxation. Even at the individual level, behaviour change interventions do not need to imply that an individual’s life is tampered with in a top-down manner; in fact, the best way to change behaviour is often to provide resources which enable individuals to act in better alignment with goals they already have. Interventions can and do change environments that hamper those goals, or provide social resources and connections which enable individuals to take action with their compatriots.

Third, behaviour change is not an activity taken up by actors standing outside the system being intervened upon. Instead, best practices of intervention design compel us to work with stakeholders and communities when planning and implementing interventions. This imperative goes back to Kurt Lewin’s action research, where participatory problem solving is combined with research activities. Leadership in social psychology is often defined not as the actions of a particular high-ranking role, but as those available to any individual in a system. Behaviour change practice is the same. To exaggerate only slightly: “Birds do it, bees do it, even educated fleas do it”.

Fourth, while interventions can be thought of as “events in systems” – some of which produce lasting effects while others wash away – viewing interventions as transient programme-like entities can narrow our thinking about how incremental, evolutionary, bottom-up behaviour change could optimally be enabled. Governance is, after all, conducted by local stakeholders in constant contact with the system, with greater leeway to adjust actions without fear of breaking evaluation protocol, and hopefully with “skin in the game” long after intervention designers have moved on.

Fifth, nothing compels an intervention designer to infuse something novel into a system. For example, reverse translation studies what already works in practice, aiming to learn how to replicate success elsewhere. De-implementation, on the other hand, studies what does not work, with the goal of removing practices that cause harm. In fact, “Primum non nocere” – first, do no harm – is the single most important principle for behaviour change interventions.

Making Sense of Human Action

Understanding and influencing human behavior is usually not a simple endeavor. Behaviors are shaped by a multitude of interacting factors across different scales, from the individual to the societal, and occur within systems of systems. Developing effective behavior change interventions requires grappling with this complexity. The approach taken in traditional behaviour change science uses behaviour change theories to make this complexity more manageable. I view these as more akin to heuristic frameworks with practical utility – codification attempts of “what works for whom and when” – rather than theories in the natural science sense.

If you want a schematic of how I see behaviour change science, it might be something like the triangle below. It’s a somewhat silly representation, but what the triangle tries to convey is that complex systems expertise sets out strategic priorities: which futures should we pursue, and what kinds of methods make sense to get us going (the key word is often evolution).

Behaviour change science, on the other hand, is much more tactical, offering tools and frameworks to understand how to make things happen closer to where the rubber hits the road.

But we will also go nowhere unless we can harness the collective intelligence of stakeholders and organisation/community members. This is why collaboration methods are essential. I will teach some of the ones I’ve found most useful in the course mentioned in the intro.

If you want to learn more about the intersection of complex systems science and behaviour change, have a look at my Google Scholar profile, or see these posts: