Evidence is in the Past, Risk is in the Future: On Tail Events and Foresight

Context: This post outlines a manuscript in preparation and exhibits some of its visualisations, partly also presented at the European Public Health Conference (November 2025). If the blog format isn’t your thing, you can also see this video or this one-pager (conference poster).


It’s April 2025. Red Eléctrica, Spain’s electricity grid operator, declares: “There exists no risk of a blackout. Red Eléctrica guarantees supply.”

Twenty days later, a massive blackout hits Portugal, Spain, and parts of France.

What the hell happened?

To understand this, we need to talk about ladders.

The Ladder Thought Experiment

Let’s take an example outlined in the wonderful article An Introduction to Complex Systems Science and Its Applications: Imagine 100 ladders leaning against a wall, each with a 1/10 probability of falling. If the ladders are independent, the probability that two given ladders fall together is 1/100. Three falling together: 1/1000. The probability of all 100 falling simultaneously becomes astronomically small – negligible, essentially zero.

Now tie all the ladders together with rope. You’ve made any individual ladder safer (less likely to fall on its own), but you’ve created a non-negligible chance that all might fall together.
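To make the arithmetic concrete, here’s a minimal Monte Carlo sketch of the thought experiment (the coupling rule – one falling ladder drags down all the rest – is my toy assumption, not a claim from the original article):

```python
import numpy as np

rng = np.random.default_rng(42)
n_ladders, n_trials = 100, 100_000

# Independent ladders: each falls on its own with p = 1/10.
falls = rng.random((n_trials, n_ladders)) < 0.10
print("P(all 100 fall | independent):", falls.all(axis=1).mean())  # ~0.1**100: never observed

# Tied-together ladders (toy rule): the rope makes each ladder safer
# alone (p = 1/100), but if any single one falls, it drags down the rest.
triggers = rng.random((n_trials, n_ladders)) < 0.01
print("P(all 100 fall | tied):", triggers.any(axis=1).mean())  # ~1 - 0.99**100 ≈ 0.63
```

Individually safer, collectively far riskier: that is the trade-off the rope buys you.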

This is cascade risk in interconnected systems.

Two Types of Risk

From a society’s perspective, we can understand risks as falling into one of two categories:

Modular risks (thin-tailed) don’t endanger whole societies or trigger cascades. A traffic accident in Helsinki won’t affect Madrid or even Stockholm. These risks have many typical precedents, slowly changing trends, and are relatively easy to imagine. We can use evidence-based risk management because we have large samples of past events to learn from.

If something is present daily but hasn’t produced an extreme for 50 years, it probably never will.

Cascade risks (fat-tailed) pose existential threats through domino effects. Pandemics, wars, and climate change fall here. They’re abstract due to rarity, with few typical precedents – events tend to be either small or catastrophic, with little in between.

If something hasn’t happened for 50 years in this domain, we might have just been lucky, and it might still hit us with astronomical force.

Consider these examples:

  • Workplace injuries
  • Street violence
  • Non-communicable diseases
  • Nuclear plant accidents
  • Novel pathogens
  • War

Before reading on, give it a think. Which are modular? Which are cascade risks?

I’d say most workplace injuries and street violence are modular (unless caused by organised crime or systemic factors like pandemics). Non-communicable diseases are also modular, although they can be driven by systemic issues. Mega-trends perhaps, but you wouldn’t expect a year in which they suddenly doubled, or increased 10-fold.

Novel pathogens and wars are cascade risks that spread across borders and trigger secondary effects. These are the ladders tied together with a rope. Nuclear plants are a borderline case; nowadays people try to build many small modular reactors instead of one huge reactor, so that the failure of one doesn’t trigger the failure of others. But as the mathematician Raphael Douady put it: “Diversifying your eggs into different baskets doesn’t help if all the baskets are on board the Titanic” (see the Fukushima disaster).

Is That a Heavy Tail, or Are You Just Happy to See Me?

Panels A) and B) below show pandemic data (data source, image source, alt text for details) – with casualties rescaled to today’s population. The Black Death in the 1300s caused more than 2.5 billion deaths in contemporary terms. Histograms on the right show the relative number of different-sized events. The distribution shows tons of small pandemics and a few devastating extremes, with almost nothing in between (panel A, vertical scale in billions). We see a similar shape even when we remove the two extreme events (panel B, vertical scale in millions).

Panel A: “Paretian” dynamics of a systemic risk, illustrated by casualties from pandemics with over 1000 deaths, rescaled to the contemporary population, with years indicating the beginning of each pandemic (data from Cirillo & Taleb, 2020; COVID-19 deaths are presented until June 2024 according to the model by The Economist & Solstad, S., 2021). Panel B: Same as panel A, zooming into the events with fewer than 1B deaths. This illustrates how the variance remains vast even when the scale of events is much smaller. Panel C: Casualties from traffic accidents in Finland, illustrating the dynamics of a “thin-tailed”, localised risk. In this case, it would not be reasonable to expect a sudden increase to 10 000 casualties, whereas in the prior examples such jumps are an integral part of the occurrence dynamics.

Compare this to Panel C), Finnish traffic fatalities. Deaths cluster together predictably. You wouldn’t expect 10 000 road deaths in a single year – even 2 000 would be shocking.

Moving from observations to theory: The figure below compares mathematical “heavy-tailed” distributions to “thin-tailed” distributions. Heavy-tailed distributions show:

  • Many more super-small events than thin-tailed distributions: Look at the very left side of the left panel below, where the red line is above the blue one
  • Fewer mid-size events: Look at the middle portion of the left panel below, where the blue line is higher than the red one
  • Extreme events of huge magnitude that remain plausible: Look at the inset, which zooms into the tail (in thin-tailed distributions, mega-extremes are practically impossible – like the ladders without a rope)

In the right panel of the image above, thin-tailed distributions (like traffic deaths) drop off suddenly when plotted on a logarithmic scale. Fat-tailed distributions (like pandemics) trace a straight line, meaning very large events remain statistically plausible.
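If you want to see this for yourself, here’s a hedged sketch: plot the empirical survival function P(X > x) of a fat-tailed and a thin-tailed sample on log-log axes. The distribution parameters are illustrative assumptions, not fits to the pandemic or traffic data:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 100_000
samples = {
    "fat-tailed (Pareto)": 1 + rng.pareto(a=1.1, size=n),            # illustrative tail exponent
    "thin-tailed (exponential)": rng.exponential(scale=1.0, size=n),
}

for label, sample in samples.items():
    x = np.sort(sample)
    ccdf = 1.0 - np.arange(n) / n   # empirical P(X >= x)
    plt.loglog(x, ccdf, label=label)

plt.xlabel("event size x")
plt.ylabel("P(X > x)")
plt.legend()
plt.show()
```

The Pareto tail traces a rough straight line; the exponential tail plunges.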

Or, at least that’s the theory, based on mathematical abstraction. Let’s see what the real data shows.

And here we go: The tail of actual pandemics looks like a straight line, while the tail of traffic deaths curves down like an eagle’s beak. Pretty neat, huh?

Evidence Lives in the Past, Risk Lives in the Future

In the interests of time, I’m going to skip a visualisation you can see in the video (26:45). The main point is that for thin-tailed modular risks, we can extrapolate from past data. For heavy-tailed cascade risks, we must form plausible hypotheses from current, weak, and incomplete signals.

This is the difference between induction (everything that happened before has these features, so future events will too) and abduction (reasoning to the most sensible course of action given limited information). All data is data from the past, and if the past isn’t a good indicator of the future, we need different ways of acting:

The mantra of resilience is early detection, fast recovery, rapid exploitation.
Dave Snowden

We need to detect weak signals early. The longer we wait, the bigger the destruction.

A Practical (piece of a) Solution: Participatory Community Surveillance Networks

In our research group, we’re developing networks of trusted survey respondents who participate regularly (see article), akin to the idea of “citizen sensor networks” also presented in the EU field guide Managing complexity (and chaos) in times of crisis. With such a network in place, you can collect experiences and feedback on policy decisions during calm times. When a crisis hits, you can pivot to gather rich real-time data from the field.

Why? Because nobody can see everything, and we see what we expect to see. If you don’t believe me, see if you can solve this mystery.

Given enough eyeballs, all bugs are shallow
– Eric S. Raymond

The process:

  1. Set up a network of trusted responders
  2. Collect experiences continuously
  3. Pivot when a crisis takes place, to gather data on how the disruption shows up in lived experience
  4. Avoid the trap of post-emergency mythmaking, and do a “lessons learnt” analysis with data collected during the disruption

Example: Inhabitant Developer Network

We developed an idea in a Finnish town, where new inhabitants would join the network as part of a “welcome to town” package. We could ask:

  • “What’s better here than where you lived before?” → relay to marketing
  • “What’s worse here than where you lived before?” → relay to development

When a crisis occurs, we could pivot, asking how the disruption shows up in people’s lived experience:

  • “What happened?”
  • “Give your description a title”
  • “How did this affect things important to you?”
  • “How well did you do during and after?” (1-10 scale)
  • “How prepared were you?” (1-10 scale)
  • … etc.

Respondents self-index these experiential snippets with quantitative indicators, giving us both qualitative richness and quantitative patterns. We can then, for example, examine situations where people were well-prepared but didn’t do well, or did well despite being unprepared – and filter by tags such as rescue service involvement. This gives us rich data from the field to inform local decision makers.
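As a toy sketch of what that filtering could look like (the field names and data below are invented for illustration; the real instrument is described in the linked article):

```python
import pandas as pd

snippets = pd.DataFrame([
    {"title": "Blackout on our street", "did_well": 3, "prepared": 8, "tags": ["power", "rescue_services"]},
    {"title": "Neighbours shared a generator", "did_well": 9, "prepared": 2, "tags": ["power", "community"]},
    {"title": "Water outage after the storm", "did_well": 7, "prepared": 7, "tags": ["water"]},
])

# Well-prepared respondents who nevertheless did poorly:
surprises = snippets[(snippets.prepared >= 7) & (snippets.did_well <= 4)]

# Did well despite being unprepared, filtered by a tag:
improvised = snippets[
    (snippets.prepared <= 3)
    & (snippets.did_well >= 7)
    & snippets.tags.apply(lambda tags: "community" in tags)
]
print(surprises.title.tolist(), improvised.title.tolist())
```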

From Experiences to Action

The beauty of collecting people’s lived experiences is that they can later be used for citizen or whole-of-workforce engagement workshops. You can ask Dave Snowden’s iconic question: “How could we get more experiences like this, and fewer like those?”

This question holds an outcome “goal” lightly, allowing journeys to start with a direction rather than a rigid destination. It is understandable regardless of education level, and gives communities agency in developing solutions. This approach enables:

Anticipation: Use tailedness analysis as a diagnostic; use networks to detect weak signals before they explode.

Formulation: Design adaptive interventions with the community – interventions that adapt to change instead of breaking at the first unexpected shock.

Adoption: Build agency, legitimacy and buy-in through participatory processes. People support what they own or help create.

Implementation & Evaluation: Monitor in real-time, learn continuously, act accordingly. No more waiting six months for a report, or getting a quantitative result (“life satisfaction fell from 3.9 to 3.2”) only to need another research project to learn why: You can just look at the qualitative data to understand context.

Why This Matters

When Red Eléctrica declared “there exists no risk,” they were thinking in a thin-tailed world where past data predicts future outcomes. But interconnected systems – like those tied-together ladders – create heavy-tailed risks. For cascade risks, precaution matters more than proof. If you face an existential risk and fail, you won’t be there to try again.

As Nassim Nicholas Taleb puts it: Risk is acceptable, ruin is not (more in this post). And no individual human is capable of understanding our modern, interconnected environments alone.

Bring forth the eyeballs.


Related Posts

From Fruit Salad to Baked Bread: Understanding Complex Systems for Behaviour Change – Why treating behaviour change like assembling fruit salad instead of baking bread leads well-meaning efforts to stumble.

From a False Sense of Safety to Resilience Under Uncertainty – On disaster myths, attractor landscapes, and why intervening on people’s feelings instead of their response capacity is dangerous.

“Mistä tässä tilanteessa on kyse?”: Henkisestä kriisinkestävyydestä yhteisölliseen kriisitoimijuuteen (In Finnish) – From individual resilience to collective crisis agency: reflections from Finland’s national security event.

Riskinhallinta epävarmuuden aikoina: Väestön osallistaminen varautumis- ja ennakointimuotoiluun (In Finnish) – Risk management under uncertainty through participatory anticipatory design.


For deeper exploration of these concepts, I recommend Nassim Nicholas Taleb’s books: Fooled by Randomness, The Black Swan, and Antifragile, as well as the aforementioned EU field guide Managing complexity (and chaos) in times of crisis.

From Fruit Salad to Baked Bread: Understanding Complex Systems for Behaviour Change

New perspectives from my doctoral research, “Complex Systems and Behaviour Change: Bridging Far Away Lands.”

On May 16, 2025, I finally defended my doctoral dissertation – a side-project in the making for the last 9 years or so. I had been pretty confident this would happen two years earlier, when I submitted a rogue version of the dissertation summary for pre-examination. It was titled “Understanding and Shaping Complex Social Systems: Lessons from an emerging paradigm to thrive in an uncertain world”, which is also the name of a course I later started teaching at the New England Complex Systems Institute. The preprint was quickly downloaded almost 1000 times, and people reached out to thank me for the clear exposition. But that version turned out to be a bit too rogue for one of the pre-examiners, and I rewrote the whole thing in 2024 – to be much more technical, and stylistically more conventional.

The defence was a success and here we are, the dissertation finally accepted by the academic establishment. The published summary can be downloaded here. The implicit promise is that after reading the work, you’ll be able to understand this cartoon, which you might recognise as related to the cover image:

As is traditional in the Finnish system, I began the occasion with a Lectio Praecursoria – an introductory speech. This talk introduced the groundwork for my research, exploring the often-overlooked connections between two seemingly distant scientific fields: complex systems and behaviour change.

This blog post adapts that initial speech, inviting you to explore these ideas with me.

The Core Idea: Why We Need to Rethink Behaviour Change

The research I present explores the intersection of two scientific domains that might seem, at first glance, quite distant. But what I want to do is share why building bridges between complex systems and behaviour change is not merely an academic curiosity but, as I argue in this work, a vital step towards deepening our understanding of human action in our increasingly interconnected world – and ultimately, towards building a more robust basic science of behaviour change. [Side note: you can find my perspective on what behaviour change is NOT here, and connections to risk management here and here.]

The “Fruit Salad vs. Bread” Analogy: Understanding Different Types of Systems

To begin, let us talk about the difference between making fruit salad and baking bread. I am well aware of how ludicrous this sounds, but I believe that confusing these two processes consistently causes well-meaning efforts, particularly those aimed at changing behaviour, to stumble. So please bear with me.

Imagine making fruit salad for a bunch of children. You gather fruits you enjoy – perhaps pineapple, peach, and cherries. You’re fairly confident that if you like them separately, you’ll like them together. You chop them, combine them, and serve them. Now, if a child finds that cherries look too strange to be edible – and leaves them behind – it’s no catastrophe. They can still consume the pineapple and peach, which every reasonable person enjoys. The uneaten cherries can be consumed by someone else later. In fruit salad, we can combine ingredients, analyse the parts somewhat independently, and predict the outcome of the whole with reasonable certainty. With many ingredients, fruit salad can become complicated – a word whose origins (as pointed out by Dave Snowden) can be taken to mean “folded.” And what has been folded can often be unproblematically unfolded.

Now, think about baking bread. You combine yeast, flour, water, and salt. You’ve heard that olive oil is healthy, so you add a bit of that in. You mix, knead, let it rise, bake. The final loaf emerges. But what if the children dislike the taste of olive? You cannot simply remove the oil. Or what if you put in too much salt? The ingredients have interacted, transformed. The bread is an emergent product, something entirely new, fundamentally different from the mere sum of its parts. The whole portion intended for the children, not just the offending component, might have to be passed to an omnivorous family member. This process is better described not as complicated, but as complex, a word with roots that can be interpreted as “entangled” or “interwoven.”

Unlike with folding, what is interwoven cannot easily be disentangled without fundamentally changing its nature.

The Two Key Disciplines: Behaviour Change and Complex Systems

With this analogy in mind, let’s turn to the disciplines central to my research.

Behaviour change science is an inherently interdisciplinary field drawing from psychology, sociology, public health, and more. It strives to understand the web of factors – personal, social, environmental – that shapes our actions. Its goal is to help foster changes needed to tackle major societal challenges: from noncommunicable diseases (entailing, for example, physical activity behaviours) and sustainable work-life (entailing, for example, job crafting behaviours) to climate action and pandemic preparedness (entailing risk management behaviours). Human action is a core thread in all these pressing issues.

The other discipline central to this work is complex systems science. It originally grew out of physics, chemistry, and biology, but its principles increasingly reach into the psychological and social world. It studies systems composed of many interacting parts, where these interactions often dominate the system’s overall behaviour. A key insight is that the relationships between components can be more critical than the components themselves in determining the system’s properties. Think of water: ice, liquid, and steam involve the same H₂O molecules, but their differing interconnectivity leads to vastly different behaviours. Steam can make a sauna feel warm; ice can make swimming difficult afterwards. But the components remain the same.

Are We Using Fruit-Salad Tools for Bread-Like Problems?

When it comes to systems, some are more component-dominant, like fruit salad, while others are more interaction-dominant, like bread. My research argues that many phenomena central to behaviour change science – like motivation dynamics, the spread of social norms, or how people respond to interventions – are far more like bread than fruit salad. They occur as parts of complex, interaction-dominant systems.

The main contributions of my dissertation relate to the development of basic science. Early theories in behaviour change were driven by practitioners aiming to understand issues they faced. And practitioners are often very good at working with complexity, although their terminology for describing the phenomena at play may sometimes be limited. Still, many of the quantitative tools relied upon in developing these theories implicitly treated behaviour change phenomena like fruit salad. For instance, while linear regression analysis can incorporate simple interaction terms to account for some forms of interdependence, its main usage is to assign values to variables such as norms, intentions, and attitudes, assuming they are independent of each other – implying separability. Furthermore, there’s a common, often implicit, assumption that findings derived from group-level data directly translate to understanding how individuals change over time.

So, the central question becomes: If behaviour change is often entangled and emergent like bread dough, should our primary tools be those best suited for slicing separable fruit?

Beyond Linearity: Embracing the Complexity of Change

I argue that this potential mismatch – analysing bread with fruit salad tools – can hinder our understanding of behaviour change as a complex evolving process. Complex systems science suggests that variability, which might look like messiness or error from a purely linear perspective, is often not just noise; it can be the inherent signature of the dynamic system itself.

A key characteristic of these systems, which I investigated conceptually and empirically, is non-linearity. Imagine pushing a boulder near a hilltop:

You push a little, the boulder moves a little.

You push a little, the boulder moves a little.

You push a little… and the boulder tumbles dramatically into a new valley.

Perhaps now scientists rush to the scene to investigate what was distinct in your technique for the last push. And they will inevitably find results. But the magic was not in the push, but in the relationship between the push, the boulder’s position, and the landscape. This kind of abrupt, disproportionate change is known as a critical transition.

Mapping Change: The Power of Attractor Landscapes

Complex systems science offers a powerful conceptual tool to map transition dynamics: the attractor landscape. Imagine a pool table with a single billiard ball. Each position on the table represents a possible state for the system, and the current status is represented by the location of the billiard ball. Now imagine the surface isn’t flat, but contains hills and valleys. The valleys represent stable patterns – the attractors, collections of similar states that “trap” the ball. It’s easy for the ball to settle into a valley; it requires more effort or perturbation to push it out. The ridges between valleys are called tipping points.

A slice of an attractor landscape showing two major ways systems can shift abruptly (from an article included in the dissertation)

Think of smoking, where dispositions in the North Atlantic world shifted gradually if at all for many decades. Imagine this as a landscape: one valley where smoking is socially acceptable, and another where it is frowned upon. There was little change for a long time, until a tipping point was reached, leading to widespread disapproval and significant policy changes. Pushing the system over the ridge requires effort or a significant nudge, but once crossed, it naturally settles into a new attractor valley, a new stable pattern. However, this landscape isn’t necessarily static; it can transform and be reshaped. Think of this like the hills and valleys of the pool table rising and falling over time.

Notice how different this landscape representation is from conventional flowcharts suggesting neat, linear causes and effects. It shifts focus towards understanding the system’s dispositions, its underlying tendencies and stabilities. It encourages a focus on nurturing the conditions, tending the substrate, working the soil, from which desired behaviours – in deeper, more stable valleys – can emerge, and sustain themselves more naturally.
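For readers who like to tinker, here’s a minimal simulation sketch of the metaphor: a state variable rolling on a double-well landscape V(x) = x⁴/4 − x²/2 under small random kicks. All parameters are illustrative assumptions, not estimates from any study:

```python
import numpy as np

def v_prime(x):
    # Slope of the landscape V(x) = x**4 / 4 - x**2 / 2.
    return x**3 - x

rng = np.random.default_rng(1)
x, dt, noise = -1.0, 0.01, 0.35          # start the "ball" in the left valley
trajectory = np.empty(200_000)
for i in range(trajectory.size):
    # Overdamped dynamics: roll downhill, plus a small random kick.
    x += -v_prime(x) * dt + noise * np.sqrt(dt) * rng.standard_normal()
    trajectory[i] = x

# The state spends long stretches near the valleys (x ≈ ±1) and rarely
# lingers on the ridge (x ≈ 0) between them.
counts, _ = np.histogram(trajectory, bins=[-10, -0.5, 0.5, 10])
print(dict(zip(["left valley", "ridge", "right valley"], counts.tolist())))
```

The valleys are the attractors; the occasional noise-driven jumps across the ridge are one route to abrupt change (the other, as in the figure above, is the landscape itself reshaping).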

Evidence in Action: From Work Motivation to Public Health

In my research, I used analytical techniques adapted from dynamical systems theory to investigate empirical evidence for such attractor states and shifts within fine-grained, moment-to-moment work motivation data. I also explored its applicability to societal-level data on COVID-19 protective behaviours. This work suggests the landscape metaphor is not just a useful theoretical vehicle; these patterns can be observed and studied in real-world behaviour change contexts.

In addition to non-linearity, some of the patterns of complex systems I examined in this research were “non-stationarity” and “non-ergodicity”. In my work, I clarify these terms in the behaviour change context and demonstrate how to study them empirically in time series data, with methods such as cumulative recurrence network analysis.
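To give a flavour of that method family, here’s a bare-bones recurrence matrix, the building block beneath recurrence network analyses. The cumulative variant used in the dissertation is more involved, and the time series and threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy motivation time series: two regimes, with a shift halfway through.
ts = np.concatenate([rng.normal(3, 0.3, 50), rng.normal(6, 0.3, 50)])

eps = 0.5                                  # recurrence threshold (assumption)
dist = np.abs(ts[:, None] - ts[None, :])   # pairwise distances between time points
R = (dist < eps).astype(int)               # R[i, j] = 1 when states i and j recur

# Read as a network adjacency matrix: dense blocks along the diagonal mark
# stable regimes (visits to an attractor); the empty off-diagonal block
# shows the two regimes never recur into each other.
print("early vs early:\n", R[:5, :5])   # mostly 1s: same regime
print("early vs late:\n", R[:5, -5:])   # all 0s: different regimes
```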

The Key Takeaway: Complexity as a Feature, Not a Bug

In essence, the core message of this work is that the bread-like complexity of human behaviour change isn’t just noise or a problem to be simplified away. It’s a fundamental characteristic we must embrace and understand scientifically if we want our science to accurately reflect the phenomena it studies. Complex systems science provides concepts and tools that acknowledge interdependence, emergence, and context-sensitivity of change phenomena. And we aim not to eliminate this complexity, but to enlist it.

Looking Ahead: Building a Bridge to a More Robust Science of Human Action

By building bridges between behaviour change science and complex systems science, the research presented here argues that a complex systems perspective can help us build a more robust and realistic science of human action – one that recognises behaviour not just as a collection of separable ingredients like a fruit salad, but as an emergent, interwoven process like baking bread.

This, I believe, is crucial. It is crucial for developing a science better equipped to understand the intricate dynamics of behaviour change. It is crucial for us to seize the opportunities that arise when we learn to converse with complex systems, instead of just trying to push them around. And it is crucial for navigating the critical policy challenges of our time, which invariably involve understanding and enabling human action.


What are your thoughts? Leave a comment or reach out. My current research interests mainly revolve around risk management (see the paper described here) – particularly, understanding and shaping communities’ capacities to respond to, recover from, and adapt to shocks. I’m a 72hours.fi trainer, and would be happy to collaborate on e.g. projects to make the EU’s new preparedness strategy a feasible reality.

Picture of me doing a sound check before the doctoral defence. It was held on Zoom, as I was in Germany, the chair was in Finland, and the opponent in the U.S. 😅

Affordance Mapping to Manage Complex Systems: Planning a Children’s Party

I’ve recently followed with interest Dave Snowden’s development of “Estuarine Mapping”, also known as “Affordance Mapping”. The process is based on a complex systems framework to design and de-risk change initiatives (see the link at the end of this post). After taking part in training sessions and facilitating some mapping exercises with groups, I found myself in want of a metaphor that didn’t require an understanding of coastal geography.

Enter the world of children’s parties. Snowden has a famous anecdote about organising a party for kids, which brilliantly illustrates the folly of applying traditional management techniques to complex systems. Inspired by this tale, I’ve reimagined it here as a simplified depiction of the Affordance Mapping process. So here we go.

Picture yourself tasked with organising a birthday bash for a group of energetic seven-year-olds. But instead of reaching for a conventional party-planning checklist, you decide to employ the Affordance Mapping process. What would you do?

First, you’d start by surveying the party landscape. You’d identify all the elements that could influence the party – from the near-immovable dining table to the ever-shifting moods of the kids. We’ll call these our party elements.

Next, you’d create a map of these elements. On one axis, you’d have how much energy it takes to change each element – moving the dining table would be high energy, while changing the music playlist would be low. On the other axis, you’d have how long it takes to make these changes – getting pizza delivered, or setting up a bouncy castle might take an hour, while changing a game rule could be instant.

Now, you’d draw a line in the top right corner. Everything above this line is beyond your control – things you absolutely can’t change, like the fact that Tommy’s allergic to peanuts. You’d also draw a second line for things that are outside your direct control but amenable to change in collaboration with other parents, like the party ending by 6 PM. You’d also mark a zone in the bottom left corner for elements that change too easily and might need stabilising, like the kids’ attention spans or the volume level.

The result might look something like this:

The exciting part is the middle area. Here’s where you can actually make changes to improve the party; the things you can manage. But you can also try to make some elements more manageable via (de)stabilisation efforts, or remove some altogether.

For example, you might decide to:

  1. Keep some elements as they are (the classic musical chairs game)
  2. Remove others that aren’t fun (the complicated crafts project your spouse found on Pinterest)
  3. Modify some to make them more enjoyable (have the kids organise themselves into a line arranged by height when moving outdoors after the cake is finished)

You’d come up with small experiments to test these ideas. Maybe you’d try introducing a new party game like “freeze dance” to alleviate boredom while waiting for transitions from one activity to the next, or rearranging the gift-opening area. You’d also think about how changing one element might affect others – will having a water balloon toss right before snack time lead to damp clothes?

Finally, you’d plan how to amplify emergent positive side-effects, and mitigate negative ones. You’d also redraw your party map before next year’s party. This way, you’re always working towards a more fun and dynamic party, understanding that some elements will always be shifting (like the kids’ favourite songs) while others stay constant (like the need for cake).
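If you wanted to sketch such a map programmatically rather than on flipchart paper, it could look something like this (the element names and 0–10 scores are invented for illustration):

```python
import matplotlib.pyplot as plt

# Hypothetical party elements: (energy to change, time to change), both 0-10.
elements = {
    "music playlist": (1, 1),
    "game rules": (2, 1),
    "bouncy castle": (5, 6),
    "dining table": (8, 5),
    "party ends by 6 PM": (9, 8),        # negotiable only with other parents
    "Tommy's peanut allergy": (10, 10),  # beyond anyone's control
}

for name, (energy, time) in elements.items():
    plt.scatter(energy, time)
    plt.annotate(name, (energy, time))

plt.xlabel("energy needed to change")
plt.ylabel("time needed to change")
plt.show()
```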

Technical note. The items on the map, in the lingo of the complex systems philosopher Alicia Juarrero, represent “constraints”: things that modulate a system’s behaviour. In complex systems, these are intertwined in such deep ways that their effects are seldom amenable to an analysis of linear causality. To change a system’s macro-level state, you execute multiple parallel micro-interventions that aim to affect these constraints. For a recent open access book chapter outlining the rationale, see here: As through a glass darkly: a complex systems approach to futures.

New paper: From a False Sense of Safety to Resilience Under Uncertainty

Understanding how people act in crises and how to manage risk is crucial for decision-makers in health, social, and security policy. In a new paper published in the journal Frontiers in Psychology, we outline ways to navigate uncertainty and prepare for effective crisis responses.

The paper is part of a special issue called From Safety to Sense of Safety. The title is a play on this topic, which superficially interpreted can lead to a dangerous false impression: that we ought to intervene on people’s feelings instead of the substrate from which they emerge.

Nota bene: As of June 2024, this topic is part of an online course at the New England Complex Systems Institute, and I have some discount codes for friends of this blog. Do reach out!

The Pitfall of a False Sense of Safety

In the paper, we first of all argue that we should understand so-called disaster myths, a prominent one being the myth of mass panic. This refers to the idea that people tend to lose control and go crazy during crises when they worry or fear too much – implying that we need to intervene on risk perceptions. But in fact, no matter what disaster movies or news reports show you, actual panic situations are rare. During crises, people tend to act prosocially. Hence, decision-makers should shift their focus from mitigating fear and worry – which can create a false sense of safety – towards empowering communities to autonomously launch effective responses. This approach fosters resilience rather than complacency.

Decision Making Under Uncertainty: Attractor Landscapes

Secondly, we present some basic ideas of decision making under uncertainty via the concept of attractor landscapes. In hindsight, I wish we had talked about stability landscapes instead, but that ship has sailed. The idea can be understood like this: Say your society is the red ball, and each tile a state it can be in (e.g. “revolt”, “thriving”, “peace”). The society moves through a path of states.

These states are not equally probable; some are more “sticky” and harder to escape, like valleys in a landscape. These collections of states are called attractors. The area between two attractors is a tipping point (or here, kind of a “tipping ridge”).

I wholeheartedly encourage you to spend five minutes on Nicky Case’s interactive introduction to attractor landscapes here. It’s truly enlightening. The main thing to know about tipping points: as you move towards one, nothing happens for a long time… until everything happens at once.
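Here’s a minimal sketch of that dynamic, assuming a double-well landscape that a control parameter c slowly tilts (all numbers are illustrative):

```python
import numpy as np

x, dt = -1.0, 0.01                   # start in the left valley
for c in np.linspace(0.0, 0.5, 6):   # slowly mounting pressure
    for _ in range(20_000):          # let the system settle at each level
        x += (-(x**3 - x) + c) * dt  # roll downhill on the tilted landscape
    print(f"pressure c = {c:.1f} -> state x = {x:+.2f}")

# The state creeps from -1.00 to about -0.78, then leaps to roughly +1.16
# once c passes the fold at 2 / (3 * sqrt(3)) ≈ 0.38: nothing, nothing,
# nothing... then everything at once.
```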

The Dangers of Ruin Risks

Not all attractors are made equal, though. Some, once entered, can never be escaped. These are called “ruin risks” (orange tile). If there is a possibility of ruin in your landscape, probability dictates you will eventually reach it, obliterating all future aspirations.

As a basic principle, it does not make sense to see how close to the ledge you can walk and not fall. In personal life, you can take ruin risks to impress your friends or shoot for a Darwin Award. But keep your society away from the damned cliff.

As Nassim Nicholas Taleb teaches us: Risk is ok, ruin is not.

Navigating the Fog of War

In reality, not all states are visible from the start. Policymakers often face a “fog of war” (grey areas). Science can sometimes highlight where the major threats lie (“Here be Dragons”), but the future often remains opaque.

To make things worse for traditional planning, as you move a step from the starting position, the tiles may change. So you defined an ideal state, a Grand Vision (yellow), and set the milestones to reach it? If you remain steadfast, you could now be heading for a dead end or worse. Uh-oh.

(nb. due to space constraints, this image didn’t make it to the paper)

This situation, described in Dave Snowden’s Cynefin framework, is “complex.” Here, yesterday’s goals are today’s stray paths, so when complexity is high, you focus on the present – not some imaginary future. The strategy should be to take ONE step in a favourable direction, observe the now-unfolded landscape, and proceed accordingly.

The Cynefin Framework and Complex Systems

Sensemaking is a motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively.

Gary Klein

Sensemaking (or sense-making, as Dave Snowden spells it, defining it as a verb) refers to the attempt or capacity to make sense of an ambiguous situation in order to act in it. This is what we must do in complex situations, where excessive analysis can lead to paralysis instead of clarity.

Cynefin is a sense-making framework designed to enable conversations about such situations, and it offers heuristics for navigating the context. In the paper, we propose some tentative mappings of attractor landscape types to the Cynefin framework.

In general, our paper offers proposals for good governance, drawing from the science of sudden transitions in complex systems. Many examples pertain to pandemics, as they represent one of the most severe ruin risks we face (good contenders are of course wars and climate change).

By understanding the concepts illustrated here, policymakers could better navigate crises and build resilient societies capable of adapting to sudden changes.

If you want a deeper dive, please see the paper discussed in this post: At-tractor what-tractor? Tipping points in behaviour change science, with applications to risk management

NOTE: There’s another fresh paper out, this one in Translational Behavioral Medicine: How do behavioral public policy experts see the role of complex systems perspectives? An expert interview study. Could be of interest, too!

What Behaviour Change Science is Not

Due to frequent misconceptions about the topic, I wanted to outline a via negativa description of this thing called behaviour change science: in other words, what is it not? This is part of a series of posts clarifying the perspective I take in instructing a virtual course on behaviour change in complex systems at the New England Complex Systems Institute (NECSI). The course mixes behaviour change with complex systems science along with practical collaboration tools for making sense of the world in order to act in it.

Behaviour change science refers to an interdisciplinary approach, often hailing from social psychology, that studies changing human behaviour. The approach is motivated by the fact that many large problems we face today – be they about the spread of misinformation, preventing non-communicable diseases, taking climate action, or preparing for pandemics – contain human action as a major part of both the problems and their solutions.

Based on many conversations regarding confusions around the topic, there is a need to clarify five points.

First, “behaviour change” in the current context is understood in a broad sense of the term, synonymous with human action – not as e.g. behaviourism. As such, it encompasses not only individuals, but also other scales of observation, from dyads to small groups, communities, and society at large. Social ecological models, for example, encourage us to think in such a multiscale manner, considering how individuals are embedded within larger systems. Methods for achieving change tend to differ for each scale; e.g. impacting communities entails different tools than impacting individuals (but we can also unify these scales). And the people I talk to in behaviour change understand that action arises from interaction (albeit they may lack the specific terminology).

Second, in the behaviour change context the term intervention is understood more broadly than as “nudges” to mess with people’s lives. A behaviour change intervention denotes any intentional change effort in a system, from communication campaigns to community development workshops and structural measures such as regulation and taxation. Even at the individual level, behaviour change interventions do not need to imply that an individual’s life is tampered with in a top-down manner; in fact, the best way to change behaviour is often to provide resources which enable the individual to act in better alignment with goals they already have. Interventions can and do change environments that hamper those goals, or provide social resources and connections which enable individuals to take action with their compatriots.

Third, behaviour change is not an activity taken up by actors standing outside the system that’s being intervened upon. Instead, best practices of intervention design compel us to work with stakeholders and communities when planning and implementing the interventions. This imperative goes back to Kurt Lewin’s action research, where participatory problem solving is combined with research activities. Leadership in social psychology is often defined not as the actions of a particular high-ranking role, but as actions available to any individual in a system. Behaviour change practice is the same. To exaggerate only slightly: “Birds do it, bees do it, even educated fleas do it”.

Fourth, while interventions can be thought of as “events in systems”, some of which produce lasting effects while others wash away, viewing interventions as transient programme-like entities can narrow our thinking about how incremental, evolutionary, bottom-up behaviour change could optimally be enabled. Governance is, after all, conducted by local stakeholders in constant contact with the system, with more leeway to adjust actions without fear of breaking evaluation protocol – and hopefully with “skin in the game” long after intervention designers have moved on.

Fifth, nothing compels an intervention designer to infuse something novel into a system. For example, reverse translation studies what already works in practice, while aiming to learn how to replicate success elsewhere. De-implementation, on the other hand, studies what does not work, with the goal of removing practices causing harm. In fact, “Primum non nocere” – first, do no harm – is the single most important principle for behaviour change interventions.

Making sense of human action

Understanding and influencing human behaviour is usually not a simple endeavour. Behaviours are shaped by a multitude of interacting factors across different scales, from the individual to the societal, and occur within systems of systems. Developing effective behaviour change interventions requires grappling with this complexity. The approach taken in traditional behaviour change science uses behaviour change theories to make this complexity more manageable. I view these as more akin to heuristic frameworks with practical utility – codification attempts of “what works for whom and when” – rather than theories in the natural science sense.

If you want a schematic of how I see behaviour change science, it might be something like the triangle below. It’s a somewhat silly representation, but what the triangle tries to convey is that complex systems expertise sets out strategic priorities: which futures should we pursue, and what kinds of methods make sense to get us going (the key word is often evolution).

Behaviour change science, on the other hand, is much more tactical, offering tools and frameworks to understand how to make things happen closer to where the rubber hits the road.

But we will also go nowhere unless we can harness the collective intelligence of stakeholders and organisation/community members. This is why collaboration methods are essential. I will teach some of the ones I’ve found most useful in the course I mentioned in the intro.

If you want to learn more about the intersection of complex systems science and behaviour change, have a look at my Google Scholar profile, or see these posts:

Crafting Policies for an Interconnected World

This piece has been originally published as: Heino, M. T. J., Bilodeau, S., Fox, G., Gershenson, C., & Bar-Yam, Y. (2023). Crafting Policies for an Interconnected World. WHN Science Communications, 4(10), 1–1. https://doi.org/10.59454/whn-2310-348

While our knowledge expands faster than ever, our ability to anticipate and respond to global challenges or opportunities remains limited. A political upheaval in one country, a technological innovation in another, or an epidemic in a far-away city – any of these can create a global change cascade with many unexpected repercussions. Why is this? A significant part of the answer lies in our increased global connectivity, which produces both new risks and novel opportunities for collaborative action. 

In this rapidly evolving world, proactive and adaptive public policies are paramount, with a primary focus on human well-being, rights, and needs. The COVID-19 pandemic serves as a stark reminder that while traditional political and economic systems claim to represent public interests and allocate resources optimally, there’s often a gap between claim and reality. That people vote for political leaders doesn’t guarantee they will focus on public well-being or the availability of resources. A genuine human-centered focus on well-being, satisfaction, and quality of life becomes indispensable.

Reflecting on our pandemic response, mostly hierarchy-based and bureaucratic, we observed glaring operational shortcomings: delayed responses, disjointed actions, and ineffective execution of preparedness plans [1]. However, what has been less discussed is the insight that the crisis offers into the role of uncertainty due to nonlinear risks in shaping policy outcomes. 

Complex systems may present unseen, extreme risks that can spiral into catastrophic failures if left unaddressed early on. These failures can occur upon reaching instabilities and “tipping points” that result in abrupt large-scale losses of well-being or resilience of a system, be it an ecosystem or a social system such as a nation [2–4].

The poor understanding of such non-linear risks has been apparent throughout the phases of the pandemic, where those who called for increased precaution were often accused of “fearmongering”. A misinterpretation of human reactions is a likely contributor: contrary to common belief, people do not usually panic in emergencies. Instead, they tend to respond in constructive, cooperative ways, if given clear and accurate information. The widespread belief in mass panic during disasters belongs to a group of misconceptions studied in social psychology under the umbrella term of “disaster myths” [5–7]. The real danger lies in creating a false sense of security. If such a sense is shattered by an unexpected event and lack of preparation, the fallout can be far more damaging in terms of physical, mental, and economic impact, not to mention loss of trust. Thus, the general recommendation for communication is to not downplay threats. Instead, authorities need to offer the public clear information about potential risks and, crucially, guidance on how to prepare and respond effectively. This guidance has the potential to transform anxiety and passivity into positive self-organized action [8].

Human action lies at the core of many contemporary challenges, from climate change to public health crises. After all, it is human behavior – collective and nonlinear – that fuels the uncertainty of the modern world. The recognition of how traditional approaches can fall short in our increasingly volatile and complex contexts has led to increased demand for “strategic behavioral public policy” [9].

How can we advance our understanding of human behavior linked to instabilities and tipping points and turn them into capabilities for policy makers? The key is to understand how networks of dependencies between people link behaviors across a system. Complex systems science [10], as a field of study, involves understanding how different parts of a system interact with each other, creating emergent properties at multiple scales that cannot be predicted by studying the parts individually: There is no tsunami in a water molecule, no trusting relationship in an isolated interaction, no behavioral pattern in a single act, and no pandemic in an isolated infection [11]. Yet, the transformative potential of combining behavioral science with an understanding of complex systems science, a crucial tool for decision-making under uncertainty, remains largely untapped.

There are significant opportunities in weaving complex systems perspectives into human-centered public policy, infusing a deeper understanding of uncertainty into the heart of policy-making. A fusion of behavioral insights with an understanding of complex systems is not merely an intellectual exercise but a crucial tool for decision-making in crisis conditions and under uncertainty. As some examples:

  1. It urges us to prepare for uncommon events, like pandemics with impacts surpassing those of major conflicts like World War II. This realization comes as we discover that what would be extremely rare events in isolated systems can become relatively frequent in an interconnected world [12–14]. A long-standing example is how economic crises, which many experts considered rare enough to be negligible, have repeatedly caught us off-guard.
  2. It emphasizes the importance of adaptability in seizing unforeseen opportunities and minimizing potential damages. Central to this adaptability is the concept of “optionality.” This means maintaining a broad array of choices and opportunities, allowing for increased adaptability and selective application based on evolving circumstances. Recognizing that we cannot anticipate every twist and turn of the future, our best approach is indeed to embrace evolutionary strategies; creating systems that effectively solve problems, instead of trying to solve each unique problem separately [15]. An important takeaway is that instead of over-optimizing for current conditions, investing in buffers and exploration – even if they seem redundant – becomes vital when the future is uncertain.
  3. It empowers us to distribute decision-making power to collaborative teams. This is because teams can solve many more high complexity problems than individuals can, and significant portions of the modern world are becoming too complex for even the most competent individuals to fully grasp [16,17].

However, integrating these insights is easier said than done. The shift requires significant capacity building among policymakers. It begins with understanding why novel approaches are necessary, and ensuring that adequate systems for preparedness are empowered. Training programs can help policymakers grasp the concepts of risk, uncertainty, and complex systems.

Developing human-centric policies under uncertainty

One recent training to improve competence in behavioral and complex systems insights [18] emphasized three factors of the policy development process: co-creation, iteration, and creativity. These are briefly outlined below.

  • Co-creation: Ideal teams addressing complex challenges have members with a diversity of backgrounds and expertise, where everyone is able to contribute their knowledge to shared action. Much can be achieved by limiting the influence of hierarchy and enabling interaction between team members and other stakeholders; formal approaches include e.g. the implementation of “red teams” [19]. Those who are most impacted by the plans need to play a key role in the process. They are often citizens, who can provide critical information and expertise about the local environment [20,21].
  • Iteration: Mistakes naturally occur as an intrinsic part of gaining experience, developing the ability to tackle complex challenges, and building organizations to address them. In general, ideas and systems for responding to complex contexts need to be allowed to evolve through (parallel) small-scale experiments and feasibility tests in real-world contexts. Feasibility testing should leverage the aforementioned optionality, retaining the ability to roll back in case of unforeseen negative consequences – or to amplify positive aspects that are only revealed upon observing how the plan interacts with its context [21,22].
  • Creativity: Excessive fear and stress impede innovation. If the design process is science-based, inclusive, and supports learning from weaknesses revealed by iterative explorations that can safely fail, we need not be afraid to try something different or outside of the box. In fact, this is where the most innovative solutions often come from.

Drawing on our earlier discussion on complex systems and human behavior, we understand that in the face of sudden threats, there is a critical need for nimbleness. Rapid response units, representing the frontline of our defense, should possess the autonomy to act, unencumbered by political hindrances. An example would be fire departments’ autonomy to respond to emergencies within pre-set and commonly agreed-upon protocols. The lessons from the pandemic and the insights from complex systems thinking underscore this. But how do we reconcile swift action with informed decision-making?

Transparent, educated communication and trust based on the experience of success can potentially bridge this gap. Science is how we understand the consequences of actions, and selecting the best consequences is essential for global risks. By ensuring policymakers and the public are informed and aligned, we can address risks head-on, anchored in commonly-held values and backed by science. As we lean into the practices discussed earlier, such as co-creation and iteration, our mindset too must evolve. Embracing new, sometimes unconventional, approaches will enable us to sidestep past policy pitfalls, especially those painfully highlighted by recent global events. Protecting rapid response teams from political interference upgrades our societal apparatus to confront the multifaceted challenges of our time.

Learning anticipatory adaptation

Our ultimate aim is clear: proactivity. Rather than reacting once harm is done, we need to anticipate, adapt, and equip policymakers with the necessary insights and tools using a multidisciplinary approach that includes behavioral and complexity sciences. We can respond to the unpredictable, ensuring society is robust and resilient. This necessitates a collective call-to-action, urging citizens and organizations to develop institutions and inform policy makers to empower communities to thrive amidst uncertainties.

Bibliography

[1] Heino MT, Bilodeau S, Bar-Yam Y, Gershenson C, Raina S, Ewing A, et al. Building Capacity for Action: The Cornerstone of Pandemic Response. WHN Sci Commun 2023;4:1–1. https://doi.org/10.59454/whn-2306-015.

[2] Scheffer M, Bolhuis JE, Borsboom D, Buchman TG, Gijzel SMW, Goulson D, et al. Quantifying resilience of humans and other animals. Proc Natl Acad Sci 2018:201810630. https://doi.org/10/gfqjqr.

[3] Heino M, Proverbio D, Resnicow K, Marchand G, Hankonen N. Attractor landscapes: A unifying conceptual model for understanding behaviour change across scales of observation 2022. https://doi.org/10.31234/osf.io/3rxyd.

[4] Scheffer M, Borsboom D, Nieuwenhuis S, Westley F. Belief traps: Tackling the inertia of harmful beliefs. Proc Natl Acad Sci 2022;119:e2203149119. https://doi.org/10.1073/pnas.2203149119.

[5] Clark DO, Patrick DL, Grembowski D, Durham ML. Socioeconomic status and exercise self-efficacy in late life. J Behav Med 1995;18:355–76. https://doi.org/10/bjddw6.

[6] Drury J, Novelli D, Stott C. Psychological disaster myths in the perception and management of mass emergencies: Psychological disaster myths. J Appl Soc Psychol 2013;43:2259–70. https://doi.org/10.1111/jasp.12176.

[7] Drury J, Reicher S, Stott C. COVID-19 in context: Why do people die in emergencies? It’s probably not because of collective psychology. Br J Soc Psychol 2020;59:686–93. https://doi.org/10/gg3hr4.

[8] Orbell S, Zahid H, Henderson CJ. Changing Behavior Using the Health Belief Model and Protection Motivation Theory. In: Hamilton K, Cameron LD, Hagger MS, Hankonen N, Lintunen T, editors. Handb. Behav. Change, Cambridge: Cambridge University Press; 2020, p. 46–59. https://doi.org/10.1017/9781108677318.004.

[9] Schmidt R, Stenger K. Behavioral brittleness: the case for strategic behavioral public policy. Behav Public Policy 2021:1–26. https://doi.org/10.1017/bpp.2021.16.

[10] Siegenfeld AF, Bar-Yam Y. An Introduction to Complex Systems Science and Its Applications. Complexity 2020;2020:6105872. https://doi.org/10/ghthww.

[11] Heino MTJ. Understanding and shaping complex social psychological systems: Lessons from an emerging paradigm to thrive in an uncertain world 2023. https://doi.org/10.31234/osf.io/qxa4n.

[12] Cirillo P, Taleb NN. Tail risk of contagious diseases. Nat Phys 2020;16:606–13. https://doi.org/10/ggxf5n.

[13] Rauch EM, Bar-Yam Y. Long-range interactions and evolutionary stability in a predator-prey system. Phys Rev E 2006;73:020903. https://doi.org/10/d9zbc4.

[14] Taleb NN. Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications. Illustrated Edition. STEM Academic Press; 2020.

[15] Bar-Yam Y. Engineering Complex Systems: Multiscale Analysis and Evolutionary Engineering. In: Braha D, Minai AA, Bar-Yam Y, editors. Complex Eng. Syst. Sci. Meets Technol., Berlin, Heidelberg: Springer; 2006, p. 22–39. https://doi.org/10.1007/3-540-32834-3_2.

[16] Bar-Yam Y. Why Teams? N Engl Complex Syst Inst 2017. https://necsi.edu/why-teams (accessed August 9, 2023).

[17] Bar-Yam Y. Complexity rising: From human beings to human civilization, a complexity profile. Encycl Life Support Syst 2002.

[18] Hankonen N, Heino MTJ, Saurio K, Palsola M, Puukko S. Developing and evaluating behavioural and systems insights training for public servants: a feasibility study. Unpublished manuscript 2023.

[19] UK Ministry of Defence (MOD). Red Teaming Handbook. GOVUK 2021. https://www.gov.uk/government/publications/a-guide-to-red-teaming (accessed August 9, 2023).

[20] Tan Y-R, Agrawal A, Matsoso MP, Katz R, Davis SLM, Winkler AS, et al. A call for citizen science in pandemic preparedness and response: beyond data collection. BMJ Glob Health 2022;7:e009389. https://doi.org/10.1136/bmjgh-2022-009389.

[21] Joint Research Centre, European Commission, Rancati A, Snowden D. Managing complexity (and chaos) in times of crisis: a field guide for decision makers inspired by the Cynefin framework. Luxembourg: Publications Office of the European Union; 2021.

[22] Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ 2021;374:n2061. https://doi.org/10.1136/bmj.n2061.

At-tractor what-tractor? Tipping points in behaviour change science, with applications to risk management

Back in 2020, our research group was delivering the last of five symposia in a project called Behaviour Change Science and Policy (BeSP). I was particularly excited about this one because the topic was complexity, and the symposium series brought together researchers and policy makers interested in improving society – without making things worse by assuming an overly narrow view of the world.

I had a particular interest in two speakers. Ken Resnicow had done inspiring conceptual work on the topic as far back as 2006, and had influenced both me and my PhD supervisor (and BeSP project lead), Nelli Hankonen, early in her career. Sadly, the world wasn't yet ready for an extensive uptake of the ideas, and many of the methodological tools were inaccessible (or unsuitable) to social scientists. The other speaker of particular interest, Daniele Proverbio, was a doctoral researcher trained in physics and systems biology; I had met him by chance at the International Conference on Complex Systems, which I probably wouldn't have attended had it not been held online due to COVID. He was working on robust foundations and real-world applications of systems with so-called tipping points.

I started writing a paper with Ken, Daniele, Nelli and Gwen Marchand, who was also speaking at the symposium, as she had worked extensively on complexity in education. The paper started out as an introduction to complexity for behaviour change researchers, but as I took up a position in the newly founded behavioural science advisory group at the Finnish Prime Minister's Office in late 2020, the whole thing went on the back burner. It wasn't just that, though. Being a scholar of motivation, I knew that being bored by your own words is a major warning sign: things you prefer not to eat, you shouldn't feed to others. So I didn't touch the draft for over a year.

Meanwhile, I finished a manuscript which had started out as a collection of notes from arguments about study design and analysis within our research group, while we were running a workplace motivation self-management / self-organisation intervention. The manuscript demonstrated how non-linearity, non-ergodicity and interdependence can be fatal for traditional methods of analysis. It was promptly rejected by Health Psychology Review, the flagship journal of the European Health Psychology Society – on the grounds that linear methods can solve all the issues, which was exactly the opposite of the manuscript's argument. That piece was later published in Behavioural Sciences, outlining the foundations of complex systems approaches in behaviour change science.
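
As an aside, the non-ergodicity point is easy to demonstrate. Below is a minimal, hypothetical sketch (my toy illustration, not the manuscript's actual analysis): in a multiplicative process, the group average can grow while the typical individual declines, so averaging over people answers a different question than following one person over time.

```python
# Toy illustration of non-ergodicity: the ensemble average and the
# typical individual trajectory of a multiplicative process disagree.
import numpy as np

rng = np.random.default_rng(42)
n_people, n_steps = 10_000, 100

# Each step multiplies a person's state by 1.5 or 0.6 with equal odds.
# Expected factor per step: 0.5 * 1.5 + 0.5 * 0.6 = 1.05 ("growth"),
# but the typical factor is sqrt(1.5 * 0.6) ≈ 0.95 (decay over time).
factors = rng.choice([1.5, 0.6], size=(n_people, n_steps))
final = np.cumprod(factors, axis=1)[:, -1]  # everyone starts at 1.0

print(f"Mean final state (ensemble view):    {final.mean():.3g}")
print(f"Median final state (typical person): {np.median(final):.3g}")
print(f"Share who ended below their start:   {(final < 1).mean():.1%}")
```

A standard analysis of the group mean reports growth, even though the vast majority of individual trajectories decay – and no linear adjustment fixes that.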

As the complexity fundamentals paper had now been written, I wasn't too keen on continuing our BeSP piece – until I was hit by a strange moment where everything I had dabbled with (and discussed with Daniele) over the previous year sort of came together. I re-wrote the entire paper in a very short time, partly around an analysis I had started out of natural curiosity, with no particular goal in mind.

This is non-linearity in action: instead of “productively” writing a little every day, you write nothing for a very long time, and then everything at once. And this is not a pathology – except in the minds of people who think everything in life should follow a pre-planned process of gradual fulfillment. I’ve spent decades trying to unlearn this, so I should know.

The paper turned out very non-boring to me, and I was particularly happy that the aforementioned flagship journal (the one which had rejected the earlier piece) accepted it with no requests for edits – even though it was based on the same underlying ideas as the earlier one.

Graphical abstract of the attractor landscapes paper, courtesy of Daniele Proverbio; it describes two types of tipping points in systems with attractors.

Implications for risk management

The theory underlying attractor landscapes and tipping points highlights two important issues in risk management. Firstly, large changes need not result from large events: small pushes can suffice when the system resides in a shallow attractor, or sits on top of a "hill" in the landscape – the sketch below illustrates this. Secondly, the fact that earlier events have not caused large-scale behaviour change does not imply they will keep failing to do so in the future. This is a mistake Finnish doctors and epidemiologists made constantly throughout the pandemic, e.g. about people's unwillingness to take up masks – we could stop COVID, for example, but don't, because people have been told this attractor is inescapable.
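
To make the first point concrete, here is a minimal toy model (my own sketch, not the paper's): a damped ball in a double-well landscape. The same small kick that a deep attractor absorbs is enough to tip the system out of a shallow one.

```python
# Toy model: the same small kick tips a shallow attractor but not a deep one.
# Landscape: V(x) = a * (x**2 - 1)**2, wells at x = ±1, barrier height a at x = 0.

def final_state(barrier_height, kick, dt=0.01, steps=20_000, damping=0.3):
    """Rest in the left well (x = -1), apply a velocity kick, let the damped
    dynamics x'' = -V'(x) - damping * x' settle, and return the end state."""
    a = barrier_height
    x, v = -1.0, kick
    for _ in range(steps):
        force = -4 * a * x * (x**2 - 1)  # -dV/dx
        v += (force - damping * v) * dt
        x += v * dt
    return x

for a in (1.0, 0.1):  # deep well vs shallow well, identical kick
    x_end = final_state(barrier_height=a, kick=1.0)
    verdict = "tipped to the other attractor" if x_end > 0 else "stayed put"
    print(f"barrier height {a}: settles near x = {x_end:+.2f} ({verdict})")
```

The kick is identical in both runs; only the depth of the attractor differs. This is why "nothing happened last time" is weak evidence of safety: the landscape itself may have flattened in the meantime.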

In a recent training for public servants, we experimented with conveying these ideas to non-scientists – lots of work remains to be done, but some did find it an enlightening escape from conventional linear thinking.

To sum up, some personal takeaways (your mileage may vary):

  1. The quality of motivation you experience when working on something boring is information: there might be a better idea, one actually worth your time, which gets trampled as you muddle through something less attractive. The same applies to health behaviours.
  2. Remain able to seize opportunities when they arise: steer clear of projects with deadlines, and milestones in particular. They coerce you to finish what you started, instead of dropping it for a time and starting anew much later.

The astute reader may have noticed that I did not explain the damned attractors in this post at all. You'll find all you need here:

What Does “Behaviour Change Science” Study?

This is an introductory post about this paper. The paper introduces the object of study in "behaviour change science", i.e. complex systems – which include most human systems, from individuals to communities and nations.

At a health psychology conference many years ago (when we still travelled for that sort of thing), I wandered into the conference venue a bit late, and the sessions had already started. There was just one other person in the hallways, looking a bit lost. I was scared to death of another difficult-to-escape cavalcade of presentations about how someone came up with p-values under 0.05, so I made some joke about our confusion and ended up preventing his attendance, too. It turned out he was a physicist recently hired into a behavioural medicine research group, sent to the conference to get his first bearings on the field. Understandably, he was confused, with a hint of distress: "I don't understand a word of what these people talk about. And I've been to several sessions already without having seen a single equation!" (NB: if you don't think this is funny, you're probably not a social scientist.)

Given that back then I was finding my first bearings in network science, we had a lot to talk about during the rest of the conference. I don't remember much about the conference itself, but I remember him making an excellent point about learning: the best way to learn anything is to talk to someone who has just learned the thing. While not yet mega-experts, they still have an idea of where you stand, and can hence make things much more understandable than those who already swim in a sea of concepts unfamiliar to you.

In a recent paper about behaviour change as a topic of research, we tried to do exactly this. I know I’m crossing the chasm where I’m not yet the mega-expert, but am already losing the ability to see what people in my field find hard to grasp. I presented the paper in a research seminar and people found it quite challenging, but on the other hand, I’ve never seen such ultra-positivity from reviewers. So maybe it’s helpful to some.

This impeccably written manuscript provides a thorough, state-of-the-art review of complex adaptive systems, particularly in the context of behavior change, and it does an excellent job explaining difficult concepts.

– Reviewer 2

Here's a quick test to see if it might be valuable to you: have a look at this table, and if you think all is clear, you can skip the piece with a good conscience.

I also made a video introduction to the topic. If you’re in a rush, you can just run through a pdf of the slides.

If you’re in an even bigger rush, the picture below gives a quick synopsis. To find out more, check out this post. You might also be interested in What Behaviour Change Science is Not.

The Complexity Matters Vodcast

On Fred Hasselman's initiative, we started a new show where we host live-streamed discussions on complexity topics. I will gather a list of episodes with synopses in this post.

Note: The next episode is scheduled to take place on 12 January at 12:30 CET, when we interrogate Travis Wiltshire on issues regarding team dynamics!

S01E01: Complexity in psychological self-ratings.

We discussed Merlijn Olthof's new paper Complexity in psychological self-ratings: implications for research and practice. Links are found in the video comments on the YouTube page.


Interaction is not interaction: An interview with Fred Hasselman

I had the opportunity to interview Fred Hasselman, the main architect of casnet: An R toolbox for studying Complex Adaptive Systems and NETworks. We spoke of how compatible the complex systems perspective is with some methods widely used in social sciences.

A few notes:

  • Multilevel models (and what you put in those) come in many varieties, and some are useful
  • Interaction is not interaction
    • Interaction (1): Two variables are intertwined – or "coupled" – in such a way that they cannot be separated without severing the phenomena arising from their interplay.
    • Interaction (2): A multiplicative, instead of additive, relationship in a linear regression model, where you can partial out variance and get nice beta weights for each variable to determine their individual impacts.
    • The two meanings presented above are logically inconsistent: see #36 in Scott Lilienfeld's "Fifty psychological and psychiatric terms to avoid" (a toy contrast of the two meanings is sketched right after this list)
  • Interdependence means you can’t use the regular statistics which social scientists know and love.
    • … because you lose additivity.
  • “Don’t infer causality, observe it.”
    • When the system you're looking at is an individual instead of e.g. the society, you're in the quite happy position that lab studies are possible (if you're smart about them).
  • An excellent paper from Merlijn Olthof: Complexity in psychological self-ratings: implications for research and practice
  • Additional resources:
    • A symposium we held on complexity in behavioural science, evidence and policy.
    • A workshop by Fred Hasselman (scroll to the end for an extensive reading list).
    • University of Helsinki course by Matti: CARMA – Critical Appraisal of Research Methods and Analysis.
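
To make the contrast concrete, here's a minimal, hypothetical sketch (mine, not Fred's). Meaning (2) is just a product term in an additive model, whose coefficient you can estimate and interpret on its own; in meaning (1), the variables continuously feed back into each other, so there is no per-variable variance to partial out.

```python
# Hypothetical sketch contrasting the two meanings of "interaction".
import numpy as np

rng = np.random.default_rng(1)

# Meaning (2): a multiplicative term in a linear model. The model is still
# additive in its parameters; each coefficient has a separable meaning.
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 * x1 + 2.0 * x2 + 1.5 * (x1 * x2) + rng.normal(scale=0.1, size=n)
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Recovered coefficients (intercept, x1, x2, x1*x2):", betas.round(2))

# Meaning (1): coupled dynamics. a and b continuously feed back into each
# other; their joint trajectory is not a sum of separable "effects".
T, dt = 5_000, 0.01
a = np.empty(T); b = np.empty(T)
a[0], b[0] = 0.1, 0.2
for t in range(T - 1):
    a[t + 1] = a[t] + dt * (a[t] * (1 - a[t]) - 0.8 * a[t] * b[t])
    b[t + 1] = b[t] + dt * (-0.5 * b[t] + 0.9 * a[t] * b[t])
print(f"Coupled system settles near a = {a[-1]:.2f}, b = {b[-1]:.2f}")
```

In the first case, dropping the product term leaves the main effects intact; in the second, removing the coupling terms doesn't just change parameter estimates – it destroys the phenomenon itself.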

Because every post needs an image, here’s Julia Rohrer‘s (2017) Theory of Regulation of Empty Theories (TROETE)