From Droplets to Fine Particles: How the pandemic changed our understanding of infection spread

This is an English translation of a Finnish post outlining some learnings from our government-funded consortium of five universities, which studies what we can learn from the COVID-19 pandemic response. The interdisciplinary collaboration has significantly influenced how I think about pathogen spread, and I found the post quite enlightening. I hence wanted to share it with a larger audience, because auto-translate tools still don’t work well for Finnish. The original authors are Lotta-Maria Oksanen, Tuomas Aivelo, Viktor Zöldi, and Tarja Sironen. I am grateful to the first author for providing feedback on the translation, but all mistakes in the final text are mine – please report any you see.


Throughout time, people have pondered how infections spread and how they should be combated. The pandemic changed how we think respiratory pathogens spread.

Exposure occurring through the air is no longer seen as rare or exceptional. It is now viewed as a daily transmission route for respiratory infections in normal social interaction.

Research on the topic is being conducted in the Government-funded Lessons from the Pandemic Crisis (PAKO) project, which investigates COVID-era actions across different sectors to prepare for future crises. The University of Helsinki [medical faculty]’s sub-study forms a comprehensive picture of the understanding that prevailed in Finland. The study focuses on the prevailing view of transmission routes, how that view changed, and the protective guidelines that were introduced. It also produces an analysis of the variants observed in Finland during the pandemic, their characteristics, and their speed of arrival.

The Historical View: Droplets at the Center of Infection

Before the pandemic, it was thought that respiratory infections spread via large droplets generated when a symptomatic person coughs or sneezes. These droplets cause infection when they hit the recipient’s mucous membranes, i.e., the eyes, nose, or mouth.

Over a hundred years ago, medicine was living through a time of change. It began to be understood that diseases are caused by pathogens. There was a desire to get rid of old beliefs, such as miasma – polluted air that originated from rotting sources and was believed to cause diseases. Dried saliva mixed with dust was also believed at that time to have great significance in the spread of diseases. Indeed, signs in public spaces read: “Do not spit on the floor.” Studies were conducted in which test subjects, for example, swirled a solution containing bacteria in their mouths and read aloud. Dishes were placed on the floor in front of them, revealing how far the bacteria spread. Bacteria grew especially on the dishes that were close by. These studies established the significance of fresh secretions, and attention focused on large droplets.

The role of air as a carrier of infections lost apparent credibility even further when a leading public health scientist, Charles Chapin (1856–1941), criticised airborne transmission in his key textbook. He proposed that contact infection is the most central and obvious transmission route. The idea of contact and droplets as the dominant modes of transmission lived on strongly for decades, and thus hand washing and droplets became familiar to everyone in pandemic guidelines as well.

The New View: Respiratory Infections Spread Via the Air

The problem with droplet thinking was that smaller particles remain in the air and do not settle on the dish. The old technology simply was not sufficient to observe them. Especially with viruses, technical challenges remain even today. Airborne transmission has, however, proven to be central in many transmission events. Advanced measurement methods have shown that nearly all particles we produce are very small and that we produce infectious particles also when breathing. In animal experiments, other transmission routes have been ruled out, demonstrating that infection must have occurred via the air – for example, a seminal research setup that demonstrated the airborne nature of tuberculosis has also been replicated with SARS-CoV-2.

Human experiments have also yielded interesting results. In a study examining rhinovirus, laboratory-infected “donors” and susceptible volunteers played cards for 12 hours, sitting at a distance of 1.5–2 m from each other. Some of the participants had their hands restrained so that it was impossible for them to touch their faces, making infection possible only through the air. Infections were equally common in the restrained and unrestrained groups. In a separate arm of the experiment, where the cards were thoroughly soiled with the donors’ nasal secretions but the donors did not participate in the game, no infections occurred at all.

Correspondingly, in many pandemic-era super-spreader events – such as various concert and choir events, many of which observed safety distances – a significant portion of participants still fell ill. Such transmission events are only possible through airborne transmission. While close proximity was previously taken as proof of contact infection, it is now understood that exposure to small particles is also highest at close range.

Infection Through the Air: Threat or Opportunity?

Traditionally, air has been considered difficult to control. Modern technologies, such as ventilation, air purification, and respirator-grade masks, however, make it possible to target actions precisely at smaller particles and thus reduce infection risk. The UN held its first Conference on Healthy Indoor Air on 23 September 2025, and safe indoor air was defined as a key objective. Indeed, new international ventilation standards that take infection risks into account have been proposed for new construction and renovation projects.

Taking airborne infection into account also opens an excellent opportunity to improve patient and occupational safety in social and health care. With awareness, this infection risk can be monitored and combated. Until now, protective guidelines have mainly concerned droplets, leaving a central part of the exposure unaddressed. Recognising the airborne nature of transmission enables the use of personal protective equipment that actually protects against it. An example would be recognising that screens or curtains have minimal effect on respiratory infection risk in a shared room without other additional measures – such as air purifiers – especially during prolonged exposure.

The Role of Information and the Challenge of Distortion

The pandemic has also made visible another phenomenon: the distortion of information. In the era of social media and rapid communication, research information and carefully collected evidence compete for attention with false information, partisan interpretation, and outright disinformation.

The phenomenon is global, but it escalated, for example, in the United States. Sharp dividing lines regarding e.g. the use of masks and vaccines both emerged spontaneously and were created purposefully during the pandemic. Information became a tool for political and ideological manoeuvring – instead of a shared foundation for decision making.

Information alone is not enough. Structures and processes are also needed that enable the flow of information and support trust in experts, authorities, and research, as well as dialogue between these actors. Without this trust, even the best possible research information does not translate into action.

The pandemic showed how one can change course in the middle of a crisis if basic trust exists: Finns, for example, quickly adopted masking or kept their distance when the authorities recommended it. Conversely, new ideas do not end up in practice if the opportunity to bring new information to the decision-making table is lacking. Discussion must continue and trust must be built between crises; then the structures will be ready.

Looking Forward

From the history of medicine, we know that it takes a long time before new information changes practices. The necessity of hand washing between handling the deceased and treating living patients in surgery or childbirth was once difficult to accept. Nowadays, medical breakthroughs are considerably faster, but questioning old ways still often provokes negative reactions at first. Over time, through education and dialogue, new information begins to gain a foothold.

Change is an opportunity. With advanced knowledge, we can bring different fields to the same table to solve the problem. For example, we can build measures that reduce infection risk in indoor air into building design from the start, and simultaneously reduce other exposure to particles. In this change, digitalisation and sensor technology offer new possibilities. Smart buildings can adjust ventilation energy-efficiently using real-time data. Collected data, in turn, helps visualise risky spaces and target corrective measures exactly where they are needed most – making healthy indoor air the new norm.

This means that in future epidemics, we will have more means at our disposal than just prohibitions and restrictions. What if in the next pandemic we didn’t have to greet the elderly from behind a window, but could hug them while wearing a respirator-grade mask? Often, it is precisely large structural hygiene changes, such as the cold chain or water purification, that bring significant public health benefits in the long run – is clean indoor air the next great change that also increases everyday health security? 

It is time to ask: are we ready to invest in behaviours, technology, and structures that make our workplaces, hospitals, schools, and homes healthier?


Original authors:

Lotta-Maria Oksanen, Postdoctoral Researcher, University of Helsinki

Tuomas Aivelo, Assistant Professor, Leiden University, Academy Research Fellow, University of Helsinki

Viktor Zöldi, Postdoctoral Researcher, University of Helsinki

Tarja Sironen, Professor, University of Helsinki

Evidence is in the Past, Risk is in the Future: On Tail Events and Foresight

Context: This post outlines a manuscript in preparation and exhibits some of its visualisations, partly also presented at the European Public Health Conference (November 2025). If a blog format isn’t your poison, you can also see this video or this one-pager (conference poster).


It’s April 2025. Red Eléctrica, the electricity grid provider for the Iberian Peninsula, declares: “There exists no risk of a blackout. Red Eléctrica guarantees supply.”

Twenty days later, a massive blackout hits Portugal, Spain, and parts of France.

What the hell happened?

To understand this, we need to talk about ladders.

The Ladder Thought Experiment

Let’s take an example outlined in the wonderful article An Introduction to Complex Systems Science and Its Applications: Imagine 100 ladders leaning against a wall. Say each has a 1/10 probability of falling. If the ladders are independent, the probability that two given ladders fall together is 1/100. Three falling together: 1/1000. The probability of all 100 falling simultaneously becomes astronomically small – negligible, essentially zero.

Now tie all the ladders together with rope. You’ve made any individual ladder safer (less likely to fall on its own), but you’ve created a non-negligible chance that all might fall together.

This is cascade risk in interconnected systems.
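To make the ladder intuition concrete, here is a minimal simulation sketch of my own (the numbers are illustrative, not from the article): untied ladders each fall independently with probability 0.1, while the tied bundle stands or falls as one unit with an assumed probability of 0.05.

    import random

    N_LADDERS = 100
    TRIALS = 100_000
    P_INDEPENDENT = 0.10   # each untied ladder falls on its own
    P_BUNDLE = 0.05        # the whole tied bundle falls together (illustrative assumption)

    all_fell_independent = 0
    all_fell_tied = 0

    for _ in range(TRIALS):
        # Independent ladders: every ladder falls (or not) on its own.
        if all(random.random() < P_INDEPENDENT for _ in range(N_LADDERS)):
            all_fell_independent += 1
        # Tied ladders: one draw decides the fate of the whole bundle.
        if random.random() < P_BUNDLE:
            all_fell_tied += 1

    print("P(all 100 fall), independent:", all_fell_independent / TRIALS)  # ~0 (1e-100 in theory)
    print("P(all 100 fall), tied:       ", all_fell_tied / TRIALS)         # ~0.05

Each individual tied ladder is "safer" than before, yet the catastrophic all-fall-together outcome has gone from essentially impossible to roughly one trial in twenty.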

Two Types of Risk

From a society’s perspective, we can understand risks as falling into one of two categories:

Modular risks (thin-tailed) don’t endanger whole societies or trigger cascades. A traffic accident in Helsinki won’t affect Madrid or even Stockholm. These risks have many typical precedents, slowly changing trends, and are relatively easy to imagine. We can use evidence-based risk management because we have large samples of past events to learn from.

If something is present daily but hasn’t produced an extreme for 50 years, it probably never will.

Cascade risks (fat-tailed) pose existential threats through domino effects. Pandemics, wars, and climate change fall here. They’re abstract due to rarity, with few typical precedents – events tend to be either small or catastrophic, with little in between.

If something hasn’t happened for 50 years in this domain, we might have just been lucky, and it might still hit us with astronomical force.

Consider these examples:

  • Workplace injuries
  • Street violence
  • Non-communicable diseases
  • Nuclear plant accidents
  • Novel pathogens
  • War

Before reading on, give it a think. Which are modular? Which are cascade risks?

I’d say most workplace injuries and street violence are modular (unless caused by organised crime or systemic factors like pandemics). Non-communicable diseases are also modular, although they can be caused by systemic issues. Mega-trends perhaps, but you wouldn’t expect a year in which they suddenly doubled or increased ten-fold.

Novel pathogens and wars are cascade risks that spread across borders and trigger secondary effects. These are the ladders tied together with a rope. Nuclear plants kind of depend; nowadays people try to build many modular cores instead of one huge reactor, so that the failure of one doesn’t trigger the failure of others. But as the mathematician Raphael Douady put it: “Diversifying your eggs into different baskets doesn’t help if all the baskets are on board the Titanic” (see the Fukushima disaster).

Is That a Heavy Tail, or Are You Just Happy to See Me?

Panels A) and B) below show pandemic data (data source, image source, alt text for details) – with casualties rescaled to today’s population. The Black Death around the year 1300 caused more than 2.5 billion deaths in contemporary terms. Histograms on the right show the relative number of different-sized events. The distribution shows tons of small pandemics and a few devastating extremes, with almost nothing in between (panel A, vertical scale in billions). We see a similar shape even when we get rid of the two extreme events (panel B, vertical scale in millions).

Panel A: “Paretian” dynamics of a systemic risk, illustrated by casualties from pandemics with over 1000 deaths, rescaled to contemporary population, with years indicating the beginning of the pandemic (data from Cirillo & Taleb, 2020; COVID-19 deaths are presented until June 2024 according to the model by The Economist & Solstad, S., 2021). Panel B: Same as panel A, zooming into the events with fewer than 1B deaths. This illustrates how the variance remains vast, even when the scale of events is much smaller. Panel C: Casualties from traffic accidents in Finland, illustrating the dynamics of a “thin-tailed”, localised risk. In this case, it would not be reasonable to expect a sudden increase to 10 000 casualties, whereas in the prior examples such jumps are an integral part of the occurrence dynamic.

Compare this to Panel C), Finnish traffic fatalities. Deaths cluster together predictably. You wouldn’t expect 10 000 road deaths in a single year – even 2 000 would be shocking.

Moving from observations to theory: The figure below compares mathematical “heavy-tailed” distributions to “thin-tailed” distributions. Heavy-tailed distributions depict:

  • Many more super-small events than thin-tailed distributions: look at the very left side of the left panel below, where the red line is above the blue one
  • Fewer mid-size events: look at the middle portion of the left panel below, where the blue line is higher than the red one
  • Extreme events of a huge magnitude that remain plausible: look at the inset, which zooms into the tail (in thin-tailed distributions, mega-extremes are practically impossible, like the ladders without a rope)

In the right panel of the image above, thin-tailed distributions (like traffic deaths) should drop off suddenly when plotted on a logarithmic scale. Fat-tailed distributions (like pandemics) should trace a straight line, meaning very large events remain statistically plausible.

Or, at least that’s the theory, based on mathematical abstraction. Let’s see what the real data shows.

And here we go: The tail of actual pandemics looks like a straight line, while the tail of traffic deaths curves down like an eagle’s beak. Pretty neat, huh?
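If you want to draw this kind of tail plot from your own data, here is a minimal sketch (my addition, not the manuscript’s code). It samples a fat-tailed Pareto and a thin-tailed exponential distribution and plots their empirical survival curves on logarithmic axes; the fat tail shows up as a roughly straight line, the thin tail curves down like the eagle’s beak mentioned above.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(42)
    pareto = rng.pareto(a=1.2, size=10_000) + 1            # fat-tailed sample
    exponential = rng.exponential(scale=1.0, size=10_000)  # thin-tailed sample

    def survival(sample):
        # Empirical P(X >= x): sort values, count how many lie at or above each one.
        x = np.sort(sample)
        p = np.arange(len(x), 0, -1) / len(x)   # stays positive, so log axes behave
        return x, p

    for sample, label in [(pareto, "fat tail (Pareto)"), (exponential, "thin tail (exponential)")]:
        x, p = survival(sample)
        plt.loglog(x, p, label=label)

    plt.xlabel("event size x")
    plt.ylabel("P(X >= x)")
    plt.legend()
    plt.show()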

Evidence Lives in the Past, Risk Lives in the Future

In the interests of time, I’m going to skip a visualisation you see in the video (26:45). The main point is that for thin-tailed modular risks, we can extrapolate from past data. For heavy-tailed cascade risks, we must form plausible hypotheses from current, weak, and incomplete signals.

This is the difference between induction (everything that happened before has these features, so future events will too) and abduction (reasoning to the most sensible course of action given limited information). All data is data from the past, and if the past isn’t a good indicator of the future, we need different ways of acting:

The mantra of resilience is early detection, fast recovery, rapid exploitation.
Dave Snowden

We need to detect weak signals early. The longer we wait, the bigger the destruction.

A Practical (piece of a) Solution: Participatory Community Surveillance Networks

In our research group, we’re developing networks of trusted survey respondents who participate regularly (see article), akin to the idea of “citizen sensor networks” also presented in the EU field guide Managing complexity (and chaos) in times of crisis. With such a network in place, during calm times, you can collect experiences and feedback on policy decisions. When crisis hits, you can pivot to gain rich real-time data from the field.

Why? Because nobody can see everything, and we see what we expect to see. If you don’t believe me, see if you can solve this mystery.

Given enough eyeballs, all bugs are shallow
– Eric S. Raymond

The process:

  1. Set up a network of trusted responders
  2. Collect experiences continuously
  3. Pivot when a crisis takes place to gather data on how the disruption shows up in lived experience
  4. Avoid the trap of post-emergency mythmaking, and do a “lessons learnt” analysis with data collected during the disruption

Example: Inhabitant Developer Network

We developed an idea in a Finnish town, where new inhabitants would join the network as part of a “welcome to town” package. We could ask:

  • “What’s better here than where you lived before?” → relay to marketing
  • “What’s worse here than where you lived before?” → relay to development

When crisis occurs, we could pivot, asking about how the disruption shows up in people’s lived experience:

  • “What happened?”
  • “Give your description a title”
  • “How did this affect things important to you?”
  • “How well did you do during and after?” (1-10 scale)
  • “How prepared were you?” (1-10 scale)
  • … etc.

Respondents self-index these experiential snippets with quantitative indicators, giving us both qualitative richness and quantitative patterns. We can then e.g. examine situations where people were well-prepared but didn’t do well, or did well despite being unprepared – and filter e.g. by tags like rescue service involvement. This gives us rich data from the field to inform local decision makers.
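As a purely illustrative sketch of what such filtering could look like (the field names “prepared”, “coped”, and “tags” are hypothetical, not our actual survey schema), one might query the self-indexed snippets like this:

    import pandas as pd

    # Hypothetical experiential snippets, self-indexed by respondents.
    snippets = pd.DataFrame([
        {"title": "Blackout on our street", "prepared": 8, "coped": 3, "tags": ["power", "rescue services"]},
        {"title": "Water outage",           "prepared": 2, "coped": 9, "tags": ["water", "neighbours"]},
        {"title": "Storm damage",           "prepared": 7, "coped": 8, "tags": ["power"]},
    ])

    # Well-prepared respondents who nevertheless did poorly: where did preparedness fail?
    prepared_but_struggled = snippets[(snippets.prepared >= 7) & (snippets.coped <= 4)]

    # Filter further by tag, e.g. cases involving rescue services.
    with_rescue = prepared_but_struggled[
        prepared_but_struggled.tags.apply(lambda t: "rescue services" in t)
    ]
    print(with_rescue[["title", "prepared", "coped"]])

The quantitative self-indexing narrows the search; the attached narratives then explain what actually happened in those cases.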

From Experiences to Action

The beauty of collecting people’s lived experiences is that they can later be used for citizen or whole-of-workforce engagement workshops. You can ask Dave Snowden’s iconic question: “How could we get more experiences like this, and fewer like those?”

This question holds an outcome “goal” lightly, allowing journeys to start with direction rather than rigid destination. It is understandable regardless of education level, and gives communities agency in developing solutions. This approach enables:

Anticipation: Use tailedness analysis as a diagnostic; use networks to detect weak signals before they explode.

Formulation: Design adaptive interventions with the community – interventions that adapt to change instead of being fragile to the first unexpected shock.

Adoption: Build agency, legitimacy and buy-in through participatory processes. People support what they own or help create.

Implementation & Evaluation: Monitor in real-time, learn continuously, act accordingly. No more waiting six months for a report, or getting a quantitative result (“life satisfaction fell from 3.9 to 3.2”) only to need another research project to learn why: You can just look at the qualitative data to understand context.

Why This Matters

When Red Eléctrica declared “there exists no risk,” they were thinking in a thin-tailed world where past data predicts future outcomes. But interconnected systems – like them tied-together ladders – create heavy-tailed risks. For cascade risks, precaution matters more than proof. If you face an existential risk and fail, you won’t be there to try again.

As Nassim Nicholas Taleb puts it: Risk is acceptable, ruin is not (more in this post). And no individual human is capable of understanding our modern, interconnected environments alone.

Bring forth the eyeballs.


Related Posts

From Fruit Salad to Baked Bread: Understanding Complex Systems for Behaviour Change – Why treating behaviour change like assembling fruit salad instead of baking bread leads well-meaning efforts to stumble.

From a False Sense of Safety to Resilience Under Uncertainty – On disaster myths, attractor landscapes, and why intervening on people’s feelings instead of their response capacity is dangerous.

“Mistä tässä tilanteessa on kyse?”: Henkisestä kriisinkestävyydestä yhteisölliseen kriisitoimijuuteen (In Finnish) – From individual resilience to collective crisis agency: reflections from Finland’s national security event.

Riskinhallinta epävarmuuden aikoina: Väestön osallistaminen varautumis- ja ennakointimuotoiluun (In Finnish) – Risk management under uncertainty through participatory anticipatory design.


For deeper exploration of these concepts, I recommend Nassim Nicholas Taleb’s books: Fooled by Randomness, The Black Swan, and Antifragile, as well as the aforementioned EU field guide Managing complexity (and chaos) in times of crisis.

From Fruit Salad to Baked Bread: Understanding Complex Systems for Behaviour Change

New perspectives from my doctoral research, “Complex Systems and Behaviour Change: Bridging Far Away Lands.”

On May 16, 2025, I finally defended my doctoral dissertation – a side project in the making for the last 9 years or so. I had been pretty confident this would happen two years earlier, when I submitted a rogue version of the dissertation summary for pre-examination. It was titled “Understanding and Shaping Complex Social Systems: Lessons from an emerging paradigm to thrive in an uncertain world”, which is also the name of a course I later started teaching at the New England Complex Systems Institute. The preprint was quickly downloaded almost 1000 times, and people reached out to thank me for the clear exposition. But that version turned out to be a bit too rogue for one of the pre-examiners, and I rewrote the whole thing in 2024 – to be much more technical, and stylistically more conventional.

The defence was a success and here we are, the dissertation finally accepted by the academic establishment. The published summary can be downloaded here. The implicit promise is that after reading the work, you’ll be able to understand this cartoon, which you might recognise as related to the cover image:

As is traditional in the Finnish system, I began the occasion with a Lectio Praecursoria – an introductory speech. This talk introduced the groundwork for my research, exploring the often-overlooked connections between two seemingly distant scientific fields: complex systems and behaviour change.

This blog post adapts that initial speech, inviting you to explore these ideas with me.

The Core Idea: Why We Need to Rethink Behaviour Change

The research I present explores the intersection of two scientific domains that might seem, at first glance, quite distant. But what I want to do is share why building bridges between complex systems and behaviour change is not merely an academic curiosity, but, as I argue in this work, a vital step towards deepening our understanding of human action in our increasingly interconnected world, and ultimately, towards building a more robust basic science of behaviour change. [Side note: you can find my perspective on what behaviour change is NOT here, and connections to risk management here and here.]

The “Fruit Salad vs. Bread” Analogy: Understanding Different Types of Systems

To begin, let us talk about the difference between making fruit salad and baking bread. I am well aware of how ludicrous this sounds, but I believe that confusing these two processes consistently causes well-meaning efforts, particularly those aimed at changing behaviour, to stumble. So please bear with me.

Imagine making fruit salad for a bunch of children. You gather fruits you enjoy – perhaps pineapple, peach, and cherries. You’re fairly confident that if you like them separately, you’ll like them together. You chop them, combine them, and serve them. Now, if a child finds that cherries look too strange to be edible – and leaves them behind – it’s no catastrophe. They can still consume the pineapple and peach, which every reasonable person enjoys. The uneaten cherries can be consumed by someone else later. In fruit salad, we can combine ingredients, analyse the parts somewhat independently, and predict the outcome of the whole with reasonable certainty. With many ingredients, fruit salad can become complicated – a word whose origins (as pointed out by Dave Snowden) can be taken to mean “folded.” And what has been folded, can often also be unproblematically unfolded.

Now, think about baking bread. You combine yeast, flour, water, and salt. You’ve heard that olive oil is healthy, so you add a bit of that in. You mix, knead, let it rise, bake. The final loaf emerges. But what if the children dislike the taste of olive? You cannot simply remove the oil. Or what if you put in too much salt? The ingredients have interacted, transformed. The bread is an emergent product, something entirely new, fundamentally different from the mere sum of its parts. The whole portion intended for the children, not just the offending component, might have to be passed to an omnivorous family member. This process is better described not as complicated, but as complex, a word with roots that can be interpreted as “entangled” or “interwoven.”

Unlike with folding, what is interwoven cannot easily be disentangled without fundamentally changing its nature.

The Two Key Disciplines: Behaviour Change and Complex Systems

With this analogy in mind, let’s turn to the disciplines central to my research.

Behaviour change science is an inherently interdisciplinary field drawing from psychology, sociology, public health, and more. It strives to understand the web of factors – personal, social, environmental – that shapes our actions. Its goal is to help foster changes needed to tackle major societal challenges: from noncommunicable diseases (entailing, for example, physical activity behaviours) and sustainable work-life (entailing, for example, job crafting behaviours) to climate action and pandemic preparedness (entailing risk management behaviours). Human action is a core thread in all these pressing issues.

The other discipline central to this work is complex systems science. It originally grew out of physics, chemistry, and biology, but its principles increasingly reach into the psychological and social world. It studies systems composed of many interacting parts, where these interactions often dominate the system’s overall behaviour. A key insight is that the relationships between components can be more critical than the components themselves in determining the system’s properties. Think of water: ice, liquid, and steam involve the same H₂O molecules, but their differing interconnectivity leads to vastly different behaviours. Steam can make a sauna feel warm; ice can make swimming difficult afterwards. But the components remain the same.

Are We Using Fruit-Salad Tools for Bread-Like Problems?

When it comes to systems, some are more component-dominant, like fruit salad, while others are more interaction-dominant, like bread. My research argues that many phenomena central to behaviour change science – like motivation dynamics, the spread of social norms, or how people respond to interventions – are far more like bread than fruit salad. They occur as parts of complex, interaction-dominant systems.

The main contributions of my dissertation relate to the development of basic science. Early theories in behaviour change were driven by practitioners aiming to understand issues they faced. And practitioners are often very good at working with complexity, although their terminology to describe the phenomena at play might sometimes be limited. But still, many of the quantitative tools that were relied upon in developing these theories implicitly treated behaviour change phenomena like fruit salad. For instance, while linear regression analysis can incorporate simple interaction terms to account for some forms of interdependence, its main usage is to assign values to variables such as norms, intentions, and attitudes, assuming they are independent of each other – implying separability. Furthermore, there’s a common, often implicit, assumption that findings derived from group-level data directly translate to understanding how individuals change over time.

So, the central question becomes: If behaviour change is often entangled and emergent like bread dough, should our primary tools be those best suited for slicing separable fruit?

Beyond Linearity: Embracing the Complexity of Change

I argue that this potential mismatch – analysing bread with fruit salad tools – can hinder our understanding of behaviour change as a complex evolving process. Complex systems science suggests that variability, which might look like messiness or error from a purely linear perspective, is often not just noise; it can be the inherent signature of the dynamic system itself.

A key characteristic of these systems, which I investigated conceptually and empirically, is non-linearity. Imagine pushing a boulder near a hilltop:

You push a little, the boulder moves a little.

You push a little, the boulder moves a little.

You push a little… and the boulder tumbles dramatically into a new valley.

Perhaps now scientists rush to the scene to investigate what was distinct in your technique for the last push. And they will inevitably find results. But the magic was not in the push, but in the relationship between the push, the boulder’s position, and the landscape. This kind of abrupt, disproportionate change is known as a critical transition.

Mapping Change: The Power of Attractor Landscapes

Complex systems science offers a powerful conceptual tool to map transition dynamics: the attractor landscape. Imagine a pool table with a single billiard ball. Each position on the table represents a possible state for the system, and the current status is represented by the location of the billiard ball. Now imagine the surface isn’t flat, but contains hills and valleys. The valleys represent stable patterns – the attractors, collections of similar states that “trap” the ball. It’s easy for the ball to settle into a valley; it requires more effort or perturbation to push it out. The ridges between valleys are called tipping points.

A slice of an attractor landscape showing two major ways systems can shift abruptly (from an article included in the dissertation)

Think of smoking, where dispositions in the North Atlantic world shifted gradually if at all for many decades. Imagine this as a landscape: one valley where smoking is socially acceptable, and another where it is frowned upon. There was little change for a long time, until a tipping point was reached, leading to widespread disapproval and significant policy changes. Pushing the system over the ridge requires effort or a significant nudge, but once crossed, it naturally settles into a new attractor valley, a new stable pattern. However, this landscape isn’t necessarily static; it can transform and be reshaped. Think of this like the hills and valleys of the pool table rising and falling over time.

Notice how different this landscape representation is from conventional flowcharts suggesting neat, linear causes and effects. It shifts focus towards understanding the system’s dispositions, its underlying tendencies and stabilities. It encourages a focus on nurturing the conditions, tending the substrate, working the soil, from which desired behaviours – in deeper, more stable valleys – can emerge, and sustain themselves more naturally.
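For the mathematically inclined, here is a toy sketch of the landscape idea (my illustration, not a model from the dissertation): a state x rolls downhill on the double-well potential V(x) = x^4 - 2x^2 + c*x. Slowly increasing the tilt c barely moves the state for a long while, until its valley disappears and it jumps abruptly into the other valley, a critical transition.

    import numpy as np

    def dV_dx(x, c):
        # Gradient of the tilted double-well potential V(x) = x**4 - 2*x**2 + c*x.
        return 4 * x**3 - 4 * x + c

    x = 1.0      # start in the right-hand valley
    dt = 0.01    # step size for simple gradient descent ("the ball rolls downhill")
    for c in np.linspace(0.0, 2.0, 21):   # slowly tilt the landscape
        for _ in range(500):               # let the state settle at each tilt level
            x -= dt * dV_dx(x, c)
        print(f"tilt c = {c:4.2f}  ->  state x = {x:6.3f}")
    # The state creeps from +1.0 down toward ~0.6, then around c ≈ 1.5 it jumps
    # abruptly to about -1.2: the valley it sat in has vanished from the landscape.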

Evidence in Action: From Work Motivation to Public Health

In my research, I used analytical techniques adapted from dynamical systems theory to investigate empirical evidence for such attractor states and shifts within fine-grained, moment-to-moment work motivation data. I also explored its applicability to societal-level data on COVID-19 protective behaviours. This work suggests the landscape metaphor is not just a useful theoretical vehicle; these patterns can be observed and studied in real-world behaviour change contexts.

In addition to non-linearity, some of the patterns of complex systems I examined in this research were “non-stationarity” and “non-ergodicity”. In my work, I clarify these terms in the behaviour change context and demonstrate how to study them empirically in time series data, with methods such as cumulative recurrence network analysis.
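To give a flavour of what such methods involve, here is a generic sketch of the recurrence idea (not the exact pipeline used in the dissertation): a recurrence matrix marks which time points of a series revisit similar states, and network measures can then be computed on that matrix as it accumulates over time.

    import numpy as np

    def recurrence_matrix(series, threshold):
        # Binary matrix R[i, j] = 1 when the states at times i and j are within `threshold` of each other.
        x = np.asarray(series, dtype=float)
        distances = np.abs(x[:, None] - x[None, :])   # pairwise distances between time points
        return (distances <= threshold).astype(int)

    # Toy motivation-like time series: a stable phase, then a shift to a new level.
    series = np.concatenate([np.random.normal(2.0, 0.2, 50), np.random.normal(5.0, 0.2, 50)])
    R = recurrence_matrix(series, threshold=0.5)

    # Block structure in R (dense within each phase, empty between phases) hints at two attractor-like regimes.
    print(R.shape, "recurrence rate:", R.mean().round(3))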

The Key Takeaway: Complexity as a Feature, Not a Bug

In essence, the core message of this work is that the bread-like complexity of human behaviour change isn’t just noise or a problem to be simplified away. It’s a fundamental characteristic we must embrace and understand scientifically if we want our science to accurately reflect the phenomena it studies. Complex systems science provides concepts and tools that acknowledge interdependence, emergence, and context-sensitivity of change phenomena. And we aim not to eliminate this complexity, but to enlist it.

Looking Ahead: Building a Bridge to a More Robust Science of Human Action

By building bridges between behaviour change science and complex systems science, the research presented here argues that a complex systems perspective can help us build a more robust and realistic science of human action – one that recognises behaviour not just as a collection of separable ingredients like a fruit salad, but as an emergent, interwoven process like baking bread.

This, I believe, is crucial. It is crucial for developing a science better equipped to understand the intricate dynamics of behaviour change. It is crucial for us to seize the opportunities that arise when we learn to converse with complex systems, instead of just trying to push them around. And it is crucial for navigating the critical policy challenges of our time, which invariably involve understanding and enabling human action.


What are your thoughts? Leave a comment or reach out. My current research interests mainly revolve around risk management (see the paper described here) – particularly, understanding and shaping communities’ capacities to respond, recover, and adapt from shocks. I’m a 72hours.fi trainer, and would be happy to collaborate on e.g. projects to make the EU’s new preparedness strategy a feasible reality.

Picture of me doing a sound check before the doctoral defense. It was held in Zoom as I was in Germany, the chair was in Finland, and the opponent in the U.S. 😅

Affordance Mapping to Manage Complex Systems: Planning a Children’s Party

I’ve recently followed with interest Dave Snowden’s development of “Estuarine Mapping”, also known as “Affordance Mapping”. The process is based on a complex systems framework to design and de-risk change initiatives (see the link at the end of this post). After taking part in training sessions and facilitating some mapping exercises with groups, I found myself in want of a metaphor that didn’t require an understanding of coastal geography.

Enter the world of children’s parties. Snowden has a famous anecdote about organising a party for kids, which brilliantly illustrates the folly of applying traditional management techniques to complex systems. Inspired by this tale, I’ve reimagined it here as a simplified depiction of the Affordance Mapping process. So here we go.

Picture yourself tasked with organising a birthday bash for a group of energetic seven-year-olds. But instead of reaching for a conventional party-planning checklist, you decide to employ the Affordance Mapping process. What would you do?

First, you’d start by surveying the party landscape. You’d identify all the elements that could influence the party – from the near-immovable dining table to the ever-shifting moods of the kids. We’ll call these our party elements.

Next, you’d create a map of these elements. On one axis, you’d have how much energy it takes to change each element – moving the dining table would be high energy, while changing the music playlist would be low. On the other axis, you’d have how long it takes to make these changes – getting pizza delivered, or setting up a bouncy castle might take an hour, while changing a game rule could be instant.

Now, you’d draw a line in the top right corner. Everything above this line is beyond your control – things you absolutely can’t change, like the fact that Tommy’s allergic to peanuts. You’d also draw a second line for things that are outside your control but amenable to change in collaboration with other parents, like how the party should end by 6 PM. You’d also mark a zone in the bottom left corner for elements that change too easily and might need stabilising, like the kids’ attention spans or the volume level.

The result might look something like this:

The exciting part is the middle area. Here’s where you can actually make changes to improve the party; the things you can manage. But you can also try to make some elements more manageable via (de)stabilisation efforts, or remove some altogether.
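As a minimal sketch of how you might jot the mapping down digitally (the element names, energy/time scores, and zone thresholds are all made-up illustrations, not part of Snowden’s method description):

    # Each party element gets rough 1-10 scores: energy needed to change it, and time the change takes.
    elements = {
        "dining table":           (7, 5),
        "music playlist":         (2, 1),
        "bouncy castle":          (5, 6),
        "kids' attention spans":  (1, 1),
        "party end time (6 PM)":  (8, 8),
        "Tommy's peanut allergy": (10, 10),
    }

    def zone(energy, time):
        # Crude zoning against the illustrative thresholds drawn on the map.
        if energy >= 10 or time >= 10:
            return "beyond your control: treat as fixed"
        if energy >= 8 or time >= 8:
            return "outside your control alone: negotiate with other parents"
        if energy <= 1 and time <= 1:
            return "changes too easily: consider stabilising"
        return "manageable: candidate for small experiments"

    for name, (energy, time) in elements.items():
        print(f"{name:25s} energy={energy:2d} time={time:2d} -> {zone(energy, time)}")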

For example, you might decide to:

  1. Keep some elements as they are (the classic musical chairs game)
  2. Remove others that aren’t fun (the complicated crafts project your spouse found on Pinterest)
  3. Modify some to make them more enjoyable (have kids organise themselves into a line arranged by height when moving outdoors after the cake is finished)

You’d come up with small experiments to test these ideas. Maybe you’ll try introducing a new party game like “freeze dance” to alleviate boredom in waiting for transitions from one activity to the next, or rearranging the gift-opening area. You’d also think about how changing one element might affect others – will having a water balloon toss right before snack time lead to damp clothes?

Finally, you’d plan how to amplify emergent positive side-effects, and mitigate negative ones. You’ll also redraw your party map before next year’s party. This way, you’re always working towards a more fun and dynamic party, understanding that some elements will always be shifting (like the kids’ favorite songs) while others stay constant (like the need for cake).

Technical note. The items on the map, in the lingo of the complex systems philosopher Alicia Juarrero, represent “constraints”: things that modulate a system’s behaviour. In complex systems, these are intertwined in such deep ways that their effects are seldom amenable to an analysis of linear causality. To change a system’s macro-level state, you execute multiple parallel micro-interventions that aim to affect these constraints. For a recent open access book chapter outlining the rationale, see here: As through a glass darkly: a complex systems approach to futures.

New paper: From a False Sense of Safety to Resilience Under Uncertainty

Understanding how people act in crises and how to manage risk is crucial for decision-makers in health, social, and security policy. In a new paper published in the journal Frontiers in Psychology, we outline ways to navigate uncertainty and prepare for effective crisis responses.

The paper is part of a special issue called From Safety to Sense of Safety. The title is a play on this topic, which superficially interpreted can lead to a dangerous false impression: that we ought to intervene on people’s feelings instead of the substrate from which they emerge.

Nota bene: As of June 2024, this topic is part of an online course for the New England Complex Systems Institute, and I have some discount codes for friends of this blog. Do reach out!

The Pitfall of a False Sense of Safety

In the paper we first of all argue that we should understand so-called disaster myths, a prominent one being the myth of mass panic. This refers to the idea that people tend to lose control and go crazy during crises when they worry or fear too much, which implies we need to intervene on risk perceptions. But in fact, no matter what disaster movies or news reports show you, actual panic situations are rare. During crises, people tend to act prosocially. Hence, decision-makers should shift their focus from mitigating fear and worry – potentially leading to a false sense of safety – towards empowering communities to autonomously launch effective responses. This approach fosters resilience rather than complacency.

Decision Making Under Uncertainty: Attractor Landscapes

Secondly, we present some basic ideas of decision making under uncertainty, via the concept of attractor landscapes. In hindsight, I wish we had talked about stability landscapes, but that ship has already sailed. The idea can be understood like this: say your society is the red ball, and each tile is a state it can be in (e.g. “revolt”, “thriving”, “peace”, etc.). The society moves through a path of states.

These states are not equally probable; some are more “sticky” and harder to escape, like valleys in a landscape. These collections of states are called attractors. The area between two attractors is a tipping point (or here, kind of a “tipping ridge”).

I wholeheartedly encourage you to spend five minutes on Nicky Case’s interactive introduction to attractor landscapes here. It’s truly enlightening. The main thing to know about tipping points: as you approach one, nothing happens for a long time… until everything happens at once.

The Dangers of Ruin Risks

Not all attractors are made equal, though. Some, once entered, can never be escaped. These are called “ruin risks” (orange tile). If there is a possibility of ruin in your landscape, probability dictates you will eventually reach it, obliterating all future aspirations.
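To see why “eventually” does real work there, a back-of-the-envelope sketch with illustrative numbers of my own: even a tiny per-period ruin probability compounds relentlessly over repeated exposures.

    # Probability of having hit ruin at least once after n independent exposures,
    # when each exposure carries a small ruin probability p: 1 - (1 - p)**n.
    p = 0.01   # illustrative 1% ruin risk per year
    for years in (10, 50, 100, 500):
        print(years, "years ->", round(1 - (1 - p) ** years, 3))
    # 10 years -> 0.096, 50 -> 0.395, 100 -> 0.634, 500 -> 0.993: near-certain ruin, given enough time.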

As a basic principle, it does not make sense to see how close to the ledge you can walk and not fall. In personal life, you can take ruin risks to impress your friends or shoot for a Darwin Award. But keep your society away from the damned cliff.

As Nassim Nicholas Taleb teaches us: Risk is ok, ruin is not.

Navigating the Fog of War

In reality, not all states are visible from the start. Policymakers often face a “fog of war” (grey areas). Science can sometimes highlight where the major threats lie (“Here be Dragons”), but the future often remains opaque.

To make things worse for traditional planning, as you move a step from the starting position, the tiles may change. So you defined an ideal state, a Grand Vision (yellow), and set the milestones to reach it? If you remain steadfast, you could now be heading for a dead end or worse. Uh-oh.

(nb. due to space constraints, this image didn’t make it to the paper)

This situation, described in Dave Snowden’s Cynefin framework, is “complex.” Here, yesterday’s goals are today’s stray paths, so when complexity is high, you focus on the present – not some imaginary future. The strategy should be to take ONE step in a favourable direction, observe the now-unfolded landscape, and proceed accordingly.

The Cynefin Framework and Complex Systems

Sensemaking is a motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively.

Gary Klein

Sensemaking (or sense-making, which Dave Snowden frames as a verb) refers to the attempt or capacity to make sense of an ambiguous situation in order to act in it. This is what we must do in complex situations, where excessive analysis can lead to paralysis instead of clarity.

Cynefin is a sense-making framework designed to enable conversations about such a situation, and offers heuristics to navigate the context. In the paper, we propose some tentative mappings of attractor landscape types to the Cynefin framework.

In general, our paper offers proposals for good governance, drawing from the science of sudden transitions in complex systems. Many examples pertain to pandemics, as they represent one of the most severe ruin risks we face (good contenders are of course wars and climate change).

By understanding the concepts illustrated here, policymakers could better navigate crises and build resilient societies capable of adapting to sudden changes.

If you want a deeper dive, please see the paper discussed in this post: At-tractor what-tractor? Tipping points in behaviour change science, with applications to risk management

NOTE: There’s another fresh paper out, this one in Translational Behavioural Medicine: How do behavioral public policy experts see the role of complex systems perspectives? An expert interview study. Could be of interest, too!

What Behaviour Change Science is Not

Due to frequent misconceptions about the topic, I wanted to outline a via negativa description of this thing called behaviour change science: in other words, what is it not? This is part of a series of posts clarifying the perspective I take in instructing a virtual course on behaviour change in complex systems at the New England Complex Systems Institute (NECSI). The course mixes behaviour change with complex systems science along with practical collaboration tools for making sense of the world in order to act in it.

Behaviour change science refers to an interdisciplinary approach, which often hails from social psychology, and studies changing human behaviour. The approach is motivated by the fact that many large problems we face today – be they about spreading misinformation, preventing non-communicable diseases, taking climate action, or preparing for pandemics – contain human action as major parts of both the problems and their solutions.

Based on many conversations regarding confusions around the topic, there is a need to clarify five points.

First, “behaviour change” in the current context is understood in a broad sense of the term, synonymous with human action, not as e.g. behaviourism. As such, it encompasses not only individuals, but also other scales of observation from dyads to small groups, communities, and society at large. Social ecological models, for example, encourage us to think in such a multiscale manner, considering how individuals are embedded within larger systems. Methods for achieving change tend to differ for each scale; e.g. impacting communities entails different tools than impacting individuals (but we can also unify these scales). And the people I talk to in behaviour change understand that action arises from interaction (albeit they may lack the specific terminology).

Second, the term intervention is understood in the behaviour change context in a broader sense than “nudges” to mess with people’s lives. A behaviour change intervention depicts any intentional change effort in a system, from communication campaigns to community development workshops and structural measures such as regulation and taxation. Even at the individual level, behaviour change interventions do not need to imply that an individual’s life is tampered with in a top-down manner; in fact, the best way to change behaviour is often to provide resources which enable the individual to act in better alignment with goals they have. Interventions can and do change environments that hamper those goals, or provide social resources and connections, which enable individuals to take action with their compatriots.

Third, behaviour change is not an activity taken up by actors standing outside the system that’s being intervened upon. Instead, best practices of intervention design compel us to work with stakeholders and communities when planning and implementing the interventions. This imperative goes back to Kurt Lewin’s action research, where participatory problem solving is combined with research activities. Leadership in social psychology is often defined not as the actions of a particular high-ranking role, but those available to any individuals in a system. Behaviour change practice is the same. To exaggerate only slightly: “Birds do it, bees do it, even educated fleas do it”.

Fourth, while interventions can be thought of as “events in systems”, some of which produce lasting effects while others wash away, viewing interventions as transient programme-like entities can narrow our thinking of how enablement of incremental, evolutionary, bottom-up behaviour change could optimally take place. Governance is, after all, conducted by local stakeholders in constant contact with the system, with more leeway to adjust actions without fear of breaking evaluation protocol, and hopefully with “skin in the game” long after intervention designers have moved on.

Fifth, nothing compels an intervention designer to infuse something novel into a system. For example, reverse translation studies what already works in practice, while aiming to learn how to replicate success elsewhere. De-implementation, on the other hand, studies what does not work, with the goal of removing practices causing harm. In fact, “Primum non nocere” – first, do no harm – is the single most important principle for behaviour change interventions.

Making sense of human action

Understanding and influencing human behaviour is usually not a simple endeavour. Behaviours are shaped by a multitude of interacting factors across different scales, from the individual to the societal, and occur within systems of systems. Developing effective behaviour change interventions requires grappling with this complexity. The approach taken in traditional behaviour change science uses behaviour change theories to make this complexity more manageable. I view these more akin to heuristic frameworks with practical utility – codification attempts of “what works for whom and when” – rather than theories in the natural science sense.

If you want a schematic of how I see behaviour change science, it might be something like the triangle below. It’s a somewhat silly representation, but what the triangle tries to convey is that complex systems expertise sets out strategic priorities: which futures should we pursue, and what kinds of methods make sense to get us going (the key word is often evolution).

Behaviour change science, on the other hand, is much more tactical, offering tools and frameworks to understand how to make things happen closer to where the rubber hits the road.

But we will also get nowhere unless we can harness the collective intelligence of stakeholders and organisation or community members. This is why collaboration methods are essential. I will teach some of the ones I’ve found most useful in the course I mentioned in the intro.

If you want to learn more about the intersection of complex systems science and behaviour change, have a look at my Google Scholar profile, or see these posts:

Crafting Policies for an Interconnected World

This piece has been originally published as: Heino, M. T. J., Bilodeau, S., Fox, G., Gershenson, C., & Bar-Yam, Y. (2023). Crafting Policies for an Interconnected World. WHN Science Communications, 4(10), 1–1. https://doi.org/10.59454/whn-2310-348

While our knowledge expands faster than ever, our ability to anticipate and respond to global challenges or opportunities remains limited. A political upheaval in one country, a technological innovation in another, or an epidemic in a far-away city – any of these can create a global change cascade with many unexpected repercussions. Why is this? A significant part of the answer lies in our increased global connectivity, which produces both new risks and novel opportunities for collaborative action. 

In this rapidly evolving world, proactive and adaptive public policies are paramount, with a primary focus on human well-being, rights, and needs. The COVID-19 pandemic serves as a stark reminder that while traditional political and economic systems claim to represent public interests and allocate resources optimally, there’s often a gap between claim and reality. That people vote for political leaders doesn’t guarantee they will focus on public well-being or the availability of resources. A genuine human-centered focus on well-being, satisfaction, and quality of life becomes indispensable.

Reflecting on our pandemic response, mostly hierarchy-based and bureaucratic, we observed glaring operational shortcomings: delayed responses, disjointed actions, and ineffective execution of preparedness plans [1]. However, what has been less discussed is the insight that the crisis offers into the role of uncertainty due to nonlinear risks in shaping policy outcomes. 

Complex systems may present unseen, extreme risks that can spiral into catastrophic failures if left unaddressed early on. These failures can occur upon reaching instabilities and “tipping points” that result in abrupt large-scale losses of well-being or resilience of a system, be it an ecosystem or a social system such as a nation [2–4].

The poor understanding of such non-linear risks is apparent throughout the ongoing phases of the pandemic, where those who called for increased precaution were often accused of “fearmongering”. A misinterpretation of human reactions is a likely contributor: contrary to common belief, people do not usually panic in emergencies. Instead, they tend to respond in constructive, cooperative ways, if given clear and accurate information. The widespread belief in a mass panic during disasters belongs to a group of misconceptions, studied in social psychology under the umbrella term of “disaster myths” [5–7]. The real danger lies in creating a false sense of security. If such a sense is shattered due to an unexpected event and lack of preparation, the fallout can be far more damaging in terms of physical, mental, and economic impact, not to mention loss of trust. Thus, the general recommendation for communication is not to downplay threats. Instead, authorities need to offer the public clear information about potential risks and, crucially, guidance on how to prepare and respond effectively. This guidance has the potential to transform anxiety and passivity into positive self-organized action [8].

Human action lies at the core of many contemporary challenges, from climate change to public health crises. After all, it is human behavior – collective and nonlinear – that fuels the uncertainty of the modern world. The recognition of how traditional approaches can fall short in our increasingly volatile and complex contexts has led to increased demand for “strategic behavioral public policy” [9].

How can we advance our understanding of human behavior linked to instabilities and tipping points and turn them into capabilities for policy makers? The key is to understand how networks of dependencies between people link behaviors across a system. Complex systems science [10], as a field of study, involves understanding how different parts of a system interact with each other, creating emergent properties at multiple scales that cannot be predicted by studying the parts individually: There is no tsunami in a water molecule, no trusting relationship in an isolated interaction, no behavioral pattern in a single act, and no pandemic in an isolated infection [11]. Yet, the transformative potential of combining behavioral science with an understanding of complex systems science, a crucial tool for decision-making under uncertainty, remains largely untapped.

There are significant opportunities in weaving complex systems perspectives into human-centered public policy, infusing a deeper understanding of uncertainty into the heart of policy-making. A fusion of behavioral insights with an understanding of complex systems is not merely an intellectual exercise but a crucial tool for decision-making in crisis conditions and under uncertainty. As some examples:

  1. It urges us to prepare for uncommon events, like pandemics with impacts surpassing those of major conflicts like World War II. This realization comes as we discover that what would be extremely rare events in isolated systems, can become relatively frequent in an interconnected world [12–14]. A long-standing example is how economic crises, which many experts considered rare enough to be negligible, have repeatedly caught us off-guard.
  2. It emphasizes the importance of adaptability in seizing unforeseen opportunities and minimizing potential damages. Central to this adaptability is the concept of “optionality”: maintaining a broad array of choices and opportunities, allowing for increased adaptability and selective application based on evolving circumstances. Recognizing that we cannot anticipate every twist and turn of the future, our best approach is to embrace evolutionary strategies: creating systems that effectively solve problems, instead of trying to solve each unique problem separately [15]. An important takeaway is that instead of over-optimizing for current conditions, investing in buffers and exploration – even if they seem redundant – becomes vital when the future is uncertain.
  3. It empowers us to distribute decision-making power to collaborative teams. This is because teams can solve many more high-complexity problems than individuals can, and significant portions of the modern world are becoming too complex for even the most competent individuals to fully grasp [16,17].
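
As a rough numerical illustration of point 1 above (an illustrative toy comparison, not an analysis from refs [12–14]): in a thin-tailed world the single largest event barely registers in the total, while in a fat-tailed world one event can account for a substantial share of all the harm ever observed, so the past is a poor guide to the worst case.

```python
# Illustrative comparison only (arbitrary parameters): how much of the
# total "loss" the single largest event accounts for under a thin-tailed
# (exponential) versus a fat-tailed (Pareto, alpha = 1.1) distribution.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

thin = rng.exponential(scale=1.0, size=n)       # thin-tailed losses
fat = (1.0 - rng.random(n)) ** (-1.0 / 1.1)     # Pareto(alpha=1.1) via inverse CDF

for name, losses in [("thin-tailed", thin), ("fat-tailed", fat)]:
    print(f"{name}: largest event = {losses.max() / losses.sum():.2%} of the total")
```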

However, integrating these insights is easier said than done. The shift requires significant capacity building among policymakers. It begins with understanding why novel approaches are necessary, and with ensuring that adequate preparedness systems are empowered. Training programs can help policymakers grasp the concepts of risk, uncertainty, and complex systems.

Developing human-centric policies under uncertainty

One recent training to improve competence in behavioral and complex systems insights [18] emphasized three factors of the policy development process: co-creation, iteration, and creativity. These are briefly outlined below.

  • Co-creation: Ideal teams addressing complex challenges have members with a diversity of backgrounds and expertise, where everyone is able to contribute their knowledge to shared action. Much can be achieved by limiting the influence of hierarchy and enabling interaction between team members and other stakeholders; formal approaches include, e.g., the implementation of “red teams” [19]. Those who are most impacted by the plans need to play a key role in the process. These are often citizens, who can provide critical information and expertise about the local environment [20,21].
  • Iteration: Mistakes naturally occur as an intrinsic part of gaining experience, developing the ability to tackle complex challenges, and building organizations to address them. In general, ideas and systems for responding to complex contexts need to be allowed to evolve through (parallel) small-scale experiments and feasibility tests in real-world contexts. Feasibility testing should leverage the aforementioned optionality, retaining the ability to roll back in case of unforeseen negative consequences – or to amplify positive aspects that are only revealed upon observing how the plan interacts with its context [21,22].
  • Creativity: Excessive fear and stress impede innovation. If the design process is science-based, inclusive, and supports learning from weaknesses revealed by iterative explorations that can safely fail, we need not be afraid to try something different or outside of the box. In fact, this is where the most innovative solutions often come from.

Drawing on our earlier discussion on complex systems and human behavior, we understand that in the face of sudden threats, there is a critical need for nimbleness. Rapid response units, representing the frontline of our defense, should possess the autonomy to act, unencumbered by political hindrances. An example would be fire departments’ autonomy to respond to emergencies within pre-set and commonly agreed-upon protocols. The lessons from the pandemic and the insights from complex systems thinking underscore this. But how do we reconcile swift action with informed decision-making?

Transparent, educated communication and trust built on the experience of success can potentially bridge this gap. Science is how we understand the consequences of actions, and selecting the actions with the best consequences is essential when facing global risks. By ensuring policymakers and the public are informed and aligned, we can address risks head-on, anchored in commonly-held values and backed by science. As we lean into the practices discussed earlier, such as co-creation and iteration, our mindset too must evolve. Embracing new, sometimes unconventional, approaches will enable us to sidestep past policy pitfalls, especially those painfully highlighted by recent global events. Protecting rapid response teams from political interference upgrades our societal apparatus to confront the multifaceted challenges of our time.

Learning anticipatory adaptation

Our ultimate aim is clear: proactivity. Rather than reacting once harm is done, we need to anticipate, adapt, and equip policymakers with the necessary insights and tools, using a multidisciplinary approach that includes the behavioral and complexity sciences. This allows us to respond to the unpredictable and keeps society robust and resilient. It also calls for collective action: citizens and organizations developing institutions and informing policy makers, so that communities are empowered to thrive amidst uncertainty.

Bibliography

[1] Heino MT, Bilodeau S, Bar-Yam Y, Gershenson C, Raina S, Ewing A, et al. Building Capacity for Action: The Cornerstone of Pandemic Response. WHN Sci Commun 2023;4:1–1. https://doi.org/10.59454/whn-2306-015.

[2] Scheffer M, Bolhuis JE, Borsboom D, Buchman TG, Gijzel SMW, Goulson D, et al. Quantifying resilience of humans and other animals. Proc Natl Acad Sci 2018:201810630. https://doi.org/10/gfqjqr.

[3] Heino M, Proverbio D, Resnicow K, Marchand G, Hankonen N. Attractor landscapes: A unifying conceptual model for understanding behaviour change across scales of observation 2022. https://doi.org/10.31234/osf.io/3rxyd.

[4] Scheffer M, Borsboom D, Nieuwenhuis S, Westley F. Belief traps: Tackling the inertia of harmful beliefs. Proc Natl Acad Sci 2022;119:e2203149119. https://doi.org/10.1073/pnas.2203149119.

[5] Clark DO, Patrick DL, Grembowski D, Durham ML. Socioeconomic status and exercise self-efficacy in late life. J Behav Med 1995;18:355–76. https://doi.org/10/bjddw6.

[6] Drury J, Novelli D, Stott C. Psychological disaster myths in the perception and management of mass emergencies: Psychological disaster myths. J Appl Soc Psychol 2013;43:2259–70. https://doi.org/10.1111/jasp.12176.

[7] Drury J, Reicher S, Stott C. COVID-19 in context: Why do people die in emergencies? It’s probably not because of collective psychology. Br J Soc Psychol 2020;59:686–93. https://doi.org/10/gg3hr4.

[8] Orbell S, Zahid H, Henderson CJ. Changing Behavior Using the Health Belief Model and Protection Motivation Theory. In: Hamilton K, Cameron LD, Hagger MS, Hankonen N, Lintunen T, editors. Handb. Behav. Change, Cambridge: Cambridge University Press; 2020, p. 46–59. https://doi.org/10.1017/9781108677318.004.

[9] Schmidt R, Stenger K. Behavioral brittleness: the case for strategic behavioral public policy. Behav Public Policy 2021:1–26. https://doi.org/10.1017/bpp.2021.16.

[10] Siegenfeld AF, Bar-Yam Y. An Introduction to Complex Systems Science and Its Applications. Complexity 2020;2020:6105872. https://doi.org/10/ghthww.

[11] Heino MTJ. Understanding and shaping complex social psychological systems: Lessons from an emerging paradigm to thrive in an uncertain world 2023. https://doi.org/10.31234/osf.io/qxa4n.

[12] Cirillo P, Taleb NN. Tail risk of contagious diseases. Nat Phys 2020;16:606–13. https://doi.org/10/ggxf5n.

[13] Rauch EM, Bar-Yam Y. Long-range interactions and evolutionary stability in a predator-prey system. Phys Rev E 2006;73:020903. https://doi.org/10/d9zbc4.

[14] Taleb NN. Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications. Illustrated Edition. STEM Academic Press; 2020.

[15] Bar-Yam Y. Engineering Complex Systems: Multiscale Analysis and Evolutionary Engineering. In: Braha D, Minai AA, Bar-Yam Y, editors. Complex Eng. Syst. Sci. Meets Technol., Berlin, Heidelberg: Springer; 2006, p. 22–39. https://doi.org/10.1007/3-540-32834-3_2.

[16] Bar-Yam Y. Why Teams? N Engl Complex Syst Inst 2017. https://necsi.edu/why-teams (accessed August 9, 2023).

[17] Bar-Yam Y. Complexity rising: From human beings to human civilization, a complexity profile. Encycl Life Support Syst 2002.

[18] Hankonen N, Heino MTJ, Saurio K, Palsola M, Puukko S. Developing and evaluating behavioural and systems insights training for public servants: a feasibility study. Unpublished manuscript 2023.

[19] UK Ministry of Defence (MOD). Red Teaming Handbook. GOVUK 2021. https://www.gov.uk/government/publications/a-guide-to-red-teaming (accessed August 9, 2023).

[20] Tan Y-R, Agrawal A, Matsoso MP, Katz R, Davis SLM, Winkler AS, et al. A call for citizen science in pandemic preparedness and response: beyond data collection. BMJ Glob Health 2022;7:e009389. https://doi.org/10.1136/bmjgh-2022-009389.

[21] Joint Research Centre, European Commission, Rancati A, Snowden D. Managing complexity (and chaos) in times of crisis: a field guide for decision makers inspired by the Cynefin framework. Luxembourg: Publications Office of the European Union; 2021.

[22] Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ 2021;374:n2061. https://doi.org/10.1136/bmj.n2061.

Improving organisational capacity to operate under uncertainty: Start with training?

This post introduces some recent training and education efforts I’ve been involved in. The underlying motivation is to build societal resilience and the anticipation capacity to thrive in modern environments, which are constantly changing and subject to “black swan” risks stemming from rapid shifts.

Surviving in a jungle with training you’ve received in desert environments is a tough task. In the same way, many – particularly public sector – organisations are best adapted to handle tasks which are merely complicated instead of being “complex”. Why is this a problem? It’s because the right strategy for predictable contexts will not succeed in unpredictable ones. And the solution to repeated failures is not to do the wrong thing better (“We need to optimise better, with more data!”), but to ask: “What is it we’re doing wrong?”. The following quote illustrates the need to match tools with tasks:

“A chain always breaks first in one particular link, but if the weight it is required to hold is too high, failure of the chain is guaranteed”

– Yaneer Bar-Yam

If you’re trying to pick up an airplane with a keychain, you won’t succeed by continuously fortifying the weakest link that caused the previous failure. In a recent university course, CARMA: Critical Appraisal of Research Methods and Analysis (let’s call it CARMA-23), I explain the difference between complex and complicated with the following slide. The presentations are open access, so if you want, have a look at this playlist, or this, this and this mini-lecture.

Figure: Complex and complicated, two of four contextual domains depicted in something called the Cynefin framework.

I ran CARMA for the first time in 2019 (let’s call that one CARMA-19), and participants considered it extremely useful. My hope back then, as with the newer iteration, has been to contribute to the formation of more informed social science graduates, who might later become more informed policy makers. Lord knows we direly need them. CARMA is a decision-making course disguised as a research methodology course; as such, it introduces the fundamentals of something called the crisis of confidence in social and life sciences, before going into behaviour change in applied settings. That’s something civil servants are less interested in hearing, and there’s another training catered to them.

Behavioural and complex systems insights

In 2022, our behaviour change and well-being research group ran a training pilot with more than 100 Finnish civil servants (let’s call this one BEHA-22). The training was on behavioural and complex systems insights for human-centric public policy: a fusion of the “behaviour change in complex systems” theme from CARMA and more traditional strands of behaviour change science as it currently stands. BEHA-22 introduced four major topics:

  1. Behavior change science in a complex society: Moving past nudges, to self-organisation and tipping points
  2. Bias-resilient decision making: Making mistakes work for you
  3. Behaviour change interventions: Development and participatory annealing through iteration
  4. Applying behavioural insights at the edge of chaos: Antifragile positioning and embracing crises

I’ve been spending a lot of time with the (mostly very positive) feedback we received, and used it both to hone delivery and to inspire experimentation with some new pedagogical methods. That process fed into the reincarnation of CARMA-19.

The return of CARMA

CARMA-23 progressed through the following goals (expanded upon in my dissertation):

  1. Becoming acquainted with the recent developments regarding the crisis of confidence in social and life sciences. Understanding what the research community is doing to improve the quality of published research. 
  2. Recognising the very crucial difference between absence of evidence and evidence of absence
  3. Understanding the rationale for visualising data, and what can be hidden when reporting summary statistics only. Learning to spot some common tricks used to visualise data in a favourable way to the presenter.
  4. Understanding that decisions in the field do not need to rely on correct predictive statements, let alone scientific evidence: convexity and heuristics (something I dubbed “making evidence-free decisions”, successfully confusing the hell out of people).
  5. Becoming familiar with general features of so-called complex systems, including how interconnectedness, linearity, stationarity/stability, homogeneity etc. differ between complex and merely complicated contexts (a small illustration follows this list).
  6. Understanding the rationale behind interventions and intervening in complex systems, particularly for societal change. Seeing a difference between decomposition-based planning/design, and complexity-based planning/design.
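
As a small illustration of the nonlinearity point in goal 5 (a standard textbook contrast, not necessarily a slide from the course): in a stable linear system a tiny difference in starting conditions stays tiny, whereas in the chaotic regime of the logistic map it is amplified until the two trajectories bear no relation to each other.

```python
# Standard textbook contrast (not course material per se): sensitivity to
# initial conditions. A 1e-9 difference in the starting value shrinks away
# in a stable linear system but is amplified beyond recognition by the
# logistic map in its chaotic regime.
def linear(x):
    return 0.9 * x + 0.1            # stable linear update

def logistic(x):
    return 4.0 * x * (1.0 - x)      # logistic map, chaotic at r = 4

for name, step in [("linear", linear), ("logistic", logistic)]:
    a, b = 0.2, 0.2 + 1e-9
    for _ in range(50):
        a, b = step(a), step(b)
    print(f"{name}: difference after 50 steps = {abs(a - b):.2g}")
```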

You may want to check out summaries of the mini-presentations: Students could choose the videos they considered the most interesting, and summarised them in one sentence. They were asked to put this summary as the headline of a card in our course Padlet, and elaborate in the card content.



In the Padlet, later parts of the course are to the left, and scrolling right brings you towards the fundamentals explored at the beginning of the course. I also asked the students to list particularly interesting or confusing slides so I could develop the delivery of the ideas. The picture below shows what they said, with the slides grouped by course section. The course starts from the top and proceeds downwards, forming a nice arc that hopefully conveys a growing sense of confusion, with its eventual resolution:

Figure: number of slides tagged particularly interesting, contra those tagged particularly confusing, per section. If you’re wondering about “data nudes”, see here for a blog post and here for the video obscenity.

I’ve been playing around with narrative methodology recently, and in their final assignment, the students created narratives of their course experience after analysing it. A repeating theme is captured by this simple extract:

How have I been a university student for five years and this is the first time I’m learning about this?

– Viivi, a social policy major

In Reflection

What’s next? BEHA-23 is happening this fall in the form of a university course. I’m hoping to fuse what we’ve learned thus far with some co-creation methodology I’m quite excited about at the moment. Hopefully, this will blossom into BEHA-24, available to larger audiences next year.

As we navigate the intricate terrains of modern decision-making, training becomes not just an asset but a necessity. The experiences with the CARMA and BEHA iterations have showcased the evolving needs and aspirations of our public sector and society at large. As we prepare for the upcoming BEHA-23 and look forward to its potential continuation as BEHA-24, the overarching goal remains unchanged: to equip individuals and organisations with the skills to not just survive but thrive amidst uncertainty. Whether you’re a policymaker, a researcher, or a curious individual, it would be wonderful to receive feedback on the lectures or summaries of CARMA-23.

How are you preparing for the unpredictable challenges of the future?


Epilogue

In the post about the previous iteration of CARMA, I posted all attendee reviews at the end. There’s a bit too much data now, but I wanted to include (with permission, slightly revised for grammar & brevity) a sample reflection essay from one of the participants in CARMA-23. It highlights very nicely the power of narratives, and the assignment was a relatively pleasant pedagogical method for both the participants and yours truly. Here it is:

At the beginning of the course the student was not really expecting anything. She was forced to choose the course because of changes in her life. She was disappointed to do the course without seeing the teacher and other students.

But sometimes life surprises you – positively. The course opened her eyes to see what it was that she had felt bad about before. Going to university was the fulfilment of a youthful dream and she was so grateful for that, but she felt that something wasn’t right – research was not as ethical as she expected it: teachers told you to just do something now and you can do things differently later.

The course told her a different story: science can and should be done in a proper way… [Editorial note: omitted bits describing solutions to the replication crisis]

She told her husband and friends about the course and how mind blowing it has been. And she was very angry and confused about the world, and everything is just happening without control and the winner is who lies the best. It even made her feel more bad that when she found in the course how in the research visualising the data can be done in the ways that it tricks the viewer, especially with bar plots. She also acknowledged that she was part of the problem wanting herself and her deeds to be seen in the good light – not “naked” like data.

She was herself involved in decision making in politics so it was not surprising to her that it doesn’t always – and many times – rely on research. But the thing she learned is that it is ok to think that complex problems [often] should not and even can not be taken care of with slow processes of research. But still research has its own important place in serving the better society. And [we can study how to change things by not only looking at overt] behaviour, but the phenomena behind it, when it’s about complicated people living in a complicated society – as it always is!

Even though the reason for attending the course was not learning, but the necessity to pass the university, the course had made the research “great again” in her mind. And she understood that the course had shown that there are a lot of people who see “the emperor has no clothes”.

After closing her laptop after the last assignment she felt empty because it was the last one in the masters program in the university. She was thinking about the future and how to manage it. She fell asleep and in the dream the teacher of the course said: “Remember the birds!”. And in the morning she felt that all she had was this moment and all she needed in the future was her open eyes and open mind.

– Laura, a social policy major

The Shape of Change to Come

“Understanding and shaping complex social psychological systems: Lessons from an emerging paradigm to thrive in an uncertain world” is the working title of my recently submitted dissertation on human action and change therein, which I’ve been working on as a side project. This is an executive summary (or a teaser trailer, if you like) for non-academic readers. A pre-print can be found here.

“the totality is not,

as it were, a mere heap,

but the whole is something

besides the parts”

– Aristotle

In an ever-evolving world, the role of human behaviour in addressing pressing challenges cannot be overstated. Complex issues like climate change, pandemic response, and psychosocial well-being all hinge on human actions. Thus, understanding and steering positive behaviour change is of paramount importance. Traditionally, behaviour change research uses a decomposition-based approach, dissecting behaviours pertaining to societal problems into smaller parts and addressing each one separately. To get the picture, imagine a designer focused on building an engine part by part, fine-tuning each piece before fitting them together. This method works when we can clearly map out the pieces, their interactions are limited, and their effects are well understood. This is the domain of decomposition-minded planning; the “ordered regime”, so to say.

However, human behaviour does not always exist in such neatly compartmentalised contexts. Instead, it often operates within complex, dynamic systems where the individual pieces continually interact, mutually influencing each other in unpredictable ways. Consider a forest, for example. It’s not just about individual trees; the entire ecosystem, with its array of flora and fauna, weather patterns, and soil conditions, all contribute to the forest’s health. You cannot simply study a single tree to understand the whole forest. This ecosystem view is the realm of the complexity-minded designer, who acknowledges that problems may not be easily separated but are woven into a larger, interconnected tapestry; the “complex regime”.

The current work suggests that an awareness of these so-called complex systems can enhance our approach to behaviour change. It argues that we are all active participants in our environments, capable of self-determination and self-organisation. Our behaviour is not just the result of isolated influences; instead, it often emerges from an ongoing web of interdependencies. A small action today can lead to major impacts tomorrow, and long periods of apparent stability can suddenly be disrupted by bursts of rapid change. This inherently unpredictable nature of complex regimes means that past data cannot always guide us in the future.

To navigate these complex systems, we need a new kind of designer: the evolutionary-minded designer. This designer harnesses the power of evolution, creating a wide range of possible solutions and allowing the system to select the most appropriate ones. The goal is to create flexible, adaptive systems that are resilient in the face of change and uncertainty – not just solutions that rely on correctly predicting the specifics of the future.

The work presented in this dissertation provides concepts and tools to initiate this approach. It includes a compendium of self-management techniques to empower individuals, and proposes a model of behaviour change as an interconnected network of processes, rather than a series of isolated, static entities. It also discusses how traditional linear models may fail in the face of complex systems and suggests ways of understanding and influencing behaviour change, which may help bridge the gap between social psychology and complex systems science.

In a world that’s increasingly complex and interconnected, our approach to behaviour change must adapt. By embracing complexity, we can better equip ourselves to face the challenges of the future. Rather than trying to oversimplify these complex problems, we should recognize and leverage the inherent richness and unpredictability of human behaviour – where it exists – aiming to develop responsive, adaptable strategies that foster positive change in this uncertain world.


Further reading

All of this will be explained in due time, but if you’re dying to hear more, have a look at this post or these readings (particularly the last one):

Heino, M. T. J., & Hankonen, N. (2022). Itsekontrolli on yhteisöponnistus: Systeemisiä näkökulmia käyttäytymisen muutokseen. In E. Mäkipää & M. Aalto-Kallio (Eds.), Muutosten tiet kietoutuvat yhteen (Vol. 2022, pp. 69–79). https://content-webapi.tuni.fi/proxy/public/2022-09/muutostentiet_heino-hankonen_v4.pdf

Heino, M. T. J., Knittle, K., Noone, C., Hasselman, F., & Hankonen, N. (2021). Studying Behaviour Change Mechanisms under Complexity. Behavioral Sciences, 11(5), Article 5. https://doi.org/10.3390/bs11050077

Heino, M. T. J., Proverbio, D., Marchand, G., Resnicow, K., & Hankonen, N. (2022). Attractor landscapes: A unifying conceptual model for understanding behaviour change across scales of observation. Health Psychology Review, 0(ja), 1–26. https://doi.org/10.1080/17437199.2022.2146598

Bar-Yam, Y. (2006). Engineering Complex Systems: Multiscale Analysis and Evolutionary Engineering. In D. Braha, A. A. Minai, & Y. Bar-Yam (Eds.), Complex Engineered Systems: Science Meets Technology (pp. 22–39). Springer. https://doi.org/10.1007/3-540-32834-3_2

Siegenfeld, A. F., & Bar-Yam, Y. (2020). An Introduction to Complex Systems Science and Its Applications. Complexity, 2020, 6105872. https://doi.org/10/ghthww

At-tractor what-tractor? Tipping points in behaviour change science, with applications to risk management

Back in 2020, our research group was delivering the last of five symposia included in a project called Behaviour Change Science and Policy (BeSP). I was particularly excited about this one because the topic was complexity, and the symposium series brought together researchers and policy makers interested in improving society – without making things worse by assuming an overly narrow view of the world.

I had a particular interest in two speakers. Ken Resnicow had done inspiring conceptual work on the topic already back in 2006, and had been an influence on both me and my PhD supervisor (and BeSP project lead) Nelli Hankonen in her early career. Sadly, the world hadn’t yet been ready for an extensive uptake of the ideas, and many of the methodological tools were inaccessible (or unsuitable) to social scientists. The other person of particular interest, Daniele Proverbio, on the other hand, was a doctoral researcher with training in physics and systems biology; I had met him by chance at the International Conference on Complex Systems, which I probably wouldn’t have attended had it not been held online due to COVID. He was working on robust foundations and real-world applications of systems with so-called tipping points.

I started writing a paper with Ken, Daniele, Nelli and Gwen Marchand, who was also speaking at the symposium, as she had been working extensively on complexity in education. The paper started out as an introduction to complexity for behaviour change researchers, but as I took up a position in the newly founded behavioural science advisory group at the Finnish Prime Minister’s Office in late 2020, the whole thing went on the back burner. It wasn’t just that, though. Being a scholar of motivation, I knew that being bored of your own words is a major warning sign, and that things you prefer not to eat, you shouldn’t feed to others. So I didn’t touch the draft for over a year.

Meanwhile, I finished a manuscript which started out as a collection of notes from arguments about study design and analysis within our research group when we were doing a workplace motivation self-management / self-organisation intervention. The manuscript demonstrated how non-linearity, non-ergodicity and interdependence can be fatal for traditional methods of analysis. It was promptly rejected by Health Psychology Review, the flagship journal of the European Health Psychology Society – on the grounds that linear methods can solve all the issues, which was exactly the opposite of the manuscript’s argument. That piece was later published in Behavioral Sciences, outlining the foundations of complex systems approaches in behaviour change science.
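
The manuscript itself is not reproduced here, but a toy example of the kind of problem it tackles (invented numbers, not the intervention data) shows why non-ergodicity matters: when every individual shows a negative within-person relationship between two variables, but people who are high on one also tend to be high on the other, a pooled cross-sectional analysis can report a positive association that describes no single person.

```python
# Toy illustration with invented numbers (not data from the study):
# non-ergodicity / Simpson's paradox. Within each person the relation
# between x and y is negative, but pooling across people flips the sign.
import numpy as np

rng = np.random.default_rng(7)
n_people, n_obs = 50, 30
people = []
for _ in range(n_people):
    base = rng.normal(0, 5)                                    # stable between-person level
    x = base + rng.normal(0, 1, n_obs)                         # repeated observations
    y = base - 0.8 * (x - base) + rng.normal(0, 0.5, n_obs)    # within-person slope is negative
    people.append((x, y))

within = np.mean([np.corrcoef(x, y)[0, 1] for x, y in people])
all_x = np.concatenate([x for x, _ in people])
all_y = np.concatenate([y for _, y in people])
pooled = np.corrcoef(all_x, all_y)[0, 1]
print(f"average within-person correlation: {within:+.2f}")
print(f"pooled cross-sectional correlation: {pooled:+.2f}")
```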

As the complexity fundamentals paper had now been written, I wasn’t too keen on continuing our BeSP piece – until I was hit by a strange moment where everything I had dabbled with (and discussed with Daniele) over the previous year sort of came together. I rewrote the entire paper in a very short time, partly around an analysis I had started out of natural curiosity, with no particular goal in mind.

This is non-linearity in action: instead of “productively” writing a little every day, you write nothing for a very long time, and then everything at once. And this is not a pathology – except in the minds of people who think everything in life should follow a pre-planned process of gradual fulfillment. I’ve spent decades trying to unlearn this, so I should know.

The paper turned out very non-boring to me, and I was particularly happy that the aforementioned flagship journal (the one which rejected the earlier piece) accepted it with no requests for edits – despite it being based on the same underlying ideas as the earlier one.

Figure: Graphical abstract of the attractor landscapes paper, describing two types of tipping points in systems with attractors; courtesy of Daniele Proverbio.

Implications for risk management

The theory underlying attractor landscapes and tipping points points to two important issues in risk management. Firstly, large changes need not be the result of large events: small pushes can suffice when the system resides in a shallow attractor or on top of a “hill” in the landscape. Secondly, the fact that earlier events have not caused large-scale behaviour change does not imply that they will continue not to do so in the future. This is a mistake that Finnish doctors and epidemiologists made constantly throughout the pandemic, e.g. about people’s unwillingness to take up masks – we could stop COVID, for example, but don’t do so because people have been told this attractor is inescapable.
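
A toy illustration of the first point (a generic double-well simulation, not the formalism of the attractor landscapes paper itself): the same small random pushes that essentially never move a system out of a deep basin of attraction will, sooner or later, tip it out of a shallow one.

```python
# Toy double-well model (illustrative only): an overdamped "ball" in the
# landscape U(x) = depth * (x^2 - 1)^2, starting in the left basin and
# nudged by small random pushes. Shallow basins get tipped; deep ones don't.
import numpy as np

rng = np.random.default_rng(3)

def escape_fraction(depth, runs=200, steps=10_000, dt=0.01, noise=0.5):
    """Fraction of runs that ever leave the left basin (cross x > 0.5)."""
    x = np.full(runs, -1.0)
    escaped = np.zeros(runs, dtype=bool)
    for _ in range(steps):
        drift = -4.0 * depth * x * (x**2 - 1.0)          # -dU/dx
        x = x + drift * dt + noise * np.sqrt(dt) * rng.normal(size=runs)
        escaped |= x > 0.5
    return escaped.mean()

for depth, label in [(0.25, "shallow basins"), (2.0, "deep basins")]:
    print(f"{label} (depth={depth}): tipped in {escape_fraction(depth):.0%} of runs")
```

The pushes are identical in both conditions; only the shape of the landscape decides whether they ever amount to a large-scale change.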

In a recent training for public servants, we experimented with conveying these ideas to non-scientists – lots of work to be done, but some did find it an enlightening escape from conventional linear thinking.

To sum up, some personal takeaways (your mileage may vary):

  1. The quality of motivation you experience when working on something boring is information: there might be a better idea, one actually worth your time, which gets trampled as you muddle through something less attractive. The same applies to health behaviours.
  2. Remain able to seize opportunities when they arise: steer clear of projects with deadlines, and milestones in particular. They coerce you into finishing what you started, instead of dropping it for a time and starting anew much later.

The astute reader may have noticed that I did not explain the damned attractors in this post at all. You’ll find all you need here: