Complexity perspectives on behaviour change interventions

I had the great pleasure of being involved in organising a symposium on the topic of my dissertation. Many if not most societal problems are both behavioural and complex; hence the speakers’ backgrounds ranged from systems science and psychology to social work and physics. Below is a list of video links along with a short synopsis of each talk. See here for other symposia in the Behaviour Change Science and Policy series.

A live-tweet thread of the first day is here, and of the second day (not including the presentations by me, Nanne Isokuortti or Ira Alanko) here. See here for the official web page, and here for the YouTube playlist!

Nelli Hankonen: Opening words & introduction to the Behaviour Change Science & Policy (BeSP) project

  • See here for videos of previous symposia (I: Intervention evaluation & field experiments; II: Behavioural insights in developing public policy and interventions; III: Reverse translation: Practice-based evidence; IV: Creating real-world impact: Implementation and dissemination of behaviour change interventions)

Marijn de Bruijn: Integrating Behavioural Science in COVID-19 Prevention Efforts – The Dutch Case

  • Behaviour change efforts for COVID-19 protective behaviours are interventions in a complex system: a virus is the problem, but behaviour is the solution.
  • Knee-jerk communication responses of health officials can be improved upon by using methods derived from what works in real-world behavioural science interventions.
  • Protective behaviours entail feedback dynamics: for example, crowding leads to difficulty maintaining distance, which leads to perceiving that others don’t consider it important, which leads to more crowding, etc.
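As a toy illustration of such a reinforcing loop (my own invented coefficients, not from the talk), one can iterate the crowding–norm feedback and watch it run away:

```python
# Invented coefficients: crowding erodes the perceived distancing norm,
# and a weaker norm in turn invites more crowding.
crowding, norm = 0.2, 0.8            # initial crowding level, perceived importance of distancing
for _ in range(20):
    norm = max(norm - 0.1 * crowding, 0.0)            # crowded spaces signal "others don't care"
    crowding = min(crowding + 0.1 * (1 - norm), 1.0)  # weaker norm -> more crowding
print(round(crowding, 2), round(norm, 2))
```

Starting from mild crowding, the loop feeds on itself until crowding saturates and the norm collapses.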

Nelli Hankonen: Why is it Useful to Consider Complexity Insights in Behaviour Change Research?

  • Complexity-informed approaches to intervention have been around for a long time, but only recently has analytical methodology become widely available.
  • There are important differences between “complicated” and “complex” behavioural interventions.
  • By not taking the complexity perspective into account, we may be missing opportunities to properly design interventions.

Olli-Pekka Heinonen: Complexity-Informed Policymaking

  • If a civil servant wants to be effective, maximum control doesn’t work – even what constitutes “progress” can be difficult to ascertain.
  • Systems, such as society, move: what worked yesterday might not work today.
  • Hence continuous learning, adaptation and experimenting are not optional for societal decision-making.

Gwen Marchand: Complexity Science in the Design and Evaluation of Behaviour Interventions

  • What does it mean to define behavior and behavior change from a complex systems perspective?
  • Focal units and well-defined timescales are key considerations in the design and study of interventions.
  • Context acts to constrain and afford possible states for behaviour change related to an intervention.

Jari Saramäki: How do Behaviours, Ideas, and Contagious Diseases Spread Through Networks?

  • People are embedded in networks that influence their behaviour and health.
  • Network structure – how the networks’ links are organized – strongly affects this influence.
  • Interventions that modify network structure can be used to promote or hinder the spread of influence or contagion.

Matti Heino: Studying Complex Motivation Systems – Capturing Dynamical Patterns of Change in Data from Self-assessments and Wearable Technology

  • Analysis of living beings involves addressing interconnected, turbulent processes that vary across time.
  • Recruiting fewer individuals and collecting more data on fewer variables may be a considerably beneficial tradeoff for understanding the dynamics of a psychological phenomenon.
  • Methods to deal with such data include building networks of networks (multiplex recurrence networks) and assessing early warning signals of sudden gains or losses.

If you’re interested in the links, download my slides here. I actually forgot to show what a multiplex network of variables combined from several theories looks like. (Since you don’t condition on all other variables, you can combine variables from different frameworks without their meaning changing, as it would in a regression-based analysis.) Anyway, it looks like this:

A single person’s multiplex recurrence network, i.e. a network of recurrence networks of work motivation variables queried daily for 30+ days. Colored connectors are relationships which can’t be attributed to randomness.
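For the curious, the recurrence-network idea behind that figure can be sketched in a few lines. This is a toy version with invented variable names (autonomy, competence) and an arbitrary threshold, not the actual analysis in the slides; the interlayer comparison here is a crude edge overlap rather than the randomness test used in the figure:

```python
import numpy as np

def recurrence_network(series, eps):
    """Link two time points if the system revisits (almost) the same value."""
    x = np.asarray(series, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances between time points
    A = (dist < eps).astype(int)
    np.fill_diagonal(A, 0)                   # no self-loops
    return A

def edge_overlap(A, B):
    """Jaccard overlap of two layers' edge sets: one crude interlayer similarity."""
    both = np.sum((A == 1) & (B == 1))
    either = np.sum((A == 1) | (B == 1))
    return both / either if either else 0.0

rng = np.random.default_rng(1)
t = np.arange(30)                                  # ~30 daily self-assessments
autonomy = np.sin(t / 4) + rng.normal(0, .3, 30)   # invented motivation variables
competence = np.sin(t / 4 + .5) + rng.normal(0, .3, 30)

A1 = recurrence_network(autonomy, eps=0.5)
A2 = recurrence_network(competence, eps=0.5)
print(f"interlayer edge overlap: {edge_overlap(A1, A2):.2f}")
```

Each variable becomes one layer of the multiplex; the layers share the same nodes (days), so you can compare their wiring without ever regressing one variable on another.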

Nanne Isokuortti: From Exploration to Sustainment – Understanding Complex Implementation in Public Social Services

  • Illustrate the complexity in an implementation process with a real-world case example
  • Introduce Exploration, Preparation, Implementation, and Sustainment (EPIS) Framework
  • Provide suggestions on how to aid implementation in complex settings

Ira Alanko: The AuroraAI Programme

  • The Finnish public sector is taking active steps to utilise AI to make the use of services easier
  • AI has opened a window for a systemic shift towards human-centricity in Finland
  • The AuroraAI network is a collection of different components, not a platform or a collection of chatbots

Daniele Proverbio: Smooth or Abrupt? How Dynamical Systems Change Their State

  • Natural phenomena don’t necessarily follow smooth and linear patterns while evolving.
  • Abrupt changes are common in complex, non-linear systems; studying them is arguably the future of scientific research.
  • There exist a limited number of transition classes. Understanding their main drivers could lead to useful insights and applications.

Ken Resnicow: Behavior Change is a Complex Process. How does that impact theory, research and practice?

  • Behavior change is a complex, non-linear process.
  • Sudden change is more enduring than gradual change.
  • Failure to replicate prior interventions can be understood from a complexity lens.

(nb. on the last talk: personally, I’m not a huge fan of mediation analysis, moderated or otherwise. Stay tuned for an interview where I discuss the topic at some length with Fred Hasselman)

Notes from the symposium by Grace Lau

Randomised experiments (mis?)informing social policy in complex systems

In this post, I vent about anti-interdisciplinarity, introduce some basic perspectives of complexity science, and wonder whether decisions on experimental design actually lead us to end up in a worse place than where we were, before we decided to use experimental evidence to inform social policy.

People in our research group recently organised a symposium, Interdisciplinary perspectives on evaluating societal interventions to change behaviour (talks watchable here), as part of a series called Behaviour Change Science & Policy (BeSP). The idea is to bring together people from various fields from philosophy to behavioural sciences, medicine and beyond, in order to better tackle problems such as climate change and lifestyle diseases.

One presentation touched upon Finland’s randomised controlled trial testing the effects of basic income on employment (see also the report on first-year results). In crude summary, they did not detect an effect of free money on finding employment. (Disclaimer: they had aimed for 80% statistical power, meaning that even if all your assumptions about the effect size are correct, in the long run you would still get no statistically significant result 20% of the time despite there being a real effect.)
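To make that disclaimer concrete, here is a toy power simulation. The effect size, sample size and test below are illustrative textbook values, not those of the basic income trial:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
d, n, alpha = 0.5, 64, 0.05   # illustrative: n ~ 64/group gives ~80% power for d = 0.5

sims, hits = 2000, 0
for _ in range(sims):
    control = rng.normal(0, 1, n)
    treated = rng.normal(d, 1, n)   # here the effect is, by construction, real
    if ttest_ind(treated, control).pvalue < alpha:
        hits += 1

print(f"significant in {hits / sims:.0%} of trials")   # roughly 80%: a real effect
                                                       # is missed about 1 time in 5
```

So even a flawless 80%-powered study of a true effect comes up "null" in one run out of five.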

During post-symposium drinks, I spoke with an economist about the trial. I was wondering how come they used individual instead of cluster randomisation – randomising neighbourhoods, for example. The answer was resource constraints: much larger sample sizes are needed for the statistics to work. To me it seemed clear that it’s a very different situation if one person in a network of friends gets free money, compared to if everyone does. The economist wondered: “How come there could be second-order effects when there were no first-order effects?” The conversation took a weird turn. Paraphrasing:

Me: Blahblah compelling evidence from engineering and social sciences to math and physics that “more is different”, i.e. phenomena play out differently depending on the scale at consideration… blahblah micro-level interactions create emergent macro-level patterns blahblah.

Economist: Yeah, we’re not having that conversation in our field.

Me: Oh, what do you mean?

Economist: Well, those are not things discussed in our top journals, or considered interesting subjects to research.

Me: I think they have huge consequences, and specifically in economics, this guy in Oxford just gave a presentation on what he called “Complexity economics”. He had been doing it for some decades already, I think he originally had a physics background…

Economist: No thanks, no physicists in my economics.

Me: Huh?

Economist: [exits the conversation]

Now, wasn’t that fun for a symposium on interdisciplinary perspectives.

I have a lot of respect for the mathematical prowess of economists and econometricians, don’t get me wrong. One of my favourites is Scott E. Page, though I only know him through an excellent course on complexity (also available as an audiobook). I probably like him because he breaks out of the monodisciplinary, insulationist mindset economists are often accused of. Page’s view of complexity actually relates to our conversation. Let’s see how.

First off, he describes complexity (and most social phenomena of interest) as arising from four factors, which can be thought of as tuning knobs or dials. Complexity arises when each dial is tuned not to either extreme – which is where equilibria arise – but somewhere in the middle. And complex systems tend to reside far from equilibrium, permanently.

To dig more deeply into how the attributes of interdependence, connectedness, diversity, and adaptation and learning generate complexity, we can imagine that each of these attributes is a dial that can be turned from 0 (lowest) to 10 (highest).

Scott E. Page

  • Interdependence means the extent to which one person’s actions affect another’s. The dial ranges from complete independence, where one person’s actions do not affect others’ at all, to complete dependence, where everyone observes and tries to perfectly match everyone else’s actions. In real life we see both unexpected cascades (such as the US decision makers’ ethanol regulations contributing to the Arab Spring) and some, but never complete, independence – that is, manifestations that do not sit at either extreme of the dial, but somewhere in between.
  • Connectedness refers to how many other people a person is connected to. The extremes range from a person living all alone in a cabin in the woods to hypersocial youth living on Instagram, trying to keep tabs on everyone and everything. The vast majority of people lie somewhere in between.
  • Diversity is the presence of qualitatively different types of actors: If every person is a software engineer, mankind is obviously doomed… But the same happens if there’s only one engineer, one farmer etc. Different samples of real-world social systems (e.g. counties) consist of intermediate amounts of diversity, lying somewhere in between.
  • Adaptation and learning refer to the extent of the actors’ smartness. This ranges from following simple, unchanging rules, to being perfectly rational and informed, as assumed in classical economics. In actual decision making, we see “bounded rationality”, reliance on rules of thumb and tradition, as well as both optimising and satisficing behaviours – the “somewhere in between”.

The complexity of complex systems arises when diverse, connected people interact on the micro level and, by doing so, produce “emergent” macro-level states of the world, to which they adapt, creating new unexpected states of the world.

You might want to read that one again.

Back to basic income: when we pick 2000 random individuals around the country and give them free money, we’re implicitly assuming they are not connected to any other people, and/or that they are completely independent of the actions of others. We’re also assuming that they are all the same, or that it’s not interesting that they are of different types. And so forth. If we later compare their employment data to that of those who were not given basic income, the result is an estimate of the causal effect in the population – if all the assumptions hold.

But consider how these assumptions may fail. If the free money were perceived as permanent, and given to people’s whole network of unemployed buddies, it seems quite plausible that they would adapt their behaviour in response to the changing dynamics of their social network. This might even differ between cliques: some might use the safety net of basic income to collectively found companies and take risks, while others might alter their daily drinking behaviour to match their costs with the predictable income – for better or worse. But when you randomise individually and ignore how people cluster in networks, you’re studying a different thing. Whether it’s an interesting thing or a silly thing is another issue.
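The "different thing" can be shown with a toy simulation. The clique structure and effect sizes below are pure invention; the point is only that when outcomes depend on how many of your friends are treated, individual randomisation recovers roughly the direct effect alone, while cluster randomisation also picks up the spillover:

```python
import numpy as np

rng = np.random.default_rng(3)
n_cliques, size = 200, 10
direct, spill = 1.0, 2.0    # invented effects: own grant, plus treated share of friends

def contrast(treat):
    """Mean outcome difference between treated and untreated individuals."""
    treat = treat.reshape(n_cliques, size)
    peers = (treat.sum(1, keepdims=True) - treat) / (size - 1)  # treated share of friends
    y = direct * treat + spill * peers + rng.normal(0, 1, treat.shape)
    y, t = y.ravel(), treat.ravel()
    return y[t == 1].mean() - y[t == 0].mean()

# Individual randomisation: treated and untreated sit in the same friend groups,
# so the spillover hits both arms equally and cancels out of the contrast.
ind = contrast(rng.integers(0, 2, n_cliques * size).astype(float))

# Cluster randomisation: whole cliques treated or not, so spillovers stay within arms.
clu = contrast(np.repeat(rng.integers(0, 2, n_cliques), size).astype(float))

print(f"individual: {ind:.2f}, cluster: {clu:.2f}")   # ~direct vs ~direct + spill
```

Neither estimate is wrong as arithmetic; they just answer different questions about the same intervention.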

Now, it’s easy to come up with these kinds of assumption-destroying scenarios, but a whole different ordeal to study them empirically. We need to simplify reality in order to deal with it. The question is this: how much of an abstraction can a map (i.e. a model in a research study, making those simplified assumptions) be and still represent reality adequately? This is also an ontological question, because if you take the complexity perspective seriously, you say bye-bye to the kind of thinking that lets you dream up the predictable effects a button-press (such as a policy change) has on the state of a system. People who act in – or try to steer – complex systems control almost nothing but influence almost everything.

An actor in a complex system controls almost nothing but influences almost everything.

Scott E. Page

Is some information, some model, still better than none? Maybe. Maybe not. In Helsinki, you’re better off without a map than with a map of Stockholm – the so-called “best map fallacy” (explained here in detail). Rare, highly influential events drive the behaviour of complex systems: the Finnish economy was not electrified by average companies starting to sell more, but by Nokia hitting the jackpot. And such events are very hard, if not impossible, to predict✱.

Ok, back to basic income again. I must say that the people who devised the experiment were not idiots; they included e.g. interviews to get some idea of unexpected effects. I think this type of approach is definitely necessary when dealing with complexity, and all social interventions should include qualitative data in their evaluation. But, again, unless the unemployed don’t interact at all, individual randomisation studies a different thing than cluster randomisation. I do wonder if it would have been possible to include some matched clusters, to see whether qualitatively different dynamics take place when you give basic income to a whole area instead of to randomly picked individuals within it.

The society is a complex system, and must be studied as such. Figure: Hiroki Sayama (click to enlarge)

But, to wrap up this flow of thought: I’m curious whether you think it is possible to randomise a social intervention individually AND always keep in mind that the conclusions are only valid if there are no interactions between people’s behaviour and that of their neighbours. Or is it inevitable that the human mind smooths out the details?

Importantly: is our map better now than it was before? Will this particular experiment go down in history as showing that basic income has no effect on job seeking – as the economist suggested with “there were no first-order effects”? (Remember, the aim was only 80% statistical power.) Lastly, I want to say I consider it unforgivable to work within only one discipline and disregard the larger world you’re operating in: when we bring science to policy making, we must be doubly cautious about the assumptions our conclusions stand on. Luckily, transparent scientific methodology allows us to be explicit about them.

Let me hear your thoughts, and especially objections, on Twitter, or by email!

✱ One solution is to harness convexity, which can be oversimplified like this:

  1. Unpredictable things will happen, and they will make you either better or worse off.
  2. The magnitude of an event is different from its effect on you: there are huge events that don’t impact you at all, and small events that are highly meaningful to you. Often that impact depends on the interdependence and connectedness dials.
  3. To an extent, you can control the impact an event has on you.
  4. You want to control exposure in such a way, that surprise losses are bounded, while surprise gains are as limitless as possible.
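Point 4 is easy to sketch with toy numbers. Below, the same fat-tailed surprises hit two exposure rules; the cap at -1 is an arbitrary stand-in for any bounded commitment:

```python
import numpy as np

rng = np.random.default_rng(4)
shocks = rng.standard_t(3, 10_000) * 10   # fat-tailed surprises, good and bad

linear = shocks                     # fully exposed in both directions
convex = np.maximum(shocks, -1.0)   # exposure capped on the downside (e.g. a small,
                                    # bounded stake), gains left open-ended

print(f"linear: worst {linear.min():.0f}, best {linear.max():.0f}")
print(f"convex: worst {convex.min():.0f}, best {convex.max():.0f}")
```

The convex rule gives up nothing on the surprise gains while bounding the surprise losses – which is the whole trick.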