Pathways and complexity in behaviour change research

These are slides from a talk given at the Aalto University Complex Systems seminar. The talk contrasts two views of changing behaviour, the pathway view and the complexity view, the latter still in its infancy. It also presents some Secret Analysis Arts of Recurrence, which Fred Hasselman doesn’t want you to know about, and includes links to resources. If you perchance saw my mini-MOOCs (1, 2) and happened to find them useful, drop me a line and I’ll make one of this talk, too.

Lifestyle factors are hugely relevant in preventing disease in modern societies; unfortunately, people often fail in their attempts to change health behaviour – both their own and that of others. In recent years, behaviour change design has been conceived of as a process where one identifies deficiencies in the factors influencing the behaviour (commonly called “determinants”). Complexity thinking suggests putting the emphasis on de-stabilisation instead.

The perspective taken here is mostly at the idiographic level. At the time of writing, we have behaviour change methods to affect e.g. skills, perceived social norms, attitudes and so forth – but very little on general de-stabilisation of the motivational system as an important predictor of change.

Perspectives are welcome!

ps. Those of you who worry about brainwashing and freedom of thought: chill. Stuff that powerful doesn’t really exist, and if it did, marketers would know about it and probably rule the world. [No, they don’t rule the world; I’ve been there.]

pps. I forgot to put it in the slides, but this guy Merlijn Olthof will perhaps one day tweet about his work on destabilisation in psychotherapy contexts. Meanwhile, you can e.g. be his 10th Twitter follower, or keep checking his Google Scholar profile, as there’s a new piece coming out soon!

Randomised experiments (mis?)informing social policy in complex systems

In this post, I vent about anti-interdisciplinarity, introduce some basic perspectives of complexity science, and wonder whether decisions on experimental design actually lead us to a worse place than where we were before we decided to use experimental evidence to inform social policy.

People in our research group recently organised a symposium, Interdisciplinary perspectives on evaluating societal interventions to change behaviour (talks watchable here), as part of a series called Behaviour Change Science & Policy (BeSP). The idea is to bring together people from various fields, from philosophy to the behavioural sciences, medicine and beyond, in order to better tackle problems such as climate change and lifestyle diseases.

One presentation touched upon Finland’s randomised controlled trial to test the effects of basic income on employment (see also the report on first-year results). In crude summary, they did not detect effects of free money on finding employment. (Disclaimer: they had aimed for 80% statistical power, meaning that even if all your assumptions about the size of the effect are correct, in the long run 20% of such trials would fail to produce a statistically significant result despite a real effect being present.)
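To make that disclaimer concrete, here is a minimal power-calculation sketch in Python; the employment rates below are made-up numbers for illustration, not the trial’s actual design parameters:

```python
# A minimal sketch of what "80% power" means, using statsmodels.
# The employment rates below are hypothetical, purely for illustration;
# they are not the trial's actual design parameters.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.35, 0.30)  # assumed treatment vs. control rates
n_per_arm = NormalIndPower().solve_power(effect_size=effect, power=0.80, alpha=0.05)
print(f"Required n per arm: {n_per_arm:.0f}")
# Even if the assumed effect is exactly right, 1 in 5 such trials
# will come up non-significant: that is the 20% Type II error rate.
```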

During post-symposium drinks, I spoke with an economist about the trial. I wondered why they had used individual instead of cluster randomisation – randomising neighbourhoods, for example. The answer was resource constraints: cluster randomisation requires much larger sample sizes for the statistics to work. To me it seemed clear that it’s a very different situation if one person in a network of friends gets free money than if everyone does. The economist wondered: “How could there be second-order effects when there were no first-order effects?” The conversation took a weird turn. Paraphrasing:

Me: Blahblah compelling evidence from engineering and social sciences to math and physics that “more is different”, i.e. phenomena play out differently depending on the scale under consideration… blahblah micro-level interactions create emergent macro-level patterns blahblah.

Economist: Yeah, we’re not having that conversation in our field.

Me: Oh, what do you mean?

Economist: Well, those are not things discussed in our top journals, or considered interesting subjects to research.

Me: I think they have huge consequences, and specifically in economics, this guy in Oxford just gave a presentation on what he called “complexity economics”. He’s been doing it for some decades already; I think he originally had a physics background…

Economist: No thanks, no physicists in my economics.

Me: Huh?

Economist: [exits the conversation]

Now, wasn’t that fun for a symposium on interdisciplinary perspectives.

I have a lot of respect for the mathematical prowess of economists and econometricians, don’t get me wrong. One of my favourites is Scott E. Page, though I only know him through an excellent course on complexity (also available as an audio book). I probably like him because he breaks out of the monodisciplinary, insulationist mindset economists are often accused of. Page’s view of complexity actually relates to our conversation. Let’s see how.

First off, he describes complexity (and most social phenomena of interest) as arising from four factors, which can be thought of as tuning knobs or dials. Complexity arises when no dial is tuned to either extreme – the extremes are where equilibria live – but somewhere in the middle. And complex systems tend to reside far from equilibrium, permanently.

To dig more deeply into how the attributes of interdependence, connectedness, diversity, and adaptation and learning generate complexity, we can imagine that each of these attributes is a dial that can be turned from 0 (lowest) to 10 (highest).

Scott E. Page

  • Interdependence is the extent to which one person’s actions affect those of others. This dial ranges from complete independence, where one person’s actions do not affect others at all, to complete dependence, where everyone observes and tries to perfectly match everyone else’s actions. In real life, we see both unexpected cascades (such as US decision makers’ ethanol regulations contributing to the Arab Spring) and some, but never complete, independence – that is, manifestations that do not fit either extreme of the dial but lie somewhere in between. (A toy simulation of this dial follows the list.)
  • Connectedness refers to how many other people a person is connected to. The extremes range from someone living all alone in a cabin in the woods, to hypersocial youth living on Instagram, trying to keep tabs on everyone and everything. The vast majority of people lie somewhere in between.
  • Diversity is the presence of qualitatively different types of actors: if every person were a software engineer, mankind would obviously be doomed… but the same happens if there’s only one engineer, one farmer, etc. Real-world social systems (e.g. counties) contain intermediate amounts of diversity, lying somewhere in between.
  • Adaptation and learning refer to how smart the actors are. This ranges from following simple, unchanging rules to being perfectly rational and informed, as assumed in classical economics. In actual decision making, we see “bounded rationality”: reliance on rules of thumb and tradition, as well as both optimising and satisficing behaviours – the “somewhere in between”.
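To make the interdependence dial concrete, here is a toy simulation of my own devising (not Page’s model): each agent either copies a random peer with probability dial/10 or acts at random. At dial 0 the population share of an action hovers predictably around one half; at dial 10 it herds toward consensus; in between lies the interesting, hard-to-predict middle ground.

```python
import random

def simulate(dial, n_agents=200, steps=100, seed=1):
    """Toy model: each step, every agent copies a random peer with
    probability dial/10, and otherwise picks an action at random."""
    rng = random.Random(seed)
    actions = [rng.choice([0, 1]) for _ in range(n_agents)]
    share_history = []
    for _ in range(steps):
        for i in range(n_agents):
            if rng.random() < dial / 10:
                actions[i] = actions[rng.randrange(n_agents)]  # imitate
            else:
                actions[i] = rng.choice([0, 1])                # act independently
        share_history.append(sum(actions) / n_agents)
    return share_history

for dial in (0, 5, 10):
    shares = simulate(dial)
    print(f"dial={dial}: share of action 1 ranged "
          f"{min(shares):.2f}-{max(shares):.2f}")
```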

The complexity of complex systems arises when diverse, connected people interact on the micro level and, by doing so, produce “emergent” macro-level states of the world, to which they then adapt, creating new, unexpected states of the world.

You might want to read that one again.

Back to basic income: when we pick 2000 random individuals around the country and give them free money, we’re implicitly assuming they are not connected to any other people, and/or that they act completely independently of the actions of others. We’re also assuming that they are all of the same type, or that it’s not interesting that they are of different types. And so forth. If we later compare their employment data to that of those who were not given basic income, the result we get is an estimate of the causal effect in the population – if all those assumptions hold.

But consider how these assumptions may fail. If the free money were perceived as permanent, and given to a person’s whole network of unemployed buddies, it seems quite plausible that they would adapt their behaviour in response to the changing dynamics of their social network. This might even play out differently in different cliques: some might use the safety net of basic income to collectively found companies and take risks, while others might adjust their daily drinking to match the costs with the predictable income – for better or worse. But when you randomise individually and ignore how people cluster in networks, you’re studying a different thing. Whether it’s an interesting thing or a silly thing is another issue.
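Here is a back-of-the-envelope simulation of that spillover point, with entirely hypothetical numbers: suppose treatment only changes behaviour when nearly all of a person’s cluster is treated too. Individual randomisation then finds almost nothing, while cluster randomisation finds a large effect – the two designs estimate different things.

```python
import random

def estimated_effect(cluster_randomise, n_clusters=500, size=10, seed=2):
    """Hypothetical spillover model: a treated person 'responds' with
    probability 0.4 (vs. a 0.1 baseline) only if at least 80% of their
    cluster is treated too. All numbers are made up for illustration."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n_clusters):
        if cluster_randomise:
            flags = [rng.random() < 0.5] * size  # whole cluster assigned together
        else:
            flags = [rng.random() < 0.5 for _ in range(size)]
        share = sum(flags) / size
        for t in flags:
            p = 0.4 if (t and share >= 0.8) else 0.1
            (treated if t else control).append(rng.random() < p)
    return sum(treated) / len(treated) - sum(control) / len(control)

print("individual randomisation:", round(estimated_effect(False), 3))  # near zero
print("cluster randomisation:   ", round(estimated_effect(True), 3))   # near 0.3
```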

Now, it’s easy to come up with these kinds of assumption-destroying scenarios, but a whole different ordeal to study them empirically. We need to simplify reality in order to deal with it. The question is this: how much of an abstraction can a map (i.e. a model in a research study, making those simplifying assumptions) be and still represent reality adequately? This is also an ontological question, because if you take the complexity perspective seriously, you say bye-bye to the kind of thinking that lets you dream up the predictable effects a button press (such as a policy change) has on the state of a system. People who act in, or try to steer, complex systems control almost nothing but influence almost everything.

An actor in a complex system controls almost nothing but influences almost everything.

Scott E. Page

Is some information, some model, still better than none? Maybe. Maybe not. In Helsinki, you’re better off without a map than with a map of Stockholm – the so-called “best map fallacy” (explained here in detail). Rare, highly influential events drive the behaviour of complex systems: the Finnish economy was not electrified by average companies starting to sell more, but by Nokia hitting the jackpot. And such events are very hard, if not impossible, to predict✱.

Ok, back to basic income again. The people who devised the experiment were not idiots; they included e.g. interviews to get some idea of unexpected effects. I think this type of approach is definitely necessary when dealing with complexity, and all social interventions should include qualitative data in their evaluation. But, again, unless the unemployed never interact, randomising individually means studying a different thing than randomising in clusters. I do wonder whether it would have been possible to include some matched clusters, to see whether qualitatively different dynamics take place when you give basic income to a whole area rather than to randomly picked individuals within it.

[Figure: Complex systems organizational map]
Society is a complex system, and must be studied as such. Figure: Hiroki Sayama.

But, to wrap up this train of thought: I’m curious whether you think it is possible to randomise a social intervention individually AND always keep in mind that the conclusions are only valid if there are no interactions between people’s behaviour and that of their neighbours. Or is it inevitable that the human mind smooths out the details?

Importantly: is our map better now than it was before? Will this particular experiment go down in history as showing that basic income has no effect on job seeking, as the economist’s “there were no first-order effects” would suggest? (Remember, they only aimed for 80% statistical power.) Lastly, I consider it unforgivable to work within one discipline only and disregard the larger world you’re operating in: when we bring science to policy making, we must be doubly cautious about the assumptions our conclusions stand on. Luckily, transparent scientific methodology allows us to be explicit about them.

Let me hear your thoughts, and especially objections, on Twitter, or by email!

✱ One solution is to harness convexity, which can be oversimplified like this:

  1. Unpredictable things will happen, and they will make you either better or worse off.
  2. The magnitude of an event is different from its effect on you, i.e. there are huge events that don’t impact you at all, and small events that are highly meaningful to you. That impact often depends on the interdependence and connectedness dials.
  3. To an extent, you can control the impact an event has on you.
  4. You want to control exposure in such a way that surprise losses are bounded, while surprise gains remain as limitless as possible. (A numerical sketch follows below.)
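A crude numerical sketch of point 4, with made-up payoffs: cap the downside of fat-tailed shocks while leaving the upside open, and the worst case shrinks dramatically while the average improves.

```python
import random

random.seed(3)
shocks = [random.gauss(0, 1) ** 3 for _ in range(100_000)]  # fat-ish tails

linear = shocks                          # fully exposed in both directions
convex = [max(s, -1.0) for s in shocks]  # losses capped at -1, gains left open

for name, payoffs in (("linear", linear), ("convex", convex)):
    print(name, "worst:", round(min(payoffs), 1),
          "mean:", round(sum(payoffs) / len(payoffs), 3))
```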

Idiography illustrated: Things you miss when averaging people

This post contains slides I made to illustrate some points about phenomena which will remain forever out of reach if we continue the common practice of always averaging individual data. For another post on the perils of averaging, check this out, and for an overview of idiographic research with resources, see here.

(Almost the same presentation with some narration is included in this thread, in case you want more explanation.)
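To see the averaging problem in miniature, here is a classic Simpson’s-paradox-style sketch with fabricated data: within each person, the predictor relates negatively to the outcome, but pooling everyone flips the correlation positive.

```python
import random
import statistics  # statistics.correlation needs Python 3.10+

random.seed(4)
pooled_x, pooled_y = [], []
for person in range(5):
    level = person * 2.0  # person-specific level of both variables
    xs = [level + random.random() for _ in range(30)]
    # Within a person, y DECREASES with x (slope -0.8)...
    ys = [level + 5 - 0.8 * (x - level) + random.gauss(0, 0.1) for x in xs]
    print(f"person {person}: within-person r = {statistics.correlation(xs, ys):.2f}")
    pooled_x += xs
    pooled_y += ys

# ...but people with high x-levels also have high y-levels, so the
# pooled correlation comes out strongly POSITIVE.
print(f"pooled r = {statistics.correlation(pooled_x, pooled_y):.2f}")
```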

Here’s one more illustration of why you need the right sampling frequency for whatever it is you study – and the less you know, the denser the sampling you need initially. From a paper I’m drafting:

[Figure: chaosplot]

The figure illustrates a hypothetical percentage of a person’s maximum motivation (y-axis) measured on different days (x-axis). Panels: 

  • A) Measurement at three time points – representing conventional evaluation at baseline, post-intervention and a longer-term follow-up – shows a decreasing trend.
  • B) Measurement on slightly different days shows the opposite trend.
  • C) Measuring 40 time points instead of three would have accommodated both phenomena.
  • D) A new linear regression line (dashed) as well as a LOESS regression line (solid), with potentially important processes taking place around the circled data points.
  • E) Measuring 400 time points instead would have revealed a process of “deterministic chaos”. Without knowing the equation and the starting values, accurate prediction would be impossible – but that doesn’t make a regression line any more helpful. (A minimal simulation follows the list.)
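For the curious, the “deterministic chaos” of panel E takes one line of code to generate; the logistic map below is just a stand-in for whatever the true motivational dynamics might be:

```python
def logistic_series(n, r=4.0, x0=0.4):
    """Deterministic chaos from a one-line rule: x -> r * x * (1 - x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

series = [100 * x for x in logistic_series(400)]  # "% of max motivation"

three_points = [series[i] for i in (0, 200, 399)]  # looks like a tidy trend
forty_points = series[::10]                        # hints at oscillation
print([round(v) for v in three_points])
print(f"{len(forty_points)} points hint at structure; all 400 reveal chaos")
```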

During the presentation, a question came up: how much do we need to know? Do we really care about the “real” dynamics? Personally, I mostly just want information to be useful, so I’d be happy just tinkering with trial and error. The thing is, tinkering may benefit from knowing what has already failed and where fruitful avenues may lie. My curiosity ends when we can help people change their behaviour in ways that fulfil the spirit of R.A. Fisher’s criterion for an empirically demonstrable phenomenon:

In relation to the test of significance, we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us a statistically significant result. (Fisher 1935b/1947, p. 14; see Mayo 2018)

So, if I were a physiology researcher studying the effects of exercise, I would have changed fields (to e.g. physical activity promotion) when the negative effects of low activity became evident, whereas other people want to learn the exact metabolic pathways by which the thing happens. Likewise, I will quit intervention research when we figure out how to create interventions that fail to work <5% of the time.

Some people say we’re dealing with human phenomena so unpredictable and turbulent that we cannot expect to do much better than we currently do. I disagree, as all the methods I’ve seen used in our field so far are designed for ergodic, stable, linear systems. But there are other kinds of methods – the kinds physicists moved on to when, somewhere around the 19th century, they left behind the ones that stuck with us. I’m very excited to learn more at the Complexity Methods for Behavioural Science summer school (here are some slides on what I presume will be among the topics).
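As a teaser of those methods: a bare-bones recurrence plot (the “Secret Analysis Arts of Recurrence” from the first post) takes very little code. The sketch below skips delay embedding and works on a raw one-dimensional series; for serious use, look at dedicated toolboxes such as Fred Hasselman’s casnet package for R.

```python
import numpy as np
import matplotlib.pyplot as plt

def recurrence_matrix(series, threshold=0.1):
    """Mark the time-point pairs (i, j) where the system revisits
    (nearly) the same state: |x_i - x_j| < threshold."""
    x = np.asarray(series)
    return np.abs(x[:, None] - x[None, :]) < threshold

x, xs = 0.4, []
for _ in range(300):  # logistic-map series as demo data
    x = 4.0 * x * (1 - x)
    xs.append(x)

plt.imshow(recurrence_matrix(xs), cmap="binary", origin="lower")
plt.xlabel("time i")
plt.ylabel("time j")
plt.show()
```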


Additional resources:

I don’t have examples on e.g. physical activity, because nobody has done that yet, and the lack of good longitudinal within-individual data is a severe historical hindrance. But some research groups are gathering continuous longitudinal data, and one that I know of has very long time series of machine-vision data on schoolyard physical activity (those are systems, too, just like individuals). Plenty has already been done in the public health sphere.

Hell if I know – this might turn out to be a dead end, like most new developments tend to be.

But I’d be happy to be convinced that it is an inferior path to our current one 😉


Complexity considerations for intervention (process) evaluation

For some years, I’ve been partly involved in the Let’s Move It intervention project, which targeted dysfunctional physical activity and sedentary behaviour patterns in older adolescents by affecting their school environment as well as social and psychological factors.

I gave a talk at the closing seminar; it was live-streamed and is available here (I’m on stage from about 1:57:00 in the recording). But if you were there, or are otherwise interested in the slides I promised, they are now here.

For a demonstration of non-stationary processes (which I didn’t talk about, but which are mentioned in these slides), check out this video and an experimental mini-MOOC I made. Another blog post touching on some of these issues is found here.
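If you just want the gist of non-stationarity in code form, here is a toy contrast of my own: a stationary noise series keeps its mean across time windows, while a random walk, whose mean and variance depend on time, does not.

```python
import random
import statistics
from itertools import accumulate

random.seed(5)
noise = [random.gauss(0, 1) for _ in range(2000)]  # stationary by construction
walk = list(accumulate(noise))                     # random walk: non-stationary

for name, series in (("stationary noise", noise), ("random walk", walk)):
    first, second = series[:1000], series[1000:]
    print(f"{name}: window means {statistics.mean(first):.2f} "
          f"vs {statistics.mean(second):.2f}")
# The noise keeps roughly the same mean in both windows; the walk's
# window means can differ wildly, because it has no fixed mean at all.
```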

 


Misleading simplifications and where to find them (Slides & Mini-MOOC 11min)

The gist: to avoid getting fooled by misleading simplifications, we need to name our simplifying assumptions when modelling social-scientific data. I’m experimenting with this visual approach to delivering information to those who think modelling is boring; feedback and improvement suggestions are very welcome! [A similar presentation with between-individual longitudinal physical activity networks, presented at the Finnish Health Psychology conference: here]

I’m not as smooth as those talking heads on the interweb, so you may want just the slides. Download them by clicking on the image below, or watch at SlideShare.

SLIDE DECK:

[Image: first slide of the deck]

Mini-MOOC:

 

Note: Jan Vanhove thinks we shouldn’t become paranoid about model assumptions; check out his related blog post here!