Idiography illustrated: Things you miss when averaging people

This post contains slides I made to illustrate some points about phenomena that will remain forever out of reach if we continue the common practice of always averaging individual data. For another post on the perils of averaging, check this out, and for an overview of idiographic research with resources, see here.

(Almost the same presentation with some narration is included in this thread, in case you want more explanation.)

Here’s one more illustration of why you need the right sampling frequency for whatever it is you study: the less you know, the denser your initial sampling needs to be. From a paper I’m drafting:

[Figure: chaosplot]

The figure illustrates a hypothetical percentage of a person’s maximum motivation (y-axis) measured on different days (x-axis). Panels: 

  • A) Measurement at three time points (representing a conventional evaluation of baseline, post-intervention and longer-term follow-up) shows a decreasing trend.
  • B) Measurement on slightly different days shows the opposite trend.
  • C) Measuring 40 time points instead of three would have captured both of these trends.
  • D) The new linear regression line (dashed) and the LOESS regression line (solid), with potentially important processes taking place around the circled data points.
  • E) Measuring 400 time points would have revealed a process of “deterministic chaos” instead. Without knowing the equation and the starting point, accurate prediction would be impossible, and a regression line would not help either (a small code sketch of this sampling effect follows below).
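
To make the sampling-frequency point concrete, here is a minimal Python sketch. It is not the code behind the figure above; the logistic-map model, the parameter 3.9 and the particular measurement days are illustrative assumptions. It generates a chaotic series and fits a straight line through 3 or 400 sampled days:

```python
import numpy as np

def chaotic_series(r=3.9, x0=0.5, n_days=400):
    """Generate a chaotic series with the logistic map (used here as a toy model)."""
    x = np.empty(n_days)
    x[0] = x0
    for t in range(1, n_days):
        x[t] = r * x[t - 1] * (1 - x[t - 1])
    return x

def linear_trend(series, days):
    """Slope of a straight line fitted through the measurements on the given days."""
    days = np.asarray(days)
    slope, _ = np.polyfit(days, series[days], deg=1)
    return slope

motivation = chaotic_series()

# Three "conventional" measurement occasions vs. slightly shifted ones:
print(linear_trend(motivation, [0, 30, 90]))    # one apparent trend...
print(linear_trend(motivation, [3, 33, 93]))    # ...which may reverse when the days shift
# Dense sampling shows the series has no simple linear trend to begin with:
print(linear_trend(motivation, np.arange(400)))
```

Which slope you get from three points depends entirely on which days you happen to measure, which is the point of panels A and B.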

During the presentation, a question came up: How much do we need to know? Do we really care about the “real” dynamics? Personally, I mostly just want information to be useful, so I’d be happy just tinkering with trial and error. The thing is, tinkering benefits from knowing what has already failed, and where fruitful avenues may lie. My curiosity ends when we can help people change their behaviour in ways that fulfil the spirit of R. A. Fisher’s criterion for an experimentally demonstrable phenomenon:

In relation to the test of significance, we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us a statistically significant result. (Fisher 1935b/1947, p. 14; see Mayo 2018)

So, if I were a physiology researcher studying the effects of exercise, I would have changed fields (to e.g. PA promotion) once the negative effects of low activity became evident, whereas other people want to learn the exact metabolic pathways through which the effects occur. Likewise, I will quit intervention research once we figure out how to create interventions that fail to work less than 5% of the time.

Some people say we’re dealing with human phenomena so unpredictable and turbulent that we cannot expect to do much better than we currently do. I disagree: all the methods I’ve seen used in our field so far are designed for ergodic, stable, linear systems. But other kinds of methods exist, which physicists adopted when they moved beyond the ones that stuck with us, roughly since the 19th century. I’m very excited about learning more at the Complexity Methods for Behavioural Science summer school (here are some slides on what I presume will be among the topics).


Additional resources:

I don’t have examples on e.g. physical activity, because nobody has done that work yet; the lack of good longitudinal within-individual data has been a severe historical hindrance. But some research groups are gathering continuous longitudinal data, and one that I know of has very long time series of machine-vision data on schoolyard physical activity (schoolyards are systems, too, just like individuals). Plenty has already been done in the public health sphere.

Hell, what do I know; this might turn out to be a dead end, like most new developments do.

But I’d be happy to be convinced that it is an inferior path to our current one 😉

[Image: blackbox]

Deterministic doesn’t mean predictable

In this post, I argue against the intuitively appealing notion that, in a deterministic world, we just need more information and can use it to solve problems in complex systems. That notion is problematic in e.g. psychology, where more knowledge does not necessarily mean cumulative knowledge, or even improved outcomes.

Recently, I attended a talk where Misha Pavel happened to mention how big data can lead us astray, and how we can’t just look at data but need to know mechanisms of behaviour, too.

[Photo]
Misha Pavel arguing for the need to learn how mechanisms work.

Later, a couple of my psychologist friends happened to present arguments discounting this, saying that the problem will be solved due to determinism. Their idea was that the world is a deterministic place (if we knew everything, we could predict everything, an argument also known as Laplace’s Demon) and that we eventually a) will know and b) can predict. I’m fine with the first part, or at least agnostic about it. But there are more mundane problems with prediction than “quantum randomness” and other considerations about whether truly random phenomena exist. The thing is that even simple and completely deterministic systems can be utterly unpredictable to us mortals. I will give an example of this below.

Even simple and completely deterministic systems can be utterly unpredictable.

Let’s think of a very simple made-up model of physical activity, just to illustrate a phenomenon:

Say today’s amount of exercise depends only on motivation and the previous day’s exercise. Let’s say people have a certain maximum amount of time to exercise each day, and that they vary from day to day in what proportion of that time they actually manage to exercise. To keep things simple, let’s say that if a person manages to do more exercise on Monday, they give themselves a break on Tuesday. People also differ in motivation, so let’s add that as a factor, too.

Our completely deterministic, but definitely wrong, model could be written as:

Exercise percentage today = (motivation) * (percentage of max exercise yesterday) * (1 – percentage of max exercise yesterday)

For example, if one had a constant motivation of 3.9 units (whatever the scale), and managed to do 80% of their maximum exercise on Monday, they would use 3.9 times 80% times 20% = 62% of their maximum exercise time on Tuesday. Likewise, on Wednesday they would use 3.9 times 62% times 38% = 92% of the maximum possible exercise time. And so on and so on.
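
In code, this worked example is just the one-line update above, iterated day by day. Here is a minimal Python sketch; the 3.9 and 80% are the same illustrative numbers as in the text:

```python
motivation = 3.9   # constant motivation (illustrative units, as in the text)
exercise = 0.80    # Monday: 80% of the maximum exercise time

# Apply the model's update rule for the following days:
for day in ["Tuesday", "Wednesday", "Thursday"]:
    exercise = motivation * exercise * (1 - exercise)
    print(f"{day}: {exercise:.0%} of maximum exercise time")
# Prints roughly 62% for Tuesday and 92% for Wednesday, matching the rounded numbers above.
```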

We’re pretending this model is the reality. This is so that we can perfectly calculate the amount of exercise on any day, given that we know a person’s motivation and how much they managed to exercise the previous day.

Imagine we measure a person who obeys this model with a constant motivation of 3.9 and starts out on day 1 at 50% of their maximum exercise amount. But let’s say there is a slight measurement error: instead of 50.000%, we measure 50.001%. In the graph below we can observe how the measured values (red line) quickly diverge from the actual ones (blue line). After around day 40, the predictions we make from our model do not describe our target person’s behaviour at all. The slight measurement deviation has made this fully deterministic system practically chaotic and random to us.

[Animation: chaosplot_animation.gif]
This simple, fully deterministic system becomes impossible to predict within a short time, due to a measurement error of 0.001 percentage points. The blue line depicts the actual values, the red line the measured ones. They diverge around day 35 and are soon completely off. [Link to gif]
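
The divergence is easy to reproduce. Below is a minimal Python sketch (not the plotting code linked in the resources; the 10-percentage-point threshold is an arbitrary cut-off I chose for illustration) that iterates the model from the true and the mis-measured starting values and reports when they drift apart:

```python
import numpy as np

r = 3.9                   # constant motivation, as above
n_days = 100
actual = np.empty(n_days)
measured = np.empty(n_days)
actual[0] = 0.50          # true starting value: 50% of maximum
measured[0] = 0.50001     # measured value: off by 0.001 percentage points

for t in range(1, n_days):
    actual[t] = r * actual[t - 1] * (1 - actual[t - 1])
    measured[t] = r * measured[t - 1] * (1 - measured[t - 1])

# First day on which the prediction is off by more than 10 percentage points:
first_bad_day = np.argmax(np.abs(actual - measured) > 0.10)
print(f"Predictions are badly off from day {first_bad_day} onwards.")
```

Despite the tiny initial error, the trajectories stay close for a few weeks and then separate completely, which is what the animation shows.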

What are the consequences?

The model is silly, of course, as we probably would never try to predict an individual’s exact behaviour on any single day (averages and/or bigger groups help, because usually no single instance can kill the prediction). But this example does highlight a common feature of complex systems, known as sensitive dependence on initial conditions: even tiny uncertainties accumulate into huge errors. It is also worth noting that increasing model complexity doesn’t necessarily help us with prediction, due to problems such as overfitting (assuming the future will look just like the past; see also why simple heuristics can beat optimisation).

Thus, predicting long-term path-dependent behaviour, even if we knew the exact psycho-socio-biological mechanism governing it, may be impossible in the absence of perfect measurement. Even if the world were completely deterministic, we still could not predict it, as even trivially small things left unaccounted for could throw us off completely.

Predicting long-term path-dependent behaviour, even if we knew the exact psycho-socio-biological mechanism governing it, may be impossible in the absence of perfect measurement.

The same thing happens when trying to predict as simple a thing as how billiard balls impact each other on the pool table. The first collision is easy to calculate, but to compute the ninth you already have to take into account the gravitational pull of people standing around the table. By the 56th impact, every elementary particle in the universe has to be included in your assumptions! Other examples include trying to predict the sex of a human fetus, or trying to predict the weather 2 weeks out (this is the famous idea about the butterfly flapping its wings).

Coming back to Misha Pavel’s points regarding big data, I feel somewhat skeptical about being able to acquire invariant “domain knowledge” in many psychological domains. Also, as shown above, knowing the exact mechanism is still no promise of being able to predict what happens in a system. Perhaps we should be satisfied when we can make predictions such as “intervention x will increase the probability that the system reaches a state where more than 60% of the goal is reached on more than 50% of the days, by more than 20% in more than 60% of the people who belong to the group it was designed to affect”?

But still: for determinism to solve our prediction problems, the amount and accuracy of data needed is beyond the wildest sci-fi fantasies.

I’m happy to be wrong about this, so please share your thoughts! Leave a comment below, or on these relevant threads: Twitter, Facebook.

References and resources:

  • Code for the plot can be found here.
  • The billiard ball example explained in context.
  • A short paper on the history of the “butterfly (or seagull) flapping its wings” idea.
  • To learn about dynamic systems and chaos, I highly recommend David Feldman’s course on the topic, next time it comes around at Complexity Explorer.
  • … Meanwhile, the equation I used here is actually known as the “logistic map”. See this post about how it behaves.

 

Post scriptum:

Recently, I was happy and surprised to see a paper attempting to create a computational model of a major psychological theory. In a conversation, Nick Brown expressed doubt:

[Screenshot: Nick Brown’s comment]

Do you agree? What are the alternatives? Do we have to be content with vague statements like “the behaviour will fluctuate” (perhaps as in: fluctuat nec mergitur)? How should we study the dynamics of human behaviour?

 

Also: do see Nick Brown’s blog, if you don’t mind non-conformist thinking.