Yaneer Bar-Yam is a complexity scientist who has worked on and warned about pandemics for 15 years. His interview with Esa-Pekka Pälvimäki and Thomas Brand (in English) regarding the COVID-19 situation in Finland can be found here; below are some of my notes and video extracts.
The primary point: we must understand that we can get rid of this disease. We can stop it just as we have stopped other diseases: SARS, MERS and Ebola are not global scourges. More on this later.
“There are countries that have acted wisely and eliminated the disease; they are [in the eyes of history] the winners. Finland is not there yet… if Finland wants to join the leaders, it must act quickly and forcefully to eradicate the disease.”
Two ways out of the crisis:
A two-week lockdown would bring new cases almost entirely to a halt. In areas that still have cases, the lockdown should be extended. The government should support the decision-making power of cities and other communities, so that they can regulate their own restrictions.
A five-week national lockdown: Many countries have succeeded in the fight against COVID by means of a national lockdown (e.g. South Korea, Greece, Iceland, Luxembourg, Croatia; see figure). Countries in this group can reopen travel between one another. [Editor's note: no country has beaten the virus without decisive countermeasures.]
What is the role of communities in fighting the epidemic? If a legally binding curfew or other movement restrictions are impossible, the pandemic response can be carried out within communities; the message is that we are all in the same boat and everyone wants to get back to normal – so let's get back to normal as quickly as possible! Not everyone will follow the recommendations, of course, but if most people do, that is enough. A common and proven technique for controlling an epidemic is going door to door and asking how community members are doing: are they healthy, are they sick, do they need anything? For a couple of weeks this could be done by a community member – in Finland, perhaps by the safety officer of the housing cooperative.
In communities where the disease is spreading exceptionally strongly, one should talk to the leaders and tell them that the disease and the suffering it causes can be eliminated. Nothing is more important than getting communities to take ownership of, and responsibility for, their own members. People, their worries and their problems should be listened to, and they should be asked how they could best be helped.
Will the spread inevitably restart if a person carrying the disease enters an infection-free area? New waves of infection are not inevitable. Infectious diseases have been eradicated before, and the coronavirus can be eradicated as well: locally and globally. It is a matter of choice. For instance, 1–3 cases can always be stopped through contact tracing and isolating the exposed; we can also act so that the emergence of new cases becomes very unlikely. But if there are, say, ten cases, heavier measures are needed.
Will the disease keep returning from abroad forever, until a vaccine becomes available – surely no country's economy can endure such long restrictions? No: eradicating the disease can be done in weeks. In Finland it could be eliminated from many places in two weeks, from others in three or more. In five to six weeks it would be gone everywhere. There is an interesting bias at play: in the early phase of the disease, people thought the disease-free world would last forever, and now people think the disease will last forever. Neither is true – the normal state does not last forever, nor does the state of emergency. SARS and MERS did not end up circulating the globe forever either.
What about herd immunity? The cost of acquiring herd immunity is enormous, and it is not clear that it would even work. If we take no stronger measures, but merely keep case counts low and wait for a vaccine and herd immunity, it could take years and cost 250,000 lives.
Getting everyone – businesses, communities and the government – on board with the effort.
Lockdown: maintaining physical distance (6–9 metres; 2 m is not enough) and limiting transmission within family clusters (those who test positive are sent to quarantine e.g. in a hotel instead of their own home).
Identifying and isolating cases (in comfortable places, such as hotels) in time.
Wearing face masks, especially in essential services.
At least some degree of travel restrictions. Limiting movement to services deemed necessary is better than letting everyone move about at will, which brings infections to places where they might otherwise never appear.
Making the use of essential services safe: safe workspaces, remote work options, home and curbside delivery from grocery stores, and so on.
Large-scale testing, so that we know where additional restrictions are needed and where restrictions can be relaxed. Computed tomography can take testing to a new level; it produces very few false negatives.
“Anyone who says today that there is no information on the basis of which face masks could be called useful against the spread of this disease is blind. They are wearing the mask over their eyes instead of their mouth and nose. The evidence exists, the scientific understanding exists; this message needs to be clear.”
– Yaneer Bar-Yam (39:34)
A huge mistake in the response is that we think and act as if this were influenza. But that is not what it is; we can learn more from countries that have come through serious infectious disease outbreaks than from our own past handling of ordinary seasonal flus. Ebola, for example, has returned after local elimination only because it has re-entered humans through animals.
The alternative strategy is not Flatten the Curve, i.e. “keep case numbers low and wait it out”.
Less than half of studies (the exact number depends on the field) can be replicated
Way too few studies can be computationally reproduced, that is, getting the same results from the same data and the same analysis code
Research tends to ignore context, making generalisability difficult
Published studies are reported non-transparently, so it's hard to tell what was actually done – and whether p-hacking practices were used (e.g. the results were cherry-picked from a large pool of random data)
There are several initiatives to address these concerns, but where do they spring from, and how can we eventually fix science in large scale? I’m going to suggest a solution which will rub a lot of people the wrong way. Incidentally, it is the same tool we need to fight the Coronavirus. But first, we need to understand Nassim Taleb’s presentation of the minority rule.
The basic idea is that, under particular conditions, once a stubborn niche reaches a small share of the total population, such as 3–4%, the majority will have to submit to the preferences of the minority. For example, consider a children's party, where the organiser needs to decide whether to offer milk products, as some of the guests are lactose intolerant. Let us call these the inflexible ones: they would suffer great harm from milk products, so they avoid them. The majority of the guests, the flexible ones, can consume both lactose-free products and those containing milk. Given that lactose-free supplies are easily available and not significantly inferior in quality, it makes life much easier for the organiser (as well as for the inflexible guests) to serve no milk products at all.
As another example, during my previous life as a business person, I did a degree where my peers were about 50% Finnish and 50% other nationalities, ranging all the way from Russia to Peru. We Finns spoke Finnish with each other, but whenever a non-Finnish person joined the group, we switched to English. The proportion of non-Finnish speakers was irrelevant, as long as it was above 0%.
So, an inflexible minority can drastically affect how the majority acts. But the inflexibility can also stem from one's worldview: if you had to decide on a daytime activity with a bunch of friends during Ramadan, and one of them was a Muslim, you wouldn't go to a steak house.
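The arithmetic behind the minority rule can be sketched with a toy model (the 4% share and group size of ten below are illustrative assumptions, not empirical estimates):

```python
# Toy model of the minority rule. A group "submits" to the inflexible
# preference if it contains at least one inflexible member, so the
# probability that a random group of size n is converted is
# 1 - (1 - p)^n, where p is the inflexible share of the population.

def converted_share(p: float, group_size: int) -> float:
    """Share of groups that end up following the minority preference."""
    return 1 - (1 - p) ** group_size

# With only 4% inflexible individuals, about a third of ten-person
# parties end up serving only the lactose-free option:
print(round(converted_share(0.04, 10), 2))  # 0.34

# Renormalisation: a converted group acts inflexibly at the next scale
# (households -> neighbourhoods -> cities), so the share snowballs:
share = 0.04
for _ in range(3):
    share = converted_share(share, 10)
print(share)  # effectively 1.0 after three scales
```

The renormalisation step is the fractal part: once a household defers to its one inflexible member, the whole household behaves inflexibly toward its neighbourhood, and so on upward.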
What does this mean for improving science and weakening the Coronavirus?
In order to promote good research, transparency advocates need to be inflexible about questionable research practices. To the point that they lose potential career opportunities – although they may, in turn, gain better ones as they can work with likeminded people.
In order to smash COVID-19, citizens need to be inflexible about risk behaviours. To the point that some people consider them overzealous and rigid – although it may not matter, if it leads to surviving the crash.
Both of these causes have a very important fractal, or multiscale component: Much of the action is not top-down but happens bottom-up; the individual reels in their family (or immediate research group), who then become norm-setters in their apartment building/neighbourhood (or scientific society of their research area), who again affect local governance (or scientific discipline).
But there are at least three crucial success factors for the behaviour change effect to work:
The inflexible group needs to be spatially spread widely, instead of being confined in particular geographic (or intellectual) pockets, in which case the majority can just isolate and ignore them.
The cost of aligning with the inflexible group needs to be small for the flexible group. For the majority's behaviour to change, it may therefore be necessary for the minority to absorb some of the costs of the change – at least initially. The other option is to move in steps so small they are almost imperceptible.
Crucially, the inflexible group… Does. Not. Budge. People always tend to say that one “must not be so strict”, but there is a reason it is not okay to steal, murder, or cheat on your spouse “just a bit”. If the inflexibles are perceived to be flexible after all, the majority can expect to dominate them.
For our case examples, spatial spread is mostly taken care of: The internet has done much to allow for the minority members to connect, while being perhaps the only ones in their own immediate vicinity passionate about their cause. So I’ll address #2-#3.
Lowering the cost of transparency: In the scientific transparency scene, this means the minority representatives need to spend tons of time learning about transparent research practices (e.g. pre-registration and data sharing, the TOP Factor, etc.). This knowledge they can then either disseminate to the rest of their research group, or act as the person who does most of the heavy lifting required in reporting reproducible work.
Lowering the cost of Coronavirus safety: The anti-Coronavirus advocates, on the other hand, need to make information easily available (as they do in endcoronavirus.org), share it, and translate it – both literally and figuratively. An example would be sharing research studies, ways to make and wear masks correctly, or how to acquire them (if you’re in Finland, check this out to have masks made for you, while donating some to healthcare workers). They may also need to learn about technicalities of video conferencing and other solutions, so that they can readily teach their peers after refusing face-to-face meetings.
Not budging in research transparency: The research transparency people obviously need to refuse co-authoring papers which contain p-hacking, hyperbole or other ways of distorting the findings to improve chances of publication. They need to refuse projects which do not plan to share analysis code (and data, within privacy constraints), ask about transparency before peer reviewing, and walk away from papers where the first author insists on presenting exploratory hypotheses as confirmatory ones, or is not willing to properly discuss constraints to generalisability, model assumptions (stationarity, homogeneity, independence, interference, ergodicity… see here if these are strange words) and sensitivity analyses.
Not budging in Coronavirus safety: The anti-Coronavirus folks need to set an example by performing hand hygiene, self-isolating, wearing masks, keeping physical distance, and taking their kids out of school/daycare – while also making sure their family does the same. In addition, they need to speak out when they see their friends or neighbours engaging in risk behaviours, such as violating the 2-metre (6-feet) physical distance requirement. They need to make it clear they are only available for meetings via video conferencing, which they're happy to help set up.
Remaining steadfast and vocal is not for everyone, and calling out behaviour you perceive as wrong can be extremely anxiety-provoking. That's also why one needs to start with those closest to them. And it is hard to be inflexible in the beginning, when the majority norms are against you and everyone is expected to play along. The “happy” news is that not everyone needs to be inflexible – just the small minority. (I'm putting happy in quotes, because the minority rule can also be leveraged to gradually promote any fascist ideology the majority is foolish enough to tolerate.)
Hence, if you’re the type of person who feels strongly enough to be inflexible about these things, perhaps you can feel comforted by the idea that you don’t need to convert the majority: The stubborn few can create the critical mass and change the world.
For all the recordings, see our YouTube channel. There are two playlists; one for short snippets and another one for full-length lectures. Here are some tweets on the course, with links to further resources. For additional slides, see here. See the end of the post for literature!
Lecture 2 (video, slides[26-74]) – Introduction to the mathematics of change: Logistic Map, Return Plot, Attractors. [The beginning of the lecture was cut due to camera problems; please find a great introduction to the logistic map here.]
Lecture 3 (video, slides) – Basic Time Series Analysis: Autocorrelation Function, Sample Entropy, Relative Roughness.
Lecture 4 (video, slides [34 onwards, see also this, this and this]) – Detecting (nonlinear) structure in time series: Fractal Dimension, Detrended Fluctuation Analysis, Standardised Dispersion Analysis.
Lecture 5 (video, slides[1-16]) – Quantifying temporal patterns in unordered categorical time series data: Categorical Auto-Recurrence Quantification Analysis (RQA).
Lecture 6 (video, slides[17-52]) – Quantifying temporal patterns in continuous time series data: Continuous Auto-Recurrence Quantification Analysis, Phase-space reconstruction.
Lecture 7 (video, slides [52-70]) – Recurrence Quantification Analysis in practice: Data preparation for RQA, “General recipe” (i.e. RQA workflow), lagged/windowed RQA, RQA in detecting cognitive phase transitions, RQA in neural imaging.
Lecture 9 (video, slides) – Multivariate Time Series Analysis – Dynamic Complexity & Phase Transitions in Psychology: Self-ratings as a research tool, the importance of sampling frequency, dynamic complexity as an early warning signal in psychopathology.
Lecture 10 (video, slides [1-37]) – Introduction to graph theory, with applications of network science: Complex networks, hyperset theory, network-based complexity measures, small-world networks.
Lecture 11 (video, slides [38-80]) – Multiplex recurrence networks for non-linear multivariate time series analysis: Recurrence networks, change profiles of ecological momentary assessments as an alternative to raw scores. Also see this paper!
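Since the recorded beginning of Lecture 2 was cut, here is a minimal sketch of the logistic map it introduces (parameter values chosen purely for illustration):

```python
def logistic_map(r: float, x: float) -> float:
    """One step of the logistic map: x_{n+1} = r * x_n * (1 - x_n)."""
    return r * x * (1 - x)

# For r = 2.5, trajectories settle on the fixed-point attractor 1 - 1/r = 0.6:
x = 0.2
for _ in range(100):
    x = logistic_map(2.5, x)
print(round(x, 6))  # 0.6

# For r = 4.0 the dynamics are chaotic: two trajectories starting a
# hair's breadth apart decorrelate completely (sensitive dependence
# on initial conditions), which is what a return plot makes visible.
a, b = 0.2, 0.2 + 1e-7
max_gap = 0.0
for _ in range(50):
    a, b = logistic_map(4.0, a), logistic_map(4.0, b)
    max_gap = max(max_gap, abs(a - b))
```

Plotting x against r as r sweeps from 2.5 to 4.0 reproduces the classic bifurcation diagram covered in the lecture slides.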
Three recent papers directly related to the course’s topics:
Hasselman, F., & Bosman, A. M. T. (2020). Studying Complex Adaptive Systems with Internal States: A Recurrence Network Approach to the Analysis of Multivariate Time Series Data Representing Self-Reports of Human Experience. Frontiers in Applied Mathematics and Statistics, 6. https://doi.org/10.3389/fams.2020.00009
Heino, M. T. J., Knittle, K. P., Noone, C., Hasselman, F., & Hankonen, N. (2020). Studying behaviour change mechanisms under complexity [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/fxgw4
Olthof, M., Hasselman, F., & Lichtwarck-Aschoff, A. (2020). Complexity In Psychological Self-Ratings: Implications for research and practice [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/fbta8
More resources on complexity:
Mathews, K. M., White, M. C., & Long, R. G. (1999). Why Study the Complexity Sciences in the Social Sciences? Human Relations, 52(4), 439–462. https://doi.org/10.1023/A:1016957424329 [INTRO COMPLEXITY SCIENCE]
Richardson, M. J., Kallen, R. W., & Eiler, B. A. (2017). Interaction-Dominant Dynamics, Timescale Enslavement, and the Emergence of Social Behavior. In Computational Social Psychology (pp. 121–142). New York: Routledge. [INTERACTION-DOMINANCE]
Molenaar, P. C., & Campbell, C. G. (2009). The new person-specific paradigm in psychology. Current directions in psychological science, 18(2), 112-117. [ERGODICITY]
Kello, C. T., Brown, G. D., Ferrer-i-Cancho, R., Holden, J. G., Linkenkaer-Hansen, K., Rhodes, T., & Van Orden, G. C. (2010). Scaling laws in cognitive sciences. Trends in cognitive sciences, 14(5), 223-232. [SCALING PHENOMENA]
Lewis, M. D. (2000). The promise of dynamic systems approaches for an integrated account of human development. Child development, 71(1), 36-43. [STATE SPACE, DYNAMICS]
Olthof, M., Hasselman, F., Strunk, G., van Rooij, M., Aas, B., Helmich, M. A., … Lichtwarck-Aschoff, A. (2019). Critical Fluctuations as an Early-Warning Signal for Sudden Gains and Losses in Patients Receiving Psychotherapy for Mood Disorders. Clinical Psychological Science, 2167702619865969. [DYNAMIC COMPLEXITY]
Olthof, M., Hasselman, F., Strunk, G., Aas, B., Schiepek, G., & Lichtwarck-Aschoff, A. (2019). Destabilization in self-ratings of the psychotherapeutic process is associated with better treatment outcome in patients with mood disorders. Psychotherapy Research, 0(0), 1–12. https://doi.org/10.1080/10503307.2019.1633484 [DYNAMIC COMPLEXITY]
Richardson, M., Dale, R., & Marsh, K. (2014). Complex dynamical systems in social and personality psychology: Theory, modeling and analysis. In Handbook of Research Methods in Social and Personality Psychology (pp. 251–280). [INTRO COMPLEXITY SCIENCE – Social and personality psychology]
Wallot, S., & Leonardi, G. (2018). Analyzing Multivariate Dynamics Using Cross-Recurrence Quantification Analysis (CRQA), Diagonal-Cross-Recurrence Profiles (DCRP), and Multidimensional Recurrence Quantification Analysis (MdRQA) – A Tutorial in R. Frontiers in Psychology, 9. https://doi.org/10.3389/fpsyg.2018.02232 [MULTIDIMENSIONAL RQA]
Webber Jr, C. L., & Zbilut, J. P. (2005). Recurrence quantification analysis of nonlinear dynamical systems. In Tutorials in contemporary nonlinear methods for the behavioral sciences (pp. 26–94). Retrieved from http://www.saistmp.com/publications/spiegorqa.pdf [RQA]
Marwan, N. (2011). How to avoid potential pitfalls in recurrence plot based data analysis. International Journal of Bifurcation and Chaos, 21(04), 1003–1017. https://doi.org/10.1142/S0218127411029008 [RQA parameter estimation]
Boeing, G. (2016). Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals, Self-Similarity and the Limits of Prediction. Systems, 4(4), 37. https://doi.org/10.3390/systems4040037 [LOGISTIC MAP, DETERMINISTIC CHAOS]
Kelty-Stephen, D. G., Palatinus, K., Saltzman, E., & Dixon, J. A. (2013). A Tutorial on Multifractality, Cascades, and Interactivity for Empirical Time Series in Ecological Science. Ecological Psychology, 25(1), 1–62. https://doi.org/10.1080/10407413.2013.753804 [MULTI-FRACTAL ANALYSIS]
Kelty-Stephen, D. G., & Wallot, S. (2017). Multifractality Versus (Mono-) Fractality as Evidence of Nonlinear Interactions Across Timescales: Disentangling the Belief in Nonlinearity From the Diagnosis of Nonlinearity in Empirical Data. Ecological Psychology, 29(4), 259–299. https://doi.org/10.1080/10407413.2017.1368355 [(MULTI-)FRACTAL ANALYSIS]
Rickles, D., Hawe, P., & Shiell, A. (2007). A simple guide to chaos and complexity. Journal of Epidemiology & Community Health, 61(11), 933–937. https://doi.org/10.1136/jech.2006.054254. [INTRO COMPLEXITY SCIENCE – Public health]
Pincus, D., Kiefer, A. W., & Beyer, J. I. (2018). Nonlinear dynamical systems and humanistic psychology. Journal of Humanistic Psychology, 58(3), 343–366. https://doi.org/10.1177/0022167817741784. [INTRO COMPLEXITY SCIENCE – Positive psychology]
Gomersall, T. (2018). Complex adaptive systems: A new approach for understanding health practices. Health Psychology Review, 0(ja), 1 – 34. https://doi.org/10.1080/17437199.2018.1488603. [INTRO COMPLEXITY SCIENCE – Health psychology]
Nowak, A., & Vallacher, R. R. (2019). Nonlinear societal change: The perspective of dynamical systems. British Journal of Social Psychology, 58(1), 105-128. https://doi.org/10.1111/bjso.12271. [INTRO COMPLEXITY SCIENCE – Societal change]
This post curates Finnish translations (mostly of NECSI guidelines) for stopping the coronavirus pandemic. On this page I have collected Finnish-language texts I consider good, translated by Thomas Brand unless otherwise noted. See also the interview, concerning the situation in Finland, with Yaneer Bar-Yam, a complexity scientist who has long studied pandemics.
In November 2019, a scholarship gave me the opportunity to attend a training session of Nassim Taleb's risk-management group in New York. There we discussed pandemic-like risks and how to act to avoid them. A few months later I found myself living a nightmare, as practically every Western country acted in complete violation of the precautionary principle (i.e. the threat of mass ruin must always be averted with aggressive measures), trusting “the best current knowledge” instead of heading off a risk that manifests with a delay.
Below are some good texts, most of them originally produced by the NECSI institute. NECSI has a long history of consulting for governments and organisations such as the WHO – for example in stamping out the Ebola and Zika epidemics, but also on other complex problems that traditional mathematical modelling cannot crack. You can join the global volunteer network around the coronavirus pandemic here; the work ranges from translations and social media activity to sewing masks, designing ventilators, and building websites and mobile apps!
In this post, I introduce fat-tailed distributions and the concept of the Shadow Mean, with implications to how seriously multiplicative events should be taken in the society. [Addendum: If you want a technical treatment of the proper Shadow Mean approach instead of my caricature, see this]
I keep being struck by how often well-meaning, educated people compare phenomena such as terrorism and epidemics to lifestyle diseases, as if the latter were “as dangerous or more so”. I even saw one of the smartest health psychologists I know commit this error in their professorial inauguration speech. Note that I'm not against preventing non-communicable diseases; in fact, that's what my dissertation is about. But we need to be vigilant about how risks work.
Here’s a chart from the aforementioned presentation, where you can clearly see that, all else equal, we should be diverting almost all our prevention resources to the biggest killers, which are lifestyle diseases:
The problem is that all else is not equal. Why?
It has to do with a concept called the “Shadow Mean” (capitalised for ominosity), which relates to “fat-tailed” distributions. I'll explain more later.
But let us first consider some properties of the Coronavirus pandemic, and how they differ from the common flu – and, by extension, to lifestyle diseases. To do so, I’ll give the floor to Luca Dellanna (Twitter, website), who kindly contributed his thoughts to this blog:
Luca Dellanna: Six unintuitive properties of the current pandemic
1/6: Asymmetry (part I)
“The cost of paranoia is bounded. The sooner we get paranoid, quicker we can get a handle on things, sooner we can confidently go back to business as usual. The cost of “letting it happen” is unbounded. Here is the tradeoff in the US: restrict international travel now and maintain our ability to move freely domestically, or keep the flows coming and inevitably have to restrict movement both internationally and domestically. The choice is clear.” – Joe Norman (link)
There is enough evidence that the pandemic is inevitable. The only question is how big and how fast we want it.
The costs of preventing the pandemic are mostly linear. Closing down schools today for one month costs roughly as much as closing them for one month in April. Closing down 3 schools costs roughly half as much as closing down 6 (assuming the same size).
Instead, the costs of letting the pandemic grow are nonlinear.
Letting the pandemic run today might mean 100 more people infected tomorrow. Letting the pandemic run next week might mean 1000 more people infected the following day.
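This asymmetry can be sketched numerically; the growth rate below (~26% per day, doubling roughly every three days) is a hypothetical illustration, not an epidemiological estimate:

```python
# Prevention costs scale linearly with time; uncontrolled epidemics
# grow multiplicatively.

def closure_cost(days: int, cost_per_day: float = 1.0) -> float:
    """Linear: closing schools twice as long costs about twice as much."""
    return days * cost_per_day

def cases_after(days: int, initial: float = 100, daily_growth: float = 1.26) -> float:
    """Multiplicative: ~26%/day growth doubles case counts every ~3 days."""
    return initial * daily_growth ** days

# Acting one week later adds ~23% to a month-long closure's cost...
print(closure_cost(37) / closure_cost(30))   # ≈ 1.23
# ...but lets the case count grow roughly fivefold in the meantime.
print(cases_after(7) / cases_after(0))       # ≈ 5.0
```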
And it gets worse (see the next point).
“In the US, we have 2.3 million people in prison. I cannot imagine a way to stop #coronavirus from spreading like wildfire among that population. How will federal, state, & local authorities handle this?” – Jon Stokes (link)
Another example of the non-linear consequences of the pandemic.
A pandemic that “knocks off” (i.e. prevents from working, for any reason) 0.1% of the workforce is bad, but not that bad.
A pandemic that knocks off 0.1% of the workforce in a clustered way is much worse: it means that some companies lose a large percentage of their workforce for a few days or weeks and must close their operations (whereas others are directly unaffected).
A pandemic that knocks off 0.2% of the workforce is ten times worse than a 0.1% pandemic – for there are fewer workers to cover for those who are sick, one company closing creates problems downstream in the supply chain, and so on.
The worst case is so bad that it makes sense to plan for it even if its chances of happening are low (and even that estimate rests on highly uncertain variables).
“The difference between the flu and the coronavirus is that between a tide and a tsunami. The same amount of water, but the impact is different because the tsunami arrives all at once.” – Roberto Burioni (link)
As I explained on Twitter, the problem is not (only) the current mortality, but the mortality we can get if our healthcare system gets overwhelmed. People won’t receive the care they need, even for conditions unrelated to the coronavirus.
“If a juggler can juggle 4 balls letting them drop 1% of time, then he can also juggle 10 balls letting them drop 1% of time” – this is how most people estimate mortality. As if healthcare were a fully elastic system.
4/6: Asymmetry (part II)
“Asymmetry. Convex decision. So long as there is no risk of harm from masks & disinfectants, the decision is wise, in spite of the absence of evidence.” – Nassim Nicholas Taleb (link)
Face masks do not offer full protection, but they do offer some protection. As long as you remove them carefully and they don’t make you sweat (so that you’re tempted to touch your face), they’re better than nothing.
Their cost is minimal and bounded, their benefit is large and unbounded (at least for you: they might save your life).
Of course, there is the argument that face masks are finite and they should be allocated where they’re the most needed. It’s a valid argument. But let’s focus on the asymmetry of the cost-benefit, because it applies to another method as well: washing hands and disinfecting.
Their cost is extremely low. I'm baffled that so few people do this first thing when they arrive home.
Don’t be penny-wise but pound-foolish with your time.
“True epidemic in Iran and South Korea, community spread in Italy, confirmed transmission from Iran to multiple countries, the US basically isn’t testing anybody… and as far as I can tell it’s gauche even to mention [the virus] in public in the United States.” – @toad_spotted (link)
If a country doesn’t like to talk about a problem, it will have to talk about that problem.
“Problems grow the size they need for you to acknowledge them. The virus is already here, it's just not evenly detected.” – Balajis Srinivasan (link)
“I just realized that when people say ‘yeah but you won’t die’ they mean ‘yeah you’ll become a carrier and make everyone you see sick but not die’.” – Paul McKellar (link)
There are many replies to “the coronavirus is not that deadly”.
“15% mortality among older people (80+ years old) almost amounts to Russian roulette if they get infected.”
One's chances of dying depend on the number of infected people one meets day-to-day (the more one meets, the higher the chance of catching the virus).
We don't know! There are many reasons we cannot pinpoint the mortality of the virus in a way that predicts the future. We should assume the worst scenarios until we can rule them out. (Why? Because of asymmetry and nonlinearities; the content of points #1 and #4 above.)
[Luca’s newsletter is pretty much the only one I’ve ever found positively thought-provoking; if you want to hear more of his ideas, subscribe here]
Horizontally challenged tails
What does this have to do with lifestyle diseases? Well, while the incidence of the common flu is quite unlikely to quadruple from one year to the next, it is much, much less likely that the incidence of e.g. cardiovascular disease would do the same.
Let's look at an example. In the left plot below, you see what a mortality rate drawn from a fat-tailed distribution looks like. There are two years in which you have an extreme case – something psychologists are used to simply eliminating from the data. Note that outliers are different from extremes; an outlier may be a badly measured observation, whereas an extreme lies within the conceivable boundaries of the phenomenon.
The left plot could depict a viral epidemic. Say we are living in year 26: the mean observed annual mortality would be around 900, and you probably wouldn't be too worried; things are almost exclusively very calm. But, given the fat-tailed distribution, extreme values are possible, and upon surviving year 27 the mean would be almost 6000. Before the extreme is seen, this is known as the Shadow Mean: cases we can infer from the mechanics that produce the fat-tailed distribution, but which have not (yet) been observed empirically.
Contrast this with the right plot, which could depict deaths from accidents in a country like Finland. In 900 years, we still have not observed a year with over 2500 deaths (nb. this is just simulated data from a thin-tailed distribution). The mean is about 1000, and if we omit the maximum observation, it remains practically identical.
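The gap between the observed mean and the Shadow Mean can be illustrated with a toy simulation (the Pareto tail index and scale below are arbitrary choices, not fitted to the plots):

```python
import random

# Annual mortality drawn from a fat-tailed Pareto distribution.
# The analytical ("true") mean exists, but it is dominated by rare
# extremes, so a short, calm-looking history tends to understate it.
ALPHA, SCALE = 1.2, 150                   # tail index and scale (hypothetical)
true_mean = SCALE * ALPHA / (ALPHA - 1)   # analytical mean = 900

def observed_mean(years: int = 26) -> float:
    """Mean annual mortality over one simulated short history."""
    return sum(SCALE * random.paretovariate(ALPHA) for _ in range(years)) / years

random.seed(1)
histories = [observed_mean() for _ in range(1000)]
understate = sum(m < true_mean for m in histories) / len(histories)
print(understate)  # most 26-year histories report a mean below the true mean
```

The histories that do exceed the true mean are exactly the ones containing a "year 27"-style extreme; everyone else is, unknowingly, living off the Shadow Mean.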
N-th order matters
Time and second-order effects – that is, things that happen as an indirect consequence of an event – matter greatly when something extreme happens. Let us run a small scenario. Finland has 5½ million people. Suppose 25% get infected (with a maximum of, say, 50%), and 5% of those (max. 20%) require hospital care. This would already mean roughly 70 000 (max. 550 000) extra patients suddenly arriving in a healthcare system that has been “streamlined” for years. A very different scenario from having the same number of extra patients spread over a year or a decade – and one that lays fertile ground for second-order effects. These include harm to people who wouldn't have big problems in normal times, when hospital care capacity is readily available.
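The scenario's arithmetic, using the assumed rates from the text, checks out:

```python
# Back-of-the-envelope check of the scenario above.
POPULATION = 5_500_000  # Finland, roughly

def extra_patients(infected_share: float, hospitalised_share: float) -> float:
    """Extra hospital patients = population x infected share x share needing care."""
    return POPULATION * infected_share * hospitalised_share

print(round(extra_patients(0.25, 0.05)))  # 68750, i.e. ~70 000
print(round(extra_patients(0.50, 0.20)))  # 550000
```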
Finally: this is not fearmongering or a call for hysteria. Cold-headed, rational decision making calls for taking precautions here. If you stock up so that you can self-quarantine for 14 days in case you fall ill, and do it gradually by buying a little extra every time you go to the store anyway, you are making a good decision. Here's one more figure by Luca, illustrating the point:
This is the syllabus for my University of Helsinki course. The target audience is non-mathematical students in the social sciences. The 2019 class consisted of social psychologists, social workers, sociologists and political scientists, so it’s quite a mishmash of topics I considered of high importance in life, research and everything.
UPDATE: Some people have been asking about how to cite this; OSF page with DOI, which includes the materials, is here.
Critical Appraisal of Research Methods and Analysis (CARMA) – Evaluating and not getting fooled by data in scientific and practical research contexts
(the violence is real, though)
Description: Research claims in news, science, and business can mislead people, either purposefully or inadvertently. How and why does this happen, and what mistakes, misconceptions and pitfalls should one avoid when evaluating data? This course will help participants assess data-based statements, and offer some tools to avoid getting fooled by them. It is meant for students who aspire to future careers that involve undertaking, interpreting or commissioning research. This could include science in academic or other institutions, consumer/marketing research in business settings, or evidence-based decision making as policy makers or journalists, among others. The course does not require specialising in quantitative methods, although basic familiarity can be useful.
Note: a lot of slides contain “animation” that doesn’t work if you watch the presentation on a scrolling mode instead of having one full slide on the screen at a time. So, download or zoom in.
The crisis of confidence in social and life sciences: State of affairs (4 September 2019) – slides
Learning objectives: Become acquainted with the recent developments regarding the so-called “replication crisis”.
Replication crisis: how it all started (this time around).
Medicine, you were supposed to be the best of us!
Consequences of problematic practices.
You’re not alone in misinterpreting p-values.
From questionable research practices and biased stories, to better evidence and/or decisions (11 September 2019) – slides
Learning objectives: Understand what the research community is doing to improve the quality of published research. Extrapolate to non-academic settings.
Transparency and Openness Promotion (TOP) guidelines to fight bad science.
Transforming publication practices with pre-prints
Disentangling confirmatory and exploratory research.
Tricky rule-of-thumb questions to ask when being presented research (1/2: “null findings”).
Magnificent mistakes and where to find them (18 September 2019) – slides
Learning objectives: Recognise some particular pitfalls in evidential statements. Understand that decisions in the field do not need to rely on correct predictive statements, let alone scientific evidence.
Tricky rule-of-thumb questions to ask when being presented research (2/2: “statistically significant” findings).
Ways tests can fail: Type I/II mistakes. Type M and Type S mistakes.
The difference between evidence of absence and absence of evidence: Black Swans and the Turkey Problem.
When you don’t need to be right: green lumber, and a first taste of convexity.
Heuristics: Simple rules that make us smart.
On interpreting data nudes instead of summary tables (25 September 2019) – slides
Learning objectives: Understand the rationale for visualising data, and what can be hidden when reporting summary statistics only. Learn to spot some common tricks used to visualise data in a favourable way to the presenter.
A crude redux to evidence of absence.
Data Nudes vs. Shitty Tables.
The End of Average.
What gets lost in looking at numbers alone: Uncertainty hidden in the absence of distributions.
Demons with(in) axes: Slaying or summoning effects with presentation tricks.
Dose-response effects masked by averages.
Complex systems and why they ruin everything straightforward (2 October 2019) – slides
Learning objectives: Become familiar with general features of so-called complex systems. Understand how they can be thought of in the context of practical interventions.
Intro to complexity, and general features of complex systems.
Interaction vs. component dominant systems.
Don’t camp at 1st order effects in dragon season.
Navigating the Four Quadrants
Never cross Heraclitus’ river, if it’s on average 1 meter deep: Interventions and their offspring (9 October 2019) – slides
Learning objectives: Understand the rationale behind interventions and experimenting/intervening in complex systems, as well as some limitations of big data.
Change comes in a triad.
Sales tricks to counter, use and abuse.
Pathway thinking & complexity thinking in behaviour change science.
Failures and unexpected effects of social interventions.
When is it safe(r) to intervene?
Dynamic/idiographic phenomena, and hidden assumptions (16 October 2019) – slides
Learning objectives: Describe the concepts of ergodicity and stationarity. Understand how they can mislead when not taken into account when e.g. assessing risks.
Assumptions, schmassumptions; mind your foundations!
Damned world not sitting still: Ergodicity & stationarity
The idiographic approach to science
The best map fallacy
The precautionary principle for policy and interventions
Frequency vs. consequences of being wrong: What matters more?
Recap on the course: The Fourth Quadrant will find you, so better put your house in order
Student evaluations, comments, and feedback
Some students provided spontaneous feedback, and I gave everyone an opportunity to give evaluations. The answers are reproduced comprehensively, i.e. there is no publication bias or selective reporting here!
Great course!! Even if statistics are not exactly your thing, this course will give you a lot of useful information and a better look at the research field. I feel that I benefited a lot from this course. The teaching was great and got me interested in things that hadn’t interested me before.
Thank you Matti for this exciting and engaging course! I enjoyed the substantially ambitious and well-prepared lectures. Even though I’m focusing on qualitative methodology in my own work, I found this course important and highly interesting.
Valter, a Social Work major
Can highly recommend the course. It shows that the teacher knows what he is talking about and is interested in the topics presented. The course can be a bit difficult, but it’s taught in a fun way with concrete examples. Definitely not a boring course. The teacher is not boring either.
A great course, I learned a lot. After the course I find two learning outcomes especially important; learning to better evaluate research, but especially learning to treat the academy as an institution.
The course had A LOT of stuff, and it was sometimes a bit difficult to follow and keep up with the connections between topics. With some improvement to the structure, creating clear bridges from one topic to another, this course will be even more beneficial.
This course is an eye opener, it makes you have a different but more clear understanding of research particularly and the world in general. The teaching style was excellent and the content was practical. Personally, I found it easy to relate to my field of study and I’m sure anyone else would find it very practical too, regardless of their research being qualitative or quantitative.
Selestino, Public Policy major
Highly-stimulating overview of a range of interrelated complex topics. Presented in an engaging manner and involving multiple interdisciplinary perspectives, this course can change how you think.
Antti, social psychology major
The course shows and discusses many issues of contemporary quantitative research methods and provides tools and tips on how to become a better researcher. It is not a course critical of quantitative research methods though, so don’t think of taking the course as an excuse for not learning the methods!
I would recommend the course for first-year master’s students who have some prior knowledge of quantitative research methods. You don’t have to know how to use them, though, as there are no quantitative exercises in the course.
In summary, a great remedy for any traumas you might have from trying to learn quantitative research methods. The course itself doesn’t heal the wounds, though, as those skills are not taught, but the lecturer does provide great sources where you can hone those skills on your own time. Hopefully, a second course where those skills are exercised will soon follow.
Aku, a 6th year social psychology student
The course was well designed, and the teacher’s enthusiasm and expertise motivated me to do my best. Although, I was surprised to notice that the evaluation of the course was based on a pass/fail scale. After investing a lot of time and effort in doing the assignments, it would have been instructive to know on what scale I performed. Nonetheless, I learned a lot in this course and it opened new perspectives which I can utilize in my master’s thesis.
Henna, a sociology major
Big thanks for the course! It was a very interesting and fun set even though it included a lot of new things to be learned at a fast pace. You are very skilled at explaining things very clearly and in an entertaining way by using (for some reason often fatal :D) examples. Not that entertainment is the most essential aspect of a course but at least it helps to concentrate and remember the content. The course had a good balance of lecturing and group discussions, albeit it wasn’t always easy to come up with discussion points since there was so much to take in. Still, it was nice to hear what materials others had been reading or what they remembered from the lectures. You were also very good at taking and answering questions in many different ways to ensure everyone understood the underlying point, and I never felt that I could not ask about something I did not understand, no matter how “simple” the question.
CARMA for the win!
Social Psychology Master’s Student
I would recommend this course for every student because it gives you many new viewpoints concerning validity of scientific methods.
Thank you for the lecture course, Matti. Your passion for these topics really shows with the enthusiasm you presented the numerous examples in class, with the blog and tweets and with the breathtaking slideshows sometimes consisting of 100 slides or more. I appreciate you bringing up the importance of open science and “hacks”, with which it is possible to take the other direction with science. And honestly, without all of the examples with which you tied the topics to real life, I probably wouldn’t have had the slightest idea what this course was about. The in-class discussions didn’t work that well, and I think that was because it was hard to tie our thoughts together (and present them in class) because everyone had done assignments on different topics. Discussing itself was alright, though. I liked that the at-home assignments balanced the theory-heavy lectures, where we could think of the topics more concretely, if we wanted to. All in all, I think this was a rather “easy” course to complete, but I like that, since studying is done for our own sake and for our education, not for teachers. Like critical thinking. And as I stated in the last assignment, during this course I learned that before, I wasn’t at all as critical as I thought I was. So, thanks for that!
A 4th year student
Thank you for an excellently organised course! Your effort in the implementation and enthusiasm toward the subject, as well as goals aiming to expand students’ understanding, were very visible during the course. This motivated me to do the intensive work required to internalise difficult topics.
Henna, a sociology major
Thank you for this course, I really liked it! I feel that I now have a deeper understanding of research methodology and am able to do more critical judgments than before. I also wish there would be a second Carma course.
These are slides of a talk given at the Aalto University Complex Systems seminar. It contrasts two views of changing behaviour: the pathway view and the complexity view, the latter being in its infancy. It presents some Secret Analysis Arts of Recurrence, which Fred Hasselman doesn’t want you to know about, and includes links to resources. If someone perchance saw my mini-moocs (1, 2) and happened to find them useful, drop me a line and I’ll make one of this, too.
Lifestyle factors are hugely relevant in preventing disease in modern societies; unfortunately, people often fail in their attempts to change health behaviour – both their own and that of others. In recent years, behaviour change design has been conceived as a process where one identifies deficiencies in factors influencing the behaviours (commonly called “determinants”). Complexity thinking suggests putting emphasis on de-stabilisation instead.
The perspective taken here is mostly at the idiographic level. At the time of writing, we have behaviour change methods to affect e.g. skills, perceived social norms, attitudes and so forth – but very little on general de-stabilisation of the motivational system as an important predictor of change.
Perspectives are welcome!
ps. Those of you who worry about brainwashing and freedom of thought: Chill. Stuff that powerful doesn’t really exist, and if it did, marketers would know about it and probably rule the world. [No, they don’t rule the world, I’ve been there]
pps. Forgot to put it in the slides, but this guy Merlijn Olthof will perhaps one day tweet about his work on destabilisation in psychotherapy contexts. Meanwhile, you can e.g. be his 10th Twitter follower, or keep checking his Google Scholar profile, as there’s a new piece coming out soon!
This post summarises what I wanted to say with a recent paper published in Health Psychology and Behavioural Medicine, which includes an RMarkdown website supplement with code. A related slideshow and a video walkthrough are available here. Note: if it’s not obvious, these are my opinions as the first author, and may or may not be shared by collaborators, who are nice people and surely wouldn’t use such foul language in public.
Some Problems in Summarising and Presenting Data
Many research reports include lots of variables, presented in tables comparing two or more groups, say an intervention and a control, or males and females. Readers often look at the means and standard deviations, looking for statistically significant differences between the two. What’s the problem?
1. It’s often not clear what significance even means, or whether some correction for multiple testing has been applied.
First of all, following the logic of Neyman-Pearson hypothesis testing, to keep the error rate under the alpha level one would have to correct for multiple testing, and it is unclear how many tests one should correct for when hypotheses are not pre-specified. If this is ignored – especially where it is unclear how to heed the recommendation to justify one’s alpha level – error rates can become surprisingly high, much higher than the conventionally assumed 5%.
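To see how fast the error rate grows, here is the standard familywise error calculation for independent tests – a textbook formula, not specific to any one study:

```python
alpha = 0.05  # nominal error rate of a single test

# Probability of at least one false positive among m independent true-null tests
for m in (1, 5, 20, 60):
    fwer = 1 - (1 - alpha) ** m
    print(f"{m:>2} tests: P(at least one false positive) = {fwer:.0%}")
```

With 20 uncorrected tests the familywise rate is already around 64% – and if the number of hypotheses was never pre-specified, you cannot even say what m to plug in.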
2. In the absence of randomisation, increased sample size leads to detecting more and more tiny differences.
When there has been no randomisation (as in the case of genders, or baseline cohort descriptions), the null hypothesis of zero difference is never true, and its rejection only depends on statistical power. We are pretty much never interested in whether the populations differ by some arbitrarily small amount on any of the presented variables. What usually matters is whether this difference is large enough to make a difference – that is, how big the effect size is. Two caveats follow: Firstly, in behavioural field trials your participants are rarely independent of each other, but come clustered in e.g. classrooms (students), hospitals (patients) or offices (9-to-5 mental patients). Secondly, you almost always need to randomise clusters instead of individuals (here‘s why), which gives statistical power a huge ass-whooping.
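The point about sample size can be sketched with the usual two-sample z statistic; the effect and sample sizes below are illustrative numbers of my own choosing:

```python
import math

d = 0.02  # a substantively trivial true difference of 0.02 standard deviations

def z_stat(n_per_group):
    # Two-sample z statistic for a standardised mean difference of d
    return d * math.sqrt(n_per_group / 2)

for n in (100, 10_000, 1_000_000):
    z = z_stat(n)
    verdict = "significant" if z > 1.96 else "not significant"
    print(f"n = {n:>9,} per group: z = {z:5.2f} ({verdict} at p < .05)")
```

The difference is equally meaningless at every sample size; only the stamp of “significance” changes.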
Not accounting for the multilevel structure of the data when calculating effect sizes makes standard errors too small, possibly even making zero effects appear as medium-sized ones. But it is not a trivial task to derive trustworthy effect sizes for nested data (Lai & Kwok 2016). Although some solutions exist, they have not yet been empirically validated for finite populations at the second or third levels, nor is there currently a straightforward software implementation available – to my knowledge, that is. Therefore, a sensible option may be to present the means with their corresponding confidence intervals, encouraging readers to refrain from merely treating non-overlapping intervals between groups as dichotomous hypothesis tests. In Shitty Table 1 you can see how this is done. Does that seem clear to you? Don’t worry, there are alternatives!
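A quick simulation shows what this looks like in practice. Here I randomise whole clusters but analyse the individuals as if they were independent, with no true effect at all; the cluster counts and variances are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, per_cluster = 10, 30  # e.g. 10 classrooms of 30 students
n_sims, false_pos = 500, 0

for _ in range(n_sims):
    # Zero true treatment effect, but clusters differ from one another (sd = 1)
    cluster_means = rng.normal(0, 1, n_clusters)
    data = cluster_means[:, None] + rng.normal(0, 1, (n_clusters, per_cluster))
    treat, ctrl = data[:5].ravel(), data[5:].ravel()  # randomise 5 clusters per arm
    # ...then naively treat the 150 students per arm as independent observations
    se = np.sqrt(treat.var(ddof=1) / treat.size + ctrl.var(ddof=1) / ctrl.size)
    false_pos += abs(treat.mean() - ctrl.mean()) / se > 1.96

print(f"naive false positive rate: {false_pos / n_sims:.0%} (nominal: 5%)")
```

With clustering this strong, the naive analysis rejects a true null far more often than the nominal 5%.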
3. The shape of the distribution may matter much, much more than simple arithmetic mean.
The difference between two means is fun and neat, but only informative for approximately normal or symmetric distributions, which are not the norm in social and life sciences (see the reading list at the end). But hey, surely everyone reports things like skewness and kurtosis? [Of course they don’t, and even if they did, only a minority of social scientists could actually interpret the numbers.] Look at Shitty Table 2 to see for yourself whether you consider this a good way to convey information.
An aside as regards the means: Few individual participants are described by the group-level summary statistics. In fact, using Daniels’ definition of an ‘approximately average individual’ as falling in the middle 30% of the range of values, only 1.50% of participants can be considered ‘average’ on all of the primary outcome measures (see supplementary website, section https://git.io/fpOy1). Also see this and this blog post, as well as the papers listed in the end.
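The shrinking share of “average” people is easy to reproduce with simulated data. This is a generic sketch, not the trial’s variables: I use the middle 30% band of each measure’s distribution rather than Daniels’ range-based definition, and assume independent normal measures.

```python
import numpy as np

rng = np.random.default_rng(7)
n_people, n_measures = 10_000, 7

data = rng.normal(size=(n_people, n_measures))
lo, hi = np.percentile(data, [35, 65], axis=0)  # middle-30% band per measure
average_on_all = np.all((data >= lo) & (data <= hi), axis=1).mean()

print(f"'average' on every one of {n_measures} measures: {average_on_all:.2%}")
# With independent measures this is about 0.30 ** 7, i.e. a small fraction of a percent
```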
Data Wants to be Seen Naked
In our paper, we present some ways behaviour change researchers could visualise their data, discuss some limitations and provide links to R code. Many, many other dedicated sources do this better, so feel free to check out this or this, for example. A principle I particularly like is to, whenever possible, include the raw data in the visualisation. This is because in abstractions, I personally have a hard time keeping in mind that I’m dealing with individuals operating in the world (complex dynamic systems in complex dynamic systems), and the raw data tends to ground me to some reality.
Data-visualisation and data exploration techniques (e.g. network analysis) can help reveal the dynamics involved in complex multi-causal systems – a challenging task with Shitty Tables. Data visualisations are crucial supplements to large numerical tables of descriptive statistics. With visualisations, researchers can communicate large amounts of information – including the associated uncertainty – in an accessible format, without requiring extensive mathematical expertise from the reader. This is important for researchers who intend to build on previous results, and in the paper we argue that such practices may also reduce problems that have led to the recent loss of confidence in the reproducibility and replicability of research findings in social and life sciences. Fully open data sharing would be ideal, but this is not always possible due to privacy concerns and, at the time of writing, remains a lamentably rare practice. In addition, open data does not necessarily accommodate stakeholders with low technical expertise in data analysis and visualisation, such as clinicians, patients and policy makers.
The benefits of presenting complex data visually should encourage researchers to publish extensive analyses and descriptions as website supplements, which would increase the speed and quality of scientific communication, as well as help to address the crisis of reduced confidence in research findings.
Looking closely at Pretty Picture 2, you can observe that boys did more moderate-to-vigorous physical activity (the x-axis shows average daily hours) in every educational track. In spite of this, girls appeared more active when the educational tracks (shown as rows in the figure) were combined, because there are many more people in the practical nurse track, and those people are mostly girls. This is known as Simpson’s paradox, and it is best investigated by visualising data.
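Simpson’s paradox is worth seeing with numbers at least once. These counts and means are made up, not the trial’s data, but they reproduce the pattern: boys ahead within every track, girls ahead in the pooled comparison.

```python
# (count, mean daily MVPA hours) per track and gender -- hypothetical numbers
tracks = {
    "practical nurse": {"boys": (50, 0.80), "girls": (400, 0.75)},
    "technical":       {"boys": (300, 0.40), "girls": (30, 0.35)},
}

def pooled_mean(gender):
    groups = [tracks[t][gender] for t in tracks]
    return sum(n * m for n, m in groups) / sum(n for n, _ in groups)

for t, g in tracks.items():
    print(f"{t}: boys {g['boys'][1]:.2f} > girls {g['girls'][1]:.2f}")
print(f"pooled: boys {pooled_mean('boys'):.2f} < girls {pooled_mean('girls'):.2f}")
```

The girl-dominated, high-activity nurse track drags the pooled female mean above the pooled male mean, even though boys lead within each track.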
Conventional approaches would have e.g. left the reader with an impression that the means of the multimodal or skewed variables (see Pretty Picture 1) are interpretable as central tendencies, and that the sample is homogenous (see Pretty Picture 2). Transparent and accessible sharing of data characteristics, analyses and analytical choices is imperative for increasing confidence in research findings; if nothing else, the elaborate supplements can act as a platform to present robustness tests and assumption explorations in.
The paper described in this post:
Heino, M. T. J., Knittle, K., Fried, E., Sund, R., Haukkala, A., Borodulin, K., … Hankonen, N. (2019). Visualisation and network analysis of physical activity and its determinants: Demonstrating opportunities in analysing baseline associations in the let’s move it trial. Health Psychology and Behavioral Medicine, 7(1), 269–289. https://doi.org/10.1080/21642850.2019.1646136
Tay, L., Parrigon, S., Huang, Q., & LeBreton, J. M. (2016). Graphical descriptives a way to improve data transparency and methodological rigor in psychology. Perspectives on Psychological Science, 11(5), 692–701.
On hypothesis testing for non-prespecified comparisons:
de Groot, A. D. (2014). The meaning of “significance” for different types of research [translated and annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han L. J. van der Maas]. Acta Psychologica, 148, 188–194.
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 201708274.
On effect sizes for cluster randomised situations:
Lai, M. H. C., & Kwok, O.-M. (2016). Estimating standardized effect sizes for two- and three-level partially nested data. Multivariate Behavioral Research, 51, 740–756.
Lai, M. H. C., Kwok, O.-M., Hsiao, Y.-Y., & Cao, Q. (2018). Finite population correction for two-level hierarchical linear models. Psychological Methods, 23, 94.
On distributional shapes:
Choi, S. W. (2016). Life is lognormal! What to do when your data does not follow a normal distribution. Anaesthesia, 71(11), 1363-1366.
Saxon, E. (2015). Beyond bar charts. BMC Biology, 13(1), 60. doi: 10.1186/s12915-015-0169-6
Taleb, N. N. (2007). Black swans and the domains of statistics. The American Statistician, 61(3), 198-200.
van Rooij, M. M., Nash, B., Rajaraman, S., & Holden, J. G. (2013). A fractal approach to dynamic inference and distribution analysis. Frontiers in physiology, 4, 1.
Weissgerber, T. L., Garovic, V. D., Savic, M., Winham, S. J., & Milic, N. M. (2016). From static to interactive: Transforming data visualization to improve transparency. PLOS Biology, 14(6), e1002484. doi: 10.1371/journal.pbio.1002484
Weissgerber, T. L., Milic, N. M., Winham, S. J., & Garovic, V. D.(2015). Beyond bar and line graphs: time for a new data presentation paradigm. PLOS Biology, 13(4), e1002128. doi: 10.1371/journal.pbio.1002128
Daniels, G. S. (1952). The “average man”? Wright-Patterson Air Force Base, OH: Air Force Aerospace Medical Research Lab.
Rose, T. (2016). The end of average: How to succeed in a world that values sameness. Penguin UK.
Rousselet, G. A., Pernet, C. R., & Wilcox, R. R. (2017). Beyond differences in means: Robust graphical methods to compare two groups in neuroscience. European Journal of Neuroscience, 46(2), 1738–1748. doi: 10.1111/ejn.13610
Trafimow, D., Wang, T., & Wang, C. (2018). Means and standard deviations, or locations and scales? That is the question!. New Ideas in Psychology, 50, 34–37. doi: 10.1016/j.newideapsych.2018.03.001
In this post, I vent about anti-interdisciplinarity, introduce some basic perspectives from complexity science, and wonder whether decisions on experimental design can actually leave us in a worse place than where we started, before we decided to use experimental evidence to inform social policy.
People in our research group recently organised a symposium, Interdisciplinary perspectives on evaluating societal interventions to change behaviour (talks watchable here), as part of a series called Behaviour Change Science & Policy (BeSP). The idea is to bring together people from various fields from philosophy to behavioural sciences, medicine and beyond, in order to better tackle problems such as climate change and lifestyle diseases.
One presentation touched upon Finland’s randomised controlled trial to test the effects of basic income on employment (see also the report on first-year results). In crude summary, they did not detect effects of free money on finding employment. (Disclaimer: they had aimed for 80% statistical power, meaning that even if all your assumptions regarding the size of the effect are correct, in the long run you would get no statistically significant result 20% of the time, in spite of there being a real effect.)
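That disclaimer is just the definition of statistical power. For a two-sample z test it can be computed in closed form; the effect size and sample size below are hypothetical, not the trial’s:

```python
import math

def power(d, n_per_group, crit_z=1.96):
    """Power of a two-sample z test for a standardised mean difference d."""
    z = d * math.sqrt(n_per_group / 2) - crit_z
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# e.g. a hypothetical effect of d = 0.28 SD with 200 people per group
# gives roughly 80% power: a real effect of that size is missed 20% of the time
print(f"power ≈ {power(0.28, 200):.0%}")
```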
During post-symposium drinks, I spoke with an economist about the trial. I was wondering how come they used individual instead of cluster randomisation – randomising neighbourhoods, for example. The answer was resource constraints; much larger sample sizes are needed for the statistics to work. To me it seemed clear that it’s a very different situation if one person in a network of friends gets free money, compared to if everyone does. The economist wondered: “How come there could be second-order effects when there were no first-order effects?” The conversation took a weird turn. Paraphrasing:
Me: Blahblah compelling evidence from engineering and social sciences to math and physics that “more is different”, i.e. phenomena play out differently depending on the scale at consideration… blahblah micro-level interactions create emergent macro-level patterns blahblah.
Economist: Yeah, we’re not having that conversation in our field.
Me: Oh, what do you mean?
Economist: Well, those are not things discussed in our top journals, or considered interesting subjects to research.
Me: I think they have huge consequences, and specifically in economics, this guy in Oxford just gave a presentation on what he called “Complexity economics“. He had been doing it for some decades already, I think he originally had a physics background…
Economist: No thanks, no physicists in my economics.
Economist: [exits the conversation]
Now, wasn’t that fun for a symposium on interdisciplinary perspectives.
I have a lot of respect for the mathematical prowess of economists and econometricians, don’t get me wrong. One of my favourites is Scott E. Page, though I only know him through an excellent course on complexity (also available as an audio book). I probably like him because he breaks out of the monodisciplinary, insulationist mindset economists are often accused of. Page’s view of complexity actually relates to our conversation. Let’s see how.
First off, he describes complexity (and most social phenomena of interest) as arising from four factors, which can be thought of as tuning knobs or dials. Complexity arises when each dial is tuned not to either of the extremes – where equilibria arise – but somewhere in the middle. And complex systems tend to reside far from equilibrium, permanently.
To dig more deeply into how the attributes of interdependence, connectedness, diversity, and adaptation and learning generate complexity, we can imagine that each of these attributes is a dial that can be turned from 0 (lowest) to 10 (highest).
Interdependence means the extent to which one person’s actions affect those of others. This dial ranges from complete independence, where one person’s actions do not affect others’ at all, to complete dependence, where everyone observes and tries to perfectly match all others’ actions. In real life, we see both unexpected cascades (such as US decision makers’ ethanol regulations contributing to the Arab Spring), as well as some, but never complete, independence – that is, manifestations that do not fit either extreme of the dial, but lie somewhere in between.
Connectedness refers to how many other people a person is connected to. The extremes range from people living all alone in a cabin in the woods, to hypersocial youth living on Instagram, trying to keep tabs on everyone and everything. The vast majority of people lie somewhere in between.
Diversity is the presence of qualitatively different types of actors: If every person is a software engineer, mankind is obviously doomed… But the same happens if there’s only one engineer, one farmer etc. Different samples of real-world social systems (e.g. counties) consist of intermediate amounts of diversity, lying somewhere in between.
Adaptation and learning refer to the extent of the actors’ smartness. This ranges from following simple, unchanging rules, to being perfectly rational and informed, as assumed in classical economics. In actual decision making, we see “bounded rationality”, reliance on rules of thumb and tradition, as well as both optimising and satisficing behaviours – the “somewhere in between”.
The complexity of complex systems arises, when diverse, connected people interact on the micro-level, and by doing so produce “emergent” macro-level states of the world, to which they adapt, creating new unexpected states of the world.
You might want to read that one again.
Back to basic income: when we pick 2000 random individuals around the country and give them free money, we’re implicitly assuming they are not connected to any other people, and/or that they are completely independent of the actions of others. We’re also assuming that they are all the same, or that it’s not interesting that they are of different types. And so forth. If we later compare their employment data to that of those who were not given basic income, the result we get is an estimate of the causal effect in the population – if all the assumptions hold.
But consider how these assumptions may fail. If the free money was perceived as a permanent thing, and given to people’s whole network of unemployed buddies, it seems quite plausible that they would adapt their behaviour in response to the changing dynamics of their social network. This might even differ between cliques: some people might use the safety net of basic income to collectively found companies and take risks, while others might alter their daily drinking behaviour to match the costs with the predictable income – for better or worse. But when you randomise individually and ignore how people cluster in networks, you’re studying a different thing. Whether it’s an interesting thing or a silly thing is another issue.
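Here is a minimal sketch of that “different thing”. All numbers are invented for illustration: suppose free money has no direct effect at all, and the entire effect comes through how much of one’s friend network is treated.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_friends = 2_000, 5
direct, spillover = 0.0, 1.0  # invented: zero direct effect, all effect via friends

friends = rng.integers(0, n, size=(n, n_friends))  # a random "friend" network

def outcomes(treated):
    # Outcome depends on own treatment and on the fraction of treated friends
    frac_treated_friends = treated[friends].mean(axis=1)
    return direct * treated + spillover * frac_treated_friends + rng.normal(0, 0.1, n)

# Individual randomisation: treated and controls share the same mixed network,
# so both groups have about half of their friends treated
t = (rng.random(n) < 0.5).astype(float)
y = outcomes(t)
est_individual = y[t == 1].mean() - y[t == 0].mean()

# The policy contrast of interest: everyone treated vs no one treated
est_policy = outcomes(np.ones(n)).mean() - outcomes(np.zeros(n)).mean()

print(f"individually randomised estimate: {est_individual:.2f}")  # near zero
print(f"all-vs-none policy effect:        {est_policy:.2f}")      # near one
```

Under these made-up assumptions, the individually randomised comparison captures none of the policy effect – not because the analysis is wrong, but because it answers a different question.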
Now, it’s easy to come up with these kinds of assumption-destroying scenarios, but a whole different ordeal to study them empirically. We need to simplify reality in order to deal with it. The question is this: How much of an abstraction can a map (i.e. a model in a research study, making those simplified assumptions) be, and still represent reality adequately? This is also an ontological question, because if you take the complexity perspective seriously, you say bye-bye to the kind of thinking that allows you to dream up predictable effects a button-press (such as a policy change) has on the state of a system. People who act in – or try to steer – complex systems control almost nothing but influence almost everything.
An actor in a complex system controls almost nothing but influences almost everything.
Is some information, some model, still better than none? Maybe, maybe not. In Helsinki, you’re better off without a map than with a map of Stockholm – the so-called “best map fallacy” (explained here in detail). Rare, highly influential events drive the behaviour of complex systems: the Finnish economy was not electrified by average companies starting to sell more, but by Nokia hitting the jackpot. And these events are very hard, if not impossible, to predict✱.
Ok, back to basic income again. I must say that the people who devised the experiment were not idiots, and included e.g. interviews with participants to get some idea of unexpected effects. I think this type of approach is definitely necessary when dealing with complexity, and all social interventions should include qualitative data in their evaluation. But, again, unless the unemployed never interact, randomising individually studies a different thing than randomising in clusters. I do wonder whether it would have been possible to include some matched clusters, to see whether qualitatively different dynamics take place when you give basic income to a whole area instead of to randomly picked individuals within it.
But, to wrap up this flow of thought, I’m curious whether you think it is possible to randomise a social intervention individually AND always keep in mind that the conclusions are only valid if there are no interactions between people’s behaviour and that of their neighbours. Or is it inevitable that the human mind smooths out the details?
Importantly: Is our map better now than it was before? Will this particular experiment go down in history as showing that basic income has no effect on job seeking, as the economist’s “there were no first-order effects” suggests? (Remember, the aim was only 80% statistical power.) Lastly, I want to say I consider it unforgivable to work within only one discipline and disregard the larger world you’re operating in: when we bring science to policy making, we must be doubly cautious about the assumptions our conclusions stand on. Luckily, transparent scientific methodology allows us to be explicit about them.
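On the 80% power point: power is the probability of detecting a true effect of a given size, so even a perfectly run trial at 80% power misses a real effect one time in five. The numbers below are invented for illustration (a normal-approximation two-sample test on an effect expressed in standard-deviation units) and have nothing to do with the experiment’s actual design:

```python
from math import sqrt, erf

def normal_cdf(x):
    # CDF of the standard normal, via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(effect_sd, n_per_arm, z_crit=1.96):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05,
    for a true effect of `effect_sd` standard deviations."""
    se = sqrt(2.0 / n_per_arm)   # SE of the difference in means (sd = 1)
    z = effect_sd / se
    return normal_cdf(z - z_crit) + normal_cdf(-z - z_crit)

# Hypothetical numbers: a 0.1 SD effect with ~1570 people per arm gives
# roughly 80% power, i.e. a 20% chance of "no detectable effect" even
# when the effect is genuinely there.
print(round(power_two_sample(0.1, 1570), 2))
```

So “no first-order effects” at 80% power is not the same claim as “no effects”.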
Let me hear your thoughts, and especially objections, on Twitter, or by email!
Unpredictable things will happen, and they will make you either better or worse off.
The magnitude of an event is different from its effect on you: there are huge events that don’t impact you at all, and small events that are highly meaningful to you. Often that impact depends on the interdependence and connectedness dials.
To an extent, you can control the impact an event has on you.
You want to control exposure in such a way that surprise losses are bounded, while surprise gains are as limitless as possible.
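These last notes describe a convex exposure: cap the downside, leave the upside open. A toy sketch (all distributions and numbers are invented assumptions) of why that asymmetry pays off when surprises are fat-tailed:

```python
import random

random.seed(1)

def exposure(shock, max_loss=1.0):
    """Convex exposure: losses capped at max_loss, gains uncapped."""
    return max(shock, -max_loss)

# Invented fat-tailed world: mostly small shocks, but 2% of the time a
# surprise ten times larger, in either direction.
shocks = [random.gauss(0, 1) * (10 if random.random() < 0.02 else 1)
          for _ in range(100_000)]

raw = sum(shocks) / len(shocks)                           # symmetric exposure
capped = sum(exposure(s) for s in shocks) / len(shocks)   # losses bounded

print(f"symmetric exposure, average payoff:    {raw:+.3f}")
print(f"bounded-loss exposure, average payoff: {capped:+.3f}")
```

Even though the shocks themselves average out to roughly zero, the bounded-loss exposure comes out ahead, because the rare huge surprises can only help it, never ruin it.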