Statistical tests for social science

These are slides from my lecture on significance testing, which took place in a course on research methods for social scientists. Some thoughts:

  • I tried to emphasise that this stuff is difficult, that people shouldn’t be afraid to say they don’t know, and that academics should try doing that more, too.
  • I tried to instill a deep memory that many uncertainties are involved in this endeavour, and that mistakes are ok as long as you report the choices you made transparently.
  • I added a small-group discussion exercise about two-thirds of the way through the lecture: What was the most difficult part to understand so far? I think this worked quite well, although “Is this what an existential crisis feels like?” was not an uncommon response.

I really think statistics is mostly impossible to teach; people learn when they get interested and start finding things out on their own. I’m not sure how successful this attempt was at doing that. Anyway, the slides are available here.

TL;DR: If you’re a seasoned researcher, see this. If you’re an aspiring one, start here or here, and read this.

[Image: statistical testing background]

Misleading simplifications and where to find them (Slides & Mini-MOOC 11min)

The gist: to avoid getting fooled by misleading simplifications, we need to name our simplifying assumptions when modeling social scientific data. I’m experimenting with this visual approach to delivering information to those who think modeling is boring; feedback and improvement suggestions are very welcome! [A similar presentation with between-individual longitudinal physical activity networks, presented at the Finnish Health Psychology conference: here]

I’m not as smooth as those talking heads on the interweb, so you may want just the slides. Download by clicking on the image below or watch at SlideShare.

SLIDE DECK:

[Image: first slide of the deck]

Mini-MOOC:


Note: Jan Vanhove thinks we shouldn’t become paranoid about model assumptions; check out his related blog post here!

Crises of confidence and publishing reforms: State of affairs in 2018 (slides)

After half a century of talk, the research community is making genuine efforts to improve social scientific practices in 2018. This is a presentation for the University of Helsinki Faculty of Social Sciences on recent developments in statistical practices and publishing reforms. Update: a slightly modified version of the presentation, held in Aberdeen, is here!

Nota bene: If the embedded slide deck below doesn’t work, download a pdf here.

ps. We also had cake, to commemorate Replicability Project: Cake (aka Replicake). Wish you had been there!

pps. There was a hidden slide which apparently didn’t make it into the presentation. It was basically this disconcerting conversation.

Preprints, short and sweet

Photo courtesy of Nelli Hankonen

These are slides (with added text content, so they make more sense on their own) from a short presentation I gave at the University of Helsinki. Mainly of interest to academic researchers.

TL;DR: To get the most out of scientific publishing, we may need to imitate physics a bit and bypass the old gatekeepers. If the SlideShare embed below is of crappy quality, check out the slides here.

UPDATE: There’s a new (September 2019) paper out on the effectiveness of peer review. Doesn’t look superfab:

[Image via Tim van der Zee]

ps. If you prefer video, this explains things in four minutes 🙂