The myth of the magical “Because”

In this post I try to answer the call for increased transparency in psychological science by presenting my master’s thesis. I’m asking for feedback on the idea and the methods, and I’d also appreciate suggestions on which journal might be a sensible target for the paper I’m now starting to write with co-authors. Check out OSF for the master’s thesis documents and a supplementary website for the analyses in the manuscript in preparation (I presented the design analysis in a previous post).

In my previous career as a marketing professional, I was often enchanted by news about behavioral science. Such small things could have such large effects! When I moved into social psychology, it turned out that things weren’t quite so simple.

One study that intrigued me was done in the 1970s and has since gained huge publicity (see here and here for examples). The basic story is that you can use the word “because” to get people to comply with requests, thanks to a learned “reason → compliance” link.

[Image: “because” in the media]

Long story short, I was able to run an experiment within a trial of a health psychology intervention. Here’s a slideshow adapted from what I presented at the annual conference of the European Health Psychology Society:

[Slideshow embedded in the original post.]

Things I’m happy about:

  • Maintaining a Bayes Factor / p-value ratio of about 1:2. It’s not “a B for every p”, but it’s a start…
  • Learning basic R and redoing all analyses at the last minute, so I wouldn’t have to mention SPSS 🙂
  • Figuring out how this pre-registration thing works, and registering before the end of data collection.
  • Using the word “significant” only twice and not in the context of results.

Things I’m not happy about:

  • Not having pre-registered before starting data collection.
  • Not knowing at the start of the project what I know now, especially about theory formation and appraisal (Meehl).
  • Not having an in-depth understanding of the mathematics underlying the analyses (although math and logic are priority items on my stuff-to-learn-list).
  • Not having the data public… yet. It will be public by 2017 at the latest, but hopefully already this autumn.

A key factor in fixing psychological science is transparency: making analyses, intentions, and data available to all researchers. That way, anyone can point out inconsistencies and use the findings to elaborate on the theory, making the accumulation of knowledge possible.

Science is all about prediction, and everyone knows how easy it is to say “yeah, I knew that’d happen” after the fact. The most impressive predictions are those made well before things start happening. So don’t be like me: pre-register your study before the start of data collection. It’s not as hard as it sounds! For clinical trials, this can be done for free in the WHO-approved German Clinical Trials Register (DRKS). For all trials, the Open Science Framework (OSF) website can be used to pre-register plans and protocols, as well as to make study data available to researchers everywhere. There’s also AsPredicted, an extremely easy-to-use pre-registration site.

One can also use the OSF website, for free, as a cloud server to privately manage one’s workflow. Its automated version control then protects the researcher against accusations of fraud or questionable research practices.

P.S. If there’s anything weird in the thesis, it’s probably because I disregarded some piece of advice from Nelli Hankonen, Keegan Knittle and Ari Haukkala, to whom I’m indebted for their comments.

10 thoughts on “The myth of the magical ‘Because’”

  1. This looks great! I have a quick, truly just nitpicking, quibble:
    You say: “A Bayesian Highest Density Interval (HDI) refers to the area where 95% of population observations are expected to land.”
    But that’s not quite right.
    (1) An HDI is a summary of the posterior distribution p(theta|x), which is a distribution over the *parameter* space Theta, not over the population members X. You’re making inferences about theta (population X’s mean, proportion, etc.) after observing a sample {x_1, x_2, …, x_n} drawn from the population; you’re not describing the physical distribution of the members of X.

    (2) The most notable thing about a 95% HDI is that it is the shortest interval of the posterior distribution p(theta|x) such that the integral over that interval equals .95. If the parameter space is continuous (it usually is), there are uncountably many possible parameter values, which means there are infinitely many parameter values both inside and outside the HDI.
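    If it helps, here’s a minimal R sketch of that idea: from posterior draws of theta, the 95% HDI is the shortest interval containing 95% of the draws. The Beta posterior below is made up purely for illustration and has nothing to do with your data.

    ```r
    # Made-up posterior over a parameter theta (a proportion), for illustration only.
    set.seed(1)
    posterior_draws <- rbeta(1e5, 12, 30)  # draws of theta, not of population members

    # Shortest interval containing `mass` of the posterior draws
    hdi <- function(draws, mass = 0.95) {
      sorted <- sort(draws)
      n_in   <- floor(mass * length(sorted))
      widths <- sorted[(n_in + 1):length(sorted)] - sorted[1:(length(sorted) - n_in)]
      lo     <- which.min(widths)
      c(lower = sorted[lo], upper = sorted[lo + n_in])
    }

    hdi(posterior_draws)  # shortest 95% interval for theta
    ```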

    • Great, many thanks! I discovered how to do those plots only very close to the deadline, but I guess one always wishes for more time. I will amend the article.

      What did you think of the way robustness to different priors was presented (“… as concentration approached x, bf approached y”)? I was kinda struggling with how to say it; if you have a nice example paper in mind, I’d like to have a look at how it’s done.

      • I think that’s a fine way to do it, but I also think the ideal way to convey prior robustness is through a plot. That way you can have the value of the BF on the y axis and the scale on the x axis, and the reader can track how much the BF changes across reasonable ranges of the scale. It also lets the reader see the shape of the relationship (concave, etc).
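        Something along these lines could produce it. This is just a rough R sketch assuming a two-sample comparison, simulated data, and the BayesFactor package; the actual test and prior parameterisation in your analysis will of course differ.

        ```r
        # Sketch: Bayes factor as a function of the prior scale, on simulated data.
        library(BayesFactor)

        set.seed(1)
        group_a <- rnorm(50, mean = 0.0, sd = 1)  # illustrative data only
        group_b <- rnorm(50, mean = 0.3, sd = 1)

        scales <- seq(0.1, 1.5, by = 0.1)  # range of prior scales to examine
        bfs <- sapply(scales, function(r) {
          extractBF(ttestBF(x = group_a, y = group_b, rscale = r))$bf
        })

        plot(scales, bfs, type = "b", log = "y",
             xlab = "Prior scale (r)", ylab = "Bayes factor (BF10)",
             main = "Robustness of the Bayes factor to the prior scale")
        ```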

        • Ok, thanks. I have those plots in the code, maybe I should include them in the final paper or at least point the reader to an appendix.

          • Yep, I’d comment in the text (like you did) on the qualitative changes that happen across the scale and then stuff the graphs in an appendix or supplement for completeness.

  2. It wasn’t clear how many reminders each person received. Repeated messages, regardless of content, are probably not processed in the same way as the first one. For participants in this study, the later messages might be received with the thought “oh yeah, I need to wear the device,” regardless of the content of the message. I’d be curious to know whether you find effects for the first message, when participants might be more likely to read it.

    • Thanks for the comment! Each person received a message on six consecutive mornings. The point about the first message is valid; I don’t remember if I had done this before, but I just checked, and all the groups are practically identical in how much they wore the device on the day they got their first message.
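      (Roughly the kind of check I mean, sketched in R; the data and condition names below are placeholders rather than the actual study variables.)

      ```r
      # Placeholder sketch: mean device wear on the day of the first message, by group.
      library(dplyr)

      set.seed(1)
      first_day_wear <- data.frame(                  # hypothetical data, illustration only
        condition    = rep(c("group_a", "group_b", "group_c"), each = 30),
        wear_minutes = rnorm(90, mean = 600, sd = 120)
      )

      first_day_wear %>%
        group_by(condition) %>%
        summarise(mean_wear = mean(wear_minutes),
                  sd_wear   = sd(wear_minutes),
                  n         = n())
      ```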

  3. I think, as you note, there is an alternative explanation that needs to be sorted out with this project: did the manipulation work as you intended? I think it is plausible that it simply didn’t work because people didn’t carefully read the text messages. Beyond that, I think there is a theoretical mismatch between the chosen persuasion technique and the behaviour you are hoping to change. In persuasion research, dual-process models (the elaboration likelihood model, the heuristic-systematic model) have been used extensively. These models generally predict that persuasion techniques that are processed more carefully (i.e., centrally or systematically) are more likely to affect complex behaviour over time than techniques that are processed less carefully (i.e., peripherally or heuristically).

    It seems that the behaviour you are seeking to change here is relatively complex and hard to get good compliance on, yet the persuasion technique you chose involves very little careful processing. It is a lot to expect that a persuasion attempt that is so peripherally processed would affect difficult-to-change behaviour. Incidentally, this theorizing is consistent with the literature on the “because” heuristic, which demonstrates that it affects easy behaviours (i.e., small requests) but not more difficult behaviours (i.e., large requests). I think this potential problem with using the “because” heuristic to change the use of health-monitoring equipment is compounded by using text messages, which may well exacerbate peripheral processing of the messages. Certainly, part of the original “because” heuristic results were due to compliance, and we know that compliance requests are much more effective when made face to face than remotely (see variations of the Milgram experiment as an example).

    So, it seems there is theoretical and empirical evidence to suggest that your manipulation is unlikely to have the effect you predicted and unlikely to work in the way you predicted it would. What does this mean for publication of your results? Well, I hate to be an old fuddy-duddy, but I will risk that in an attempt to be helpful. The question I would ask if you were my advisee is this: what else could you be doing with your time? One of the hardest things we have to sort out as scholars is what to spend our time on first and what to put on the back burner to deal with later if time permits.

    You could spend more time on this project and work on ruling out the alternative explanations for your results and develop clearer evidence that the “because” heuristic is not very helpful for promoting the types of behaviour you are examining, but what sort of contribution would that make? Are there other things you could be doing that would make a bigger contribution? These are not rhetorical questions. I don’t know the answers, but I think the answers should guide what you do next and whether you put in the time and effort to turn this thesis into a publication, and keep in mind that the time and effort to do so will be considerable.

    I wish you all the best as you continue your work on this project. Despite the reservations expressed above I like very much that you are trying to change important behaviours and I hope you continue to do so in your future work.

  4. Excellent comment, many thanks! These are important issues.

    We had implementation / manipulation checks, but they were admittedly weak (self-report). I’m not sure how complex the target behavior was; putting on the device is much like putting on a belt, except that one doesn’t have to tighten the buckle. But with vocational school adolescents, it’s amazingly hard to predict what’s complex, what’s simple, and what’s just “screw that”. Still, your point stands. Also, thanks for pointing out the issue of face-to-face compliance.

    I notice that in writing this post I concentrated on the because heuristic, as I figured that’s what the general public would like to read about. I’m of the opinion that this research still invalidates the simplified version of the Xerox copy-machine experiment advocated in popular media and blogs like Psychology Today or Lifehacker. I recognise the strawman in that argument and surely do not wish to spend any more of my life than necessary fighting popular conceptions or even dabbling in trivial behavior science.

    I would like to see the results reported, which may be because
    a) I think the because-versus-succinct result, combined with the finding that reminders in general didn’t help*, is worth communicating to other researchers working with accelerometers,
    and/or
    b) sunk costs bias my judgement 🙂

    Thanks again for the helpful comment and let me know if you have further thoughts!

    *Self-selection doesn’t suffice as an explanation, as opting in to the reminders was almost completely explained by how they were presented to participants (50% vs. 95% opt-in rates with a small change in the recruitment prompt in the middle of the experiment).
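    (To make the footnote concrete, here’s the kind of comparison I mean, sketched in R. Only the approximate 50% and 95% opt-in rates come from the study; the group sizes below are made up.)

    ```r
    # Illustrative two-proportion comparison of opt-in rates before vs. after
    # the change in the recruitment prompt. Counts are made up; only the
    # approximate rates (~50% and ~95%) come from the study.
    opted_in <- c(50, 95)    # hypothetical numbers opting in
    invited  <- c(100, 100)  # hypothetical numbers invited

    prop.test(opted_in, invited)
    ```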
