This post presents a Bayesian roundtable I convened for the EHPS/DHP 2016 health psychology conference. Slides for the three talks are included.
We kicked off the session with Susan Michie, and acknowledged Jamie Brown, who was key in making the session happen but could not attend.
Robert West presented first; you’ll find his slides, “Bayesian analysis: a brief introduction”, here. His talk gave a brief introduction to Bayes’ theorem and to belief updating with Bayes factors.
I was the second speaker, building on Robert’s presentation. Here are the slides for my talk, in which I introduced some practical resources for getting started with Bayes. The slides are also embedded below (some got corrupted by Slideshare, so the ones in the .ppt link are a bit nicer).
The third and final presentation was by Niall Bolger. In his talk, he gave a great example of how using Bayes in a multilevel model enabled him to incorporate more realistic assumptions and—consequently—evaporate a finding he had considered somewhat solid. His slides, “Bayesian Estimation: Implications for Modeling Intensive Longitudinal Data”, are here.
Let me know if you disagree with something (especially in my presentation), or if you have ideas on how to improve research methods in (especially health) psychology!