A side project over the last few weeks has been the "Replication Value Project," an effort by folks affiliated with the Center for Open Science (Charlie Ebersole et al.). Here's a link to the project page on OSF: https://osf.io/bwp24/
The idea behind the project is to develop a metric indicating the degree to which a particular unit of research should be replicated. In other words, how might someone compare two papers and determine which is more in need of replication?
I'm a contributor to the project and came up with my own candidate formula for determining replication value. Nine other authors and groups did the same, and we're currently commenting on one another's formulas. If there's interest, I'll share my formula in a future post. In my opinion, it contains some useful ideas (though many smarter minds than mine are participating and probably have a solution that's "more right").
I hope to share more about this project as it continues!
In other news, I recently had a data paper accepted at the Journal of Open Psychology Data, co-written with Cecilia Votta, a former undergraduate student of mine. I'll share that, too, once it's online. The backstory: we ran an experiment for a particular purpose, but the writeup never went anywhere (and the data were not especially interesting). Rather than leave everything in a file drawer, I felt it would be worthwhile to put the data online so that others might make use of them. Please contact me if you'd like the accepted draft of the manuscript.
Here's the citation for the data:
DeSoto, K. A., & Votta, C. M. (2015). Psychology data on the effects of study schedules on category-member classification: Dataset. figshare. http://doi.org/10.6084/m9.figshare.1570940.v7