Thanks to data collection efforts last week, I have seven subjects' worth of pilot data for Project 15 Experiment 3. The temporary Excel file is saved as "E2 Pilot Data Check.xlsx" in the Dropbox folder. The first step was to ensure that no data had been lost as a result of programming errors. The good news is that all subjects have 100 observations in the study phase and 300 in the recognition test phase, so nothing is getting swallowed up. This is great.
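That completeness check can be sketched as a quick script. This is a minimal illustration, not the actual analysis; the (subject, phase) tuple layout and the phase labels "study" and "test" are assumptions about how the pilot file is organized.

```python
# Sketch: verify every subject has the expected number of trials per phase.
# The data layout (subject, phase) and phase labels are assumed, not taken
# from the actual "E2 Pilot Data Check.xlsx" file.
from collections import Counter

EXPECTED = {"study": 100, "test": 300}

def check_counts(rows):
    """rows: iterable of (subject, phase) tuples; returns a list of
    (subject, phase, observed_count) entries that deviate from EXPECTED."""
    counts = Counter(rows)
    subjects = {s for s, _ in counts}
    problems = []
    for s in sorted(subjects):
        for phase, expected in EXPECTED.items():
            n = counts.get((s, phase), 0)
            if n != expected:
                problems.append((s, phase, n))
    return problems

# Toy example: two subjects with complete data -> no problems reported.
rows = [(s, "study") for s in (1, 2) for _ in range(100)] + \
       [(s, "test") for s in (1, 2) for _ in range(300)]
print(check_counts(rows))  # []
```

In practice the rows would come from reading the Excel file (e.g., via pandas), but the counting logic is the same.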
Next is taking a look at recognition memory performance and comparing it with prior research. Right now "old" rates for targets, related lures, and unrelated lures are .81, .47, and .17, respectively. Data from Project 15 Experiment 2 are in the same ballpark (.71, .29, .17). So initial data collection suggests folks are more liberal in Experiment 3 than in Experiment 2, which is certainly a possibility.
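For the record, the "old"-rate computation is just the proportion of "old" responses within each item type. A minimal sketch, where the item-type labels and the 0/1 response coding are my assumptions rather than the file's actual coding:

```python
# Sketch: P("old") per item type. Labels ("target", "related", "unrelated")
# and the coding (1 = responded "old") are assumptions for illustration.
def old_rates(trials):
    """trials: list of (item_type, said_old) pairs -> dict of P("old") per type."""
    totals, olds = {}, {}
    for item_type, said_old in trials:
        totals[item_type] = totals.get(item_type, 0) + 1
        olds[item_type] = olds.get(item_type, 0) + said_old
    return {t: olds[t] / totals[t] for t in totals}

# Toy data reproducing the pilot proportions out of 100 trials per type.
trials = ([("target", 1)] * 81 + [("target", 0)] * 19
          + [("related", 1)] * 47 + [("related", 0)] * 53
          + [("unrelated", 1)] * 17 + [("unrelated", 0)] * 83)
print(old_rates(trials))  # {'target': 0.81, 'related': 0.47, 'unrelated': 0.17}
```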
So now some time needs to be spent thinking about which analyses are the crucial ones in Experiment 3. Let's recopy what I wrote as a summary of the Experiment 2 data:
In general, targets receive remember responses and related lures receive know and guess responses. Confidence ratings are highest for remember, intermediate for know, and lowest for guess.
Currently, there are no significant differences in calibration (between-subjects and between-events) between remember and know responses. There are, however, significant differences in resolution (within-subjects and within-items).
Therefore, Experiment 2 of my dissertation suggests that the qualitative state of remembering (remembering versus knowing) affects the strength of the confidence-accuracy relationship, at least in terms of resolution.
I think the main question concerns calibration and resolution for the source accuracy-confidence relationship. Everything else is icing on the cake.
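The calibration side of that question reduces to comparing mean source accuracy against the confidence level at which it was reported. A minimal sketch, assuming confidence is reported on some discrete scale (the 50/100 values below are placeholders, not the task's actual scale):

```python
# Sketch: a calibration summary -- mean source accuracy at each reported
# confidence level. The confidence values used here are hypothetical.
def calibration_curve(trials):
    """trials: (confidence, correct) pairs -> {confidence: mean accuracy},
    with confidence levels in ascending order."""
    sums, ns = {}, {}
    for conf, correct in trials:
        sums[conf] = sums.get(conf, 0) + correct
        ns[conf] = ns.get(conf, 0) + 1
    return {c: sums[c] / ns[c] for c in sorted(sums)}

trials = [(50, 1), (50, 0), (100, 1), (100, 1)]
print(calibration_curve(trials))  # {50: 0.5, 100: 1.0}
```

Well-calibrated subjects would show accuracy tracking the stated confidence level; over- or underconfidence shows up as accuracy falling below or above it.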
So far -- and this is a super quick analysis -- there's an interesting difference in the confidence-accuracy relationship depending on whether we're looking at recognition or source accuracy.