Data collection is going well. There's no longer a need for 50-59 year olds, as you can see from the figure below:

[Figure: Subject counts]

Now the slow wait for those 60 and up begins. We are getting about seven a day; assuming the rate between the 21st and today holds, I expect this step to take a little over a week.

Meanwhile, I also ought to consider those who indicated they cheated on the test. We have 23 self-identified cheaters out of 480 subjects -- roughly one in 20 -- which is not great. I'll have to decide whether to re-run an equivalent number of subjects or just leave it be (likely, I'll just leave it be).
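For the record, here is the back-of-the-envelope arithmetic behind those two estimates. This is a quick sketch only: the number of 60+ slots still open is an assumption carried over from the 5/21 entry below (72 needed), minus two days of collection at the current rate.

```python
# Numbers from this entry and the 5/21 entry below; the remaining 60+ count
# is an estimate, not an exact figure from the sign-up system.
needed_60_plus = 72              # 60-69 year olds needed as of 5/21
rate_per_day = 7                 # observed rate between the 21st and the 23rd
remaining = needed_60_plus - rate_per_day * 2

print(f"Days to fill the 60+ slots: {remaining / rate_per_day:.1f}")  # ~8.3, "a little over a week"

cheaters, total = 23, 480
print(f"Self-reported cheating: 1 in {total / cheaters:.0f}")         # ~1 in 21, roughly one in 20
```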

05/23/14; 08:59:56 AM

We need 31 50-59 year olds and 72 60-69 year olds. I opened new MTurk slots for folks 50+ and hope to get those rolling in soon. Chances are good the 70-89 age range will go relatively unfilled; so far, that group has made up less than half of one percent of our participants.

This is more a note to myself than anything else.

05/21/14; 02:23:58 PM

A few days ago I posted some calibration curves for remembering, knowing, and guessing. (It turns out they were accidentally constructed from pilot data rather than the full data set, but the same pattern holds.) In the third experiment of my dissertation -- I have 32 of 64 subjects' worth of data collected so far -- we get both recognition (old/new) accuracy and source accuracy, so we can plot two calibration curves, one for recognition and one for source. Take a look.

[Figure: Calibration curves for recognition and source accuracy]

They look pretty similar, which is both good and bad news. First note that, at least by eyeball, the gap between remember and know at high confidence (confidence is the x-axis, which I notice I've left off the figure) is bigger for source accuracy than for old/new accuracy. Some prior literature, however, predicts that the know curve for source accuracy should not exceed any point on the remember curve. Clearly, though, 80-100 confidence know responses are more accurate, in terms of source, than 0-79 confidence remember responses.

So this is going to be something I have to think about over the next few days. I think it might come down to the instructions. In my study, subjects make a recognition + source decision, then a confidence rating, then a remember/know/guess judgment. This order -- as well as the specific prompt for the confidence rating -- might end up being very important in this line of research. Hmm...
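To make that comparison concrete, here is a minimal sketch of how the high-confidence know versus lower-confidence remember contrast could be computed. The file name and column names (rkg, confidence, source_correct) are placeholders rather than the actual data format; the trial structure mirrors the response order described above.

```python
import pandas as pd

# Hypothetical per-trial table for Experiment 3. Assumed columns:
#   rkg            -- "remember", "know", or "guess"
#   confidence     -- 0-100 rating, given after the recognition + source decision
#   source_correct -- 1 if the source judgment was correct, else 0
trials = pd.read_csv("exp3_trials.csv")

# Source accuracy for high-confidence knows vs. lower-confidence remembers
hi_conf_know = trials.query("rkg == 'know' and confidence >= 80")["source_correct"].mean()
lo_conf_rem = trials.query("rkg == 'remember' and confidence < 80")["source_correct"].mean()

print(f"know, 80-100 confidence:   {hi_conf_know:.2f}")
print(f"remember, 0-79 confidence: {lo_conf_rem:.2f}")
```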

05/08/14; 11:17:50 AM

Quiet April here on the blog -- I spent most of the month running subjects and working on another project I can't talk too much about yet. In the meantime, though, I'm preparing for the Show Me Mini Mental Life Conference, a small biennial (that's the word for once every two years; "biannual" is twice a year) conference jointly hosted by Washington University and Mizzou a bit down I-64. I'm giving a talk on some of my dissertation data, which I've written about in other places (Project 15).

As you might know, my dissertation research examines how the confidence-accuracy relationship differs as a function of remembering, knowing, and guessing. We can plot calibration curves showing the accuracy of responses given in each qualitative state of remembering. Here's the figure:

[Figure: Calibration curves by remember/know/guess]

As the figure shows, one type of response seems to pull apart from the rest. Namely, we see that remember responses given with high (e.g., 80-100) confidence appear to be significantly more accurate, on average, than remember responses given with lower confidence as well as any know or guess responses. Thus, the real-world implication is that if I'm sure I remember something, and can remember specific details about that memory (rather than a gut feeling or hunch that it happened), it's likely that the memory is an accurate one.
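For anyone curious how such a figure gets built, here is a minimal sketch of the calibration computation: bin confidence, then take mean accuracy within each bin, separately for remember, know, and guess responses. The file name, column names, and 20-point bin width are assumptions for illustration, not the actual analysis code.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-trial table. Assumed columns:
#   rkg        -- "remember", "know", or "guess"
#   confidence -- 0-100 rating
#   correct    -- 1 if the recognition response was correct, else 0
trials = pd.read_csv("dissertation_trials.csv")

# One calibration curve per qualitative state: mean accuracy in each confidence bin
trials["conf_bin"] = pd.cut(trials["confidence"], bins=range(0, 101, 20), include_lowest=True)
calibration = (trials.groupby(["rkg", "conf_bin"], observed=True)["correct"]
                     .mean()
                     .unstack(level=0))

calibration.index = [interval.mid for interval in calibration.index]  # plot at bin midpoints
calibration.plot(marker="o")
plt.xlabel("Confidence")
plt.ylabel("Proportion correct")
plt.show()
```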

05/05/14; 02:28:00 PM


By Andy DeSoto, Monday, May 5, 2014 at 2:28 PM.