Thanks to data collection efforts last week, I have seven subjects' worth of pilot data for Project 15 Experiment 3. The temporary Excel file is saved as "E2 Pilot Data Check.xlsx" in the Dropbox folder. The first step was to make sure no data had been lost to programming errors. The good news is that all subjects have 100 observations in the study phase and 300 in the recognition test phase, so nothing is getting swallowed up. This is great.
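
For my own reference, the check boils down to something like the little pandas sketch below. The "subject" and "phase" column names are just guesses at how the sheet is laid out, so this would need adjusting to match the real file.

```python
import pandas as pd

# Quick completeness check. The "subject" and "phase" column names are guesses
# at how the sheet is laid out; adjust to match the actual file.
df = pd.read_excel("E2 Pilot Data Check.xlsx")

counts = df.groupby(["subject", "phase"]).size().unstack(fill_value=0)
print(counts)

expected = {"study": 100, "recognition": 300}
for phase, n in expected.items():
    if phase in counts.columns:
        bad = counts.index[counts[phase] != n].tolist()
        print(f"{phase}: expected {n} trials per subject; problem subjects: {bad or 'none'}")
```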

Next is taking a look at recognition memory performance and comparing it with prior research. Right now the "old" rates for targets, related lures, and unrelated lures are .81, .47, and .17, respectively. Data from Project 15 Experiment 2 are in the same ballpark (.71, .29, .17). So initial data collection suggests folks are responding "old" more liberally in Experiment 3 than in Experiment 2, which is certainly a possibility.
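
Again for my own notes, those rates come out of something like the following; the "item_type" and "response" columns are hypothetical names standing in for whatever the sheet actually calls them.

```python
import pandas as pd

# "Old" response rates by item type. "item_type" and "response" column names are guesses.
df = pd.read_excel("E2 Pilot Data Check.xlsx")
test = df[df["phase"] == "recognition"]

old_rates = (test["response"] == "old").groupby(test["item_type"]).mean()
print(old_rates.round(2))
```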

So now some time needs to be spent thinking about which analyses are the crucial ones in Experiment 3. Let's recopy what I wrote as a summary of the Experiment 2 data:

In general, targets receive remember responses and related lures receive know and guess responses. Confidence ratings are highest for remember, intermediate for know, and lowest for guess.

Currently, there are no significant differences in calibration (between-subjects and between-events) between remember and know responses. There are, however, significant differences in resolution (within-subjects and within-items).

Therefore, Experiment 2 of my dissertation shows that the qualitative state of remembering (remembering versus knowing) affects the strength of the confidence-accuracy correlation.

The main question, I think, is calibration and resolution for the source accuracy-confidence relationship. Everything else is icing on the cake.
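
Just so I don't forget how I'm thinking about those two quantities, here's a rough sketch. The real calibration and resolution analyses follow the specific definitions from the confidence-accuracy literature; this is only a placeholder for my notes, and the "confidence" and "source_correct" columns are assumed names.

```python
from itertools import combinations

import pandas as pd

def gamma(conf, acc):
    """Goodman-Kruskal gamma between confidence ratings and 0/1 accuracy."""
    concordant = discordant = 0
    for (c1, a1), (c2, a2) in combinations(zip(conf, acc), 2):
        if c1 == c2 or a1 == a2:
            continue  # ties count toward neither
        if (c1 - c2) * (a1 - a2) > 0:
            concordant += 1
        else:
            discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else float("nan")

# Assumed columns: "subject", "confidence" (0-100), and a 0/1 "source_correct".
df = pd.read_excel("E2 Pilot Data Check.xlsx")

# A crude calibration-style number: overall over/underconfidence for source judgments.
overconfidence = df["confidence"].mean() / 100 - df["source_correct"].mean()

# A resolution-style number: mean within-subject gamma between confidence and source accuracy.
resolution = df.groupby("subject").apply(
    lambda s: gamma(s["confidence"], s["source_correct"])
).mean()

print(f"over/underconfidence: {overconfidence:.2f}, mean within-subject gamma: {resolution:.2f}")
```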

So far -- and this is a super quick analysis -- there's an interesting difference in the confidence-accuracy relationship depending on whether we're looking at recognition accuracy or source accuracy.
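
Here's the kind of super quick comparison I mean, in sketch form. It uses a simple per-subject point-biserial correlation (gamma, as above, would be another option), and the "recog_correct" and "source_correct" columns are assumed names.

```python
import pandas as pd
from scipy.stats import pearsonr  # point-biserial r; gamma (sketched above) is another option

# Assumed columns: "subject", "confidence", and 0/1 "recog_correct" / "source_correct".
df = pd.read_excel("E2 Pilot Data Check.xlsx")

rows = []
for subj, d in df.groupby("subject"):
    rows.append({
        "subject": subj,
        "r_recognition": pearsonr(d["confidence"], d["recog_correct"])[0],
        "r_source": pearsonr(d["confidence"], d["source_correct"])[0],
    })

print(pd.DataFrame(rows).round(2))
```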

Confidence-Accuracy Correlations

04/22/14; 02:30:00 PM

I have been taking digital photographs since 2003. My collection is a mess, and I have no idea what to do about it. Basically, everything from 2003-2009 is on an external hard drive, and everything from 2009 on is on the hard drive of the machine I'm currently updating. In 2003 it made sense to organize photos by albums, but in the modern era of cellphone cameras and one or two pictures per day forever, it makes more sense to organize in other ways (e.g., by day/month/year, faces, or locations). Much of my stuff is in Aperture right now, which I purchased thinking it would be a helpful upgrade over iPhoto, but it was a waste of money -- I feel it works even worse.

So what do I do? Here's what I'm thinking.

(1) Get all the files in one location, on my computer. File structure doesn't matter as much as having everything in one place does.

(2) Back those up on multiple drives in multiple locations, including the cloud (not sure where to go here -- Flickr offers the free terabyte, but I'd rather pay for Dropbox or Google, and meanwhile most of my devices play nicest with iCloud).

(3) Find the photo organizer/viewer that works best. Whether that's iPhoto, Aperture, Picasa, or something else, I don't know and can't say. I guess this will take some research. (It seems as if photo organization has really taken a hit in the current era of the closed web, doesn't it?)

(4) Start to organize in a way that doesn't make me want to pull my hair out (one date-based idea is sketched just after this list).

(5) Back all of that up too.
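
For steps (1) and (4), I'm imagining something as simple as a little script that copies everything into year/month folders, roughly like this. The paths are made up, and real photo dates should come from EXIF rather than file timestamps.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Made-up paths; real photo dates should come from EXIF, not file modification times.
SOURCE = Path("~/Pictures/unsorted").expanduser()
DEST = Path("~/Pictures/by-date").expanduser()

for photo in SOURCE.rglob("*"):
    if photo.suffix.lower() not in {".jpg", ".jpeg", ".png", ".mov"}:
        continue
    taken = datetime.fromtimestamp(photo.stat().st_mtime)
    target_dir = DEST / str(taken.year) / f"{taken.month:02d}"
    target_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(photo, target_dir / photo.name)  # copy rather than move until I trust it
```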

Am I right? Am I wrong? Any hints? Please help!

04/18/14; 10:18:34 PM

Just a note that I'm having some sudden trouble with my GitHub Page. Here's hoping things start operating smoothly soon.

04/09/14; 03:52:26 PM

My acquaintance Walter Hickey, from FiveThirtyEight, made an interesting comment that I wanted to address in over 140 characters.

It's an interesting idea to think about. At face value it makes sense -- a million people out there, any number of whom have the time, energy, and expertise to evaluate your writing critically. But I'm not sure the evidence is out there suggesting that people actually do that. When I read critiques of news articles, they seem partisan or opinion-based -- "I think this article is wrong because I disagree with you." Peer review, at its best, does more than this. Readers and commenters will be quick to pick up on and take down obvious flaws and logical fallacies. But can you count on them to investigate statistical analyses or interpretations of scientific theory? Do your readers know how to bootstrap, score psychological measures, or do a shift of benchwork?

A few of them do, but it's irresponsible to assume that those people are out there and are willing and motivated to clean up any mistakes you've made. Scientific peer review, despite its faults, ensures that at least some wise brains have the professional responsibility to evaluate your arguments. Ideally, it matches subject matter experts -- people who can bootstrap and evaluate theory, to continue the example -- to papers that require them.

Data journalists would be wise to seek out impartial reviewers instead of assuming that readers, commenters, and other publications will do the job of ensuring accuracy for them.

Click here for related reading by Brian Keegan.

04/09/14; 03:02:44 PM

So more for my notes than anything else, here's a figure that shows the number of times subjects say remember, know, and guess with varying levels of confidence.

Figure: counts of remember, know, and guess responses at each level of confidence

You can see that there's an overwhelming number of maximum-confidence remember responses, that is, responses made with confidence ratings of 90 to 100. Some take this as evidence that the phenomenon of remembering is all-or-none -- perhaps the remember responses assigned lower confidence values are just noise.
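
In case I want to regenerate that kind of figure later, something like this would do it. The "trials.csv" file and the "rkg" and "confidence" columns are just placeholder names for whatever the final data file ends up using.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Sketch of the counts figure. "trials.csv", "rkg", and "confidence" are placeholder names.
df = pd.read_csv("trials.csv")

bins = pd.cut(df["confidence"], bins=range(0, 101, 10), include_lowest=True)
counts = pd.crosstab(bins, df["rkg"])  # rows: confidence bins; columns: remember/know/guess

counts.plot(kind="bar", figsize=(8, 4))
plt.xlabel("Confidence bin")
plt.ylabel("Number of responses")
plt.tight_layout()
plt.show()
```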

04/08/14; 02:00:04 PM

My dissertation research is turning out to be somewhat timely -- a colleague of mine in the Behavior, Brain, and Cognition program recently gave a brief talk to the department reporting that she wasn't able to replicate some of the results reported in another paper (one relevant to my dissertation). I suppose it's not quite my place to talk about it here, so I'll just attach, for my own reference, the figure that adds to the discussion:

Replication of Ingram, Mickes, and Wixted (2012)

If you want to know more, you can e-mail me directly.

This interesting issue raises a larger question that has been on my mind for a few weeks now. To what extent is a fellow researcher at liberty to discuss research presented in department talks and colloquia? What is the best way to cover topics discussed in these sessions? Presenters often discuss data that are still in progress and have not been finalized, published, or sometimes even double-checked. At the same time, though, it's rare to hear a presenter warn, "Please don't tell others about this research until it has been published." So what is the appropriate way to share an interesting talk you've heard? Do you discuss only work that has been published, as a ground rule? Do you speak about unpublished research, but only vaguely? Do you assume that if the research has been presented in a public talk, it's fair game? Is there a difference when research is presented in a lab meeting versus a brown bag versus a department talk?

Perhaps someone older and wiser than me can share their opinions.

04/04/14; 11:53:26 AM


By Andy DeSoto, Friday, April 4, 2014 at 11:53 AM.