Signal Detection Theory - Distribution fitting

I'm trying to figure out how to analyze a signal detection experiment with an odd dataset. The data come from an RSVP experiment in which observers must identify which letter falls within a white ring cue that surrounds it for a single frame. Responses are scored as their distance, in items, from the target item. On trials where the observer detects the cue, responses tend to be normally distributed and clustered closely around the target, with a mean of approximately 0. However, the noise (random-guess) distribution tends to be much more uniform: when an observer doesn't see the ring transient, he or she simply guesses, so every position in the set is equally likely to be reported. Any suggestions on how to approach this? My understanding is that a signal detection analysis requires fitting scores to two normal distributions to measure the overlap of their tails.
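For what it's worth, one way I've considered framing this (not the classical equal-variance Gaussian SDT setup) is as a two-component mixture: each response is drawn from a normal distribution centered on the target with probability p (cue detected) or from a uniform distribution over the positions with probability 1 - p (guess). The mixing weight p and the normal parameters can then be estimated with EM. Here is a minimal sketch with made-up parameters (5000 trials, 70% detections, positions spanning -10 to +10); the variable names and simulation settings are my own assumptions, not part of the original experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated response errors (distance in items from the target):
# 70% "cue detected" trials ~ Normal(0, 1), 30% guesses ~ Uniform(-10, 10).
n = 5000
saw = rng.random(n) < 0.7
errors = np.where(saw, rng.normal(0.0, 1.0, n), rng.uniform(-10.0, 10.0, n))

# EM for a Normal + Uniform mixture; the uniform support is fixed by the
# display (here assumed to be -10..10), so its density is a known constant.
lo, hi = -10.0, 10.0
u_dens = 1.0 / (hi - lo)
p, mu, sigma = 0.5, 0.0, 2.0   # initial guesses: p = P(cue detected)

for _ in range(200):
    # E-step: responsibility of the normal component for each response
    norm_dens = (np.exp(-0.5 * ((errors - mu) / sigma) ** 2)
                 / (sigma * np.sqrt(2.0 * np.pi)))
    w = p * norm_dens / (p * norm_dens + (1.0 - p) * u_dens)
    # M-step: re-estimate mixing weight, mean, and sd of the normal component
    p = w.mean()
    mu = np.sum(w * errors) / np.sum(w)
    sigma = np.sqrt(np.sum(w * (errors - mu) ** 2) / np.sum(w))

print(f"P(detect) ~ {p:.2f}, mean ~ {mu:.2f}, sd ~ {sigma:.2f}")
```

Under this framing, the recovered p plays a role analogous to detection sensitivity, without forcing the guess distribution to be normal. I'd be glad to hear whether this mixture approach is defensible or whether there is a more standard SDT treatment for uniform noise.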