People can use their eyes to direct others' attention towards objects and features in their environment. A person who consistently looks away from targets is later judged to be less trustworthy than one who consistently looks towards them, even when that person is a background distractor that viewers are instructed to ignore. This effect has been demonstrated in many experiments using trustworthiness ratings, but one outstanding question is whether these systematic changes reflect perceptual distortions in the stored memory representations of the faces, such that faces providing valid cues are remembered as looking more trustworthy than they actually were, and faces providing invalid cues as less trustworthy. We tested this in two experiments: in the first, participants morphed each face along a continuum of trustworthiness to reproduce the image they had seen during the experiment; in the second, we presented versions of each face morphed to appear more or less trustworthy and asked participants to select the one they had seen. The results of both experiments suggest that this incidental learning of gaze contingencies does not distort memory for the faces, despite robust and reliable changes in trustworthiness ratings.