Learning a face space for experiments on human identity

May 19, 2018

Generative models of human identity and appearance have broad applicability to behavioral science and technology, but the exquisite sensitivity of human face perception means that their utility hinges on the alignment of the model’s representation to human psychological representations and the photorealism of the generated images. Meeting these requirements is an exacting task, and existing models of human identity and appearance are often unworkably abstract, artificial, uncanny, or biased. Here, we use a variational autoencoder with an autoregressive decoder to learn a face space from a uniquely diverse dataset of portraits that control much of the variation irrelevant to human identity and appearance. Our method generates photorealistic portraits of fictive identities with a smooth, navigable latent space. We validate our model’s alignment with human sensitivities by introducing a psychophysical Turing test for images, which humans mostly fail. Lastly, we demonstrate an initial application of our model to the problem of fast search in mental space to obtain detailed “police sketches” in a small number of trials.
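The “smooth, navigable latent space” described above supports operations such as walking between identities. As a minimal illustrative sketch (not the paper’s implementation; the latent dimension and the use of NumPy are assumptions), spherical interpolation between two latent codes drawn from the VAE’s Gaussian prior keeps intermediate points at norms typical under that prior, which tends to decode to more plausible faces than straight linear interpolation:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.

    Interpolating along the great circle (rather than linearly) keeps
    intermediate points at a norm typical of a Gaussian prior, so the
    decoder sees inputs it was trained on.
    """
    z0 = np.asarray(z0, dtype=float)
    z1 = np.asarray(z1, dtype=float)
    # Angle between the two latent codes.
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel codes: fall back to linear interpolation.
        return (1 - t) * z0 + t * z1
    s = np.sin(omega)
    return (np.sin((1 - t) * omega) / s) * z0 + (np.sin(t * omega) / s) * z1

# Hypothetical usage: sample two latent codes from the Gaussian prior
# and trace a path between the corresponding fictive identities; each
# point would be passed to the trained decoder to render a portrait.
rng = np.random.default_rng(0)
latent_dim = 128  # assumed for illustration; not specified in the abstract
zA, zB = rng.standard_normal(latent_dim), rng.standard_normal(latent_dim)
path = [slerp(zA, zB, t) for t in np.linspace(0.0, 1.0, 8)]
```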