Deep convolutional neural networks (DCNNs) can now match human performance in challenging complex tasks, but it remains unknown whether they achieve human-like performance through human-like processes. Here we applied a reverse-correlation method to make explicit the representations of DCNNs and humans when performing face gender classification. We found that humans and a typical DCNN, VGG-Face, used similar critical information for this task, which mainly resided at low spatial frequencies. Importantly, prior task experience seemed necessary for such representational similarity: VGG-Face was pre-trained to process faces at the subordinate level (i.e., identification) as humans do, whereas AlexNet, a DCNN pre-trained to process objects at the basic level (i.e., categorization), succeeded in gender classification but relied on a completely different representation. In sum, although DCNNs and humans rely on different sets of hardware to process faces, they can use a similar, implementation-independent representation to achieve the same computational goal.

In recent years, deep convolutional neural networks (DCNNs) have made dramatic progress, achieving human-level performance in a variety of challenging complex tasks, especially visual tasks. For example, DCNNs trained to classify over a million natural images can match human performance on object categorization tasks (Krizhevsky, 2014; Simonyan and Zisserman, 2015; Krizhevsky et al., 2017), and DCNNs trained with large-scale face datasets can approach human-level performance in face recognition (Taigman et al., 2014; Parkhi et al., 2015; Schroff et al., 2015; Ranjan et al., 2017). However, these highly complex networks have remained largely opaque, and their internal operations are poorly understood. Specifically, it remains unknown whether DCNNs achieve human-like performance through human-like processes. That is, do DCNNs use similar computations and inner representations to perform tasks as humans do?

To address this question, here we applied a reverse-correlation approach (Ahumada and Lovell, 1971; Gold et al., 2000; Mangini and Biederman, 2004; Martin-Malivel et al., 2006), which has been widely used in psychophysical studies to infer the internal representations of human observers that transform inputs (e.g., stimuli) into outputs (e.g., behavioral performance). This data-driven method allows an unbiased estimate of what is in observers' "mind" when performing a task, rather than manipulating specific features that researchers a priori hypothesize to be critical for the task. Here we applied this approach to both DCNNs and human observers to investigate whether the DCNNs and humans utilized similar representations to perform the task of face gender classification. Specifically, a gender-neutral template face, midway between the average male and the average female faces, was superimposed with random noise, which rendered the template face more male-like in some trials and more female-like in others. The noisy faces were then submitted to human observers and to VGG-Face, a typical DCNN pre-trained for face identification (Parkhi et al., 2015). If, for example, an observer classified a noisy face as male rather than female, we reasoned that the noise superimposed on the template face contained features matching the observer's internal male prototype.
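The reverse-correlation logic described above can be sketched in a few lines: noise fields that elicit a "male" response are averaged and contrasted with those that elicit a "female" response, and the resulting classification image approximates the observer's internal template. This is a minimal sketch, not the study's actual pipeline: the observer below is a simulated linear template matcher, and the image size, trial count, and internal template are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 32        # image side length (illustrative)
N_TRIALS = 4000  # number of noisy-face trials (illustrative)

# Hypothetical internal template: the feature pattern the observer is
# assumed to match against when judging gender (positive = "male").
internal_template = rng.standard_normal((SIZE, SIZE))
internal_template /= np.linalg.norm(internal_template)

template_face = np.zeros((SIZE, SIZE))  # stand-in for the neutral face

male_sum = np.zeros((SIZE, SIZE))
female_sum = np.zeros((SIZE, SIZE))
n_male = n_female = 0

for _ in range(N_TRIALS):
    noise = rng.standard_normal((SIZE, SIZE))
    stimulus = template_face + noise
    # Simulated observer: responds "male" when the stimulus correlates
    # positively with its internal template, "female" otherwise.
    if np.sum(stimulus * internal_template) > 0:
        male_sum += noise
        n_male += 1
    else:
        female_sum += noise
        n_female += 1

# Classification image: mean noise on "male" trials minus mean noise
# on "female" trials.
classification_image = male_sum / n_male - female_sum / n_female

# The classification image should recover the simulated observer's
# internal template up to estimation noise.
r = np.corrcoef(classification_image.ravel(),
                internal_template.ravel())[0, 1]
print(f"correlation with internal template: {r:.2f}")
```

The same procedure applies unchanged whether the responses come from a human keypress or from a DCNN's output unit, which is what makes the comparison between the two observers direct.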