IEEE J Biomed Health Inform. 2021 Oct 26;PP. doi: 10.1109/JBHI.2021.3122299. Online ahead of print.

ABSTRACT

Depression has become a common mental disorder, especially among postgraduate students. Postgraduate students are reported to be at higher risk of depression than the general public, and they tend to be more sensitive to contact with others. An effective, non-contact method for identifying people at risk of depression is therefore in urgent demand. To make depression recognition more reliable and convenient, we propose a multi-modal gait-analysis-based depression detection method that combines a skeleton modality and a silhouette modality. First, we propose a skeleton feature set to describe depression and train a Long Short-Term Memory (LSTM) model to capture the temporal dynamics of the skeleton sequences. Second, we generate Gait Energy Images (GEIs) from RGB videos as silhouette features and design two Convolutional Neural Network (CNN) models with a new loss function to extract silhouette features from the front and side views. We then construct a multi-modal fusion model that combines the front- and side-view silhouettes at the feature level and the classification results of the different modalities at the decision level. The proposed multi-modal model achieves an accuracy of 85.45% on a dataset of 200 postgraduate students (including 86 with depression), 5.17% higher than the best single-modality model. The multi-modal method also generalizes better, reducing performance differences between genders. Furthermore, we design a vivid 3D visualization of the gait skeletons, and our results imply that gait is a potent biometric for depression detection.
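As a rough illustration of the silhouette branch, the sketch below computes a Gait Energy Image in the standard way: the pixel-wise average of size-normalized, horizontally centered binary silhouettes over a gait cycle. This is a minimal sketch, not the authors' implementation; the 64x64 output size and the OpenCV-based normalization are assumptions.

    import cv2
    import numpy as np

    def gait_energy_image(binary_silhouettes, size=(64, 64)):
        """GEI: pixel-wise mean of normalized binary silhouettes.

        binary_silhouettes: list of 2-D arrays (0 = background, 1 = person)
        covering one gait cycle. Returns a float array with values in [0, 1].
        """
        aligned = []
        for sil in binary_silhouettes:
            ys, xs = np.nonzero(sil)
            if ys.size == 0:
                continue                                  # skip empty frames
            crop = sil[ys.min():ys.max() + 1,
                       xs.min():xs.max() + 1].astype(np.float32)
            # Scale to the target height, preserving aspect ratio.
            h, w = crop.shape
            new_w = max(1, round(w * size[0] / h))
            crop = cv2.resize(crop, (new_w, size[0]))
            # Paste into a fixed-width canvas, centered on the
            # horizontal center of mass of the silhouette.
            canvas = np.zeros(size, dtype=np.float32)
            cx = int(np.nonzero(crop)[1].mean())
            left = size[1] // 2 - cx
            lo, hi = max(0, -left), min(new_w, size[1] - left)
            canvas[:, left + lo:left + hi] = crop[:, lo:hi]
            aligned.append(canvas)
        return np.stack(aligned).mean(axis=0)             # average over the cycle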

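The abstract does not specify how the decision-level fusion combines the modalities; a minimal sketch, assuming a weighted average of per-modality depression probabilities (the modality names, uniform weights, and 0.5 threshold are hypothetical):

    def fuse_decisions(probs, weights=None, threshold=0.5):
        """Decision-level fusion as a weighted average of per-modality
        P(depressed) scores. The fusion rule here is an assumption,
        not the paper's stated method.

        probs: dict mapping modality name -> probability, e.g.
               {"skeleton_lstm": 0.72, "silhouette_cnn": 0.61}
        """
        if weights is None:
            weights = {m: 1.0 / len(probs) for m in probs}  # uniform default
        score = sum(weights[m] * p for m, p in probs.items())
        return score, score >= threshold                    # fused prob, label

    fused_p, is_depressed = fuse_decisions(
        {"skeleton_lstm": 0.72, "silhouette_cnn": 0.61})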
PMID:34699374 | DOI:10.1109/JBHI.2021.3122299