Automatic evaluation of hypernasality has traditionally relied on monophonic signals (i.e., signals that combine the nose and mouth components). This study examined whether nose signals can increase the accuracy of hypernasality evaluation. Using a conventional microphone and a Nasometer, we recorded monophonic, mouth, and nose signals. Three main analyses were performed:
(1) comparing the spectral distance between oral and nasalized vowels in monophonic, nose, and mouth signals (see the sketch below); (2) assessing the accuracy of Deep Neural Network (DNN) models trained on nose, mouth, and monophonic signals in classifying oral/nasal sounds and vowel/consonant sounds; (3) analyzing the correlation between DNN-derived nasality scores and expert-rated hypernasality scores.
The distance between oral and nasalized vowels was largest in the nose signals. Moreover, DNN models trained on nose signals outperformed those trained on the other signals in nasal/oral classification (accuracy: 0.90), although they were slightly less accurate in vowel/consonant differentiation (accuracy: 0.86). A strong Pearson’s correlation (0.83) was observed between the nasality scores of DNNs trained on nose signals and the human expert ratings, whereas DNNs trained on mouth signals showed a weaker correlation (0.36). We conclude that mouth signals partially mask the nasality information carried by nose signals. Significance: the accuracy of hypernasality assessment tools may be improved by analyzing nose signals.
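As an illustration of analysis (3), Pearson’s correlation between model scores and perceptual ratings can be computed as below; the variable names and values are placeholders, not the study's data.

```python
# Sketch of analysis (3): correlating DNN-derived nasality scores with
# expert hypernasality ratings. The numbers below are placeholders.
from scipy.stats import pearsonr

dnn_nasality_scores = [0.12, 0.45, 0.78, 0.33, 0.91]  # one score per speaker
expert_ratings = [1, 2, 4, 2, 5]                      # perceptual rating scale

r, p = pearsonr(dnn_nasality_scores, expert_ratings)
print(f"Pearson's r = {r:.2f} (p = {p:.3f})")
```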