
Word Embeddings to quantify Depressive Language in Twitter

Sandhya,
Abstract
How do people discuss mental health on social media? Can we train an algorithm to recognize differences between discussions of depression and other topics? Can an algorithm predict that someone is depressed from their tweets alone? In this project, we collect tweets referencing "depression" over a seven-year period and train word embeddings to characterize linguistic structures within the corpus. We find that neural word embeddings capture the contextual differences between "depressed" and "healthy" language. The best-performing model for the prediction task is a Long Short-Term Memory (LSTM) network, with 70% test accuracy. Finally, we train a similar model on a much smaller collection of tweets authored by individuals formally diagnosed with depression. The results suggest that social media could serve as a valuable screening tool for mental health.
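The core idea, that words acquire meaning from the contexts they appear in, can be illustrated without the neural machinery. The sketch below uses simple count-based co-occurrence vectors (not the trained embeddings or corpus from this project) on a handful of made-up tweet-like strings; the toy data and window size are assumptions for illustration only.

```python
from collections import Counter, defaultdict
import math

# Hypothetical toy stand-ins for tweets (not from the study's corpus).
tweets = [
    "feeling so depressed and tired today",
    "depressed again cannot sleep tired and sad",
    "great workout feeling happy and energized today",
    "happy sunny day energized and grateful",
]

# Build a co-occurrence vector for each word from a +/-2 word window.
window = 2
cooc = defaultdict(Counter)
for tweet in tweets:
    words = tweet.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                cooc[w][words[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# In this toy corpus, "depressed" sits closer to "tired" than to "happy",
# mirroring the contextual separation the paper's embeddings capture at scale.
print(cosine(cooc["depressed"], cooc["tired"]))
print(cosine(cooc["depressed"], cooc["happy"]))
```

Neural embeddings such as word2vec learn dense versions of these vectors by prediction rather than counting, but the geometric intuition, similar contexts yield similar vectors, is the same.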
Description
1:00 PM–3:00 PM
Graduate
Date
2019-01-01