

The Workshop on Computational Linguistics and Clinical Psychology

CLPsych 2024 will be held in conjunction with EACL 2024
Carlson Suite, Radisson Blu Resort, St. Julian’s, Malta

Thursday March 21st 2024


All times are in local time (Malta, CET). Venue is the Carlson Suite, Radisson Blu Resort.
Note: on the day of CLPsych, CET is 5 (not 6) hours ahead of EST, and 8 (not 9) hours ahead of PT, because Daylight Saving Time starts 2 weeks earlier in the US than in Europe.

8:45 AM

Workshop welcome & announcements

(chairs: Andrew Yates, Bart Desmet, Emily Prud'hommeaux)

9:00 AM

Keynote 1: Andrew Schwartz

"Approaching a Theoretical Upper-limit for Predicting Standard Psychological Scales: Lessons during a Decade of CLPsychs"

Abstract: Over the last decade, NLP has had a rocky relationship with standard psychological scales. On the one hand, such scales are the predominant bar against which NLP-based psychological assessments are measured. On the other, the ultimate goal is for NLP to reveal greater depths and produce more powerful tools about the human condition than could be dreamed with traditional scales. In this talk, I will review lessons from over 10 years of work towards better understanding and measuring human mental health with language. I will cover numerous objectives and tasks of the field, in the context of its on-again, off-again relationship with standard scales which will culminate in suggesting the relationship has reached a long-term resolution... is it a permanent breakup or a union? Attend to find out and to consider what is next with this rocky past behind us.

9:45 AM

Shared task session (chairs: Adam Tsakalidis & Maria Liakata)

  • Overview of the CLPsych 2024 Shared Task: Leveraging Large Language Models to Identify Evidence of Suicidality Risk in Online Posts. Jenny Chim, Adam Tsakalidis, Dimitris Gkoumas, Dana Atzil-Slonim, Yaakov Ophir, Ayah Zirikly, Philip Resnik, Maria Liakata

  • Integrating Supervised Extractive and Generative Language Models for Suicide Risk Evidence Summarization. Rika Tanaka, Yusuke Fukazawa

  • Utilizing Large Language Models to Identify Evidence of Suicidality Risk through Analysis of Emotionally Charged Posts. Ahmet Yavuz Uluslu, Andrianos Michail, Simon Clematide

  • Archetypes and Entropy: Theory-Driven Extraction of Evidence for Suicide Risk. Vasudha Varadarajan, Allison Lahnala, Adithya V Ganesan, Gourab Dey, Siddharth Mangalik, Ana-Maria Bucur, Nikita Soni, Rajath Rao, Kevin Lanning, Isabella Vallejo, Lucie Flek, H. Andrew Schwartz, Charles Welch, Ryan Boyd

10:30 AM

Coffee break

10:45 AM

Poster session

Main track

  • Assessing Motivational Interviewing Sessions with AI-Generated Patient Simulations. Stav Yosef, Moreah Zisquit, Ben Cohen, Anat Klomek Brunstein, Kfir Bar, Doron Friedman

  • Linguistic markers of schizophrenia: a case study of Robert Walser. Ivan Nenchev, Tatjana Scheffler, Marie de la Fuente, Heiner Stuke, Benjamin Wilk, Sandra Anna Just, Christiane Montag

  • Ethical thematic and topic modelling analysis of sleep concerns in a social media derived suicidality dataset. Martin Orr, Kirsten Van Kessel, David Parry

  • Automatic Annotation of Dream Report's Emotional Content with Large Language Models. Lorenzo Bertolini, Valentina Elce, Adriana Michalak, Hanna-Sophia Widhoezl, Giulio Bernardi, Julie Weeds

  • Analysing relevance of Discourse Structure for Improved Mental Health Estimation. Navneet Agarwal, Gaël Dias, Sonia Dollfus

  • Prevalent Frequency of Emotional and Physical Symptoms in Social Anxiety using Zero Shot Classification: An Observational Study. Muhammad Rizwan, Jure Demšar

  • Comparing panic and anxiety on a dataset collected from social media. Sandra Mitrović, Oscar William Lithgow-Serrano, Carlo Schillaci

  • Detecting a Proxy for Potential Comorbid ADHD in People Reporting Anxiety Symptoms from Social Media Data. Claire Lee, Noelle Lim, Michael Guerzhoy

Shared task track

  • Team ISM at CLPsych 2024: Extracting Evidence of Suicide Risk from Reddit Posts with Knowledge Self-Generation and Output Refinement using A Large Language Model. Vu Tran, Tomoko Matsui

  • Exploring Instructive Prompts for Large Language Models in the Extraction of Evidence for Supporting Assigned Suicidal Risk Levels. Jiyu Chen, Vincent Nguyen, Xiang Dai, Diego Molla-Aliod, Cecile Paris, Sarvnaz Karimi

  • Psychological Assessments with Large Language Models: A Privacy-Focused and Cost-Effective Approach. Sergi Blanco-Cuaresma

  • Incorporating Word Count Information into Depression Risk Summary Generation: INF@UoS CLPsych 2024 Submission. Judita Preiss, Zenan Chen

  • Extracting and Summarizing Evidence of Suicidal Ideation in Social Media Contents Using Large Language Models. Loitongbam Gyanendro Singh, Junyu Mao, Rudra Mutalik, Stuart Middleton

  • Detecting Suicide Risk Patterns using Hierarchical Attention Networks with Large Language Models. Koushik L, Vishruth M, Anand Kumar M

  • Using Large Language Models (LLMs) to Extract Evidence from Pre-Annotated Social Media Data. Falwah Alhamed, Julia Ive, Lucia Specia

  • XinHai@CLPsych 2024 Shared Task: Prompting Healthcare-oriented LLMs for Evidence Highlighting in Posts with Suicide Risk. Jingwei Zhu, Ancheng Xu, Minghuan Tan, Min Yang

  • A Dual-Prompting for Interpretable Mental Health Language Models. Hyolim Jeon, Dongje Yoo, Daeun Lee, Sejung Son, Seungbae Kim, Jinyoung Han

  • Cheap Ways of Extracting Clinical Markers from Texts. Anastasia Sandu, Teodor Mihailescu, Sergiu Nisioi

11:45 AM

Panel discussion: 10 years of CLPsych

  • Margaret Mitchell
  • Glen Coppersmith
  • Philip Resnik
  • April Foreman (moderator)

12:45 PM

Lunch break

2:00 PM

Paper session 1
Clinical discussant: Kate Niederhoffer
Topic: Depression

  • Delving into the Depths: Evaluating Depression Severity through BDI-biased Summaries. Mario Aragon, Javier Parapar, David E Losada

  • Explainable Depression Detection Using Large Language Models on Social Media Data. Yuxi Wang, Diana Inkpen, Prasadith Kirinde Gamaarachchige

  • Your Model Is Not Predicting Depression Well And That Is Why: A Case Study of PRIMATE Dataset. Kirill Milintsevich, Kairit Sirts, Gaël Dias

3:00 PM

Keynote 2: João Sedoc

"Empathy and Emotion in Conversational AI"

Abstract: As conversational agents are increasingly embedded in our lives, they will need to connect with users on a human level in addition to merely comprehending us. In this talk, we begin to incorporate empathy and emotion with Large Language Models (LLMs), aided by our Empathic Conversations dataset. Additionally, we discuss how we can integrate psychological metrics for evaluating conversational agents along dimensions of emotion, empathy, and user traits. We conclude with implications of empathetic conversational AI for advancing domains such as mental health and customer service.

3:45 PM

Coffee break

4:15 PM

Paper session 2

Topic: Therapy
Discussant: Justin Tauscher

  • How Can Client Motivational Language Inform Psychotherapy Agents? Van Hoang, Eoin Rogers, Robert Ross

  • Therapist Self-Disclosure as a Natural Language Processing Task. Natalie Shapira, Tal Alfi-Yogev

  • Using Daily Language to Understand Drinking: Multi-Level Longitudinal Differential Language Analysis. Matthew Matero, Huy Vu, August Nilsson, Syeda Mahwish, Young Min Cho, James McKay, Johannes Eichstaedt, Richard Rosenthal, Lyle Ungar, H. Andrew Schwartz

5:15 PM

Closing remarks (OC)

5:30 PM

Social dinner

Speakers and Panelists

Dr. Andrew Schwartz

H. Andrew Schwartz is the Director of the Human Language Analysis Beings (HLAB), an interdisciplinary lab housed in Computer Science at Stony Brook University (SUNY). Before that, he co-founded the World Well-Being Project, now a consortium of researchers from the University of Pennsylvania, Stony Brook University, and Stanford University focused on developing large-scale language analyses that reveal differences in health, personality, and well-being. He received a 2020 DARPA Young Faculty Award and an outstanding paper award at ACL 2023. Andrew is an active member of the natural language processing, psychology, and health informatics communities. He created and co-maintains the Differential Language Analysis ToolKit (DLATK), used in over 100 studies.

Dr. Glen Coppersmith

Glen is the Chief Data Officer at SonderMind following its acquisition of Qntfy, which he founded and led as CEO. He is recognized as a leader and pioneer in the space of mental health and data science, with early and frequent peer-reviewed publications on the advancements made at SonderMind and Qntfy. Glen's work has been covered by several major media outlets, including the Today Show, Crunchbase, Mashable, The Mighty, and Scientific American. He frequently speaks about the scientific advances, ethics, and pragmatics of using data to spark sustained improvements in wellbeing, at venues including The White House, the National Institute of Mental Health, SXSW, and NASA.

Dr. Margaret Mitchell

Margaret Mitchell is a researcher focused on the ins and outs of machine learning and ethics-informed AI development in tech. She has published around 100 papers on natural language generation, assistive technology, computer vision, and AI ethics, and holds multiple patents in the areas of conversation generation and sentiment classification. She was recently recognized as one of Time’s Most Influential People of 2023. She currently works at Hugging Face as Chief Ethics Scientist, driving forward work in the ML development ecosystem, ML data governance, AI evaluation, and AI ethics. She previously worked at Google AI as a Staff Research Scientist, where she founded and co-led Google’s Ethical AI group, focused on foundational AI ethics research and operationalizing AI ethics within Google. Before joining Google, she was a researcher at Microsoft Research, focused on computer vision-to-language generation, and was a postdoc at Johns Hopkins, focused on Bayesian modeling and information extraction. She holds a PhD in Computer Science from the University of Aberdeen and a Master’s in computational linguistics from the University of Washington. While earning her degrees, she also worked from 2005 to 2012 on machine learning, neurological disorders, and assistive technology at Oregon Health and Science University. She has spearheaded a number of workshops and initiatives at the intersections of diversity, inclusion, computer science, and ethics. Her work has received awards from Secretary of Defense Ash Carter and the American Foundation for the Blind, and has been implemented by multiple technology companies.

Dr. Philip Resnik

Dr. April Foreman

April C. Foreman, Ph.D., is a Licensed Psychologist serving Veterans as the Program Director for Technology and Innovation for the Veterans Crisis Line. She served as an Executive Committee member for the Board of the American Association of Suicidology, and as the 2017 Acting Director of Technology and Innovation for the VA’s Office of Suicide Prevention. She is a member of the team that launched a recognized innovation in data donation for groundbreaking suicide research. She is passionate about helping people with severe (sometimes lethal) emotional pain, and in particular advocates for people with Borderline Personality Disorder, which has one of the highest mortality rates of all mental illnesses. She is known for her work at the intersection of technology, social media, and mental health, with nationally recognized implementations of innovations in the use of technology and mood tracking. She is the 2015 recipient of the Roger J. Tierney Award for her work as a founder and moderator of the first sponsored regular mental health chat on Twitter, the weekly Suicide Prevention Social Media chat (#SPSM, sponsored by the American Association of Suicidology, AAS). Her dream is to use her unique skills and vision to build a mental health system effectively and elegantly designed to serve the people who need it.

Dr. João Sedoc

João Sedoc is an Assistant Professor of Information Systems in the Department of Technology, Operations and Statistics at New York University Stern School of Business. His research areas are at the intersection of machine learning and natural language processing. His interests include large language models, generative AI, conversational agents, deep learning, hierarchical models, and time series analysis. He received his PhD in Computer and Information Science from the University of Pennsylvania.


Dr. Kate Niederhoffer

Dr. Justin Tauscher
