Advances in temporal information extraction
By Judith Jeyafreeda

Judith Jeyafreeda will give a talk on advances in temporal information extraction.

Abstract

Temporal information extraction is critical for understanding clinical narratives, particularly in the context of rare genetic diseases, where longitudinal data are sparse and complex. This study presents a novel methodology for annotating and adapting temporal relations in French clinical texts, focusing on narratives related to rare genetic conditions. It introduces a specialized annotation scheme tailored to the nuances of French medical language and evaluates transformer-based models, including CamemBERT variants, for temporal relation classification. The experiments demonstrate that parameter-efficient fine-tuning (PEFT) strategies significantly enhance performance while maintaining computational efficiency. The findings underscore the importance of linguistic adaptation and domain-specific modeling in improving temporal information extraction for under-resourced languages and rare disease contexts.
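
To give a rough idea of what parameter-efficient fine-tuning of a CamemBERT variant for temporal relation classification can look like (this is a minimal sketch, not the speaker's actual pipeline), the snippet below applies LoRA adapters from the Hugging Face peft library to camembert-base used as a sequence classifier. The model name, the label set, and the inline entity-marker format are illustrative assumptions.

```python
# Minimal sketch: LoRA-based parameter-efficient fine-tuning of CamemBERT
# for temporal relation classification, framed as sequence classification.
# Model name, label set, and LoRA hyperparameters are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Hypothetical temporal relation labels between a pair of marked events.
LABELS = ["BEFORE", "AFTER", "OVERLAP"]

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "camembert-base", num_labels=len(LABELS)
)

# LoRA adapters on the attention projections; only these (plus the
# classification head) are trained, while the base encoder stays frozen.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

# Example input: a French clinical sentence with two events marked inline.
text = "Le patient a été <e1>hospitalisé</e1> avant le <e2>diagnostic</e2>."
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits
print(LABELS[logits.argmax(dim=-1).item()])
```

In a setup like this, only the adapter matrices and the classification head are updated during training, which is what keeps PEFT computationally light compared with full fine-tuning of the encoder.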

Biography

I’m Judith Jeyafreeda Andrew, an AI scientist and engineer focused on transformer models, distributed training, and language-centric multimodal learning. My work bridges academic research and real-world deployment, especially in medical NLP and open-source development. I’m fluent in Python, PyTorch, and JAX, and have led interdisciplinary projects that prioritize both technical rigor and societal relevance. I’m deeply motivated by the potential of AI to empower people and democratize intelligence, and I thrive in collaborative environments that value curiosity, clarity, and impact.