Safe and Effective Use of Large Language Models in Scientific Research
Posters on site | Attendance Tue, 05 May, 16:15–18:00 (CEST) | Display Tue, 05 May, 14:00–18:00 | Hall X4
This session will highlight both the transformative applications and the critical challenges of using LLMs in science. Key topics include developing specialized guardrails against hallucination and bias; creating robust evaluation frameworks, including uncertainty quantification; ensuring scientific integrity, data governance and reproducibility; and addressing unique scientific risks.
We invite submissions on novel scientific applications of LLMs and agentic workflows, methods that ensure integrity and reproducibility, safety mechanisms (e.g., guardrails, risk mitigation, alignment), responsible AI frameworks (including human-in-the-loop design, fairness, and ethics), and lessons learned from real-world deployments. Our goal is to foster discussion on pathways toward the safe, effective, and trustworthy use of LLMs for advancing science.
X4.36 | EGU26-20967
Automating Requirements Elicitation using Large Language Models and Speech Processing (withdrawn)