[AI Seminar Series] Seminar by Prof. Yue Dong, Friday April 24th, 12-1pm, MRB Seminar Room
Vassilis Tsotras
vassilis.tsotras at ucr.edu
Sun Apr 19 22:01:35 PDT 2026
The next AI Seminar will be on Friday April 24th, 12-1pm, in the MRB
Seminar Room (1st floor).
*** Pizza and refreshments will be provided ***
To keep track of the number of attendees, please *register* at:
https://www.eventbrite.com/e/ai-seminar-series-tickets-1987802297181
The talk will be given by *Prof. Yue Dong,* Department of Computer Science
and Engineering, UCR
TITLE: Revealing Hidden Vulnerabilities in Long-Context Large Language
Models
ABSTRACT:
Large Language Models are increasingly deployed in applications that
require reasoning over long and complex contexts, such as extended
documents, multi-turn interactions, retrieved evidence, and multimodal
inputs. While these capabilities make LLMs more powerful, they also
introduce new and underexplored safety risks. In long-context settings,
safety-relevant signals can be diluted or overridden, boundaries between
context segments can break down, and harmful influence can emerge only when
information is recombined during reasoning.
In this talk, I will highlight recent research uncovering hidden
vulnerabilities in long-context LLMs, including hallucinations, alignment
failures, and adversarial weaknesses across both text and multimodal
systems. These findings suggest that many existing safety evaluations and
defenses, which are often designed for short and self-contained inputs, are
insufficient for long-context reasoning. Addressing these challenges
requires new benchmarks, interpretability tools, and defense strategies for
safer and more reliable LLMs.
Bio:
Yue Dong is an Assistant Professor of Computer Science at the University of
California, Riverside. Her research focuses on building controllable,
trustworthy, and efficient large language models. She has published over 40
peer-reviewed papers in leading venues including ACL, ICLR, ICML, TACL,
NAACL, EMNLP, and AAAI. Her recent work spans hallucination reduction,
efficient post-training, and AI safety and robustness, including
red-teaming and alignment of multimodal language models. Her research has
received multiple recognitions, including a Best Paper Award at the 2023
SoCal NLP Symposium for work on multimodal LLM safety. Prior to joining UC
Riverside, she completed PhD research internships at Google, Microsoft, and
AI2.
------------------------------------
Sponsored by the RAISE at UCR Institute, the AI Seminar Series presents
speakers working on cutting-edge foundational AI or applying AI in their
research. The goal of these seminars is to inform the UCR community about
current trends in AI research and promote collaborations between faculty in
this emerging field. These seminars are open to interested faculty and
graduate/undergraduate students. Please forward this email to other
colleagues or students in your lab who may be interested. After the seminar,
a discussion will follow for questions, open problems, and ideas for
possible collaborations.
Sincerely,
Vassilis Tsotras
Professor, CSE Department
co-Director, RAISE at UCR Institute
Amit Roy-Chowdhury
Professor, ECE Department
co-Director, RAISE at UCR Institute