<div dir="ltr">
<div>The next AI Seminar will be on Friday, April 24th, 12-1pm, in the MRB Seminar Room (1st floor).</div>
<div><br></div>
<div>*** Pizza and refreshments will be provided ***<br><br>To keep track of the number of attendees, please *register* at:</div>
<div><a href="https://www.eventbrite.com/e/ai-seminar-series-tickets-1987802297181" target="_blank">https://www.eventbrite.com/e/ai-seminar-series-tickets-1987802297181</a></div>
<div><br></div>
<div>The talk will be given by <b>Prof. Yue Dong</b>, Department of Computer Science and Engineering, UCR.<br><br>TITLE: Revealing Hidden Vulnerabilities in Long-Context Large Language Models</div>
<div><br></div>
<div>ABSTRACT:</div>
<div>Large Language Models are increasingly deployed in applications that require reasoning over long and complex contexts, such as extended documents, multi-turn interactions, retrieved evidence, and multimodal inputs. While these capabilities make LLMs more powerful, they also introduce new and underexplored safety risks. In long-context settings, safety-relevant signals can be diluted or overridden, boundaries between context segments can break down, and harmful influence can emerge only when information is recombined during reasoning.<br><br>In this talk, I will highlight recent research uncovering hidden vulnerabilities in long-context LLMs, including hallucinations, alignment failures, and adversarial weaknesses across both text and multimodal systems. These findings suggest that many existing safety evaluations and defenses, which are often designed for short, self-contained inputs, are insufficient for long-context reasoning. Addressing these challenges requires new benchmarks, interpretability tools, and defense strategies for safer and more reliable LLMs.</div>
<div><br></div>
<div>Bio:</div>
<div>Yue Dong is an Assistant Professor of Computer Science at the University of California, Riverside. Her research focuses on building controllable, trustworthy, and efficient large language models. She has published over 40 peer-reviewed papers in leading venues including ACL, ICLR, ICML, TACL, NAACL, EMNLP, and AAAI. Her recent work spans hallucination reduction, efficient post-training, and AI safety and robustness, including red-teaming and alignment of multimodal language models. Her research has received multiple recognitions, including a Best Paper Award at the 2023 SoCal NLP Symposium for work on multimodal LLM safety. Prior to joining UC Riverside, she completed PhD research internships at Google, Microsoft, and AI2.</div>
<div><br></div>
<div>------------------------------------<br>Sponsored by the RAISE@UCR Institute, the AI Seminar Series presents speakers working on cutting-edge Foundational AI or applying AI in their research. The goal of these seminars is to inform the UCR community about current trends in AI research and to promote collaborations between faculty in this emerging field. These seminars are open to interested faculty and graduate/undergraduate students. Please forward this email to other colleagues or students in your lab who may be interested. After the seminar, a discussion will follow for questions, open problems, ideas for possible collaborations, etc.<br><br>Sincerely,<br>Vassilis Tsotras<br>Professor, CSE Department<br>co-Director, RAISE@UCR Institute<br><br>Amit Roy-Chowdhury<br>Professor, ECE Department<br>co-Director, RAISE@UCR Institute</div>
</div>