<div dir="ltr"><div dir="ltr">Reminder for the AI Seminar, this Friday at noon. Please register with the link below if you plan to attend.<div><br></div><div>Sincerely,</div><div>V. Tsotras</div></div><div><br></div>---------------------------<br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Sat, Oct 25, 2025 at 1:18 PM Vassilis Tsotras <<a href="mailto:vassilis.tsotras@ucr.edu">vassilis.tsotras@ucr.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div dir="ltr"><div><div>The next AI Seminar will be next Friday, October 31st, 12:00-1:00pm at the MRB Seminar Room (1st floor).</div></div><div><br></div><div>*** Pizza and refreshments will be provided ***<br><br>To keep track of the number of attendees, please *register* at:</div><div><a href="https://www.eventbrite.com/e/ai-seminar-series-tickets-1884219173269" target="_blank">https://www.eventbrite.com/e/ai-seminar-series-tickets-1884219173269</a></div><div><br></div><div><br></div><div>The talk will be given by <b>Prof. Zhouxing Shi</b>, Department of Computer Science and Engineering, UCR<br><br>TITLE: Formal verification and verification-aware training for trustworthy AI</div><div><br></div><div>ABSTRACT: </div><div>The revolutionary capabilities of AI with machine learning have enabled an increasingly broad range of applications, which has brought many new challenges in ensuring their trustworthiness. In this talk, I will present our research on trustworthy AI with verifiable guarantees. I will first introduce our frameworks for the automatic formal verification of AI models as general computational graphs, supporting general neural network architectures, nonlinearities, and safety properties. 
I will also talk about our work on testing the soundness of neural network verifiers, to ensure the reliability of the verifiers themselves. Then, I will present our verification-aware neural network training techniques for producing verification-friendly AI models with stronger verifiability. Finally, I will discuss applications of our verification and verification-aware training in synthesizing verifiably stable neural network-based controllers for nonlinear dynamical systems.<br><br></div><div>Bio:</div><div>Zhouxing Shi joined UC Riverside in July 2025 as an Assistant Professor in Computer Science and Engineering. He completed his Ph.D. at the UCLA Computer Science Department. His research focuses on machine learning and trustworthy AI, with the goal of building more reliable AI models. His recent research topics mostly involve the robustness, safety, and verification of AI models.</div><div><br></div><div>------------------------------------<br>Sponsored by the RAISE@UCR Institute, the AI Seminar Series presents speakers working on cutting-edge Foundational AI or applying AI in their research. The goal of these seminars is to inform the UCR community about current trends in AI research and promote collaborations between faculty in this emerging field. These seminars are open to interested faculty and graduate/undergraduate students. Please forward this email to other colleagues or students in your lab who may be interested. 
After the seminar, a discussion will follow for questions, open problems, ideas for possible collaborations, etc.<br><br>Sincerely,<br>Vassilis Tsotras<br>Professor, CSE Department<br>co-Director, RAISE@UCR Institute<br><br>Amit Roy-Chowdhury<br>Professor, ECE Department<br>co-Director, RAISE@UCR Institute</div></div>
</div></div>
</div>
</div></div>
</div>
</blockquote></div></div>