Hi everyone,

I will be presenting Transformer-based large language models:

- Transformer: https://proceedings.neurips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
- BERT: https://arxiv.org/abs/1810.04805
- GPT: https://www.mikecaptain.com/resources/pdf/GPT-1.pdf
- GPT-2: https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
- GPT-3: https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html

Best regards,
Yudi