Commit 3ec8767
committed: guest lecture update
1 parent b0c3122

File tree: 3 files changed, +25 −28 lines

images/people/chrisbishop.jpg (111 KB)
images/people/mathiaslechner.jpg (9 KB)

index.html: 25 additions & 28 deletions
@@ -654,14 +654,14 @@ <h6 class="card-title">Software Lab 3</h6>
  <div class="col-md-2 v-center">
  <div class="card card-lecture">
  <div class="card-icon">
- <img src="images/thumb/mystery1.jpg" alt="">
+ <img src="images/thumb/molecule.gif" alt="">
  </div>
  </div>
  </div>
  <div class="col-md-1"></div>
  <div class="col-md-8 v-center">
  <div class="card card-lecture">
- <h5 class="card-title"><highlight>Guest Lecture</highlight></h5>
+ <h5 class="card-title"><highlight>AI for Science</highlight></h5>
  </div>
  </div>
  </div>
@@ -673,8 +673,8 @@ <h6 class="card-title">Lecture 7</h6>
  <!-- <p>[<a data-toggle="modal" data-target="#themis_modal">Info</a>] [<a href='slides/6S191_MIT_DeepLearning_L5.pdf'>Slides</a>] [<a href="https://www.youtube.com/watch?v=kIiO4VSrivU&list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI&index=5">Video</a>]</p> -->
  <!-- <p>[<a data-toggle="modal" data-target="#themis_modal">Info</a>] [<a href='slides/6S191_MIT_DeepLearning_L5.pdf'>Slides</a>] [<b>Video</b>] <i>coming soon!</i></p> -->
  <!-- <p>[<a data-toggle="modal" data-target="#google_modal">Info</a>] [<a href="https://www.youtube.com/watch?v=ZNodOsz94cc&list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI&index=7">Video</a>] -->
- <!-- <p>[<a data-toggle="modal" data-target="#google_modal">Info</a>] [<b>Slides</b>] [<b>Video</b>] <i>coming soon!</i></p> -->
- <p>[<b>Slides</b>] [<b>Video</b>] <i>coming soon!</i></p>
+ <p>[<a data-toggle="modal" data-target="#microsoft_modal">Info</a>] [<b>Slides</b>] [<b>Video</b>] <i>coming soon!</i></p>
+ <!-- <p>[<b>Slides</b>] [<b>Video</b>] <i>coming soon!</i></p> -->
  </div>
  </div>
  </div> <!-- end of Lecture 7 -->
@@ -686,14 +686,14 @@ <h6 class="card-title">Lecture 7</h6>
  <div class="col-md-2 v-center">
  <div class="card card-lecture">
  <div class="card-icon">
- <img src="images/thumb/mystery2.jpg" alt="">
+ <img src="images/thumb/rocket.gif" alt="">
  </div>
  </div>
  </div>
  <div class="col-md-1"></div>
  <div class="col-md-8 v-center">
  <div class="card card-lecture">
- <h5 class="card-title"><highlight>Guest Lecture</highlight></h5>
+ <h5 class="card-title"><highlight>Secrets to Massively Parallel Training</highlight></h5>
  </div>
  </div>
  </div>
@@ -704,8 +704,8 @@ <h6 class="card-title">Lecture 8</h6>
  <!-- <i>Apr. 21, 2025</i> -->
  <!-- <p>[<a data-toggle="modal" data-target="#liquid_modal">Info</a>] [<a href='slides/6S191_MIT_DeepLearning_L8.pdf'>Slides</a>] [<b>Video</b>] <i>coming soon!</i></p> -->
  <!-- <p>[<a data-toggle="modal" data-target="#liquid_modal">Info</a>] [<a href='https://www.youtube.com/watch?v=_HfdncCbMOE&list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI&index=9'>Video</a>] </p> -->
- <!-- <p>[<a data-toggle="modal" data-target="#liquid_modal">Info</a>] [<b>Slides</b>] [<b>Video</b>] <i>coming soon!</i></p> -->
- <p>[<b>Slides</b>] [<b>Video</b>] <i>coming soon!</i></p>
+ <p>[<a data-toggle="modal" data-target="#liquid_modal">Info</a>] [<b>Slides</b>] [<b>Video</b>] <i>coming soon!</i></p>
+ <!-- <p>[<b>Slides</b>] [<b>Video</b>] <i>coming soon!</i></p> -->
  </div>
  </div>
  </div> <!-- end of Lecture 8 -->
@@ -1433,7 +1433,7 @@ <h4>Social Media</h4>

  <!-- Modal -->
- <div class="modal fade" id="google_modal" role="dialog">
+ <div class="modal fade" id="microsoft_modal" role="dialog">
  <div class="modal-dialog">

  <!-- Modal content-->
@@ -1447,14 +1447,14 @@ <h4>Social Media</h4>
  <div class="col-md-4 v-center">
  <div class="card card-lecture card-modal">
  <div class="card-icon">
- <img src="images/people/petergrabowski.jpg" alt="">
+ <img src="images/people/chrisbishop.jpg" alt="">
  </div>
  </div>
  </div>
  <div class="col-md-8 v-center">
  <div class="card card-lecture card-modal">
- <h4 align="left">Introduction to Language Modeling</h4>
- <h5 align="left">Peter Grabowski, Lead of Gemini Applied Research, Google</h5>
+ <h4 align="left">AI for Science</h4>
+ <h5 align="left">Chris Bishop, Technical Fellow, Microsoft</h5>
  </div>
  </div>
  </div>
@@ -1463,12 +1463,12 @@ <h5 align="left">Peter Grabowski, Lead of Gemini Applied Research, Google</h5>
  <div class="col-md-12 v-center">
  <h6>Talk Abstract</h6>
  <p>
- Want to get started with LLMs? This lecture will cover an introduction to language modeling and prompt engineering, example use cases and applications, and a discussion of common considerations for LLM usage (cost, efficiency, accuracy, bias).
+ Coming soon!
  </p>

  <h6>Speaker Bio</h6>
  <p>
- Peter leads the Gemini Applied Research group, focused on developing fast, efficient, and scalable models in partnership with DeepMind, Search, Ads, Cloud, and other teams across Google. Prior to that, he led a group focused on Google's Enterprise AI, worked on making the Google Assistant better for Kids, and led the data integration / machine learning team at Nest. Peter loves to teach, and is a member of the faculty at UC Berkeley's School of Information, where he teaches courses focused on Deep Learning and Natural Language Processing.
+ Christopher Bishop is a Microsoft Technical Fellow and a member of Microsoft Research AI for Science. Chris obtained a BA in Physics from Oxford, and a PhD in Theoretical Physics from the University of Edinburgh, with a thesis on quantum field theory. After his PhD he joined the Theoretical Physics Division of Culham Laboratory where he conducted research into the physics of magnetically confined fusion plasmas. During this time, he developed an interest in machine learning and became Head of the Applied Neurocomputing Centre at AEA Technology. He was subsequently elected to a Chair in the Department of Computer Science and Applied Mathematics at Aston University, where he set up and led the Neural Computing Research Group. He joined Microsoft in 1997 and was Lab Director of Microsoft Research Cambridge from 2015 until 2022 when he founded the new AI for Science team. At Microsoft Research, Chris oversees a global portfolio of research, focussed on machine learning for the natural sciences.
  </p>
  </div>
  </div>
@@ -1493,14 +1493,14 @@ <h6>Speaker Bio</h6>
  <div class="col-md-4 v-center">
  <div class="card card-lecture card-modal">
  <div class="card-icon">
- <img src="images/people/maximelabonne.jpg" alt="">
+ <img src="images/people/mathiaslechner.jpg" alt="">
  </div>
  </div>
  </div>
  <div class="col-md-8 v-center">
  <div class="card card-lecture card-modal">
- <h4 align="left">Introduction to LLM Post-Training</h4>
- <h5 align="left">Maxime Labonne, Head of Post-Training, Liquid AI</h5>
+ <h4 align="left">Massively Parallel Training</h4>
+ <h5 align="left">Mathias Lechner, Co-Founder and Chief Technology Officer, Liquid AI</h5>
  </div>
  </div>
  </div>
@@ -1509,12 +1509,12 @@ <h5 align="left">Maxime Labonne, Head of Post-Training, Liquid AI</h5>
  <div class="col-md-12 v-center">
  <h6>Talk Abstract</h6>
  <p>
- In this talk, we will cover the fundamentals of modern LLM post-training at various scales with concrete examples. High-quality data generation is at the core of this process, focusing on the accuracy, diversity, and complexity of the training samples. We will explore key training techniques, including supervised fine-tuning, preference alignment, and model merging. The lecture will delve into evaluation frameworks with their pros and cons for measuring model performance. We will conclude with an overview of emerging trends in post-training methodologies and their implications for the future of LLM development.
+ This lecture talks about how to scale training of deep neural networks to thousands of GPUs. It begins by motivating why GPUs are essential for training (comparing FLOPs of GPUs vs CPUs) and why scaling to larger models and datasets improves performance, drawing on scaling laws from LLaMA and Kaplan et al. The talk then explores the memory requirements of training and techniques to reduce them, including activation checkpointing and offloading. The bulk of the lecture covers parallelism strategies: data parallelism, tensor parallelism, pipeline parallelism, and sequence/context parallelism, as well as sharding approaches like DeepSpeed ZeRO and FSDP. It also touches on sparsity through Mixture of Experts and expert parallelism. Throughout, network bandwidth is highlighted as a key bottleneck. The lecture concludes with a case study of LFM2 showing how these techniques combine in practice.
  </p>

  <h6>Speaker Bio</h6>
  <p>
- Maxime Labonne is Head of Post-Training at Liquid AI. He holds a Ph.D. in Machine Learning from the Polytechnic Institute of Paris and is a Google Developer Expert in AI/ML. He has made significant contributions to the open-source community, including the LLM Course, tutorials on fine-tuning, tools such as LLM AutoEval, and several state-of-the-art models like NeuralDaredevil. He is the author of the best-selling books “LLM Engineer’s Handbook” and “Hands-On Graph Neural Networks Using Python”.
+ Mathias Lechner is Co-Founder and Chief Technology Officer (CTO) at Liquid AI, as well as a Research Affiliate at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, where he collaborates with Prof. Daniela Rus. He completed his PhD in 2022 at the Institute of Science and Technology Austria (ISTA), under the supervision of Tom Henzinger. Before his PhD, he earned his master’s (2017) and bachelor’s (2016) degrees in Computer Science from the Vienna University of Technology (TU Wien).
  </p>
  </div>
  </div>
@@ -1525,7 +1525,7 @@ <h6>Speaker Bio</h6>

  <!-- Modal -->
- <div class="modal fade" id="microsoft_modal" role="dialog">
+ <div class="modal fade" id="google_modal" role="dialog">
  <div class="modal-dialog">

  <!-- Modal content-->
@@ -1539,14 +1539,14 @@ <h6>Speaker Bio</h6>
  <div class="col-md-4 v-center">
  <div class="card card-lecture card-modal">
  <div class="card-icon">
- <img src="images/people/avaamini.jpg" alt="">
+ <img src="images/people/anon.jpg" alt="">
  </div>
  </div>
  </div>
  <div class="col-md-8 v-center">
  <div class="card card-lecture card-modal">
- <h4 align="left">AI to Optimize Biology</h4>
- <h5 align="left">Ava Amini, Senior Research Scientist, Microsoft</h5>
+ <h4 align="left">Coming soon!</h4>
+ <h5 align="left">Coming soon!</h5>
  </div>
  </div>
  </div>
@@ -1555,12 +1555,12 @@ <h5 align="left">Ava Amini, Senior Research Scientist, Microsoft</h5>
  <div class="col-md-12 v-center">
  <h6>Talk Abstract</h6>
  <p>
- The potential of AI in biology is immense, yet its success is contingent on interfacing effectively with wet-lab experimentation and remaining grounded in the system, structure, and physics of biology. I will share how, at Microsoft Research, we are developing new AI systems that help us better understand and design biology via generative design and interactive discovery. I will focus on Generative AI models for the design of novel and useful biomolecules, expanding our ability to engineer new proteins for therapeutic, biological, and industrial applications and beyond.
+ Coming soon!
  </p>

  <h6>Speaker Bio</h6>
  <p>
- Ava Amini is a Senior Researcher at Microsoft, where she develops new AI technologies for precision biology and medicine. She completed her PhD in Biophysics at Harvard University and her BS in Computer Science and Molecular Biology at MIT and has been recognized by the National Academy of Engineering, the National Science Foundation, TEDx, Venture Beats, and the Association of MIT Alumnae, among others, for her research. Ava is passionate about AI education and outreach -- she is a lead organizer and instructor for MIT Introduction to Deep Learning, where she has taught AI to 1000s of students in-person and over 100,000 globally registered students online, garnering more than 11 million online lecture views, and served as a co-founder and director of MomentumAI, which taught all-expenses-paid education programs for high schoolers to learn AI.
+ Coming soon!
  </p>
  </div>
  </div>
@@ -1584,9 +1584,6 @@ <h6>Speaker Bio</h6>
  <div class="row">
  <div class="col-md-4 v-center">
  <div class="card card-lecture card-modal">
- <!-- <div class="card-icon">
- <img src="images/people/nikolaskaris.jpg" alt="">
- </div> -->
  <div class="card-icon">
  <img src="images/people/douglasblank.jpg" alt="">
  </div>
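The new "Massively Parallel Training" abstract added by this commit mentions sharding approaches like DeepSpeed ZeRO and FSDP, which split optimizer state across data-parallel workers so no single GPU holds a full copy. As a hypothetical illustration (not part of the commit, and greatly simplified from real multi-GPU implementations), here is a pure-Python sketch of a ZeRO-stage-1-style step: gradients are averaged as an all-reduce would, then each simulated worker updates only the momentum shard it owns. All names and sizes are illustrative.

```python
def allreduce_mean(worker_grads):
    """Average per-worker gradients elementwise, as an all-reduce would."""
    n = len(worker_grads)
    return [sum(g[i] for g in worker_grads) / n for i in range(len(worker_grads[0]))]

def zero1_step(params, worker_grads, momenta, lr=0.1, beta=0.9):
    """One SGD-with-momentum step where each of len(momenta) workers owns
    one contiguous shard of the optimizer (momentum) state (ZeRO stage 1)."""
    grad = allreduce_mean(worker_grads)
    n_workers = len(momenta)
    shard = len(params) // n_workers
    out = list(params)
    for rank in range(n_workers):          # each rank updates only its shard
        lo = rank * shard
        for j in range(shard):
            momenta[rank][j] = beta * momenta[rank][j] + grad[lo + j]
            out[lo + j] -= lr * momenta[rank][j]
    return out  # in practice an all-gather broadcasts the updated shards

params = [0.0] * 8
momenta = [[0.0] * 4 for _ in range(2)]    # 2 workers, each holds half the state
grads = [[1.0] * 8, [3.0] * 8]             # per-worker grads from different batches
params = zero1_step(params, grads, momenta)
```

The memory saving is that each worker stores `1/n_workers` of the optimizer state; the real systems named in the abstract additionally shard gradients and parameters (ZeRO stages 2-3, FSDP) and overlap communication with compute.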
