Commit d89a516 (1 parent: a9578f6)
Showing 1 changed file with 15 additions and 0 deletions.
...kshops/c-dnn-and-c-transformer-ann-and-snn-for-the-best-of-both-worlds/index.md
@@ -0,0 +1,15 @@
---
title: "C-DNN and C-Transformer: mixing ANNs and SNNs for the best of both worlds"
author:
- "Sangyeob Kim"
date: "2024-05-04"
start_time: 11:00
end_time: 12:15
time_zone: CEST
description: "Join us for a talk by Sangyeob Kim, Postdoctoral researcher at KAIST, on designing efficient accelerators that mix SNNs and ANNs."
upcoming: true
speaker_photo: sangyeob-kim.jpeg
speaker_bio: 'Sangyeob Kim (Student Member, IEEE) received the B.S., M.S., and Ph.D. degrees from the School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in 2018, 2020, and 2023, respectively. He is currently a Post-Doctoral Associate at KAIST. His current research interests include energy-efficient system-on-chip design, especially deep neural network accelerators, neuromorphic hardware, and computing-in-memory accelerators.'
---
Sangyeob and his team have developed the C-DNN processor, which processes object-recognition workloads with 51.3% higher energy efficiency than the previous state-of-the-art processor. They have since extended C-DNN beyond image classification to other applications, developing the C-Transformer, which applies the same technique to a Large Language Model (LLM). With it, they show that the energy consumed by an LLM can be reduced by 30% to 72% compared to the previous state-of-the-art processor. In this talk, we will introduce the C-DNN and C-Transformer processors and discuss how neuromorphic computing can be used in real applications in the future.
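The intuition behind mixing the two network styles can be sketched with a toy energy model: spiking (SNN-style) layers only pay for nonzero events with cheap accumulates, while ANN-style layers pay a full multiply-accumulate for every value, so routing sparse data to the spiking path saves energy. The sketch below is purely illustrative, not the C-DNN design; the per-operation energy numbers, the `hybrid_energy` function, and the routing threshold are all assumptions made up for this example.

```python
import numpy as np

# Hypothetical per-operation energy costs (illustrative only, not
# taken from the C-DNN or C-Transformer papers).
ENERGY_MAC = 4.6  # pJ per multiply-accumulate on the ANN path (assumption)
ENERGY_AC = 0.9   # pJ per spike-driven accumulate on the SNN path (assumption)

def hybrid_energy(activations, density_threshold=0.5):
    """Toy model: route a sparse activation tile to an SNN-style path
    (adds on nonzero values only) and a dense tile to an ANN-style path
    (one MAC per value). Returns (all-ANN baseline, hybrid estimate)."""
    total_values = activations.size
    nonzero = np.count_nonzero(activations)
    density = nonzero / total_values

    baseline = total_values * ENERGY_MAC       # every value costs a MAC
    if density < density_threshold:
        hybrid = nonzero * ENERGY_AC           # sparse: spikes -> cheap adds
    else:
        hybrid = total_values * ENERGY_MAC     # dense: keep the ANN path
    return baseline, hybrid

rng = np.random.default_rng(0)
x = rng.random((64, 64))
x[x < 0.8] = 0.0  # roughly 80% sparse feature map
baseline, hybrid = hybrid_energy(x)
print(f"estimated saving: {1 - hybrid / baseline:.0%}")
```

In this toy setup the saving grows with sparsity, which mirrors the qualitative argument for handing sparse workloads to the spiking path while keeping dense workloads on the conventional one.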