This repository has been archived by the owner on Sep 30, 2024. It is now read-only.

Update README.md
vagheshp authored Jul 22, 2024
1 parent 7607003 commit 7d9dd08
Showing 1 changed file (README.md) with 1 addition and 0 deletions.
@@ -1,6 +1,7 @@
 # Optimized Inference at the Edge with Intel® Tools and Technologies
 This workshop walks you through the workflow of using the Intel® Distribution of OpenVINO™ toolkit to run inference on deep learning models that accelerate vision, automatic speech recognition, natural language processing, recommendation systems, and many other applications. You will learn how to optimize and improve performance, with or without external accelerators, and use tools that help you identify the best hardware configuration for your needs. This workshop also outlines the various frameworks and topologies supported by the Intel® Distribution of OpenVINO™ toolkit.
 
 
+> :warning: The labs in this workshop have been validated with **Intel® Distribution of OpenVINO™ toolkit 2021.3 (openvino_toolkit_2021.3.394)**. Some of the videos shown below are based on OpenVINO 2021.2 and may differ slightly from the slides, but the content is largely the same. **The FPGA plugin is no longer supported in the standard OpenVINO release; FPGA content is available in earlier branches.**
 ## Workshop Agenda
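The workshop description above centers on the Inference Engine workflow: read a model converted to IR format, load it onto a target device, and run inference. As a rough orientation, here is a minimal Python sketch against the OpenVINO 2021.x Inference Engine API that the validated labs target; the model paths, device choice, and dummy input below are illustrative assumptions, not part of the workshop materials.

```python
# Minimal sketch of the OpenVINO 2021.x Inference Engine workflow.
# Assumes a model already converted to IR format ("model.xml"/"model.bin"
# are hypothetical paths) and uses a dummy input for illustration.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Pick the first input/output blob names from the network description.
input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))

# Load the network onto a device; "CPU" can be swapped for other
# supported targets such as "GPU" or "MYRIAD".
exec_net = ie.load_network(network=net, device_name="CPU")

# Run synchronous inference on a zero-filled input matching the
# network's declared input shape.
shape = net.input_info[input_blob].input_data.shape
result = exec_net.infer(inputs={input_blob: np.zeros(shape, dtype=np.float32)})
print(result[output_blob].shape)
```

Changing `device_name` here is the hook for comparing hardware targets, which is the kind of configuration exploration the workshop description refers to.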
