docs/finn/command_line.rst (3 additions, 3 deletions)
@@ -132,16 +132,16 @@ build configuration), and are detailed below.
  * ``report/ooc_synth_and_timing.json`` -- resources and achievable clock frequency from out-of-context synthesis

- * :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.BITFILE` will run Vivado and/or Vitis to insert the FINN accelerator inside a shell, with DMA engines instantiated to move data to/from main memory:
+ * :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.BITFILE` will run Vivado (for Zynq), Vitis, or Slash to insert the FINN accelerator inside a shell, with DMA engines instantiated to move data to/from main memory:

- * ``bitfile/finn-accel.(bit|xclbin)`` -- generated bitfile depending on platform
+ * ``bitfile/finn-accel.(bit|xclbin|vbin)`` -- generated bitfile depending on platform
  * ``report/post_synth_resources.xml`` -- FPGA resource utilization after synthesis

  * :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.PYNQ_DRIVER` will generate a PYNQ Python driver that can be used to interface the generated accelerator:

- * ``driver/driver.py`` -- Python driver that can be used on PYNQ on Zynq or Alveo platforms to launch the accelerator
+ * ``driver/driver.py`` -- Python driver that can be used on PYNQ on Zynq or Vitis Alveo platforms to launch the accelerator
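The flow-to-extension mapping the diff introduces can be summarized as a small shell helper. This is purely illustrative and not part of FINN; the ``platform_ext`` name is made up here, but the extensions match the list above (``.bit`` from the Vivado/Zynq flow, ``.xclbin`` from Vitis, ``.vbin`` from Slash):

```shell
# Illustrative helper (not part of FINN): which bitfile extension the
# BITFILE output type produces for each flow, per the list above.
platform_ext() {
  case "$1" in
    zynq)  echo "bit"     ;;  # Vivado flow for Zynq
    vitis) echo "xclbin"  ;;  # Vitis flow for UltraScale+ Alveo
    slash) echo "vbin"    ;;  # Slash flow for Versal Alveo
    *)     echo "unknown" ;;
  esac
}

platform_ext vitis  # prints "xclbin"
```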
docs/finn/getting_started.rst (41 additions, 10 deletions)
@@ -11,7 +11,7 @@ Quickstart
  2. Set up ``FINN_XILINX_PATH`` and ``FINN_XILINX_VERSION`` environment variables pointing respectively to the Xilinx tools installation directory and version (e.g. ``FINN_XILINX_PATH=/opt/Xilinx`` and ``FINN_XILINX_VERSION=2022.2``)
  3. Clone the FINN compiler from the repo: ``git clone https://github.com/Xilinx/finn/`` and go into the directory where it is cloned
  4. Execute ``./run-docker.sh quicktest`` to verify your installation.
- 5. Optionally, follow the instructions on :ref:`PYNQ board first-time setup`or :ref:`Alveo first-time setup` for board setup.
+ 5. Optionally, follow the instructions on :ref:`PYNQ board first-time setup`, :ref:`Vitis-based Alveo first-time setup`, or :ref:`Slash-based Alveo first-time setup` for board setup.
  6. Optionally, set up a `Vivado/Vitis license`_.
  7. All done! See :ref:`Running FINN in Docker` for the various options on how to run the FINN compiler.
@@ -98,8 +98,9 @@ The most relevant are summarized below:
  * (required) ``FINN_XILINX_PATH`` points to your Xilinx tools installation on the host (e.g. ``/opt/Xilinx``)
  * (required) ``FINN_XILINX_VERSION`` sets the Xilinx tools version to be used (e.g. ``2022.2``)
- * (required for Alveo) ``PLATFORM_REPO_PATHS`` points to the Vitis platform files (DSA).
- * (required for Alveo) ``XRT_DEB_VERSION`` specifies the .deb to be installed for XRT inside the container (see default value in ``run-docker.sh``).
+ * (required for Vitis) ``PLATFORM_REPO_PATHS`` points to the Vitis platform files (DSA).
+ * (required for Vitis) ``XRT_DEB_VERSION`` specifies the .deb to be installed for XRT inside the container (see default value in ``run-docker.sh``).
+ * (required for Slash) ``V80PP_DEB_PACKAGE`` specifies the .deb to be installed for Slash's v80++ linker.
  * (optional) ``NUM_DEFAULT_WORKERS`` (default 4) specifies the degree of parallelization for the transformations that can be run in parallel, potentially reducing build time
  * (optional) ``FINN_HOST_BUILD_DIR`` specifies which directory on the host will be used as the build directory. Defaults to ``/tmp/finn_dev_<username>``
  * (optional) ``JUPYTER_PORT`` (default 8888) changes the port for Jupyter inside Docker
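Putting the variables from this list together, a host environment for a Vitis-based Alveo build might look like the following sketch. Every value is an example only; the platform path and XRT .deb string in particular are placeholders, so check ``run-docker.sh`` for the actual defaults:

```shell
# Example values only -- substitute your own installation details.
export FINN_XILINX_PATH=/opt/Xilinx                 # required: Xilinx tools install dir
export FINN_XILINX_VERSION=2022.2                   # required: Xilinx tools version
export PLATFORM_REPO_PATHS=/opt/xilinx/platforms    # required for Vitis (placeholder path)
export XRT_DEB_VERSION="<your-xrt-deb-version>"     # required for Vitis; see default in run-docker.sh
export NUM_DEFAULT_WORKERS=8                        # optional: parallel transformation workers
export JUPYTER_PORT=8889                            # optional: Jupyter port inside Docker
```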
@@ -125,7 +126,7 @@ Supported FPGA Hardware
  =======================

  **Vivado IPI support for any Xilinx FPGA:** FINN generates a Vivado IP Integrator (IPI) design from the neural network with AXI stream (FIFO) in-out interfaces, which can be integrated onto any Xilinx-AMD FPGA as part of a larger system. It’s up to you to take the FINN-generated accelerator (what we call “stitched IP” in the tutorials), wire it up to your FPGA design and send/receive neural network data to/from the accelerator.

- **Shell-integrated accelerator + driver:** For quick deployment, we target boards supported by `PYNQ <http://www.pynq.io/>`_ . For these platforms, we can build a full bitfile including DMAs to move data into and out of the FINN-generated accelerator, as well as a Python driver to launch the accelerator. We support the Pynq-Z1, Pynq-Z2, Kria SOM, Ultra96, ZCU102 and ZCU104 boards, as well as Alveo cards.
+ **Shell-integrated accelerator + driver:** For quick deployment, we target boards supported by `PYNQ <http://www.pynq.io/>`_ . For these platforms, we can build a full bitfile including DMAs to move data into and out of the FINN-generated accelerator, as well as a Python driver to launch the accelerator. We support the Pynq-Z1, Pynq-Z2, Kria SOM, Ultra96, ZCU102 and ZCU104 boards, as well as UltraScale+-based Alveo datacenter accelerator cards.

  PYNQ board first-time setup
  ****************************
@@ -145,9 +146,9 @@ Continue on the host side (replace the ``<PYNQ_IP>`` and ``<PYNQ_USERNAME>`` wit
  5. Test that you can ``ssh <PYNQ_USERNAME>@<PYNQ_IP>`` without having to enter the password. Pass the ``-v`` flag to the ssh command if it doesn't work to help you debug.

- Alveo first-time setup
- **********************
- We use *host* to refer to the PC running the FINN Docker environment, which will build the accelerator+driver and package it up, and *target* to refer to the PC where the Alveo card is installed. These two can be the same PC, or connected over the network -- FINN includes some utilities to make it easier to test on remote PCs too. Prior to first usage, you need to set up both the host and the target in the following manner:
+ Vitis-based Alveo first-time setup
+ **********************************
+ The Vitis toolchain targets UltraScale and UltraScale+-based Alveo cards, such as the U250. We use *host* to refer to the PC running the FINN Docker environment, which will build the accelerator+driver and package it up, and *target* to refer to the PC where the Alveo card is installed. These two can be the same PC, or connected over the network -- FINN includes some utilities to make it easier to test on remote PCs too. Prior to first usage, you need to set up both the host and the target in the following manner:

  On the target side:
@@ -164,8 +165,38 @@ On the host side:
  1. Install Vitis 2022.2 and set up the ``VITIS_PATH`` environment variable to point to your installation.
  2. Install Xilinx XRT. Ensure that the ``XRT_DEB_VERSION`` environment variable reflects which version of XRT you have installed.
  3. Install the Vitis platform files for Alveo and set up the ``PLATFORM_REPO_PATHS`` environment variable to point to your installation. *This must be the same path as the target's platform files (target step 2)*
- 5. `Set up public key authentication <https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server>`_. Copy your private key to the ``finn/ssh_keys`` folder on the host to get password-less deployment and remote execution.
- 6. Done!
+ 4. `Set up public key authentication <https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server>`_. Copy your private key to the ``finn/ssh_keys`` folder on the host to get password-less deployment and remote execution.
+ 5. Done!
+
+ Slash-based Alveo first-time setup
+ ***********************************
+ The Slash toolchain targets Versal-based Alveo cards such as the V80 using the V80++
+ linker. We use *host* to refer to the PC running the FINN Docker environment, which will
+ build the accelerator and package it up, and *target* to refer to the PC where the V80
+ card is installed. These two can be the same PC, or connected over the network.
+
+ Prior to first usage, you need to build the Slash packages from source and set up both
+ the host and the target. Please refer to the `Slash GitHub repository
+ <https://github.com/Xilinx/slash>`_ for instructions on how to build all Slash packages,
+ including the ``v80++`` linker package.
+
+ On the target side:
+
+ 1. Install all Slash runtime packages as described in the `Slash GitHub repository
+    <https://github.com/Xilinx/slash>`_.
+ 2. Done!
+
+ On the host side:
+
+ 1. Build the ``v80++`` Debian package from the `Slash GitHub repository
+    <https://github.com/Xilinx/slash>`_ and copy it to a location accessible on the host.
+ 2. Set the ``V80PP_DEB_PACKAGE`` environment variable to the path of the ``v80++``
+    Debian package (e.g. ``export V80PP_DEB_PACKAGE=/path/to/v80++.deb``). The package
+    will be installed into the Docker image when ``run-docker.sh`` builds it.
+ 3. `Set up public key authentication <https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server>`_.
+    Copy your private key to the ``finn/ssh_keys`` folder on the host to get
+    password-less deployment and remote execution.
+ 4. Done!
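Host step 2 above amounts to a single export before building the Docker image; the path below is the placeholder from the docs, so substitute the location where you actually copied the package you built in step 1:

```shell
# Point FINN at the v80++ linker .deb built from the Slash repository.
# The path is a placeholder -- use the location from host step 1.
export V80PP_DEB_PACKAGE=/path/to/v80++.deb

# run-docker.sh installs this .deb into the FINN Docker image when it
# (re)builds the image.
```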
  Vivado/Vitis license
  *********************
@@ -195,7 +226,7 @@ strong hardware:
  * **RAM.** Depending on your target FPGA platform, your system must have sufficient RAM to be
  able to run Vivado/Vitis synthesis for that part. See `this page <https://www.xilinx.com/products/design-tools/vivado/vivado-ml.html#memory>`_
  for more information. For targeting Zynq and Zynq UltraScale+ parts, at least 8 GB is recommended. Larger parts may require up to 16 GB.
- For targeting Alveo parts with Vitis, at least 64 GB RAM is recommended.
+ For targeting Alveo parts with Vitis or Slash, at least 64 GB RAM is recommended.

  * **CPU.** FINN can parallelize HLS synthesis and several other operations for different
  layers, so using a multi-core CPU is recommended. However, this should be balanced
docs/finn/hw_build.rst (10 additions, 8 deletions)
@@ -9,7 +9,7 @@ Hardware Build and Deployment
  :align: center

  A model where all layers have been converted to either HLS or RTL layers can be processed by
- FINN to build a bitfile and driver targeting a Zynq or Alveo system or to generate a Vivado IP Integrator (IPI)
+ FINN to build a bitfile and driver targeting a Zynq or Alveo system (via Vitis or Slash) or to generate a Vivado IP Integrator (IPI)
  design with AXI stream (FIFO) in-out interfaces, which can be integrated onto any Xilinx FPGA as part of a larger system.
@@ -22,7 +22,7 @@ Internally, the hardware build consists of the following steps:
  2. DMA and DWC node insertion
  3. Partitioning for floorplanning
  4. FIFO insertion and IP generation
- 5. Vivado/Vitis project generation and synthesis
+ 5. Project generation and synthesis (Vivado for Zynq, Vitis or Slash for Alveo)

  .. note::
     In previous FINN releases it was necessary to step through the individual sub-steps for hardware build manually by calling each transformation. The hardware build transformations `ZynqBuild` now execute all necessary sub-transformations. For more control over the build process, the transformations listed below can still be called individually.
@@ -59,7 +59,7 @@ This is accomplished by the :py:mod:`finn.transformation.fpgadataflow.floorplan.
  and :py:mod:`finn.transformation.fpgadataflow.create_dataflow_partition.CreateDataflowPartition`
  transformations.

- .. note:: For Vitis, each partition will be compiled as a separate kernel, and linked together afterwards. For Zynq, each partition will become an IP block.
+ .. note:: For Vitis and Slash, each partition will be compiled as a separate kernel, and linked together afterwards. For Zynq, each partition will become an IP block.

  FIFO Insertion and IP Generation
@@ -76,12 +76,14 @@ For RTL layers calling :py:mod:`finn.transformation.fpgadataflow.prepare_ip.Prep
  The top-level IP blocks are generated in Vivado IPI, using the :py:mod:`finn.transformation.fpgadataflow.create_stitched_ip.CreateStitchedIP` transformation.

- Vivado/Vitis Project Generation and Synthesis
- ---------------------------------------------
+ Project Generation and Synthesis
+ ---------------------------------

- The final step in the hardware build flow is to generate a Vivado (for Zynq) or Vitis (for Alveo)
- project, and run synthesis to generate a bitfile. This is done using the `MakeZYNQProject`
- transformation for Zynq, and the `VitisLink` transformation for Alveo.
+ The final step in the hardware build flow is to generate a project and run synthesis to produce
+ a bitfile. For Zynq this is done using the `MakeZYNQProject` transformation. For Alveo, the
+ stitched IP kernels are first prepared by `PrepareForLinking` and then linked using either the
+ `VitisLink` transformation (for UltraScale+-based Alveo cards) or the `SlashLink` transformation
docs/finn/nw_prep.rst (1 addition, 1 deletion)
@@ -10,7 +10,7 @@ Network Preparation
  The main principle of FINN are analysis and transformation passes. For more information about these, see :ref:`analysis_pass` and :ref:`transformation_pass` in the :ref:`concepts` documentation, or the tutorial notebooks in :ref:`tutorials`.

- This page describes the network preparation flow step that comes after :ref:`brevitas_export`. The main idea is to optimize the network and convert nodes to hardware layers that correspond to `finn-hlslib <https://github.com/Xilinx/finn-hlslib>`_ or `finn-rtllib <https://github.com/Xilinx/finn-rtllib>`_ implementations. This prepares the network for hardware generation with Vitis HLS and Vivado. Network preparation applies several transformations to the ONNX model, which is wrapped in a :ref:`modelwrapper`.
+ This page describes the network preparation flow step that comes after :ref:`brevitas_export`. The main idea is to optimize the network and convert nodes to hardware layers that correspond to `finn-hlslib <https://github.com/Xilinx/finn-hlslib>`_ or `finn-rtllib <https://github.com/Xilinx/finn-rtllib>`_ implementations. This prepares the network for hardware generation with Vitis HLS and RTL code generation. Network preparation applies several transformations to the ONNX model, which is wrapped in a :ref:`modelwrapper`.

  Various transformations are involved in the network preparation. The following is a short overview of these.