
Commit 17aec22

Merge pull request #1541 from JOOpdenhoevel/feature/slash_support

Adding Alveo V80 support by adding Slash/V80++ as a new build target

2 parents: beb7c7b + e928c7e

22 files changed: 712 additions and 333 deletions

docker/Dockerfile.finn

Lines changed: 12 additions & 1 deletion
@@ -33,6 +33,7 @@ LABEL maintainer="Jakoba Petri-Koenig <jakoba.petri-koenig@amd.com>, Yaman Umuro
 ARG XRT_DEB_VERSION="xrt_202220.2.14.354_22.04-amd64-xrt"
 ARG SKIP_XRT
 ARG LOCAL_XRT
+ARG V80PP_DEB_PACKAGE

 WORKDIR /workspace

@@ -50,6 +51,10 @@ RUN apt-get update && \
     libsm6 \
     libxext6 \
     libxrender-dev \
+    libyaml-0-2 \
+    libjsoncpp-dev \
+    libxml2-dev \
+    libzmq3-dev \
     libpixman-1-0 \
     nano \
     zsh \
@@ -74,10 +79,16 @@ RUN apt-get update && \
     libjansson-dev \
     libgetdata-dev \
     libtinfo5 \
-    g++-10
+    g++-10 \
+    cmake
 RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
 RUN locale-gen "en_US.UTF-8"

+# Install v80++. If the package isn't provided, this is a no-op.
+# The wildcard pattern ensures COPY doesn't fail if the file doesn't exist.
+COPY v80pp.de[b] /tmp/
+RUN if [ -f /tmp/v80pp.deb ]; then apt-get install -y /tmp/v80pp.deb && rm /tmp/v80pp.deb; fi
+
 # install XRT
 RUN if [ -z "$LOCAL_XRT" ] && [ -z "$SKIP_XRT" ];then \
     wget -U 'Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.27 Safari/537.17' "https://www.xilinx.com/bin/public/openDownload?filename=$XRT_DEB_VERSION.deb" -O /tmp/$XRT_DEB_VERSION.deb; fi
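The optional-COPY trick above relies on shell-style glob semantics: `v80pp.de[b]` is a bracket pattern that matches only the literal file `v80pp.deb`, and matches nothing when the file is absent. A minimal sketch of that behavior, runnable in any POSIX shell (directory and filenames here are illustrative):

```shell
# Bracket-glob demo: "v80pp.de[b]" matches zero or one file.
d=$(mktemp -d); cd "$d"

# File absent: the pattern matches nothing, so ls fails and the fallback runs
ls v80pp.de[b] 2>/dev/null || echo "no match"

# File present: the single-character class [b] matches the trailing "b"
touch v80pp.deb
ls v80pp.de[b]
```

Docker's `COPY` uses the same pattern-matching idea, which is why the build does not fail when no `.deb` was staged into the build context.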

docker/jenkins/test_bnn_hw_pytest.py

Lines changed: 3 additions & 3 deletions
@@ -40,7 +40,7 @@ def delete_file(file_path):


 def get_platform(board_str):
-    return "alveo" if "U250" in board_str else "zynq-iodma"
+    return "vitis-xrt" if "U250" in board_str else "zynq-iodma"


 def get_full_parameterized_test_list(marker, test_dir_list, batch_size_list, platform_list):
@@ -110,7 +110,7 @@ def test_type_execute(self, test_dir, batch_size, platform):
         delete_file(output_execute_results_file)

         # Run test option: execute
-        bitfile = "a.xclbin" if platform == "alveo" else "resizer.bit"
+        bitfile = "a.xclbin" if platform == "vitis-xrt" else "resizer.bit"
         result = subprocess.run(
             [
                 "python",
@@ -144,7 +144,7 @@ def test_type_throughput(self, test_dir, batch_size, platform):
         delete_file(output_throughput_results_file)

         # Run test option: throughput
-        bitfile = "a.xclbin" if platform == "vitis-xrt" else "resizer.bit"
+        bitfile = "a.xclbin" if platform == "vitis-xrt" else "resizer.bit"
         result = subprocess.run(
             [
                 "python",

docs/finn/command_line.rst

Lines changed: 3 additions & 3 deletions
@@ -132,16 +132,16 @@ build configuration), and are detailed below.

 * ``report/ooc_synth_and_timing.json`` -- resources and achievable clock frequency from out-of-context synthesis

-* :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.BITFILE` will run Vivado and/or Vitis to insert the FINN accelerator inside a shell, with DMA engines instantiated to move data to/from main memory:
+* :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.BITFILE` will run Vivado (for Zynq), Vitis, or Slash to insert the FINN accelerator inside a shell, with DMA engines instantiated to move data to/from main memory:

-  * ``bitfile/finn-accel.(bit|xclbin)`` -- generated bitfile depending on platform
+  * ``bitfile/finn-accel.(bit|xclbin|vbin)`` -- generated bitfile depending on platform
   * ``report/post_synth_resources.xml`` -- FPGA resource utilization after synthesis
   * ``report/post_route_timing.rpt`` -- post-route timing report


 * :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.PYNQ_DRIVER` will generate a PYNQ Python driver that can be used to interface the generated accelerator:

-  * ``driver/driver.py`` -- Python driver that can be used on PYNQ on Zynq or Alveo platforms to launch the accelerator
+  * ``driver/driver.py`` -- Python driver that can be used on PYNQ on Zynq or Vitis Alveo platforms to launch the accelerator

 * :py:mod:`finn.builder.build_dataflow_config.DataflowOutputType.DEPLOYMENT_PACKAGE`:
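For context, these outputs are requested through the dataflow builder's `generate_outputs` list. A hedged sketch of such a build configuration (board name, clock period, and output directory are illustrative values, not part of this commit; check `finn.builder.build_dataflow_config` in your FINN version for the exact fields):

```python
# Sketch: requesting BITFILE and PYNQ_DRIVER outputs from the FINN builder.
# Requires a working FINN installation; values here are illustrative.
import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg

cfg = build_cfg.DataflowBuildConfig(
    output_dir="output_u250",
    synth_clk_period_ns=10.0,
    board="U250",
    shell_flow_type=build_cfg.ShellFlowType.VITIS_ALVEO,
    generate_outputs=[
        build_cfg.DataflowOutputType.BITFILE,
        build_cfg.DataflowOutputType.PYNQ_DRIVER,
    ],
)
build.build_dataflow_cfg("model.onnx", cfg)
```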

docs/finn/getting_started.rst

Lines changed: 41 additions & 10 deletions
@@ -11,7 +11,7 @@ Quickstart
 2. Set up ``FINN_XILINX_PATH`` and ``FINN_XILINX_VERSION`` environment variables pointing respectively to the Xilinx tools installation directory and version (e.g. ``FINN_XILINX_PATH=/opt/Xilinx`` and ``FINN_XILINX_VERSION=2022.2``)
 3. Clone the FINN compiler from the repo: ``git clone https://github.com/Xilinx/finn/`` and go into the directory where it is cloned
 4. Execute ``./run-docker.sh quicktest`` to verify your installation.
-5. Optionally, follow the instructions on :ref:`PYNQ board first-time setup` or :ref:`Alveo first-time setup` for board setup.
+5. Optionally, follow the instructions on :ref:`PYNQ board first-time setup`, :ref:`Vitis-based Alveo first-time setup`, or :ref:`Slash-based Alveo first-time setup` for board setup.
 6. Optionally, set up a `Vivado/Vitis license`_.
 7. All done! See :ref:`Running FINN in Docker` for the various options on how to run the FINN compiler.

@@ -98,8 +98,9 @@ The most relevant are summarized below:

 * (required) ``FINN_XILINX_PATH`` points to your Xilinx tools installation on the host (e.g. ``/opt/Xilinx``)
 * (required) ``FINN_XILINX_VERSION`` sets the Xilinx tools version to be used (e.g. ``2022.2``)
-* (required for Alveo) ``PLATFORM_REPO_PATHS`` points to the Vitis platform files (DSA).
-* (required for Alveo) ``XRT_DEB_VERSION`` specifies the .deb to be installed for XRT inside the container (see default value in ``run-docker.sh``).
+* (required for Vitis) ``PLATFORM_REPO_PATHS`` points to the Vitis platform files (DSA).
+* (required for Vitis) ``XRT_DEB_VERSION`` specifies the .deb to be installed for XRT inside the container (see default value in ``run-docker.sh``).
+* (required for Slash) ``V80PP_DEB_PACKAGE`` specifies the .deb to be installed for Slash's v80++ linker.
 * (optional) ``NUM_DEFAULT_WORKERS`` (default 4) specifies the degree of parallelization for the transformations that can be run in parallel, potentially reducing build time
 * (optional) ``FINN_HOST_BUILD_DIR`` specifies which directory on the host will be used as the build directory. Defaults to ``/tmp/finn_dev_<username>``
 * (optional) ``JUPYTER_PORT`` (default 8888) changes the port for Jupyter inside Docker
@@ -125,7 +126,7 @@ Supported FPGA Hardware
 =======================
 **Vivado IPI support for any Xilinx FPGA:** FINN generates a Vivado IP Integrator (IPI) design from the neural network with AXI stream (FIFO) in-out interfaces, which can be integrated onto any Xilinx-AMD FPGA as part of a larger system. It’s up to you to take the FINN-generated accelerator (what we call “stitched IP” in the tutorials), wire it up to your FPGA design and send/receive neural network data to/from the accelerator.

-**Shell-integrated accelerator + driver:** For quick deployment, we target boards supported by `PYNQ <http://www.pynq.io/>`_ . For these platforms, we can build a full bitfile including DMAs to move data into and out of the FINN-generated accelerator, as well as a Python driver to launch the accelerator. We support the Pynq-Z1, Pynq-Z2, Kria SOM, Ultra96, ZCU102 and ZCU104 boards, as well as Alveo cards.
+**Shell-integrated accelerator + driver:** For quick deployment, we target boards supported by `PYNQ <http://www.pynq.io/>`_ . For these platforms, we can build a full bitfile including DMAs to move data into and out of the FINN-generated accelerator, as well as a Python driver to launch the accelerator. We support the Pynq-Z1, Pynq-Z2, Kria SOM, Ultra96, ZCU102 and ZCU104 boards, as well as UltraScale+-based Alveo datacenter accelerator cards.

 PYNQ board first-time setup
 ****************************
@@ -145,9 +146,9 @@ Continue on the host side (replace the ``<PYNQ_IP>`` and ``<PYNQ_USERNAME>`` wit
 5. Test that you can ``ssh <PYNQ_USERNAME>@<PYNQ_IP>`` without having to enter the password. Pass the ``-v`` flag to the ssh command if it doesn't work to help you debug.


-Alveo first-time setup
-**********************
-We use *host* to refer to the PC running the FINN Docker environment, which will build the accelerator+driver and package it up, and *target* to refer to the PC where the Alveo card is installed. These two can be the same PC, or connected over the network -- FINN includes some utilities to make it easier to test on remote PCs too. Prior to first usage, you need to set up both the host and the target in the following manner:
+Vitis-based Alveo first-time setup
+**********************************
+The Vitis toolchain targets UltraScale and UltraScale+-based Alveo cards, such as the U250. We use *host* to refer to the PC running the FINN Docker environment, which will build the accelerator+driver and package it up, and *target* to refer to the PC where the Alveo card is installed. These two can be the same PC, or connected over the network -- FINN includes some utilities to make it easier to test on remote PCs too. Prior to first usage, you need to set up both the host and the target in the following manner:

 On the target side:

@@ -164,8 +165,38 @@ On the host side:
 1. Install Vitis 2022.2 and set up the ``VITIS_PATH`` environment variable to point to your installation.
 2. Install Xilinx XRT. Ensure that the ``XRT_DEB_VERSION`` environment variable reflects which version of XRT you have installed.
 3. Install the Vitis platform files for Alveo and set up the ``PLATFORM_REPO_PATHS`` environment variable to point to your installation. *This must be the same path as the target's platform files (target step 2)*
-5. `Set up public key authentication <https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server>`_. Copy your private key to the ``finn/ssh_keys`` folder on the host to get password-less deployment and remote execution.
-6. Done!
+4. `Set up public key authentication <https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server>`_. Copy your private key to the ``finn/ssh_keys`` folder on the host to get password-less deployment and remote execution.
+5. Done!
+
+Slash-based Alveo first-time setup
+***********************************
+The Slash toolchain targets Versal-based Alveo cards such as the V80 using the V80++
+linker. We use *host* to refer to the PC running the FINN Docker environment, which will
+build the accelerator and package it up, and *target* to refer to the PC where the V80
+card is installed. These two can be the same PC, or connected over the network.
+
+Prior to first usage, you need to build the Slash packages from source and set up both
+the host and the target. Please refer to the `Slash GitHub repository
+<https://github.com/Xilinx/slash>`_ for instructions on how to build all Slash packages,
+including the ``v80++`` linker package.
+
+On the target side:
+
+1. Install all Slash runtime packages as described in the `Slash GitHub repository
+   <https://github.com/Xilinx/slash>`_.
+2. Done!
+
+On the host side:
+
+1. Build the ``v80++`` Debian package from the `Slash GitHub repository
+   <https://github.com/Xilinx/slash>`_ and copy it to a location accessible on the host.
+2. Set the ``V80PP_DEB_PACKAGE`` environment variable to the path of the ``v80++``
+   Debian package (e.g. ``export V80PP_DEB_PACKAGE=/path/to/v80++.deb``). The package
+   will be installed into the Docker image when ``run-docker.sh`` builds it.
+3. `Set up public key authentication <https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server>`_.
+   Copy your private key to the ``finn/ssh_keys`` folder on the host to get
+   password-less deployment and remote execution.
+4. Done!

 Vivado/Vitis license
 *********************
@@ -195,7 +226,7 @@ strong hardware:
 * **RAM.** Depending on your target FPGA platform, your system must have sufficient RAM to be
   able to run Vivado/Vitis synthesis for that part. See `this page <https://www.xilinx.com/products/design-tools/vivado/vivado-ml.html#memory>`_
   for more information. For targeting Zynq and Zynq UltraScale+ parts, at least 8 GB is recommended. Larger parts may require up to 16 GB.
-  For targeting Alveo parts with Vitis, at least 64 GB RAM is recommended.
+  For targeting Alveo parts with Vitis or Slash, at least 64 GB RAM is recommended.

 * **CPU.** FINN can parallelize HLS synthesis and several other operations for different
   layers, so using a multi-core CPU is recommended. However, this should be balanced
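The new host-side setup steps reduce to a handful of environment variables before invoking `run-docker.sh`. A minimal sketch, reusing the placeholder paths from the docs above (adjust them to your installation):

```shell
# Host-side environment for the Slash/V80 flow (paths are placeholders).
export FINN_XILINX_PATH=/opt/Xilinx
export FINN_XILINX_VERSION=2022.2
export V80PP_DEB_PACKAGE=/path/to/v80++.deb

# run-docker.sh copies the package into the image at build time, e.g.:
# ./run-docker.sh quicktest
echo "$V80PP_DEB_PACKAGE"
```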

docs/finn/hw_build.rst

Lines changed: 10 additions & 8 deletions
@@ -9,7 +9,7 @@ Hardware Build and Deployment
    :align: center

 A model where all layers have been converted to either HLS or RTL layers can be processed by
-FINN to build a bitfile and driver targeting a Zynq or Alveo system or to generate a Vivado IP Integrator (IPI)
+FINN to build a bitfile and driver targeting a Zynq or Alveo system (via Vitis or Slash) or to generate a Vivado IP Integrator (IPI)
 design with AXI stream (FIFO) in-out interfaces, which can be integrated onto any Xilinx FPGA as part of a larger system.


@@ -22,7 +22,7 @@ Internally, the hardware build consists of the following steps:
 2. DMA and DWC node insertion
 3. Partitioning for floorplanning
 4. FIFO insertion and IP generation
-5. Vivado/Vitis project generation and synthesis
+5. Project generation and synthesis (Vivado for Zynq, Vitis or Slash for Alveo)

 .. note::
    In previous FINN releases it was necessary to step through the individual sub-steps for hardware build manually by calling each transformation. The hardware build transformations `ZynqBuild` now execute all necessary sub-transformations. For more control over the build process, the transformations listed below can still be called individually.
@@ -59,7 +59,7 @@ This is accomplished by the :py:mod:`finn.transformation.fpgadataflow.floorplan.
 and :py:mod:`finn.transformation.fpgadataflow.create_dataflow_partition.CreateDataflowPartition`
 transformations.

-.. note:: For Vitis, each partition will be compiled as a separate kernel, and linked together afterwards. For Zynq, each partition will become an IP block.
+.. note:: For Vitis and Slash, each partition will be compiled as a separate kernel, and linked together afterwards. For Zynq, each partition will become an IP block.


 FIFO Insertion and IP Generation
@@ -76,12 +76,14 @@ For RTL layers calling :py:mod:`finn.transformation.fpgadataflow.prepare_ip.Prep

 The top-level IP blocks are generated in Vivado IPI, using the :py:mod:`finn.transformation.fpgadataflow.create_stitched_ip.CreateStitchedIP` transformation.

-Vivado/Vitis Project Generation and Synthesis
----------------------------------------------
+Project Generation and Synthesis
+---------------------------------

-The final step in the hardware build flow is to generate a Vivado (for Zynq) or Vitis (for Alveo)
-project, and run synthesis to generate a bitfile. This is done using the `MakeZYNQProject`
-transformation for Zynq, and the `VitisLink` transformation for Alveo.
+The final step in the hardware build flow is to generate a project and run synthesis to produce
+a bitfile. For Zynq this is done using the `MakeZYNQProject` transformation. For Alveo, the
+stitched IP kernels are first prepared by `PrepareForLinking` and then linked using either the
+`VitisLink` transformation (for UltraScale+-based Alveo cards) or the `SlashLink` transformation
+(for Versal-based Alveo cards such as the V80).


 Deployment

docs/finn/nw_prep.rst

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ Network Preparation

 The main principle of FINN are analysis and transformation passes. For more information about these, see :ref:`analysis_pass` and :ref:`transformation_pass` in the :ref:`concepts` documentation, or the tutorial notebooks in :ref:`tutorials`.

-This page describes the network preparation flow step that comes after :ref:`brevitas_export`. The main idea is to optimize the network and convert nodes to hardware layers that correspond to `finn-hlslib <https://github.com/Xilinx/finn-hlslib>`_ or `finn-rtllib <https://github.com/Xilinx/finn-rtllib>`_ implementations. This prepares the network for hardware generation with Vitis HLS and Vivado. Network preparation applies several transformations to the ONNX model, which is wrapped in a :ref:`modelwrapper`.
+This page describes the network preparation flow step that comes after :ref:`brevitas_export`. The main idea is to optimize the network and convert nodes to hardware layers that correspond to `finn-hlslib <https://github.com/Xilinx/finn-hlslib>`_ or `finn-rtllib <https://github.com/Xilinx/finn-rtllib>`_ implementations. This prepares the network for hardware generation with Vitis HLS and RTL code generation. Network preparation applies several transformations to the ONNX model, which is wrapped in a :ref:`modelwrapper`.

 Various transformations are involved in the network preparation. The following is a short overview of these.
docs/finn/source_code/finn.transformation.fpgadataflow.rst

Lines changed: 2 additions & 2 deletions
@@ -294,10 +294,10 @@ finn.transformation.fpgadataflow.templates
    :undoc-members:
    :show-inheritance:

-finn.transformation.fpgadataflow.vitis\_build
+finn.transformation.fpgadataflow.alveo\_build
 -------------------------------------------------

-.. automodule:: finn.transformation.fpgadataflow.vitis_build
+.. automodule:: finn.transformation.fpgadataflow.alveo_build
    :members:
    :undoc-members:
    :show-inheritance:

run-docker.sh

Lines changed: 20 additions & 1 deletion
@@ -53,7 +53,12 @@ fi

 if [ -z "$PLATFORM_REPO_PATHS" ];then
   recho "Please set PLATFORM_REPO_PATHS pointing to Vitis platform files (DSAs)."
-  recho "This is required to be able to use Alveo PCIe cards."
+  recho "This is required to be able to use Vitis-based Alveo PCIe cards."
+fi
+
+if [ -z "$V80PP_DEB_PACKAGE" ];then
+  recho "Please set V80PP_DEB_PACKAGE pointing to the SLASH v80++ .deb package."
+  recho "This is required to be able to use the Alveo V80 card."
 fi

 DOCKER_GID=$(id -g)
@@ -79,6 +84,7 @@ SCRIPTPATH=$(dirname "$SCRIPT")
 : ${FINN_SSH_KEY_DIR="$SCRIPTPATH/ssh_keys"}
 : ${PLATFORM_REPO_PATHS="/opt/xilinx/platforms"}
 : ${XRT_DEB_VERSION="xrt_202220.2.14.354_22.04-amd64-xrt"}
+: ${V80PP_DEB_PACKAGE=""}
 : ${FINN_HOST_BUILD_DIR="/tmp/$DOCKER_INST_NAME"}
 : ${FINN_DOCKER_TAG="xilinx/finn:$(OLD_PWD=$(pwd); cd $SCRIPTPATH; git describe --always --tags --dirty; cd $OLD_PWD).$XRT_DEB_VERSION"}
 : ${FINN_DOCKER_PREBUILT="0"}
@@ -167,6 +173,11 @@ if [ -d "$FINN_XRT_PATH" ];then
   export LOCAL_XRT=1
 fi

+# If v80++ deb package given, copy it to repo root for docker build
+if [ -n "$V80PP_DEB_PACKAGE" ] && [ -f "$V80PP_DEB_PACKAGE" ]; then
+  cp "$V80PP_DEB_PACKAGE" ./v80pp.deb
+fi
+
 if [ "$FINN_DOCKER_NO_CACHE" = "1" ]; then
   FINN_DOCKER_BUILD_EXTRA+="--no-cache "
 fi
@@ -204,11 +215,14 @@ if [ "$FINN_DOCKER_PREBUILT" = "0" ] && [ -z "$FINN_SINGULARITY" ]; then
   # Need to ensure this is done within the finn/ root folder:
   OLD_PWD=$(pwd)
   cd $SCRIPTPATH
+  # Export DOCKER_BUILDKIT to enable BuildKit features
+  export DOCKER_BUILDKIT
   docker build \
     -f docker/Dockerfile.finn \
     --build-arg XRT_DEB_VERSION=$XRT_DEB_VERSION \
     --build-arg SKIP_XRT=$FINN_SKIP_XRT_DOWNLOAD \
     --build-arg LOCAL_XRT=$LOCAL_XRT \
+    --build-arg V80PP_DEB_PACKAGE=$V80PP_DEB_PACKAGE \
     --tag=$FINN_DOCKER_TAG $FINN_DOCKER_BUILD_EXTRA \
     --build-arg GROUP_ID=$DOCKER_GID \
     --build-arg GROUPNAME=$DOCKER_GNAME \
@@ -223,6 +237,11 @@ if [ ! -z "$LOCAL_XRT" ];then
   rm $XRT_DEB_VERSION.deb
 fi

+# Remove local v80pp.deb file from repo
+if [ -f "./v80pp.deb" ]; then
+  rm ./v80pp.deb
+fi
+
 # Launch container with current directory mounted
 # important to pass the --init flag here for correct Vivado operation, see:
 # https://stackoverflow.com/questions/55733058/vivado-synthesis-hangs-in-docker-container-spawned-by-jenkins
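The new `V80PP_DEB_PACKAGE` default uses the same `: ${VAR=default}` idiom as the rest of `run-docker.sh`: `:` is the shell no-op command, and `${VAR=default}` assigns only when the variable is unset, so user-exported values survive. A quick demonstration:

```shell
# Default applied when the variable is unset
unset XRT_DEB_VERSION
: ${XRT_DEB_VERSION="xrt_202220.2.14.354_22.04-amd64-xrt"}
echo "$XRT_DEB_VERSION"

# A value set by the user is kept untouched
V80PP_DEB_PACKAGE="/my/v80pp.deb"
: ${V80PP_DEB_PACKAGE=""}
echo "$V80PP_DEB_PACKAGE"
```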
