6.2. Xaloc cluster installation
The OmpSs-2@FPGA releases are automatically installed in the Xaloc cluster. They are available through a module file for each target architecture. This document describes how to load and use the modules to compile an example application. Once the modules are loaded, the workflow in the Xaloc cluster should be the same as in the Docker images.
6.2.1. General remarks
- The OmpSs@FPGA toolchain is installed in a version folder under the /opt/bsc/ directory.
- Third-party libraries required to run some programs are installed in the corresponding folder under the /opt/lib/ directory.
- The rest of the software (Xilinx toolchain, slurm, modules, etc.) is installed under the /tools/ directory.
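For instance, a quick way to see which toolchain versions are installed is to list the version folders; the actual folder names depend on the releases deployed:

    # list the installed OmpSs@FPGA toolchain versions
    ls /opt/bsc/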
6.2.2. Node specifications
- CPU: Dual Intel Xeon X5680
- Main memory: 72GB DDR3-1333
- FPGA: Xilinx Versal VCK5000
6.2.3. Logging into Xaloc
Xaloc is accessible from the HCA login node ssh.hca.bsc.es. Alternatively, it can be reached through port 8410 on HCA, and the ssh connection will be redirected to the actual host:

    ssh -p 8410 ssh.hca.bsc.es

This can also be automated by adding a xaloc host entry to the ssh config:

    Host xaloc
        HostName ssh.hca.bsc.es
        Port 8410
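With this entry in place, the connection shortens to:

    ssh xaloc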
6.2.4. Module structure
The ompss-2 modules are:

    ompss-2/x86_64/[release version]

Loading one of these will automatically load the default Vivado version, although an arbitrary Vivado version can be loaded before ompss:

    module load vivado/2023.2 ompss-2/x86_fpga/git

To list all the modules available in the system, run:

    module avail
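As a quick sanity check after loading (using the same module name as in the example above), module list shows the loaded environment, including the Vivado dependency:

    # load the toolchain and verify what got loaded
    module load ompss-2/x86_fpga/git
    module list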
6.2.5. Build applications
To generate an application binary and bitstream, refer to Compile OmpSs-2@FPGA programs; the steps described there are general enough. Note that the appropriate modules need to be loaded first, see Module structure.
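As a minimal sketch, assuming the modules above are loaded and using an illustrative source file name, the host binary is compiled with the OmpSs-2 clang flag -fompss-2; the bitstream generation steps are those described in the linked section:

    # compile the OmpSs-2 annotated sources (file name is illustrative)
    clang++ -fompss-2 matmul.cpp -o matmul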
6.2.6. Running applications
Warning
Although the Versal board is installed and can be allocated via SLURM, there is no toolchain support for it yet.
Get access to an installed FPGA
The Xaloc cluster uses SLURM to manage access to compute resources. Therefore, to use the resources of an FPGA, an allocation has to be made in one of the partitions.
There is one partition in the cluster:

- fpga: a Versal VCK5000 board
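The partitions and their current node state can be checked with:

    sinfo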
The easiest way to allocate an FPGA is to run bash through srun with the --gres option:

    srun --gres=fpga:BOARD:N --pty bash

where BOARD is the FPGA to allocate, in this case versal, and N is the number of FPGAs to allocate, that is, 1. For instance, the command:

    srun --gres=fpga:versal:1 --pty bash
will allocate the FPGA and run an interactive bash session with the required tools and file permissions already set by SLURM. To get information about the active SLURM jobs, run:

    squeue
The output should look similar to this:
    JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
     1312      fpga     bash afilguer  R      17:14      1 xaloc
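Exiting the interactive shell releases the allocation. A job can also be cancelled explicitly using the job id reported by squeue:

    scancel 1312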