.. index::
   single: xaloc

.. _xaloc-user_guide:

xaloc user guide
================

**xaloc** takes its name from the Catalan form of `Sirocco `__, the
Mediterranean wind that blows from the southeast.

The `OmpSs-2\@FPGA releases `__ are automatically installed on the server.
They are available through a module file for each target architecture. This
document describes how to load and use the modules to compile an example
application. Once the modules are loaded, the workflow on the server is the
same as in the Docker images.

General remarks
---------------

* The OmpSs-2\@FPGA toolchain is installed in a versioned folder under the
  ``/opt/bsc/`` directory.
* Third-party libraries required to run some programs are installed in the
  corresponding folder under the ``/opt/lib/`` directory.
* The rest of the software (Xilinx toolchain, slurm, modules, etc.) is
  installed under the ``/tools/`` directory.

Node specifications
-------------------

* CPU: Dual Intel Xeon X5680

  * https://ark.intel.com/content/www/us/en/ark/products/47916/intel-xeon-processor-x5680-12m-cache-3-33-ghz-6-40-gts-intel-qpi.html

* Main memory: 72GB DDR3-1333
* FPGA: Xilinx Versal VCK5000

  * https://www.amd.com/en/products/adaptive-socs-and-fpgas/evaluation-boards/vck5000.html

.. _xaloc-login:

Logging into the system
-----------------------

xaloc is accessible from HCA (``ssh.hca.bsc.es``). Alternatively, it can be
accessed through port ``8410`` of HCA, and the ssh connection will be
redirected to the actual host:

.. code-block:: text

   ssh -p 8410 ssh.hca.bsc.es

This can also be automated by adding a ``xaloc`` host to the ssh config:

.. code-block:: text

   Host xaloc
     HostName ssh.hca.bsc.es
     Port 8410

.. _xaloc-modules:

Module structure
----------------

The ompss-2 modules are:

* ``ompss-2/x86_64/*[release version]*``

This will automatically load the default Vivado version, although an
arbitrary version can be loaded before ompss-2:

.. code-block:: text
   module load vivado/2023.2 ompss-2/x86_64/git

To list all available modules in the system, run:

.. code-block:: text

   module avail

Build applications
------------------

To generate an application binary and bitstream, you can refer to
:ref:`compile-ompss2atfpga-programs`, as the steps are general enough. Note
that the appropriate modules need to be loaded; see :ref:`xaloc-modules`.

.. _xaloc-running_applications:

Running applications
--------------------

.. warning::
   Although the Versal board is installed and can be allocated via Slurm,
   there is no toolchain support yet.

.. _xaloc-access_fpga:

Get access to an installed fpga
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The server uses Slurm in order to manage access to computation resources.
Therefore, to be able to use the resources of an FPGA, an allocation in one
of the partitions has to be made. You can check the number and name of
partitions and nodes by running:

.. code-block:: text

   sinfo -Nel

There is 1 partition in the node:

* ``fpga``: versal

In order to make an allocation of computing resources, you must run
``salloc`` with the ``--gres`` option. For instance:

.. code-block:: text

   salloc -p fpga --gres=fpga:BOARD:N

Where ``BOARD`` is the FPGA to allocate and ``N`` is the number of FPGAs to
allocate. This command will allocate the specified number of FPGAs, with the
required tools and file permissions already set by slurm, and prevent other
users from using those resources.

Once inside an allocation, you can run a script or an interactive job with a
subset of the allocated resources with ``srun``.

For an interactive job, run:

.. code-block:: text

   srun --gres=fpga:BOARD:N --pty bash

To execute a script, run:

.. code-block:: text

   srun --gres=fpga:BOARD:N script.sh

.. note::
   You can also allocate and run a job in a single command with ``srun``.
   There is no need to pre-allocate resources with ``salloc``.

.. warning::
   Just running an ``salloc`` will not set the OmpSs-2\@FPGA environment
   variables.
   In order to do so, you must run your job through ``srun``.

Alternatively, you can run your jobs asynchronously through an ``sbatch``
command, passing a slurm job script as an argument:

.. code-block:: text

   sbatch --gres=fpga:BOARD:N job_script.sh

An example ``job_script.sh``:

.. code:: bash

   #!/bin/bash
   #
   #SBATCH --job-name=ompss-2_fpga_test
   #SBATCH --output=out.txt
   #SBATCH --time=05:00
   #SBATCH --gres=fpga:BOARD:N
   #SBATCH -p fpga

   module load ompss-2/x86_64/git
   cd test
   make binary
   srun --gres=fpga:BOARD:N exec_test.sh

To get information about the active slurm jobs, run:

.. code-block:: text

   squeue

The output should look similar to this:

.. code-block:: text

   JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
    1312      fpga     bash afilguer  R      17:14      1 quar

To know which FPGAs have been allocated, you can run the
``report_slurm_node`` tool. The output should be similar to this:

.. code-block:: text

   LOCAL_ID PCI_DEV      USB_DEV QDMA_DEV HWSERVER_PORT GLOBAL_ID
   0        0000:02:00.0 002:002 02000    13330         0
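The ``report_slurm_node`` table is plain whitespace-separated text, so it can
also be consumed from a job script. The sketch below is a non-authoritative
example, not part of the toolchain: the sample row shown above is hard-coded,
and the column layout is assumed to stay as printed; on the server you would
pipe the output of ``report_slurm_node`` into ``awk`` instead.

```shell
# Sketch: pick the PCI device of a given LOCAL_ID out of
# report_slurm_node-style output. The sample table from above is
# hard-coded here; on the server you would pipe `report_slurm_node`
# into awk instead (assumption: the column layout stays as shown).
sample='LOCAL_ID PCI_DEV      USB_DEV QDMA_DEV HWSERVER_PORT GLOBAL_ID
0        0000:02:00.0 002:002 02000    13330         0'

# Skip the header row, match the first column against the wanted id,
# and print the PCI_DEV column.
pci_dev=$(printf '%s\n' "$sample" | awk -v id=0 'NR > 1 && $1 == id { print $2 }')
echo "$pci_dev"
```

The same pattern extracts any other column, for example ``$5`` for the
``HWSERVER_PORT`` of that device.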