HPC types

DeiC Interactive HPC (UCloud)

DeiC Interactive HPC targets users who want an interactive approach to HPC, with a user experience as close as possible to that of a laptop or desktop computer. The system is accessed through a browser, from where a large range of applications can be used. Programs are run inside a job that is scheduled through an interactive menu. Once the job is active, the user can run the application for which the job was created either through the interactive interface, just as on an ordinary laptop or desktop computer, or through the terminal. The simplicity of use makes it ideal for new users, including students, while the large number of available applications provides experienced users with a vast array of tools for their projects. It is therefore well suited for both new and experienced HPC users. With DeiC Interactive HPC, users have access to ordinary CPU, large-memory CPU, and GPU nodes. More information on DeiC Interactive HPC is available at interactivehpc.dk, and the system can be accessed via UCloud (requires WAYF login).

DeiC Throughput HPC

DeiC Throughput HPC is the traditional HPC setup in which the user accesses a Linux server through an SSH connection. On the server everything is run through the terminal. Specifically, the user accesses a front-end node from where jobs can be submitted to a queue managed by a scheduling system, usually Slurm*, which controls resource allocations and starts jobs in due time. Once a job is finished, the output is returned to the user. Each job runs autonomously, and the user can submit a detailed resource request tailored to each job. Users have access to ordinary CPU, large-memory CPU, and GPU nodes. Throughput HPC can handle large amounts of data, stored with high levels of security if necessary, and is ideal for throughput-intensive tasks that can be distributed among multiple cores and/or nodes. The fact that everything is handled through a Linux server using a scheduling system does, however, imply a steeper learning curve for new users relative to UCloud.
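
As a rough illustration, a Slurm job is usually described in a small batch script and submitted from the front-end node with sbatch. The partition, account, module, and file names below are placeholders, not the names used on any particular DeiC system, so consult the documentation of the specific machine before adapting the sketch.

    #!/bin/bash
    #SBATCH --job-name=my_analysis        # name shown in the queue
    #SBATCH --partition=standard          # placeholder partition name
    #SBATCH --account=my_project          # placeholder project account
    #SBATCH --nodes=1                     # run on a single node
    #SBATCH --ntasks=1                    # one task (process)
    #SBATCH --cpus-per-task=8             # cores for that task
    #SBATCH --mem=16G                     # memory for the job
    #SBATCH --time=02:00:00               # wall-clock limit (hh:mm:ss)

    # load software through the module system (module names differ per site)
    module load python

    # run the program; output is written to slurm-<jobid>.out by default
    python my_script.py

The script is submitted with sbatch job.sh, the queue can be inspected with squeue -u $USER, and a job can be cancelled with scancel followed by the job ID.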

DeiC Large Memory HPC

Like DeiC Throughput HPC, DeiC Large Memory HPC is configured as a Linux server with the Slurm* scheduler, accessed by the user through an SSH connection. DeiC Large Memory HPC distinguishes itself from DeiC Throughput HPC in the hardware employed, however, by offering a comparatively small number of nodes and CPU cores that have access to large amounts of fast memory. This setup caters to tasks that cannot efficiently be distributed among cores and nodes, or whose memory requirements exceed what DeiC Throughput HPC offers. The documentation for the computer is available at docs.hpc-type3.sdu.dk.
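
In practice the difference shows up mainly in the resource request: a large-memory job typically asks for a single node with far more memory than a throughput job. The sketch below assumes a hypothetical partition named hugemem and an arbitrary memory size; the real partition names and memory limits are machine specific and documented at docs.hpc-type3.sdu.dk.

    #!/bin/bash
    #SBATCH --job-name=large_mem_job
    #SBATCH --partition=hugemem           # placeholder name for a large-memory partition
    #SBATCH --nodes=1                     # large-memory work stays on one node
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=16
    #SBATCH --mem=1500G                   # illustrative request for a large-memory node
    #SBATCH --time=12:00:00

    module load R                         # module names differ per site
    Rscript assemble_data.R               # hypothetical memory-hungry workload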

DeiC Accelerated HPC

Just as the use of GPUs has massively increased the possibilities for massively parallel tasks, extensive research is currently devoted to other types of hardware that can accelerate specific operations. DeiC Accelerated HPC is a testing ground for exploring hardware-accelerated solutions targeted at future HPC use. A leading hardware component for this type of research is the Field-Programmable Gate Array (FPGA). Where GPUs accelerate massively parallel tasks because they are hardware designed specifically for such tasks, the goal for FPGAs is to be hardware that can be configured to a specific task. In this way a prohibitive bottleneck can be tackled by optimization at the hardware level. Another approach to acceleration is in-memory computing, which aims to tackle the following problem: contemporary big data sets are becoming ever larger and currently exceed even the largest RAM units. This means that data transfer between hard drives or flash drives and memory creates a significant bottleneck for contemporary big data programs. In-memory computing aims to solve this problem by distributing the data among multiple RAM units, each with its own CPU, operating in parallel. This requires software that lets the CPUs communicate with each other in an efficient manner.

Capability HPC (LUMI pre-exascale)

Capability HPC provides a similar setup to DeiC Throughput HPC but with increased possibilities by virtue of state-of-the-art hardware. Specifically, the interconnects between compute nodes are designed to minimize latency, thereby addressing the issue of communication-induced latency in distributed-memory programs running on separate nodes. Additionally, the user can obtain access to large amounts of disk space, also with low-latency interconnects. In this way Capability HPC enables computations that are prohibitive on DeiC Throughput HPC due to communication latency.
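
A typical capability-scale job is a distributed-memory (e.g. MPI) program spread over many nodes, where the ranks communicate over the low-latency interconnect. The sketch below assumes a generic partition name, an arbitrary node count, and a hypothetical MPI executable; actual partition names, core counts per node, and MPI modules differ between machines.

    #!/bin/bash
    #SBATCH --job-name=mpi_simulation
    #SBATCH --partition=standard          # placeholder partition name
    #SBATCH --nodes=4                     # spread the job over several nodes
    #SBATCH --ntasks-per-node=128         # MPI ranks per node (machine dependent)
    #SBATCH --time=24:00:00

    module load openmpi                   # module names differ per site

    # srun launches one MPI rank per task; ranks on different nodes
    # exchange data over the low-latency interconnect
    srun ./my_mpi_simulation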

Currently Capability HPC consists of the LUMI pre-exascale computer and, once they are operational, also Leonardo and MareNostrum5. These machines are part of the EuroHPC project, where the international collaboration allows the purchase of otherwise inaccessible hardware. In particular, LUMI will provide an extensive GPU partition designed to suit machine learning and artificial intelligence applications.

* Best practices, including an interactive Slurm tutorial, are available under the section Pipe-lining and submitting jobs in Slurm.

This documentation is part of the EuroHPC Competence Center in Denmark, managed by DeiC.dk.