High Performance Computing (HPC)
Summary
Koç University supports the research computing needs of its community by providing high-performance computing environments through the KUACC and VALAR clusters. The clusters are used for testing and running large programs, parallel-processing code, visualizations, and scientific applications. The university also provides HPC system administration and user support for computing, troubleshooting, software installation, and data storage.
The KUACC cluster consists of 61 compute nodes equipped with Intel Xeon Gold and AMD EPYC processors, offering more than 2000 CPU cores in total and up to 512 GB of memory per node. This configuration supports demanding computational tasks across a range of disciplines. The cluster is also outfitted with advanced GPU resources, including NVIDIA Tesla T4, V100, A100, and RTX A6000 GPUs; legacy GPUs such as the Tesla K20m and K80 remain available for specific applications.
The VALAR cluster currently comprises 12 compute nodes powered by Intel Xeon Silver, Intel Xeon Gold, and AMD EPYC processors, delivering more than 1400 CPU cores and up to 768 GB of memory per node. This setup caters to resource-intensive workloads across diverse fields. The cluster is also equipped with recent GPU resources, including NVIDIA L40S GPUs, which offer strong performance for AI and other scientific applications.
Both clusters use the BeeGFS scalable parallel file system and Mellanox InfiniBand networking, ensuring fast and efficient data transfer. Koç University offers a wide range of pre-installed scientific software, and the clusters are managed with a current version of the Slurm workload manager. Special configurations are available for research groups that contribute resources to the system. This infrastructure supports high-demand research in areas such as genetics, medicine, and other scientific computing fields.
The clusters were built with IT resources and donations from university researchers. They are scalable and can be expanded through further donations from research groups or projects.
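As an illustration of how work is typically submitted to a Slurm-managed cluster such as KUACC or VALAR, the minimal batch script below requests CPU cores, memory, and a single GPU. The partition name, resource limits, and application command are placeholders, not cluster-specific values; the actual options depend on the cluster and on your group's allocation.

    #!/bin/bash
    #SBATCH --job-name=example-job      # name shown in the queue
    #SBATCH --partition=short           # placeholder; list real partitions with sinfo
    #SBATCH --nodes=1                   # run on a single compute node
    #SBATCH --ntasks=1                  # one process
    #SBATCH --cpus-per-task=4           # four CPU cores for that process
    #SBATCH --mem=16G                   # 16 GB of memory for the job
    #SBATCH --gres=gpu:1                # request one GPU; omit for CPU-only jobs
    #SBATCH --time=01:00:00             # one-hour wall-clock limit
    #SBATCH --output=%x-%j.out          # log file named after the job name and ID

    # Replace with your own application command.
    srun python my_script.py

A script like this is submitted with sbatch and monitored with squeue, both standard Slurm commands.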
Features
- RHEL-based Linux operating systems
- Compute nodes with both current and legacy CPUs and GPUs
- High-speed closed network
- BeeGFS parallel file system
- Installation and updates of scientific software (see the example after this list)
- Workload and resource management with a modern version of Slurm
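On most HPC clusters, pre-installed scientific software is made available through an environment-module system. Assuming KUACC and VALAR follow this convention, a typical interactive session would look like the sketch below; the module names are illustrative only, and the exact catalogue should be checked on the login node or with IT.

    # List the software made available by the administrators (names are illustrative).
    module avail

    # Load a compiler and an interpreter into the current environment.
    module load gcc
    module load python

    # Show what is loaded, then clear the environment when finished.
    module list
    module purge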
Who can use it?
- Faculty
- Staff
- Students
- Guests (Sponsored)
When can I use it?
You can use this service anytime.
How much does it cost?
This service is available at no charge to the KU community.
How do I get it?
You can create an IT Trackit request or send an email to it@ku.edu.tr.