HPC clusters use multiple computers to process data in parallel, greatly reducing the time to solution for most computational science tasks. They also provide copious amounts of storage to manage the very large data sets these workloads typically involve.

Augusta University’s multidisciplinary HPC cluster is capable of fulfilling the wide range of computational workload needs reflective of its curricula and academic missions. These workloads include, but are not limited to, bioinformatics, genomics, population sciences, mathematics, chemistry, and physics. The complement of modern GPU technology incorporated into the cluster is also ideal for data sciences, physics and life science modeling, artificial intelligence, inference, and digital forensics workloads. GPU systems also include features that can be used to accelerate complex simulations for particle or fluid dynamics, data visualization, and video and graphic rendering.

Contact Us

Information Technology HPC Services

  • Phone: 706-721-4000 or 706-721-7500
  • Email: auhpcs_support@augusta.edu
  • Service Portal
  • HPCS on iLab

About HPCS

High Performance Computing Services (HPCS) will promote, enable, and aid the university's research, academic, and cybersecurity missions by integrating leading-edge high performance computing (HPC) technologies and services into enterprise service offerings.

Systems providing these services will be available to researchers and students assigned to AU research projects, as well as to faculty and staff of Augusta University. AU Research Technology leadership and staff will make every effort to partner with the research and academic communities to develop, deploy, and maintain computational services that meet their unique needs.

Systems

Processing

AU HPCS provides a robust Linux HPC cluster with a theoretical peak performance of 100.8 TFLOPS.

  • 1,032 Intel CPU cores
  • 93,184 NVIDIA GPU CUDA cores
  • 8,192 NVIDIA GPU Tensor cores

Learn more about GPU cores »

Systems use HDR InfiniBand and 10 Gb/s interconnects and provide 15 TB of CPU memory and 528 GB of GPU memory.
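
For context, a theoretical peak figure such as the one above is typically derived from core counts, clock rate, and per-cycle floating-point throughput. The short sketch below illustrates the arithmetic for the CPU partition only; the clock rate and per-cycle throughput are placeholder assumptions, not published AUHPCS processor specifications.

    # Rough illustration of how a theoretical CPU peak is usually estimated:
    #   peak = cores x clock rate x floating-point operations per core per cycle
    # The clock rate and per-cycle throughput are placeholders, not the
    # cluster's actual processor specifications.
    cpu_cores = 1032          # Intel CPU cores listed above
    clock_hz = 3.0e9          # hypothetical 3.0 GHz clock
    flops_per_cycle = 32      # hypothetical (e.g. two AVX-512 FMA units)

    peak_flops = cpu_cores * clock_hz * flops_per_cycle
    print(f"Theoretical CPU peak: {peak_flops / 1e12:.1f} TFLOPS")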

Learn about job submission, data transfer, and compute nodes »

Storage

AUHPCS storage provides several file stores for users:

  • 240 TB Lustre-based temporary compute storage
    /scratch: compute job results, checkpoint files, data sets
  • 51 TB local Linux temporary compute storage
    /lscratch: compute job results, checkpoint files, data sets
  • 1 PB Lustre longer-term data storage
    /work: shared data common to multiple users or jobs
    /project: longer-term storage for active data and results
  • 12 TB Linux shared user storage
    /home: job submission scripts and small applications

The “/home” file system is the only user cluster file system that is backed up.
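
To make the intended division of labor concrete, the following sketch stages job output from the temporary /scratch area to the longer-term /project area; the per-user subdirectory names are illustrative assumptions, not a documented AUHPCS convention.

    # Minimal sketch of a typical workflow under this storage layout.
    # The per-user subdirectory names below are hypothetical examples.
    import getpass
    import shutil
    from pathlib import Path

    user = getpass.getuser()
    scratch = Path("/scratch") / user / "run_001"   # temporary, not backed up
    project = Path("/project") / user / "results"   # longer-term active storage

    scratch.mkdir(parents=True, exist_ok=True)
    project.mkdir(parents=True, exist_ok=True)

    # ... a compute job writes its output into the scratch directory ...

    # Copy results off /scratch when the job finishes; only /home is backed up,
    # and the scratch areas are intended to be temporary.
    for output in scratch.glob("*.out"):
        shutil.copy2(output, project / output.name)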

Learn more about cluster storage »

Software

Red Hat system utility and application support

Research Technology systems administration staff can assist with most Red Hat Linux operating system, application, and system utility support. Enterprise support for Red Hat Linux can be extended to professional services if needed. However, because there is often commonality across many Linux distributions, Research Technology may be able to help support applications and utilities on other distributions as well.

Job submission script support

Research Technology HPC systems engineering and application staff can assist with many job submission script composition and troubleshooting tasks. However, job submission script issues and debugging can be complex, and support will often be a cooperative effort between users and staff.
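
As a concrete starting point, the sketch below shows one way a minimal batch job might be composed and submitted. It assumes a Slurm-style scheduler and LMOD environment modules are in use; the scheduler, module name, resource requests, and workload script are assumptions for illustration, not documented AUHPCS settings.

    # Minimal sketch, assuming a Slurm-style scheduler (this page does not name
    # the scheduler): write a batch script under /home, then submit it.
    # The module name, resource requests, and analysis.py workload are
    # hypothetical examples.
    import shutil
    import subprocess
    import textwrap
    from pathlib import Path

    script = Path.home() / "example_job.sh"   # /home holds job submission scripts

    script.write_text(textwrap.dedent("""\
        #!/bin/bash
        #SBATCH --job-name=example
        #SBATCH --ntasks=4
        #SBATCH --time=01:00:00
        #SBATCH --output=example_%j.out

        # Load a hypothetical LMOD environment module, then run the workload
        # from the temporary /scratch area.
        module load python
        cd /scratch/$USER
        srun python analysis.py
        """))

    # Submit only if the scheduler's sbatch command is present on this host.
    if shutil.which("sbatch"):
        result = subprocess.run(["sbatch", str(script)], capture_output=True, text=True)
        print(result.stdout or result.stderr)
    else:
        print("sbatch not found; submit the script from a cluster login node")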

Software delivery

Software will be made available on the cluster using the following methods:

  • Installation by users in their home directories
  • Installation into the cluster as an LMOD environment module
  • Installation in an end-user-managed Singularity container
  • Installation on an HPCS-specific network application server
  • Some externally hosted applications requiring GPU resources
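
As one illustration of the container option above, the sketch below runs a tool from a user-managed Singularity image; the image path and tool name are hypothetical examples, not software provided with the cluster.

    # Minimal sketch of the end-user-managed Singularity container option.
    # The image path and tool name are hypothetical examples.
    import subprocess
    from pathlib import Path

    image = Path.home() / "containers" / "mytool.sif"   # user-owned image in /home
    command = ["singularity", "exec", str(image), "mytool", "--version"]

    result = subprocess.run(command, capture_output=True, text=True)
    print(result.stdout or result.stderr)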

Access

To access AUHPCS, researchers, faculty, or staff must meet the following requirements:

  • Have an active AU NetID and a device configured for Duo two-factor authentication.
  • Register as a Principal Investigator (PI) using the iLab “High Performance Computing and Parallel Computing Services Core” page, or be added to an existing project by a registered PI.
  • Complete the required training course(s): basic Linux competency (if needed) and AUHPCS cluster concepts, use, and workflow.

Once access is granted for approved compute projects, adherence to AUHPCS governance policies is an ongoing requirement.

 

Get Help

Support

Consultation and assistance with HPC Services from the ITSS Research Technology group can be requested through the standard AU enterprise support services. Requests should be directed to the “Research HPC Services” assignment group.

Software Support

HPCS will make every effort to provide some level of support for the scientific software installed on the cluster. However, because many of these packages are open source, standard vendor support is generally not available. In those cases, support will be a collaborative effort leveraging HPCS staff experience, HPCS staff research, and end-user self-support. For purchased software that includes vendor support, HPCS staff will liaise between end users and vendors. Scientific software that requires funding for licenses or support, and for which the university is not already licensed, must be purchased by the requesting department.