High Performance Research Computing

High Performance Research Computing (HPRC) is a dedicated computing resource for cutting-edge, collaborative, and transformative research and discovery at Texas A&M, where it has supported researchers since 1989. Formerly known as the Supercomputing Facility, it was transformed in January 2016 from a service facility into an interdisciplinary research center advancing computational and data-enabled science and engineering, with a broad mission of research, education, outreach, training, and service. HPRC supports more than 2,500 users, including more than 450 faculty members. Its computing resources are used for research including, but not limited to, materials development, quantum optimization, and climate prediction. HPRC promotes emerging computing technology to researchers and assists them in using it for research and discovery. New users can apply for accounts at hprc.tamu.edu.
The HPRC website offers training materials and documentation for HPRC systems and software. HPRC provides a broad range of regularly scheduled training sessions and workshops for its users, and these sessions may also be incorporated into formal courses with a technical and scientific computing focus. HPRC also hosts the Summer Computing Academy to promote computing among high school students.

Resources:

Grace:

  • Grace is the University's new flagship supercomputer, a heterogeneous HPC cluster replacing Ada. The cluster provides a minimum aggregate peak performance of 6 PFLOPS. It is composed of 800 regular compute nodes, 100 A100 GPU compute nodes, 17 single-precision T4/RTX6000 GPU compute nodes, 8 large-memory (3 TB) compute nodes, 5 login nodes, and 6 management servers. Grace has an HDR 3:1 InfiniBand interconnect and more than 5 PB of usable high-performance storage running the Lustre parallel filesystem.
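
    To illustrate how researchers typically exercise such a cluster, the sketch below is a minimal MPI program in C in which each process reports the compute node it landed on. This is a generic example, not code from HPRC documentation; compiler and launcher details are assumptions.

        /* hello_mpi.c: minimal MPI sketch; each rank reports its host node. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int rank, size, len;
            char node[MPI_MAX_PROCESSOR_NAME];
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            MPI_Get_processor_name(node, &len);

            /* In a multi-node job, ranks print different node names. */
            printf("rank %d of %d on %s\n", rank, size, node);

            MPI_Finalize();
            return 0;
        }

    On a Slurm-managed cluster such as Grace, a program like this would typically be compiled with mpicc and launched across nodes with sbatch or srun; the exact module, partition, and account names should be taken from the HPRC documentation.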

TERRA:

  • Terra is a 320-node heterogeneous cluster with 8,520 Intel Broadwell cores, 48 NVIDIA K80 dual-GPU accelerators, 16 Intel Knights Landing processors, and an Intel Omni-Path Architecture (OPA) interconnect.

LONESTAR:

  • Lonestar-5, hosted at the Texas Advanced Computing Center (TACC), comprises 1,252 Cray XC40 nodes. Jointly funded by The University of Texas System, Texas A&M University, and Texas Tech University, it provides additional resources to Texas A&M researchers. Allocation requests are made through the HPRC request page. Lonestar-5 will be replaced by Lonestar-6 in Fall 2021; Texas A&M researchers with Lonestar-5 allocations can use TACC's Frontera system until Lonestar-6 is in production.

ViDal:

  • ViDal is a 24-node secure and compliant computing environment that supports data-intensive research using sensitive person-level data or proprietary licensed data, meeting the myriad legal requirements of handling such data (e.g., HIPAA, Texas HB 300, NDAs). It has 16 compute nodes with 192 GB of RAM each, 4 large-memory nodes with 1.5 TB of RAM each, and 4 GPU nodes with 192 GB of RAM and two NVIDIA V100 GPUs each.

FASTER: 

  • FASTER (Fostering Accelerated Scientific Transformations, Education, and Research) is a composable high-performance data-analysis and computing instrument funded by the NSF MRI (Major Research Instrumentation) program. FASTER adopts the innovative Liqid composable software-hardware approach combined with cutting-edge technologies such as state-of-the-art Intel CPUs and NVIDIA GPUs, NVMe (Non-Volatile Memory Express) based storage, and a high-speed interconnect. These composable and configurable techniques allow researchers to use resources efficiently, enabling more science. Thirty percent of FASTER's computing resources will be allocated to researchers nationwide through the NSF XSEDE (Extreme Science and Engineering Discovery Environment) program.

Advanced Support Program (ASP):

  • HPRC provides technical assistance to research teams across campus that goes beyond general consulting, offering collaboration on research projects with a large computational component. Under the ASP, one or more HPRC analysts contribute expertise and experience in several areas of high-performance computing, as listed below.

Our collaborative contributions include:

  • porting applications to our clusters
  • analyzing and optimizing code performance
  • developing parallel code from serial versions and analyzing performance (see the sketch after this list)
  • bioinformatics and genomics
  • optimizing serial and parallel I/O code performance
  • optimal use of mathematical libraries
  • code development and design
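
As an illustration of the serial-to-parallel work listed above, the sketch below parallelizes a simple serial reduction loop with OpenMP. It is a generic example over assumed sample data, not code from an actual ASP engagement.

    /* dot_omp.c: serial dot product parallelized with OpenMP. */
    #include <omp.h>
    #include <stdio.h>

    #define N 10000000

    static double a[N], b[N];

    int main(void)
    {
        double sum = 0.0;

        for (long i = 0; i < N; i++) {  /* set up sample data */
            a[i] = 1.0;
            b[i] = 2.0;
        }

        /* The only change from the serial version: distribute the loop
           across threads and combine partial sums with a reduction. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += a[i] * b[i];

        printf("dot = %.1f using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }

Compiled with an OpenMP flag (e.g., gcc -fopenmp), the parallel version can then be profiled against the serial baseline to study scaling, which is the kind of performance analysis these collaborations cover.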