Welcome

Hummingbird 2.0 is an open access computational cluster at UC Santa Cruz. Comprising the legacy Hummingbird cluster and a newly commissioned high-performance partition, Hummingbird 2.0 has a petabyte-scale parallel file access storage system and a new computational-node hardware architecture. Located at the UC Santa Cruz colocation facility since 2023, Hummingbird 2.0 offers high availability and extensibility (via the “condo model”) and is connected to the campus network at 10Gbps for high-speed data transfers using Globus. Hummingbird 2.0 is fully integrated with the legacy Hummingbird cluster: the two share storage, software, Ethernet connectivity, and configuration. The cluster comes preinstalled with software packages used in the sciences and engineering, as well as tools and applications for the social sciences, humanities, and arts.

Legacy Hummingbird components consist of:

  • 500 Intel cores – 1 node with 44 cores/256 GB and 19 nodes with 24 cores/128 GB (GB = gigabytes of RAM)
  • 288 AMD (6000 Series) cores – 2 nodes with 48 cores/192 GB and 3 nodes with 64 cores/256 GB
  • 1 Intel GPU node with 24 cores/96 GB RAM and 4 NVIDIA Tesla P100 GPUs
  • Shared 10Gbps Ethernet data backplane
  • A BeeGFS parallel file access storage system with approximately 750TB of usable project storage space (legacy connectivity limited to 10Gbps shared Ethernet)
  • Detailed information on the current hardware configurations is available.

Hummingbird 2.0 components consist of:

  • High-speed data backplane connecting the cluster head node, storage, and computational nodes at 200Gbps via an NVIDIA InfiniBand network
  • BeeGFS parallel file access storage system with approximately 750TB of usable project storage space, connected via 200Gbps InfiniBand
  • 512 AMD (Epyc® 9000 Series) cores (8 nodes x 64 cores each) with a total of 4096GB RAM (8 x 512GB each node)
  • Hummingbird 2.0 is a “condo model” cluster, where researchers can purchase dedicated resources for project-specific storage and computation. Investments are fully managed and supported by staff, allowing researchers to focus on their research (and not cluster administration).

Additionally, the cluster has the following configurations:

  • Connected to the cluster head node at 10-200Gbps (Ethernet to InfiniBand)
  • Job submission is handled via the SLURM batch management system (a sample batch script follows this list)
  • There are 4 Open Access partitions: 5 Instruction-specific nodes (Instruction), 19 general compute nodes (128x24), 1 GPU node with 4 GPUs (96x24gpu4), and 1 large-memory node (256x44)
  • Sponsored Access partitions: development partition and PI-owned partitions
  • Open Access users are limited to a maximum of 72 cores in use at any given time; Sponsored Access core limits vary according to the number of PI-owned cores (n) plus the 72 Open Access cores (if required)
  • Software packages are provided via the Modules environment system, so users can load only what they need
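
As an illustration of job submission, the sketch below shows a minimal SLURM batch script. The partition, module, and resource requests here are illustrative assumptions; check which partitions and modules are actually available (for example with sinfo and module avail) before submitting.

    #!/bin/bash
    #SBATCH --job-name=example        # name shown in the queue
    #SBATCH --partition=128x24        # one of the Open Access partitions listed above
    #SBATCH --nodes=1                 # request a single node
    #SBATCH --ntasks=24               # cores for this job (Open Access cap is 72 cores total)
    #SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)
    #SBATCH --output=example-%j.out   # output file; %j expands to the job ID

    # Load only the software this job needs (hypothetical module name)
    module load python

    # Run the work
    srun python my_script.py

Submit the script with "sbatch example.slurm" and check its status with "squeue -u $USER".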

Hummingbird is a growing resource for the campus: hardware can be added to increase the available compute cycles. The cluster can be used for many different applications, from single-CPU jobs to multi-CPU workloads. By not purchasing a complete computing solution of your own (a workstation or small cluster), you save money, and you save additional time and money by not having to administer, upgrade, or repair your own equipment. If you are interested in learning more, contact us!

The cluster environment is built on AlmaLinux 9 using the OpenHPC 3.x cluster environment packages. OpenHPC provides common scientific libraries and uses the Lmod environment modules system to streamline the use of applications and software. OpenHPC also uses the SLURM batch scheduler, a job management system in line with many other high-performance computing centers.
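
For example, the modules environment is typically used like this (the module names shown, such as gcc and openmpi, are illustrative assumptions, not a list of what is installed on Hummingbird):

    module avail              # list the modules available on the cluster
    module spider openmpi     # search for a package and see how to load it
    module load gcc openmpi   # load a compiler and an MPI stack into your environment
    module list               # show what is currently loaded
    module purge              # unload everything and start clean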

There is also a baseline of software packages that are in general use across disciplines. We will do our best to accommodate requests for additional software, but we may not be able to fulfill them all. Keep in mind that if you request the installation of licensed software, it can be handled in a few different ways: install it in your own home directory, obtain a site-wide license, or restrict use of the software to a designated group.
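
If you choose to install software in your own home directory, Lmod can manage it through a personal modulefile. The sketch below assumes a hypothetical package ("mytool") installed under $HOME/sw; the names and paths are placeholders, not an existing installation.

    # 1. Build and install into your home directory, e.g.:
    #      ./configure --prefix=$HOME/sw/mytool-1.0 && make && make install
    #
    # 2. Create a personal Lua modulefile at ~/modulefiles/mytool/1.0.lua
    #    containing (Lua syntax understood by Lmod):
    #
    #      -- expose a hypothetical tool installed under $HOME
    #      prepend_path("PATH",            pathJoin(os.getenv("HOME"), "sw/mytool-1.0/bin"))
    #      prepend_path("LD_LIBRARY_PATH", pathJoin(os.getenv("HOME"), "sw/mytool-1.0/lib"))
    #
    # 3. Tell Lmod about your personal modulefile directory, then load the tool:
    module use ~/modulefiles
    module load mytool/1.0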

These arrangements save financial and human resources and give you access to even more CPU cycles and to backed-up storage, both static and scratch.

Questions, Comments, Help?  Send e-mail to hummingbird@ucsc.edu. This will open a ticket and someone from the support team will be in touch.

UC Santa Cruz Research Computing