CCAST Thunder Cluster

Inventory Classification

CCAST provides large-scale, state-of-the-art computing resources to its users:

The Thunder cluster consists of 53 compute nodes and 2 login nodes, with a total aggregate theoretical peak performance of approximately 40 TFLOPS. All nodes are interconnected with FDR InfiniBand at a 56 Gbit/s transfer rate. The system details are given below.

Compute nodes: 48x dual-socket Intel Xeon E5-2670 v2 "Ivy Bridge" (10 cores per socket) at 2.5 GHz with 64 GB of DDR3 RAM at 1866 MHz; 14 of these 48 nodes are equipped with Intel Xeon Phi (MIC) accelerator cards, each with 60 x86 cores at 1.047 GHz and 7.5 GB of RAM.

Large-memory nodes: 2x nodes, each with 1 TB of RAM at 1600 MHz, four sockets with 8 cores per socket (Intel Xeon E5-4640 "Sandy Bridge" processors), and 3.3 TB of local SSD scratch.

Development nodes: 2x nodes with Intel Ivy Bridge processors and Intel Xeon Phi cards, plus one Sandy Bridge node.

InfiniBand: 1x Mellanox ConnectX-3 FDR IB adapter on each of the 53 compute nodes.
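
The quoted ~40 TFLOPS aggregate peak can be roughly reproduced from the node counts above. The sketch below assumes 8 double-precision FLOPs per cycle per Xeon core (AVX) and 16 per Phi core; these per-cycle factors are assumptions and are not part of the original specification:

  48 nodes x 20 cores x 2.5 GHz x 8 FLOPs/cycle  ≈ 19.2 TFLOPS (Ivy Bridge compute nodes)
  14 cards x 60 cores x 1.047 GHz x 16 FLOPs/cycle ≈ 14.1 TFLOPS (Intel Phi accelerators)

The large-memory and development nodes contribute the remaining few TFLOPS, bringing the aggregate into the neighborhood of the stated ~40 TFLOPS.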

Login nodes: 2x nodes, each with dual-socket Intel Xeon E5-2670 v2 "Ivy Bridge" processors (10 cores per socket) at 2.5 GHz, 64 GB of DDR3 RAM at 1866 MHz, 2x 1 TB SATA hard disks, one Mellanox ConnectX-3 FDR IB adapter, and one dual-port 10 GbE adapter.

Number of Compute Nodes: 53

Number of Processor Cores: 1080
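
The 1080-core total is consistent with the node inventory above if the two Ivy Bridge development nodes are dual-socket 10-core and the Sandy Bridge development node is dual-socket 8-core (the development-node socket and core counts are an assumption, not stated explicitly):

  48 x 20 (compute) + 2 x 32 (large-memory) + 2 x 20 (Ivy Bridge development) + 1 x 16 (Sandy Bridge development) = 960 + 64 + 40 + 16 = 1080 cores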

Number of Login Nodes: 2

Storage System: Two-tier IBM GPFS file system with policy-driven hierarchical storage management (HSM). Tier 1: 120x 10K RPM SAS drives delivering about 4 GB/s. Tier 2: 80x 7.2K RPM SATA drives delivering about 2.6 GB/s. Tape storage: an IBM TS3584 L53 frame with 8x LTO-6 tape drives and 274 tape slots, plus a TS3584 S54 expansion frame with 1,340 tape slots. Included are 258 LTO-6 tape cartridges (645 TB), expandable up to 4 PB.

Interconnect: FDR InfiniBand at 56 Gbit/s

Network Switch: Mellanox FDR IB switch

University Tag Number: 191524
Availability: Contact Custodian for availability
Location: Research 1
Room Number: 204
Acquisition Date: April 25, 2014