Secure Computing Facility—SCF

Agate | CSLIC | Jade | Jadeita | Max | Sequoia | Zin | Visualization Servers

Agate

 

Agate
Nodes
   Login nodes (agate[2,5]) 2
   Batch nodes 41
   Debug nodes 2
   Total nodes 48
CPUs (Intel Xeon EP X5660)
   Cores per node 36
   Total cores 1,728
CPU speed (GHz) 2.1
Theoretical system peak performance (TFLOP/s) 58.1
Memory
   Memory per node (GB) 128
   Total memory (GB) 6,144
Peak CPU memory bandwidth (GB/s) 154
Operating system TOSS3
High-speed interconnect N/A
Compilers  
Parallel job type multiple jobs per node
Job limits  
Run command srun
Recommended location for scratch file space /p/lscratch{...}
Password authentication OTP
Documentation Introduction to LC Resources
Linux Clusters Overview
/usr/local/docs/linux.basics
/usr/local/docs/lustre.basics
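Since Agate's parallel job type is multiple jobs per node, small jobs may share nodes rather than claim them whole. A minimal launch sketch with srun follows; the partition names (pbatch, pdebug) and the bank name are assumptions to verify against `sinfo` and your LC allocation, and the executable is a placeholder:

```shell
# Launch a 4-task job through Slurm's srun. The partition "pbatch"
# and bank "mybank" are placeholders for this sketch; check `sinfo`
# for the partitions actually configured on Agate.
srun -n 4 -p pbatch -A mybank -t 30 ./my_app

# Short interactive session on the debug nodes (assumed "pdebug"):
srun -n 2 -p pdebug -t 15 --pty /bin/bash
```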

CSLIC (Classified Storage Lustre Interface Cluster)

CSLIC is a resource reserved for moving files between LC file systems and HPSS archival storage.

 

CSLIC
Nodes 10
Cores/node 4
Total cores 40
CPU speed (GHz) 2.4
Network connections per node Four 1-gigabit Ethernet links
Memory/node (GB) 48
Operating system TOSS
Password authentication OTP
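Because CSLIC exists for traffic between LC file systems and HPSS, a typical session uses the standard HPSS client tools. A sketch with htar and hsi; the archive and file names are placeholders:

```shell
# Bundle a results directory into HPSS as a single tar-format
# archive (all names here are placeholders).
htar -cvf results2024.tar ./results

# Confirm the archive landed in HPSS storage.
hsi ls -l results2024.tar

# Later, pull back one member file without retrieving the whole archive.
htar -xvf results2024.tar results/summary.dat
```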

Jade

Jade
Nodes
   Login nodes (jade[188,380,386,764,770,962]) 6
   Batch nodes 1,296
   GPU nodes 1,302
   Total compute nodes 2,688

CPUs
   Cores per node 36
   Total cores 4,584
CPU speed (GHz) 2.1
Theoretical system peak performance (TFLOP/s) 1,575
Memory
   Memory per node (GB) 128
   Total memory (GB) 344,064
Peak CPU memory bandwidth (GB/s) 154
Operating system TOSS3
High-speed interconnect Omni-Path
Compilers  
Parallel job type multiple nodes per job
Run command srun
Recommended location for scratch file space /p/lscratch{...}
Password authentication OTP or Kerberos
Documentation Introduction to LC Resources
Linux Clusters Overview
/usr/local/docs/linux.basics
/usr/local/docs/lustre.basics
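Jade's parallel job type is multiple nodes per job, so production work is usually submitted as a batch script whose srun step fans out across the allocated nodes. A hedged sketch, in which the partition, bank, walltime, and executable are all placeholders:

```shell
#!/bin/bash
# Hypothetical 4-node, 144-task MPI job (36 cores per node).
# "pbatch" and "mybank" are placeholders; confirm the real
# partition and bank names with `sinfo` and your LC account.
#SBATCH -N 4
#SBATCH -p pbatch
#SBATCH -A mybank
#SBATCH -t 01:00:00

srun -N 4 -n 144 ./my_mpi_app
```

Submit with `sbatch <scriptname>`; srun then inherits the allocation created by the #SBATCH directives.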

Jadeita

Jadeita
Nodes
   Login nodes (jade[1340,1532,1538,1916,1922,2114]) 6
   Batch nodes 1,232
   Debug nodes 32
   Total compute nodes 1,270
CPUs
   Cores per node 36
   Total cores 45,720
CPU speed (GHz) 2.1
Theoretical system peak performance (TFLOP/s) 1,536
Memory
   Memory per node (GB) 128
   Total memory (GB) 162,560
Peak CPU memory bandwidth (GB/s) 154
Operating system TOSS3
High-speed interconnect Omni-Path
Compilers  
Parallel job type multiple nodes per job
Run command srun
Recommended location for scratch file space /p/lscratch{...}
Password authentication OTP or Kerberos
Documentation Introduction to LC Resources
Linux Clusters Overview
/usr/local/docs/linux.basics
/usr/local/docs/lustre.basics

Max

Max is an ASC resource for visualization and data analysis work.

Max
Nodes
   Login nodes (max[152,164]) 2
   Batch nodes 280
   GPU nodes 20
   Total compute nodes 302
CPUs
   Login nodes (Intel Xeon E5-2670) 16 cores/node
   Compute nodes (Intel Xeon E5-2670) 16 cores/node
   GPU-enabled nodes (NVIDIA Kepler K20X) 2 GPUs/node
   Total cores 4,584
CPU speed (GHz) 2.6
Theoretical system peak performance (TFLOP/s) 107
Memory
   Memory per login node (GB) 64
   Memory per compute node (GB) 256
   GPU memory per node (GB) 6
   Total memory (TB) 71.7
Peak CPU memory bandwidth (GB/s) 75
Operating system TOSS
High-speed interconnect InfiniBand QDR (QLogic)
Compilers  
Parallel job type multiple nodes per job
Run command srun
Recommended location for scratch file space /p/lscratch{...}
Password authentication OTP or Kerberos
Documentation Introduction to LC Resources
Linux Clusters Overview
/usr/local/docs/linux.basics
/usr/local/docs/lustre.basics
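Max's 20 GPU nodes carry two Kepler K20X cards each, so GPU-accelerated visualization work typically requests a whole GPU node. A hedged sketch follows; the partition name and the `--gres` GPU request assume a Slurm configuration with GPU scheduling enabled, which should be verified locally:

```shell
# Request one GPU node and both of its K20X cards for an hour.
# The partition "pgpu" and the gres setup are assumptions for
# this sketch; verify with `sinfo` and the local Slurm config.
srun -N 1 --gres=gpu:2 -p pgpu -t 60 ./my_vis_tool
```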

Sequoia

Sequoia is an ASC capability* resource that is best used for very large, highly parallel (up to 40k-processor) jobs.

Sequoia **
Nodes
   Login nodes 4
   Compute nodes 98,304
CPUs (IBM Power7; PPC A2)
   Cores per node 48 (login; Power7); 16 (compute; PPC A2)
   Total compute cores 1,572,864
CPU speed (GHz) 3.7 (login); 1.6 (compute)
Theoretical system peak performance (PFLOP/s) 20.1
Memory
   Memory per node (GB)
   Total compute memory (PB)

64 (login); 16 (compute)
1.6
Peak CPU memory bandwidth (GB/s) 42.6
Operating system
   Login nodes Red Hat Enterprise Linux
   Compute nodes Compute Node Kernel
High-speed interconnect BlueGene torus
Compilers  
Parallel job type multiple nodes per job
Job limits  
Run command srun
Recommended location for scratch file space /p/lscratch{...}
Password authentication OTP
Documentation Sequoia User Information [Authentication requires login with LC user name and OTP;
select BG/Q from Global Spaces menu.]
Introduction to LC Resources
Using the Sequoia BG/Q System
/usr/local/docs/rzuseq.basics
/usr/local/docs/lustre.basics
* Capability computing refers to the use of the most powerful supercomputers to solve the largest and most demanding problems with the intent to minimize time to solution. A capability computer is dedicated to running one problem, or at most a few problems, at a time.
** Limited access

Zin

Zin is an LLNL-only ASC resource that is tuned for parallel capacity* computing.

Zin

Nodes
   Login nodes (zin[497,498,501,502,505,506,1433,1434,1437,1438,1441,1442,2425,2426,2429,2430,2433,2434]) 18
   Batch nodes 2,740
   Debug nodes 32
   Total nodes 2,916
CPUs (Intel Xeon E5-2670)
   Cores per node 16
   Total cores 46,656
CPU speed (GHz) 2.6
Theoretical system peak performance (TFLOP/s) 970.4
Memory
   Memory per node (GB) 32
   Total memory (TB) 93.3
Peak CPU memory bandwidth (GB/s) 51.2
Operating system TOSS
High-speed interconnect InfiniBand QDR (Q-Logic)
Compilers  
Parallel job type multiple nodes per job
Run command srun
Recommended location for scratch file space /p/lscratch{...}
Password authentication OTP or Kerberos
Documentation Introduction to LC Resources
Linux Clusters Overview
/usr/local/docs/linux.basics
/usr/local/docs/lustre.basics
* Capacity computing is accomplished through the use of smaller and less expensive high-performance systems to run parallel problems with more modest computational requirements.



Open Computing Facility (OCF)