Secure Computing Facility (SCF)

CSLIC | Graph | Inca | Juno | Max | Muir | Sequoia | Zin | Visualization Servers

CSLIC (Storage Lustre Interface Cluster)

CSLIC is a resource reserved for moving files between LC file systems and HPSS archival storage.

CSLIC
Nodes 10
Cores/node 16
Total cores 160
CPU speed (GHz) 2.6
Network bandwidth per node Single 10-gigabit Ethernet
Memory/node (GB) 128
Operating system TOSS
Password authentication OTP or Kerberos
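
Because CSLIC exists solely for data movement, a typical session drives the HPSS clients (htar and hsi) from a CSLIC login node. The sketch below is illustrative only; the directory and file names are placeholders, and local documentation governs the client options actually supported.

    # Bundle a local results directory into a tar archive stored directly in HPSS
    htar -cvf results_2024.tar results/

    # List the archive, then pull back a single member without retrieving everything
    htar -tvf results_2024.tar
    htar -xvf results_2024.tar results/summary.dat

    # hsi handles individual files: "put" copies into HPSS, "get" copies back out
    hsi "put bigfile.h5 : project/bigfile.h5"
    hsi "get bigfile.h5 : project/bigfile.h5"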

Inca

Inca is an ASC LLNL-only capacity* resource to be used for serial and on-node parallelism only. This system does not have a high-speed interconnect.

Inca
Nodes
   Login nodes                                 4 (inca[1-4])
   Batch nodes                                 91
   "Big memory" batch nodes                    4
   Debug nodes                                 1
   Total nodes                                 100
CPUs
   Login/batch/debug (Intel Xeon EP X5660)     12 cores/node
   "Big memory" (AMD Opteron 8356)             16 cores/node
   Total cores                                 1,216
CPU speed (GHz) 2.8 (Xeon X5660); 2.3 ("big memory" Opteron 8356)
Theoretical system peak performance (TFLOP/s) 13.5
Memory
   Memory per node (GB)                        48; 128 ("big memory" nodes inca[97-100])
   Total memory (GB)                           5,120
Peak CPU memory bandwidth (GB/s) 32
Operating system TOSS
High-speed interconnect none
Compilers  
Parallel job type multiple jobs per node
Run command srun
Recommended location for scratch file space /p/lscratch{...}
Password authentication OTP or Kerberos
Documentation
   Introduction to LC Resources
   Linux Clusters Overview
   /usr/local/docs/linux.basics
   /usr/local/docs/lustre.basics
* Capacity computing is accomplished through the use of smaller and less expensive high-performance systems to run parallel problems with more modest computational requirements.
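
Because Inca runs multiple jobs per node and is meant for serial or on-node work, a batch script typically asks for a single node (or a fraction of one) and launches a single threaded task with srun. The following is a minimal sketch; the partition name and executable are placeholders, not documented Inca settings.

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=12      # one task spanning the 12 cores of a standard node
    #SBATCH --time=01:00:00
    #SBATCH --partition=pbatch      # placeholder partition name

    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    srun ./my_threaded_app          # placeholder executable

Submit with sbatch; smaller --cpus-per-task values let the scheduler pack several such jobs onto one node.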

Juno

Juno is an LLNL-only ASC resource that is tuned for parallel capacity* computing.

Juno
Nodes
   Login nodes            8 (juno[0,1,552,553,576,577,1128,1129])
   Batch nodes            1,056
   Debug nodes            32
   Total nodes            1,152
CPUs (AMD Opteron 8354)
   Cores per node         16
   Total cores            18,432
CPU speed (GHz) 2.2
Theoretical system peak performance (TFLOP/s) 162.2
Memory
   Memory per node (GB)   32
   Total memory (TB)      36.9
Peak CPU memory bandwidth (GB/s) 24
Operating system TOSS
High-speed interconnect InfiniBand DDR (Mellanox)
Compilers  
Parallel job type multiple nodes per job
Run command srun
Recommended location for scratch file space /p/lscratch{...}
Password authentication OTP or Kerberos
Documentation
   Introduction to LC Resources
   Linux Clusters Overview
   /usr/local/docs/linux.basics
   /usr/local/docs/lustre.basics
* Capacity computing is accomplished through the use of smaller and less expensive high-performance systems to run parallel problems with more modest computational requirements.
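
Juno's "multiple nodes per job" model pairs naturally with MPI launched through srun. A minimal multi-node batch sketch follows; the node count, time limit, partition name, and executable are placeholders chosen for illustration.

    #!/bin/bash
    #SBATCH --nodes=64
    #SBATCH --ntasks-per-node=16    # one MPI rank per Opteron core
    #SBATCH --time=04:00:00
    #SBATCH --partition=pbatch      # placeholder partition name

    srun ./my_mpi_app               # 64 x 16 = 1,024 MPI ranks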

Max

Max is an ASC resource for visualization and data analysis work.

Max
Nodes
   Login nodes                              2 (max[152,164])
   Batch nodes                              280
   GPU nodes                                20
   Total nodes                              302
CPUs
   Login nodes (Intel Xeon E5-2670)         16 cores/node
   Compute nodes (Intel Xeon E5-2670)       16 cores/node
   GPU-enabled nodes (NVIDIA Kepler K20X)   2 GPUs/node
   Total cores                              4,584
CPU speed (GHz) 2.6
Theoretical system peak performance (TFLOP/s) 107
Memory
   Memory per login node (GB)               64
   Memory per compute node (GB)             256
   GPU memory per GPU (GB)                  6
   Total memory (TB)                        71.7
Peak CPU memory bandwidth (GB/s) 75
Operating system TOSS
High-speed interconnect InfiniBand QDR (QLogic)
Compilers  
Parallel job type multiple nodes per job
Run command srun
Recommended location for scratch file space /p/lscratch{...}
Password authentication OTP or Kerberos
Documentation
   Introduction to LC Resources
   Linux Clusters Overview
   /usr/local/docs/linux.basics
   /usr/local/docs/lustre.basics
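
For Max's GPU-enabled nodes, a batch script additionally asks Slurm for the GPUs on the node. The --gres syntax below is generic Slurm; the partition name and the GPU resource label actually configured on Max are assumptions.

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --gres=gpu:2            # both K20X GPUs on the node (resource label is an assumption)
    #SBATCH --time=02:00:00
    #SBATCH --partition=pvis        # placeholder partition name

    srun ./my_gpu_app               # placeholder executable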

Muir

Muir is an LLNL-only ASC resource for visualization.

Muir
Nodes
   Login nodes                  8 (muir[0,6,324,330,648,654,972,978])
   Batch nodes                  1,248 (muir[12-323,336-647,660-971,984-1295])
   Total nodes                  1,296
CPUs (Intel Xeon EP X5660)
   Cores per node               12
   Total cores                  15,552
CPU speed (GHz) 2.8
Theoretical system peak performance (TFLOP/s) 174.2
Memory
   Memory per node (GB)         24
   Total memory (TB)            31.1
Peak CPU memory bandwidth (GB/s) 32
Operating system TOSS
High-speed interconnect InfiniBand QDR (QLogic)
Compilers  
Parallel job type multiple nodes per job
Run command srun
Recommended location for scratch file space /p/lscratch{...}
Password authentication OTP or Kerberos
Documentation
   Introduction to LC Resources
   Linux Clusters Overview
   /usr/local/docs/linux.basics
   /usr/local/docs/lustre.basics
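
As on the other Linux clusters, job output on Muir belongs on the Lustre scratch file systems rather than home space. A sketch of a batch script that stages its run directory there follows; the lscratch file system name, node count, and executable are placeholders.

    #!/bin/bash
    #SBATCH --nodes=16
    #SBATCH --ntasks-per-node=12    # 12 cores per Muir node
    #SBATCH --time=02:00:00

    SCRATCH_DIR=/p/lscratchX/$USER/run_$SLURM_JOB_ID   # placeholder lscratch name
    mkdir -p "$SCRATCH_DIR"
    cd "$SCRATCH_DIR"
    srun "$SLURM_SUBMIT_DIR"/my_mpi_app   # placeholder executable, taken from the submit directory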

Sequoia

Sequoia is an ASC capability* resource that is best used for very large, highly parallel (up to 40k-processor) jobs.

Sequoia **
Nodes
   Login nodes                  4
   Compute nodes                98,304
CPUs (IBM Power7 login; PowerPC A2 compute)
   Cores per node               48 (login, Power7); 16 (compute, PPC A2)
   Total compute cores          1,572,864
CPU speed (GHz) 3.7 (login); 1.6 (compute)
Theoretical system peak performance (PFLOP/s) 20.1
Memory
   Memory per node (GB)         64 (login); 16 (compute)
   Total compute memory (PB)    1.6
Peak CPU memory bandwidth (GB/s) 42.6
Operating system
   Login nodes                  Red Hat Enterprise Linux
   Compute nodes                Compute Node Kernel
High-speed interconnect IBM BlueGene/Q 5D torus
Compilers  
Parallel job type multiple nodes per job
Job limits  
Run command srun
Recommended location for scratch file space /p/lscratch{...}
Password authentication OTP
Documentation
   Sequoia User Information [authentication requires login with LC user name and OTP; select BG/Q from the Global Spaces menu]
   Introduction to LC Resources
   Using the Sequoia BG/Q System
   /usr/local/docs/rzuseq.basics
   /usr/local/docs/lustre.basics
* Capability computing refers to the use of the most powerful supercomputers to solve the largest and most demanding problems with the intent to minimize time to solution. A capability computer is dedicated to running one problem, or at most a few problems, at a time.
** Limited access
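
Sequoia also uses srun, but at a very different scale: jobs span thousands of BG/Q compute nodes, and the Sequoia-specific documentation above sets the actual size and time limits. A hedged sketch of a large capability run, with placeholder node counts and executable:

    #!/bin/bash
    #SBATCH --nodes=4096
    #SBATCH --ntasks-per-node=16    # one MPI rank per A2 core; layout is a placeholder
    #SBATCH --time=06:00:00

    srun ./my_bgq_app               # placeholder executable built for the BG/Q compute nodes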

Zin

Zin is an LLNL-only ASC resource that is tuned for parallel capacity* computing.

Zin

Nodes
   Login nodes                  18 (zin[497,498,501,502,505,506,1433,1434,1437,1438,1441,1442,2425,2426,2429,2430,2433,2434])
   Batch nodes                  2,740
   Debug nodes                  32
   Total nodes                  2,916
CPUs (Intel Xeon E5-2670)
   Cores per node               16
   Total cores                  46,656
CPU speed (GHz) 2.6
Theoretical system peak performance (TFLOP/s) 970.4
Memory
   Memory per node (GB)         32
   Total memory (TB)            93.3
Peak CPU memory bandwidth (GB/s) 51.2
Operating system TOSS
High-speed interconnect InfiniBand QDR (QLogic)
Compilers  
Parallel job type multiple nodes per job
Run command srun
Recommended location for scratch file space /p/lscratch{...}
Password authentication OTP or Kerberos
Documentation
   Introduction to LC Resources
   Linux Clusters Overview
   /usr/local/docs/linux.basics
   /usr/local/docs/lustre.basics
* Capacity computing is accomplished through the use of smaller and less expensive high-performance systems to run parallel problems with more modest computational requirements.
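
Most of the clusters above list OTP or Kerberos password authentication (Sequoia is OTP only). A typical Kerberos-based login sequence looks like the sketch below; the host alias is a placeholder, and OTP users are simply prompted for the one-time password at the ssh step instead of using a cached ticket.

    kinit                 # obtain a Kerberos ticket (prompts for credentials as configured)
    klist                 # confirm the ticket is active
    ssh zin               # placeholder host alias; use the documented login node name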
