Computing Resources


Hardware

LC Computing Resources

A summary of computing resources at LC is provided in the table below. Other useful (and more detailed) information may be found by following these links:

Systems Summary

Network  | System (Program)                            | Processor Architecture                        | OS *     | Nodes  | Cores     | Peak (TFLOP/s)
OCF (CZ) | Ansel (M&IC)                                | Intel Xeon EP X5660                           | TOSS     | 324    | 3,888     | 43.5
OCF (CZ) | Aztec (M&IC)                                | Intel Xeon EP X5660                           | TOSS     | 96     | 1,152     | 12.9
OCF (CZ) | Catalyst (ASC/M&IC) ****                    | Intel Xeon E5-2695 v2                         | TOSS     | 324    | 7,776     | 149.3
OCF (CZ) | Cab (ASC/M&IC)                              | Intel Xeon E5-2670                            | TOSS     | 1,296  | 20,736    | 431.3
OCF (CZ) | Herd (M&IC) **                              | AMD Opteron 8356, 6128; Intel Xeon EX E7-4850 | TOSS     | 9      | 256       | 1.6
OCF (CZ) | Hyperion (computing industry collaboration) | Intel Xeon                                    | TOSS     | 1,100  | 13,216    | 112.7
OCF (CZ) | OSLIC ***                                   | Intel Xeon E5330                              | TOSS     | 10     | 40        | n/a
OCF (CZ) | Sierra (M&IC)                               | Intel Xeon EP X5660                           | TOSS     | 1,944  | 23,328    | 261.3
OCF (CZ) | Surface (ASC/M&IC) **                       | Intel Xeon E5-2670                            | TOSS     | 162    | 2,592     | 53.9
OCF (CZ) | Syrah (ASC/HPCIC) **                        | Intel Xeon E5-2670                            | TOSS     | 324    | 5,056     | 107.8
OCF (CZ) | Vulcan (ASC/M&IC/HPCIC)                     | IBM PowerPC A2                                | RHEL/CNK | 24,576 | 393,216   | 5,033
OCF (RZ) | RZCereal (M&IC)                             | Intel Xeon E5530                              | TOSS     | 21     | 169       | 1.6
OCF (RZ) | RZHasGPU                                    | Intel Xeon E5-2667 v3                         | TOSS     | 20     | 320       | 8.2
OCF (RZ) | RZMerl (ASC/M&IC)                           | Intel Xeon E5-2670                            | TOSS     | 162    | 2,592     | 53.9
OCF (RZ) | RZSLIC ***                                  | Intel Xeon E5330                              | TOSS     | 3      | 24        | n/a
OCF (RZ) | RZuSeq (ASC) ****                           | IBM PowerPC A2                                | RHEL     | 522    | 8,192     | 100
OCF (RZ) | RZZeus (M&IC)                               | Intel Xeon E5530                              | TOSS     | 267    | 2,144     | 20.6
SCF      | CSLIC ***                                   | Intel Xeon E5330                              | TOSS     | 10     | 40        | n/a
SCF      | Inca (ASC)                                  | Intel Xeon EP X5660                           | TOSS     | 100    | 1,216     | 13.5
SCF      | Juno (ASC)                                  | AMD Opteron 8354                              | TOSS     | 1,152  | 18,432    | 162.2
SCF      | Max (ASC)                                   | Intel Xeon E5-2670                            | TOSS     | 302    | 4,584     | 107
SCF      | Muir (ASC)                                  | Intel Xeon EP X5660                           | TOSS     | 1,296  | 15,552    | 174.2
SCF      | Sequoia (ASC) **                            | IBM PowerPC A2                                | RHEL/CNK | 98,304 | 1,572,864 | 20,132
SCF      | Zin (ASC)                                   | Intel Xeon E5-2670                            | TOSS     | 2,916  | 46,656    | 970.4
ISNSI    | Pinot (M&IC)                                | Intel Xeon E5-2670                            | TOSS     | 162    | 2,592     | 53.9

Notes: Aztec, Inca, and RZCereal lack a high-speed interconnect and are intended for single-node computing (serial jobs or on-node shared memory parallelism). All other systems have a high-speed interconnect and are for multinode computing (both distributed and shared memory parallelism).
* OS (operating system) types are TOSS (Tri-Lab Operating System Stack, formerly CHAOS, the Clustered High Availability Operating System, derived from Red Hat Linux); RHEL/CNK (Red Hat Enterprise Linux/Compute Node Kernel); and RHEL (Red Hat Enterprise Linux).
** Limited (or restricted) access.
*** Reserved for moving files between LC file systems and HPSS archival storage.
**** Limited availability (LA); not generally available (GA).
Definitions: Limited availability (LA) indicates the machine is available for use but not yet ready for general availability (GA). A limited number of user accounts are added, though not all new account requests can be granted, and a few issues may remain to be resolved before the machine becomes GA. GA means the machine is ready for production computing, and any user in the class the machine is targeted to serve may request and be granted an account.

Tri-Lab Computing Resources

Los Alamos and Sandia National Laboratories provide additional computing resources for Tri-Lab, Intersite, and SecureNet users.

Machine Status


Software

  • Compilers. Lists which compilers are available for each LC system.
  • Supported Software and Computing Tools. Development Environment Group supported software includes compilers and preprocessors, libraries, debugging, profiling, trace generation/visualization, performance analysis tools, correctness tools, parallel techniques/parallel environment, and utilities.
  • Graphics Software. Information Management & Graphics Group supported software includes visualization tools, graphics libraries, and utilities for the plotting and conversion of data.
  • Linux. See Linux Projects (CHAOS, SLURM, and Lustre) for information about system development. Software Downloads lists locally developed software that is available for download.
  • Open Source and Released Software
  • Math Libraries. LINMath, the Livermore Interactive Numerical Mathematical Software Access Utility, is a Web-based access utility for math library software. The LINMath Web site also has pointers to packages available from external sources. Other good documents on math software are also available.


File Systems/File Management

Login or "Dot" Files

Each user's common home directory contains a set of master dot files for login and run-control support. Host-specific dot files supplement your master dot files. These host-specific files are the place to install customizations that should apply only to individual hosts (for example, an alias for software installed on just one host). See the Login Files section of the Introduction to Livermore Computing Resources for more information.
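As a sketch of the idea (the host pattern, tool path, and alias name below are all hypothetical, not LC-prescribed), a host-specific customization in a shared login file is often just a dispatch on the hostname:

```shell
#!/bin/sh
# Hypothetical login-file fragment: the hostname test keeps an alias
# for a tool installed on only one host from leaking onto machines
# where that software does not exist.
LOGIN_HOST="$(hostname)"
case "$LOGIN_HOST" in
  cab*)                                     # hypothetical host pattern
    alias viz='/usr/gapps/mygroup/bin/viz'  # hypothetical one-host tool
    ;;
esac
echo "login customizations applied on $LOGIN_HOST"
```

Because the shared file is sourced on every host, guarding each one-host customization this way keeps a single set of dot files portable across all machines that mount the common home.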

Home Directories/Common Home Directory

The "common home" directory arrangement eliminates the need to keep redundant copies of home files (and make redundant updates) on multiple machines, and it lets every participating host share the same path name and directory structure for home. The /g home directories are backed up, so if you accidentally delete a file, you may be able to retrieve it. Additional details are available in the Home Directories section of the Introduction to Livermore Computing Resources; see also the Backup Policy Summary in the EZFILES basic file-management guide.

Information about the organization of important public directories (and their underlying file systems) on the LC systems and a comparison of the home and work file environments on each computing system is available in the Directory Structure and Properties section of EZFILES.


File Systems

/usr/gapps and /usr/local File Systems - LC provides four /usr/gapps file systems (three on the OCF and one on the SCF) for user-developed and supported applications on LC systems. The /usr/local file system is the usual location of commonly used binaries, libraries, and documents that are platform specific, but it is also used for code development and file storage.

Temporary File Systems - Various temporary file systems (global and local) are for system use, temporary storage of input or output local to each machine, or large temporary storage for input or output shared among many machines. Temporary files are not backed up. All temporary files are subject to purge. See the Purge Policy news article for details.

Parallel File Systems - View the CZ or RZ file system status page to learn which file systems are mounted on which machines and to check file system status. Consult the Lustre parallel file system summary for file system names, maximum bandwidths, and capacities. The Lustre parallel file system, /p/lscratch*, is not backed up and is subject to purge; see the Revised Lustre Parallel File System Purge Policy for the purge guidelines, and the lustre.basics article in /usr/local/docs/ for a Lustre overview. A summary of the backup and purge policy for OCF Linux systems is available in the PurgePolicy.linux news article, and the FileSystemUse article in /usr/local/docs/ describes the recommended use of the file systems.

For additional details, see the Parallel File Systems section of the Introduction to Livermore Computing Resources, including the File Systems Summary table for an at-a-glance overview of file system types, sizes, quotas, backup and purge status, and usage. Also see the Summary of Default File Quotas and Quota Warnings.

File System Status


Archival Storage

The High Performance Storage System (HPSS) is available on all OCF (CZ/RZ) and SCF production systems. Each user has a virtually unlimited storage account with passwordless access from LC systems. The EZSTORAGE manual provides an overview of tools for storing and archiving files. Consult the HPSS User Manual for details of the storage system and its specialized features. See also the Data Storage Group Web page.
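A typical store-and-verify session with htar and hsi might look like the following sketch (the archive and directory names are hypothetical, and the availability check lets the fragment exit cleanly on machines without the HPSS client tools):

```shell
#!/bin/sh
# Bundle a results directory into a tar-format archive written directly
# to HPSS with htar, list its members, then confirm it from hsi.
if command -v htar >/dev/null 2>&1; then
  htar -cvf results.tar results/     # create the archive in HPSS
  htar -tvf results.tar              # list the archive's members
  hsi "ls -l results.tar"            # verify the archive from hsi
  HPSS_AVAILABLE=1
else
  echo "htar/hsi not found; run this on an LC production system"
  HPSS_AVAILABLE=0
fi
```

Because htar writes the archive directly into storage, no local copy of the tar file is created, which matters when the bundled data is larger than your local scratch space.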

File Transfer and Sharing

LC supports a wide variety of file transfer protocols, including FTP, NFT, HSI, SFTP, and SCP. (FTP, NFT, and HSI also serve as the interfaces to LC's open and secure archival storage systems, as does the special-purpose transfer tool HTAR.) Each of these protocols can also be used via Hopper, a powerful, interactive, cross-platform tool that lets users transfer and manipulate files and directories through a graphical user interface. In addition to the Hopper Web pages, see the hopper man page or use the hopper -readme command. On some systems, the FTP-based tools XFTP and XDIR are available. For more details, see the File Sharing and Transfer section of Introduction to Livermore Computing Resources.

The Green Data Oasis, a large data store on the unrestricted LLNL network, can be used for sharing very large amounts of data with external collaborators.

All files and directories have an owner, usually the person who initially created the file or directory. The owner can assign permissions to other users, and these permissions control who can manipulate the files and directories. "World" sharing of files is not allowed in user directories (all home and tmp directories). World (other) permissions will be removed automatically. File sharing can be accomplished with the give and take utilities. Users can share files and directories with a group by creating a group via the LC Identity Management System. Applications can be shared in the /usr/gapps file system. Submit the /usr/gapps Request form to create, change, or delete a directory. LLNL maintains an anonymous FTP server (ftp.llnl.gov) that allows world sharing of files. For more details, see the File Transfer and Sharing section of Introduction to Livermore Computing Resources.
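The permission model underneath this policy is ordinary POSIX ownership and mode bits. As a minimal sketch (the file name is hypothetical, and LC-specific alternatives such as give and take are covered in EZFILES), granting group read access while withholding all world access looks like:

```shell
#!/bin/sh
# Create a file, then set owner read/write, group read, and no world
# access; this matches the state automatic world-permission removal
# leaves files in within user directories.
touch shared_results.txt
chmod u=rw,g=r,o= shared_results.txt
stat -c '%a' shared_results.txt    # prints 640 on Linux
```

Using the explicit symbolic mode (u=rw,g=r,o=) rather than a relative one makes the result independent of the file's starting permissions and of your umask.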

Consult the EZFILES manual for descriptions of tools that manipulate file permissions, transfer files, or perform basic file-handling tasks. File management tools (such as give and take) unique to the LC computing environment receive special attention in EZFILES.

The File Interchange Service (FIS) moves files between the open and the secure LC computing networks.

File Editors

See the list of file editors available on most UNIX and Linux systems.
