Supercomputing

The fastest UD supercomputer, Chimera, which consists of 3168 processing cores and 4800 GB of memory, is available to students and researchers in DPA who belong to either the Szalewicz or Nikolic group.

The Szalewicz group operates one of the fastest supercomputing clusters on the UD campus; the peak performance of kolos is 2.4 Tflops.
Iceteen processes data gathered from IceTop at the South Pole.

 
Parallel computing aims to enhance performance by performing parts of a task concurrently. Achieving the performance gains that today's parallel systems (with multicore processors, GPUs, etc.) promise requires new paradigms for system software (operating systems, languages, compilers, and programming tools) and the redesign of application software through the development and implementation of parallel algorithms. High-performance computing is a growing area of research, and many groups in the Department share an interest in Computational Physics research, algorithm development, and education at both the undergraduate and graduate levels.
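
To make the idea of "performing parts of a task concurrently" concrete, the sketch below divides a numerical integration of pi among threads with OpenMP. It is an illustrative example only, not code taken from any of the clusters listed here, and it assumes a compiler with OpenMP support (e.g. gcc -fopenmp).

    /*
     * Minimal sketch of shared-memory parallelism with OpenMP:
     * the loop iterations (parts of one task) are executed
     * concurrently by several threads.  Compile with: gcc -fopenmp
     */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const long n = 100000000;          /* number of integration steps */
        const double h = 1.0 / (double)n;  /* width of one step           */
        double sum = 0.0;

        /* Split the loop among threads; reduction(+:sum) combines
           the per-thread partial sums safely at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++) {
            double x = (i + 0.5) * h;      /* midpoint of the i-th slice  */
            sum += 4.0 / (1.0 + x * x);    /* pi = integral of 4/(1+x^2)  */
        }

        printf("pi ~= %.12f (max threads: %d)\n", sum * h, omp_get_max_threads());
        return 0;
    }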
 
Researchers in the Department utilize remote supercomputing facilities (such as TeraGrid, accessible through the Nikolic group) and operate the following Linux supercomputing clusters locally (a short MPI programming example follows the list):
 
Wiki pages describing how to use the clusters are linked below (access from outside the UD campus is password protected).
 
arches:

  • operating system: Linux
  • number of nodes: 12
  • number of processing cores: 42
  • total memory: 60 GB

asterix:

  • operating system: Linux
  • number of nodes: 41
  • number of processing cores: 222
  • total memory: 500 GB

chimera: (access through Szalewicz and Nikolic groups)

  • operating system: Linux
  • number of nodes: 67
  • number of processing cores: 3168
  • total memory: 4800 GB

dante: (access through Matthaeus group)

  • operating system: Linux
  • number of nodes: 8
  • number of processing cores: 60
  • total memory: 204 GB

energy: (access through Szalewicz group)

  • operating system: Linux
  • number of nodes: 16
  • number of processing cores: 32
  • total memory: 128 GB


goose: (access through Shay group)

  • operating system: Linux
  • number of nodes: 38
  • number of processing cores: 312
  • total memory: 900 GB

kolos: (access through Szalewicz group)

  • operating system: Linux
  • number of nodes: 31
  • number of processing cores: 276
  • total memory: 1.1 TB

lowdin: (Departmental cluster accessible to all students and faculty)

  • operating system: Linux
  • number of nodes: 33
  • number of processing cores: 66
  • total memory: 232 GB

stcsun: (access through Chui group)

  • operating system: Linux
  • number of nodes: 1
  • number of processing cores: 32
  • total memory: 64 GB

ulam: (access through Nikolic group)

  • operating system: Linux
  • number of nodes: 17
  • number of processing cores: 48
  • total memory: 124 GB
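
As mentioned above the cluster list, programs that span the many nodes of clusters like these typically use message passing. The sketch below is the distributed-memory counterpart of the earlier OpenMP example; it is a minimal illustration that assumes an MPI installation (mpicc/mpirun) is available, which should be confirmed on the individual cluster wiki pages.

    /*
     * Minimal MPI sketch of the same pi calculation, distributed
     * over the ranks of a cluster job.  Assumes an MPI installation:
     * compile with mpicc, run with e.g. mpirun -np 16 ./a.out
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */

        const long n = 100000000;
        const double h = 1.0 / (double)n;
        double local = 0.0;

        /* Cyclic split of the work: rank r handles slices r, r+size, ... */
        for (long i = rank; i < n; i += size) {
            double x = (i + 0.5) * h;
            local += 4.0 / (1.0 + x * x);
        }

        /* Combine the partial sums on rank 0. */
        double sum = 0.0;
        MPI_Reduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("pi ~= %.12f on %d ranks\n", sum * h, size);

        MPI_Finalize();
        return 0;
    }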