Inter-Continental Online Visualization and Control of T3E Simulations of Black-Hole Interactions and Gravitational Waves

Run as a demo at SC'97 in San Jose, November 1997

The poster shown at the demo is also available.

Ed Seidel (1,3), Lee Wild (1), Joan Masso (1), Paul Walker (1), Manuel Panea (2), Ulrich Schwenn (2), Klaus Desinger (2), Hermann Lederer (2), John Shalf (3), Randy Butler (3), Brian Toonen (4), Ian Foster (4)

1) Max-Planck-Institut für Gravitationsphysik
Albert-Einstein-Institut
D-14473 Potsdam, Germany

2) Rechenzentrum Garching (RZG) der Max-Planck-Gesellschaft
Max-Planck-Institut für Plasmaphysik
D-85748 Garching, Germany

3) National Center for Supercomputing Applications (NCSA)
University of Illinois at Urbana-Champaign

4) Argonne National Laboratory (ANL)
Argonne, Illinois 60439-4803


The Science

The 3D Einstein equations are solved to simulate interactions between black holes and gravitational waves, in support of the design and interpretation of gravitational-wave detection experiments.
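
For reference, the equations being solved are Einstein's field equations, written here in geometric units (G = c = 1):

    G_{\mu\nu} = 8 \pi T_{\mu\nu}

For the vacuum black-hole and gravitational-wave spacetimes simulated here the stress-energy tensor T_{\mu\nu} vanishes, and the equations are evolved in time as a 3D initial value problem.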


Online Visualization and Remote Control of the Running Simulation

Gravitational-wave iso-surfaces can be selected and displayed in near real time on the ImmersaDesk equipment.

High Performance Computing

The simulation code, called CACTUS, runs on a Cray T3E-600 at RZG in Garching, Germany, with a speedup of 484 on 512 processing elements (PEs).
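
From these figures the parallel efficiency follows directly:

    speedup / PEs = 484 / 512 ≈ 0.945,

i.e. roughly 95 per cent of ideal linear scaling.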

High Performance Networking

For the demonstration, an 8,000-mile inter-continental network connection with a bandwidth of 12 Mbit/s was established between San Jose and Garching in order to carry the T3E output stream of 1 MByte/s.
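
The link leaves some headroom over the raw data rate, since

    1 MByte/s × 8 bit/Byte ≈ 8 Mbit/s < 12 Mbit/s,

with the remainder absorbing protocol overhead and short bursts.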

This was realized with the help of various providers and partners:

Garching/Germany (RZG) --- [B-WiN (DFN)] --- Stuttgart (RUS) --- [public ATM (DTAG)] --- Hamburg / Sylt --- [CANTAT-3 (Teleglobe)] --- Nova Scotia --- [CA*net II (CANARIE INC / NTN)] --- Chicago (STAR TAP) --- [vBNS (NSF)] --- San Jose.

Special TCP/IP tuning for efficient long-distance transfers was implemented in the communicating applications in Garching and San Jose.
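
The single most important tuning step on such a "long fat pipe" is to make the TCP socket buffers at least as large as the bandwidth-delay product, so that a full window of data can be in flight across the Atlantic at any time. The C sketch below illustrates the idea; the buffer size and the assumed ~200 ms round-trip time are illustrative and not taken from the actual demo code.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    /* Bandwidth-delay product for a ~12 Mbit/s link with an assumed
     * ~200 ms round-trip time: 12e6 bit/s * 0.2 s / 8 = ~300 KByte.
     * The socket buffers must hold at least this much unacknowledged
     * data to keep the intercontinental pipe full.                   */
    #define SOCK_BUF_BYTES (300 * 1024)

    int make_tuned_socket(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0)
            return -1;

        int buf = SOCK_BUF_BYTES;
        /* Enlarge send and receive buffers; this has to happen before
         * the connection is set up, because the TCP window scale
         * option is fixed during the handshake.                      */
        if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &buf, sizeof buf) < 0 ||
            setsockopt(s, SOL_SOCKET, SO_RCVBUF, &buf, sizeof buf) < 0) {
            close(s);
            return -1;
        }
        return s;
    }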

New iso-surfaces of 1 MByte in size are calculated every second and transferred over this network link, with its raw bandwidth of 12 Mbit/s, for visualization in San Jose. The code can also be controlled from the show floor, allowing the user to change visualization and simulation parameters while the code is running.
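
The actual steering protocol is not described here; purely as a hypothetical illustration of the idea, remote control can be as simple as the simulation polling a TCP connection between iterations for small parameter-update messages:

    /* Hypothetical steering message; the field names and fixed-size
     * layout are illustrative only, not the format used in the demo. */
    struct steer_msg {
        double iso_value;     /* new iso-surface level requested      */
        int    output_every;  /* send geometry every N iterations     */
        int    pause_flag;    /* nonzero: hold the simulation         */
    };

    /* Between iterations the code would check, without blocking,
     * whether such a message has arrived and apply it before the
     * next time step.                                                */
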
CACTUS is an MPI-based parallel simulation code developed at the Albert Einstein Institute in Potsdam, Germany, at NCSA, and at Washington University in St. Louis.

After each CACTUS iteration, iso-surfacing is carried out in parallel on each T3E PE for data reduction, producing a data stream of 1 MByte/s suited for direct visualization on the ImmersaDesk.
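
A minimal sketch of this reduction step is shown below, assuming one grid block per PE; the function names (extract_local_isosurface, send_to_visualization_host) are placeholders, not routines from the CACTUS sources.

    #include <mpi.h>
    #include <stdlib.h>

    /* Placeholders for the local iso-surface extraction and for the
     * wide-area output stream to the ImmersaDesk host in San Jose.   */
    int  extract_local_isosurface(float isovalue, float **verts);
    void send_to_visualization_host(const float *verts, int nfloats);

    void reduce_and_stream(float isovalue, MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        float *local_verts = NULL;
        int local_count = extract_local_isosurface(isovalue, &local_verts);

        /* PE 0 first learns how many floats each PE contributes ...  */
        int *counts = NULL, *displs = NULL, total = 0;
        if (rank == 0) {
            counts = malloc(size * sizeof *counts);
            displs = malloc(size * sizeof *displs);
        }
        MPI_Gather(&local_count, 1, MPI_INT, counts, 1, MPI_INT, 0, comm);

        float *all_verts = NULL;
        if (rank == 0) {
            for (int i = 0; i < size; i++) {
                displs[i] = total;
                total += counts[i];
            }
            all_verts = malloc(total * sizeof *all_verts);
        }

        /* ... then collects the reduced geometry (about 1 MByte per
         * iteration instead of the full 3D field data).              */
        MPI_Gatherv(local_verts, local_count, MPI_FLOAT,
                    all_verts, counts, displs, MPI_FLOAT, 0, comm);

        if (rank == 0) {
            send_to_visualization_host(all_verts, total);
            free(all_verts); free(counts); free(displs);
        }
        free(local_verts);
    }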


We thank RUS (Rechenzentrum Universität Stuttgart), DTAG (Deutsche Telekom AG), Teleglobe, CANARIE INC / NTN, and NSF (National Science Foundation) for providing the network connection that enabled this demonstration.