Alliance'98
AEI - NCSA - ZIB

Transatlantic Interactive Visualization of Black Hole Interaction and Gravitational Wave Supercomputing

with support by

AEI - ANL - Berkom - Canarie - GMD Focus - NCSA - RZG - STARTAP - Teleglobe - vBNS - ZIB

At the Alliance'98 conference in Champaign, Illinois, we demonstrated the feasibility of running a complex physical simulation on a powerful supercomputer while viewing the results, as they are computed, in another part of the world. This kind of work enables a new quality of international cooperation among scientists and is expected to become routine in the next century.

NCSAdemo.jpg

The Demo

The Transatlantic High-Speed Line
Six telephone companies and network partners worked together to establish a data connection between ZIB in Berlin (Germany) and NCSA in Champaign (US) that delivered the required bandwidth of roughly 100 to 1000 times the transfer rate of modern analog modems.

Starting at ZIB, the BRAIN (Berlin Research Area Information Network) provided a direct connection to the building of GMD Focus. This is the same building as Berkom, where a cable had to be laid to connect to the public ATM net of Telekom, which sponsored the line to the island of Sylt in the North Sea. From there, Teleglobe's CANTAT-3 submarine fibre-optic cable took over as far as Pennan Point in Nova Scotia (Canada), where Teleglobe's GigaPOP router is located. In Canada, CANARIE took over with its CA*II net to connect to the STAR TAP in Chicago, the entrance point of the vBNS, to which NCSA is connected.

Teleglobe's CANTAT-3 submarine fibre-optic cable
Special thanks to Teleglobe for supporting the demo with its CANTAT-3 transatlantic cable, which connects Sylt in the German North Sea with Pennan Point in Nova Scotia.

Routing
ZIB - [BRAIN] - GMD Focus - [Berkom Building] - Berkom - [Public ATM] - Sylt - [Teleglobe CANTAT-3] - Pennan Point (Nova Scotia) - [CA*II net] - STAR TAP (Chicago) - [vBNS] - NCSA

Performance (TCP transfer rate)
ZIB CRAY T3E ---> NCSA demo machine

        message size (bytes)    throughput (Mbit/s)
        -------------------------------------------
                        1024                   2.71
                        4096                   2.76
                       16384                   2.75
                       32768                   2.74
                       65536                   2.55
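A measurement along these lines can be reproduced on any machine. The sketch below (a localhost-only Python illustration, not the tool used for the ZIB-NCSA benchmark) times a stream of fixed-size messages over a TCP socket and reports the achieved throughput in Mbit/s; the function name and message counts are invented for illustration.

```python
import socket
import threading
import time

def measure_throughput(message_size, n_messages=200):
    """Send n_messages of message_size bytes over a local TCP
    connection and return the achieved throughput in Mbit/s."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # any free port
    server.listen(1)
    port = server.getsockname()[1]

    def sink():
        # Receiver side: drain everything the sender transmits.
        conn, _ = server.accept()
        remaining = message_size * n_messages
        while remaining > 0:
            data = conn.recv(65536)
            if not data:
                break
            remaining -= len(data)
        conn.close()

    receiver = threading.Thread(target=sink)
    receiver.start()

    client = socket.create_connection(("127.0.0.1", port))
    payload = b"\0" * message_size
    start = time.perf_counter()
    for _ in range(n_messages):
        client.sendall(payload)
    client.close()
    receiver.join()        # stop the clock only after full delivery
    server.close()

    elapsed = time.perf_counter() - start
    bits_sent = message_size * n_messages * 8
    return bits_sent / elapsed / 1e6

for size in (1024, 4096, 16384, 32768, 65536):
    print(f"{size:8d} bytes: {measure_throughput(size):10.2f} Mbit/s")
```

On a loopback interface the numbers are of course far higher than over a transatlantic ATM path; only the measurement principle (timed bulk transfer per message size) matches the table above.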
The CRAY T3E Supercomputer
The CRAY T3E 136/128 supercomputer at ZIB, Berlin consists of 128 Alpha processors, each running at 450 MHz, with 128 Megabytes of memory per processor. During the demo the CRAY was reserved exclusively for the demo program, i.e. switched into `dedicated mode'. All currently running processes had to be swapped out, which took about 10-20 minutes. After the demo had finished, all other processes could continue.

The Physics Behind
The simulation shown was the extraction of gravitational waves from a distorted black hole, the so-called Teukolsky waves. Any black hole in nature must become completely spherically symmetric in a finite time (the so-called no-hair theorem). A black hole that is not spherically symmetric must radiate its perturbations away as gravitational radiation. Such a situation can arise during the collision of two black holes. It is believed that such collisions occur about once a year at an intensity that will become measurable within the next five years.

Simulation and Visualization - Cactus, Globus and parallel Isosurfaces

  • The physical simulation in full three dimensions was done by the Cactus code, which is being developed mainly at the Albert Einstein Institute in Potsdam. It is a highly parallelized code for solving the Einstein equations, which are said to be the most complex in modern physics, and is designed to make maximal use of modern supercomputers.
  • Visualization was done on Immersadesk systems, virtual-reality-like projection desks that allow a group of visitors to view animated graphics in 3D.
  • The interface between the Cactus program and the visualization software (parallel isosurface computation) was developed at the Rechenzentrum Garching.
  • Globus was used to manage and control program start and execution. Globus is being developed at Argonne National Laboratory.
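The parallel isosurface step can be illustrated schematically. The sketch below (a toy Python stand-in, not the Garching code) splits a small 3D scalar field into slabs and counts, per slab, the grid cells whose corner values straddle the isovalue: exactly the cells a marching-cubes pass would triangulate. Because the slabs are independent, the work distributes cleanly across processors, just as the Cactus grid is distributed on the T3E. The grid size, field, and isovalue here are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

N = 24  # grid resolution per axis (small demo grid)

def field(i, j, k):
    # Toy scalar field: squared distance from the grid centre.
    c = (N - 1) / 2.0
    return (i - c) ** 2 + (j - c) ** 2 + (k - c) ** 2

def crossing_cells(slab, isovalue):
    """Count cells with first index in [i0, i1) whose eight corner
    values straddle isovalue -- the candidates an isosurface
    triangulation (e.g. marching cubes) would actually process."""
    i0, i1 = slab
    count = 0
    for i in range(i0, min(i1, N - 1)):
        for j in range(N - 1):
            for k in range(N - 1):
                corners = [field(i + di, j + dj, k + dk)
                           for di in (0, 1)
                           for dj in (0, 1)
                           for dk in (0, 1)]
                if min(corners) < isovalue < max(corners):
                    count += 1
    return count

# Partition the grid into slabs and classify them in parallel;
# each worker stands in for one processor owning part of the grid.
slabs = [(i, i + 6) for i in range(0, N, 6)]
with ThreadPoolExecutor() as pool:
    counts = list(pool.map(lambda s: crossing_cells(s, 64.0), slabs))
print("crossing cells per slab:", counts, "total:", sum(counts))
```

In the real pipeline each processor would go on to triangulate its candidate cells and only the resulting geometry would be shipped across the Atlantic, which is far smaller than the full simulation data.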
The Actual Demonstration
The demonstration took place on Monday, April 27th and Wednesday, April 30th in the Assembly Hall at Champaign, a hall normally used as a basketball stadium. The scheduled time was 5pm to 6:30pm Champaign local time, which was midnight to 1:30am in Berlin (Tuesday/Thursday morning there). The demo was presented by John Shalf (who wrote the network interface within the Cactus code) and Warren Smith, who got Globus running on the ZIB T3E.

We had a bidirectional videoconference over the Internet, so that people in the demo hall could see the people at ZIB watching the progress of the demo, and people at ZIB could simultaneously watch what happened in Champaign. Via the audio connection we could also hear one another, and Ed Seidel and Werner Benger gave descriptions from ZIB, answering questions from the visitors in the demo hall.
Impressions from the demo event:
1s.jpg - The Immersadesk at the Assembly Hall, ready to start the demo
1s.jpg - Visitors at the Assembly Hall, waiting for action
t3es.jpg - Empty CRAY just before launching Cactus
13s.jpg - Isosurface computation initiated
18s.jpg - Watching in 3D
13s.jpg - New isosurfaces evolving at the Idesk
16s.jpg - Grown isosurfaces some timesteps later
40s.jpg - Full CPU load at the ZIB CRAY in Berlin
42s.jpg - Ready for the second demo run
7s.jpg - CPU load as seen in Champaign and viewed via video camera in Berlin
23s.jpg - Explaining Globus
46s.jpg - Again: isosurfaces growing



Werner Benger