Seismic Imaging in Grid Computing



Grid computing offers a model for solving massive computational problems by harnessing the unused resources (CPU cycles and/or disk storage) of large numbers of disparate computers on heterogeneous platforms, often desktop computers, treated as a virtual cluster embedded in a distributed infrastructure. Seismic data is key to discriminating oil-bearing earth layers from overburden rock. Seismic data are acquired by setting off a source of seismic energy, e.g. an airgun or vibrator array, and recording the energy reflected from horizons or reflectors in the subsurface. A seismic survey is acquired by repeating this procedure for a series of shot and receiver locations. An average 3D seismic survey can easily consist of 300 million seismic traces from thousands of shots, amounting to roughly 2 terabytes of storage. The goal of seismic processing is to filter the shot gathers in such a way that a clear image of the subsurface is obtained. To create the final filtered image of the subsurface, many different filter algorithms (geophysical modules) are applied to the data in sequential order. Seismic imaging is the most CPU-intensive module, but it is very well suited to parallelization. A heterogeneous-domain grid-computing solution is implemented and tested to enable distributed computing of seismic imaging. The grid solution is built on the BOINC framework, with Linux and Windows desktop PCs acting as compute clients. The grid solution serves as a framework that can be reused for other CPU-intensive, parallelizable science applications.
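The parallelism described above comes from the fact that each shot gather can be migrated independently, with only the final stacking step requiring all partial results. The following is a minimal sketch of that structure, not the actual geophysical module: `migrate_shot`, `image_survey`, and the toy data are hypothetical, and in the real system each per-shot migration would be a CPU-intensive kernel shipped to a Linux or Windows client as a BOINC work unit.

```python
def migrate_shot(shot_gather):
    """Placeholder imaging kernel for one shot gather.

    In the real pipeline this would be a compute-heavy migration;
    here it simply halves each sample so the sketch stays runnable.
    """
    return [s * 0.5 for s in shot_gather]


def image_survey(shot_gathers):
    """Migrate each shot gather independently, then stack the results.

    Because the per-shot migrations share no state, each call to
    migrate_shot could be dispatched to a different grid client;
    only the stacking (summation) step needs the results back.
    """
    partial_images = [migrate_shot(g) for g in shot_gathers]  # parallelizable
    n_samples = len(partial_images[0])
    # Stack: sum the partial images sample by sample into one image.
    return [sum(img[i] for img in partial_images) for i in range(n_samples)]


survey = [[float(i)] * 4 for i in range(10)]  # 10 toy shot gathers
print(image_survey(survey))  # each sample stacks to 0.5 * (0 + 1 + ... + 9)
```

The design point is that the list comprehension over `shot_gathers` is the only place where work is distributed; replacing it with asynchronous work-unit dispatch changes nothing in the stacking logic.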


Authors: Amri Widyatmoko, AM Ustad