
SEP-150 (2013)

Download: Book (pdf)

Deconvolution and signal processing

Ricker compliant deconvolution (pdf)

[SRC]
Jon Claerbout and Antoine Guitton
Ricker compliant deconvolution spikes at the center lobe of the Ricker wavelet. It enables deconvolution to preserve and enhance seismogram polarities. Expressing the phase spectrum as a function of lag, it works by suppressing the phase at small lags. A byproduct of this decon is a pseudo-unitary (very clean) debubble filter where bubbles are lifted off the data while onset waveforms (usually Ricker) are untouched.

 
Shortest path to whiteness (pdf)

[SRC]
Stewart A. Levin, Jon Claerbout, and Eileen R. Martin
The output of a prediction error filter is white. Easy to state, annoyingly hard for students to understand. We provide here two short, clean paths to that understanding.
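
As a small companion to those two paths (my own illustration, not from the report), the whiteness claim is easy to check numerically: estimate a prediction-error filter by least squares on a strongly colored signal and compare spectral flatness before and after.

    # Minimal numerical check (illustrative sketch, not from the report):
    # the output of a least-squares prediction-error filter is nearly white.
    import numpy as np

    rng = np.random.default_rng(0)
    n, order = 4096, 10

    # Colored input: white noise passed through a recursive (AR) filter.
    w = rng.standard_normal(n)
    x = w.copy()
    for t in range(2, n):
        x[t] = w[t] + 1.5 * x[t - 1] - 0.7 * x[t - 2]

    # Least-squares prediction of x[t] from its 'order' previous samples.
    rows = np.array([x[t - order:t][::-1] for t in range(order, n)])
    coef, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)

    # Prediction error = output of the prediction-error filter.
    e = x[order:] - rows @ coef

    # Spectral flatness: geometric mean / arithmetic mean of the power spectrum.
    def flatness(signal):
        power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12
        return np.exp(np.mean(np.log(power))) / np.mean(power)

    print("flatness of colored input:", round(flatness(x), 3))  # well below 1
    print("flatness of PEF output   :", round(flatness(e), 3))  # close to 1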

 
OpenCPS experience in my summer seismic preprocessing workshop (pdf)

[SRC]
Stewart A. Levin
Many students (and faculty) throughout the Earth Sciences have at least an occasional need to deal with seismic data. To help them handle some basic data import, preparation, and, as is common here at Stanford, export, I ran a seven-session workshop over the summer. During the course of the workshop, I exposed students to multiple packages: Seismic Unix, ProMAX/SeisSpace, and the very recently arrived OpenCPS. In this report, I discuss our experience with that last package, both as a vehicle for allowing students to focus easily on the geophysics and for the capabilities it provides the casual user.

Velocity estimation

Simultaneous inversion of full data bandwidth by tomographic full waveform inversion (TFWI) (pdf)

[SRC]
Biondo Biondi and Ali Almomin
Convergence of full waveform inversion can be improved by extending the velocity model along either the subsurface-offset axis or the time-lag axis. The extension of the velocity model along the time-lag axis enables us to linearly model large time shifts caused by velocity perturbations. This linear modeling is based on a new linearization of the scalar wave equation in which the perturbation of the extended slowness-squared is convolved in time with the second time derivative of the background wavefield. The linearization is accurate for both reflected and transmitted events. We show that it can effectively model both conventional reflection data and modern long-offset data containing diving waves. It also enables the simultaneous inversion of reflections and diving waves, even when the starting velocity model is far from accurate. We solve the optimization problem related to the inversion with a nested algorithm. The inner iterations are based on the proposed linearization and on a mixing of scales between the short- and long-wavelength components of the velocity model. Numerical tests performed on synthetic data modeled on the Marmousi model and on the "Caspian Sea" portion of the well-known BP model demonstrate the global-convergence properties as well as the high-resolution potential of the proposed method.
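
A schematic form of that linearization, written in my own notation rather than the authors' exact formulation: the slowness-squared perturbation, extended along a time-lag axis $\tau$, drives the scattered wavefield through a temporal convolution with the twice-differentiated background wavefield,

\[
\left(\nabla^{2} - s_{0}^{2}(\mathbf{x})\,\partial_{t}^{2}\right)\delta u(\mathbf{x},t)
= \bigl[\delta s^{2}(\mathbf{x},\tau) \ast_{\tau} \partial_{t}^{2} u_{0}(\mathbf{x},\cdot)\bigr](t),
\]

where $u_{0}$ is the background wavefield, $s_{0}$ the background slowness, and $\ast_{\tau}$ denotes convolution in time; collapsing $\delta s^{2}$ onto $\tau = 0$ recovers the conventional Born linearization.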

 
Near-surface velocity estimation for a realistic 3D synthetic model (pdf)

[SRC]
Xukai Shen
I performed data-domain wave-equation tomography for a realistic synthetic near-surface model. Starting from a model that misses both large-scale and small-scale velocity features, I applied both traveltime tomography and waveform tomography. First-break traveltime tomography using the wave equation not only correctly updates the large-scale velocity structure, but also gives hints of the small-scale velocity structures. The result can be further refined by refraction waveform tomography, which pinpoints the location of small-scale velocity features by using the waveform information in addition to the traveltime information. However, refraction waveform tomography applied directly, without a preceding traveltime tomography, cannot resolve the large-scale velocity features missing from the starting model and easily converges to a local minimum.

 
Hessian analysis of tomographic full waveform inversion operators (pdf)

[SRC]
Ali Almomin and Biondo Biondi
Tomographic full waveform inversion (TFWI) provides a framework for inverting seismic data that is immune to cycle-skipping problems. This is achieved by extending the wave equation and adding a spatial or temporal axis to the velocity model. For computational efficiency, the inversion is performed in a nested scheme. We examine the linearized component of the nested inversion scheme and present alternative fitting goals that have different properties than the original formulation. We then compute the Hessian matrix of both formulations, as well as of their individual operators, to analyze the properties of each matrix. The analysis of the new formulation indicates improved convergence behavior of the inversion.
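
For orientation only (a generic expression, not the paper's specific operators): if the inner, linearized fitting goal is written as $J(\delta m) = \tfrac{1}{2}\lVert \mathbf{L}\,\delta m - \delta d\rVert^{2}$ for some linear operator $\mathbf{L}$, its Hessian is

\[
\mathbf{H} = \mathbf{L}^{\mathsf T}\mathbf{L},
\]

so comparing fitting goals amounts to comparing the spectra of the corresponding $\mathbf{L}^{\mathsf T}\mathbf{L}$ matrices, which is what governs the convergence of the inner iterations.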

 
Simultaneous time-lapse full waveform inversion (pdf)

[SRC]
Musa Maharramov and Biondo Biondi
We propose a technique for improving the robustness of time-lapse full waveform inversion by reducing the numerical artifacts that contaminate inverted model differences. More specifically, we demonstrate that simultaneously inverting for baseline and monitor models, in combination with a Tikhonov regularization applied to the model difference, can reduce acquisition-related repeatability issues and the spurious numerical artifacts arising in separate baseline and monitor inversions. We demonstrate our method on a synthetic model problem and describe a simplified "cross-updating" approach that can be applied to large-scale time-lapse industrial problems using existing FWI tools.
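
Schematically, and in my own notation rather than the paper's exact formulation, the simultaneous inversion minimizes a joint objective of the form

\[
J(m_{b}, m_{m}) = \tfrac{1}{2}\lVert F(m_{b}) - d_{b}\rVert^{2}
+ \tfrac{1}{2}\lVert F(m_{m}) - d_{m}\rVert^{2}
+ \tfrac{\alpha}{2}\lVert m_{m} - m_{b}\rVert^{2},
\]

where $F$ is the forward modeling operator, $d_{b}$ and $d_{m}$ are the baseline and monitor data, and the Tikhonov weight $\alpha$ damps artifacts in the inverted difference $m_{m} - m_{b}$.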

Imaging through inversion

Joint-LSRTM in practice with the Deimos ocean bottom field data set (pdf)

[SRC]
Mandy Wong, Biondo Biondi, and Shuki Ronen
We apply an adaptation of the least-squares reverse time migration (LSRTM) algorithm to the 3D Deimos ocean bottom field data set from the Gulf of Mexico. A simple data-fitting objective function may not be sufficient when applying LSRTM in practice, because the recorded field data depart from the theory and assumptions behind the LSRTM operator. To optimize the inversion with the field data set, we add Laplacian preconditioning, salt-dimming data weighting, extended-domain noise filtering, and regularization to the LSRTM algorithm. Results from the 3D Deimos ocean bottom field data set show an improvement when using joint LSRTM of primary and mirror signals over conventional imaging.
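
One schematic way to read that list of additions (generic notation; the exact operators used in the paper may differ): the simple data-fitting objective $\lVert \mathbf{L}\,m - d\rVert^{2}$ is replaced by something like

\[
J(m) = \tfrac{1}{2}\,\lVert \mathbf{W}\bigl(\mathbf{L}\,m - d\bigr)\rVert^{2}
+ \tfrac{\epsilon}{2}\,\lVert \mathbf{R}\,m\rVert^{2},
\]

with $\mathbf{L}$ the joint (primary plus mirror) Born modeling operator, $\mathbf{W}$ a data weight such as the salt-dimming weight, $\mathbf{R}$ a regularization or noise-attenuating operator, and the Laplacian acting as a preconditioner on the image $m$.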

 
Designing an object-oriented library for large scale iterative inversion (pdf)

[SRC]
Chris Leader and Robert Clapp
A flexible library that allows the user to apply a variety of geophysical imaging and inversion techniques and to leverage a selection of solvers can be a very powerful tool. However, constructing such a library to work in multiple dimensions and with a variety of options is a difficult task. The abstraction provided by object-oriented languages helps us to separate the geophysics from the solver, to use the same function calls for models of different dimensions, and to create a single framework that has the potential to apply a range of imaging or inversion methods on heterogeneous computing systems.
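
A minimal sketch of that separation (illustrative only; the class and method names below are invented for the example and are not the library's actual interface): the solver sees only abstract forward and adjoint calls, so the same conjugate-gradient code serves any physics and any model dimensionality.

    # Illustrative sketch of separating the solver from the geophysics.
    # Class and method names are hypothetical, not the library's real API.
    import numpy as np

    class Operator:
        """Abstract linear operator: the solver only knows forward/adjoint."""
        def forward(self, x):
            raise NotImplementedError
        def adjoint(self, y):
            raise NotImplementedError

    class Convolution1D(Operator):
        """Example 'physics': convolution of a reflectivity with a wavelet."""
        def __init__(self, wavelet, n):
            self.w, self.n = np.asarray(wavelet, float), n
        def forward(self, x):
            return np.convolve(x, self.w)[: self.n]
        def adjoint(self, y):
            padded = np.pad(y, (0, len(self.w) - 1))
            return np.correlate(padded, self.w, "valid")

    def cgls(op, data, niter=50):
        """Generic conjugate-gradient least squares; physics-agnostic."""
        x = np.zeros_like(op.adjoint(data))
        r = data - op.forward(x)
        s = op.adjoint(r)
        p = s.copy()
        for _ in range(niter):
            q = op.forward(p)
            alpha = (s @ s) / (q @ q + 1e-30)
            x = x + alpha * p
            r = r - alpha * q
            s_new = op.adjoint(r)
            beta = (s_new @ s_new) / (s @ s + 1e-30)
            p = s_new + beta * p
            s = s_new
        return x

    # Usage: deconvolve a synthetic trace with the generic solver.
    wavelet = [1.0, -0.5, 0.2]
    true = np.zeros(100)
    true[[20, 55, 80]] = [1.0, -0.7, 0.4]
    op = Convolution1D(wavelet, n=100)
    estimate = cgls(op, op.forward(true))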

Surface and passive seismic

Velocity dispersion at Long Beach (pdf)

[SRC]
Jason P. Chang
Previous studies from the dense Long Beach, California, seismic array have shown that both surface waves and P-waves can be recovered by seismic interferometry. In this report, I focus on constraining the apparent velocities of the wave types by performing tau-p transforms and generating dispersion images. From tau-p transforms, I find that the velocity of P-waves is approximately 2500 m/s. My dispersion analysis reveals that I am recovering the fundamental Rayleigh wave mode, as well as the first-order mode. Both observed Rayleigh-wave modes are dispersive, with the waves travelling faster at lower frequencies than at higher frequencies. I also find that the first-order mode travels at a greater velocity than the fundamental mode over the range of frequencies that I investigated.
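
For readers unfamiliar with the tool, a bare-bones slant-stack (tau-p) transform looks like the following sketch (my own illustration, not the processing actually used in the report); a linear arrival with velocity 2500 m/s focuses at slowness p = 1/2500 = 4e-4 s/m.

    # Illustrative tau-p (slant stack) transform: sum each trace along the
    # slanted path t = tau + p*x for a range of slownesses p.
    import numpy as np

    def taup_transform(data, x, dt, p_values):
        """data: (ntraces, nt) gather; x: offsets in m; p_values: slownesses in s/m."""
        ntraces, nt = data.shape
        t = np.arange(nt) * dt
        out = np.zeros((len(p_values), nt))
        for ip, p in enumerate(p_values):
            for itr in range(ntraces):
                # Sample the trace at times tau + p*x and stack into the (p, tau) panel.
                out[ip] += np.interp(t + p * x[itr], t, data[itr], left=0.0, right=0.0)
        return out

    # Synthetic gather: one linear event with apparent velocity 2500 m/s.
    dt, nt = 0.004, 500
    x = np.arange(0.0, 2000.0, 20.0)
    data = np.zeros((len(x), nt))
    for itr, xi in enumerate(x):
        data[itr, int(round((0.1 + xi / 2500.0) / dt))] = 1.0

    p_values = np.linspace(0.0, 1e-3, 101)
    panel = taup_transform(data, x, dt, p_values)
    ip_peak = np.unravel_index(panel.argmax(), panel.shape)[0]
    print("peak slowness:", p_values[ip_peak], "s/m")  # ~4e-4 s/m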


Scholte-wave azimuthal-anisotropic phase-velocity images in the near surface at Ekofisk from seismic noise correlations (pdf)

[SRC]
Sjoerd de Ridder, Dave Nichols, and Biondo Biondi
In this report we summarize work done on an ambient seismic noise recording made at the Ekofisk LoFS array. We first isolate the double-frequency microseism noise and synthesize virtual seismic sources by cross-correlation. A dispersion analysis shows that these sources contain fundamental-mode Scholte waves. Using eikonal tomography on the phase-delay times extracted from the unwrapped instantaneous phase, we construct maps of Scholte-wave phase velocities and elliptical anisotropy. A high-velocity anomaly is found in the center of the array, surrounded by a lower-velocity region. Under the southern end of the array we find higher velocities again. We retrieve azimuthal anisotropy that relates to the subsidence pattern.
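
Two standard relations summarize the mapping step (written here generically; the paper's exact parameterization may differ): the local phase slowness is the magnitude of the gradient of each virtual source's phase-delay surface, and the elliptical azimuthal anisotropy is the $2\psi$ variation of the resulting phase velocity,

\[
\frac{1}{c(\mathbf{x},\omega)} \approx \bigl|\nabla \tau(\mathbf{x},\omega)\bigr|,
\qquad
c(\mathbf{x},\psi) \approx c_{0}(\mathbf{x})\Bigl[1 + A(\mathbf{x})\cos 2\bigl(\psi - \psi_{0}(\mathbf{x})\bigr)\Bigr],
\]

where $\tau$ is the phase-delay time extracted from the unwrapped instantaneous phase, $\psi$ is the propagation azimuth, and $A$ and $\psi_{0}$ give the strength and fast direction of the anisotropy.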

 
Scholte-wave excitation (pdf)

[SRC]
Marine Denolle, Sjoerd de Ridder, Jason P. Chang, Eileen R. Martin, Taylor Dahlke, Humberto Arevalo-Lopez, Sr., and Stewart A. Levin
We estimate the excitation of the Scholte waves using a new formulation of the surface-wave eigenproblem. We adapt the Rayleigh-wave case for solid media to accommodate the fluid shear-free condition and successfully calculate the Scholte-wave excitation. We detail here the derivation and numerical implementation, along with preliminary results for simple fluid-over-solid cases. We verify our results by comparing our phase velocity dispersion curve to the numerical solution of the dispersion relation for a fluid layer above an elastic half-space.

Modeling and anisotropy

Equivalent accuracy at a fraction of the cost: Tackling spatial dispersion (pdf)

[SRC]
Huy Le, Robert G. Clapp, and Stewart A. Levin
To reduce numerical spatial dispersion and find an optimal set of finite difference coefficients for a given frequency bandwidth and a range of velocities, we minimize the weighted sum of the squared error between the finite difference operator and the continuous operator. We reformulate the optimization problem in terms of frequency and velocity, which allows us to weight our cost function according to the frequency content of our injected source and to the velocity distribution present in our model. We show that our method gives promising results on a constant velocity model and a constant-thickness, linearly-increasing velocity model. However, without selecting the appropriate portion of the domain on which we optimize, the error at mid-range frequencies may be increased as a trade-off for reducing the error at high frequencies. This problem has been noted in previous work but not emphasized strongly enough. In this paper, we show numerical examples demonstrating this critical point.
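
A stripped-down, one-dimensional sketch of the design idea (illustrative only: an unweighted fit over a fixed wavenumber band rather than the frequency- and velocity-weighted cost described above): choose symmetric second-derivative coefficients so that the stencil's spectral response matches the exact operator $-k^{2}$ in a least-squares sense.

    # Illustrative 1D least-squares design of second-derivative FD coefficients:
    # match the stencil's spectral response to -k^2 over a wavenumber band.
    import numpy as np

    h = 10.0                         # grid spacing (m)
    half_width = 4                   # symmetric 9-point stencil
    k_max = 0.8 * np.pi / h          # fit up to 80% of the Nyquist wavenumber
    k = np.linspace(1e-6, k_max, 400)

    # Spectral response of a symmetric stencil: (c0 + 2*sum_j cj*cos(j*k*h)) / h^2.
    A = np.empty((k.size, half_width + 1))
    A[:, 0] = 1.0
    for j in range(1, half_width + 1):
        A[:, j] = 2.0 * np.cos(j * k * h)
    A /= h ** 2

    coeffs, *_ = np.linalg.lstsq(A, -k ** 2, rcond=None)

    # Conventional Taylor-series (8th-order) coefficients for comparison.
    taylor = np.array([-205.0 / 72, 8.0 / 5, -1.0 / 5, 8.0 / 315, -1.0 / 560])
    print("max in-band error, least-squares:", np.abs(A @ coeffs + k ** 2).max())
    print("max in-band error, Taylor       :", np.abs(A @ taylor + k ** 2).max())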

 
Equivalent accuracy at a fraction of the cost: Overcoming temporal dispersion (pdf)

[SRC]
Yunyue (Elita) Li, Mandy Wong, and Robert Clapp
Numerical dispersion in finite difference modeling produces coherent artifacts, severely constraining the resolution of advanced imaging and inversion schemes. Conventionally, we deal with this by increasing the order of accuracy of the finite difference operators and resign ourselves to paying the high computational cost that this incurs. But is there a way to reduce such dispersion without increasing the cost or, conversely, to decrease the cost without increasing numerical dispersion? To tackle this, we separate the finite difference numerical dispersion into pure time and pure space dispersion and address them independently. In this article, we focus on time dispersion. We show that finite difference time dispersion is virtually independent of the medium velocity and the spatial grid used for propagation, and depends only on the time-stepping scheme and the propagation time. Based on this, we devise post-propagation filtering to collapse the time dispersion effect of the finite difference modeling. Our dispersion-correction filters are designed by comparing the input waveform with the dispersive waveforms obtained by 1D propagation of that waveform forward in time. These filters are then applied to multi-dimensional shot records to eliminate the time dispersion by one of two schemes: (1) stationary filtering plus interpolation, or (2) non-stationary filtering. We show with both 1D and 2D examples that the time dispersion is effectively removed by our post-propagation filtering at nearly no additional cost.
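
The claimed independence from velocity and grid can be made plausible with the standard symbol of second-order time stepping (a generic observation in my notation, not the paper's derivation): acting on $e^{-i\omega t}$, the centered difference replaces the exact second derivative according to

\[
-\,\omega^{2} \;\longrightarrow\; -\,\frac{4}{\Delta t^{2}}\,\sin^{2}\!\left(\frac{\omega\,\Delta t}{2}\right),
\]

a substitution that involves only the time step $\Delta t$; the resulting phase error therefore accumulates with propagation time but is the same for every velocity and every spatial grid, which is why a single post-propagation filter in time can undo it.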

 
Stochastic rock physics modeling for seismic anisotropy with two different shale models (pdf)

[SRC]
Yunyue (Elita) Li, Dave Nichols, and Gary Mavko
We study the topography of the DSO objective function with respect to the error in the anisotropic parameters. The flat bottom of this topography suggests that the inversion may stop anywhere in the epsilon-delta space. To stabilize the inversion, we need an a priori anisotropic model to precondition the model space. In this paper, we build the anisotropic prior model using deterministic and stochastic rock physics modeling for sandy-shale anisotropy. We investigate two different methodologies for combining sand (quartz) and shale (clay): a suspension model and a lamination model. An anisotropic differential effective-medium model is used to model the quartz suspension, and the Backus average is used to model the sand/shale lamination. The modeling results from both methodologies show greater differences for delta than for epsilon. By taking compaction and mineral transition into account, we then perform more realistic modeling at a well location where the shale content and porosity are available from well log measurements. Both the deterministic and the stochastic modeling results from these two approaches have similar trends but different spans over the epsilon-delta space. The combined distribution will provide looser constraints on the anisotropic parameter estimation.

 
Wave equation migration velocity analysis for VTI models (pdf)

[SRC]
Yunyue Li, Biondo Biondi, Robert Clapp, and Dave Nichols
Anisotropic models are recognized as more realistic representations of the subsurface where a complex geological environment exists, and they are widely needed by many migration and interpretation schemes. In this paper, we extend the theory of wave equation migration velocity analysis (WEMVA) to build vertical transverse isotropic (VTI) models. Because of the ambiguity between depth and delta, we assume that delta can be accurately obtained from other sources of information, and we invert for the NMO slowness and the anellipticity parameter eta. We use a differential semblance optimization (DSO) objective function to evaluate the focusing of the prestack image in the subsurface-offset domain. To regularize the multi-parameter inversion, we build a framework that incorporates geological and rock-physics information to guide the updates in both the NMO slowness and the anisotropic parameter eta. This regularization step is crucial to stabilize the inversion and to produce geologically meaningful results. We test the proposed approach on a 2-D Gulf of Mexico dataset, starting with a fairly good initial anisotropic model. The inversion result reveals a shallow anomaly collocated in NMO velocity and eta, and it improves both the continuity and the resolution of the final stacked image.
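
The focusing measure referred to above can be written schematically (generic form; the paper's exact weighting and penalties may differ) as

\[
J_{\mathrm{DSO}}(m) = \tfrac{1}{2}\,\bigl\lVert\, h \, I(\mathbf{x}, h; m) \,\bigr\rVert^{2},
\]

where $I(\mathbf{x},h;m)$ is the prestack image as a function of subsurface offset $h$: a kinematically correct pair of NMO slowness and eta focuses the image at $h = 0$ and drives the penalty toward zero.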

Image segmentation

Integrated seismic image segmentation and efficient model evaluation: field data example (pdf)

[SRC]
Adam Halpert
Salt interpretation and model building in areas with complicated salt geology represent significant bottlenecks during large iterative imaging projects. Automated tools like image segmentation can help interpreters quickly identify salt bodies in 3D seismic volumes, reducing the need for time-consuming manual picking. In addition, a scheme to efficiently test multiple possible models without fully re-migrating the dataset is useful when more than one salt scenario is in play. Here, a 3D field data example demonstrates that a combination of these two computational interpretation tools can effectively generate and test alternative models. Re-migration with a preferred velocity model produces an improved subsalt image.

 
Distance Regularized Level Set Salt Body Segmentation (pdf)

[SRC]
Taylor Dahlke
Segmentation of seismic images using a Distance Regularized Level Set Evolution (DRLSE) scheme maintains the numerical stability of the implicit surface without the expense and accuracy issues associated with reinitialization approaches. In this work I apply the DRLSE algorithm to the Sigsbee salt model as well as to an offshore salt data set. I then apply a modified energy functional that includes a Frobenius-norm term, which further improves the segmentation results. These applications of DRLSE demonstrate promising results using a very simplified energy functional.
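
For reference, a distance-regularized level-set energy has the general form (schematic, in my notation)

\[
\mathcal{E}(\phi) = \mu \int_{\Omega} p\bigl(\lvert\nabla\phi\rvert\bigr)\,d\mathbf{x}
\;+\; \mathcal{E}_{\mathrm{ext}}(\phi),
\]

where the first term, built from a potential $p$ with a minimum at $\lvert\nabla\phi\rvert = 1$, keeps the level-set function $\phi$ close to a signed-distance function without explicit reinitialization, and $\mathcal{E}_{\mathrm{ext}}$ is the image-driven external energy; the Frobenius-norm term mentioned above augments this kind of functional.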

Author(s)
A. Almomin
H. Arevalo-Lopez
B. Biondi
J. Chang
J. Claerbout
R. Clapp
T. Dahlke
M. Denolle
A. Guitton
A. Halpert
H. Le
C. Leader
S. Levin
Y. Li
M. Maharramov
E. Martin
G. Mavko
D. Nichols
S. de Ridder
S. Ronen
X. Shen
M. Wong
Publication Date
October, 2013