Merit function

The merit function estimates the closeness between the target and current spectral characteristics. In the simplest case, when the target reflectance \(\hat{R}(\lambda_j)\) is specified on the wavelength grid \(\{\lambda_j\}\):

\[MF=\left[\frac 1L \sum\limits_{j=1}^{L}\left(\frac{R(X,\lambda_j)-\hat{R}(\lambda_j)}{\Delta R_j}\right)^2\right]^{1/2} \]

where \(\Delta R_j\) are tolerances and \(X\) is the vector of layer thicknesses.
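The formula above can be sketched directly in code. This is a minimal illustration, not the software's actual implementation; the arrays stand in for a reflectance \(R(X,\lambda_j)\) that would come from a thin-film calculation:

```python
import numpy as np

def merit_function(current_R, target_R, tolerances):
    """Root-mean-square of tolerance-weighted reflectance deviations,
    as in MF = sqrt( (1/L) * sum( ((R - R_hat) / dR)^2 ) )."""
    residuals = (current_R - target_R) / tolerances
    return np.sqrt(np.mean(residuals ** 2))

# Toy usage (reflectance values in percent, equal tolerances):
current_R = np.array([10.0, 12.0, 9.0])
target_R = np.array([10.0, 10.0, 10.0])
tolerances = np.array([1.0, 1.0, 1.0])
print(merit_function(current_R, target_R, tolerances))
```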

Optimization methods

Hyper Newton method is a modification of the Newton method based on ideas from the Damped Least Squares method. It requires considerable computer memory but converges rapidly.

Modified damped least squares (Modified DLS) is a powerful method which offers rapid convergence at the early stages of the refinement procedure, but may converge slowly near the end of the refinement procedure, especially in the case of complicated designs with many design layers.

Newton method is a 2nd order optimization method which uses full analytic second order derivatives of the merit function.  It is especially well suited for problems with a large number of optimization parameters and poor convergence of the merit function. Newton's second order method is one of the most powerful optimization methods, but not necessarily the most rapidly converging at the initial stages of the refinement procedure. It is usually a good choice at the end of the refinement procedure.

Quasi-Newton DLS method utilizes full information on the partial Jacobian matrices for a merit function that is a sum of squares. It converges rapidly, but is not available for problems with targets having qualifiers.

Sequential QP method is based on sequential approximations of the optimization problem by a set of Quadratic Programming (QP) problems. It has good convergence and can be recommended for complicated problems.

Conjugate gradients method is a well-known and widely used method.  It is a stable method which typically exhibits faster convergence than the Steepest descent method.

Steepest descent method is the simplest and historically the first to be applied to optimization problems. This method converges quite slowly compared to the other methods.
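A damped-least-squares-style refinement can be sketched with SciPy's Levenberg-Marquardt solver, which applies the same damping idea the DLS methods above rely on. This is an illustration only: the placeholder `model_R` is not real thin-film optics, and in an actual refinement the residuals would be the tolerance-weighted deviations \((R(X,\lambda_j)-\hat{R}(\lambda_j))/\Delta R_j\):

```python
import numpy as np
from scipy.optimize import least_squares

lambdas = np.linspace(400.0, 700.0, 7)   # nm, wavelength grid
target = 10.0 + 0.01 * lambdas           # placeholder target "reflectance"

def model_R(X, lam):
    # Placeholder spectral model standing in for R(X, lambda)
    return X[0] + X[1] * lam + X[2] * np.sin(lam / 50.0)

def residuals(X):
    # In real refinement: (R(X, lambda_j) - target_j) / tolerance_j
    return model_R(X, lambdas) - target

# Levenberg-Marquardt: Gauss-Newton steps with adaptive damping
fit = least_squares(residuals, x0=[0.0, 0.0, 0.0], method="lm")
print(fit.x)
```

Since the placeholder target is reproduced exactly at \(X = (10,\, 0.01,\, 0)\), the refinement drives the merit function essentially to zero.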

Degree of bulk inhomogeneity \(\delta\) 

\( \delta=\displaystyle\frac{n_i-n_0}{n}\cdot 100\%, \;\; n=\frac{n_i+n_0}{2} \)

where \(n_i\) and \(n_0\) are the refractive indices at the outer boundary and at the substrate boundary, respectively.
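The definition translates into a one-line computation; the index values below are illustrative only:

```python
def bulk_inhomogeneity(n_outer, n_substrate_side):
    """delta = (n_i - n_0) / n * 100%, with n the mean of the two indices."""
    n_mean = 0.5 * (n_outer + n_substrate_side)
    return (n_outer - n_substrate_side) / n_mean * 100.0

# Toy example: index slightly higher at the outer boundary
print(bulk_inhomogeneity(2.10, 2.05))
```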

Admittance \(A\)

\(\displaystyle\frac{dA}{dz}=ik\left[n^2(z)-A^2\right], \)

\(r(k)=\displaystyle\frac{1-A(z_a,k)}{1+A(z_a,k)},\;\; A(0,k)=n_s \)

where \(r(k)\) is the amplitude reflectance, \(n(z)\) is the refractive index profile, and \(k\) is the wavenumber.


Admittance is a complex quantity. Geometrically, it can be represented as a point in the complex plane (the admittance phase plane). For more details see, for example, this book, pages 34-46.
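The admittance equation above can be integrated numerically from the substrate boundary \(A(0,k)=n_s\) to the outer boundary \(z_a\). The sketch below uses a plain RK4 stepper (the function and parameter names are illustrative, not from the software); for a uniform medium with \(n(z)=n_s\) the admittance stays at \(n_s\) and \(r\) reduces to the Fresnel value \((1-n_s)/(1+n_s)\):

```python
import numpy as np

def amplitude_reflectance(n_profile, z_a, n_s, k, steps=2000):
    """Integrate dA/dz = i k (n(z)^2 - A^2) from A(0) = n_s to z = z_a,
    then return r(k) = (1 - A(z_a)) / (1 + A(z_a))."""
    h = z_a / steps
    A = complex(n_s)
    f = lambda z, A: 1j * k * (n_profile(z) ** 2 - A ** 2)
    z = 0.0
    for _ in range(steps):  # classical 4th-order Runge-Kutta
        k1 = f(z, A)
        k2 = f(z + h / 2, A + h / 2 * k1)
        k3 = f(z + h / 2, A + h / 2 * k2)
        k4 = f(z + h, A + h * k3)
        A += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        z += h
    return (1 - A) / (1 + A)

# Sanity check: uniform n(z) = n_s recovers the bare-substrate Fresnel r
r = amplitude_reflectance(lambda z: 1.52, z_a=500e-9, n_s=1.52,
                          k=2 * np.pi / 550e-9)
print(abs(r - (1 - 1.52) / (1 + 1.52)))
```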

Group delay (GD) and group delay dispersion (GDD)

GD and GDD are, respectively, the negative first and second derivatives of the total phase shift \(\varphi\) with respect to the angular frequency \(\omega\):

\(GD=-\displaystyle\frac{d\varphi}{d\omega},\;\;\; GDD=-\frac{d^2\varphi}{d\omega^2} \)
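When the phase is known only on a frequency grid, GD and GDD can be estimated by finite differences. The sketch below uses a synthetic quadratic phase (not real coating data) so the expected results are known exactly: \(GD = a + b\omega\) and \(GDD = b\):

```python
import numpy as np

omega = np.linspace(2.0e15, 3.0e15, 1001)   # angular frequency grid (rad/s)

# Synthetic phase phi = -a*omega - (b/2)*omega^2 with a = 1e-15 s, b = 2e-30 s^2
a, b = 1.0e-15, 2.0e-30
phi = -a * omega - 0.5 * b * omega ** 2

GD = -np.gradient(phi, omega)     # GD  = -dphi/domega
GDD = np.gradient(GD, omega)      # GDD = -d^2 phi/domega^2 = dGD/domega
```

Central differences are exact for a quadratic phase, so `GD` equals \(a + b\omega\) and `GDD` is the constant \(b = 2\times10^{-30}\,\mathrm{s^2}\) everywhere on the grid.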
