which fits a parabola through the old point ($\tau=0$), the full step ($\tau=1$) and a test step ($\tau=0.3$), and obtains an optimum line-search parameter of about 0.6-0.65.
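The parabolic fit can be sketched in a few lines of NumPy (the helper name and the clipping bounds are illustrative choices, not pyGIMLi API):

```python
import numpy as np

def parabolic_line_search(phi, tau_test=0.3):
    """Fit a parabola through phi(0), phi(tau_test) and phi(1) and
    return the parameter tau minimizing it, clipped to [0.1, 1]."""
    taus = np.array([0.0, tau_test, 1.0])
    phis = np.array([phi(t) for t in taus])
    a, b, _ = np.polyfit(taus, phis, 2)  # phi(tau) ~ a*tau**2 + b*tau + c
    if a <= 0:  # parabola opens downward: no interior minimum, take full step
        return 1.0
    return float(np.clip(-b / (2 * a), 0.1, 1.0))

# toy objective function with its minimum at tau = 0.62
tau = parabolic_line_search(lambda t: (t - 0.62)**2 + 1.0)
```

For an exactly quadratic misfit the recovered $\tau$ is exact; for real objective functions it is only an estimate, which is why values around 0.6-0.65 appear above.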
As line-search method, we can also use `'exact'` (forward calculations) or `'inter'` (interpolation), yielding almost the same results.
The latter is the simplest one and the former takes the most effort.
In total, the chi-square misfit, computed as $\Phi_d/N$, decreases, but only slowly.
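For illustration, $\Phi_d$ and the chi-square value can be computed by hand for a small synthetic example (all numbers here are made up):

```python
import numpy as np

d = np.array([1.00, 2.10, 2.95, 4.20])  # measured data
f = np.array([1.05, 2.00, 3.05, 4.00])  # model response
err = 0.05 * np.abs(d)                  # assumed 5 % relative data error

# error-weighted data objective function and chi-square misfit
phi_d = np.sum(((d - f) / err)**2)
chi2 = phi_d / len(d)  # chi2 = 1 means fitting the data within noise
```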
Because gradient-based minimization converges much more slowly, we switch to a Gauss-Newton framework.
After initialization, we set the model transformations as strings (we can also create transformation instances).
We can choose `lin`, `log`, `logL` (L being the lower bound), `logL-U` (two bounds), `cotL-U` or `symlogT` (T being the linear threshold).
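pyGIMLi provides these as transformation classes; the idea behind the two-bound `logL-U` variant can be sketched in plain NumPy (the function names are made up for illustration):

```python
import numpy as np

def trans_log_lu(m, lower, upper):
    """Forward 'logL-U' transform mapping (lower, upper) to (-inf, inf)."""
    return np.log(m - lower) - np.log(upper - m)

def trans_log_lu_inv(y, lower, upper):
    """Inverse transform; its result always stays within (lower, upper)."""
    return (upper * np.exp(y) + lower) / (np.exp(y) + 1.0)

m = np.array([15.0, 50.0, 99.0])               # model values in (10, 100)
y = trans_log_lu(m, lower=10.0, upper=100.0)   # unconstrained values
m_back = trans_log_lu_inv(y, lower=10.0, upper=100.0)
```

The inversion then operates on the unconstrained values, so any model update automatically respects the bounds after back-transformation.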
The model roughness vector (including model transformation and weighting) can be accessed by `inv.roughness()`.
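What such a roughness vector contains can be illustrated for the `log` transformation with a simple first-difference operator (unit weights assumed; this is a sketch, not the library internals):

```python
import numpy as np

m = np.array([100.0, 120.0, 300.0, 310.0])  # four-layer model

# first-difference (roughness) operator C for a 1D model
n = len(m)
C = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

# roughness of the log-transformed model
roughness = C @ np.log(m)
```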
```{code-cell}
from pygimli.frameworks.inversion import GaussNewtonInversion

inv = GaussNewtonInversion(fop=fop)
inv.modelTrans = 'log'  # already the default
inv.dataTrans = 'log'  # the default is linear
```
Like the transformations, there are many options that can be set directly on the inversion instance:

- `fop` - the forward operator
- `robustData`, `blockyModel` - use an L1 norm for the data misfit and the model roughness, respectively
- `verbose` - print information during the inversion
- `model` - the current model
- `response` - the model response
- `dataVals`, `errorVals` - the data and error vectors
Most of them can also be passed to the inversion run, which is often more convenient:

- `maxIter` - the maximum iteration number
- `lam` - the overall regularization strength
- `zWeight` - the vertical-to-horizontal regularization ratio (for 2D/3D problems)
- `startModel` - the starting model as float or array
- `relativeError` and `absoluteError` - define the error model
- `limits` - list of lower and upper parameter limits (overriding `inv.modelTrans`)
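The interplay of `relativeError` and `absoluteError` follows the usual error model $\epsilon_i = \mathrm{rel}\cdot|d_i| + \mathrm{abs}$, sketched here with made-up numbers (check the pyGIMLi documentation for the exact convention):

```python
import numpy as np

d = np.array([1.0, 2.0, 4.0, 8.0])  # hypothetical data vector

relativeError = 0.03  # 3 % of the data value
absoluteError = 0.01  # constant floor in data units

# combined error estimate and resulting data weighting matrix W_d
err = relativeError * np.abs(d) + absoluteError
Wd = np.diag(1.0 / err)
```

Large data values are thus dominated by the relative part, small ones by the absolute floor.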
After running the inversion
```{code-cell}
model = inv.run(data, relativeError=0.03, verbose=True)
```
we observe that the data are fitted within the noise level in very few iterations.
The chi-square value can be accessed by `inv.chi2()`; its convergence is stored in
`inv.chi2History`. The data, model and total objective function values can be retrieved
by `inv.phiData()`, `inv.phiModel()` and `inv.phi()`. By default, the current model and
its response are used; alternatively, you can pass `model=` to `phiModel()` or `phi()`
and `response=` to `phiData()` and `phi()`.
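The relation between these quantities can be mimicked in plain NumPy (a simple damping term stands in for pyGIMLi's roughness-based model objective here):

```python
import numpy as np

def phi_data(d, f, err):
    """Error-weighted data objective, cf. inv.phiData()."""
    return float(np.sum(((d - f) / err)**2))

def phi_model(m, m_ref):
    """Toy model objective: damping against a reference model."""
    return float(np.sum((m - m_ref)**2))

d = np.array([1.0, 2.0, 3.0])
f = np.array([1.0, 2.1, 2.9])
err = 0.1 * np.abs(d)
m, m_ref, lam = np.array([10.0, 12.0]), np.array([11.0, 11.0]), 20.0

phi_total = phi_data(d, f, err) + lam * phi_model(m, m_ref)  # total objective
chi2 = phi_data(d, f, err) / len(d)                          # chi-square value
```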
An important measure of the data fit is the chi-square value