We’re going to introduce the notions of the standard error rectangle and standard error ellipse using some examples with which you’re familiar. Although not strictly necessary, I thought it would be useful here to draw upon, and remind you of, much of the work we’ve done so far in the course, e.g. on problem setup, functional modeling, linearization, and estimation. The following examples do this and thereby provide a quick tour of most of our “big picture” view (with the exception of the pre-processing and pre-adjustment steps, which we won’t include here).

Example 1

Recall our simple case of estimating the coordinates of an unknown point B by measuring its distance from “known” point A. In the following I’m going to review that example from the top and take it right through beyond estimation to begin our treatment of the errors.

[Figure: example-for-standard-error-and-ellipse-1]

The situation

We interpret the situation as follows.

The unknown parameters – what we’re trying to estimate – are:

    \begin{equation*} \mathbf{x}= \begin{bmatrix} E_B \\ N_B \end{bmatrix} \end{equation*}

and the measurements are:

    \begin{equation*} \mathbf{l}_{measured}= \begin{bmatrix} d_{AB,1} \\ d_{AB,2} \\ \vdots \\ d_{AB,n} \end{bmatrix} \end{equation*}

and u=2, with r=n, the number of measurements (one equation per measurement).

The linearized functional model

We also know that the general functional model \mathbf{l}_{true}-\mathbf{F}(\mathbf{x})=\mathbf{0} is of the following form for each measurement:

    \begin{equation*} l_{true}-\sqrt{(E_B-E_A)^2+(N_B-N_A)^2} = 0 \end{equation*}

which has the linearized form \mathbf{A}\boldsymbol{\delta}-\mathbf{e}+\mathbf{w}=\mathbf{0}, where:

    \begin{equation*} \boldsymbol{\delta}= \begin{bmatrix} \delta E_B \\ \delta N_B \end{bmatrix} \end{equation*}

and:

    \begin{equation*} \underset{n\times u}{\mathbf{A}}=\left.\frac{d\mathbf{F}}{d\mathbf{x}}\right|_{\mathbf{x}^0} = \begin{bmatrix} -\dfrac{E_B^0-E_A}{d_{AB}^0} & -\dfrac{N_B^0-N_A}{d_{AB}^0} \end{bmatrix}_{\text{one row for each of the }n\text{ measurements}} \end{equation*}

and:

    \begin{equation*} \underset{n\times 1}{\mathbf{w}}=\mathbf{F}(\mathbf{x}^0, \mathbf{l}_{measured}) = \begin{bmatrix} l_{measured}-d_{AB}^0 \end{bmatrix}_{\text{one row for each of the }n\text{ measurements}} \end{equation*}

Or, we can write the full \mathbf{A}\boldsymbol{\delta}-\mathbf{e}+\mathbf{w}=\mathbf{0} as:

    \begin{equation*} \begin{bmatrix} -\dfrac{E_B^0-E_A}{d_{AB}^0} & -\dfrac{N_B^0-N_A}{d_{AB}^0} \end{bmatrix} \begin{bmatrix} \delta E_B \\ \delta N_B \end{bmatrix} - e_{AB} + l_{measured} - d_{AB}^0 = 0 \end{equation*}

(If you need a review of what we’ve done here so far then head back here and here.)
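To make this concrete, here’s a minimal numerical sketch (in Python, with numpy) of building \mathbf{A} and \mathbf{w} for Example 1. All of the coordinates and measured distances below are invented purely for illustration:

    import numpy as np

    # Invented values for illustration only
    E_A, N_A = 1000.000, 1000.000       # "known" coordinates of A (m)
    E_B0, N_B0 = 1400.000, 1300.000     # approximate coordinates of B (m)
    l_measured = np.array([499.987, 500.004, 499.995])  # n = 3 distances (m)

    # Approximate distance d_AB^0 computed from the approximate coordinates
    d0 = np.hypot(E_B0 - E_A, N_B0 - N_A)

    # Design matrix A (n x 2): the same row repeated for each distance
    n = len(l_measured)
    A = np.tile([-(E_B0 - E_A) / d0, -(N_B0 - N_A) / d0], (n, 1))

    # Misclosure vector w (n x 1): l_measured - d_AB^0 for each measurement
    w = l_measured - d0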

A reasonable stochastic model

Next we’d need a stochastic model of the observations, to use as input to the estimation process. As we discussed in some detail earlier in this mini course, the corresponding variance-covariance matrix for such a model has the following form for two measurements i and j:

    \begin{equation*} \mathbf{C}_\mathbf{l}= \begin{bmatrix} \sigma_i^2 & \sigma_{ij} \\ \sigma_{ji} & \sigma_j^2 \end{bmatrix} \end{equation*}

The values we use here can come about in many ways, something we’ll consider in more detail later in the course. In the meantime I remind you of Lab 3 in which we saw an example with the following simplified stochastic model of the observations:

    \begin{equation*} \sigma_{l} = \pm\left(4\text{ mm} + 2\text{ ppm}\right) \end{equation*}
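Continuing the sketch above, and assuming the distance measurements are uncorrelated (so that the off-diagonal covariances are zero), such a model might be applied as follows:

    # Standard deviation per observation: 4 mm constant part plus 2 ppm
    # of the measured distance (in metres)
    sigma_l = 0.004 + 2e-6 * l_measured

    # Assuming uncorrelated observations, C_l is diagonal
    C_l = np.diag(sigma_l**2)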

And I remind you that you can estimate the parameters of a stochastic model yourself from wisely collected samples, as we saw here.

Estimating the parameters

Next we’d carry out the estimation using the following equations that we reviewed here:

    \begin{equation*} \underset{u\times 1}{\hat{\mathbf{x}}}=\mathbf{x}^0+\hat{\boldsymbol{\delta}} \end{equation*}

    \begin{equation*} \underset{u\times 1}{\hat{\boldsymbol{\delta}}} = -(\mathbf{A}^T\mathbf{C}_{\mathbf{l}}^{-1}\mathbf{A})^{-1}\mathbf{A}^T\mathbf{C}_{\mathbf{l}}^{-1}\mathbf{w} \end{equation*}

    \begin{equation*} \underset{u\times u}{\mathbf{C}_{\hat{\mathbf{x}}}} =\mathbf{C}_{\hat{\boldsymbol{\delta}}} =(\mathbf{A}^T\mathbf{C}_{\mathbf{l}}^{-1}\mathbf{A})^{-1} \end{equation*}
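If you’d like to see this step in code, here’s a sketch that wraps the equations above in a function. One caveat worth flagging: \mathbf{A}^T\mathbf{C}_{\mathbf{l}}^{-1}\mathbf{A} needs to be invertible for the solve to succeed, and with repeated measurements of a single distance the rows of \mathbf{A} are identical, so in practice you’d want the mixed observation types of Example 2 (or additional known stations) to make the normal matrix well-conditioned:

    def estimate(A, C_l, w, x0):
        """One parametric least squares step: returns x_hat and C_x_hat.

        Requires A to have full column rank, i.e. the normal matrix
        A^T C_l^-1 A must be invertible.
        """
        C_l_inv = np.linalg.inv(C_l)
        N = A.T @ C_l_inv @ A                   # normal matrix (u x u)
        delta_hat = -np.linalg.solve(N, A.T @ C_l_inv @ w)
        x_hat = x0 + delta_hat                  # x_hat = x^0 + delta_hat
        C_x_hat = np.linalg.inv(N)              # (A^T C_l^-1 A)^-1
        return x_hat, C_x_hat

In a real adjustment you’d then replace the approximate coordinates with \hat{\mathbf{x}} and iterate until the corrections in \hat{\boldsymbol{\delta}} become negligible.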

 

If you can get to this point you’re ready to take a look at the resulting standard error rectangle and error ellipse for the estimated point coordinates. I’ll do this below.

Note

I think it’s worth noting that what we’ve done so far is all you need to do a network preanalysis. It’s pretty powerful that with some approximate coordinates and the right up-front modeling and analysis – like I’ve done in this example so far – it’s possible to get an estimate of how “good” your estimated parameters are going to be. This allows you to analyze different network configurations in the office before going into the field and, in turn, figure out which will best meet the desired specifications. This is the subject of Lab 3.
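You can see why this works in the equations above: \mathbf{C}_{\hat{\mathbf{x}}}=(\mathbf{A}^T\mathbf{C}_{\mathbf{l}}^{-1}\mathbf{A})^{-1} depends only on the design matrix (i.e. on the approximate geometry) and on the stochastic model, and not on any measured values. So a preanalysis sketch needs just those two inputs:

    def preanalysis(A, C_l):
        """Predicted covariance of the estimated parameters for a proposed
        network design: needs only the geometry (A) and the stochastic
        model (C_l), not any field measurements.
        """
        return np.linalg.inv(A.T @ np.linalg.inv(C_l) @ A)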

Example 2

Can you repeat the above to add a second type of observation? For example, what if I told you we were also going to measure the angle between a second known point and our unknown point, as shown below?

[Figure: example-for-standard-error-and-ellipse-3]

This should be pretty straightforward for you now, given what we did earlier, so I will leave it to you.

The standard point error rectangle

The variance-covariance matrix we would get out of the estimation process considered in the above examples can tell us an awful lot about the “goodness” of our estimated parameters. Let’s consider first what it is:

    \begin{equation*} \mathbf{C}_{\hat{\mathbf{x}}} = \begin{bmatrix} \sigma_{E_B}^2 & \sigma_{E_BN_B} \\ \sigma_{N_BE_B} & \sigma_{N_B}^2 \end{bmatrix} \end{equation*}

This alone pretty much defines the standard error rectangle, which is depicted below and has dimensions 2\sigma_E \times 2\sigma_N:

[Figure: example-for-standard-error-and-ellipse-4]
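Given a covariance matrix from the estimation (or preanalysis) step above, reading off the rectangle takes only a couple of lines of code. The numbers here are invented just to have something to compute with:

    # Hypothetical covariance matrix, in m^2, ordered [E, N]
    C_x_hat = np.array([[0.006**2, 1.2e-5],
                        [1.2e-5,   0.009**2]])

    sigma_E = np.sqrt(C_x_hat[0, 0])   # standard error in easting (m)
    sigma_N = np.sqrt(C_x_hat[1, 1])   # standard error in northing (m)

    # The standard error rectangle is centred on the estimated point,
    # 2*sigma_E wide (east-west) and 2*sigma_N tall (north-south).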

The standard point error ellipse

The dimensions of the error rectangle are not a true representation of the error present at a point of interest. For example, the largest uncertainty is generally not in either of the cardinal (north or east) directions. In the case of Example 1 above, where the position is based solely on distance measurements from point A, the maximum error would be on the line between A and B. But more generally, as in the case of Example 2 above, the maximum error lies in a direction determined by the geometry and the input covariances of the situation at hand.

This gives rise to the notion of a standard point error ellipse of the type depicted here:

[Figure: example-for-standard-error-and-ellipse-5]

where we refer to a as the semi-major axis of the error ellipse, b as the semi-minor axis of the error ellipse, and \beta as the orientation of the semi-major axis.

You should think of the semi-major axis, a, as the maximum error, \sigma_{max}, and the semi-minor axis, b, as the minimum error, \sigma_{min}.

I share the following closer look in case it’s helpful too:

[Figure: example-for-standard-error-and-ellipse-6]

You can calculate the parameters of the error ellipse from the variance-covariance matrix \mathbf{C}_{\hat{\mathbf{x}}} as follows:

    \begin{equation*} \tan(2\beta)=\dfrac{2\sigma_{EN}}{\sigma_N^2-\sigma_E^2} \end{equation*}

    \begin{equation*} a^2=\sigma_{max}^2=\dfrac{1}{2}\left[\sigma_E^2+\sigma_N^2+\sqrt{(\sigma_E^2-\sigma_N^2)^2+4\sigma_{EN}^2}\right] \end{equation*}

    \begin{equation*} b^2=\sigma_{min}^2=\dfrac{1}{2}\left[\sigma_E^2+\sigma_N^2-\sqrt{(\sigma_E^2-\sigma_N^2)^2+4\sigma_{EN}^2}\right] \end{equation*}
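Here’s a sketch of these three equations in code (using arctan2 rather than a plain arctangent so that the orientation lands in the correct quadrant):

    import numpy as np

    def error_ellipse(C_x_hat):
        """Standard error ellipse from a 2x2 covariance matrix ordered [E, N].

        Returns (a, b, beta): the semi-major and semi-minor axes and the
        orientation angle beta in radians.
        """
        s_E2, s_N2 = C_x_hat[0, 0], C_x_hat[1, 1]
        s_EN = C_x_hat[0, 1]
        beta = 0.5 * np.arctan2(2.0 * s_EN, s_N2 - s_E2)
        root = np.sqrt((s_E2 - s_N2) ** 2 + 4.0 * s_EN ** 2)
        a = np.sqrt(0.5 * (s_E2 + s_N2 + root))   # sigma_max
        b = np.sqrt(0.5 * (s_E2 + s_N2 - root))   # sigma_min
        return a, b, beta

As a cross-check, a^2 and b^2 are exactly the eigenvalues of \mathbf{C}_{\hat{\mathbf{x}}}, so np.linalg.eigvalsh(C_x_hat) should reproduce \sigma_{min}^2 and \sigma_{max}^2.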

Notes

Here we looked at the standard cases. This means that they represent the 1-\sigma situation, or the 68.3% and 39.4% confidence levels in the 1D and 2D cases, respectively. Multiplying factors are needed to get to other confidence levels, as we will see in the next lesson (link TBA).

Also, our examples here led us to the concepts and equations for the point error rectangle and point error ellipse. As you can see from the setup, these express the error in an estimate of a single station or point of interest. The situation is slightly different if we’re after similar measures of accuracy for a position difference. Look at the self-assessment questions below for more on what we call the relative error ellipse.