Graph Optimization 2: Modelling the Optimization Target
Graph optimization papers and tutorials usually start by stating the optimization problem: minimize the following function:

$$F(\mathbf{x}) = \sum_{\langle i,j \rangle} \mathbf{e}_{ij}^T \mathbf{\Omega}_{ij} \mathbf{e}_{ij}$$

This article explains where this objective comes from and why it is related to Gaussian noise.
Maximum Likelihood Estimation with Gaussian noise
The probabilistic modelling of graph optimization is based on Maximum Likelihood Estimation (MLE).
The case we use for this article is the same one we used in the last article.

What we want to estimate are:

- $\mathbf{x}_1$: camera pose 1
- $\mathbf{x}_2$: camera pose 2
- $\mathbf{p}$: a 3D landmark

The 3D landmark $\mathbf{p}$ can be seen by the cameras at poses $\mathbf{x}_1$ and $\mathbf{x}_2$; the projections of the landmark onto both cameras are our measurements:

- $\mathbf{z}_1$: the 2D projection of $\mathbf{p}$ on the camera at pose $\mathbf{x}_1$
- $\mathbf{z}_2$: the 2D projection of $\mathbf{p}$ on the camera at pose $\mathbf{x}_2$
The basic idea of MLE is to look for the $\mathbf{x}_1$, $\mathbf{x}_2$ and $\mathbf{p}$ that maximize the probability of getting the measurements $\mathbf{z}_1$ and $\mathbf{z}_2$. This probability is the so-called "likelihood". Assuming that the noise we pick up in the measurement process is Gaussian, the likelihood can be modelled using a Gaussian distribution.
Let’s say the projection of $\mathbf{p}$ onto the camera at pose $\mathbf{x}_k$ is a function $h$:

$$\hat{\mathbf{z}}_k = h(\mathbf{x}_k, \mathbf{p})$$

The $\hat{\mathbf{z}}_k$ is then the estimated measurement. The probability of getting the measured data $\mathbf{z}_k$ is expressed as the likelihood of the real measurement given this estimation:

$$P(\mathbf{z}_k \mid \mathbf{x}_k, \mathbf{p}) = \mathcal{N}\big(h(\mathbf{x}_k, \mathbf{p}),\ \mathbf{\Sigma}_k\big)$$

where $\mathbf{\Sigma}_k$ is the covariance of our measurement noise.
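As a concrete illustration, here is a minimal numerical sketch of this measurement likelihood. The projection function, poses, landmark, and noise values below are made-up toy assumptions (a pinhole camera with identity intrinsics and no rotation), not values from the article:

```python
import numpy as np

def h(x_t, p):
    """Toy projection: shift landmark p into the camera frame by the
    camera translation x_t, then divide by depth (identity intrinsics)."""
    p_cam = p - x_t
    return p_cam[:2] / p_cam[2]

def gaussian_log_likelihood(z, z_hat, Sigma):
    """log N(z; z_hat, Sigma): log-density of measurement z around z_hat."""
    r = z - z_hat
    k = len(z)
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (k * np.log(2 * np.pi) + logdet
                   + r @ np.linalg.inv(Sigma) @ r)

x1 = np.array([0.0, 0.0, 0.0])           # toy camera pose (translation only)
p  = np.array([1.0, 2.0, 5.0])           # toy 3D landmark
Sigma = np.diag([0.1**2, 0.1**2])        # toy measurement noise covariance

z1 = h(x1, p) + np.array([0.05, -0.02])  # "measured" projection with noise
ll = gaussian_log_likelihood(z1, h(x1, p), Sigma)
```

The likelihood is highest when the measurement agrees exactly with the estimated projection, and drops as the residual grows.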
Combining all the measurements together (their noise is independent), we get:

$$P(\mathbf{z} \mid \mathbf{x}, \mathbf{p}) = \prod_{k=1}^{2} P(\mathbf{z}_k \mid \mathbf{x}_k, \mathbf{p})$$

The problem turns into maximizing the above likelihood function:

$$(\mathbf{x}, \mathbf{p})^* = \arg\max_{\mathbf{x}, \mathbf{p}} \prod_{k=1}^{2} P(\mathbf{z}_k \mid \mathbf{x}_k, \mathbf{p})$$
In order to get rid of the exponential and to turn this maximization into a minimization, we minimize the negative log-likelihood instead. Taking the logarithm turns the product into a sum:

$$-\ln P(\mathbf{z} \mid \mathbf{x}, \mathbf{p}) = -\sum_{k} \ln P(\mathbf{z}_k \mid \mathbf{x}_k, \mathbf{p})$$

then we get:

$$(\mathbf{x}, \mathbf{p})^* = \arg\min_{\mathbf{x}, \mathbf{p}} \Big( -\sum_{k} \ln P(\mathbf{z}_k \mid \mathbf{x}_k, \mathbf{p}) \Big)$$

After some calculation (writing out the Gaussian density and dropping the constant terms, which do not depend on $\mathbf{x}$ or $\mathbf{p}$), we get:

$$(\mathbf{x}, \mathbf{p})^* = \arg\min_{\mathbf{x}, \mathbf{p}} \sum_{k} \big(\mathbf{z}_k - h(\mathbf{x}_k, \mathbf{p})\big)^T \mathbf{\Sigma}_k^{-1} \big(\mathbf{z}_k - h(\mathbf{x}_k, \mathbf{p})\big)$$
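The "some calculation" step can be checked numerically: the negative log of a Gaussian density is a constant plus half the covariance-weighted squared residual. The measurement, prediction, and covariance below are toy values chosen only for this check:

```python
import numpy as np

Sigma = np.diag([0.01, 0.04])   # toy diagonal measurement covariance
z     = np.array([1.2, 0.9])    # toy real measurement
z_hat = np.array([1.0, 1.0])    # toy estimated measurement
r = z - z_hat                   # residual

# Evaluate the Gaussian density directly, then take its negative log
k = len(z)
density = (np.exp(-0.5 * r @ np.linalg.inv(Sigma) @ r)
           / np.sqrt((2 * np.pi) ** k * np.linalg.det(Sigma)))
neg_log_p = -np.log(density)

# The claimed closed form: a constant plus half the weighted squared error
const = 0.5 * (k * np.log(2 * np.pi) + np.log(np.linalg.det(Sigma)))
quadratic = 0.5 * r @ np.linalg.inv(Sigma) @ r
```

Since `const` does not depend on the unknowns, minimizing `neg_log_p` is the same as minimizing the quadratic term alone.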
In the graph optimization convention, we define a new term:

$$\mathbf{\Omega}_k = \mathbf{\Sigma}_k^{-1}$$

This is the so-called "information" matrix. It is usually a diagonal matrix, as the different dimensions of a measurement are usually independent.
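Concretely, with independent measurement dimensions the covariance is diagonal and inverting it works elementwise, so each dimension is weighted by the reciprocal of its noise variance. The standard deviations below are toy assumptions:

```python
import numpy as np

# Independent noise per measurement dimension (toy standard deviations)
Sigma = np.diag([0.5**2, 0.8**2])   # covariance matrix
Omega = np.linalg.inv(Sigma)        # information matrix

# For a diagonal covariance, Omega is also diagonal with entries
# 1 / sigma_i^2: a precise dimension (small sigma) gets a large weight.
```

Intuitively, the information matrix tells the optimizer how much to trust each dimension of each measurement.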
The difference between the real measurement and the estimated measurement is expressed as an error $\mathbf{e}$:

$$\mathbf{e}_k(\mathbf{x}) = \mathbf{z}_k - h(\mathbf{x}_k, \mathbf{p})$$

where the new $\mathbf{x}$ represents all the unknowns, including $\mathbf{x}_1$, $\mathbf{x}_2$ and $\mathbf{p}$.
$\mathbf{e}_1$ takes care of the error related to $\mathbf{x}_1$ and $\mathbf{p}$. For bundle adjustment, there is another error between $\mathbf{x}_2$ and $\mathbf{p}$:

$$\mathbf{e}_2(\mathbf{x}) = \mathbf{z}_2 - h(\mathbf{x}_2, \mathbf{p})$$

and, more generally, there is one such error term for every edge of the graph:

$$\mathbf{e}_{ij}(\mathbf{x})$$

We only use the indices $i$ and $j$ to represent the two sides of an edge, just for convenience.
Don’t forget that all these parameters have many degrees of freedom, so they are actually vectors, and our new $\mathbf{e}_{ij}$ is also a vector. The negative log-likelihood then becomes:

$$F(\mathbf{x}) = \sum_{\langle i,j \rangle} \mathbf{e}_{ij}^T \mathbf{\Omega}_{ij} \mathbf{e}_{ij}$$

Now we have converged to the graph optimization convention, where every graph optimization starts by setting the optimization goal of minimizing:

$$\mathbf{x}^* = \arg\min_{\mathbf{x}} \sum_{\langle i,j \rangle} \mathbf{e}_{ij}^T \mathbf{\Omega}_{ij} \mathbf{e}_{ij}$$
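Putting it all together, here is a minimal sketch of evaluating this objective for the two-camera, one-landmark example. The simplified projection (translation-only pinhole with identity intrinsics) and all numeric values are assumptions for illustration, not the article's actual setup:

```python
import numpy as np

def h(x_t, p):
    """Toy pinhole projection: identity intrinsics, translation-only pose."""
    p_cam = p - x_t
    return p_cam[:2] / p_cam[2]

def objective(x_ts, p, zs, Omegas):
    """F(x) = sum over edges of e^T Omega e (pose-landmark edges only)."""
    F = 0.0
    for x_t, z, Omega in zip(x_ts, zs, Omegas):
        e = z - h(x_t, p)              # error of one edge
        F += e @ Omega @ e             # information-weighted squared error
    return F

x_ts = [np.array([0.0, 0.0, 0.0]),     # toy camera pose 1
        np.array([1.0, 0.0, 0.0])]     # toy camera pose 2
p_true = np.array([1.0, 2.0, 5.0])     # toy landmark position
Omegas = [np.eye(2) * 100.0] * 2       # information matrices (sigma = 0.1)

zs = [h(x, p_true) for x in x_ts]      # noise-free measurements
F_true = objective(x_ts, p_true, zs, Omegas)
F_off  = objective(x_ts, p_true + np.array([0.1, 0.0, 0.0]), zs, Omegas)
```

With noise-free measurements the objective is exactly zero at the true landmark and grows as the estimate is perturbed, which is what an optimizer such as Gauss-Newton or Levenberg-Marquardt then exploits.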