Video Orbits is software, based on the methods presented in , that attempts to solve the problem of relating two images of the same scene. Specifically, we would like to be able to relate two different frames of video (i.e. images that are relatively similar).
The idea is that, in order to relate the two images, we should model the physical underpinnings of the difference: that is, what is causing the differences between these two frames of video. The differences are caused by the motion of the camera as it records the scene. This is especially apparent in the context of mobile wearable video capture, where the camera is quite literally a replacement for the eye and is subject to a wide variety of motions. These motions range from translations, rotations, and scale changes (i.e. zoom) to pan and tilt. An important point is that although the motions occur in three-dimensional space, they are projected onto, and represented in, a two-dimensional image space. Thus all discussion of the transformation of one set of points into another takes place in this image space.
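Each of these camera motions, once projected into the image plane, can be represented as a projective coordinate transformation acting on homogeneous image coordinates. The following sketch (plain NumPy; the function name and example matrix are illustrative, not from the Video Orbits code itself) shows how a 3x3 homography maps points from one frame into another, with a pure translation as the simplest case.

```python
import numpy as np

def project(H, points):
    """Apply a 3x3 projective (homography) matrix H to 2-D image points.

    points: (N, 2) array of pixel coordinates in the first frame.
    Returns the corresponding (N, 2) coordinates in the second frame.
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]  # divide out the projective scale

# A pure 2-D translation by (5, -3) pixels, written as a homography.
# Rotations, zooms, and pan/tilt (chirp) terms fill the other entries.
H_translate = np.array([[1.0, 0.0,  5.0],
                        [0.0, 1.0, -3.0],
                        [0.0, 0.0,  1.0]])

pts = np.array([[0.0, 0.0], [10.0, 20.0]])
print(project(H_translate, pts))  # [[ 5. -3.] [15. 17.]]
```

The translation case leaves the bottom row of H untouched; nonzero entries there introduce the projective (pan/tilt) effects, which is why the division by the third homogeneous coordinate is needed in general.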
The modelling of these motions rests on two typical assumptions.