Decomposing the image down to a 7x7 grid still yielded acceptable estimates of the parameters P, and therefore an MSE. If the decomposition went any further, however, the resulting sub-images became too small for any meaningful estimate of P, so no MSE could be computed.
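As a rough illustration of the per-block procedure described above, here is a minimal sketch. It is not the actual code used: the real parameters P are projective, while this sketch fits only a per-block translation (a least-squares optical-flow step) as a hypothetical stand-in, then records the residual MSE for each block.

```python
import numpy as np

def block_motion_mse(frame_a, frame_b, grid=7):
    """Split two grayscale frames into a grid x grid array of blocks,
    fit a translation per block (stand-in for the full parameters P),
    and return the per-block MSE of the brightness-constancy residual."""
    a = frame_a.astype(float)
    b = frame_b.astype(float)
    Iy, Ix = np.gradient(a)          # spatial gradients
    It = b - a                       # temporal difference
    h, w = a.shape
    bh, bw = h // grid, w // grid
    mse = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            sl = (slice(i * bh, (i + 1) * bh), slice(j * bw, (j + 1) * bw))
            A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
            it = It[sl].ravel()
            # least-squares flow (u, v); lstsq tolerates degenerate blocks
            uv, *_ = np.linalg.lstsq(A, -it, rcond=None)
            resid = it + A @ uv
            mse[i, j] = np.mean(resid ** 2)
    return mse
```

The failure mode mentioned above shows up here too: as `grid` grows, each block holds fewer pixels, the least-squares system becomes ill-conditioned, and the fitted parameters (and hence the MSE) stop being meaningful.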
Ideally, I would have liked to do this over the entire image, in a way similar to the robust estimation assignment. One thing I am having major difficulty with is what the Hough transform for video orbits actually is, and how this hyperbolic regression is actually carried out. I understand that the fit is hyperbolic in shape, but how does each pixel contribute, what is in this other space, and how would I fit more than one hyperbola? I imagine the process is very similar, but I have yet to understand it completely.
The images I worked on were quite similar: two successive frames of a video sequence. The initial idea did seem to hold true. The resolution is admittedly quite coarse, but that was expected, as this is only a proof of concept.
Above you can see the original picture after I mask out the parts with a high MSE. As a sanity check, the masked regions correspond to part of the closer sign, the metal lamp post across the street, the shadow on the street, the crack in the street, and one square that simply has a high MSE. These motions are small and should probably be ignored.
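The masking step can be sketched as a simple threshold on the per-block MSE map. This is a hypothetical helper, assuming a grayscale image and a block grid inferred from the shape of the MSE map; here the low-residual blocks are zeroed so that only the high-MSE regions remain visible.

```python
import numpy as np

def show_high_mse(image, mse, threshold):
    """Zero out blocks whose MSE is at or below `threshold`, leaving only
    the high-residual (high-MSE) regions of `image` visible."""
    out = np.zeros_like(image, dtype=float)
    gi, gj = mse.shape
    bh, bw = image.shape[0] // gi, image.shape[1] // gj
    for i in range(gi):
        for j in range(gj):
            if mse[i, j] > threshold:
                sl = (slice(i * bh, (i + 1) * bh), slice(j * bw, (j + 1) * bw))
                out[sl] = image[sl]
    return out
```

Choosing `threshold` is the judgment call: too low and small, ignorable motions (like the ones listed above) survive the mask; too high and real moving objects disappear.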
In the next pair of test images the motion is much larger. The sign is in the foreground, as is the shadow from the light pole. There appear to be essentially two motions: the background, which occupies the left side of the frame, and the sign, lamp post, and lamp-post shadow.
It isn't the most beautiful demonstration, but it does serve as a proof of concept: if we applied robust estimation to orbits, we would be able to discern different motions. This may well be something I will try to work on.