With the 8 projective parameters P it is possible to transform an image from one frame to the next. Because successive frames are highly redundant, I could send one frame of data and ``pchirp'' it several times to generate the subsequent frames. This would be interesting as a video codec. Of course some pixels are missing, and there are problems when new things come onto the screen, but those problems can be remedied, especially if P can be calculated for different motions (i.e. different objects). Quantitatively, I have no idea whether video orbits would be feasible as a codec, but it seems promising. It might suit things like wearable computing interaction, where what the wearer sees is streamed out to somewhere else. More mundanely, it could be used for straight video conferencing, where the background is static and only the speaker or foreground is moving.
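As a rough sketch of the decoder side, assuming the 8 parameters are packed into a 3x3 homography matrix (with the lower-right entry fixed at 1, since a projective transform has 8 degrees of freedom), one frame could be warped into the next by inverse mapping. The function name and the nearest-neighbor sampling here are illustrative choices, not part of the original scheme:

```python
import numpy as np

def warp_projective(frame, P):
    """Warp a grayscale frame by a 3x3 projective (homography) matrix P,
    using inverse mapping with nearest-neighbor sampling.
    Pixels whose preimage falls outside the source frame are left at 0,
    which is exactly where a codec would have to patch in new data."""
    h, w = frame.shape
    Pinv = np.linalg.inv(P)               # map destination coords back to source
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Pinv @ dst
    src = src[:2] / src[2]                # perspective (projective) divide
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(frame)
    out.ravel()[valid] = frame[sy[valid], sx[valid]]
    return out
```

Repeatedly applying this with the same P is the ``pchirp'' idea: each application composes one more step of the projective motion onto the transmitted frame.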