I thought I’d better post this as a new topic so it doesn’t get missed, as I believe the root cause is a bug in Vine Server.
To better understand why I thought I was getting too much frame data, I experimented with different capture-area sizes and examined the captured data as a series of images.
I found that once the capture area exceeded a certain size, the resultant image became corrupted.
For example, a 64x64 pixel area was fine - no corruption. But a 128x64 pixel capture showed image corruption from the 57th/58th scan line onwards. By corruption I mean that the following scan lines are offset by a pixel or two, as if a few extra bytes had been inserted into the data stream.
As a complete shot in the dark I looked at the Vine Server source and tried increasing the UPDATE_BUF_SIZE macro (in rfb.h) to 100000. And voila, the corruption went away.
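For concreteness, the whole "fix" was a one-line edit to rfb.h. The original value shown here is from memory (I believe the VNC code this descends from uses 30000), so check your own copy:

```c
/* rfb.h - stock value quoted from memory; verify against your source */
#define UPDATE_BUF_SIZE 100000   /* was 30000, I believe */
```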
I then increased the capture area so that a full update once again exceeded UPDATE_BUF_SIZE, and the corruption reappeared.
So it looks to me as though there is a subtle bug in the way the client update buffer is emptied and refilled when it is too small to hold a complete update. Make the buffer big enough to hold a complete update for the area you want to capture and everything is fine; if it is too small, the data stream gets corrupted.
And when the buffer is big enough to handle a complete update in one pass, I get exactly the expected amount of data arriving at my client.
It’s all too easy to cry ‘Bug!’ at the first opportunity, I know, but it would be great if someone at Redstone could look into this and verify my theory.
PS. If I’m correct, this should also be evident when using VNC in its normal mode of operation, i.e. transferring a video display to a remote terminal. It’s probably masked by the fact that you only get one full-screen update, at the start of the session. That update would be corrupted, but you might never notice, because the subsequent rapid updates would be far smaller than the update buffer (and are likely to be compressed as well).