In general, I think Matt is right. But here are a couple of ideas that might lead to useful results.
First, focus on a very small area of the video. Using the CaptureScreen command, you might be able to capture the same area of the screen repeatedly. With a fast machine and a fast network connection you could capture a sequence of images fairly rapidly for a few seconds, and then analyze them in your script afterwards (so that the capture loop can run as frequently as possible) to see which ones show different frames.
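Something along these lines might work as a starting point. This is only a sketch: the Rectangle coordinates, the capture count, and the file names are all assumptions you'd need to adapt, and I'm comparing captures by their raw bytes (via binfile), which is a crude but cheap way to ask "did anything in this region change?"

```
-- Sketch: capture a small region as fast as possible, compare afterwards.
-- The Rectangle coordinates and file names here are just placeholders.
put 100 into captureCount
repeat with n = 1 to captureCount
	CaptureScreen (Name: "frame" & n, Rectangle: ((100,100),(140,140)))
end repeat

-- Now, outside the timing-critical loop, see which captures differ
-- from their predecessor by comparing the raw file contents
repeat with n = 2 to captureCount
	if binfile ("frame" & n & ".png") is not binfile ("frame" & (n-1) & ".png") then
		put "Capture " & n & " differs from capture " & (n-1)
	end if
end repeat
```

Keeping the comparison out of the capture loop is the important part; any per-iteration analysis would slow the capture rate, which is the thing you're trying to maximize.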
Another approach would be to simply use the colorAtLocation() function to repeatedly examine the color of one pixel (or maybe a few pixels) on the screen to see that they are changing as the video runs.
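A minimal sketch of that polling idea might look like the following; the pixel location, loop count, and message text are all assumptions, and in practice you'd pick a pixel you know the video changes frequently:

```
-- Sketch: poll a single pixel and count how often its color changes.
-- The location (320,240) and the repeat count are placeholders.
put ColorAtLocation(320, 240) into lastColor
put 0 into changeCount
repeat 200 times
	put ColorAtLocation(320, 240) into thisColor
	if thisColor is not lastColor then
		add 1 to changeCount
		put thisColor into lastColor
	end if
end repeat
put "Pixel changed " & changeCount & " times"
```

Note that each ColorAtLocation call still depends on the screen data the VNC server has delivered, so this measures changes as seen by Eggplant, not necessarily every frame drawn on the remote display.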
Any approach in Eggplant, though, will be limited by how quickly the VNC server is notified by the operating system that the screen has changed, picks up those changes, and sends them across the network to Eggplant, and by how quickly Eggplant can process that data. That process involves a lot of variables that are outside our direct control, and I don’t know that we’ve ever run experiments to see just how fast or consistent it is at the speeds you’re talking about. It will also depend on the specific operating system, VNC server, and network configuration you’re working with.
If you decide to give this a try and learn something from it, please post the details here so everyone can learn from your experience!