What to do if you can’t do 60fps?

I’ve been working recently on figuring out application-to-compositor synchronization. One aspect of that is what timing information the compositor needs to send back to the application, and how the application should use it. In the case where everything is lightly loaded and we can hit 60fps, it’s pretty obvious what we want – we just output constantly spaced frames:

But what if we can’t do that? Say we only have the CPU and GPU resources to draw 40fps. To keep things simple and frame timing consistent, do we drop every other frame and draw the animation at 30fps?

(Gray frames are frames where we don’t do an update and reuse the previous image. The dotted circles show the theoretical position of the moving ball at the time of the frame.)

Or maybe it would be better to show more frames, to drop only one out of every three frames?

Or maybe we need to do something more sophisticated than to just drop frames – maybe when rendering a frame we need to take into account how long the frame will be displayed for and calculate positions at the center of the frame display period?
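The frame-dropping and frame-timing options above can be made concrete with a small sketch (hypothetical helper names, not code from the demo application). For a repeating draw/skip pattern, it computes the animation time each displayed frame would use: either the naive frame-start time, or the center of the period the frame will actually be on screen:

```python
REFRESH = 1 / 60.0  # display refresh interval in seconds (60Hz)

def sample_times(pattern, n_refreshes):
    """For each display refresh, return the animation time used to compute
    object positions. `pattern` is a repeating list of booleans, one per
    refresh: True means a new frame is drawn, False means the previous
    image is reused. Returns (frame_start, frame_center) lists."""
    frame_start, frame_center = [], []
    last_start = last_center = 0.0
    for i in range(n_refreshes):
        if pattern[i % len(pattern)]:
            # count how many refreshes this frame stays on screen
            span = 1
            while not pattern[(i + span) % len(pattern)]:
                span += 1
            t = i * REFRESH
            last_start = t                        # naive: sample at frame start
            last_center = t + span * REFRESH / 2  # center of display period
        frame_start.append(last_start)
        frame_center.append(last_center)
    return frame_start, frame_center

# Drop every third frame (40fps on a 60Hz display):
start, center = sample_times([True, True, False], 6)
```

With the frame-start times, a frame that stays on screen for two refreshes is stale by a full refresh at the end of its display period; sampling at the center of the display period spreads that error evenly, which is the “more sophisticated” option described above.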

The answer to what looked better wasn’t at all obvious to me, even after a few years of playing with drawing animations for GNOME Shell, so I wrote a demo application to try out various things. If you want to test it out, note that it needs to be run uncomposited, so under GNOME 3, run metacity --replace & from a terminal and then use Alt-F4 to close the “something has gone wrong” screen. (gnome-shell --replace & to get back to your desktop.)

So, what conclusions have I drawn from looking at my demo? The first conclusion is that 60fps is visually way better than anything else. This wasn’t completely obvious to me going in – after all, movies run at 24fps. But movies have motion blur from the exposure time, which we don’t have here. (Adding motion blur to desktop animations would increase computational work considerably, and it seems unlikely that 30fps + motion blur looks better than 60fps without motion blur.)

The second conclusion is that how we time things matters a lot. Of the two methods above for dropping every third frame, the second method is obviously much better than the first one.

The third conclusion is that if we can get frame timing right, then running at 40fps looks better than running at 30fps; but if we don’t get frame timing right, the visual appearance is about the same, or possibly even worse.

What does this mean for an application-to-compositor synchronization protocol? I don’t have the final answer to that yet, but in very general terms, we need to support applications that want to draw at frame rates like 40fps, because it can potentially look better – but we have to be careful that we support doing it with algorithms that actually do look better.

Update: BTW, if anybody knows useful literature references about this area, I’d be interested.


  1. Posted June 22, 2011 at 6:16 pm | Permalink

    What about using an extra framebuffer to store the last frame and displaying both at once, giving a sort of motion blur?

    • Owen
      Posted June 22, 2011 at 6:29 pm | Permalink

      My guess is that trying to combine together frames to get motion blur without rendering more frames is going to be ineffective. The point of motion blur in this context is to get rid of the discrete edges of the objects when they are instantaneously captured at some position – the “strobe light” effect. If we’re just leaving those same set edges on screen for a longer period of time, that doesn’t seem likely to improve things. But the real thing to do would be to try it. I was tempted to spend time adding optional motion blur to my demo to investigate such questions, but since it didn’t seem applicable to applications that can’t draw frames at 60fps, I decided to skip it.

  2. randall
    Posted June 22, 2011 at 6:35 pm | Permalink

    Many “modern” LCD panels will do motion blur for free! (A lot of cheap ARM tablets ship with these panels; most PCs don’t).

  3. Posted June 22, 2011 at 6:55 pm | Permalink

    constant steps > round down > frame centre > frame start

    Also, have you ever seen any of the experiments in high frame rate movies?

  4. Bob Bobson
    Posted June 22, 2011 at 6:56 pm | Permalink

    This cannot be a new problem, can it?

    Check the literature!

    • Owen
      Posted June 22, 2011 at 7:17 pm | Permalink

      If you know where to look, hints would be appreciated! (I actually updated the blog post 30 minutes or so ago to ask for literature references; I meant to do that originally and forgot.) I haven’t seen any good discussions of this in the past, and some extensive web searches and a quick look in Google Scholar didn’t turn up anything either. It’s slightly akin to frame-rate conversion, but the literature on that isn’t really applicable, because for frame-rate conversion the source material is a fixed set of frames.

  5. DDD
    Posted June 22, 2011 at 7:41 pm | Permalink

    I would recommend throttling to an exact divisor. So 60, 30, 15, 7 FPS. The only way the eye isn’t going to notice a judder is if you do luminance-weighted intermediates (basically a weighted average in YUV space), and if you had the CPU for that, you wouldn’t have a problem in the first place 😉

    • Owen
      Posted June 23, 2011 at 3:28 pm | Permalink

      I certainly thought going in that the exact divisor approach was likely to be the winner – that 30fps would always look better than 40fps. At least to my eye, that guess is not substantiated by the demo application – when frames are properly timed, 40 or 50fps looks a lot smoother to me than 30fps.

  6. Posted June 22, 2011 at 11:29 pm | Permalink

    I may be wrong, but I think movies look alright at 24fps because that was the speed at which they were recorded.

    imho, if you can’t draw at 60fps, then modify the draw rate to fit a slower refresh rate (in other words, draw fewer frames with the object moving further between each frame). Of course, that only works down to a certain frame rate – after which it starts looking like an object hopping through space – but it shouldn’t look bad at 30fps as long as the object isn’t moving fast. Motion blur would be great, but should probably be handled at the compositor level anyway.

    Maybe you could provide motion blur in the compositor, and when an application requests it, the compositor could blur between frames until the “buffer” stops “moving” (obviously the buffer doesn’t actually move ;-). Of course, I know nothing about what I’m talking about, so I understand if this just isn’t feasible.

    • Bob Bobson
      Posted June 23, 2011 at 11:57 am | Permalink

      I really don’t think you’ve understood what he’s saying…

      • Posted June 23, 2011 at 2:15 pm | Permalink

        In that case, please elaborate.

        Since you understood it better than I, it would be nice if you explained where you think I’m confused.

      • Owen
        Posted June 23, 2011 at 3:24 pm | Permalink

        Really, what I’m talking about in the post is basically how to “modify the draw rate to fit a slower refresh rate” – describing an experiment comparing several different ways of doing it. Movies look OK at 24fps not just because that’s the speed they were recorded at, but because, as I mentioned, they inherently have motion blur from the way the camera works. They also look OK (I didn’t get into this) because directors and camera operators learn to avoid situations where the low 24fps frame rate would cause distracting visual artifacts. In general, I don’t think that doing the sort of motion interpolation modern TVs do makes sense in the compositor – I’d rather leave the CPU and GPU resources to the application, and if the application is doing rendering complicated enough per frame that motion interpolation would help, then the application could do it itself.

  7. Anon
    Posted June 23, 2011 at 3:39 am | Permalink

    Do a Carmack and ask for an extension that allows tearing πŸ™‚ – http://www.eurogamer.net/articles/2011-06-16-john-carmack-the-future-now-interview (apparently the resolution is dynamically adjusted too though)

  8. Anon
    Posted June 23, 2011 at 4:01 am | Permalink

    If this was published http://www.eurogamer.net/articles/digitalfoundry-force-unleashed-60fps-tech-article then there may be something in the references…

    • Owen
      Posted June 23, 2011 at 3:51 pm | Permalink

      Thanks for the pointer; from the abstract it certainly looks like the author had likely managed to find references about the interaction between vision and animation that could be useful. I’ll have to see if I can track down a copy of it.

  9. Andreas Tunek
    Posted June 23, 2011 at 8:29 am | Permalink

    If you can, always try for the highest frame rate with the least latency. This gives the quickest input response, which is more important than the minuscule improvement in image quality from triple buffering and the like.

    Regarding literature, it seems like this problem is often encountered by games, and they seem to solve it by having a variable frame rate, tearing or capping the frame rate to something low (and still dip under that sometimes). I do not know where to start looking at info, but the more technical places of beyond3d might be a start.

  10. Chris Adsfor
    Posted June 24, 2011 at 5:56 am | Permalink

    Not directly related, but when TV signals didn’t have the bandwidth/hardware to do full 50fps video they used interlacing. Might be an interesting experiment to interlace the frames you do have, it’s a bit like motion blur.

    See [link] for a discussion of how to deinterlace video; it’s a nice intro to interlacing in general, and you can always reverse what they say.

    Also, I believe 24fps movies are actually shown at 48fps, because each frame is shown twice. (This is not the same as showing each frame for twice as long.) It also contributes to the film-like appearance.

    • Juan Pablo Ugarte
      Posted June 28, 2011 at 8:19 pm | Permalink

      Yes, in theaters they show each frame twice, which does not mean they leave it on screen longer – they actually open the shutter twice on each frame to make you think it’s a different frame (not sure how much that actually helps).

      On TV you can’t do that, so they came up with interlacing: showing each frame as two separate fields makes you think the motion is more fluid.

      This is why movies on NTSC don’t look as smooth as on PAL: on PAL they simply run the movie faster (25fps), which makes the motion fluid but has other problems, like a different audio pitch (pretty bad for music).

      On NTSC you show one frame for two fields and the next frame for three (12*2+12*3=60 fields per second).

      We could try different pulldown patterns when the app can’t hit 60 – for example, if it can hit 50, the compositor shows one frame in every five twice; for 40, one in every two; and so on.

      It’s either that or tearing. I personally prefer stepping down to divisors of the actual display rate – in this case, if an app can’t hit 60, going down to 30, then 20, then 15. But pulldown is a valid option here (even more so with higher-rate displays).
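The pulldown idea in this comment generalizes to any application rate: the repeats can be distributed as evenly as possible with a Bresenham-style accumulator. A rough sketch (hypothetical function, not an existing compositor API):

```python
def pulldown_pattern(app_fps, display_hz=60):
    """Number of refreshes each application frame is shown for over one
    second, distributing the repeated frames as evenly as possible."""
    counts, total = [], 0
    for frame in range(app_fps):
        # ideal cumulative refresh count after this frame, rounded down
        end = (frame + 1) * display_hz // app_fps
        counts.append(end - total)
        total = end
    return counts

pulldown_pattern(24)[:4]  # classic NTSC-style 2:3 pulldown: [2, 3, 2, 3]
pulldown_pattern(50)      # one frame in every five shown twice
pulldown_pattern(40)      # every other frame shown twice: [1, 2, 1, 2, ...]
```

The counts always sum to `display_hz`, so the pattern repeats cleanly every second regardless of the application rate.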

      BTW Owen: I usually hang out at #glade3

  11. Posted June 25, 2011 at 3:02 pm | Permalink

    I’d create animations based on a timeline. Something like move_object(OBJ, 100px to left, start_microtime, current_microtime, duration), so no matter what FPS you get, the app always knows exactly where/how to draw the next frame.
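That suggestion is essentially time-based (rather than frame-based) animation: position is a pure function of the clock, so it stays correct at any frame rate. A minimal sketch with hypothetical names:

```python
import time

def object_position(start_x, end_x, start_time, duration, now=None):
    """Where to draw the object, given only the current time. Works the
    same whether we render at 60fps, 40fps, or anything else."""
    if now is None:
        now = time.monotonic()
    progress = (now - start_time) / duration
    progress = min(max(progress, 0.0), 1.0)  # clamp to the [0, 1] range
    return start_x + (end_x - start_x) * progress

# Halfway through a 2-second move from x=0 to x=100:
object_position(0.0, 100.0, start_time=10.0, duration=2.0, now=11.0)  # → 50.0
```

Note this answers where to draw, but not the question the post is about: which instant to evaluate the timeline at when a frame stays on screen for more than one refresh.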

  12. Posted June 25, 2011 at 5:45 pm | Permalink

    I think you should look at how the existing animation frameworks solve this problem.

    For example, see how the Qt Animation framework or Silverlight or WPF solves this.

  13. Matt
    Posted June 28, 2011 at 10:50 pm | Permalink

    It seems to me that the problem you’re trying to deal with is the temporal analog of the spatial sampling problem. Of course there is a ton of literature on sampling artifacts in 1-dimensional time-varying signals too. In fact it’s probably the dominant mode of explaining the concepts. So look for digital signal processing books for the very basics. Expanding the math for 2-dimensional images is an exercise left for the reader in these books ;).

    Motion blur would be akin to super sampling then filtering the buffered samples to a lower output rate. Sample at 60 frames per second and then filter down to 30 fps being somewhat analogous to sampling at 1200×800 and then non-nearest-neighbor filtering down to 600×400.
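This analogy can be sketched directly: box-filtering high-rate samples down to a lower output rate is the temporal counterpart of area-averaging when downscaling an image (one pixel value per sample here, for simplicity; this is an illustration of the comment's analogy, not a practical renderer):

```python
def temporal_box_filter(samples, factor):
    """Average each group of `factor` consecutive samples into one output
    frame – crude motion blur via temporal supersampling."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples) - factor + 1, factor)]

# 60 samples/sec filtered down to 30fps: each output blends two inputs.
temporal_box_filter([0.0, 1.0, 1.0, 0.0], 2)  # → [0.5, 0.5]
```

Of course, as the post notes, this requires rendering *more* frames than the output rate, which is exactly what an application that can't hit 60fps lacks the resources for.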

    If you drop frames then you’re doing something more like nearest-neighbor scaling, and will suffer analogous artifacts. In fact, I think you’re virtually guaranteed that there is a worst-case animation for any fixed frame-dropping scheme. For example, imagine the red dot had bounced off a wall in a series of 3 frames. If your frame-dropping scheme drops the middle frame, then it appears as if no animation has occurred at all, or that the motion of the ball suddenly changed without hitting anything. On the other hand, if the motion is parallel to the “wall” then dropping a frame won’t be noticeable. So which frame-dropping scheme you ought to choose really depends heavily on which direction the ball is moving and what phase the frames are at relative to your fixed frame-dropping scheme.

    Since you have “a lot” of frames per second (more than 3 :)), perhaps the best way to drop frames is to add some small, limited amount of jitter when selecting which frames to drop. This is just like what some of the old spatial sampling schemes did when nearest-neighbor was no longer acceptable. That should get rid of the really annoying “regular” aliasing artifacts in exchange for less noticeable irregular ones. Another drawback is that you’ll see slight fluctuations in CPU/GPU load, so other jitter-sensitive tasks (e.g. smoothly tracking mouse input, where latency variability is more annoying/important than low throughput) might be negatively impacted.
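The jittered selection this comment describes might look like the following sketch (parameter names and defaults are assumptions; `keep_ratio` is the fraction of refreshes that actually get a newly drawn frame):

```python
import random

def jittered_frames(n_refreshes, keep_ratio=2 / 3, jitter=0.3, seed=0):
    """Choose which refreshes get a newly drawn frame: evenly spaced on
    average, but each pick perturbed by a small random offset to break
    up the regular aliasing pattern of a fixed drop schedule."""
    rng = random.Random(seed)
    step = 1.0 / keep_ratio  # ideal spacing between drawn frames
    kept = []
    pos = 0.0
    while pos < n_refreshes:
        frame = int(pos + rng.uniform(-jitter, jitter) * step)
        frame = min(max(frame, 0), n_refreshes - 1)  # stay within range
        if not kept or frame > kept[-1]:             # never go backwards
            kept.append(frame)
        pos += step
    return kept
```

As the comment notes, this trades regular judder for irregular (and arguably less objectionable) timing noise, at the cost of slightly fluctuating CPU/GPU load.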
