Avoiding Jitter in Composited Frame Display

When I last wrote about compositor frame timing, the basic compositor algorithm was very simple:

  • When we receive damage, schedule a redraw immediately
  • If a redraw is scheduled, and we’re still waiting for the previous swap to complete, redraw when the swap completes
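In rough pseudo-Python, the simple algorithm might be sketched like this; the class and callback names are illustrative, not Mutter's or Weston's actual API:

```python
# Minimal sketch of the simple damage-driven redraw algorithm.
# Hypothetical names; not any compositor's real code.

class SimpleScheduler:
    def __init__(self):
        self.swap_pending = False   # waiting for the previous swap to finish
        self.redraw_queued = False  # damage arrived while a swap was pending

    def on_damage(self):
        if self.swap_pending:
            # Can't draw yet; remember that we need another redraw.
            self.redraw_queued = True
        else:
            self.redraw()

    def on_swap_complete(self):
        self.swap_pending = False
        if self.redraw_queued:
            self.redraw_queued = False
            self.redraw()

    def redraw(self):
        # Composite the scene and request a buffer swap; the swap
        # completes at the next vertical blank.
        self.swap_pending = True
```

Since a swap only completes once per vertical blank, this naturally caps the compositor at the refresh rate while letting a slow client get every frame it manages to produce onto the screen.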

This is the algorithm that Mutter has been using for a long time, and is also the algorithm used by Weston, the Wayland compositor. This algorithm has the nice property that we draw at no more than 60 frames per second, but if a client can’t keep up and draw at 60fps, we draw all the frames that the client can draw as soon as they are available. We can see this graceful degradation in the following diagram:

But what if we have a source such as a video player which provides content at a fixed frame rate less than the display’s frame rate? An application that doesn’t do 60fps, not because it can’t, but because it doesn’t want to. I wrote a simple test case that displayed frames at 24fps or 30fps. These frames were graphically minimal – drawing them did not load the system at all – but I saw surprising behavior: when anything else started happening on the system – if I moved a window, if a web page updated – frames would be displayed at the wrong time. There would be jitter in the output.

To see what was happening, first take a look at how things work when the video player is drawing at 24fps and the system is otherwise idle:

Then consider what happens when another client gets involved and draws. In the following chart, the yellow shows another client rendering a frame, which is queued up for swap when the second video player frame arrives:

The video player frame is displayed a frame late. We’ve created jitter, even though the system is only lightly loaded.

The solution I came up with for this is to make the compositor wait for a fixed point in the VBlank cycle before drawing. In my current implementation, the compositor starts drawing 2ms after the VBlank. So, the algorithm is:

  • When we receive damage, schedule a redraw for 2ms after the next VBlank.
  • If a redraw is scheduled for time T, and we’re still waiting for the previous swap to complete at time T, redraw immediately when the swap completes

This allows the application to submit a frame and know with certainty when the frame will be displayed. There’s a tradeoff here – we slightly increase the latency for responding to events, but we solve the jitter problem.

There is one notable problem with the approach of drawing at a fixed point in the VBlank cycle, which we can see if we return to the first chart, and redo it with the waits added:

What we see is that the system is now idle some of the time and the frame rate that is actually achieved drops from 24fps to 20fps – we’ve locked to a sub-multiple of the 60fps frame rate. This looks worse, but also has another problem. On a system with power saving, it will start in a low-power, low-performance mode. If the system is partially idle, the CPU and GPU will stay in low power mode, because it appears that that is sufficient to keep up with the demands. We will stay in low power mode doing 20fps even though we could do 60fps if the CPU and GPU went into high-power mode.

The solution I came up with for this is a modified algorithm where, when the application submits a frame, it marks it with whether it’s an “urgent” frame or not. The distinguishing characteristic of an urgent frame is that the application started the frame immediately after the last frame without sleeping in between. Then we use a modified algorithm:

  • When we receive damage:
    • If it’s part of an urgent frame, schedule a redraw immediately
    • Otherwise, schedule a redraw for 2ms after the next VBlank.
  • If a redraw is scheduled for time T, and we’re still waiting for the previous swap to complete at time T, redraw immediately when the swap completes
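The scheduling decision in the final algorithm can be sketched as follows, assuming a 60Hz refresh with VBlanks at integer multiples of the refresh interval; `next_vblank` and `schedule_redraw` are hypothetical names, not Mutter's actual API:

```python
# Sketch of the modified (urgent-aware) scheduling decision.
import math

REFRESH_MS = 1000.0 / 60.0  # 60Hz refresh interval
DRAW_OFFSET_MS = 2.0        # fixed draw point: 2ms after the VBlank

def next_vblank(now_ms):
    # Time of the next vertical blank at or after now_ms, with VBlanks
    # assumed to fall at integer multiples of the refresh interval.
    return math.ceil(now_ms / REFRESH_MS) * REFRESH_MS

def schedule_redraw(now_ms, urgent, swap_pending):
    """Return when the compositor should redraw, or None if it must
    wait and redraw immediately once the pending swap completes."""
    if swap_pending:
        return None
    if urgent:
        # Client is rendering flat-out: don't add any delay.
        return now_ms
    # Non-urgent frame: wait for the fixed point in the VBlank cycle
    # so the client knows exactly when the frame will be displayed.
    return next_vblank(now_ms) + DRAW_OFFSET_MS
```

The urgent path keeps a fully loaded client pipelined at maximum throughput, while the non-urgent path gives a fixed-rate client a predictable display time.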

I’m pretty happy with how this algorithm works out in testing, and it may be as good as we can get for X. The main downside I know of is that it solves the two problems – handling clients that need all the rendering resources of the system, and handling clients that want minimum jitter for displayed frames – only individually; it doesn’t solve the combination. The client that is rendering full-out at 24fps is also vulnerable to jitter from other clients drawing, just like the client that is choosing to run at 24fps. There are mitigation strategies – for example, not triggering a redraw when an obscured client changes – but I don’t have a full answer. Unredirecting full-screen games definitely is a good idea.

What are other approaches we could take to the overall problem of jitter? One approach would be to use triple buffering for the compositor’s output so it never has to block and wait for the VBlank – as soon as the previous frame completes, it could start drawing the next one. But the strong disadvantage of this is that when two clients are drawing, the compositor will be rendering at more than 60fps and throwing some frames away. We’re wasting work in a situation where we already have oversubscribed resources. We really want to coalesce damage and only draw one compositor frame per VBlank cycle.

The other approach that I know of is to submit application frames tagged with their intended frame times. If we did this, then the video player could submit frames tagged two VBlank intervals in the future, and reliably know that they would be displayed with that latency and never unexpectedly be displayed early. I think this could be an interesting thing to pursue for Wayland, but it’s basically unworkable for X, since there is no way to queue application frames. Once the application has drawn new window contents, they’ve overwritten the old window contents, and the old window contents are no longer available to the compositor.

Credit: Kristian Høgsberg made the original suggestion that waiting a few ms after the VBlank might provide a solution to the problem of unpredictable latency.

Application/Compositor Synchronization

This blog entry continues an extended series of posts over the last couple of years.

What we figured out in the last post was that if you can’t animate at 60fps, then from the point of view of achieving a smooth display, a very good thing to do is to just animate as fast as you can while still giving the compositor time to redraw. The process is represented visually below. (You can click on the image for a larger version.)

The top section shows a timeline of activity for the Application, Compositor, and X server. At the bottom, we show the contents of the application’s window pixmap, the back buffer, and the front buffer as time progresses. From this, we can get an idea of the time between the point where a user hits a key and the point where that displays on the screen: the input latency. The keystroke C almost immediately makes its way into a new application frame, and that new frame is almost immediately drawn by the compositor into the back buffer, and the back buffer is almost immediately swapped by the X server. On the other hand, the keystroke D suffers multiple delays.

What happens if we use the same algorithm when we’re unloaded – when the total drawing time is less than the interval between screen refreshes? Then it looks like:

This is basically working pretty well – but we note that even though the application is drawing quickly and the entire system is unloaded we still have a lot of input latency. If we plot the latency versus the application drawing time it looks like:

The shaded area shows the theoretical range of latencies, the solid line the theoretical average latency, and the points show min/max/avg latencies as measured in a simulation. (It should be mentioned that this is only the latency when we’re continually drawing frames. An isolated frame won’t have any problems with previously queued frames, so will appear on the screen with minimal latency.)

We could potentially improve this by having the application delay rendering a new frame – the compositor can use the time used to render the last frame to make a guess as to a “deadline” – a time by which the application needs to have the frame rendered. We can again look at a timeline plot and simulated latencies for this algorithm:
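The deadline guess described above might be computed along these lines; the function and parameter names, and the safety margin, are illustrative assumptions:

```python
# Sketch of a compositor's deadline estimate for the application.
# The application must finish drawing early enough that the compositor
# can still composite the scene and swap before the vertical blank.

def render_deadline(next_vblank_ms, compositor_draw_ms, margin_ms=1.0):
    # next_vblank_ms: when the next vertical blank occurs
    # compositor_draw_ms: how long the compositor's last redraw took,
    #   used as a guess for the next one
    # margin_ms: safety margin against mis-prediction (assumed value)
    return next_vblank_ms - compositor_draw_ms - margin_ms
```

If the guess is wrong and the application finishes after the deadline, the frame misses the VBlank entirely, which is exactly the failure mode discussed next.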

There are downsides to delaying frame render – the obvious one is that if we guess wrong and the application starts the frame too late, then we can entirely miss a frame. From a smoothness point of view this looks really bad. In general, an application should only use a deadline provided by the compositor if it has reason to believe that the next frame is roughly similar to the previous one. Another disadvantage is that the delay algorithm does cause a frame-rate cliff as soon as the time to draw a frame exceeds the vblank period – there is an instant drop from 60fps to 30fps.

Which of these two algorithms is better likely depends on the application: if an application wants maximum animation smoothness and protection from glitches, drawing frames as early as possible makes sense. On the other hand, if input latency is a critical factor – for a game or a real-time music application – then delaying frame drawing as late as possible would be preferable.

So, what we want to do from the level of application/compositor synchronization is provide enough information to allow applications to implement different algorithms. After drawing a frame, the compositor should send a message to the application containing:

  • The expected time at which the frame will be displayed onscreen
  • If possible, a deadline by which the application needs to have finished drawing the next frame for it to appear onscreen
  • The time at which the next frame will be displayed onscreen
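As a sketch, the compositor's end-of-frame message might carry fields like these; the type and field names are hypothetical and not part of any existing protocol:

```python
# Hypothetical representation of the compositor's end-of-frame message.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameDrawnEvent:
    # When the frame just drawn will actually reach the screen.
    presentation_time_ms: float
    # When the *next* frame would be displayed if it is finished in time.
    next_presentation_time_ms: float
    # Latest time to finish drawing the next frame, if the compositor
    # is able to estimate one; None otherwise.
    deadline_ms: Optional[float] = None
```

Making the deadline optional matches the point above: even without it, a basic end-of-frame response is enough for an application to throttle itself sensibly.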

But even without the deadline information, just having a basic response at the end of the frame already greatly improves on the current situation. I’m working on a proposal to add application/compositor synchronization to the Extended Window Manager Hints specification.

Frame Timing: the Simple Way

This is a follow-up to my post from last week; I wanted to look into why the different methods of frame timing looked different from each other. As a starting point, we can graph the position of a moving object (like the ball) as displayed to the user versus its theoretical position (gray line):

The “Frame Start” and “Frame Center” lines represent the two methods described in the last post. We either choose position of objects in the frame based on the theoretical position at the start of the time that the frame will be displayed, or at the center of the frame time. The third line “Constant Steps” shows a method that wasn’t described in the last post, but you would have seen if you tried the demo: it’s actually the simplest possible algorithm you can imagine: compute the positions of objects at the current time, draw the frame as fast as possible, start a new frame.
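The three methods differ only in the time at which object positions are evaluated. A sketch, using a constant-velocity `position()` as a stand-in for the animation's real motion (all names here are illustrative):

```python
# Sketch of the three timing methods for a constant-velocity object.
# Times in seconds, positions in arbitrary units.

def position(t):
    return 100.0 * t  # stand-in motion: constant velocity

def frame_start(display_time, display_duration):
    # Position at the start of the frame's on-screen interval.
    return position(display_time)

def frame_center(display_time, display_duration):
    # Position at the center of the frame's on-screen interval.
    return position(display_time + display_duration / 2.0)

def constant_steps(now):
    # Simplest method: position at the moment we start drawing,
    # then draw as fast as possible and start the next frame.
    return position(now)
```

Note that the first two require knowing the future display time and duration of the frame, while Constant Steps needs no prediction at all.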

The initial reaction to the above graph is that the “Frame Center” method is about as good as you can get at tracking the theoretical position, and the “Constant Steps” method is much worse than the other two. But this isn’t what you see if you try out the demo – Constant Steps is actually quite a bit better than Frame Start. Trying to understand this, I realized that the delay – the vertical offset in the graph – is really completely irrelevant to how smooth things look – the user has no idea of where things are supposed to be – just how things change from frame to frame. What matters for smoothness is mostly the velocity – the distance things move from frame to frame. If we plot this, we see a quite different picture:

Here we see something more like the visual impression – that the “Frame Start” method has a lot more bouncing around in velocity as compared to the other two. (The velocity for all three drops to zero when we miss a frame.) We can quantify this a bit by graphing the variance of the velocity versus time to draw a frame. We look at the region from a frame draw time of 1 frame (60fps) to a frame draw time of 2 frames (30fps).

Here we see that, in terms of providing consistent motion velocity, Constant Steps is actually a bit better than Frame Center at all times.

What about latency? You might think that Constant Steps is worse because it’s tracking the theoretical position less closely, but really, this is an artifact – to implement Frame Center, we have to predict future positions. And, unless we can predict what the user is going to do, predicting future positions cannot reduce latency in responding to input. The only thing that tracking the theoretical positions more closely helps with is if we’re trying to do something like syncing video to audio. And of course, to the extent that we can predict future positions or compute a delay to apply to the audio track, we can do that for Constant Steps as well: instead of drawing everything at its current position, we can choose positions at a time shortly in the future.

If such a simple method works well, do we actually need compositor-to-application synchronization? It’s likely still needed, because we can’t really draw frames as fast as possible: we should draw frames only as fast as we can while still allowing the compositor to always be able to get in, get GPU resources, and draw a frame at the right time.

What to do if you can’t do 60fps?

I’ve been working recently on figuring out application-to-compositor synchronization. One aspect of that is what timing information the compositor needs to send back to the application, and how the application should use it. In the case where everything is lightly loaded and we can hit 60fps, it’s pretty obvious what we want – we just output constantly spaced frames:

But what if we can’t do that? Say we only have the CPU and GPU resources to draw 40fps. To keep things simple and frame timing consistent, do we drop every other frame and draw the animation at 30fps?

(Gray frames are frames where we don’t do an update and reuse the previous image. The dotted circles show the theoretical position of the moving ball at the time of the frame.)

Or maybe it would be better to show more frames, to drop only one out of every three frames?

Or maybe we need to do something more sophisticated than to just drop frames – maybe when rendering a frame we need to take into account how long the frame will be displayed for and calculate positions at the center of the frame display period?
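The "center of the display period" idea from the last question can be made concrete with a small sketch; the refresh rate and function names here are illustrative assumptions:

```python
# Sketch: if a frame will stay on screen for n refresh cycles, sample
# object positions at the middle of that interval, not at its start.

REFRESH = 1.0 / 60.0  # assumed 60Hz display

def frame_sample_time(display_start, refresh_cycles):
    # display_start: when this frame goes on screen
    # refresh_cycles: how many refresh cycles it will remain displayed
    return display_start + (refresh_cycles * REFRESH) / 2.0
```

For example, a frame shown for two refresh cycles would be rendered with positions computed one refresh interval into its display period, so the motion it represents is centered on the time the viewer actually sees it.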

The answers to what looked better weren’t at all obvious to me, even after a few years of playing with drawing animations for GNOME Shell, so I wrote a demo application to try out various things. If you want to test it out, note that it needs to be run uncomposited, so under GNOME 3, run metacity --replace & from a terminal and then use Alt-F4 to close the “something has gone wrong” screen. (gnome-shell --replace & to get back to your desktop.)

So, what conclusions have I drawn from looking at my demo? The first conclusion is that 60fps is visually way better than anything else. This wasn’t completely obvious to me going in – after all, movies run at 24fps. But movies have motion blur from the exposure time, which we don’t have here. (Adding motion blur to desktop animations would increase computational work considerably, and it seems unlikely that 30fps + motion blur looks better than 60fps without motion blur.)

The second conclusion is that how we time things matters a lot. Of the two methods above for dropping every third frame, the second method is obviously much better than the first one.

The third conclusion is that if we can get frame timing right, then running at 40fps looks better than running at 30fps, but if we don’t get frame timing right, then the visual appearance is about the same, or possibly even worse.

What does this mean for an application to compositor synchronization protocol? I don’t have the final answer to that yet, but in very general terms we need to support applications that want to draw at frame rates like 40fps, because it can potentially look better – but we have to be careful that we support doing it with algorithms that actually look better.

Update: BTW, if anybody knows useful literature references in this area, I’d be interested.

Benchmarking compositor performance

Recently Phoronix did an article about performance under different compositing and non-compositing window managers. GNOME Shell didn’t do that well, so lots of people pointed it out to me. Clearly there was a lot of work put into making measurements for the article, but what is measured is a wide range of 3D fullscreen games across different graphics drivers, graphics hardware, and environments.

Now, if what you want to do with your Linux system is play 3D games this is very relevant information, but it really says absolutely nothing about performance in general. Because the obvious technique to use when a 3D game is running is to “unredirect” the game – and let it display normally to the screen without interference from the compositor. Depending on configuration options, both Compiz and KWin will unredirect, while GNOME Shell doesn’t do that currently, so this (along with driver bugs) probably explains the bulk of difference between GNOME Shell and other environments.

Adel Gadllah has had patches for Mutter and GNOME Shell to add unredirection for over a year, but I’ve dragged my feet on landing them, because there were some questions about when it’s appropriate to unredirect a window and when not that I wasn’t sure we had fully answered. We want to unredirect fullscreen 3D games, but not necessarily all fullscreen windows. For example, a fullscreen Firefox window is much like any other window and can have separate dialog windows floating above it that need compositing manager interaction to draw properly.

We should land some sort of unredirection soon to benefit 3D gamers, but really, I’m much more interested in compositing manager performance in situations where the compositing manager actually has to composite. So, that’s what I set out this week to do: to develop a benchmark to measure the effect of the compositing manager on application redraw performance.

Creating a benchmark

The first thing that we need to realize when creating such a benchmark is that the only drawing that matters is drawing that gets to the screen. Any frames drawn that aren’t displayed by the compositor are useless. If we have a situation where the application is drawing at 60fps, but the compositor only is drawing 1fps, that’s not a great performing compositor, that’s a really bad performing compositor. Application frame rate doesn’t matter unless it’s throttled to the compositor frame rate.

Now, this immediately gets us to a sticky problem: there are no mechanisms to throttle application frame rate to the compositor frame rate on the X desktop. Any app that is doing animations or video, or anything else, is just throwing frames out there and hoping for the best. Really, doing compositor benchmarks before we fix that problem is just pointless. Luckily, there’s a workaround that we can use to get some numbers out in the short term – the same damage extension that compositors use to find out when a window has been redrawn and has to be recomposited to the screen can also be used to monitor the changes that the compositor is making to the screen. (Screen-scraping VNC servers like Vino use this technique to find out what they need to send out over the wire.) So, our benchmark application can draw a frame, and then look for damage events on the root window to see when the drawing they’ve done has taken effect.

This looks something like:

In the above picture, what is shown is a back-buffer to front-buffer copy that creates damage immediately, but is done asynchronously during the vertical blanking interval. The MESA_copy_sub_buffer GL extension basically does this, with the caveat that (for the Intel and AMD drivers) it can entirely block the GPU while waiting for the blank.

I’ve done some work to develop this idea into a benchmark I’m calling xcompbench. (Source available.)

Initial Results

Below is a graph of some results. What is shown here is the frame rate of a benchmark that blends a bunch of surfaces together via cairo as we increase an arbitrary “load factor” which is proportional to the number of surfaces blended together. Since having only one window open isn’t normal, the results are shown for different “depths”, which are how many xterms are stacked underneath the benchmark window.

Compositor Benchmark (Cairo Blending)

So, what we see above is that if we are drawing to an offscreen pixmap, or we are running with metacity and no compositing, the frame rate decreases smoothly as the load factor increases. When you add a compositor, things change: if you look at the solid blue line for mutter you see the prototypical behavior – the frame rate pins at 60fps (the vertical refresh rate) until it drops below it, then you see some “steps” where it preferentially runs locked to integral fractions of the frame rate – 40fps, 30fps, 20fps, etc. Other things seen above: kwin runs similarly to mutter with no other windows open, but drops off as more windows are added, while mutter and compiz are pretty much independent of the number of windows. And compiz is running much slower than the other compositors.

Since the effect of the compositor on performance depends on what resources the compositor and application are competing for, it clearly matters what resources the benchmark is using – is it using CPU time? is it using memory bandwidth? is it using lots of GPU shaders? So, I’ll show results for two other benchmarks as well. One draws a lot of text, and another is a simple GL benchmark that draws a lot of vertices with blending enabled.

Compositor Benchmark (Text Drawing)

Compositor Benchmark (GL Drawing)

There are some interesting quirks there that would be worth some more investigation – why is the text benchmark considerably faster drawing offscreen than running uncomposited? why is the reverse true for the GL benchmark? But the basic picture we see is the same as for the first benchmark.

So, this looks pretty good for Mutter right? Well, yes. But note:

It’s all about Timing

The reason Compiz is slow here isn’t that it has slow code, it’s that the timing of when it redraws is going wrong with this benchmark. The actual algorithm that it uses is rather hard to explain, and so are the ways it interacts with the benchmark badly, but to give a slight flavor of what might be going on, take a look at the following diagram.

If a compositor isn’t redrawing immediately when it receives damage from a client, but is waiting a bit for more damage, then it’s possible it might wait too long and miss the vertical blank entirely. Then the frame rate could drop way down, even if there was plenty of CPU and GPU available.

Future directions

One thing I’d like to do is to be able to extract a more compact set of numbers. The charts above clearly represent relative performance between different compositors, but individual data points tell much less. If someone runs my benchmark and reports that on their system, kwin can do 45 fps when running at a load factor of 8 on the blend benchmark, that is most representative of hardware differences and not of compositor code. The ratio of the offscreen framerate to the composited framerate at the “shoulder” where we drop off from 60fps might be a good number. If one compositor drops off from 60fps at an offscreen framerate of 90fps, but for a different compositor we have to increase the load factor so that the offscreen framerate is only 75fps at the shoulder, then that should be a mostly hardware independent result.
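The "shoulder" metric suggested above could be extracted along these lines; the data format and function name are illustrative assumptions, not part of xcompbench:

```python
# Sketch: find the offscreen frame rate at the "shoulder" - the first
# load factor where the composited frame rate falls off the refresh rate.
# Measurements are (load_factor, offscreen_fps, composited_fps) tuples.

def shoulder_offscreen_fps(points, refresh_fps=60.0, tolerance=1.0):
    for load, offscreen_fps, composited_fps in sorted(points):
        if composited_fps < refresh_fps - tolerance:
            # The compositor just dropped below the refresh rate; the
            # offscreen rate here is the (mostly hardware-independent)
            # figure of merit.
            return offscreen_fps
    return None  # never dropped below the refresh rate in this data
```

The tolerance parameter just absorbs measurement noise around 60fps; the interesting output is the offscreen frame rate at the drop-off point.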

It is also important to look at the effect of going from a “bare” compositor to a full desktop environment. The results above are with bare compiz, kwin, and mutter, and not with Unity, Plasma, or GNOME Shell. My testing indicates that GNOME Shell gives pretty similar results to bare mutter. Can I put numbers to that? Is the same true elsewhere?

And finally, how do we actually add proper synchronization instead of using the damage hack? I’ve done an implementation of an idea that came up a couple of years ago in a discussion between me and Denis Dzyubenko, and it looks promising. This blog post is, however, already too long to give more details at this point.

My goal here is that this is a benchmark that we can all use to figure out the right timing algorithms and get them implemented across compositors. At that point, I’d expect to see only minimal differences, because the basic work that every compositor has to do is the same: just copy the area that the application updated to the screen and let the application start drawing the next frame.

Test Configuration

Intel Core i5 laptop @ 2.53GHz, integrated Intel Ironlake graphics
KWin 4.6.3
Compiz 0.9.4
Mutter 3.0.2

Update: The sentence “why is the text benchmark considerably faster drawing offscreen than running uncomposited” was originally reversed. Pointed out by Benjamin Otte and fixed.