Benchmarking compositor performance

Recently Phoronix did an article about performance under different compositing and non-compositing window managers. GNOME Shell didn’t do that well, so lots of people pointed it out to me. Clearly a lot of work went into making the measurements for the article, but what it measures is a wide range of fullscreen 3D games across different graphics drivers, graphics hardware, and environments.

Now, if what you want to do with your Linux system is play 3D games, this is very relevant information, but it really says absolutely nothing about performance in general. That’s because the obvious technique to use when a 3D game is running is to “unredirect” the game – to let it display directly to the screen without interference from the compositor. Depending on configuration options, both Compiz and KWin will unredirect, while GNOME Shell currently doesn’t, so this (along with driver bugs) probably explains the bulk of the difference between GNOME Shell and the other environments.

Adel Gadllah has had patches for Mutter and GNOME Shell to add unredirection for over a year, but I’ve dragged my feet on landing them, because there were questions about when it is and isn’t appropriate to unredirect a window that I wasn’t sure we had fully answered. We want to unredirect fullscreen 3D games, but not necessarily all fullscreen windows. For example, a fullscreen Firefox window is much like any other window and can have separate dialog windows floating above it that need compositing manager interaction to draw properly.
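As a sketch of the kind of policy question involved, a decision function might look something like the following. The window and monitor types and the exact criteria here are hypothetical illustrations, not Mutter’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    width: int
    height: int

@dataclass
class Window(Rect):
    opaque: bool = True
    has_windows_above: bool = False

def should_unredirect(win: Window, monitor: Rect) -> bool:
    # Only a window covering the entire monitor can bypass the
    # compositor; anything smaller must be composited with its
    # neighbors.
    if (win.x, win.y, win.width, win.height) != \
       (monitor.x, monitor.y, monitor.width, monitor.height):
        return False
    # A translucent window needs blending with what's underneath it.
    if not win.opaque:
        return False
    # A fullscreen Firefox window with a dialog floating over it must
    # stay redirected so the dialog is drawn correctly.
    if win.has_windows_above:
        return False
    return True
```

The hard part isn’t the check itself but keeping it up to date as windows move, restack, and change opacity – that’s where the unanswered questions are.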

We should land some sort of unredirection soon to benefit 3D gamers, but really, I’m much more interested in compositing manager performance in situations where the compositing manager actually has to composite. So, that’s what I set out this week to do: to develop a benchmark to measure the effect of the compositing manager on application redraw performance.

Creating a benchmark

The first thing that we need to realize when creating such a benchmark is that the only drawing that matters is drawing that gets to the screen. Any frames drawn that aren’t displayed by the compositor are useless. If we have a situation where the application is drawing at 60fps but the compositor is only drawing at 1fps, that’s not a well-performing compositor, that’s a really badly performing compositor. Application frame rate doesn’t matter unless it’s throttled to the compositor frame rate.

Now, this immediately gets us to a sticky problem: there are no mechanisms to throttle application frame rate to the compositor frame rate on the X desktop. Any app that is doing animations or video, or anything else, is just throwing frames out there and hoping for the best. Really, doing compositor benchmarks before we fix that problem is just pointless. Luckily, there’s a workaround that we can use to get some numbers out in the short term – the same damage extension that compositors use to find out when a window has been redrawn and has to be recomposited to the screen can also be used to monitor the changes that the compositor is making to the screen. (Screen-scraping VNC servers like Vino use this technique to find out what they need to send out over the wire.) So, our benchmark application can draw a frame, and then look for damage events on the root window to see when the drawing they’ve done has taken effect.
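The measurement side of that idea reduces to simple bookkeeping: record the time of each damage event seen on the root window, and the rate of those events is the frame rate that actually reached the screen. A minimal sketch of that logic, with hypothetical timestamps standing in for real Damage events:

```python
def composited_frame_rate(damage_times):
    """Frame rate actually reaching the screen, computed from the
    timestamps (in seconds) of damage events observed on the root
    window. This is just the bookkeeping; the real benchmark gets
    these timestamps from the X Damage extension."""
    if len(damage_times) < 2:
        return 0.0
    elapsed = damage_times[-1] - damage_times[0]
    # N timestamps bracket N-1 frame intervals.
    return (len(damage_times) - 1) / elapsed
```

For example, damage events arriving once every 1/60th of a second yield a composited rate of 60fps, no matter how fast the application itself is submitting frames.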

This looks something like:

The picture above shows a back-buffer to front-buffer copy that creates damage immediately but is performed asynchronously during the vertical blanking interval. The MESA_copy_sub_buffer GL extension basically does this, with the caveat that (for the Intel and AMD drivers) it can entirely block the GPU while waiting for the blank.

I’ve done some work to develop this idea into a benchmark I’m calling xcompbench. (Source available.)

Initial Results

Below is a graph of some results. What is shown here is the frame rate of a benchmark that blends a bunch of surfaces together via cairo as we increase an arbitrary “load factor” which is proportional to the number of surfaces blended together. Since having only one window open isn’t normal, the results are shown for different “depths”, which are how many xterms are stacked underneath the benchmark window.

Compositor Benchmark (Cairo Blending)

So, what we see above is that if we are drawing to an offscreen pixmap, or we are running with metacity and no compositing, frame rate decreases smoothly as the load factor increases. When you add a compositor, things change: if you look at the solid blue line for mutter you see the prototypical behavior – the frame rate pins at 60fps (the vertical refresh rate) until it drops below it, then you see some “steps” where it preferentially runs locked to integral fractions of the frame rate – 40fps, 30fps, 20fps, etc. Other things seen above: kwin runs similarly to mutter with no other windows open, but drops off as more windows are added, while mutter and compiz are pretty much independent of the number of windows. And compiz is running much slower than the other compositors.
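The stairstep shape falls out of a simple model: once the application is throttled to the compositor, a frame that takes longer than one refresh interval has to wait for the next vblank, so the rate locks to refresh/1, refresh/2, refresh/3 and so on. This is a toy model, not any compositor’s actual scheduling code – it predicts steps at 60, 30, 20, 15fps, and intermediate steps like the observed 40fps come from frames alternating between one and two intervals, which the model ignores:

```python
import math

def throttled_fps(offscreen_fps, refresh=60.0):
    """Composited frame rate when every frame waits a whole number of
    refresh intervals. offscreen_fps is how fast the application could
    draw if it weren't throttled to the compositor."""
    if offscreen_fps >= refresh:
        return refresh
    # How many whole refresh intervals each frame occupies.
    intervals = math.ceil(refresh / offscreen_fps)
    return refresh / intervals
```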

Since the effect of the compositor on performance depends on what resources the compositor and application are competing for, it clearly matters what resources the benchmark is using – is it using CPU time? is it using memory bandwidth? is it using lots of GPU shaders? So, I’ll show results for two other benchmarks as well. One draws a lot of text, and another is a simple GL benchmark that draws a lot of vertices with blending enabled.

Compositor Benchmark (Text Drawing)

Compositor Benchmark (GL Drawing)

There are some interesting quirks there that would be worth some more investigation – why is the text benchmark considerably faster drawing offscreen than running uncomposited? why is the reverse true for the GL benchmark? But the basic picture we see is the same as for the first benchmark.

So, this looks pretty good for Mutter right? Well, yes. But note:

It’s all about Timing

The reason Compiz is slow here isn’t that it has slow code, it’s that the timing of when it redraws goes wrong with this benchmark. The actual algorithm that it uses is rather hard to explain, as are the ways it interacts badly with the benchmark, but to give a slight flavor of what might be going on, take a look at the following diagram.

If a compositor isn’t redrawing immediately when it receives damage from a client, but is waiting a bit for more damage, then it might wait too long and miss the vertical blanking interval entirely. Then the frame rate could drop way down, even if there was plenty of CPU and GPU available.
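To make that concrete, here is a toy model (not Compiz’s actual repaint scheduler) of what a fixed delay between receiving damage and repainting costs: as soon as the delay pushes the repaint past a vblank, a whole extra refresh period is lost.

```python
import math

def fps_with_repaint_delay(delay_ms, refresh=60.0):
    """Effective frame rate if the compositor waits delay_ms after each
    damage event before repainting, and a repaint that misses the
    current vblank slips to the next one."""
    period_ms = 1000.0 / refresh
    # Whole refresh periods consumed before the frame can be scanned
    # out: waiting past a vblank costs an entire extra period.
    periods = math.floor(delay_ms / period_ms) + 1
    return refresh / periods
```

At 60Hz (a 16.7ms period), a 5ms wait is harmless – the rate stays at 60fps – but an 18ms wait halves it to 30fps, with CPU and GPU sitting idle the whole time.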

Future directions

One thing I’d like to do is extract a more compact set of numbers. The charts above clearly represent relative performance between different compositors, but individual data points tell much less. If someone runs my benchmark and reports that on their system kwin can do 45 fps at a load factor of 8 on the blend benchmark, that mostly reflects hardware differences, not compositor code. The ratio of the offscreen framerate to the composited framerate at the “shoulder” where we drop off from 60fps might be a good number. If one compositor drops off from 60fps at an offscreen framerate of 90fps, but for a different compositor we have to increase the load factor so that the offscreen framerate is only 75fps at the shoulder, then that should be a mostly hardware-independent result.
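Extracting that number from a benchmark run could look like the following sketch. This metric isn’t implemented in xcompbench; the data layout is hypothetical:

```python
def shoulder_ratio(points, refresh=60.0):
    """points: (offscreen_fps, composited_fps) pairs ordered by
    increasing load factor. Returns the offscreen frame rate at the
    'shoulder' -- the first load factor where the composited rate
    drops below the refresh rate -- divided by the refresh rate.
    Higher means less compositor overhead."""
    for offscreen_fps, composited_fps in points:
        if composited_fps < refresh:
            return offscreen_fps / refresh
    return None  # never dropped below the refresh rate in this run
```

With the numbers from the example above, a compositor whose shoulder sits at an offscreen rate of 90fps scores 1.5, while one whose shoulder is at 75fps scores 1.25.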

It is also important to look at the effect of going from a “bare” compositor to a full desktop environment. The results above are with bare compiz, kwin, and mutter, and not with Unity, Plasma, or GNOME Shell. My testing indicates pretty similar results for bare mutter and the full GNOME Shell desktop. Can I put numbers to that? Is the same true elsewhere?

And finally, how do we actually add proper synchronization instead of using the damage hack? I’ve done an implementation of an idea that came up a couple of years ago in a discussion between Denis Dzyubenko and me, and it looks promising. This blog post is, however, too long already to give more details at this point.

My goal here is that this is a benchmark that we can all use to figure out the right timing algorithms and get them implemented across compositors. At that point, I’d expect to see only minimal differences, because the basic work that every compositor has to do is the same: just copy the area that the application updated to the screen and let the application start drawing the next frame.

Test Configuration

Intel Core i5 laptop @2.53GHz, integrated Intel Ironlake graphics
KWin 4.6.3
Compiz 0.9.4
Mutter 3.0.2

Update: The sentence “why is the text benchmark considerably faster drawing offscreen than running uncomposited” was originally reversed. Pointed out by Benjamin Otte and fixed.

What does the user see?

As a long-time GNOME module maintainer and as a team lead within Red Hat, I often get people coming to me for advice about some technical issue or another. And no matter the issue, there’s one question that I’ll almost always end up asking at some point: “what does the user see?” Code, APIs, protocols are all just means to the end-user experience. Discussion of the future of GNOME should also start with what the user sees.

Mark argues that GNOME should be a place where we have internal competition. But his idea of internal competition seems to be competition between different end-user experiences. His entrant into the competition is Unity, an environment with a user experience designed completely in isolation from GNOME. The other entrant would, I suppose, be the GNOME 3 desktop that GNOME has created.

This competition doesn’t make sense to me: what would be left of GNOME if Unity “won” that competition? Not even the libraries are left, because every decision about what goes into a library should be driven by that same question: “what does the user see?” No widget should go into GTK+ unless it makes sense in a GNOME application. GNOME cannot cede the user experience and still work as a project.

The sort of internal competition I’d like to see within GNOME is competition of ideas. Competition of mockups and prototypes, and even entire applications. We know that we need better file management within the GNOME Activities Overview for 3.2. Is that organized as a timeline? Does it involve tagging? Is it best implemented with Zeitgeist? With Tracker? With both? Those are things that are still open, and the more people that are working on different designs and approaches, the better off the final result will be.

The basic constraint of any sort of internal competition within GNOME is that you have to be willing for some of your ideas to win and some of your ideas to lose. If you are starting out with the premise that you have complete final control over the user experience, then you aren’t working on GNOME, you are working on something else. So far, this seems to be the approach of Canonical. In the past, they took GNOME, modified it, and presented the modified result to their users. Now they are taking some GNOME libraries, building a new desktop on top of that, and presenting that to their users. But I’ve never seen Canonical make the leap and realize that they could actually dive in and make GNOME itself better.

Diving in means a commitment – it means fighting for your ideas at every step of the way, from the design level, to figuring out how the code pieces fit together, to the line-by-line details of the code. But the thing about open source is that the more you engage at this level with a project, the more you win. You become more in sync with the other contributors about end goals. You learn how to communicate with them better. And soon enough you no longer think of yourself as an outsider. You are just another insider.

Make no mistake: I’m very sad to see further splintering of the Linux desktop. I think GNOME 3 is going to be amazing, but how much more amazing could it have been if the design and coding talent that is going into Unity could have been pooled with the work being done inside GNOME? An application developer can create an application that works both within GNOME and within Unity, but we’re adding extra barriers to the task of creating an application for Linux. That’s already far too hard.

No matter what happens, all desktops on Linux need to continue to work together to try and provide as much cross-desktop compatibility as possible. But we have to realize the limits of freedesktop.org specifications and standards. Many of the early successes of freedesktop.org were in places where there was broad user interface consensus. Drag-and-drop of text from one application to another made sense in all toolkits, so we made it work between toolkits. But if there isn’t consensus on the user experience, then the specification isn’t that useful.

For example, appindicators start off with the proposition that any application should be able to create an icon with a drop-down menu and make it a permanent part of the desktop. (I’m simplifying somewhat – the original Status Notifier specification leaves the user experience quite unspecified, but that’s the way that Canonical was using the specification.) If you don’t have that user interface concept, it’s not clear how the spec helps. So that’s what made the Canonical proposal of libappindicator strange. They didn’t engage with GNOME to make the user interface idea part of future designs. They didn’t even propose changes to core GNOME components to support application indicators. They showed up with a library that would allow applications to display indicators in a modified GNOME desktop, and proposed that GNOME should bless it as a dependency.

(From the GNOME Shell side we were never considering whether appindicators were useful for their original designed purpose; we were considering whether they were a useful way to implement the fixed system icons we had in the GNOME Shell design. In the end, it seemed much simpler to just code the fixed system icons, and I think that decision has been supported in how things have turned out in practice. We’ve been able to create system icon drop-downs that match our mockups and use the full capabilities of the shell toolkit without having to figure out how to funnel everything over a D-Bus protocol.)

So, by all means, we should collaborate on standards, but we can’t just collaborate on standards for the sake of collaborating on standards. We have to start off from understanding what the user sees. Once we understand what the user sees, if there’s a place to make an application written for one environment work better in another environment, that’s a place where standardization is useful. Of course, the more that designers from different environments exchange ideas and go down similar user interface paths, the more opportunity there will be for standards.

Is collaboration on standards and on bits of infrastructure, and friendly exchange of UI ideas, the way forward for GNOME and Unity? Are they completely separate desktops? Perhaps it’s the only feasible way forward at this point, but it certainly doesn’t make me happy. Mark: any time you want to discuss how we can work together to create a single great desktop, we’re still ready to talk. Design choices, technological choices, the channels and ways we communicate in GNOME, are all things we can reconsider. The only thing to me that’s untouchable is the central idea that GNOME is ultimately about the experience of our users.

Setting Goals for GNOME

Often in GNOME, we think of goal setting as something that we can leave up to the board, or up to the marketing team – an appearance of direction that we layer on top of what we are really working on. This is obviously backwards: everybody in GNOME should consider the goals of GNOME to be their business. I led a session Sunday morning at the Boston GNOME Summit to try to get some broader brainstorming going about where we want to go with GNOME. So, I wanted to write up both how I set up the discussion and some of the ideas that came out.

Why do we need goals for GNOME? Goals inspire us. They are great tools for recruiting contributors of all types. They allow us to create compelling marketing materials that explain to users what is significant about what we are creating and where we are going. And importantly, they drive decisions – they let us choose between path A and path B. This leads us to what makes a good goal: a good goal is motivational – it can inspire. It’s realistic – it has to be achievable. And it is concrete enough to let you make decisions.

We can look at how some past GNOME goals fit into this framework. The most famous explicitly stated goal was the 10×10 goal: 10% market share by 2010. It was very catchy and memorable. But even from the start realism was a huge question mark. And worse than that, it really didn’t help answer what we should be doing. By contrast, the goal of the early years of GNOME, though it was never explicitly stated, was to provide a free software replacement for Windows. Not nearly as neat-sounding a goal, but when you line it up against the criteria above it actually stacks up well. At that time Windows was the big barrier to putting users in control of their software through Free Software, so people were motivated to work on replacing it. The goal was realistic – we eventually achieved a lot of it. And it gave us lots of concrete tasks to work on. Things have moved on, but it was an effective goal for that time.

Any sort of exploration of goals for GNOME involves some idea of what GNOME fundamentally is. A phrase I think captures it: “GNOME is a community of people building Free Software for users”. The direction of GNOME is set by the people working on it as individuals, not the companies that might be sponsoring some of that work. GNOME is strongly committed to Free Software, not as a temporary strategy but as a fundamental principle. And we’re not building toys for ourselves, or creating technology masterpieces for their own sake, we are trying to make users’ lives better.

Within that broad set of parameters, we really have the option to do anything. We shouldn’t feel constrained by the set of things we do currently. Another thing to keep in mind is that the computing space is mind-bogglingly big these days. We don’t need to dominate even one segment of computing to be a big and successful project. But what we do need to do is create something that’s really great for the people we do touch: something that meets their needs and makes a portion of their day better. And that means direct influence over the user experience. It’s pretty hard to build something that is great for users if you are just building components that other people take and re-purpose. It’s also pretty hard to be great for users if we’re just a small slice of the total experience. To be concrete: if we’re just the stuff around the edges of the web browser, and the web browser is a tool to look at Facebook, and the user is looking at Facebook on their phone most of the time anyway, then that’s not an experience we can do a lot to make better. We need to engage with the user beyond traditional “computers” and beyond the local application.

I finished my intro with a question. The user actually gets big benefits by giving all their searches and documents and mail over to Google, and their social interactions over to Facebook. While the downsides of centralizing your data under someone else’s control – being able to do only the things with that data that they want to let you do – may be obvious, we can’t pretend that this is merely a trap for the unwary and that smart users will keep everything locally. How do we, as GNOME, enable an experience that is both under the user’s control and also as good or better than the experience they can get by giving up that control?

In my next post I’ll describe some of the ideas that came out of the brainstorming session.

GNOME Shell GUADEC wrap-up

Unless you’ve been hiding under a rock, if you are a GNOME contributor, you probably know the big news from GUADEC: we decided to push GNOME 3 back another six months. I obviously would prefer if this wasn’t necessary – it feels like we’ve been working on GNOME Shell for a long time now, and it would be good to get something into users’ hands. (It’s been 19 months or so since we announced the project and wrote the first code.) But it was definitely the right decision: it will give us the time to make GNOME 3 really solid, rather than pushing something not quite finished out the door.

The GNOME Shell team did 3 talks:

  • I gave a presentation where I looked at what we’ve done in the last year and where we are currently: The State of the GNOME 3 Shell.
  • Jon talked about the big ideas behind GNOME Shell design, then together with Jakub gave a peek at some of the work they’ve been doing recently: Shell Yes!.
  • Colin and Dan talked about how to make your application rock with GNOME 3: GNOME 3 and Your Application

Unfortunately, the videos for the talks aren’t yet posted anywhere. So, you’ll have to figure out what you can from slides. This may be pretty hard for my talk – a lot of the slides are just screenshots comparing where we were a year ago to where we are today. But Colin and Dan’s talk has notes in the slides, and Jon’s talk has a video mockup of upcoming shell design changes.

Some media coverage: You can hear me talking with Fabian Scherschel about the shell in the latest Linux Outlaws podcast, and if you read German there’s an article by Andreas Proschofsky at derstandard.at.

We had a good crowd of shell people at the conference; there were 6 or so of us there from Red Hat, and beyond that I was really happy to meet Florian Muellner and Maxim Ermilov who have been responsible for much of the progress in the shell over the last year. (I’d tell you what they’ve done here, but it would make this post too long.) Our two summer of code students were also there: Christina Boumpouka who is working on adding CSS support to LookingGlass, and Matt Novenstern who is doing a bunch of improvements to the message tray.

I talked to the Tracker crowd a bit about integration of Tracker with the shell. I mentioned a couple of areas where we could use some help on the Tracker side: we need more notifications, so we don’t have to constantly requery to show the user all their files and no deleted files, and someone needs to take care of pushing files that the user actually uses into Tracker for indexing, whether or not they are in the directories that Tracker automatically indexes. But basically it’s a question of finding someone with the time to sit down, do the work, and implement the designs we have. The shell needs a way to search files and file metadata, and Tracker is the obvious leader in this area for GNOME.

I also talked a bit about remerging St and Mx with Chris Lord and Emmanuele Bassi, though somehow I missed catching up with Thomas Wood. We’re all really positive on the idea – there’s been a lot of complementary development on the two codebases. (The way I explained it in my talk is that St has powerful CSS support, which we need for the one-off shell UI, while Mx has far more widgets, which they need for the wider range of things they use Mx for in Meego.) There’s definitely some devil in the details – we’re definitely not done with every toolkit enhancement we need for the shell, for one thing – but hopefully we can make it happen soon.

Øyvind Kolås gave an awesome demo of ClutterSmith. This had the GNOME Shell designers jumping up and down to get it yesterday – enough with Inkscape and Blender. Ideally, we could have a live mockup of the shell in ClutterSmith, using the same images and CSS files as the real shell, and the designers could use that to experiment with new visual changes and motion design.

On the plane home, I started hacking on new shadow code for Mutter that can handle variable shadow radii and shaped shadows. It’s still a work in progress, but since every blog post needs a picture:

(This is a test program using images for windows; the ugly window borders are ugly because that’s what I could draw in the GIMP quickly. On the other hand, the shadow banding is a defect in my code that I’m still working on.)

Measuring GNOME Shell Performance

One of the big goals of the GNOME 3 Shell is to use animation and visual effects for positive good. An animation explains to the user what the connection is between point A and point B. For this to work, the animation has to be smooth – it can’t be a jerky sequence of disconnected frames. Performance matters.

Over the last 18 months we’ve done a fair bit of work on performance – everything from fixing problems with AGP memory caching in the radeon kernel drivers to moving tests for whether recent files are still there to a separate thread. But this work was ad-hoc and unsystematic. There was no way to see if shell performance got better or worse over time, to compare the performance of two different systems, or even to tell in a rigorous way whether an optimization that seemed to make sense actually improved things. Over the last few weeks I’ve been working to correct this: to get hard, repeatable numbers that represent how well the shell is performing.

The core of the GNOME Shell performance framework is the event log. When event logging is enabled, all sorts of different types of events are logged: when painting of a frame begins, when painting of a frame ends, when the user enters the overview, and so forth. The log is stored in a compact format (as little as 6 bytes per event), so it can be recorded with very little overhead; it doesn’t significantly affect performance.
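The post doesn’t spell out the on-disk layout, but one plausible way to fit an event into 6 bytes is a 16-bit event id plus a 32-bit microsecond delta from the previous event. This sketch illustrates that kind of packing; it is not gnome-shell’s actual format:

```python
import struct

# Hypothetical record layout: 2-byte event id + 4-byte microsecond
# delta from the previous event = 6 bytes per event.
EVENT = struct.Struct("<HI")

def encode_event(event_id, delta_us):
    return EVENT.pack(event_id, delta_us)

def decode_event(buf, offset=0):
    # Returns an (event_id, delta_us) tuple.
    return EVENT.unpack_from(buf, offset)
```

Storing deltas rather than absolute times is what keeps the timestamp field small: events are frequent, so the gap between consecutive ones fits comfortably in 32 bits.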

The other thing that is recorded in the event log is statistics. A statistic is some measurement about the current state: for example, how many bytes of memory have been allocated. Every few seconds, registered statistics are polled and written to the event log as a special event type. Statistics collection can also be triggered manually.

Once we have an event log recorded, we can analyze it to determine metrics. We can measure the latency between clicking on the activities overview button and the first frame of the zoom animation. We can see how many bytes are leaked each time the user goes to the overview by comparing the memory usage before and after. Since we want to measure exactly the same conditions every time, we don’t want to analyze a performance log generated by the user actually doing stuff; instead we script the operation of the shell from Javascript. You can see how this looks by looking at the run() function in js/perf/core.js. The rest of this performance script contains the logic to compute the metrics when the recorded event log is replayed. (For example, the script_overviewShowDone() function is called when a script.overviewShowDone event is replayed.)
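The shape of such a metric computation is simple. Here is a sketch that computes an overview latency from a replayed list of (timestamp, name) events – the event names are made up for illustration; the real ones live in js/perf/core.js:

```python
def overview_latency_us(events):
    """Microseconds from the scripted overview trigger to the first
    frame painted afterwards. events is an iterable of
    (timestamp_us, name) pairs in log order."""
    trigger_time = None
    for timestamp_us, name in events:
        if name == "script.overviewShowStart":
            trigger_time = timestamp_us
        elif name == "paint.done" and trigger_time is not None:
            return timestamp_us - trigger_time
    return None  # the trigger never appeared, or no frame followed it
```

Because the replay is driven by a script rather than by a user, the same sequence of events occurs on every run, so numbers like this are comparable across builds and machines.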

Running gnome-shell --replace --perf=core produces a summary of the computed metrics that looks, in part, like:

# Additional malloc'ed bytes the second time the overview is shown
leakedAfterOverview 83200
# Time to first frame after triggering overview, first time
overviewLatencyFirst 192482
# Time to first frame after triggering overview, second time
overviewLatencySubsequent 66838

(The times are in microseconds.) Being able to get these numbers for a particular build of the shell is good, but what we really want to be able to do is compare these numbers over lots of systems over time. So, there’s also a website shell-perf.gnome.org where statistics can be uploaded.

(The way that uploading works is that after registering a system on the site, you are mailed instructions about how to create a ~/.config/gnome-shell/perf.ini with an appropriate secret key, and the --perf-upload option uploads the report. Please only do this with Git builds of gnome-shell for now – there are some updates to the metrics even since the 2.29.2 release yesterday.)

If you browse around the site, you’ll see that you can look at the recorded metrics for different systems or for an individual system over time. You can also look at a specific uploaded report. An example:

shell-perf.gnome.org: detailed view of a performance report

I should point out, since it’s not very obvious, that navigation to individual reports is by clicking on the report headers in the system view. In the report view, you can see the details of the uploaded metrics. But you also can see the entire original event log! (The event log browser is using the HTML canvas – I’ve tested it in Firefox and Epiphany – it probably works in most browsers that GNOME developers are likely to be using.) Having the event log means that if an anomalous performance report is uploaded we can actually dig in and see more about what’s going on.
