Moving x86 video drivers into the kernel

Since Keith's recent talk on X, I've been asked a few times - and seen discussion elsewhere, such as in the comments on the linked article - what Sun thinks about moving various portions of the Xorg video drivers from the current userspace shared objects into true kernel drivers. Most people assume this means Linux kernel drivers, reducing the portability of Xorg to non-Linux platforms such as Solaris/OpenSolaris and the BSDs.

While I'm of course too far down the food chain to claim to speak officially for Sun, as the lead engineer for the Xorg integration in Solaris, and as someone who has talked to a number of the other Solaris engineers about it, I'd say the Solaris engineering teams agree that having in-kernel drivers for video cards is something we want and need.

On Solaris, we've long had in-kernel graphics drivers for all our SPARC frame buffers, since that was a small and controlled set, and writing the driver was simply part of the cost of producing the card. On the x86 side, though, we had for years the same model as most other x86 Unix-like OSes: a generic set of kernel modules, with all device-specific logic in the X server. The Solaris x86 kernel modules were vgatext and xsvc. vgatext provides the text console, using the standard (and ancient) VGA interfaces that virtually all PC graphics boards support. xsvc (which was closed source due to legal encumbrances, but has recently been replaced with a newly written open source version by the Solaris Xen project team) provides the mappings to the PCI bus that allow Xorg to probe the bus to find devices, and then map in the video card once it's found.

Recently, new kernel modules have started appearing in Solaris - the first two for 3-D rendering acceleration: NVIDIA's driver and the DRI driver for Intel graphics. Two more are coming soon from the Suspend-and-Resume project, providing the hooks to save and restore the necessary state across system suspends - they'll initially support the ATI Rage XL in the Sun Ultra 20 and the ATI ES-1000 in the Ultra 20 M2. (Dave and Jay gave a talk on their implementation at last year's X Developer's Conference.)

Additionally, our security guys have been pushing for more kernel frame buffer drivers to reduce our need to run Xorg as root. They'd also like to find a way to close the now-infamous Ring 0 security weakness. (We've invited them to drop in on this week's X.Org Developer's Conference - they're planning to show up Thursday morning for the Secure X talk, so we may try to have a breakout session on this topic then.)

Now, we still can't write drivers ourselves for every x86 video card on the market - one of the big reasons we chose to move from Xsun to Xorg on Solaris - so we'd still prefer a model that allows sharing drivers between Solaris, Linux, and the BSDs (which means agreeing on both licensing and interfaces with the kernel), similar to what we already have with DRI. If we can work out a model like that, then I think Sun would support it. Either way, we're going to need more graphics driver functionality in the kernel, and sharing would be better than going our own way.



A common video driver framework was once attempted by KGI. Some brave soul ported it to FreeBSD relatively recently and made it truly OS-independent.

Unfortunately, those three letters will very likely spark flamewars on several lists, but maybe its design, or parts thereof, could be adapted in whatever succeeds it?

Posted by Patrick Georgi on February 04, 2007 at 11:04 PM PST #

Moving to in-kernel drivers has a lot of interaction with the kernel console system. I wrote up my thoughts on this in the console section of this paper.

If this is done for Linux, I would recommend compiling out the current VT system (easy to do from Kconfig) and designing a new system that appears identical to the user but doesn't have the nasty state-sharing problems of the current VT implementation. There are many other issues to consider: boot console, i18n, multiuser, DRM, the existing fbdev, MergedFB, etc.

I am now involved with an embedded-device startup, so I don't have time to contribute anymore.

Posted by Jon Smirl on February 05, 2007 at 12:16 AM PST #

I've often wondered about the possibility of handling video cards similarly to audio devices: i.e., have cards attach to /dev/ati0, /dev/nv0, /dev/video, or what have you, and implement a basic drawing API/protocol similar to something one would find in X. Anyone writing a kernel video driver would then wind up implementing simple primitives like line(x0,y0,x1,y1), rect(...), rectfill(...), etc., for that specific hardware. On the other side, one could just specify "dummy" or similar as the X driver - an X server driver that simply passes the screen, in terms of said primitives, along to /dev/video.
For the 2D case it sounds like it could be pretty simple. This fits the Unix idea of "everything is a file" that I personally love so much, and could make things like debugging pretty simple: to debug the dummy X driver, one could just direct the output to a file instead of /dev/video and inspect it manually. Similarly, to debug all these new kernel video drivers, people might just cat/echo commands and primitives to /dev/video, were the API/protocol designed to be that simple. As for keeping things human-readable, if string handling in the kernel turned out to be too much of a pain, fixed-length hex strings could replace literal strings. I can't imagine it would be too hard to validate commands and arguments in the kernel for security; I don't know about speed. The 3D case sounds more complex - could one just pass GL down to cards that support it? It sounds like some of the GL X servers may have already solved some of these problems. Things like shader languages sound like they would get even uglier.
This would also provide a simple migration strategy from userland drivers to kernel drivers: once a project implemented the kernel driver for your video card, you could just switch the driver in your xorg.conf from the Xorg driver to the dummy driver to work with the kernel driver.
This would also allow each project to use whatever license it wanted when implementing its own kernel drivers.

Posted by Thoren on February 05, 2007 at 06:32 AM PST #

The problem with such an interface is that those cards are too different.

Some might provide a fillrect(x1,y1,x2,y2,rgba,rgba_border,border_width) function; others might only have a fillrect that fills RGB without borders - but they could provide a different way to get the same result very quickly (e.g., two RGBA triangles and four RGBA lines).

Different cards might want the data in different representations - converting everything in the kernel (after it was probably already converted once in the X server on top) will slow things down and won't improve security, to say the least.

As for OpenGL, very few cards actually implement its state machine in hardware - most drivers do extensive optimizations on the data (sometimes specific to the way a given card works) as well as state management.

As for state, a graphics context is a huge beast, especially if you include 3D. Either the driver supports only a single client at a time, or it has to take care of that itself.

For such reasons, KGI only provided resource management in the kernel module, and put the accelerated driver that makes use of it in a common graphics library (libggi). That way, the kernel module stays relatively simple but specific to the card (so it can validate some critical data points), while the advanced driver logic stays in userspace, separate from any specific application.

Posted by Patrick Georgi on February 13, 2007 at 09:39 PM PST #


Engineer working on Oracle Solaris and with the X.Org open source community.


The views expressed on this blog are my own and do not necessarily reflect the views of Oracle, the X.Org Foundation, or anyone else.
