Whither USENIX?

As I mentioned earlier, I recently returned from USENIX '04, where we presented the DTrace paper. It was a little shocking to me that our paper was the only paper to come exclusively from industry: most papers had no industry connection whatsoever, and the papers that had any authors from industry were typically primarily written by PhD students interning at industry labs. The content of the General Session was thus academic in the strictest sense: it consisted of papers written by graduate students, solving problems in systems sufficiently small to be solvable by a single graduate student working for a reasonably short period of time. The problem is that many of these systems -- to me at least -- are so small as to not be terribly relevant. This is important because relevance is sufficiently vital to USENIX to be embodied in the Mission Statement: USENIX "supports and disseminates research with a practical bias." And of course, there is a more pragmatic reason to seek relevance in the General Session: most of the attendees are from industry, and most of them are paying full-freight. Given that relevance is so critical to USENIX, I was a little surprised that -- unlike most industry conferences I have attended -- there was no way to provide feedback on the General Session. How does the Steering Committee know if the research has a "practical bias" if they don't ask the question?

This leads to the more general question: how do we keep the "practical bias" in academic systems research? Before I try to answer that directly, it's worth looking at the way research is conducted by other engineering disciplines. (After all, one of the things that separates systems from the rest of computer science is its relative proximity to engineering.) To me, it's very interesting to look at the history of mechanical engineering at MIT. In particular, note the programs that no longer exist:

  • Marine engineering, stopped in 1913
  • Locomotive engineering, stopped in 1918
  • Steam turbine engineering, stopped in 1918
  • Engine design, stopped in 1925
  • Automotive engineering, stopped in 1949

Why did these programs stop? It's certainly not because there weren't engineering problems left to solve. (I can't imagine that anyone would argue that a 1949 V8 was the ne plus ultra of internal combustion engines.) This is something of an educated guess (I'm not a mechanical engineer, so I trust someone will correct me if I'm grossly wrong here), but I bet these programs were stopped because the economics no longer made sense: it became prohibitively expensive to meaningfully contribute to the state of the art. That is, these specialties were so capital- and resource-intensive that they could no longer be undertaken by a single graduate student, or even by a single institution. By the time an institution had built a lab and the expertise to contribute meaningfully, the lab would be obsolete and the expertise would have graduated. Moreover, the disciplines were mature enough that there was an established industry that understood that research begat differentiated product, and differentiated product begat profit. Industry was therefore motivated to do its own research -- which is a good thing, because only industry could afford it.

And what has happened to, say, engine design since the formal academic programs stopped? Hard problems are still being solved, but the way those problems are solved has changed. For example, look at the 2001 program for the Small Engine Technology Conference. A roughly typical snippet:

  • G.P. BLAIR - The Queen's University of Belfast (United Kingdom)
    D.O. MACKEY, M.C. ASHE, G.F. CHATFIELD - OPTIMUM Power Technology (USA)
    Exhaust pipe tuning on a four-stroke engine; experimentation and simulation

  • G.P. BLAIR, E. CALLENDER The Queen's University of Belfast (United Kingdom)
    D.O. MACKEY - OPTIMUM Power Technology (USA)
    Maps of discharge coefficient data for valves, ports and throttles

  • V. LAKSHMINARASIMHAN, M.S. RAMASAMY, Y. RAMACHANDRA BABU TVS-Suzuki (India)
    4 stroke gasoline engine performance optimization using statistical techniques

  • K. RAJASHEKHAR SWAMY, V. HARNE, D.S. GUNJEGAONKAR TVS-Suzuki (India)
    K.V. GOPALKRISHNAN - Indian Institute of Technology (India)
    Study and development of lean burn systems on small 4-stroke gasoline engine

Note that there's some work exclusively by industry, and some work done in conjunction with academia. (There's some work done exclusively by academia, too -- but it's the exception, and it tends to be purely analytical work.) As for the Program Committee for this conference: three are clearly academics, and seven are clearly from industry.

Okay, so that's one example of how a traditional engineering discipline conducts joint academic/industrial research. Let's get back to USENIX with a look at the Program Committee for USENIX '05. Note that the mix is exactly the inverse: twelve work for a university and five work for a company. Worse, of those five putatively from industry, all of them work in academic labs. In our industry, these labs have a long tradition of being pure research outfits -- they often have little-to-no product responsibilities. (Which, by the way, is just an observation -- it's not meant to be a value judgement.)

Even more alarming, the makeup of the FREENIX '05 program committee is almost completely from industry. This leads to the obvious question: is FREENIX becoming some sort of dumping ground for systems research deemed to be too "practically biased" for the academy? I hope not: aside from the obvious problem of confusing research problems with business models, having the General Session become strictly academic and leaving the FREENIX track to become strictly industrial effectively separates the academics from the practitioners. And this, in my opinion, is exactly what we don't need...

So how do we keep the "practical bias" in the academic work presented at USENIX? For starters, industry should be better represented at the Program Committee and on the Steering Committee. In my opinion, this is most easily done by eliminating FREENIX (as such), folding the FREENIX Program Committee into the General Session Program Committee, and then having an interleaved "practical" track within the General Session. That is, alternate between practical sessions and academic ones -- forcing the practitioners to sit through the academic sessions and vice versa.

That may be too radical, but the larger point is that we need to start having an honest conversation: how do we prevent USENIX from becoming irrelevant?

Comments:

For me, much of the value of USENIX has always been the cross-fertilization that happens when you mix industry and academia -- smart people focusing rigorously on real-world problems. Without this mix, USENIX would certainly lose much of its appeal. I see two big challenges to maintaining the "practical bias" of USENIX -- the different incentives in academia and industry, and the catch-22 of academic publishing.

In academia, publishing papers is strongly encouraged. Graduate students need publications to get jobs. Junior faculty need publications to get tenure. Tenured faculty need publications to get grant money. In contrast, most employers discourage their employees from taking the time to write up their work for publication (never mind the time commitment of serving on a program committee or the USENIX board).

Another challenge for getting industry papers accepted is the catch-22 of academic publishing. If the paper review process isn't rigorous (by academic standards), then papers published at USENIX won't carry much weight with tenure committees and the folks giving out grant money. Without that, the academics won't submit papers. But these standards act as a barrier to entry for people outside of academia who haven't been trained in the forms of presentation and analysis that academic program committees look for.

In light of these challenges, I think the FREENIX track at USENIX is a good thing. Because it doesn't require that papers live up to the standards of the academic community, it is a good forum for presenting interesting new tools and systems. Prior to this year, the FREENIX track ran concurrently with the general (academic) track, allowing people to move back and forth between the sessions.

Finally, if you don't like the direction USENIX is going, get involved and do something about it. They want to know what they can do to attract more attendees and bring in more revenue. Better yet, work from within. Go to the conference as often as you can. Get to know the regulars, the folks who publish there, serve on program committees, etc. Encourage your colleagues to submit papers; the program committee can't publish what it doesn't receive. (Where are the Zones and ZFS papers?) Volunteer to review submissions for next year's conference. Volunteer to run a conference. The more people know that you're intelligent, knowledgeable, interested, and willing to work, the more likely you'll be invited to serve on program committees.

Posted by Keith Smith on July 06, 2004 at 10:52 AM PDT #

I've read about this before. It was in an IEEE Spectrum article from September 2003, in the Reflections section, called "Is Industrial Research an Oxymoron?". The author, Robert W. Lucky, said that we are in danger of all conference papers and research publications coming exclusively from universities. He said that the industry labs are being closed and that research in academia is too focused on theoretical questions; practical research is nearing its end. I suggest that you, and all readers, look for this article and read it. (sorry for my poor english, I'm working on it ;)

Posted by José María Ruiz on July 06, 2004 at 05:34 PM PDT #

[Trackback] Today I skipped through PlanetSUN and discovered an interesting article about the relationship between industry and university....

Posted by arved's weblog on July 06, 2004 at 07:24 PM PDT #

As a graduate of MIT with a bachelors and a masters degree, I wanted to weigh in briefly on your comment about the discontinuance of certain engineering disciplines there (based on my experience). While your explanation for these degree programs being stopped may be part of the reason, I think you are missing a big part of the picture. It is not the case that marine, engine, steam, and automobile engineering have gone away at MIT. They haven't, by a long shot. MIT still does significant research in these areas, and many students graduate with much hands on experience in these fields (for those that wish it). But MIT no long offers these disciplines as separate degree programs because they have been folded into so many other degree programs. Its not that MIT doesn't have the resources to host research in these areas; Its that these fields have evolved enough that they aren't relevant on their own anymore (though marine engineering is still its own major, I believe). As a mech. e. student, you get plenty of education and experience with engines, but they are within the context of other research, such as power generation. Much research in these fields comes from the engineering being done in other fields...

Posted by Dana Spiegel on July 06, 2004 at 11:08 PM PDT #

Bryan, having served on many program committees, including USENIX '04, I can testify that the hardest job as a PC chair is to get industrial folks to join your program committee. People in academia and industrial research either see it as part of their job or they actually get brownie points for serving on a PC. With the increasing submission numbers there is a lot of work, and it is always very unrewarding. People from product groups often do not have the time or the interest to join a PC, and it does not get rewarded within standard work practice, so it would be something they would have to do in the evening hours. For every PC I have chaired, I have leaned heavily on my industrial contacts to join in, and I have always failed. The same goes for paper writing by people from product groups; it is just not being done, because there is no reward given within the enterprise for such an achievement. At program committee meetings, papers from industry (not the research labs) often get preferential treatment, as we are very well aware that audiences are eager for 'real world' reports, and presentation standards are often relaxed to help these papers cross the threshold. In my view it is not so much the PC or steering committee side that is the problem here. The problem really lies in the fact that industrial folks claim that there is no time to write papers, as there is no reward within their organization for doing so (e.g. it will not be a plus on your next performance review).

Posted by Werner Vogels on July 06, 2004 at 11:24 PM PDT #

[Trackback]

The question <EM>why aren't there more papers from industry at this conference</EM> is something I hear often. It is a subject I care about, so here are some thoughts on this subject in response to one of those remarks.

At <A href="...

Posted by All Things Distributed on July 07, 2004 at 01:04 AM PDT #

Interesting and thoughtful comments. First of all, to correct a widely-held misconception: many of us in industry are explicitly encouraged to write papers. I'm not speaking just for Sun here -- I know many engineers at other companies who are encouraged to write papers. (And -- fortunately -- the more a company is doing cutting-edge work, the more likely this is to be true.) But here's the problem: the gap between industrial work and academic work has grown so large that reviewers often don't understand the work or (more commonly) don't appreciate that the problem being solved is actually a problem. This, coupled with the naturally high rejection rate of most conferences, leads to increased odds of rejection. And this is the real problem: it's not that we don't want to do the work, it's just that we don't want to do the work and then have it amount to nothing. Now: academia obviously can't (and shouldn't) lower its standards. That, after all, is the whole point. But it's very frustrating to have papers rejected for the wrong reasons. The right way to fix this is to get (much) more industry involvement on the PCs. And Werner, with respect to your observation that more of the USENIX '04 PC came from industry: I agree that there was reason for optimism with this year's PC. So if everyone agrees that this is (or was) a step in the right direction, why is the USENIX '05 PC so obviously a step backwards?

Posted by Bryan Cantrill on July 07, 2004 at 02:00 AM PDT #

Bryan, you make some very good points here; I'll give it some thought and will follow up later. With respect to who is on the PC, it depends very much on which conference/organization is behind it. USENIX gives the PC chair complete freedom to compose the program committee, although I know from experience that USENIX strongly suggests industrial participation. USENIX is actually one of the better organizations when it comes to serving both the professional and the research community. So why aren't there more people from industry on the USENIX '05 PC? You'll have to ask Vivek...

Posted by Werner Vogels on July 07, 2004 at 09:28 AM PDT #

[Trackback]

In a comment on his weblog Bryan Cantrill responds to my posting yesterday about papers from industry at conferences. Bryan rejects my statement that one of the main cau...

Posted by All Things Distributed on July 08, 2004 at 03:53 AM PDT #

A huge amount of engineering behind a solution won't necessarily lead a PC to accept a paper. The dominant measure of a paper's worth is how much a reader will learn from it. A beautiful engineering marvel is only publishable if it can teach readers something. The fact that you've solved a problem is not nearly as relevant as how much people can learn from that solution.

There is a real conundrum in academic systems research with regard to publications. Pubs are the currency of the trade, so more is always better. Putting more into a given paper isn't always an improvement, as it can muddle the point. For example, there are plenty of things in DTrace which your USENIX paper doesn't really delve into, due to limitations of space. Therefore, instead of summarizing a long project with a lot of effort into a single paper, the academic approach is to write a series of papers; Mach is a good example.

Academic systems research is as much about defining problems as solving them; in the absence of a financial market, academia needs a metric for comparison. It does so, in part, by trying to precisely define the problems. For example, the intro of your DTrace paper is really different from most academic introductions: it spends two paragraphs describing the problem, then goes on for a column and a half describing features of the solution. For example, the intro states: "Data Integrity. DTrace always reports any errors that prevent trace data from being recorded. In the absence of such errors, DTrace guarantees data integrity..." My initial assumption would be that you'd always collect all the data, but this paragraph leads me to think that accomplishing this is non-trivial. What are the complications or problems that arise in achieving this? If I were to try to write something like DTrace, what do you know now that it would be great not to have to learn the hard way? What am I missing here?

In another example, the first two paragraphs of Section 3.5 are straightforward "yes, yes, virtualization allows you to enforce safety." The third paragraph is more interesting, because it notes that you had to change the kernel page fault handler, and gives good justification for the decision to do so (the comparative costs). What do I learn from reading the DTrace paper that I wouldn't learn from reading its manual?

Posted by Phil Levis on July 08, 2004 at 07:43 AM PDT #

Phil, to address your question about data integrity: DTrace is an arbitrary-context instrumentation framework. As such, DTrace must be able to operate in contexts in which memory cannot be allocated. As a result, there will always be conditions under which DTrace can drop data. (This is true of any arbitrary-context instrumentation framework.) The important bit is that DTrace guarantees that any such drops will be reported, and that -- in the absence of drops -- the data is sound. This is an important guarantee, as other frameworks (e.g. K42) have windows during buffer switching in which data can be lost. DTrace has eliminated these windows using the method outlined in the second paragraph of Section 3.3. In terms of your "yes, yes, virtualization allows you to enforce safety": I very much doubt that you would have anticipated the memory-mapped I/O device issue described in the second paragraph of 3.5. You dismiss these paragraphs as obvious, and yet failure to address this issue would invalidate the safety claim. This is a good example of academia vs. industry: you blow those two paragraphs off as obvious, while to us they are incredibly important; to us the _need_ for these paragraphs is obvious. Finally, in terms of your larger question: the DTrace paper and the manual are largely orthogonal. While the manual describes how to use DTrace (in ~400 pages!), the paper emphasizes the novel contribution of DTrace.

Posted by Bryan Cantrill on July 08, 2004 at 12:21 PM PDT #

For me, the most distressing sign of the gap between computing's "two cultures" is not _what_ academics do, but _how_ they do it. I'm an adjunct professor in Computer Science at the University of Toronto (I spend one day a week there, and the other four at HP). I'd estimate that less than a quarter of professors or graduate students use any kind of version control system; fewer than a tenth use an IDE, and most have never heard of the rest of the "New Standard Model" [1]. In my darker moments, I believe this is because software doesn't have to be right in order to be publishable. (After all, when was the last time you saw a paper rejected because of concerns about bugs in the code?) There is therefore no effective pressure on academics to keep up with current best programming practices, which makes it harder for them to understand industry's concerns, and harder for them to collaborate on projects that are meant to ship to customers. [1] http://pyre.third-bit.com/heliumblog/archives/2004_06.html#000040

Posted by Greg Wilson on July 09, 2004 at 04:50 AM PDT #

Since you critique some of the press coverage you've gotten in earlier posts, I wonder if one is forthcoming for the Register story you have linked on the side. Also, I'd like to completely agree with Greg about the academic programming environment. You could say that academics do not need a lot of the heavy-duty tools used by industry, and be partially right. However, there are significant drawbacks to this approach, including the discouraging of software collaboration. The academic software I worked on had no documentation or comments, and my advisor was so disconnected from the programming process that he even had trouble using a Mac! A dishonest student could abuse this particular situation and end up like Jan Hendrik Schön, the physicist at Bell Labs.

Posted by Ajay Kosaraju on July 09, 2004 at 07:40 AM PDT #

I agree that I doubt I would have immediately realized the issues pointed out in 3.5, and I don't think that the paragraphs in 3.3 are irrelevant. What I was trying to say is that, to some degree, you write a paper to an audience. The academic and industrial audiences differ in terms of what they want to read about, because they often wrestle with different problems (or, at least, wrestle with them differently). I agree that there need to be more venues for publications about systems "that actually work." From my perspective as an academic, though, the major reason I like systems that work (yay, Xen) is that it means the author has actually dealt with all the problems (e.g., memory-mapped I/O), and not tried to sweep the hard ones under the rug.

Posted by Philip Levis on July 11, 2004 at 01:21 PM PDT #

I guess I would flip that around: there need to be fewer venues for systems that don't (or can't) work. In software systems -- unlike in most other engineered systems -- "good enough" often isn't: incredibly small flaws (relatively speaking) can invalidate an entire idea. Of course, getting something complicated completely working takes a tremendous amount of effort -- far too much for a graduate student -- which is why I believe that academic systems work must partner with industry to be effective. But I think you're getting to a very important point: right now, academic and industrial audiences have different values. These values need not become the same, but they must be more similar than they are today: if academia and industry develop opposing value systems, there is little hope for the effective communication of ideas between the two. I view a primary role of USENIX as bringing these two value systems closer together -- which is why I am so dismayed at the de-industrialization of the Program Committee.

Posted by Bryan Cantrill on July 11, 2004 at 02:21 PM PDT #
