So that’s how I ended up getting out of physics and into computer science and programming. I wasn’t really programming before that; it was all math and science. My parents didn’t let me get an Apple II. I tried once. I didn’t beg but I was saying, “I could learn a foreign language with this,” which was a complete smoke screen. “No. You’ll probably waste time playing games.” And they were right. So they saved me from that fate.

Seibel: Other than providing more summer employment than physics, what about programming drew you in?

Eich: The connection between theory and practice, especially at the front end of the compiler construction process, was attractive to me. Numerical methods I didn’t get into too much. They’re less attractive because you end up dealing with all sorts of crazy trade-offs in representing real numbers as finite precision floating-point numbers and that’s just hellish. It still bites JavaScript users because we chose this hardware standard from the ’80s and it’s not always operating the way people expect.

Seibel: Because, like the Spanish Inquisition, no one really expects floating point.

Eich: No one expects the rounding errors you get—powers of five aren’t representable well. They round badly in base-two. So dollars and cents, sums and differences, will get you strange long zeros with a nine at the end in JavaScript. There was a blog about this that blamed Safari and Mac for doing math wrong and it’s IEEE double—it’s in everything, Java and C.
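To make the effect concrete, here is what Eich is describing, runnable in any JavaScript or TypeScript environment (the integer-cents workaround at the end is a common convention, not something from the interview):

```typescript
// IEEE 754 doubles cannot represent most decimal fractions exactly:
// 0.1 is stored as the nearest binary fraction, so sums drift.
console.log(0.1 + 0.2); // 0.30000000000000004
console.log(0.1 + 0.7); // 0.7999999999999999  (the "nine at the end")
console.log(1.1 + 2.2); // 3.3000000000000003  (the "strange long zeros")

// A common workaround for dollars and cents: compute in integer cents,
// where every value is exactly representable, and format at the end.
const totalCents = 110 + 220; // $1.10 + $2.20, kept exact as 330
console.log((totalCents / 100).toFixed(2)); // "3.30"
```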

Physics was also less satisfying to me because it has kind of stalled. There’s something not quite right when you have these big inductive theories where people are polishing corners and inventing stuff like dark energy, which is basically unfalsifiable. I was gravitating toward something that was more practical but still had some theoretical strength based in mathematics and logic.

Then I went to the University of Illinois at Urbana-Champaign to get a master’s degree, at least. I was thinking of going all the way but I got stuck in a project that was basically shanghaied by IBM. They had a strange 68020 machine they had acquired from a company in Danbury, Connecticut, and they ported Xenix to it. It was so buggy they co-opted our research project and had us become like a QA group. Every Monday we’d have the blue suit come out and give us a pep talk. My professors were kind of supine about it. I should’ve probably found somebody new, but I also heard Jim Clark speak on campus and I pretty much decided I wanted to go work at Silicon Graphics.

Seibel: What were you working on at SGI?

Eich: Kernel and networking code mostly. The amount of language background that I used there grew over time because we ended up writing our own network-management and packet-sniffing layer and I wrote the expression language for matching fields in packets, and I wrote the translator that would reduce and optimize that to a small number of mask-and-match filters over the front 36 bytes of the packet.
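The mask-and-match filters Eich mentions would look roughly like this sketch (a hypothetical reconstruction for illustration, not SGI’s actual code): a filter passes when selected bytes near the front of the packet, ANDed with a mask, equal an expected value.

```typescript
// Hypothetical reconstruction of a compiled mask-and-match filter:
// examine a few bytes at a fixed offset, mask off the bits we care
// about, and compare against the expected value.
interface MaskMatchFilter {
  offset: number;    // byte offset into the packet
  mask: Uint8Array;  // which bits to examine
  value: Uint8Array; // expected bits after masking (same length as mask)
}

function matches(packet: Uint8Array, f: MaskMatchFilter): boolean {
  for (let i = 0; i < f.mask.length; i++) {
    if (((packet[f.offset + i] ?? 0) & f.mask[i]) !== f.value[i]) {
      return false;
    }
  }
  return true;
}

// Example: a higher-level expression like "ether.type == ip" could
// compile down to one filter checking the EtherType field (bytes 12-13
// of an Ethernet frame) for 0x0800.
const isIPv4: MaskMatchFilter = {
  offset: 12,
  mask: new Uint8Array([0xff, 0xff]),
  value: new Uint8Array([0x08, 0x00]),
};
```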

And I ended up writing another language implementation, a compiler that would generate C code given a protocol description. Somebody wanted us to support AppleTalk in this packet sniffer. It was a huge, complex grab bag of protocol syntax for sequences and fields of various sizes and dependent types of… mostly arrays, things like that. It was fun and challenging to write. I ended up using some of the old Dragon book—Aho and Ullman—compiler skills. But that was it. I think I did a unifdef clone. Dave Yost had done one and it didn’t handle #if expressions and it didn’t do expression minimization based on some of the terms being pound-defined or undefined, so I did that. And that’s still out there. I think it may have made its way into Linux.

I was at SGI from ’85 to ’92. In ’92 somebody I knew at SGI had gone to MicroUnity and I was tired of SGI bloating up and acquiring companies and being overrun with politicians. So I jumped and it was to MicroUnity, which George Gilder wrote about in the ’90s in Forbes ASAP as if it was going to be the next big thing. Then down the memory hole; it turned into a $200 million crater in North Sunnyvale. It was a very good learning experience. I did some work on GCC there, so I got some compiler-language hacking. I did a little editor language for MPEG2 video where you could write this crufty pseudo-spec language like the ISO spec or the IEC spec, and actually generate test bit streams that have all the right syntax.

Seibel: And then after MicroUnity you ended up at Netscape and the rest is history. Looking back, is there anything you wish you had done differently as far as learning to program?

Eich: I was doing a lot of physics until I switched to math and computer science. I was doing enough math that I was getting some programming but I had already studied some things on my own, so when I was in the classes I was already sitting in the back kind of moving ahead or being bored or doing something else. That was not good for personal self-discipline and I probably missed some things that I could’ve studied.

I’ve talked to people who’ve gone through a PhD curriculum and they obviously have studied certain areas to a greater depth than I have. I feel like that was the chance that I had then. Can’t really go back and do it now. You can study anything on the Internet but do you really get the time with the right professor and the right coursework, do you get the right opportunities to really learn it? But I’ve not had too many regrets about that.

As far as programming goes, I was doing, like I said, low-level coding. I’m not an object-oriented, design-patterns guy. I never bought the Gamma book. Some people at Netscape did, some of Jamie Zawinski’s and my nemeses from another acquisition, they waved it around like the Bible and they were kind of insufferable because they weren’t the best programmers.

I’ve been more low-level than I should’ve been. What I’ve learned with Mozilla and Firefox has been more about test-driven development, which I think is valuable. And other things like fuzz testing, which we do a lot of. We have many source languages and big deep rendering pipelines and other kinds of evaluation pipelines that have lots of opportunity for memory-safety bugs. So we have found fuzz testing to be more productive than almost any other kind of testing.
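A minimal fuzz loop in the spirit Eich describes might look like this (a generic sketch, not Mozilla’s actual harness): generate random inputs, feed them to a parser, and keep any input that provokes something other than a clean parse error.

```typescript
// Generic fuzz-testing sketch (illustrative, not Mozilla's harness):
// hammer a parser with random strings and record unexpected failures.
function fuzz(parse: (input: string) => unknown, iterations: number): string[] {
  const failures: string[] = [];
  for (let i = 0; i < iterations; i++) {
    // Build a random string of printable ASCII characters.
    let input = "";
    const len = Math.floor(Math.random() * 64);
    for (let j = 0; j < len; j++) {
      input += String.fromCharCode(32 + Math.floor(Math.random() * 95));
    }
    try {
      parse(input);
    } catch (e) {
      // A SyntaxError on garbage input is the expected, safe outcome;
      // anything else suggests a bug worth minimizing and filing.
      if (!(e instanceof SyntaxError)) failures.push(input);
    }
  }
  return failures;
}

// Usage: JSON.parse is well behaved, so this should print [].
console.log(fuzz(JSON.parse, 10_000));
```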

I’ve also pushed us to invest in static analysis and that’s been profitable, though it’s fairly esoteric. We have some people we hired who are strong enough to use it.

Seibel: What kind of static analysis?

Eich: Static analysis of C++, which is difficult. Normally in static analysis you’re doing some kind of whole-program analysis and you like to do things like prove facts about memory. So you have to disambiguate memory to find all the aliases, which is an exponential problem, generally infeasible in any significant program. But the big breakthrough has been that you don’t really need to worry about memory. If you can build a complete control-flow graph and connect all the virtual methods to their possible implementations, you can do a process of partial evaluation over the code without actually running it. You can find dead code and you can find redundant tests and you can find missing null tests.
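The kinds of findings Eich lists look like this in miniature (hypothetical examples for illustration; Mozilla’s analyses ran over C++, but the idea carries over):

```typescript
// What a control-flow-based partial evaluator can prove without
// running the program (illustrative, hypothetical examples):

function describe(node: { name: string } | null): string {
  if (node === null) return "<none>";
  // Redundant test: every path reaching this point has already ruled
  // out null, so this condition is always false...
  if (node === null) return "never happens"; // ...and this is dead code.
  return node.name; // safe dereference: null was excluded above
}

// Missing null test: if any call site can pass null, an analysis that
// follows values through the call graph flags this dereference.
function describeUnsafe(node: { name: string }): string {
  return node.name; // flagged when a null can flow in from a caller
}
```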

And you can actually do more if you go to higher levels of discourse where we all operate, where there’s a proof system in our head about the program we’re writing. But we don’t have a type system in the common languages to express the terms of the proof. That’s a real problem. The Curry-Howard correspondence says there’s a correspondence between logic systems and type systems: propositions are types and proofs are programs. So you should be able to write down these higher-level models that you’re trying to enforce. Like, this array should have some constraint on its length, at least in this early phase, and then after that it maybe has a different constraint or no constraint. Part of the trick is you go through these nursery phases or other phases where you have different rules. Or you’re inside your own abstraction’s firewall and you violate your own invariants for efficiency, but you know what you’re doing and from the outside it’s still safe. That’s very hard to implement in a fully type-checked fashion.
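One way to approximate the phase idea in a mainstream type system is with tagged ("branded") types, as in this sketch (hypothetical names; a rough stand-in, since common languages can’t express a dependent constraint like "length is a multiple of 4" directly in types):

```typescript
// Sketch: carrying a phase-dependent invariant in the type. During the
// "nursery" phase the buffer is still growing and its length invariant
// is not yet established.
type NurseryBuffer = { phase: "nursery"; data: number[] };

// After sealing, the invariant "length is a multiple of 4" has been
// checked once, and the type records that it now holds.
type SealedBuffer = { phase: "sealed"; data: readonly number[] };

function seal(buf: NurseryBuffer): SealedBuffer {
  // The proof obligation is discharged once, at the phase boundary...
  if (buf.data.length % 4 !== 0) {
    throw new Error("invariant violated: length must be a multiple of 4");
  }
  return { phase: "sealed", data: buf.data };
}

// ...so consumers can demand a SealedBuffer and rely on the invariant;
// passing a NurseryBuffer here is a compile-time error.
function sumWords(buf: SealedBuffer): number {
  return buf.data.reduce((total, x) => total + x, 0);
}
```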