Seibel: Is there something intrinsic in programming that’s always going to draw people with that kind of mentality?
Bloch: Absolutely. We love brainteasers. But we have to temper this love with the knowledge that we’re solving real problems for real people. And if we don’t do that we are, essentially, whacking off. I think that part of the failure of the first company that I was involved in was due to the fact that we didn’t understand that what we were doing wasn’t pure engineering.
We weren’t really thinking that the most important thing we could do was solve real problems for real customers. The moment you lose sight of that and who your customers are, you’re dead meat. But I do think that it tends to conflict with the sort of people who are attracted to programming, who are the people who love brainteasers. But I think you can have your cake and eat it too. Keep that empathy gene on when you’re designing your APIs, but then, in order to make them run bloody fast, you can freely descend into the puzzle palace.
You’ll have plenty of opportunity to solve brainteasers when designing and optimizing algorithms and data structures, especially concurrent ones. You have to be able to think with mathematical precision about stuff that is quite complex, and you have to be able to come up with creative ways of combining primitives to achieve the desired effect.
But you have to know where you can and should apply that kind of thinking and where it will just produce a system that is unmaintainable or unusable.
Seibel: Are the opportunities for doing that kind of programming going away? A lot of this low-level stuff is implemented in the VM that you’re using or the concurrency libraries that you’re using. So for a lot of people, anymore, programming is about gluing stuff together.
Bloch: I totally agree. Well, in relative terms it’s diminishing. The percentage of programmers who have to do this is way smaller than it used to be. Back when you bought a machine and it didn’t even have an operating system on it, never mind a programming language or any ready-written applications, yeah, everybody had to do that.
The world in which most programmers have to do this is vanishing or vanished. But in absolute terms there’s probably as much need as there ever was for that sort of people. We want to have our cake and eat it too—we want to have the advantages of safe languages coupled with the speed of hand-tuned assembly code, so we need people to write these virtual machines and these garbage collectors and design these chips which are themselves basically works of software, albeit realized in hardware.
I think there’s plenty of employment for people who like doing this stuff, but we have to carefully target them. I think if you have people who are pure puzzle solvers you have to couple them with management who can make sure that they are using their skills in the organization’s best interests.
There’s this problem, which is, programming is so much of an intellectual meritocracy and often these people are the smartest people in the organization; therefore they figure they should be allowed to make all the decisions. But merely the fact that they’re the smartest people in the organization doesn’t mean they should be making all the decisions, because intelligence is not a scalar quantity; it’s a vector quantity. And if you lack empathy or emotional intelligence, then you shouldn’t be designing APIs or GUIs or languages.
What we’re doing is an aesthetic pursuit. It involves craftsmanship as well as mathematics and it involves people skills and prose skills—all of these things that we don’t necessarily think of as engineering but without which I don’t think you’ll ever be a really good engineer. So I think it’s just something that we have to remind ourselves of. But I think it’s one of the most fun jobs on the planet. I think we’re really lucky to have grown up at the time that we did when these skills led to these jobs. I don’t know what we would have been doing a few generations back.
Joe Armstrong
Joe Armstrong is best known as the creator of the programming language Erlang and the Open Telecom Platform (OTP), a framework for building Erlang applications.
In the modern language landscape, Erlang is a bit of an odd duck. It is both older and younger than many popular languages: Armstrong started work on it in 1986—a year before Perl appeared—but it was available only as a commercial product and used primarily within Ericsson until it was released as open source in 1998, three years after Java and Ruby appeared. Its roots are in the logic programming language Prolog rather than some member of the Algol family. And it was designed for a fairly specific kind of software: highly available, highly reliable systems like telephone switches.
But the characteristics that made it good for building telephone switches also—and almost inadvertently—made it quite well suited to writing concurrent software, something which has drawn notice as programmers have started wrestling with the consequences of the multicore future.
Armstrong, too, is a bit of an odd duck. Originally a physicist, he switched to computer science when he ran out of money in the middle of his physics PhD and landed a job as a researcher working for Donald Michie—one of the founders of the field of artificial intelligence in Britain. At Michie’s lab, Armstrong was exposed to the full range of AI goodies, becoming a founding member of the British Robotics Association and writing papers about robotic vision.
When funding for AI dried up as a result of the famous Lighthill report, it was back to physics-related programming for more than half a decade, first at the EISCAT scientific association and later the Swedish Space Corporation, before finally joining the Ericsson Computer Science Lab, where he invented Erlang.
In our several days of conversation over his kitchen table in Stockholm, we talked about, among other things, the Erlang approach to concurrency, the need for better and simpler ways of connecting programs, and the importance of opening up black boxes.
Seibel: How did you learn to program? When did it all start?
Armstrong: When I was at school. I was born in 1950 so there weren’t many computers around then. The final year of school, I suppose I must have been 17, the local council had a mainframe computer—probably an IBM. We could write Fortran on it. It was the usual thing—you wrote your programs on coding sheets and you sent them off. A week later the coding sheets and the punch cards came back and you had to approve them. But the people who made the punch cards would make mistakes. So it might go backwards and forwards one or two times. And then it would finally go to the computer center.
Then it went to the computer center and came back and the Fortran compiler had stopped at the first syntactic error in the program. It didn’t even process the remainder of the program. It was something like three months to run your first program. I learned then, instead of sending one program you had to develop every single subroutine in parallel and send the lot. I think I wrote a little program to display a chess board—it would plot a chess board on the printer. But I had to write all the subroutines as parallel tasks because the turnaround time was so appallingly bad.
Seibel: So you would write a subroutine with, basically, a unit test so you would see that it had, in fact, run?
Armstrong: Yes. And then you’d put it all together. I don’t know if that counts as learning programming. When I went to university I was in the physics department at University College of London. I think we probably had programming from the first year. Then you had this turnaround of three hours or something. But again it was best to run about four or five programs at the same time so you got them back fairly quickly.