The reason we don’t has to do with concurrency. You see, the chips, when you put them next to each other, they all execute in parallel. And they send messages. They are based on this message-passing paradigm of programming, which is what I believe in. And that’s not how we write software together. So I think one direction Erlang might take, or I would like it to take, is this component direction. I haven’t done it yet, but I’d like to make some graphic front ends that make components and I’d like to make software by just connecting them together. Dataflow programming is very declarative. There’s no notion of sequential state. There’s no program counter flipping through this thing. It just is. It’s a declarative model and it’s very easy to understand. And I miss that in most programming languages.
That’s not to say that what’s inside an individual black box isn’t very complicated. Take grep, for example. Seen from the outside—imagine a little square. The input is a stream of data, a file. You say cat foo | grep and grep has got some arguments, it’s got a regular expression it’s got to match. OK. And out of grep come all the lines that match that regular expression. Now, at a perceptual level, understanding what grep does is extremely simple. It has an input which is a file. It has an input which is a regular expression. It has an output which is a set of lines or a stream of lines that match the regular expression. But that is not to say that the algorithm inside the black box is simple—it could be exceedingly complicated.
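To make that black-box picture concrete, here is a minimal Erlang sketch of the idea (mine, not Armstrong's code). The module name, the {line, Line}/eof message protocol, and the use of re:run for matching are illustrative assumptions; the point is only that each process is a sealed component, and the entire coupling between components is the shape of the messages they exchange.

%% pipeline.erl: illustrative sketch only; names and message format are invented.
-module(pipeline).
-export([demo/1]).

%% grep-like black box: receives lines, forwards only those matching
%% the regular expression Pattern to the Next process.
grep(Pattern, Next) ->
    receive
        {line, Line} ->
            case re:run(Line, Pattern) of
                {match, _} -> Next ! {line, Line};
                nomatch    -> ok
            end,
            grep(Pattern, Next);
        eof ->
            Next ! eof
    end.

%% sink black box: prints whatever lines reach it.
sink() ->
    receive
        {line, Line} -> io:format("~s~n", [Line]), sink();
        eof          -> ok
    end.

%% Wire the two components together, then feed some input through the front.
demo(Pattern) ->
    Sink = spawn(fun sink/0),
    Grep = spawn(fun() -> grep(Pattern, Sink) end),
    Grep ! {line, "error: disk full"},
    Grep ! {line, "all is well"},
    Grep ! eof,
    ok.

Calling pipeline:demo("error") prints only the matching line; neither process knows anything about the other's internals, just as the user of grep never sees its algorithm.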
What’s going on inside the black boxes can be exceedingly complicated. But gluing things together from these complicated components does not itself have to be complicated. The use of grep is not complicated in the slightest. And what I don’t see in system architectures is this clear distinction between the gluing things together and the complexity of the things inside the boxes.
When we connect things together through programming language APIs we’re not getting this black box abstraction. We’re putting them in the same memory space. If grep is a module that exposes routines in its API and you give it a char* pointer to this and you’ve got to malloc that and did you deep copy this string—can I create a parallel process that’s doing this? Then it becomes appallingly complicated to understand. I don’t understand why people connect things together in such complicated ways. They should connect things together in simple ways.
Seibel: Comparing how you think about programming now with how you thought when you were starting out, what’s the biggest change in your thinking?
Armstrong: The big changes in how I think about programming have nothing to do with the hardware. Obviously it’s a lot faster and a lot more powerful but your brain is a million times more powerful than the best software tools. I can write programs and then suddenly, days later, say, “There’s a mistake in that program—if this happens and that happens and that happens and this happens, then it will crash.” And then I go and look in the code—yup, I was right. There has never been a symptom. Now you tell me a development system that can do that kind of stuff. So the changes that have happened as a programmer, they’re mental changes within me.
There are two changes and I think they’re to do with the number of years you program. One is, when I was younger, quite often I would write a program and work at it until it was finished. When it was finished I would stop working on it. It was done, finished. Then I’d get an insight—“Ah! Wrong! Idiot!” I’d rewrite it. Again: “Yeah, it’s wrong”—rewrite it.
I remember thinking to myself, “Wouldn’t it be nice if I could think all of this stuff instead of writing it?” Wouldn’t it be nice if I could get this insight without writing it? I think I can do that now. So I would characterize that period, which took 20 years, as learning how to program. Now I know how to program. I was doing experiments to learn how to program. I think I know how to program now and therefore I don’t have to do the experiments anymore.
Occasionally I have to do very small experiments—write extremely small programs just to answer some question. And then I think through things and they more or less work as I expect when I program them because I’ve thought through them. That also means it takes a long time. A program that you write, you get the insight, you rewrite—it might take you a year to write. So I might think about it for a year instead. I’m just not doing all this typing.
That’s the first thing. The second thing that’s happened is intuition. When I was younger, I would do the all-night hacks, programming to four in the morning and you get really tired and it’s macho programming—you hack the code in, hour after hour. And it’s not going well and you persevere and you get it working. And I would program when the intuition wasn’t there.
And what I’ve learned is, programming when you’re tired, you write crap and you throw it all away the next day. And 20 years ago I would program although I was getting a strong feeling that this isn’t right—there’s something wrong with this code. I have noticed over the years, the really good code I would write was when I’m in complete flow—just totally unaware of time: not even really thinking about the program, just sitting there in a relaxed state just typing this stuff and watching it come out on the screen as I type it in. That code’s going to be OK. The stuff where you can’t concentrate and something’s saying, “No, no, no, this is wrong, wrong, wrong”—I was ignoring that years ago. And I’d throw it all away. Now I can’t program anymore if it says, “No.” I just know from experience, stop—don’t write code. Stop with the problem. Do something else.
Because I was good at math and that sort of stuff at school, I thought, “Oh, I’m a logical person.” But I took these psychology tests and got way high scores on intuition. And quite low scores on logical thinking. Not low—I can do math and stuff; I’m quite good at them. But because I was good at math I thought science was about logic and math. But I wouldn’t say that now. I’d say it’s an awful lot of intuition, just knowing what’s right.
Seibel: So now that you spend more time thinking before you code, what are you actually doing in that stage?
Armstrong: Oh, I’m writing notes—I’m not just thinking. Doodling on paper. I’m probably not committing much to code. If you were to monitor my activity it’d be mostly thinking, a bit of doodling. And another thing, very important for problem solving, is asking my colleagues, “How would you solve this?” It happens so many times that you go to them and you say, “I’ve been wondering about whether I should do it this way or that way. I’ve got to choose between A and B,” and you describe A and B to them and then halfway through that you go, “Yeah, B. Thank you, thank you very much.”
You need this intelligent whiteboard—if you just did it yourself on a whiteboard there’s no feedback. But a human being, you’re explaining to them on the whiteboard the alternative solutions and they join in the conversation and suggest the odd thing. And then suddenly you see the answer. To me that doesn’t extend to writing code. But the dialog with your colleagues who are in the same problem space is very valuable.
Seibel: Do you think it’s those little bits of feedback or questions? Or is it just the fact of explaining it?
Armstrong: I think it is because you are forcing it to move from the part of your brain that has solved it to the part of your brain that verbalizes it, and they are different parts of the brain. I think it’s because you’re forcing that to happen. I’ve never done the experiment of just speaking out loud to an empty room.
Seibel: I heard about a computer science department where in the tutor’s office they had a stuffed animal and the rule was you had to explain your problem to the stuffed animal before you could bother the tutor. “OK, Mr. Bear, here’s the thing I’m working on and here’s my approach—aha! There it is.”