Arie Altena
George Dyson is a historian of technology. His most recent book, Turing’s Cathedral: The Origins of the Digital Universe (2012), tells the story of the group of people, led by John von Neumann at the Institute for Advanced Study in Princeton, New Jersey, who built one of the first computers with a fully electronic random-access memory. He is also the author of Darwin Among the Machines (1997), Project Orion (2002), and Baidarka (1986). Dyson was a keynote speaker at Sonic Acts in 2012, where this interview took place.
Arie Altena: Turing’s Cathedral, your book on the origin of the digital universe, was ten years in the making. It is a very precise historical account of the early development of the computer at Princeton, including the links to the clandestine hydrogen bomb project. I suppose that your book is at least partly based on original research – especially considering your own childhood. As the son of the physicist Freeman Dyson, you grew up among these scientists and engineers, and played on the campus where they worked on building this computer. How much of your book is original research?
George Dyson: Everyone argues about who was first in developing a working computer – you can argue forever about who, or what, was first. I didn’t want to establish who was first, but to understand what really happened. That question is particularly interesting, because much of it was clouded in wartime secrecy. The Americans didn’t always know what the British were doing, the British didn’t always know what the Americans were doing, and so on. In America we have this concept of gateway drugs. If you drink beer, it is the gateway to stronger alcohol, which is the gateway to other drugs. For me, there is a similar spectrum in research. Books are the gateway to journals, journals the gateway to archives. But the final, really hard drug is when you find material that isn’t even in an archive, but in somebody’s basement. No historian has ever seen it. This book has a lot of that. In quantity maybe one quarter of the content in the book is new, but in importance, probably half of it is based on new research. I spent a lot of time talking to people who might have things in their basements. In three cases they did. It was like finding the Dead Sea Scrolls. ‘Here is the real evidence of what happened’. That was exciting. To get there takes a long time. You first have to win the confidence of the people in question.
AA: One of the main characters in your book is Julian Bigelow...
GD: He’s the guy who built the prototype of this machine that we all use, thousands of times a day. They’re everywhere. All those machines are copies of what this man built with his own hands – with the help of a half-dozen fellow engineers. But this knowledge gets lost – nobody asks anymore who actually built the archetype of this machine. And it could have been built very differently.
AA: This is one of the fascinating aspects of Turing’s Cathedral. Turing and Von Neumann, they had the ideas, but someone with engineering capabilities is also necessary, someone who can build the components required to assemble the machine you’ve imagined. Your book recounts a great story about valves – they have to be stable, and standardised, to make the machine work...
GD: Or if they cannot be stable or standardised, you have to make the architecture of the machine work with bad tubes! What Bigelow’s group did was quite amazing. They didn’t simply engineer the computer in the sense of making the drawings and then handing them to the machine shop; first they had to build the machine shop.
AA: To us the computer is an almost disembodied machine. Through the story of Bigelow you turn the attention back to engineering and it becomes clear that the computer could have been built in a different way – as you just said. Where might it have gone in a different direction?
GD: Obviously, computer architecture could have gone in a number of different directions. It’s an accident of history that we ended up with this particular architecture that works so well. The machine that Bigelow built runs 40 bits in parallel. At the time this was absolutely crazy. Why try to do something 40 bits at a time when you don’t even know how to do it one bit at a time? But they believed they were going to get a tube called the RCA Selectron, which was an all-digital 4000-bit memory tube. They didn’t get it, and they had to make a workaround. But the anticipation of the Selectron meant that the architecture was all ready for solid-state memory when it finally showed up. Solid-state memory plugs into this architecture very well. Thanks to the decision to run 40 bits in parallel, we didn’t have to make a great architectural shift later. From the beginning the entire system – especially the address space – could scale, without having to change the code. Well, the code has to be changed a little, but not fundamentally. This was a very lucky accident. If we had gone for a serial architecture in the 1950s, the transition wouldn’t have been so easy.
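To make that contrast concrete, here is a minimal Python sketch – purely illustrative, and not a model of the actual IAS circuitry – of the difference between bit-serial and word-parallel arithmetic: a bit-serial adder spends one machine cycle per bit, while a word-parallel adder finishes the whole 40-bit word in a single cycle.

```python
WORD = 40  # word width of the IAS machine

def serial_add(a: int, b: int, width: int = WORD) -> tuple[int, int]:
    """Add two words one bit at a time; returns (sum, cycles used)."""
    result, carry = 0, 0
    for i in range(width):                    # one machine cycle per bit
        bit_a, bit_b = (a >> i) & 1, (b >> i) & 1
        result |= (bit_a ^ bit_b ^ carry) << i
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))
    return result & ((1 << width) - 1), width

def parallel_add(a: int, b: int, width: int = WORD) -> tuple[int, int]:
    """All bits move through the adder at once: a single cycle."""
    return (a + b) & ((1 << width) - 1), 1

print(serial_add(12345, 67890))    # (80235, 40) – forty cycles
print(parallel_add(12345, 67890))  # (80235, 1)  – one cycle
```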
AA: Is that also one of the reasons why we now work with a Von Neumann–Bigelow computer, and not with a machine derived from the Zuse Computer, or the Colossus that was built in England during the Second World War?
GD: Yes. It is a bit unfair though. There is quite a strong animosity towards Von Neumann, and it is deserved in a way. One of the documents I found – this is like a smoking gun – suggests that IBM, who hired Von Neumann as a consultant, did gain some unethical advantage over the competition. Univac, IBM’s leading competitor for government contracts, and the first to get a machine into actual production, had their security clearance mysteriously withheld. So IBM took the lead in producing computers, with the IBM 701, an exact copy of the machine built at the IAS (Institute for Advanced Study, Princeton). It could easily have gone the other way.
AA: Though we mostly assume that the idea of artificial life started at the end of the 1980s with the first wave of interest in genetic algorithms, your book shows that right at the beginning of the Von Neumann computer, Nils Aall Barricelli came up with the idea of self-replicating code. Which is quite stunning.
GD: Well, there’s another thing I just found out... Although Barricelli came to Princeton in 1953, he actually tried to come in 1951. Many of these people who came to the IAS had visa problems. Barricelli was a Norwegian–Italian living in Rome. Then he moved back to Norway because of the war, and when he applied to come to the United States under the Fulbright programme, they said: your application needs to go back to Rome, because you’re Italian, and the people in Rome said no you’re a Norwegian and so on. He waited for two years, but the computer was also delayed, so in the end he arrived in 1953, and it turned out to be the right time...
AA: How did Barricelli’s idea of replicating code originate?
GD: He was thinking about genetics – this was even before Watson and Crick discovered the structure of DNA. He was doing experiments by hand on graph paper with numbers. Somehow he heard that Von Neumann was building a computing machine. So he wrote to him in 1951 saying ‘I want to come and use this machine’.
AA: Because it calculated faster?
GD: Yes. And Von Neumann answered, after doing some rough calculations, that he could have so-and-so much time. I just met someone who knew Barricelli very well. That was frustrating for me, because it was after the book was finished. We think that exchanging genetic information between organisms by computers must be a completely new and very difficult technical problem. But in the last few years biology has been learning that micro-organisms have been doing this all along. There are viruses and bacteria that more or less store their genetic information out ‘in the cloud’. It turns out that you can remove half the genetic sequences of some of these microbes, and they will rebuild them by taking them back from the environment through viruses. The viruses represent a library of genetic sequences. It’s a very interesting concept, and it seems that life, in computing terms, has the API, the application programming interface, to do remote accessing of cloud-based sequences. So the fact that we are starting to do this with computers is, from the point of view of the cell, not entirely new.
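As a concrete illustration of the kind of numerical ‘organisms’ Barricelli evolved – the graph-paper experiments described above – here is a deliberately crude Python sketch. The shift-and-collision rules below are toy assumptions of mine, far simpler than Barricelli’s actual rules, but they show the basic idea: numbers propagating, competing, and dying in a cyclic universe of cells.

```python
import random

SIZE = 32  # a small cyclic universe of cells

def step(universe):
    """One generation: each number copies itself to the cell shifted
    by its own value; collisions between unlike numbers are fatal."""
    new = [None] * SIZE
    for i, n in enumerate(universe):
        if n is None:
            continue
        for target in (i, (i + n) % SIZE):  # stay put, and shift by n
            if new[target] is None:
                new[target] = n             # empty cell: the number copies in
            elif new[target] != n:
                new[target] = None          # collision: both numbers die
    return new

universe = [random.choice([None, 1, 2, 3]) for _ in range(SIZE)]
for generation in range(5):
    print(generation, ['.' if n is None else n for n in universe])
    universe = step(universe)
```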
AA: In your book you show that it somehow all came together around the same time: the building of the machine, the idea of using code, Watson and Crick’s idea of DNA and how DNA codes life. You describe this as the birth of the digital universe, yet one could also say that for human beings a new era started.
GD: You can look at this question on many different levels. The higher level is to look from the level of life in general. Life developed by taking advantage of self-replicating molecules, which it used as a tool to convey its information. Life is always looking for new opportunities. Instead of taking the perspective of us using computers, you can look at it as life itself storing information in computers rather than in DNA, because it transmits faster. The animal or the plant that is able to spread its seeds the fastest and the widest wins. The life forms that propagate the best are going to use computers as a vehicle for genetic coding, because computers transmit faster. This could be good or bad, but it’s not science fiction, it is actually happening.
AA: Only in the case of computers it happens in a different universe.
GD: Yes. On the dark side, this means that computational intelligence is learning how to operate life. You want to be careful...
AA: You could also say that we found out there is more of an interaction between those two universes.
GD: They are co-operating. We are no longer the top intelligence. We evolved in a world we didn’t really understand. The forces of nature were greater than us. In a way we are returning to that world. We know these machines no better than we know ourselves.
AA: The history of the computer is closely connected to the idea of controlling the forces of nature. One of the things that has been there from the beginning is the idea of modelling the weather, and being able to control it, just as it is also the history of being able to control the hydrogen bomb. It’s a story about control...
GD: The Von Neumann computer project began with an interest in predicting the weather in order to control it. Of course the government was very happy to fund that. One question, historically speaking, that I haven’t answered is this: there seems to be good reason to believe that the weather prediction project was a smokescreen for the development of the hydrogen bomb. They had to do the calculations for the hydrogen bomb, but these had to be secret. Von Neumann was so clever, he could take the same machine and the same mathematics... Let’s say we’re working on the weather, and we’ll use the calculations for the bomb. He wanted to do both, and he did do both. He was very successful at it. The fact that you can give a five-day forecast now is based on the same codes and the same models developed 60 years ago. Something I didn’t notice at first is that they worked on five main problems that were mathematically similar, but on completely different time scales. The bomb explosions were over in millionths of a second; the shockwave was seconds to minutes; weather prediction was hours to days; biological evolution was hundreds of thousands of years; and then they worked on the evolution of stars, which is on the timescale of the age of the solar system. It is an amazing span of time. I put that on a graph, to see how it was spaced and what it represented. Our human attention span is exactly in the middle. Why are we right in the middle of this?
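To see roughly why the human scale lands in the middle, here is a back-of-the-envelope Python calculation. The representative values are my own assumptions, not figures from the book or the interview; the point is only the logarithmic spacing.

```python
import math

# Assumed representative durations, in seconds
timescales = {
    'bomb explosion':       1e-6,            # millionths of a second
    'shock wave':           60,              # seconds to minutes
    'weather prediction':   3 * 24 * 3600,   # hours to days
    'biological evolution': 1e5 * 3.15e7,    # hundreds of thousands of years
    'stellar evolution':    4.5e9 * 3.15e7,  # age of the solar system
}

logs = [math.log10(t) for t in timescales.values()]
midpoint = 10 ** ((min(logs) + max(logs)) / 2)
print(f'geometric midpoint: ~{midpoint:.0f} s, '
      f'about {midpoint / 86400:.1f} days')  # ~4 days: a human timescale
```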
AA: We can see as far as our instruments allow us to see, which is much further than our eyes can see. Does that relate in any way to the history of the computer?
GD: It does, because these very short and very long intervals of time might otherwise not have been accessible to us. In terms of human survival the more important thing may be to keep track of things that are very slow. We are worried about what the climate is going to be like in a hundred years or more, and now we have a way of knowing that. I think we are overly focused on fast things and not enough on the slow processes. A system that takes long-term effects into account would be better for us. We need slow calculation.
AA: Politics is not really doing that...
GD: Obviously our political system is a failure right now. No one seems to question that political leadership is failing in all countries. There is a connection between failing political leadership and computation. The real leadership no longer comes from politics – there are no politicians like Pierre Trudeau or John Diefenbaker any more. It comes from Google. It is based on money. This is scary. This is what the VPRO television documentary Money and Speed: Inside the Black Box was about as well. The financial forces are huge. I think it is important to recognise how many of our social problems originate in computerisation.
AA: The documentary shows that a part of the financial markets is ‘ruled’ by algorithms. Our human idea of a stock market is that it is based on investments in the future, on the idea that something is going to be different – better – in two years’ time or more. The algorithm that trades and acts on strange behaviour in the computer model doesn’t know time or future. That’s also one of the things your book is about: the idea that human time is completely different from time in a digital universe. Could you explain that?
GD: That is one of the most profound things, and if you understand it, you also understand why the world is so confused right now. I think one of the largest misunderstandings is the belief that your computer has a clock. What is the clock’s speed? Well, maybe your computer’s clock is 1.2 GHz and mine is 2.4 GHz, so mine is twice as fast as yours. But it isn’t a clock. In our world a clock measures intervals of time, but the ‘clock’ that is in your computer only regulates the sequence of steps in performing a computation. It happens to have a certain speed – but it keeps speeding up, and its only purpose is to ensure that two things never happen at the same time. On the Bigelow machine the speed was not fixed. You could go slower or faster. But in the computer world, there is no time. There is only what happens next. It’s just how fast the electrons move. Every year sees a new machine that is twice as fast. That’s why time in the digital universe is completely disconnected from ours. And that’s why the digital world, from our point of view, seems to be speeding up. From the point of view of the digital world it is the opposite: if you looked at our world from the digital world, everything is slowing down. Computers might ask: ‘Why are people getting slower and slower? Why don’t they do anything? Each time I looked the humans gave me more instructions, but now I’m waiting and waiting. He still hasn’t typed the next letter’. The human and the digital are two worlds on completely different time scales. That’s the huge transformation that is going on. When we don’t give computers instructions, they go to sleep. The idea of cloud computing is a way of using that empty time. The huge server farms that are being built – the ones that have their own power plants – aren’t sitting around waiting, they’re doing stuff. What they’re doing is similar to dreaming. When we go to sleep, our brains don’t shut down, they process and dream. That’s why Google is so efficient, because they use all their machines all the time.
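A minimal sketch of that distinction, assuming a toy ‘machine’ of my own invention: the clock below only orders the steps of a program. Run it at any tick rate and the computation is identical – the clock sequences, it doesn’t measure time.

```python
import time

def run(program, seconds_per_tick):
    """Execute exactly one step per tick; the tick length is arbitrary.
    The clock's only job is ensuring two things never happen at once."""
    state = {}
    for tick, step in enumerate(program):
        step(state)
        time.sleep(seconds_per_tick)
        print(f'tick {tick}: {state}')

program = [
    lambda s: s.update(a=1),
    lambda s: s.update(b=s['a'] + 1),
    lambda s: s.update(c=s['a'] + s['b']),
]

run(program, seconds_per_tick=0.5)    # a 'slow' machine...
run(program, seconds_per_tick=0.001)  # ...and one 500x faster: same result
```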
AA: Why have our understanding of computation and our engineering competence declined so much, if indeed they have declined?
GD: In my experience it has declined more in craftsmanship than in engineering. In America we have largely stopped teaching how to use tools. For someone like me this is very sad. Not many young people know how to use a chainsaw. These skills are exchanged for skills in handling things like iPhones, which are strange objects that somehow work, but when they stop working you get a new one. From my point of view it is important to understand what you use, so you don’t hand over all the power to machines.
AA: Could you envision ways in which empowerment is restored to our relationships with technology and machines?
GD: Yes. There is a very active movement of people who still do programming at deep levels. There are still people who do understand the code, perhaps not on the Von Neumann level, but at least on a Unix level. We need those people. On the hardware level, however, the knowledge is disappearing. We are now at the point where, if a machine fails, it knows itself what it needs: ‘I need a new motherboard’. If it needs more than that, we throw it away. We don’t fix computers anymore. Years ago Google reached a point where they were adding something like 30 new machines a day without throwing the dead ones away – they only added new ones. That was a huge transition. It was much cheaper to just add new cells than to clean up old ones. Who knows what those machines are doing now.
AA: Where could all of this lead?
GD: I don’t know. But we don’t have to wait that long to find out. You can think of a number of science fiction scenarios. The obvious one is that the system collapses, and nobody can even find food without their iPhone telling them where to look. We’ll lose 98% of the global population, et cetera. That’s the scariest one. Another one is a sort of H.G. Wells story, where the machines keep everyone happy. The people programming and taking care of the computers are all doing really well, while the rest of the people suffer. This is going to diverge into a situation where a certain number of people propagate the machines and the rest of the people are put away as being unnecessary. That is scary. Then there is the scenario that we start losing our intelligence because we don’t really need it. The machines don’t need intelligent people; they just need people to be content with taking care of their basic machine needs. That’s scary as well. Then there’s a happy possibility that we’ll have more free time and wealth and we’ll use it carefully, and that the globalisation of computing ends war. That’s possible too, as this is a very different world from the world of 50 years ago.
AA: What would the computer dream of, if it is dreaming in the way you just suggested?
GD: I have no idea!
AA: Could we find out?
GD: Good question! There is a plausible theory that dreaming came first, and consciousness followed later. It assumes we were born dreaming and eventually matched the reality to the dream. And when we go to sleep we return to dreaming. It may well be the same with machine intelligence. What it does in its spare time is where the machine’s consciousness will arise, not from something we programmed.
This text was published in The Dark Universe, 2013.