Posts Tagged ‘Computer Science’

Beyond Turing’s Machines

April 21, 2012

by Andrew Hodges


In marking Alan Turing’s centenary, it’s worth asking what his most fundamental achievement was and what he left for future science to take up when he took his own life in 1954. His success in World War II, as the chief scientific figure in the British cryptographic effort, with hands-on responsibility for the Atlantic naval conflict, had a great and immediate impact. But in its ever-growing influence since that time, the principle of the universal machine, which Turing published in 1937 (1), beats even this.

When, in 1945, he used his wartime technological knowledge to design a first digital computer, it was to make a practical version of that universal machine (2). All computing has followed his lead. Defining a universal machine rests on one idea, essential to Turing’s mathematical proof in 1936, but quite counter-intuitive, and bearing no resemblance to the large practical calculators of the 1930s. It put logic, not arithmetic, in the driving seat. The central observation is that instructions are themselves a form of data.

This vital idea was exploited by Turing immediately in his detailed plan of 1945. The computer he planned would allow instructions to operate on instructions to produce new instructions. The logic of software takes charge of computing. As Turing explained, all known processes could now be encoded, and all could be run on a single machine. The process of encodement could itself be automated and made user-friendly, using any logical language you liked. This approach went far beyond the vision of others at the time.
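The "instructions are data" idea can be illustrated with a minimal sketch (not Turing's own notation, and far simpler than his 1936 construction): a single interpreter that runs any machine whose instruction table is handed to it as ordinary data.

```python
# A minimal sketch of a "universal" interpreter: the machine to be run
# is not wired in but supplied as data (an instruction table), so one
# program can execute any machine description -- the stored-program idea.

def run(table, tape, state="start", head=0, steps=1000):
    """Interpret a machine description. `table` maps
    (state, symbol) -> (symbol_to_write, move 'L'/'R', next_state)."""
    tape = dict(enumerate(tape))  # sparse tape; '_' stands for blank
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# One machine description among many -- pure data, not code:
# scan right, flipping every bit, and halt at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flipper, "1011"))  # -> 0100
```

Because the table is data, a program could just as well construct or rewrite such tables and feed them back to `run` — instructions operating on instructions.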

Even more fundamental than the universal machine is the concept of computability that Turing defined in 1936. His first step was to ask for a precise definition of “mechanical process,” and he proceeded by analyzing what it means for someone to follow a rule. In modern terms, “anything that can be done by a computer program” was his answer, with the invention of the computer as a by-product. This leaves open the question of whether such an analysis would include absolutely everything that a material system (including brains) might be able to achieve. Ever since 1936, this nagging question has been on the agenda, and it is still there.
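The limit built into this definition can be sketched with Turing's diagonal argument. The code below is illustrative, not a real decider (none can exist): given any hypothetical halting decider `halts`, it builds a program on which that decider must be wrong.

```python
# Sketch of Turing's diagonal argument. `halts` is a *hypothetical*
# decider: halts(prog) should return True iff the zero-argument
# callable `prog` eventually halts. For any candidate decider we can
# build `contrary`, which does the opposite of whatever is predicted.

def contrary_of(halts):
    def contrary():
        if halts(contrary):
            while True:   # decider said "halts" -> run forever
                pass
        return None       # decider said "runs forever" -> halt at once
    return contrary

# A decider that always answers "runs forever" is refuted immediately:
# the contrary program it induces halts on the spot.
never = lambda prog: False
assert contrary_of(never)() is None  # halted, contradicting `never`

# Symmetrically, a decider answering "halts" would make `contrary`
# loop forever -- so no total, correct `halts` can exist.
```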

In 1939, Turing suggested that mathematical steps that are not rule-following, and so not computable, could be identified with mental “intuition” (3). However, he gave no discussion of whether the human brain was actually embodying uncomputable physical processes. Wartime experience led Turing in a different direction. His brilliant codebreaking algorithms, outdoing human guessing, stimulated the conviction that all mental operations must be computable, including those functions of the mind not apparently following methodical rules. His 1950 paper (4), most famous for the wit of the Turing test of intelligence (5), also included a careful discussion of computability. It set out a basic argument that if the brain’s action is computable, then it can be implemented on a computer, this being a universal machine. In defending this point of view, Turing referred to what would now be called chaotic effects in the brain and argued that these did not prevent computer simulation. Notably, at this time Turing was also founding a new branch of mathematical biology: He was applying the insights of an applied mathematician who was also one of the first to use a computer for simulating physical systems.

In 1951, however, Turing gave a radio talk with a different take on this question, suggesting that the nature of quantum mechanics might make simulation of the physical brain impossible. This consideration can be traced back in Turing’s thought to 1932, when he first studied the axioms of quantum mechanics [see (6)]. In his final years, Turing took up a renewed interest in quantum theory and noted a problem about the observation of quantum systems (now known as the quantum Zeno effect). With his death, this train of thought was lost, but the serious question of relating computation to fundamental physics has remained.

Since the 1980s, quantum computing has given a practical technological arena in which computation and quantum physics interact excitingly, but it has not yet changed Turing’s picture of what is computable. There are also many thought-experiment models that explore what it would mean to go beyond the limits of the computable. Some rather trivially require that machine components could operate with boundless speed or allow unlimited accuracy of measurement. Others probe more deeply into the nature of the physical world. Perhaps the best-known body of ideas is that of Roger Penrose (7). These draw strongly on the very thing that motivated Turing’s early work—the relationship of mental operations to the physical brain. They imply that uncomputable physics is actually fundamental to physical law and oblige a radical reformulation of quantum mechanics.


Superficially, any such theory contradicts the line that Turing put forward after 1945. But more deeply, anything that brings together the fundamentals of logical and physical description is part of Turing’s legacy. He was most unusual in disregarding lines between mathematics, physics, biology, technology, and philosophy. In 1945, it was of immediate practical concern to him that physical media could be found to embody the 0-or-1 logical states needed for the practical construction of a computer. But his work always pointed to the more abstract problem of how those discrete states are embodied in the continuous world. The problem remains: Does computation with discrete symbols give a complete account of the physical world? If it does, how can we make this connection manifest? If it does not, where does computation fail, and what would this tell us about fundamental science?


    1. A. M. Turing, Proc. London Math. Soc. S2-42, 230 (1937).
    2. M. Davis, The Universal Computer: The Road from Leibniz to Turing (Taylor & Francis, Boca Raton, FL, Turing Centenary edition, 2012).
    3. A. M. Turing, Proc. London Math. Soc. S2-45, 161 (1939).
    4. A. M. Turing, Mind 59, 433 (1950).
    5. R. M. French, Science 336, 164 (2012).
    6. A. Hodges, Alan Turing: The Enigma (Princeton Univ. Press, Princeton, NJ, Turing Centenary edition, 2012).
    7. R. Penrose, The Emperor’s New Mind (Oxford Univ. Press, Oxford, 1989).


ACM Names 2012-13 Athena Lecturer

April 21, 2012

ACM TechNews

via ACM Names 2012-13 Athena Lecturer.

Nancy Lynch, MIT [image courtesy ACM].
The Association for Computing Machinery’s Council on Women in Computing (ACM-W) yesterday named MIT’s Nancy Lynch its 2012-13 Athena Lecturer, recognizing Lynch for her advances in distributed systems enabling dependable Internet and wireless network applications. The Athena Lecturer award, which comes with a $10,000 honorarium provided by Google, “celebrates women researchers who have made fundamental contributions to computer science.”

According to the press release:

“Lynch’s work has influenced both theoreticians and practitioners,” said Mary Jane Irwin, who heads the ACM-W Athena Lecturer award committee. “Her ability to formulate many of the core problems of the field in clear and precise ways has provided a foundation that allows computer system designers to find ways to work around the limitations she verified, and to solve problems with high probability” [more following the link].

In a career spanning more than 30 years, Lynch identified the boundaries between what is possible and provably impossible to solve in distributed settings. She developed new distributed algorithms, created precise models for analyzing distributed algorithms and systems, and discovered limitations on what distributed algorithms can accomplish.

Lynch’s breakthrough research with M.J. Fischer and M.S. Paterson produced the “FLP” result. It defined as a mathematical problem the challenge of establishing agreement in asynchronous distributed systems (i.e., those with no timing assumptions) in the presence of failures. This innovation had a major impact on the design of fault-tolerant distributed data-management systems and communication systems.

Lynch’s textbook, Distributed Algorithms, is the definitive reference on the basics of the field. It introduces readers to the fundamental issues underlying the design of distributed systems, including communication, coordination, synchronization, and uncertainty. It integrates the results of distributed algorithms research using a common mathematical framework.

The Athena Lecturer award will be presented at ACM’s Annual Awards Banquet in mid-June.

Lynch will deliver the Athena Lecture at the 2013 PODC (Principles of Distributed Computing) conference.

To learn more about this year’s Athena Lecturer and view past recipients, see the ACM’s press release.

(Contributed by Erwin Gianchandani, CCC Director)
