ETH Life - ETH Zurich's weekly web journal
Section: Science Life

Published: 03.11.2005, 06:00
Modified: 02.11.2005, 21:44
Interview with ETH Professor Bertrand Meyer about the conference on verified software.
Flawless software

A weeklong conference on the theme of "Verified Software: Theories, Tools, Experiments" took place at ETH Zurich in mid-October.(1) ETH Life talked to Bertrand Meyer, ETH Professor of Software Engineering and conference organiser.

Jakob Lindenmeyer

Professor Meyer, what characterises good software?

Bertrand Meyer: There is a very simple answer: whatever satisfies the user. If Photoshop crashes once in a while, you may still like it if it otherwise provides you with what you want. On the other hand, an airplane control system simply should not fail under any circumstances. There is a much more technical definition which involves a large number of software quality factors: correctness, or the ability to function according to specification; robustness, or the ability to react reasonably to cases outside of the specification; ease of use, ease of learning; security, or the ability to protect against malicious use.

What has changed is that software is more and more integrated at all levels of the major processes in society, so the stakes are higher. Non-quality can be not just annoying but catastrophic.

How can software be verified to make sure it is fault-free?

There's a multiplicity of approaches, but the key ideas fall into two categories: a priori and a posteriori. A priori means building the system with strict specification, design and implementation techniques and tools, going all the way to mathematical proofs when possible. A posteriori means applying verification tools that systematically analyze an existing system for possible deficiencies; this approach has made tremendous advances in recent years, thanks to new techniques such as model checking and to progress in computer power. Clearly the best method, whenever possible, is to combine both approaches, taking advantage of design methods emphasizing a priori quality as we teach at ETH but then submitting the result to rigorous checking anyway.


About Bertrand Meyer

Bertrand Meyer studied in France and the USA. After working in industry, in 1985 he founded the Interactive Software Engineering Company in Santa Barbara and developed the Eiffel programming language. Meyer has been Professor of Software Engineering at ETH Zurich for four years and took over as chairman of the Computer Science Department a year ago. His research focuses on the development of methods, techniques, languages and tools for improving the quality of software(2).



What are the main reasons why computers still crash, even today?

Any crash of any program is in the end the result of a programming error. A typical example is the "buffer overflow", which results from trying to access an array or string element with a wrong index, as if I were to ask you for the n-th character of "ETH ZURICH" and n had the value 50. Silly as it may sound, this one error is involved in a majority of security attacks on the Web. It should never occur in the first place: even if the programmer makes a mistake, modern programming languages and tools catch it, but for all kinds of valid and less valid reasons they are not always used in practice.

Another major source of errors is complexity. It must be understood that modern operating systems, with tens of millions of lines of source code, are more complex than any other engineered system. In terms of their complexity they are like human systems, say a city, but in terms of their tolerance of error they are like engineering systems, say an airplane. In an airplane more or less everything must be right, but the complexity is manageable. In a city, with its people, buildings, cars, trams, police and constant events, the complexity is very high but the system tolerates faults; a single breakdown, say of a traffic light, usually doesn't paralyse the whole system. In big programs the complexity is comparable, but there is very little tolerance of faults. For example, Microsoft often complains that people blame Windows for many crashes that are in fact due to third-party drivers, the tens of thousands of little programs that provide the interface to devices from many vendors. One can argue that the operating system should be able to protect itself from driver failures, but the complexity is there. Computer scientists have tended to argue for simplicity as a way to reduce mistakes. It is true that some of the complexity is self-imposed, but a good part simply comes from the need to handle the outside world as it is.
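Meyer's "ETH ZURICH" example is easy to make concrete. The sketch below (a hypothetical helper written for illustration, not code from the interview) shows what a bounds-checked language does with an out-of-range index: instead of silently reading adjacent memory, as unchecked C array access would, Python raises an exception that can be caught and handled.

```python
def nth_character(s: str, n: int) -> str:
    """Return the character at 0-based index n of s.

    In C, s[n] with an out-of-range n silently reads past the end
    of the buffer: the classic buffer overflow. Python bounds-checks
    every access and raises IndexError instead.
    """
    return s[n]

text = "ETH ZURICH"            # 10 characters, valid indices 0..9
print(nth_character(text, 4))  # prints "Z"

try:
    nth_character(text, 50)    # Meyer's example: n = 50
except IndexError:
    print("index out of range, caught safely")
```

The point of the example is exactly Meyer's: the mistake still happens, but a language with runtime checks turns a silent memory corruption into a visible, recoverable error.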

What can happen when faulty software is used in critical processes?

By far the worst effect of faulty software is the one that doesn't make headlines: no one is killed, but programs crash, give wrong results, take too long to execute. The US National Institute of Standards and Technology recently estimated that improperly tested software causes a US$ 60 billion waste every year for US industry alone. Bad software, on the other hand, can and does kill. One of the worst but also best documented incidents involved the Therac-25 radiation therapy device, which in the 1980s killed patients by delivering massive radiation overdoses as a result of a software error. The crash of the maiden flight of the European Space Agency's Ariane 5 rocket launcher in 1996 cost hundreds of millions of dollars and was entirely due to the improper reuse of a software element from the previous iteration of the Ariane technology. Software is so involved at all levels of the modern world that any deficiency can have major effects. In fact, touch wood, we have been relatively lucky so far.


ETH Professor Bertrand Meyer: "One major reason for flaws in software is the increasing complexity of operating systems."

How do errors get into faulty software?

Not investing in the right tools; using older programming languages, out of intellectual laziness or mere habit; rushing to implementation rather than doing proper design; not exerting proper quality control; not enforcing close collaboration between the software developers and the future users or letting politics interfere with technical choices.

One of the most important conferences on "Verified Software" took place mid-October at ETH; what were the main results?

The conference that founded the field of software engineering as a whole, and largely introduced the term, was held in 1968 in Garmisch-Partenkirchen in Bavaria and remains to this day a legendary event. We have some hope that the Verified Software conference will play a similar role. It was designed as a kind of jolt, to make the software community aware that progress in both a priori and a posteriori methods has reached a stage when we can at least become really serious about software that is either fault-free or close to fault-free.

The conference was based on Tony Hoare's concept of a "Grand Challenge". Tell us about this challenge.

Tony Hoare is one of the best known figures in computer science, winner of the Turing Award and the Kyoto Prize. He originated many of the methods that make it possible to prove software correct, as well as inventing Quicksort, one of the most famous algorithms known. He also performed groundbreaking work in the area of parallel computation. After a long and prestigious career he could have simply retired, but instead moved to Microsoft Research and launched, as his next project, this Grand Challenge on Verified Software. For the past five years he has been traveling the world trying to convince everyone that the time has come to start a major coordinated effort to achieve a breakthrough in the quality of the software we give to the world. His goal is a 15-year effort and his model is the Human Genome Project. The Zurich conference was the culmination of this preparatory work and the launch of the Grand Challenge.

What contribution is your own research here at ETH making to the "Grand Challenge"?

My entire work, both before I joined ETH four years ago and since then, is devoted to better techniques of software development. The technique of "Design by Contract" makes it possible to build software out of elements equipped with precise definitions of their responsibilities, and to enforce these properties. Based on these ideas we follow both the a posteriori and the a priori roads.

A posteriori, we have developed an extensive testing framework, AutoTest, which tests software libraries completely automatically, at the push of a button, without a single test case being written; objects and values are generated by the framework. We have already found a number of bugs this way in existing libraries. A priori, we are extending these techniques to permit program proofs, in particular for complex object-oriented programs involving references. We also address the tricky but essential issue of parallel programming through a mechanism called SCOOP, which raises the level of abstraction to avoid errors.

The Eiffel language and method, which I have developed over the years, is a kind of concentrate of best practices in software engineering, designed to permit very high quality software, and it is indeed used in major defense and financial applications in the US and Europe. It is also a research vehicle, and we are offering the technology to the Grand Challenge as a basis for the development and verification of software. We also use it for teaching at many levels.

At the Department of Computer Science of ETH my group is not the only one doing research in verified software. I can cite the groups of Peter Mueller on extended static checking, David Basin on model-based security, Jean-Raymond Abrial on formal proofs, and Daniel Kroening on model checking, among others. This area has become very active at ETH, with several complementary approaches being pursued in parallel. We definitely hope, each in our own way, to play a leading role in addressing the Grand Challenge.
There is no doubt that having the launching event at ETH has brought attention to our work and helped position ETH as one of the key players.


Footnotes:
(1) Verified Software: Theories, Tools, Experiments (VSTTE) conference website: http://vstte.inf.ethz.ch/
(2) Bertrand Meyer's homepage: http://se.inf.ethz.ch/~meyer/







!!! This document is stored in the ETH Web archive and is no longer maintained !!!