Tuesday, 12 January 2010

Tales from Verification History

For anyone interested in the Verification controversy post below, my colleague Vijay recently did a really great presentation at the Oxford Comlab entitled Tales from Verification History. The talk discusses the background surrounding the current debate, and references all the relevant papers.

Slides for the presentation are available together with a transcript.

For a discussion of the “Social Processes and Proofs of Theorems and Programs” paper, see slide 48, and page 14 of the transcript (but I would thoroughly recommend reading the whole thing).

Verification controversy

My colleague, Vijay D'Silva, has an exceptional capacity for keeping abreast of literature, debate, and “who's who” in formal verification. He frequently tips off our formal verification group about interesting developments.

Yesterday, he alerted me to a wonderful piece of controversy in the verification world. Moshe Vardi, Editor-in-Chief of Communications of the ACM, recently published an editorial encouraging wider debate in Computing Science. His editorial starts with a reference to a famous paper of Richard A. De Millo, Richard J. Lipton and Alan J. Perlis: “Social Processes and Proofs of Theorems and Programs”, which Communications published in 1979. “Social Processes...” takes the view that formal verification is doomed to fail, for a variety of well-argued reasons. Vardi's view is that the article, 30 years on, looks rather misguided.

This editorial has certainly fulfilled Vardi's aim of sparking some debate, though actually on Richard Lipton's blog rather than in Communications - the surviving authors of "Social Processes..." have not taken his comments well!

I hadn't read this famous article, and did so today. It's wonderful – if you haven't read it, then please do so now! While I disagree that the article was misguided, Vardi is right that the verification community has moved on, and some of the arguments in “Social Processes...” do not apply to what we now think of as formal verification. This is not because De Millo et al. were wrong, it's because the goals of formal verification have changed.

I thought I'd discuss two particular points in this area that are important to me, as a proponent of pragmatic formal verification.

What we want to verify has changed

Back in 1979, the goal of verification was to show that a program “is correct”, i.e. meets its specification. The idea was that the requirements a program should satisfy would be formalised, then the program, as a monolithic unit, would be shown logically to satisfy the formal properties, via a formal proof.

There are many practical problems associated with doing such a proof, whether manually or automatically, even if the program's requirements have been formalised. However, the fundamental problem is the difficulty of formalising whole-program requirements for large pieces of software. This simply cannot be done for large applications like compilers or word processors. For simpler programs, a formalisation of requirements may be possible, but the formalisation might be extremely complex and hard to understand. It may be necessary to use temporal logic to state properties, in which case, in my opinion, the game is up: even moderately complex formulae in temporal logics are virtually impossible to understand intuitively.

The modern, pragmatic standpoint on this issue is much simpler. Limit the scope of formal verification. Instead of aiming to prove whole-program correctness, just try to determine whether an assertion at a particular program point can be violated. The crucial point here is that an assertion states something unambiguous about a low-level, but important, detail of a program. For example, that a value should remain within a certain range, or that two boolean flags should never simultaneously be true. Modern programmers write assertions all the time. Of course, assertions cannot be used to specify whole-program correctness for complex systems, but as De Millo et al. rightly point out, the notion of whole-program correctness is a dream anyway. Unfortunately, checking assertions is still undecidable in general. Nevertheless, automated techniques can go a long way in this area.

What we expect a verifier to tell us has changed

In the “Disbelieving Verifications” part of the paper, the authors argue that, if the programmer gets the message 'VERIFIED' from an automatic verification tool (if such a tool could indeed be built), this “mystical stamp of approval” would tell them very little. However, the authors note that in practice, the programmer would never get this answer: for, say, a 20,000-line program, they would always get 'NOT VERIFIED', since no sizeable program is ever perfect.

The techniques of 1979 did indeed aim simply to decide whether or not a program was correct. Many modern techniques do more than this, particularly model checking. The strength of a model checker is that, when asked to check whether, say, an assertion in a C program can ever be false, if the answer is “yes it can”, the tool also says “and here's how: ...”, and reports an execution sequence illuminating the bug. Unlike a mechanical correctness proof, these counterexamples can be understood by programmers, who can use them to debug their program.

If a programmer uses a verifier to reveal bug after bug in their program, via counterexamples, and then, after 20 bugs have been fixed, the verifier finally reports 'VERIFIED', then this stamp of approval does not seem so mystical. The programmer's confidence in the program has increased due to all the fixed bugs. The verification tool proved its worth by managing to find those bugs, and the fact that it cannot find any more suggests that something is right about the program!

Counterexamples relate to another major point in “Social Processes...”: that when a mathematician does a proof, they leap up and say “I've got it!”, and run and tell their colleagues, to whom they try to justify the proof. This leads to discussion, perhaps to a refutation of the proof, or increased confidence in it. This discussion and excitement is at the heart of the social process of mathematics. However, “formal verifications”, i.e. the sequences of logical steps used to prove a program correct, do not generate such excitement and do not form part of a social process: the results of one formal verification are of no interest to software practitioners in general, and one verification result provides no general insight into the correctness of whole classes of programs, nor even into syntactically similar ones.

However, one can get excited about counterexamples. When I take the time to understand a counterexample, the counterexample surprises me. I think: “Ah, I see!!!”. My assumptions about aspects of my algorithm are shattered and need to be re-thought. Sometimes I am so excited that I do run and tell my colleagues. (Sometimes I get so excited that I recount stories of nasty bugs I have found to my wife, who unfortunately has no idea what I'm talking about and quickly tries to change the subject!) From a given counterexample, one can think of a simpler counterexample representing the same bug. Then one can extract the essence of the bug which the counterexample represents. If this class of bugs proves very important, this might lead to designers of verification tools tuning their algorithms to be super-aware of bugs of this nature, so that their tools can quickly report easy-to-understand counterexamples. This is a social process.

In summary, it is true that formal verification has had success over the last 30 years, particularly due to advances in model checking. However, I think it's a massive leap for Vardi to suggest that formal verification has been so successful that it rebuts the many compelling arguments in De Millo, Lipton and Perlis's paper. It's fairer to say that the goals of verification have changed. And we in the verification community should not get too carried away with what we have achieved. Software developers are not yet using verification tools on a regular basis. They are not built into mainstream compilers like GCC. There is a long way to go.

Wednesday, 16 December 2009

DMA race checking

I am in a good mood this week because I (together with my co-authors Daniel Kroening and Philipp Ruemmer) just had a paper accepted to TACAS on checking DMA races in multicore software.

The paper is online here.

The inspiration for this work came via my experience at Codeplay Software Ltd., writing system-level code for the Cell processor. To achieve high performance on a Cell SPE, you need to write low-level DMA operations to pre-fetch data. Getting these operations correct is hard - it's easy to end up with two DMA get operations simultaneously writing to the same local buffer, and corrupting it.

Our paper first shows how bounded model checking can be used to automatically find bugs in the way DMA operations are used. Then it shows how a technique called k-induction (introduced in this paper) can be used to actually prove the absence of bugs for certain classes of programs, which is a lot harder to do.

We've made a tool, Scratch, which accepts Cell SPE programs involving DMA operations, and checks them to find, or prove absence of, DMA-related bugs. We used Scratch to find a bug in one of the examples that comes with the IBM Cell SDK (see report on Cell BE forum).

IBM also have a race check library that can be useful in finding DMA-related bugs, and finds them very quickly if they occur on a given test input. Scratch has the advantage that it uses symbolic execution, so can find more subtle bugs by considering program execution for all possible input values.

Scratch needs some more work before it will be of much use to Cell BE developers; in particular it doesn't yet handle all the vector language extensions which are used by SPE programmers.

However, it will in due course, so if you are interested in using Scratch please contact me and I'll prioritise development of particular features that you require to get started!

The bad news is that the Cell appears to be somewhat on a downer. However, similar issues arise when performing asynchronous memory copies in GPU programming, which is definitely on the up (especially with the OpenCL language), so I think there's a lot of cool stuff still to do in this area!

Tuesday, 1 December 2009

Having trouble getting the Murphi verifier to compile?

I've spent quite a bit of time today getting the Murphi verification tool to compile. I thought I'd write down how I did it in case this is of use to others.

First, I found that the version here did not compile due to flex/yacc errors. I don't know how to solve these, but luckily the version here (which has the same version number!) didn't present those problems.

Instead, it gave me this error:
error.C:58:20: error: stream.h: No such file or directory
In file included from mu.h:441,
                 from error.C:56:

It seems that stream.h comes from the days before C++ header files dropped the ".h" suffix. This header file has been deprecated for some time, but it seems that with gcc 4.3.3, which I have, it is not even provided.

It turns out that GCC 4.2.2 comes with a folder called "backward", that has "stream.h" and related header files. Armed with these files (which I provide online), I was able to get Murphi to compile OK by changing one line in the makefile from:

CFLAGS = -O4 # options for g++

to

CFLAGS = -O4 -I /home/scratch/gcc4.2.2_backward/backward # options for g++

(this is where I put the "backward" files).

Happy Murphi-ing!

Wednesday, 25 November 2009

Northern Concurrency Workshop

I am just back from the Northern Concurrency Workshop, which was organised by Joey Coleman at Newcastle University.
It was a really, really fun event. The talks were largely dominated by separation logic, which I am only just getting into, so I would say I understood less than 30% of the speakers' words. Nevertheless, most of the other 70% sounded tantalisingly interesting even though it was over my head! I gave a more practical talk on applying model checking to multicore programs with DMA.
What I liked about the workshop is that despite some pretty big names in concurrency theory being present, the atmosphere was very relaxed. There was a lot of friendly discussion; it really did have the feel of a workshop, rather than just a series of talks.
The standout presentation for me was "Verifying malloc", given by John Wickerson from University of Cambridge. It was really well explained, and so obviously of practical importance.
It is expected that there will be another workshop in Cambridge in September 2010, so I look forward to that!

Wednesday, 14 October 2009

Glasgow

This weekend I had a flying visit to Glasgow for a session with my band, and to see a musical put on by a company which my wife Chris used to be a member of. Apologies to friends of mine in Glasgow who I didn't get a chance to see -- next time hopefully I'll be up for longer.

It was wonderful to be back in Glasgow. Maybe it's just because I know it, but the city seems so fluid and easy to get around in. I drove to Byres road for a coffee with my mother, and effortlessly slipped into a space where you can park free for 30 minutes. I drove from there to Berkeley studios via the Clydeside expressway which is smooth and fast, and then when I needed to be in the west end a bit longer I easily found somewhere free to park just a few streets away. Maybe I could do this in Oxford if I knew it better, but my experience of driving in Oxford so far has been awful!

I met up with the Latonic guys for drinks after seeing the musical, and we went to a really cool basement bar on Bath Street, can't remember the name, where our mate Mehdi was DJing. I left the bar around 2, and then had the most amazing experience -- I wanted some food and was fully prepared to be ripped off at some awful chippy like Mr Chips, when I saw that Greggs was open. Yes, Greggs the baker, at 2am! So instead of paying £3 for a portion of under-cooked chips served to me by a weary chip shop owner, I paid £1.60 for two cheese and onion pasties, which are exactly what one wants after several drinks, and was served by a very friendly member of Greggs staff. It was amazing!

It seems like some other people have been talking about this too. I suppose it might be bad news for the chip shops, and maybe bad news for the Greggs staff, but it's OK by me!

Tuesday, 29 September 2009

Bash hates enthusiasm

I often feel good when I get to "svn commit" something. I want to type something like:

svn commit -m "Example ready to go!"

But this enthusiasm is not welcomed by the bash shell, which says:

bash: !": event not found

So I have to type:

svn commit -m "Example ready to go\!"

and reaching for the '\' key just doesn't seem spontaneous. Especially on a Mac, where it's in the wrong bloody place!

So I end up being firm, and typing the ploddingly unenthusiastic command:

svn commit -m "Example ready to go."
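(For completeness: single quotes do keep bash's hands off the '!', as does switching history expansion off with 'set +H'. Here is the workaround, demonstrated with echo rather than svn. Somehow neither option feels spontaneous either.)

```shell
# Single quotes suppress history expansion, so the '!' survives:
echo 'Example ready to go!'

# Or disable history expansion for the session (it is only on by
# default in interactive shells anyway):
set +H
echo "Example ready to go!"
```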