22–24 July 2015, L'Aquila, Italy
To reduce complexity, general-purpose modeling languages strive for abstraction. But what is the right level of abstraction? Design-oriented models capture the logical or physical organization of software, abstracting from its dynamic behavior. Foundational models capture core features in a way suitable for meta-theory, but rely on cumbersome encodings of other features. Specifications close to actual code get obfuscated by the low-level intricacies of specific implementations.
This talk reports on research on abstract behavioral specifications, aiming for a middle ground in the gap between design, implementation, and foundations. We consider executable models which are faithful to the control and data flow of concurrent and distributed object-oriented systems, yet abstract enough to facilitate formal verification. We then consider how novel technologies such as virtualization and cloud computing reintroduce low-level concerns at the abstraction level of the models. Deployment decisions form an integral part of resource-aware, virtualized applications. We discuss how these technologies raise new challenges for model-based analysis.
Einar Broch Johnsen is Professor at the Department of Informatics, University of Oslo. His research interests are in the field of software development and analysis, spanning from programming languages and methodology, via system specification and modeling, to formal methods and associated theory. Prof. Johnsen is particularly interested in the analysis of concurrent and distributed systems, resource awareness and resource management, and mechanisms for software evolution and reuse. In recent years, his research has increasingly focused on abstract behavioral specification formalisms and their use for model-based verification of real systems. Prof. Johnsen is currently coordinating the European research project Engineering Virtualized Services (Envisage), which develops model-based verification techniques to predict the behavior of services deployed on the cloud.
The C and C++ languages were originally designed without concurrency support, but the recent revision of the C and C++ standards introduced an intricate but precise semantics for threads; today's C and C++ compilers, whose optimisers were initially developed in the absence of any well-defined concurrency memory model, are being extended to support this new concurrency standard. This is a fantastic opportunity to put to work all our tools to formalise, test, and reason about large-scale semantics and software.
In this talk, after recalling the C and C++ memory models, we will explore in a theorem prover the correctness of compiler optimisations and present simple necessary conditions that can be used as a reference by compiler implementers. As an application, we will show how this theory enables building an automatic compiler fuzzer that hunts “concurrency compiler bugs”: subtle wrong code generation bugs which are observable only when the miscompiled functions interact with concurrent contexts.
Perhaps surprisingly, we will also show that by leveraging the semantics of low-level relaxed atomic accesses (which let programmers take full advantage of weakly-ordered memory operations), it is possible to build counterexamples to several common source-to-source program transformations (such as expression linearisation and “roach motel” reorderings) that modern compilers perform and that are deemed to be correct. We will evaluate a number of possible local fixes, some strengthening and some weakening the model, and perhaps conclude that, currently, there is no truly satisfactory proposal for the semantics of a general-purpose shared-memory concurrent programming language.
Francesco Zappa Nardelli is a Researcher at Inria Paris – Rocquencourt. His research interests focus on concurrent computation on top of relaxed memory models, ranging from hardware models of modern architectures to high-level programming language specification and compilation. He received his Ph.D. from Université Paris 7 in 2003. Since then, he has worked on language design for distributed computation, type systems for integrating typed and untyped code in scripting languages, and tool support for semantics (including the Ott tool).