
PLDI 2013 trip report, part 3

Writing up reviews for all the talks I attended turns out to be a rather demotivating amount of work, so from here on out I’ll just mention the talks for which I have comments to make. (Eventually I have to ship something.) I’ll still provide my notes for the talks I don’t comment on.

Keynote

Programming Languages in Security (Fred Schneider)

gratis PDF, my notes page 1, page 2

The second day’s keynote was given by Fred Schneider and examined the role of programming languages in computer security. The talk began by pointing out the asymmetry between attackers and defenders: attackers only need to find one vulnerability to win, while defenders must stop every attack. Computer security currently centers on engineering specific defenses against specific attacks. For example, marking regions of memory nonexecutable defends against code injection via stack and heap overflows, and address space layout randomization (ASLR) defends against return-to-libc attacks. Both of these defenses can be bypassed by a motivated attacker (e.g., format string vulnerabilities can leak enough information to bypass ASLR), leading to further defenses.

Defenses would be more efficient if they provably addressed entire classes of attacks rather than being engineered for specific ones, but proving properties such as “defenses in class D enforce policies in class P despite attacks in class A” (example from paper) requires what Fred terms a “science” of security (in the sense of a set of laws with predictive ability), which we do not yet have. In Fred’s model, such a science of security needs to define the interactions between policies, attacks and defenses.

Policies are properties of a system that define what actions the system must or must not perform. For example, policies for a web server might require the server to provide service to all users who successfully authenticate (a liveness policy), but not allow users to detect what other users are online or access other users’ data (an information flow policy). Fred described two existing formalizations of policy: trace properties (safety and liveness), which we know how to enforce (via proof-checking or reference monitors) but which aren’t strong enough to define useful policies such as information flow, and hyperproperties, which are strong enough but which we can’t efficiently check. (See the linked paper above for an overview in the sidebars and the reference list below for details.)

Attacks are sequences of interactions with a system’s interfaces that could cause it to violate its policies. A system’s interfaces include both obvious inputs and outputs, such as keyboards, displays and network connections, and side channels such as the system’s electromagnetic radiation. Whether a given set of interface interactions is an attack depends on the policies being enforced, so we can define attacks in terms of the policies they violate, though this is necessarily interface- (and thus system-)dependent.

Finally, defenses prevent attacks from violating policy. There are many ways to categorize defenses. One classification distinguishes static defenses, such as type checkers or rewriters, that guarantee policies (the paper calls this language-based security) from run-time defenses that aim to prevent an attacker from accessing an interface. Defenses can also be viewed as relocating the trusted portion of a program (the part responsible for enforcing policy), preventing attacks that target another part of the program. In the keynote, Fred discussed two specific defenses, reference monitors and obfuscators.

Reference monitors intercept a program’s interactions with an interface and can modify or deny them. Operating systems that enforce file system permissions are probably the most common reference monitors. Through their ability to deny interactions, reference monitors can enforce safety properties (for example, that users cannot access other users’ files). However, they are unable to enforce e.g. information flow policies, because information flow is not a safety property. The best a reference monitor could do is enforce an overapproximating safety policy; for example, to prevent information flow between files X and Y, a monitor could prevent opening Y for writing after X has been opened for reading (even if information wouldn’t flow between X and Y, the monitor cannot tell). On the other hand, reference monitors can be modeled as automata that recognize program traces, then be partially evaluated and inlined at the check site for better performance. From the trust relocation viewpoint, a reference monitor relocates trust from the program to the monitor itself.
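
To make the overapproximation concrete, here is a toy monitor of my own (not from the keynote) that enforces that policy as a safety property by tracking a single bit of state:

    final class FlowMonitor {
        // Conservative safety policy standing in for "no information flow
        // from X to Y": once X has been opened for reading, deny opening Y
        // for writing, even if no information would actually flow.
        private boolean xOpenedForReading = false;

        void onOpenForRead(String path) {
            if (path.equals("X"))
                xOpenedForReading = true;
        }

        void onOpenForWrite(String path) {
            if (path.equals("Y") && xOpenedForReading)
                throw new SecurityException("denied: possible flow from X to Y");
        }
    }

An inliner could partially evaluate these checks into the program at each open call site; this is the trust relocation Fred described, since we then need to trust only the monitor’s two methods rather than the whole program.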

Obfuscators create a secret an attacker needs to know in order to mount an attack. Obfuscators don’t directly address vulnerabilities, but make them more difficult to exploit. Obfuscators range from full semantics-preserving random binary rewriters or randomized compilers down to less invasive transformations such as address space layout randomization. Obfuscators address another aspect of the attacker-defender asymmetry as well: some attacks will only be effective against some program morphs (randomly-obfuscated variants), allowing policies like “a majority of the servers are not subverted” to be enforced.

Fred concluded by arguing against the Whorfian view of programming language security. Rather than thinking of languages having inherent security properties (“C++ is insecure”), we can view languages as a means to express laws about policies, attacks and defenses.

This was an excellent keynote that exposed me to a new way of thinking about security. In particular, the characterization of defenses as trust relocation helped me generalize my thinking about static analysis and proofs about code. I’ve been exposed to this idea in the context of Java bytecode verification, but now I feel I can apply it to my own systems.

Fred referenced many papers during his keynote. While he doesn’t post his slides online because he feels they aren’t representative of his talks (he doesn’t read his slides), he did collect the references into a reading list in response to my e-mail.

Talks

Taming Compiler Fuzzers (Compiler Validation session)

ACM DL, gratis PDF, John Regehr’s blog post, my notes

John Regehr’s blog post summarizes this article well enough already, so I won’t repeat him here.

After the talk, I had a brief stairwell conversation with someone (whose name I didn’t manage to catch) who complained that the fuzzer tamer’s rankings aren’t useful because they can’t distinguish between important and unimportant bugs. Unfortunately, ranking bugs by importance is a research topic in itself, and a quick Google Scholar search didn’t turn up any papers claiming to solve this problem. In general, silent wrong-code bugs (the code compiles but computes the wrong result) rank above wrong-code bugs that crash at run time, which in turn rank above compiler crashes; the other major factor is how many users will encounter a given bug. A compiler crash that breaks a popular library will probably be fixed faster than a silent wrong-code bug that affects one SPEC benchmark with a particular optimization used only for benchmarking. The fuzzer tamer can’t make those decisions on its own, but it can sort the test cases so humans can prioritize them more efficiently.

Techniques from this paper might be useful in finding good counterexamples in counterexample-guided inductive synthesis (CEGIS). In this context, “good counterexamples” are input-output pairs that prune as much of the search space as possible, reducing the number of counterexamples each candidate program needs to be tested against. In some sense, CEGIS is the opposite of fuzz testing: the fuzz tester discovers places where the program deviates from its specification, while the synthesizer discovers features the program must have to meet its specification. During the poster session I tried to put the author in touch with Armando.
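
As a deliberately trivial illustration of that loop, here is a toy CEGIS sketch of my own (the specification and candidate space are placeholders): each counterexample input rules out every candidate that disagrees with the spec on it.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.function.IntUnaryOperator;

    final class ToyCegis {
        public static void main(String[] args) {
            IntUnaryOperator spec = x -> x + 7;  // black-box specification
            List<Integer> counterexamples = new ArrayList<>(Arrays.asList(0));

            candidates:
            for (int c = -100; c <= 100; c++) {  // candidate programs f(x) = x + c
                final int cc = c;
                IntUnaryOperator candidate = x -> x + cc;
                // Reject the candidate if it disagrees with the spec on any
                // counterexample found so far.
                for (int x : counterexamples)
                    if (candidate.applyAsInt(x) != spec.applyAsInt(x))
                        continue candidates;
                // "Verify" by searching for a fresh counterexample input.
                for (int x = -1000; x <= 1000; x++) {
                    if (candidate.applyAsInt(x) != spec.applyAsInt(x)) {
                        counterexamples.add(x);  // a good one prunes many candidates
                        continue candidates;
                    }
                }
                System.out.println("synthesized f(x) = x + " + c + " using "
                        + counterexamples.size() + " counterexample(s)");
                return;
            }
            System.out.println("no candidate in range matches the spec");
        }
    }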

AutoSynch: An Automatic-Signal Monitor Based on Predicate Tagging

ACM DL, gratis PDF, slides, my notes

AutoSynch extends Java monitors to add implicit signaling through a preprocessor and library. The programmer specifies the predicate with a waituntil statement (e.g., waituntil count > 0;), which the preprocessor transforms into Java code that uses the AutoSynch library. The major performance problem with implicit signaling is context switching. AutoSynch avoids context switching when evaluating predicates by making them into closures that the signaling thread evaluates on the waiters’ behalf. AutoSynch also avoids “signaling storms” from notifyAll (which will wake every waiting thread even though only one can acquire the monitor) by waking one thread whose predicate is satisfied when releasing the monitor (termed relay signaling). Finally, AutoSynch uses predicate tagging to reduce the number of predicates evaluated when looking for a thread to wake, though the performance impact is mixed. Overall, AutoSynch’s implicit signaling is nearly as fast as explicit signaling, and sometimes faster when notifyAll calls can be avoided.
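
The core mechanism is straightforward to sketch. Below is my own toy implicit-signal monitor, not AutoSynch’s implementation: waiters register their predicates as closures, and the thread releasing the monitor evaluates those closures itself, waking at most one thread whose predicate currently holds.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;
    import java.util.function.BooleanSupplier;

    final class ImplicitMonitor {
        private final ReentrantLock lock = new ReentrantLock();
        private final List<Waiter> waiters = new ArrayList<>();

        private static final class Waiter {
            final BooleanSupplier predicate;
            final Condition condition;
            Waiter(BooleanSupplier predicate, Condition condition) {
                this.predicate = predicate;
                this.condition = condition;
            }
        }

        public void enter() { lock.lock(); }

        // Called while inside the monitor; blocks until the predicate holds.
        public void waitUntil(BooleanSupplier predicate) throws InterruptedException {
            Waiter w = new Waiter(predicate, lock.newCondition());
            waiters.add(w);
            try {
                while (!predicate.getAsBoolean())
                    w.condition.await();
            } finally {
                waiters.remove(w);
            }
        }

        public void exit() {
            // Relay signaling: evaluate the waiting predicates as closures
            // and wake at most one satisfied waiter, instead of waking every
            // thread (notifyAll-style) to check its own predicate.
            for (Waiter w : waiters) {
                if (w.predicate.getAsBoolean()) {
                    w.condition.signal();
                    break;
                }
            }
            lock.unlock();
        }
    }

A client would write monitor.enter(); monitor.waitUntil(() -> count > 0); … monitor.exit(); where AutoSynch’s preprocessor would instead accept waituntil count > 0; directly.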

The Google Guava Java library already includes an implicit signaling system quite similar to AutoSynch in its Monitor class (Javadoc, source). Instead of a waituntil statement, users call monitor.enterWhen passing in an instance of the Monitor.Guard functional interface whose isSatisfied method evaluates the predicate. The Monitor takes care to use signal in place of signalAll when possible, including in the presence of interrupts. Thus Guava’s Monitor avoids context switches to evaluate predicates and avoids signaling storms, the major benefits of AutoSynch. Guava’s Monitor doesn’t support predicate tagging (since the Monitor.Guard closures are opaque to the Monitor), but predicate tagging isn’t always a performance improvement anyway. AutoSynch does provide a much nicer syntax, but at the cost of complicating the build process with a preprocessor, and Monitor will become much more usable when the anonymous inner classes used to implement guards are replaced with Java 8 closures. Monitor provides nearly all the benefits of AutoSynch in standard Java.
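
For comparison, here is the count > 0 example written against Guava’s actual Monitor API (the bounded counter around it is my own):

    import com.google.common.util.concurrent.Monitor;

    final class GuavaCounter {
        private final Monitor monitor = new Monitor();
        private int count = 0;
        // The guard plays the role of AutoSynch's waituntil predicate.
        private final Monitor.Guard nonEmpty = new Monitor.Guard(monitor) {
            @Override public boolean isSatisfied() {
                return count > 0;
            }
        };

        void put() {
            monitor.enter();
            try {
                count++;
            } finally {
                monitor.leave();  // hands off to a waiter whose guard is satisfied
            }
        }

        void take() throws InterruptedException {
            monitor.enterWhen(nonEmpty);  // blocks until count > 0
            try {
                count--;
            } finally {
                monitor.leave();
            }
        }
    }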

Notes from other talks

CLAP: Recording Local Executions to Reproduce Concurrency Failures, Dynamic Determinacy Analysis, General Data Structure Expansion for Multi-threading, Concurrent Libraries with Foresight, Thresher: Precise Refutations for Heap Reachability, Transit: Specifying Protocols with Concolic Snippets