After a long period of inactivity, I am pleased to report signs of life for two of my papers -- and a tech report, besides.

My paper with Ran Canetti, Universally Composable Symbolic Analysis of Mutual Authentication and Key Exchange Protocols, has been accepted for publication by the Journal of Cryptology. The other, Soundness and Completeness of Formal Encryption: the Cases of Key-Cycles and Partial Information Leakage, was accepted by the Journal of Computer Security a while ago but was finally published this month. I should note, though, that I can only claim half-authorship on that second paper. It combines two separate but related conference papers into one journal paper, as is common in my field, and I was an author of only one of them. My co-authors (Pedro Adao, Gergei Bana, and Andre Scedrov) wrote the other conference paper without me and were very generous in allowing me full authorship credit on the resulting journal paper.

Both of these papers continue to explore one of my main research areas: the relationship between the formal and computational schools of analysis for cryptographic protocols. A full explanation of this research deserves its own page (note to self: write a 'Research' page for this site), but the short form is this: we need to be able to prove that cryptographic protocols (like SSH, TLS/SSL, and Kerberos) are secure. We cryptographers have actually developed two different and independent ways of doing this. One (the 'formal' approach) can be automated (which is good) but requires a lot of strong assumptions (which is bad). The other (the 'computational' approach) requires a lot of tedious manual work (bad) but is extremely reliable and much more convincing (good). So here's the question: will these two approaches ever disagree? This is an interesting question in its own right (well, to me and at least a half dozen other people on the planet), but it also has practical impact. If the answer is 'no', then we can get the best of both worlds: an automated approach that yields very strong and convincing results.
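To make the 'formal' side a little more concrete, here is a toy sketch of the kind of reasoning those automated tools do. This is my own illustration, not code from either paper: the term shapes and the little key-chain scenario are made up for the example. Messages are symbolic terms, and a small fixed-point computation derives everything a Dolev-Yao attacker can learn from what it observes on the wire.

```python
# Toy illustration of automated 'formal' analysis (my own sketch, not from
# either paper). Terms are tuples: ("enc", payload, key), ("key", name),
# ("atom", name). enc(m, k) can be opened only if the attacker knows k.

def attacker_closure(knowledge):
    """Repeatedly apply projection and decryption rules to a set of
    symbolic terms until nothing new can be derived."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for term in list(known):
            new = set()
            if term[0] == "pair":                      # split pairs
                new |= {term[1], term[2]}
            if term[0] == "enc" and term[2] in known:  # decrypt if key known
                new.add(term[1])
            if not new <= known:
                known |= new
                changed = True
    return known

# The attacker sees {secret}_k1 and {k1}_k2 on the wire, and knows k2:
observed = [("enc", ("atom", "secret"), ("key", "k1")),
            ("enc", ("key", "k1"), ("key", "k2")),
            ("key", "k2")]
derived = attacker_closure(observed)
print(("atom", "secret") in derived)   # True: the secret leaks via the key chain
```

Real tools are vastly more sophisticated than this, of course, but the appeal is the same: the whole analysis is a mechanical search, so a computer can do it.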

The answer, of course, is 'sometimes yes, sometimes no'. The first paper shows that, except for a few weird corner-cases, the two approaches agree. In fact, we show that this agreement is so strong that we can actually do the automated analyses much faster than anyone had previously thought. It's a very nice result, and I'm proud of it. But those corner-cases remain, and we focus on them in the second paper. There, we discuss a few of the actual 'gaps' between the two analysis techniques, and how we might be able to handle them. Really, though, the corner-cases are pretty contrived and don't actually show up in real-world protocols.

I should also mention that my NPS colleagues and I put out a tech report, Implementing AES on the CellBE. The CellBE is a nine-processor chip made by IBM and used in the PS3 to speed up rendering. It's a general-purpose chip, though, with some special features for parallel computing. To demonstrate this, IBM published benchmarks showing how fast it could run the AES cipher. My colleagues at NPS and I wanted to see if we could beat those benchmarks, and we did. (We did not, I note, have access to IBM's CellBE implementation of AES. Go, us.)
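For the curious, the basic idea is that AES over independent 16-byte blocks parallelizes very naturally. The sketch below is my own toy illustration on a commodity machine, not the code from the tech report: the workers here are ordinary processes and pycryptodome stands in for a hand-tuned AES kernel, whereas on the CellBE the workers would be the SPE cores and the kernel would be vectorized by hand.

```python
# Toy sketch of the parallelization idea (not our CellBE code): split a
# buffer into slices of independent AES blocks and encrypt them in parallel.
import os
from multiprocessing import Pool
from Crypto.Cipher import AES   # pycryptodome, assumed installed

KEY = bytes(16)                 # toy all-zero 128-bit key

def encrypt_slice(chunk: bytes) -> bytes:
    """Encrypt one worker's slice in ECB mode (blocks are independent)."""
    return AES.new(KEY, AES.MODE_ECB).encrypt(chunk)

if __name__ == "__main__":
    data = os.urandom(16 * 4096)                 # 4096 AES blocks of plaintext
    workers = 4
    step = len(data) // workers                  # stays 16-byte aligned here
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with Pool(workers) as pool:
        ciphertext = b"".join(pool.map(encrypt_slice, chunks))
    print(f"encrypted {len(ciphertext)} bytes with {workers} workers")
```

The real work in the tech report was in the per-core AES kernel and the data movement between cores, not in the splitting itself, but this is the overall shape of the computation.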