Abstracts

Eric Bach

A Mathematical Prologue to the Study of Electronic Encryption
Exemplary Designs from the Golden Age of Shift Register Cryptography
Many computer science researchers came to cryptography in the 1980s, when the public-key explosion was in full swing. The beautiful development of asymmetric methods, based on ideas from classical number theory, was too striking to ignore. But as teachers of cryptography courses, we may also wonder how all-electronic encryption technology developed between the end of World War II (1945) and the first public-key publications (around 1975). I contend that there is now enough open information, in patents no longer withheld and in ideas rediscovered by outsiders, that one can begin to make sense of this period.
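As a rough illustration of the kind of device behind the title, here is a minimal sketch of a linear feedback shift register used as a keystream generator; the register length, taps, and seed are arbitrary toy choices, not a historical design from the period.

    # Toy linear feedback shift register (LFSR) keystream generator.
    # Register length, taps, and seed are arbitrary illustrative choices,
    # not a historical design.

    def lfsr_keystream(seed_bits, taps, n):
        """Produce n keystream bits from a Fibonacci-style LFSR.
        seed_bits: initial register contents (list of 0/1 bits)
        taps:      positions XORed together to form the feedback bit
        """
        state = list(seed_bits)
        out = []
        for _ in range(n):
            out.append(state[-1])              # emit the last stage
            feedback = 0
            for t in taps:
                feedback ^= state[t]           # linear feedback over GF(2)
            state = [feedback] + state[:-1]    # shift and insert feedback
        return out

    def stream_xor(bits, seed_bits, taps):
        """XOR data with the keystream; encryption and decryption coincide."""
        ks = lfsr_keystream(seed_bits, taps, len(bits))
        return [b ^ k for b, k in zip(bits, ks)]

    if __name__ == "__main__":
        seed, taps = [1, 0, 0, 1, 1], [0, 2]
        message = [1, 0, 1, 1, 0, 0, 1, 0]
        cipher = stream_xor(message, seed, taps)
        assert stream_xor(cipher, seed, taps) == message
        print(cipher)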

Joachim von zur Gathen

Two complexity methods in cryptography: reductions, and structured vs. general models
Reductions arose in computability theory and were adapted by Cook and Karp to the setting of complexity theory. They are arguably its most important tool, underlying NP-completeness and many other questions.
These reductions lead to a rather good conjectural understanding of questions like P vs. NP, but we cannot prove superlinear lower bounds for problems in P on general models of computation like Turing machines. Restricting calculations to structured models, where one can only perform operations that naturally arise from the problems, sometimes gives good lower bounds. Thus we have matching upper and lower bounds for sorting or polynomial arithmetic in the appropriate models.
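The sorting case is the standard illustration of such a matching bound: in the comparison model, an algorithm is a binary decision tree that must distinguish all n! input orderings, so its depth d satisfies

    2^d \ge n! \quad\Longrightarrow\quad d \;\ge\; \log_2 n! \;=\; \sum_{k=1}^{n} \log_2 k \;\ge\; \tfrac{n}{2}\log_2\tfrac{n}{2} \;=\; \Omega(n \log n),

and mergesort sorts with O(n log n) comparisons, so the upper and lower bounds match in this model.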
In this survey talk, we look at the transfer of these methods to cryptography. Reductions are more difficult to use than in the complexity setting, but yield some satisfactory results. Structured models provide pleasant results for discrete logarithms, and interesting contradictory conjectures for the relation between factoring integers and breaking the RSA cryptosystem.

Ueli Maurer

Abstract models of computation and complexity lower bounds
Computational security proofs in cryptography, without unproven intractability assumptions, exist today only if one restricts the computational model. For example, one can prove a lower bound on the complexity of computing discrete logarithms in a cyclic group if one considers only generic algorithms, which cannot exploit the properties of the representation of the group elements. A simple abstract model of computation is proposed which captures such reasonable restrictions on the power of algorithms. Several instantiations of the model and the corresponding lower bounds are discussed, including different flavors of generic algorithms in groups.
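To make the restriction concrete, here is a toy sketch for orientation (not the abstract model proposed in the talk): group elements are opaque labels, an algorithm may only compose them and test equality, and baby-step giant-step then finds a discrete logarithm with on the order of sqrt(q) oracle calls, matching the known generic lower bound.

    # Toy "generic algorithm" setting: a cyclic group of prime order q whose
    # elements are handed out as random labels, so the representation cannot
    # be exploited.  This is an illustrative sketch, not the model of the talk.

    import math
    import secrets

    class GenericGroupOracle:
        def __init__(self, q):
            self.q = q
            self._label = {}   # exponent -> label
            self._exp = {}     # label -> exponent

        def _handle(self, e):
            e %= self.q
            if e not in self._label:
                lab = secrets.token_hex(8)
                self._label[e] = lab
                self._exp[lab] = e
            return self._label[e]

        def identity(self):
            return self._handle(0)

        def generator(self):
            return self._handle(1)

        def instance(self, x):
            """Challenger-side helper: create g^x, whose logarithm is sought."""
            return self._handle(x)

        def op(self, a, b):                # the only operations the algorithm
            return self._handle(self._exp[a] + self._exp[b])

        def eq(self, a, b):                # may use: composition and equality
            return a == b                  # (each element has a unique label)

    def power(G, a, e):
        """Compute a^e with the group operation only (square and multiply)."""
        acc, base = G.identity(), a
        while e:
            if e & 1:
                acc = G.op(acc, base)
            base = G.op(base, base)
            e >>= 1
        return acc

    def generic_bsgs(G, g, h, q):
        """Find x with g^x = h using only op/eq: about 2*sqrt(q) oracle calls."""
        m = math.isqrt(q) + 1
        baby, cur = {}, G.identity()
        for j in range(m):                 # baby steps: labels of g^0 .. g^(m-1)
            baby[cur] = j
            cur = G.op(cur, g)
        giant = power(G, g, q - m)         # g^(-m) in a group of order q
        y = h
        for i in range(m):                 # giant steps: h * g^(-im)
            if y in baby:
                return (i * m + baby[y]) % q
            y = G.op(y, giant)
        return None

    if __name__ == "__main__":
        q = 1009                           # toy prime order
        G = GenericGroupOracle(q)
        g, h = G.generator(), G.instance(777)
        assert generic_bsgs(G, g, h, q) == 777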
The equivalence of breaking Diffie-Hellman and discrete logarithms
We demonstrate a reduction of the discrete logarithm (DL) problem to the computational Diffie-Hellman (CDH) problem in the same group. The reduction is generic and applies to any group. There may be some group orders for which the reduction is not efficient, and this leads to an interesting number-theoretic conjecture which would close the gap. One can construct groups for which CDH and DL are equivalent.
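As background on why a Diffie-Hellman oracle is algebraically powerful (a standard observation, not a summary of the proof): writing DH(g^x, g^y) = g^{xy}, one can do arithmetic on hidden exponents,

    g^{x+y} = g^{x} \cdot g^{y}, \qquad g^{xy} = \mathrm{DH}(g^{x}, g^{y}), \qquad g^{x^{2}} = \mathrm{DH}(g^{x}, g^{x}),

so from g^x alone one can compute g^{P(x)} for any polynomial P with known integer coefficients; reductions of DL to CDH operate on such implicit representations of the unknown exponent.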
Breaking RSA generically is equivalent to factoring
We show that a generic ring algorithm for breaking RSA in Z_N can be converted into an algorithm for factoring the corresponding RSA modulus N. Our results imply that any attempt at breaking RSA without factoring N must be non-generic, and hence must manipulate the particular bit representation of the input in Z_N. This provides new evidence that breaking RSA may be equivalent to factoring the modulus. Joint work with Divesh Aggarwal.
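To fix what "generic" means here, a toy sketch (not the formal model of the paper): a generic ring algorithm handles elements of Z_N only through ring operations and equality tests on opaque handles, so it can evaluate the RSA map x -> x^e mod N, but it never sees the bit representation of x.

    # Toy model of a generic ring algorithm over Z_N: elements are opaque
    # labels, and the algorithm may only add, subtract, multiply, and test
    # equality.  Illustrative sketch only; toy parameters below.

    import secrets

    class GenericRingOracle:
        def __init__(self, N):
            self.N = N
            self._label, self._val = {}, {}

        def _handle(self, v):
            v %= self.N
            if v not in self._label:
                lab = secrets.token_hex(8)
                self._label[v], self._val[lab] = lab, v
            return self._label[v]

        def one(self):
            return self._handle(1)

        def input(self, x):                 # challenger-side: the hidden input
            return self._handle(x)

        def add(self, a, b): return self._handle(self._val[a] + self._val[b])
        def sub(self, a, b): return self._handle(self._val[a] - self._val[b])
        def mul(self, a, b): return self._handle(self._val[a] * self._val[b])
        def eq(self, a, b):  return a == b  # one label per residue

    def generic_rsa_map(R, x, e):
        """x -> x^e mod N via square-and-multiply, using ring operations only."""
        acc, base = R.one(), x
        while e:
            if e & 1:
                acc = R.mul(acc, base)
            base = R.mul(base, base)
            e >>= 1
        return acc

    if __name__ == "__main__":
        N, e, x = 55, 3, 7                  # toy parameters (N = 5 * 11)
        R = GenericRingOracle(N)
        c = generic_rsa_map(R, R.input(x), e)
        assert R.eq(c, R.input(pow(x, e, N)))   # challenger's check: 7^3 mod 55 = 13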

Rene Peralta

Small Circuits for Boolean Functions
We consider the problem of building efficient circuits using arithmetic modulo 2. This problem is highly intractable for any meaningful measure of “efficiency”. Thus, we can only hope to develop heuristics. I will discuss known bounds on the multiplicative complexity of functions, i.e., the number of GF(2) multiplications (AND gates) required to compute them. Some of these results are constructive, and thus can be used in the first step of our circuit optimization method.
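A standard toy example of multiplicative complexity (not a result from the talk): the majority of three bits, MAJ(x,y,z) = xy XOR yz XOR xz, can be computed with a single AND gate, since MAJ(x,y,z) = ((x XOR y) AND (y XOR z)) XOR y.

    # Majority of three bits with one AND gate over GF(2); standard fact,
    # verified here by exhaustive check over all inputs.
    from itertools import product

    def maj_naive(x, y, z):
        return (x & y) ^ (y & z) ^ (x & z)      # three AND gates

    def maj_one_and(x, y, z):
        return ((x ^ y) & (y ^ z)) ^ y          # a single AND gate

    assert all(maj_naive(*b) == maj_one_and(*b) for b in product((0, 1), repeat=3))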
The second part of the talk considers the problem of optimizing the linear parts of a circuit. In this problem, we are given a binary m-by-n matrix M, and the task is to build a circuit that computes Mx (where x is an n-bit input). Most work in this area makes an implicit assumption that turns out to be false. When one sheds this assumption, it becomes clear that new heuristics are needed. We have coded one such heuristic; I will report on how it fares compared to traditional methods.
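For orientation, here is a minimal sketch of one classical baseline for this linear-circuit problem: a greedy "most frequent pair" heuristic in the spirit of Paar's algorithm, which repeatedly computes the XOR of the pair of signals shared by the most rows and reuses it. This is a standard cancellation-free baseline, not the new heuristic reported in the talk.

    # Greedy shared-pair heuristic for computing Mx with few XOR gates.
    # Baseline sketch only; the example matrix is an arbitrary toy choice.
    from itertools import combinations

    def greedy_xor_circuit(M):
        """M is a list of rows, each row a set of signal indices to be XORed
        (inputs are signals 0..n-1).  Returns (gates, outputs), where each gate
        (a, b, s) means s = a XOR b, and each output row ends as <= 1 signal."""
        rows = [set(r) for r in M]
        next_sig = 1 + max((i for r in rows for i in r), default=-1)
        gates = []
        while True:
            # count how often each pair of signals occurs together in a row
            count = {}
            for r in rows:
                for a, b in combinations(sorted(r), 2):
                    count[(a, b)] = count.get((a, b), 0) + 1
            if not count:
                break
            (a, b), _ = max(count.items(), key=lambda kv: kv[1])
            gates.append((a, b, next_sig))
            for r in rows:                      # reuse the new signal everywhere
                if a in r and b in r:
                    r.difference_update((a, b))
                    r.add(next_sig)
            next_sig += 1
        return gates, rows

    if __name__ == "__main__":
        # rows of a 3-by-4 binary matrix, written as index sets of the 1-entries
        M = [{0, 1, 2}, {1, 2, 3}, {0, 1, 2, 3}]
        gates, outputs = greedy_xor_circuit(M)
        naive = sum(len(r) - 1 for r in M)      # cost of computing rows separately
        print(len(gates), "XOR gates (greedy) vs", naive, "XOR gates (naive)")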

Nitin Saxena

How to make algebraic computations GRH-free?
The Generalized Riemann Hypothesis (GRH) is a classical analytic conjecture about zeta functions. Surprisingly, it turns up in the analysis of polynomial factoring and related algebraic algorithms. In this talk we will exhibit techniques that eliminate the need for GRH in various such algorithms.
Our central result is that, given a polynomial f(x) over a finite field k, we can find in deterministic subexponential time a nontrivial automorphism of the algebra k[x]/(f(x)). This main tool leads to various new GRH-free results, the most striking of which are:

  1. Given a noncommutative algebra over a finite field, we can find a zero divisor in deterministic subexponential time (a toy commutative instance of this zero-divisor connection is sketched after the list).
  2. Factoring the r-th cyclotomic polynomial over a finite field when 8 | r or r has at least two distinct odd prime factors.
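To illustrate the zero-divisor connection in its simplest commutative instance (a toy sketch, not the algorithm of the talk): if g is a nonzero, non-invertible element of F_p[x]/(f), then gcd(g, f) is a nontrivial factor of f. The gcd below is computed from scratch so the example is self-contained.

    # Toy illustration: a zero divisor in F_p[x]/(f) yields a factor of f via a gcd.
    # Polynomials are coefficient lists (lowest degree first) with arithmetic mod p.

    def poly_mod(a, b, p):
        """Remainder of a divided by b over F_p (b assumed nonzero)."""
        a = a[:]
        inv_lead = pow(b[-1], p - 2, p)            # inverse of b's leading coefficient
        while len(a) >= len(b) and any(a):
            if a[-1] == 0:
                a.pop()
                continue
            coef = a[-1] * inv_lead % p
            shift = len(a) - len(b)
            for i, bi in enumerate(b):
                a[shift + i] = (a[shift + i] - coef * bi) % p
            while a and a[-1] == 0:
                a.pop()
        return a

    def poly_gcd(a, b, p):
        """Monic gcd of a and b over F_p (Euclidean algorithm)."""
        while b and any(b):
            a, b = b, poly_mod(a, b, p)
        inv = pow(a[-1], p - 2, p)
        return [c * inv % p for c in a]

    if __name__ == "__main__":
        p = 5
        f = [1, 0, 1]              # x^2 + 1, which splits as (x+2)(x+3) over F_5
        g = [2, 1]                 # x + 2 is a zero divisor in F_5[x]/(f)
        print(poly_gcd(f, g, p))   # -> [2, 1], i.e. the nontrivial factor x + 2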