We want to prove that we executed a well-defined computation; this computation will have to belong to a specific class of problems. How do we express the computation that we need to prove?

By reading some of the previous posts, we now understand that the computation has to belong to the class of problems whose solutions can be verified by a computer in finite time, once we know those solutions. To be able to handle it in our “Pinocchio” zk-SNARK, we’ll express our computation as a QAP, a “Quadratic Arithmetic Program”. A Quadratic Arithmetic Program is an equation composed of a vector of values and three vectors of polynomials. As it is not intuitive to express a generic computation directly as a QAP, we’ll get there through a series of steps.
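To give a rough preview of where we are heading (the precise construction, and the exact notation, will come in later steps), the QAP equation has the following shape: given a vector of values s = (s_1, …, s_n) and three vectors of polynomials A_i(x), B_i(x), C_i(x), the prover shows that

(s_1*A_1(x) + … + s_n*A_n(x)) * (s_1*B_1(x) + … + s_n*B_n(x)) - (s_1*C_1(x) + … + s_n*C_n(x)) = H(x) * Z(x)

for some polynomial H(x), where Z(x) is a fixed, publicly known “target” polynomial.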

 

First challenge: writing our computation

We’ll start by expressing our computation in a programming language that has some limitations: we can only use variable assignments and the addition and multiplication operators. Variables are immutable, and we can’t use unbounded loops. The language can only handle integers modulo p, where p is a large prime, e.g. 21888242871839275222246405745257275088696311157297823662689037894645226208583; in other words, the only values available are 0, 1, 2, …, p-1.

It’s essential we understand this well: because we can only use this programming language, if the computation that we want to prove was originally written in a Turing-complete language, we’ll have to translate it, and in doing that we can only use fixed-size variables, no unbounded loops, only sums and multiplications, a fixed, predetermined number of computational steps, and so on.

Ok, so let’s look at an example of what we can write with this language:

y = (x + z)^2 + z + 1
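
Just to make the “integers modulo p” point concrete, here is a tiny Python sketch (purely illustrative, not part of any real zk-SNARK toolchain) that evaluates this program over the field, using nothing but assignments, additions and multiplications:

p = 21888242871839275222246405745257275088696311157297823662689037894645226208583

def compute(x, z):
    # every step is one addition or one multiplication, reduced modulo p
    t1 = (x + z) % p       # t1 = x + z
    t2 = (t1 * t1) % p     # t2 = (x + z)^2
    y = (t2 + z + 1) % p   # y = (x + z)^2 + z + 1
    return y

print(compute(3, 4))  # prints 54, i.e. (3 + 4)^2 + 4 + 1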

Ok ok, not that cool, I know! At first glance, it looks like a very poor programming language indeed. But we can actually expand it a lot by using a few tricks: we can express a variable x in its binary format, by assigning one variable to each of the needed powers of 2:

x = k*2^3 + v*2^2 + r*2 + z

At this point, we can use the binary variables k, v, r, z to express conditionals and comparisons, as we would do with signals in digital electronics. By using this trick, we can map conditionals, comparisons, etc. on variables to sums and multiplications of their bit representations.
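Here is a small, hand-wavy Python sketch of the idea (the variable names and values are made up for the example): each bit is forced to be 0 or 1 by the constraint b*(b-1) = 0, and once we have the bits we can build something like an if/else out of sums and multiplications alone (subtraction is fine too, since in the field it is just addition of an additive inverse):

p = 21888242871839275222246405745257275088696311157297823662689037894645226208583

def check_bit(b):
    # a valid bit must satisfy b*(b-1) = 0, i.e. b is 0 or 1
    assert (b * (b - 1)) % p == 0

def select(b, if_true, if_false):
    # "if b then if_true else if_false", written with sums and multiplications only
    return (b * if_true + (1 - b) * if_false) % p

# x = k*2^3 + v*2^2 + r*2 + z, with each bit constrained to {0, 1}
k, v, r, z = 1, 0, 1, 1
for bit in (k, v, r, z):
    check_bit(bit)
x = (k * 2**3 + v * 2**2 + r * 2 + z) % p  # x = 11
print(select(k, 100, 200))                 # k is 1, so this prints 100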

Another trick to reduce the complexity of the proof calculation is to use our knowledge of the witnesses. If we need to calculate a division, say y = x/w, we can simply state in our computation that we know a y such that x = w*y, and that w != 0. This way, if we can prove our computation, made just of a non-zero condition and a multiplication, we’ll have proven that we calculated y = x/w without having to implement division in our limited language. In other words, the fact that we divided x by w is proven by our ability to prove that we know a y that is the result of that division. Please note that we can also implement “real” divisions in our limited language, because we operate on a finite field, and in a finite field we can express division as a series of multiplications (see “Fermat’s little theorem” if you want to learn a bit more about this).
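Both ideas can be sketched in a few lines of Python (again purely illustrative: the function names and values are made up). Proving y = x/w reduces to checking one multiplication and one non-zero condition, and a “real” division in the field can be done by multiplying by the inverse of w, which Fermat’s little theorem gives us as w^(p-2) mod p:

p = 21888242871839275222246405745257275088696311157297823662689037894645226208583

def check_division(x, w, y):
    # instead of dividing, we check that the claimed witness y satisfies
    # x = w*y with w != 0: one non-zero condition and one multiplication
    assert w % p != 0
    assert (w * y) % p == x % p

def field_div(x, w):
    # "real" division in the field: multiply by the inverse of w,
    # computed via Fermat's little theorem as w^(p-2) mod p
    return (x * pow(w, p - 2, p)) % p

y = field_div(10, 4)
check_division(10, 4, y)  # passes: y really is 10/4 in the field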

So, to wrap things up: we are writing our computation in a limited programming language, and we can extend it to support more complex computations by combining these simple building blocks.