Notes on Prahladh Harsha’s talk, Jan. 19th

 

2-query low-error PCP composition

Scribe: Pierre Pansu, substantial corrections by Prahladh Harsha

In this lecture, we will discuss PCP composition (aka alphabet reduction). The context differs from the one used by Irit Dinur in her proof of the PCP theorem (via gap amplification).

 

1. Motivation for alphabet reduction

 

Start with LABEL COVER: a bipartite graph, two alphabets (a large one for the left vertices, a small one for the right), and a projection constraint for each edge. In GAP{(\alpha,\beta)}-LABEL COVER, the goal is to distinguish instances in which at least an {\alpha} fraction of the constraints can be simultaneously satisfied from instances in which at most a {\beta} fraction can. NP-completeness of LABEL COVER can be reformulated as: GAP{(1,1-1/|E|)}-LABEL COVER is NP-hard.
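
To fix notation, here is a minimal Python sketch (not from the talk; the instance and all names are illustrative) of a toy label cover instance and of the fraction of constraints a labeling satisfies:

```python
from fractions import Fraction

# A toy label cover instance: left vertices u*, right vertices v*, and
# for each edge (u, v) a projection pi_{uv} from left labels to right
# labels, stored as a dict.  Edge (u, v) is satisfied by a labeling
# (A, B) iff pi_{uv}(A[u]) == B[v].
edges = {
    ("u1", "v1"): {0: 0, 1: 1},
    ("u1", "v2"): {0: 1, 1: 1},
    ("u2", "v1"): {0: 0, 1: 0},
}

def value(edges, A, B):
    """Fraction of projection constraints satisfied by the labeling (A, B)."""
    sat = sum(1 for (u, v), proj in edges.items() if proj[A[u]] == B[v])
    return Fraction(sat, len(edges))

A = {"u1": 0, "u2": 0}   # left labels (the large alphabet in general)
B = {"v1": 0, "v2": 1}   # right labels (the small alphabet)
assert value(edges, A, B) == 1   # this labeling satisfies every edge
```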

The PCP theorem states that GAP{(1,1-\epsilon)}-LABEL COVER is NP-hard for some constant {\epsilon>0}.

Applying Raz’s parallel repetition theorem to the PCP theorem, the above gap {(1,1-\epsilon)} can be amplified to {(1,\delta)} for every constant {\delta>0}. More precisely,

Theorem 1 For every constant {\delta>0}, there exists an alphabet {\Sigma} for which GAP{(1,\delta)}-LABEL COVER is NP-hard.

 

Recall that the parallel repetition theorem shows that taking the {t}-fold product of a label cover instance reduces the error exponentially in {t}. Thus, parallel repetition can reduce the error to any constant, but it cannot achieve sub-constant error: that would require taking {t} to be super-constant, which would result in a super-polynomial blowup in the instance size. The question we would like to address is whether we can prove NP-hardness of label cover with sub-constant error. Such a result was shown by Raz and Safra 1997 [RS], and independently by Arora and Sudan 1997 [ASu]: GAP{(1,\delta)}-LABEL COVER over alphabet {\Sigma} is NP-hard for every {\delta\geq 1/\log |\Sigma|}, as long as {|\Sigma| > n^{poly\log n}}. However, this result is not very useful for proving hardness of approximation (at least via long codes), as the alphabet size is super-polynomial, and a typical hardness-of-approximation reduction (using long codes) is polynomial (if not exponential) in the alphabet size.
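
For concreteness, the {t}-fold product itself is easy to write down; only the analysis of its error is hard. A sketch under the same toy dict representation as above (illustrative, not from the talk):

```python
from itertools import product

def repeat(edges, t):
    """t-fold parallel repetition of a label cover instance.

    Vertices and labels become t-tuples, and a product edge is
    satisfied iff every coordinate is.  The instance size grows as
    |E|^t, which is why t must stay constant for a polynomial-size
    reduction."""
    new_edges = {}
    for combo in product(edges.items(), repeat=t):
        u = tuple(edge[0] for edge, _ in combo)
        v = tuple(edge[1] for edge, _ in combo)
        projs = [proj for _, proj in combo]
        # the product projection acts coordinate-wise on tuple labels
        new_edges[(u, v)] = {
            labels: tuple(p[l] for p, l in zip(projs, labels))
            for labels in product(*(p.keys() for p in projs))
        }
    return new_edges
```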

Moshkovitz and Raz 2008 [MR] then showed that the above result can be extended to all alphabets, without any lower bound on the alphabet size. That is,

Theorem 2 GAP{(1,\delta)}-LABEL COVER over alphabet {\Sigma} is NP-hard for every {\delta\geq 1/\log |\Sigma|}.

 

This implies improved inapproximability bounds for other problems.

The core of their proof is an alphabet reduction technique that reduces label cover instances with large alphabets to label cover instances with small alphabets. Dinur and Harsha 2009 [DH] give a simpler and more generic alphabet reduction technique (also called composition in PCP jargon). This is what we explain now.

 

2. Equivalence with robust PCPs

 

It will be more convenient to discuss the alphabet reduction technique in terms of PCPs rather than label cover. To this end, we will show that label cover is equivalent to robust PCPs, a slightly modified version of PCPs with a stronger soundness criterion.

Label Cover to PCPs: Suppose we have a reduction from SAT to GAP{(1,\delta)}-LABEL COVER. We will show that this reduction is implicitly equivalent to the existence of a PCP for SAT. The PCP for SAT is described below. The proof for the PCP is the assignment to the right vertices (the vertices with small labels) of the label cover. The verifier works as follows: it picks a random left vertex (a vertex with a large label) and queries the proof for the labels of all vertices in the neighbourhood of this vertex. It then accepts if there is a (large) label for this left vertex that satisfies all the edge constraints between this vertex and its neighbours. Clearly, this PCP verifier satisfies the following. If the label cover instance is perfectly satisfiable, then the corresponding labeling of the right vertices causes the verifier to accept with probability 1. On the other hand, if the label cover instance is at most {\delta}-satisfiable, then the average agreement of a local view of the verifier with an accepting view is at most {\delta}. Observe that this is stronger than the usual soundness criterion for PCPs. This leads us to define robust PCPs.
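
A sketch of this verifier, again in the toy representation above (illustrative only; the proof being queried is the right labeling {B}):

```python
import random

def lc_verifier(edges, B):
    """One round of the PCP verifier obtained from a label cover
    instance: the proof is a right labeling B.  Pick a random left
    vertex u, read B on all of u's neighbours, and accept iff some
    left label projects correctly onto every neighbour."""
    left = sorted({u for (u, _) in edges})
    u = random.choice(left)
    nbrs = [(v, proj) for (uu, v), proj in edges.items() if uu == u]
    left_labels = set().union(*(proj.keys() for _, proj in nbrs))
    # proj.get: a label outside an edge's domain never satisfies it
    return any(all(proj.get(a) == B[v] for v, proj in nbrs)
               for a in left_labels)
```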

Recall the definition of an {(r,q)}-verifier for a language: it uses {r} random bits, it reads only {q} locations of the proof, it accepts a valid instance with probability {1}, and it rejects a wrong instance with probability at least {1-\delta}. In other words, for wrong instances, the local view is an accepting view with probability at most {\delta}. A robust PCP is similar except that it has a stronger soundness condition: for wrong instances, not only is the local view an accepting view with probability at most {\delta}, but in fact the expected agreement of a local view with an accepting view is at most {\delta}.

Definition 3 A PCP is robust with robust soundness error {\delta} if for every wrong instance, the expected agreement of a local view with an accepting view is at most {\delta}. In other words, on average, the relative Hamming distance of the {q}-symbol string read from the set of accepting strings is at least {1-\delta}.

In other words, not only does the verifier reject incorrect proofs with high probability, but the inconsistency is spread throughout the {q}-symbol local view, so it can already be detected on small parts of the string read.
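
To see the two soundness notions side by side, here is a hypothetical sketch (all names illustrative) that, given the view read on each random string and the set of views accepted there, computes both the ordinary and the robust soundness error of a fixed proof:

```python
def agreement(view, accepted):
    """Maximum relative Hamming agreement of `view` with an accepting view."""
    q = len(view)
    return max(sum(a == b for a, b in zip(view, acc)) / q
               for acc in accepted)

def soundness_errors(views, accepting):
    """views[r]: the local view (a tuple) read on random string r;
    accepting[r]: the set of views the verifier accepts on r.
    Returns (ordinary soundness error, robust soundness error)."""
    R = len(views)
    ordinary = sum(views[r] in accepting[r] for r in range(R)) / R
    robust = sum(agreement(views[r], accepting[r]) for r in range(R)) / R
    return ordinary, robust   # ordinary <= robust always
```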

The above transformation from label cover to robust PCPs is invertible (a sketch of the reverse direction follows), and the robust soundness error of the PCP is exactly the soundness error of the label cover. In this transformation, the alphabet size of the LABEL COVER instance corresponds to the number of accepting configurations, which in turn is bounded by an exponential in the number of queries {q} of the robust PCP. Thus, the task of alphabet reduction in the label cover world translates to query reduction in the robust PCP world. So we shall describe a procedure that reduces the number of queries of a robust PCP.
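
The inverse map, from a robust PCP back to label cover, is essentially syntactic. A sketch, assuming the verifier is presented by a map `window` from random strings to query windows and a map `accepting_views` to its accepting local views (both names illustrative):

```python
def pcp_to_label_cover(random_strings, window, accepting_views):
    """Syntactic conversion of a robust PCP into label cover.

    Left vertex  = a random string r, labelled by an accepting local view;
    right vertex = a proof position i, labelled by a proof symbol.
    For each position i in r's query window, edge (r, i) carries the
    projection sending an accepting view to the symbol it puts at i."""
    edges = {}
    for r in random_strings:
        for k, i in enumerate(window(r)):
            # left alphabet = accepting views on r, of size <= |Sigma|^q
            edges[(r, i)] = {view: view[k] for view in accepting_views(r)}
    return edges
```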

 

3. Reducing queries

 

How does one reduce the number of queries made by a PCP verifier? For this, let us first recall how a verifier works. On input {\phi}, random coins {r}, and oracle access to the PCP {\pi}, the verifier performs the following actions (a code sketch follows the list).

  1. Read inputs {\phi, r}.
  2. Compute local window {I} and local predicate {f}.
  3. Read local view {\pi_I}.
  4. Accept if local view satisfies local predicate (i.e., {f(\pi_I) = 1}).
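
Schematically (a sketch; `window` and `predicate` stand for whatever computation the particular PCP prescribes, and are illustrative names):

```python
def pcp_round(phi, r, pi, window, predicate):
    """One round of a generic PCP verifier.

    window(phi, r)    -> tuple I of proof positions to query
    predicate(phi, r) -> boolean function f on the local view
    """
    I = window(phi, r)                    # step 2: local window
    f = predicate(phi, r)                 # step 2: local predicate
    local_view = tuple(pi[i] for i in I)  # step 3: read pi_I
    return f(local_view)                  # step 4: accept iff f(pi_I) = 1
```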

Reducing the number of queries involves performing steps 3 and 4 without actually reading the local view {\pi_I}. How does one check that a local view {\pi_I} satisfies the local predicate {f} without even reading it? The idea is to use another PCP verifier to perform this check; after all, the PCP theorem states that it is possible to check a proof without reading the entire proof. This idea is due to Arora and Safra, 1992 [ASa]. However, this is not a simple recursion: there is a consistency issue. The inner verifier needs to check not only that the local predicate is satisfiable (easy), but also that it is satisfied by the actual local view {\pi_I}. We get around this by defining PCPs that not only check locally, but also decode locally!

 

3.1. Decodable PCPs

 

We will now define decodable PCPs. Recall that a PCP is an NP proof that is locally checkable. Though not stipulated by the definition, the PCP, in all known constructions, is in fact an encoding of the NP proof. The notion of a decodable PCP (dPCP) makes this aspect of PCPs explicit. The dPCP verifier, in addition to checking whether the PCP is correct, also decodes the NP proof. More formally, the dPCP verifier, in addition to the standard inputs (the input {\phi} and the random coins {r}), is given an additional input {j}, an index into the NP proof. Based on {\phi, r, j}, the dPCP verifier queries a local view of the proof and either rejects or accepts. However, when it accepts the local view, it is expected not just to accept, but to output (i.e., decode) the symbol in the {j}th location of the NP proof.

Definition 4 A decodable PCP verifier has oracle access to a decodable PCP and is given as inputs the instance {\phi} and random coins {r} (like the standard PCP verifier), together with an additional input: an index {j} pointing to a single location in the NP proof. Based on its inputs, the dPCP verifier queries the proof {\pi} in a few locations and either rejects the proof or outputs a symbol (which is presumably the symbol in the {j}th location of the NP proof that the dPCP supposedly encodes). The dPCP satisfies the following properties.

Completeness: For every NP proof {y}, there is a dPCP proof {\pi=\pi(y)} such that with probability one (over the random coins {r} and the input index {j}), the verifier correctly returns the symbol {y_j} (i.e., the symbol in the {j}th location of the NP proof {y}).

Soundness: For every dPCP proof {\pi}, there is a short list of NP proofs such that the probability (over the random coins {r} and the input index {j}) that the dPCP verifier neither rejects nor outputs a symbol consistent with one of the NP proofs in the list is at most {\delta}.

In other words, averaged over the random coins and the input indices into the NP proof, the probability that the dPCP verifier returns a symbol inconsistent with every NP proof in the list is very small.

It is to be noted that the soundness requirement is over a random index {j}, not over every index {j}. This is a crucial difference between decodable PCPs and locally decodable codes. Furthermore, existing constructions of PCPs can easily be adapted to give decodable PCPs.
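
Operationally, the soundness condition can be phrased as the following check, sketched with illustrative names; `verifier(phi, r, j, pi)` is assumed to return `None` for reject and a decoded symbol otherwise:

```python
def decoding_error(verifier, phi, pi, proof_list, coins, indices):
    """Empirical robust-decoding error of a dPCP proof `pi` against a
    candidate list of NP proofs: the fraction of runs on which the
    verifier neither rejects nor agrees with *some* listed proof at j."""
    bad = 0
    for r in coins:
        for j in indices:
            out = verifier(phi, r, j, pi)   # None = reject
            if out is not None and all(y[j] != out for y in proof_list):
                bad += 1
    return bad / (len(coins) * len(indices))
```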

 

3.2. Composition with dPCPs

 

Given an “outer” robust PCP, we can now reduce its complexity by composing it with an “inner” decodable PCP as follows. The composed verifier first computes the local window {I} and the local predicate {f}. Instead of checking that the local view satisfies the local predicate, it invokes a dPCP verifier to do this. In addition, it selects a random index {i} into the outer local window and asks the inner decoding verifier to decode this symbol. Finally, it accepts if this decoded symbol is consistent with the corresponding symbol in the outer local view (i.e., {\pi_i}).

It is an easy exercise to check that this composed verifier is a PCP. Also, the number of queries made by the composed verifier is equal to the query complexity of the inner dPCP plus one additional query into the outer PCP, which is significantly lower than the query complexity of the outer robust PCP.

However, we need to check whether this composed PCP verifier has robust soundness. Unfortunately, this composition does not preserve robustness: the composed view consists of two parts (the view into the outer proof and the view into the inner proof), and each part on its own can be completed to an accepting view, so the robust soundness error is always at least a half.

So we perform an alternate composition. Instead of decoding a symbol from the inner proof and comparing it to the outer proof symbol, we compare inner proofs to each other. The new composed verifier works as follows: it selects a random index {i} in the outer robust PCP; let {I_1,\ldots,I_D} be all the outer local windows in which this index participates. It then invokes the {D} inner dPCP verifiers corresponding to these {D} local views and asks each of them to decode the symbol in position {i}. The verifier accepts if none of the inner dPCP verifiers rejects and all their decoded answers agree.
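
Schematically (a sketch with illustrative names: `windows_containing(i)` returns the outer windows {I_1,\ldots,I_D} through position {i}, and `inner(I, i)` runs the inner dPCP verifier attached to window {I}, asked to decode the outer symbol at position {i}, returning `None` on reject):

```python
import random

def composed_round(positions, windows_containing, inner):
    """One round of the consistency-based composition: decode the same
    outer symbol through every inner proof, and accept iff no inner
    verifier rejects and all decoded symbols agree."""
    i = random.choice(positions)          # random outer proof position
    decoded = [inner(I, i) for I in windows_containing(i)]
    return None not in decoded and len(set(decoded)) == 1
```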

A careful (but not too difficult) analysis shows that this composition preserves robust soundness. Note, however, that the query complexity of the composed verifier is {D} times the inner query complexity. If {D} is not small, the verifier can, instead of choosing all {D} views, choose a small sample of them in a pseudorandom fashion.

In the LABEL COVER language, the above composition gives an alphabet reduction technique. Performing it repeatedly, we recover the Moshkovitz–Raz result.

 

References

 

[ASa] Sanjeev Arora and Shmuel Safra; Probabilistic checking of proofs: A new characterization of NP. J. ACM 45, (1):70–122, January 1998. (Preliminary Version in 33rd FOCS, 1992). doi:10.1145/273865.273901.

[ASu] Sanjeev Arora and Madhu Sudan; Improved low-degree testing and its applications. Combinatorica, 23 (3):365–426, 2003. (Preliminary Version in 29th STOC, 1997). eccc: TR97-003, doi:10.1007/s00493-003-0025-0.

[DH] Irit Dinur and Prahladh Harsha; Composition of low-error 2-query PCPs using decodable PCPs. In Proc. 50th IEEE Symp. on Foundations of Computer Science (FOCS), 2009. doi:10.1109/FOCS.2009.8.

[MR] Dana Moshkovitz and Ran Raz; Two Query PCP with Sub-Constant Error. In Proc. 49th IEEE Symp. on Foundations of Computer Science (FOCS), 2008. doi:10.1109/FOCS.2008.60.

[RS] Ran Raz and Shmuel Safra; A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP. In Proc. 29th ACM Symp. on Theory of Computing (STOC), 1997. doi:10.1145/258533.258641.

 
