Potential vulnerability if not checking proof security level #160

Closed
Tracked by #177
irakliyk opened this issue Mar 2, 2023 · 5 comments

Comments

irakliyk (Collaborator) commented Mar 2, 2023

This issue was brought up by Quan Thoi Minh Nguyen from Google.

The core of the issue is how we define the verifier interface. The verifier accepts a StarkProof as input, and the StarkProof struct contains a ProofOptions struct. ProofOptions, in turn, contains the protocol parameters which ultimately define the security level of the proof. This setup could allow a potential attacker to send a fake proof generated at a very low security level (e.g., just a few bits), and the verifier would accept it.

So, it is up to the users of this library to make sure they check the security level of the proof before passing it to the verify() function. While this works, it is probably not ideal, as people might forget that this needs to be done, leading to unintended consequences.
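For illustration, a minimal sketch of the kind of caller-side guard this currently implies is shown below. The security_level() accessor and the 128-bit threshold are assumptions made for the sketch; the exact Winterfell API may differ.

use winterfell::crypto::ElementHasher;
use winterfell::{verify, Air, StarkProof};

// application-chosen threshold; 128 bits is just an example
const MIN_SECURITY_LEVEL: u32 = 128;

fn verify_with_min_security<AIR, HashFn>(
    proof: StarkProof,
    pub_inputs: AIR::PublicInputs,
) -> Result<(), String>
where
    AIR: Air,
    HashFn: ElementHasher<BaseField = AIR::BaseField>,
{
    // assumed accessor: derives the security level from the ProofOptions embedded in the proof
    if proof.security_level(true) < MIN_SECURITY_LEVEL {
        return Err("proof security level is below the application's minimum".to_string());
    }
    verify::<AIR, HashFn>(proof, pub_inputs).map_err(|e| e.to_string())
}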

I can think of 3 ways of changing the verifier interface to make this type of bug less likely.

Option 1: return security level from the verify() function

We could change the signature of the verify() function to something like this:

pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
    proof: StarkProof,
    pub_inputs: AIR::PublicInputs,
) -> Result<u32, VerifierError>

Where the returned u32 would specify the security level of the verified proof.
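As a quick usage sketch under this option (MyAir and Blake3_256<BaseElement> are placeholder choices), a careful caller would check the returned value, but nothing forces them to:

// caller that checks the returned security level
let proven_level = verify::<MyAir, Blake3_256<BaseElement>>(proof, pub_inputs)?;
assert!(proven_level >= 128, "proof verified, but at an unacceptably low security level");

// caller that silently ignores it; compiles just as easily
let _ = verify::<MyAir, Blake3_256<BaseElement>>(proof, pub_inputs)?;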

Option 2: pass ProofOptions struct separately

We could change the signature of the verify() function to something like this:

pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
    proof: StarkProof,
    options: ProofOptions,
    pub_inputs: AIR::PublicInputs,
) -> Result<(), VerifierError>

This would also require that we remove ProofOptions from StarkProof.
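A rough usage sketch under this option, where the verifier side pins the exact parameter set it is willing to accept (the ProofOptions::new arguments are illustrative, and MyAir / Blake3_256<BaseElement> are placeholder choices):

// the verifier constructs the one parameter set it accepts
let accepted_options = ProofOptions::new(
    42,                        // number of queries
    8,                         // blowup factor
    16,                        // grinding factor (bits)
    FieldExtension::Quadratic, // field extension
    8,                         // FRI folding factor
    256,                       // FRI max remainder size
);
verify::<MyAir, Blake3_256<BaseElement>>(proof, accepted_options, pub_inputs)?;

If several parameter sets are acceptable, the caller would also need to know out of band which one the prover used, since the proof itself would no longer carry it.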

Option 3: specify minimum security level

We could change the signature of the verify() function to something like this:

pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
    proof: StarkProof,
    pub_inputs: AIR::PublicInputs,
    min_security_level: u32,
) -> Result<(), VerifierError>

My thinking is that option 1 is not really explicit enough, and the return value could be ignored by the caller just as easily. Option 2 makes the interface less ergonomic and more difficult to use, in my opinion. So, my preference would be option 3, but I would love to know if others think differently.

daira commented Apr 14, 2023

> So, it is up to the users of this library to make sure they check the security level of the proof before passing it to the verify() function.

I've never heard of any other zk proof library requiring that; to me it's clearly broken. Typically, what the verifier does is provide a circuit-specific verifying key, and that key determines the proving system parameters exactly. Anything else gives a prover adversary power that it shouldn't have. (Even allowing the prover to choose "stronger" parameters is still power that it doesn't need and shouldn't have, since it increases the attack surface.)

irakliyk (Collaborator, Author) commented:

> Typically, what the verifier does is provide a circuit-specific verifying key, and that key determines the proving system parameters exactly.

In the context of STARKs this is frequently not desirable for a couple of reasons:

  • We may want to provide options to generate proofs at different security levels for the same circuit. An example of this may be a STARK-based virtual machine which gives users the option to generate proofs at 100-bit or 128-bit security levels.
  • We may want to provide options to generate proofs at the same security level using different parameters (e.g., different combinations of blowup factor, number of queries, grinding factor, etc.). This is useful because these parameters impact proof generation time and proof size differently. For example, in some cases it may be OK for the prover to take much longer to produce a smaller proof; in other cases, we may tolerate bigger proof sizes but want the prover to run as fast as possible. So, for the same circuit and the same security level, we may want to adjust parameters to suit a specific use case (see the sketch after this list).
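For example (illustrative numbers only, and the ProofOptions::new argument order is approximate), two parameter sets targeting roughly the same conjectured security level but with different trade-offs might look like:

// fewer queries at a higher blowup factor: smaller proofs, but a slower prover
let smaller_proofs = ProofOptions::new(27, 16, 16, FieldExtension::Quadratic, 8, 256);

// more queries at a lower blowup factor: larger proofs, but a faster prover
let faster_prover = ProofOptions::new(54, 4, 16, FieldExtension::Quadratic, 8, 256);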

In practice, this frequently means we could have up to a dozen (or maybe even more) different parameter sets for the same circuit, and the verifier should accept proofs for any of these sets.

So, given the above, the question is how the prover lets the verifier know which set of parameters was chosen for a given proof. This could be done in a variety of ways; the way it is currently done in Winterfell is that the prover includes the set of parameters with the proof. It is then up to the verifier to accept or reject a proof based on the included parameters.

I do agree that this can cause issues, hence this issue and 3 potential options to address it. Other options are welcome too.

daira commented May 19, 2023

> In practice, this frequently means we could have up to a dozen (or maybe even more) different parameter sets for the same circuit, and the verifier should accept proofs for any of these sets.

That doesn't preclude having the verifying key specify what range of parameters are acceptable, so that the code using the verifier doesn't have to perform additional checks.

Btw, is the verifier work dependent on the parameters? If it is, then specifying only a minimum security level is not sufficient, because then producing proofs that are more expensive to verify could cause a denial of service.

irakliyk (Collaborator, Author) commented:

> Btw, is the verifier work dependent on the parameters? If it is, then specifying only a minimum security level is not sufficient, because then producing proofs that are more expensive to verify could cause a denial of service.

Yes - verifier work is roughly proportional to the proof size (which is roughly proportional to the number of queries). So, it is possible to submit a proof 10x the expected size and make the verifier do 10x more work (something like a 1 MB proof vs. the usual 100 KB proof). Making sure that proof sizes are in a "sane" range is left to the users of the library, since by the time the proof gets to the verifier it is assumed to have already been deserialized. So, if someone decides to send a 100 MB proof, it should be caught before it gets to the verifier.
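A minimal sketch of the kind of pre-deserialization guard this leaves to the caller (MAX_PROOF_BYTES is an application-chosen bound, and the from_bytes constructor is assumed):

use winterfell::StarkProof;

// application-chosen bound, e.g. ~2x the size of a typical proof for this circuit
const MAX_PROOF_BYTES: usize = 200 * 1024;

fn parse_proof(bytes: &[u8]) -> Result<StarkProof, String> {
    if bytes.len() > MAX_PROOF_BYTES {
        return Err(format!("proof too large: {} bytes", bytes.len()));
    }
    // deserialize only after the size check has passed
    StarkProof::from_bytes(bytes).map_err(|e| e.to_string())
}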

> That doesn't preclude having the verifying key specify what range of parameters are acceptable, so that the code using the verifier doesn't have to perform additional checks.

That's a good point. I wonder if a better solution could be something like this:

pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
    proof: StarkProof,
    pub_inputs: AIR::PublicInputs,
    acceptable_options: &AcceptableOptions,
) -> Result<(), VerifierError>

Where AcceptableOptions could look something like:

pub enum AcceptableOptions {
    MinConjecturedSecurity(u32),
    MinProvenSecurity(u32),
    OptionSet(Vec<ProofOptions>),
}

This way, users of the library will have to explicitly specify which proof parameters are acceptable. But they would retain the flexibility to do so either by directly defining a set of acceptable parameters or by specifying a minimum acceptable security level under different assumptions.
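A usage sketch based on the signature proposed above (MyAir and Blake3_256<BaseElement> are placeholder choices, and standard_options / small_proof_options stand in for previously constructed ProofOptions values):

// accept any parameter set that provides at least 100 bits of proven security
let acceptable = AcceptableOptions::MinProvenSecurity(100);
verify::<MyAir, Blake3_256<BaseElement>>(proof, pub_inputs, &acceptable)?;

// or accept only an explicit allow-list of parameter sets
let acceptable = AcceptableOptions::OptionSet(vec![standard_options, small_proof_options]);
verify::<MyAir, Blake3_256<BaseElement>>(proof, pub_inputs, &acceptable)?;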

irakliyk (Collaborator, Author) commented:

Closed by #219. We went with the approach described in the comment above. This will be a part of the v0.7.0 release.
