Potential vulnerability if not checking proof security level #160
I've never heard of any other zk proof library requiring that; to me it's clearly broken. Typically, what the verifier does is provide a circuit-specific verifying key, and that key determines the proving system parameters exactly. Anything else gives a prover adversary power that it shouldn't have. (Even allowing the prover to choose "stronger" parameters is still power that it doesn't need and shouldn't have, since it increases the attack surface.)
In the context of STARKs this is frequently not desirable for a couple of reasons:
In practice, this frequently means we could have up to a dozen (or maybe even more) different parameter sets for the same circuit, and the verifier should accept proofs for any of these sets. So, given the above, the question is how the prover lets the verifier know which set of parameters they've chosen for a given proof. This could be done in a variety of ways, and the way it is currently done in Winterfell is that the prover includes the set of parameters with the proof. It is then up to the verifier to accept or reject proofs based on the included parameters. I do agree that this can cause issues, hence this issue and the 3 potential options to address it. Other options are welcome too.
That doesn't preclude having the verifying key specify what range of parameters are acceptable, so that the code using the verifier doesn't have to perform additional checks. Btw, is the verifier work dependent on the parameters? If it is, then specifying only a minimum security level is not sufficient, because then producing proofs that are more expensive to verify could cause a denial of service.
Yes - verifier work is roughly proportional to the proof size (which is roughly proportional to the number of queries). So, it is possible to submit a proof 10x the expected size and make the verifier do 10x more work (this would be something like a 1MB proof vs. the usual 100KB proof). Making sure that proof sizes are in a "sane" range is left to the users of the library, as by the time the proof gets to the verifier it is assumed to have already been deserialized. So, if someone decides to send a 100MB proof, it should be caught before it gets to the verifier.
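For example, a transport layer could enforce a hard size cap before deserializing. This is a minimal sketch, assuming the `StarkProof::from_bytes` constructor and the `DeserializationError::InvalidValue` variant from winter-utils; the 1MB cap is an arbitrary illustrative bound:

```rust
use winter_utils::DeserializationError;
use winterfell::StarkProof;

// Arbitrary illustrative cap; pick a bound that comfortably covers the
// proof sizes your accepted parameter sets can actually produce.
const MAX_PROOF_BYTES: usize = 1_000_000;

fn parse_proof(bytes: &[u8]) -> Result<StarkProof, DeserializationError> {
    // Reject oversized payloads up front so an attacker cannot force the
    // verifier to do many times the expected amount of work.
    if bytes.len() > MAX_PROOF_BYTES {
        return Err(DeserializationError::InvalidValue(format!(
            "proof is too large: {} bytes",
            bytes.len()
        )));
    }
    StarkProof::from_bytes(bytes)
}
```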
That's a good point. I wonder if a better solution could be something like this:

```rust
pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
    proof: StarkProof,
    pub_inputs: AIR::PublicInputs,
    acceptable_options: &AcceptableOptions,
) -> Result<(), VerifierError>
```

Where:

```rust
pub enum AcceptableOptions {
    MinConjecturedSecurity(u32),
    MinProvenSecurity(u32),
    OptionSet(Vec<ProofOptions>),
}
```

This way, users of the library will have to explicitly specify which proof parameters are acceptable, but they would retain the flexibility to do so either by directly defining a set of parameters or by specifying minimum acceptable security levels under different assumptions.
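For what it's worth, here is a minimal caller-side sketch of how such an interface might be used. `MyAir` is a placeholder for an application-specific AIR, Blake3 over the 128-bit field is an arbitrary choice, the 95-bit threshold is just an example, and the import paths are approximate:

```rust
use winterfell::crypto::hashers::Blake3_256;
use winterfell::math::fields::f128::BaseElement;
use winterfell::{verify, AcceptableOptions, Air, StarkProof, VerifierError};

fn check_proof(
    proof: StarkProof,
    pub_inputs: <MyAir as Air>::PublicInputs,
) -> Result<(), VerifierError> {
    // Accept a proof only if its prover-chosen parameters provide at least
    // 95 bits of conjectured security; a proof with weaker parameters is
    // rejected before any cryptographic verification work is done.
    let acceptable = AcceptableOptions::MinConjecturedSecurity(95);
    verify::<MyAir, Blake3_256<BaseElement>>(proof, pub_inputs, &acceptable)
}
```

Alternatively, `AcceptableOptions::OptionSet` would let the caller whitelist one or more exact `ProofOptions` values, which also addresses the denial-of-service concern above since it pins down the expected proof size.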
Closed by #219. We went with the approach described in the comment above. This will be a part of the v0.7.0 release. |
This issue was brought up by Quan Thoi Minh Nguyen from Google.
The core of the issue is how we define the verifier interface here. Basically, the verifier accepts a `StarkProof` as an input, and the `StarkProof` struct contains a `ProofOptions` struct. In turn, `ProofOptions` contains a bunch of protocol parameters which, in the end, define the security level of the proof. This setup could allow a potential attacker to send a fake proof at a very low security level (e.g., just a few bits), and the verifier would accept this proof.

So, it is up to the users of this library to make sure they check the security level of the proof before passing it to the `verify()` function.
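To make this concrete, here is a minimal sketch of the kind of check callers currently need to remember to write themselves. The 95-bit threshold, the `MyAir` type, and the string error are all illustrative; it assumes the `StarkProof::security_level(conjectured: bool)` accessor and approximate import paths:

```rust
use winterfell::crypto::hashers::Blake3_256;
use winterfell::math::fields::f128::BaseElement;
use winterfell::{verify, Air, StarkProof};

// Illustrative application-level threshold.
const MIN_CONJECTURED_SECURITY: u32 = 95;

fn verify_proof(
    proof: StarkProof,
    pub_inputs: <MyAir as Air>::PublicInputs,
) -> Result<(), String> {
    // The easy-to-forget step: reject proofs whose prover-chosen parameters
    // are too weak. Passing `true` requests the conjectured (rather than
    // proven) security estimate.
    if proof.security_level(true) < MIN_CONJECTURED_SECURITY {
        return Err("proof security level is too low".to_string());
    }
    verify::<MyAir, Blake3_256<BaseElement>>(proof, pub_inputs).map_err(|e| e.to_string())
}
```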
While this works, it is probably not ideal, as people might forget that this needs to be done, leading to unintended consequences. I can think of 3 ways of changing the verifier interface to make this type of bug less likely.
Option 1: return security level from the `verify()` function

We could change the signature of the `verify()` function to something like this:
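A sketch of such a signature, mirroring the generic parameters of the existing `verify()` function (details assumed):

```rust
pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
    proof: StarkProof,
    pub_inputs: AIR::PublicInputs,
) -> Result<u32, VerifierError>
```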
Where the returned `u32` would specify the security level of the verified proof.

Option 2: pass `ProofOptions` struct separately

We could change the signature of the `verify()` function to something like this:
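Presumably something like the following, with the caller supplying the expected parameters out of band (a sketch; the parameter name and position are assumed):

```rust
pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
    proof: StarkProof,
    proof_options: &ProofOptions,
    pub_inputs: AIR::PublicInputs,
) -> Result<(), VerifierError>
```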
This would also require that we remove `ProofOptions` from `StarkProof`.

Option 3: specify minimum security level

We could change the signature of the `verify()` function to something like this:
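Presumably something like the following, with the caller stating the weakest parameters they are willing to accept (a sketch; the parameter name is assumed):

```rust
pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
    proof: StarkProof,
    pub_inputs: AIR::PublicInputs,
    min_security_level: u32,
) -> Result<(), VerifierError>
```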
My thinking is that option 1 is not really explicit enough, and the return value could be ignored by the caller just as easily. Option 2 makes the interface less ergonomic and difficult to use, in my opinion. So, my preference would be option 3 - but I would love to know if others think differently.