Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models


The official PyTorch implementation of "Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models" (ICML 2024).

We experiment with three watermarking schemes for large language models (KGW, EXP, and Unigram), as well as two schemes for vision-language models (Stable Signature and Invisible Watermark).

Check our website, blog post and paper for more details.

Introduction

Watermarking generative models consists of planting a statistical signal (watermark) in a model's output so that it can later be verified that the output was generated by the given model. A strong watermarking scheme satisfies the property that a computationally bounded attacker cannot erase the watermark without causing significant quality degradation. In this paper, we study the (im)possibility of strong watermarking schemes.

We prove that, under well-specified and natural assumptions, strong watermarking is impossible to achieve. This holds even in the private detection algorithm setting, where the watermark insertion and detection algorithms share a secret key, unknown to the attacker. To prove this result, we introduce a generic efficient watermark attack; the attacker is not required to know the private key of the scheme or even which scheme is used.

Our attack is based on two assumptions: (1) the attacker has access to a "quality oracle" that can evaluate whether a candidate output is a high-quality response to a prompt, and (2) the attacker has access to a "perturbation oracle" which can modify an output with a nontrivial probability of maintaining quality, and which induces an efficiently mixing random walk on high-quality outputs. We argue that both assumptions can be satisfied in practice by an attacker with weaker computational capabilities than the watermarked model itself, to which the attacker has only black-box access. Furthermore, our assumptions will likely only become easier to satisfy over time as models grow in capabilities and modalities.

We demonstrate the feasibility of our attack by instantiating it against three existing watermarking schemes for large language models: Kirchenbauer et al. (2023), Kuditipudi et al. (2023), and Zhao et al. (2023). The same attack successfully removes the watermarks planted by all three schemes, with only minor quality degradation.

Illustration of the attack: We consider the set of all possible outputs and, within it, the set of all high-quality outputs (with respect to the original prompt). For any quality-preserving watermarking scheme with a low false-positive rate, the set of watermarked outputs (green) will be a small subset of the high-quality outputs (orange). We then take a random walk on the set of high-quality outputs to arrive at a non-watermarked output (red) by generating candidate neighbors through the perturbation oracle and using the quality oracle to reject all low-quality candidates. Differences from the original watermarked text are highlighted.
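As a rough illustration of how the pieces fit together, the attack's core loop can be sketched as below. This is a minimal sketch rather than the repository's actual interface: quality_oracle, perturbation_oracle, and n_steps are hypothetical placeholders.

def remove_watermark(prompt, watermarked_text, perturbation_oracle, quality_oracle, n_steps=200):
    """Random walk on the set of high-quality outputs: propose local edits and
    keep only those that the quality oracle judges to be high quality."""
    current = watermarked_text
    for _ in range(n_steps):
        # Propose a neighboring output, e.g. by rewriting one span of the text.
        candidate = perturbation_oracle(current)
        # Reject candidates that degrade quality relative to the prompt.
        if quality_oracle(prompt, candidate):
            current = candidate
    # After enough accepted steps, the output is unlikely to remain watermarked.
    return current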

Quick start

To evaluate text quality, we use the OpenAI API. You can sign up for an API key on the OpenAI platform and set it as an environment variable:

export OPENAI_API_KEY=<your_key_here>
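A quality-oracle query can be as simple as asking a strong model to compare a perturbed response against the original. The sketch below uses the openai Python client (v1.x); the model name, grading prompt, and function name are illustrative assumptions, not the exact ones used in this repo.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def quality_oracle(prompt, candidate, reference, model="gpt-4o-mini"):
    # Ask an LLM judge whether `candidate` is at least as good a response
    # to `prompt` as `reference`; return True/False.
    judge_prompt = (
        f"Prompt:\n{prompt}\n\n"
        f"Response A:\n{reference}\n\n"
        f"Response B:\n{candidate}\n\n"
        "Is Response B at least as good a response to the prompt as Response A? "
        "Answer with a single word: yes or no."
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": judge_prompt}],
        temperature=0,
    )
    return reply.choices[0].message.content.strip().lower().startswith("yes")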

Language Models

We adopt MarkLLM for benchmarking attacks.

sbatch text/job.sh
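To check whether an attack succeeded, you generate watermarked text with one of the schemes and run its detector before and after the attack. The snippet below is a rough sketch of this flow using MarkLLM's AutoWatermark interface; the module paths, model choice, config path, and constructor arguments are assumptions and may need adjusting to the installed MarkLLM version.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from markllm.watermark.auto_watermark import AutoWatermark
from markllm.utils.transformers_config import TransformersConfig

device = "cuda" if torch.cuda.is_available() else "cpu"

# Wrap a generation model so MarkLLM can plant the watermark during decoding.
transformers_config = TransformersConfig(
    model=AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b").to(device),
    tokenizer=AutoTokenizer.from_pretrained("facebook/opt-1.3b"),
    vocab_size=50272,
    device=device,
    max_new_tokens=200,
)

# Load one of the schemes studied in the paper (KGW / EXP / Unigram).
watermark = AutoWatermark.load(
    "KGW",
    algorithm_config="config/KGW.json",
    transformers_config=transformers_config,
)

prompt = "Write a short paragraph about coral reefs."
watermarked_text = watermark.generate_watermarked_text(prompt)

# The detector should flag this output; after a successful attack it should not.
print(watermark.detect_watermark(watermarked_text))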

Vision-Language Models

sbatch vision/job.sh

Citation

If you find this repo useful, please consider citing:

@inproceedings{zhang2024watermarks,
  title={Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models},
  author={Hanlin Zhang and Benjamin L. Edelman and Danilo Francati and Daniele Venturi and Giuseppe Ateniese and Boaz Barak},
  booktitle={Forty-first International Conference on Machine Learning},
  year={2024}
}
