Reversible RNG? #34
Update: I just realized this is definitely wrong; hopefully something along these lines might still be possible :)
Thanks for the issue. I am definitely interested in reversible RNG. Section 12 of this book https://isidore.co/calibre/get/pdf/5310 is about reversible RNG. I can implement one if it is useful. PRs are also welcome if you are interested in implementing one.

```julia
function f(x)
    @zeros Float64 r
    r += NiLang.i_rand()
    x += r
    ~(r += NiLang.i_rand())
end
```
Thanks for the link :) I guess it's a matter of finding a good RNG that only uses reversible operations, is that right? So xor is ok, but no bit shifts? RandomNumbers.jl has some nice implementations, but I'm not sure about the constraints it would need to meet. Does the idea of
sound plausible to you?
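As an aside on the xor-vs-shift question: bit shifts are not necessarily a problem, because the classic xorshift update `x ⊻= x << a` loses no information. The low `a` bits of the output equal the low `a` bits of the input, and higher bits can then be recovered iteratively. A minimal sketch (the names `xs_step`/`xs_unstep` are illustrative, not from any package):

```julia
# One xorshift step: a bijection on UInt64 (the shift zeros the low a
# bits, so y's low a bits equal x's low a bits).
xs_step(x::UInt64, a::Int) = x ⊻ (x << a)

# Invert it: after k iterations x is correct in its low a*(k+1) bits,
# so cld(64, a) iterations recover all 64 bits.
function xs_unstep(y::UInt64, a::Int)
    x = y
    for _ in 1:cld(64, a)
        x = y ⊻ (x << a)
    end
    return x
end

x0 = 0x123456789abcdef0
@assert xs_unstep(xs_step(x0, 13), 13) == x0
```

So a full xorshift generator built from such steps could, in principle, be run backwards step by step.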
Thanks for the link. RandomNumbers.jl is a solid project! About the idea, I am not sure about the 4th step:
If you mean directly sampling the input variables given a known output, the answer is a clear no unless P = NP. Suppose we have an NP-complete problem, e.g. the maximum independent set problem. We can easily write a program that computes the size of an independent set. But if you fix the output to a specific value and ask the program to sample a possible solution, you are basically asking for a non-deterministic Turing machine, which is unlikely to exist. A reversible program is just a program written in a bijective style. Or do you mean back-propagating the posterior probability? That sounds cool but is also challenging. Is there something like the chain rule in probabilistic programming?
Definitely the latter. The idea here is based on rejection sampling. Suppose you have a prior, and some observations drawn from a conditional distribution based on that prior. Now imagine running the program forward for every possible RNG seed and discarding the runs whose result does not equal your observations. The samples you keep are distributed according to the posterior. Here's a fun example:

julia> using AverageShiftedHistograms
julia> function rejection(obs, num_samples)
           post = Vector{Float64}(undef, num_samples)
           k = length(obs)
           for j in 1:num_samples
               # just being lazy about getting p into scope
               p = rand()
               while true
                   # uniform prior
                   p = rand()
                   # propose some data
                   x = rand(k) .< p
                   # stop sampling when we match the observed data
                   x == obs && break
               end
               post[j] = p
           end
           return post
       end
rejection (generic function with 1 method)
julia> post = rejection([true, true, false], 10000);
julia> ash(post)
Ash
> edges | -0.0742 : 0.0023 : 1.0984
> kernel | biweight
> m | 5
> nobs | 10000
┌────────────────────────────────────────┐
2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣰⠀⠀⢀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡄⣿⢠⡆⣼⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⢠⢧⢳⠏⢾⠹⠀⡿⣷⣿⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⢸⣿⠘⠘⠀⠈⠀⠀⠀⠈⠁⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢷⡞⡏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣷⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡞⠀⠀⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢦⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣴⢰⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣧⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣦⠟⡎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢸⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⡴⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⡆⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢷⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⡟⠏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⢀⣤⡾⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢹⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⡞⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⡆⠀⠀⠀⠀│
│⠀⠀⠀⢠⠶⠛⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢹⠀⠀⠀⠀│
0 │⢀⡤⠦⠞⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣇⠀⠀⠀│
└────────────────────────────────────────┘
0                                                                          1.1

So what I'm wondering is: can we get to would-be-accepted samples more directly by running the simulation in reverse?
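For reference, the posterior in this example is available in closed form: a uniform Beta(1, 1) prior with 2 heads in 3 flips conjugately updates to Beta(3, 2), so the rejection sampler can be sanity-checked against that. A sketch, assuming Distributions.jl is available and `rejection` is the function defined above:

```julia
using Distributions, Statistics

obs = [true, true, false]
# conjugate update: Beta(1 + heads, 1 + tails) = Beta(3, 2)
exact = Beta(1 + count(obs), 1 + count(!, obs))

post = rejection(obs, 10_000)
# the sample mean should be close to E[p] = 3/5
@assert isapprox(mean(post), mean(exact); atol = 0.02)
```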
Hi, this looks like really nice work, and I'm impressed with the speedups you're getting over Zygote AD.
Have you thought about reversible random number generation? Random number generation is reversible in principle; it may just be a matter of finding a generator that's natural to express in your DSL.
I'm curious about using this for probabilistic programming. For example, say we flip a coin 100 times and observe 20 heads. If P(heads) is unknown, we might write this as
Starting with a uniform prior, we could (maybe?) get the posterior like this
Do you think this is possible with NiLang?