Should Subsampling be Recommended? #545

Open
fjclark opened this issue Nov 18, 2024 · 1 comment
fjclark commented Nov 18, 2024

My understanding is: subsampling is recommended so that Equation 4.2 of Kong et al., 2003, which is derived for uncorrelated samples, can be used to estimate the variance. However, subsampling increases the variance. It seems unintuitive to increase the variance so that it can be better estimated. Would it not be better to minimise the variance by retaining all samples, and use a variance estimator which directly accounts for autocorrelation?
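For context, here is a minimal sketch of the subsampling step being questioned, assuming the pymbar 4 `timeseries` API (`statistical_inefficiency`, `subsample_correlated_data`); the AR(1) toy series and all parameter values are purely illustrative:

```python
import numpy as np
from pymbar import timeseries

rng = np.random.default_rng(0)

# Toy correlated timeseries: AR(1) with coefficient phi, for which the
# statistical inefficiency is known exactly: g = (1 + phi) / (1 - phi).
phi, n_samples = 0.9, 10_000
a_t = np.empty(n_samples)
a_t[0] = rng.normal()
for t in range(1, n_samples):
    a_t[t] = phi * a_t[t - 1] + np.sqrt(1 - phi**2) * rng.normal()

g = timeseries.statistical_inefficiency(a_t)              # estimate of g (~19 here)
indices = timeseries.subsample_correlated_data(a_t, g=g)
a_sub = a_t[np.asarray(indices)]                          # ~ n_samples / g retained frames

print(f"estimated g = {g:.1f}, kept {len(indices)} of {n_samples} samples")
# Only the retained frames (and the matching columns of u_kn) are passed to MBAR,
# so that its analytical variance estimate, which assumes uncorrelated samples, applies.
```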

The Issue: Subsampling Increases the Variance

Geyer, 1992 (Section 3.6) discusses subsampling. He points out:

  • Subsampling decreases the statistical inefficiency in units of samples, but increases the statistical inefficiency in units of sampling time (Theorem 3.3)
  • "If the cost of using samples is negligible, any subsampling is wrong. One doesn't get a better answer by throwing away data."

I'm assuming that the cost of using samples is generally negligible compared to the cost of generating them.
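As a quick numerical illustration of Geyer's point, here is a sketch under the assumption of an AR(1) process (where g = (1 + phi) / (1 - phi) exactly): the variance of the sample mean is larger when only every g-th frame is kept than when all frames are used.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
phi, n, n_reps = 0.9, 10_000, 1_000
g = int(round((1 + phi) / (1 - phi)))       # exact statistical inefficiency (19 here)

# Generate n_reps independent AR(1) chains at once: x_t = phi * x_{t-1} + eps_t.
noise = rng.normal(scale=np.sqrt(1 - phi**2), size=(n_reps, n))
x = lfilter([1.0], [1.0, -phi], noise, axis=1)

means_all = x.mean(axis=1)                  # estimator using every sample
means_sub = x[:, ::g].mean(axis=1)          # estimator keeping only every g-th sample

print(f"var(mean), all samples: {means_all.var():.2e}")
print(f"var(mean), subsampled : {means_sub.var():.2e}")
# For phi = 0.9 the subsampled estimator's variance comes out roughly 30% larger,
# even though the retained frames are nearly uncorrelated.
```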

The increase in variance caused by subsampling seems to be shown, for example, in Table III of Tan, 2012, where the variances of the MBAR/UWHAM estimates increase after subsampling (the variances without subsampling are calculated using block bootstrapping).

Possible Solutions: Directly Accounting for Autocorrelation in the Variance Estimates

To account for autocorrelation in the variance estimates without subsampling, block bootstrapping could be used, with the block size selected according to the procedure of Politis and White, 2004 (and correction), for example; a sketch of the block-bootstrap idea is given below. However, I understand that fast analytical estimates may be preferred to avoid repeated MBAR evaluations. Could the analytical estimates from Geyer, 1994 / Li et al., 2023 be used?
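A hedged sketch of what circular block bootstrapping of the MBAR inputs could look like. It assumes the pymbar 4 API (`MBAR`, with `compute_free_energy_differences` returning a dict containing `"Delta_f"`), `u_kn`/`N_k` in pymbar's usual layout, and a fixed `block_length` standing in for the Politis and White selection procedure; the helper names are hypothetical.

```python
import numpy as np
from pymbar import MBAR

def circular_block_indices(n, block_length, rng):
    """Resample the indices 0..n-1 as concatenated circular blocks."""
    n_blocks = int(np.ceil(n / block_length))
    starts = rng.integers(0, n, size=n_blocks)
    idx = (starts[:, None] + np.arange(block_length)[None, :]) % n
    return idx.ravel()[:n]

def block_bootstrap_mbar(u_kn, N_k, block_length, n_boot=100, seed=0):
    """Spread of Delta_f over block-bootstrap resamples, keeping all samples."""
    rng = np.random.default_rng(seed)
    bounds = np.concatenate(([0], np.cumsum(N_k)))  # column range of each state's samples
    delta_fs = []
    for _ in range(n_boot):
        # Resample time indices within each state in contiguous (circular) blocks,
        # which preserves autocorrelation up to the block length.
        cols = np.concatenate([
            bounds[k] + circular_block_indices(N_k[k], block_length, rng)
            for k in range(len(N_k))
        ])
        mbar = MBAR(u_kn[:, cols], N_k)
        delta_fs.append(mbar.compute_free_energy_differences()["Delta_f"])
    return np.std(delta_fs, axis=0)
```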

Why This May Be Irrelevant

I'm biased by the fact that I work with ABFE calculations and regularly feed MBAR very highly correlated data which are aggressively subsampled, sometimes producing unreliable estimates (which are reasonable without subsampling). I understand that for most applications relatively few samples will be discarded and any increase in uncertainty may be small.

It would be great to hear some thoughts on this, or to be corrected if I am misunderstanding.

Thanks!

@mrshirts
Collaborator

I am extremely interested in this, and think we need to examine it. I suspect block bootstrapping is going to be the best way to do this. There are issues at small sample numbers that I have seen with most analytical estimates, but I would like to investigate the two you list a bit more. I am going to be tied up for the next 10 days or so (and then digging out afterwards), but this is very interesting to me, and I would love to talk more.

I'd love to get a quantitative estimate of the additional errors introduced by subsampling. I would say in MBAR the biggest issue is poor estimation of the correlation time leading to too aggressive subsampling (another thing we have talked about), but the question stands even if the autocorrelation time is estimated correctly. We should think of the right experiments to show this (perhaps artificial data generated with an autoregressive model, so that we know the autocorrelation time exactly; see the sketch below).
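Along those lines, a hedged sketch of such an experiment, assuming the pymbar 4 `timeseries` API: AR(1) data where the statistical inefficiency is known exactly, g = (1 + phi) / (1 - phi), so the accuracy of the estimated correlation time can be checked separately from the effect of subsampling itself.

```python
import numpy as np
from scipy.signal import lfilter
from pymbar import timeseries

rng = np.random.default_rng(2)

for phi in (0.5, 0.9, 0.99):
    g_exact = (1 + phi) / (1 - phi)          # = 1 + 2 * tau_int for an AR(1) process
    for n in (1_000, 10_000, 100_000):
        # x_t = phi * x_{t-1} + eps_t, generated with a linear filter for speed.
        noise = rng.normal(scale=np.sqrt(1 - phi**2), size=n)
        x = lfilter([1.0], [1.0, -phi], noise)
        g_est = timeseries.statistical_inefficiency(x)
        print(f"phi={phi:.2f}  n={n:>6d}  g_exact={g_exact:6.1f}  g_est={g_est:6.1f}")
```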
