Epoch folding with large datasets #805
Comments
Hi @rodrigruiz, thanks for the issue. Another thing you might try for your specific need is passing the data as memory-mapped arrays. I have never tried those particular functions with memory maps, but they should work. Note also that HENDRICS has an algorithm for fast frequency/fdot searches (the quasi-fast-folding algorithm). It can be used through HENzsearch or directly from the API.
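A minimal sketch of the memory-mapped approach suggested above, using only NumPy (the file path and event times here are synthetic stand-ins; the commented `fold_events` call is hypothetical usage, not tested):

```python
import os
import tempfile

import numpy as np

# Write some fake photon arrival times to disk, standing in for a dataset
# too large to hold in RAM.
path = os.path.join(tempfile.mkdtemp(), "times.dat")
rng = np.random.default_rng(0)
np.sort(rng.uniform(0.0, 1000.0, 100_000)).astype(np.float64).tofile(path)

# Re-open the file as a memory map: the OS pages data in on demand, so the
# full array never has to be resident in memory at once.
times = np.memmap(path, dtype=np.float64, mode="r")

# A memmap behaves like an ndarray, so in principle it can be passed to
# functions such as stingray's fold_events (hypothetical, untested):
# phase_bins, profile, profile_err = fold_events(times, frequency, nbin=32)
```

Whether this helps depends on the access pattern: functions that make full in-memory copies of the input will still exhaust RAM, while those that stream through the array will benefit.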
Hi @matteobachetti, thanks so much for the detailed answer and the advice!
BTW, if you think that a private function should really be made public because it's useful on its own, feel free to let us know!
Sure, thanks!
Hi,
We are using Stingray to analyse time data, and we are interested in performing an epoch folding search with a large dataset (much larger than can fit in the RAM of the computer). Our strategy would be to split the dataset into smaller subsets, fold each of them using the `fold_events` function, sum the folded profiles, and then perform the analysis on the summed profile, as is done in the `epoch_folding_search` function. However, we think this is not possible because some of the functions called from within `epoch_folding_search` are not public (for example, `_folding_search` or `_profile_fast`). A solution would be to make these functions public, but maybe there is a better strategy to perform this analysis without modifying the Stingray software that we did not think of. Any advice from your side would be much appreciated.
Thanks!
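The chunked fold-and-sum strategy described above can be sketched with plain NumPy, without touching Stingray's private functions. The `fold_chunk` helper below is a hypothetical stand-in that mimics what event folding does (phases from trial frequency, then a histogram); the final chi-squared against a flat profile is the kind of statistic `epoch_folding_search` evaluates per trial frequency:

```python
import numpy as np

def fold_chunk(times, frequency, nbin=32):
    """Fold one chunk of event times at a trial frequency.

    Hypothetical sketch: convert arrival times to pulse phases and
    histogram them into nbin phase bins (not Stingray's implementation).
    """
    phases = (times * frequency) % 1.0
    profile, _ = np.histogram(phases, bins=nbin, range=(0.0, 1.0))
    return profile

# Simulated pulsed events at f_true = 1.5 Hz.
rng = np.random.default_rng(42)
f_true, nbin = 1.5, 32
times = np.sort(rng.uniform(0.0, 2000.0, 200_000))
# Keep events preferentially near phase zero to imprint a pulse.
keep = rng.uniform(size=times.size) < 0.5 + 0.4 * np.cos(2 * np.pi * times * f_true)
times = times[keep]

# Process in chunks (stand-in for reading subsets from disk) and sum
# the folded profiles, so the full dataset is never in memory at once.
total = np.zeros(nbin)
for chunk in np.array_split(times, 10):
    total += fold_chunk(chunk, f_true, nbin)

# Epoch-folding statistic on the summed profile: chi-squared of the
# profile against a constant (unpulsed) model.
mean = total.mean()
stat = np.sum((total - mean) ** 2 / mean)
```

Repeating this over a grid of trial frequencies reproduces the search loop; the summed profiles are additive across chunks because a histogram of a concatenation equals the sum of the per-chunk histograms.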