Extremely long computation time for LFI in practical cases #114
Comments
Hi Zarach, could you give an example of a program where this issue occurs?
Hi rmanhaeve, here is an example of the program: (the underscores are just somehow removed by the formatting) Then there are about 1000 complete examples in the following format: evidence(class(topic1),true). Some of them have no evidence for any of the words.
Hi Zarach, could you also tell me which knowledge compiler you are using? Do you have the PySDD package installed?
Hi rmanhaeve, PySDD is installed. I did not choose the compiler explicitly, so I assume that SDD is used.
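To take the guesswork out of which compiler is used, it can be selected explicitly on the command line. A sketch, assuming the standard problog CLI and that the -k flag selects the knowledge compilation backend (the exact flag names may differ between versions, so check problog lfi --help):

```shell
# Force the SDD backend explicitly (assumption: -k selects the
# knowledge compiler; model.pl and evidence.pl are placeholder names).
problog lfi model.pl evidence.pl -k sdd -O learned_model.pl

# Re-run with the d-DNNF backend to see whether the per-iteration
# slowdown is specific to SDD compilation.
problog lfi model.pl evidence.pl -k ddnnf -O learned_model.pl
```

Comparing the two runs would at least show whether the slowdown is tied to the chosen compiler or to the learning loop itself.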
Hi Zarach, I have been looking into this a bit, but I have not been able to pinpoint the issue you've been having. I'll label it as a (potential) bug, which we'll have to look into later. Kind regards,
Dear ProbLog Team,
I'm trying to use the Noisy-Or example for a rather practical case, with 4 topics and 15 words.
I use about 1000 examples to learn the parameters where every example has complete evidence about the use of words.
The first iterations are fairly fast, but the learning process then slows down from iteration to iteration.
I'm running the process on a server that is mostly idle while the learning runs.
I'm therefore wondering whether this is normal behaviour (or whether I'm doing something wrong), and whether it is even feasible to run Noisy-Or parameter learning on larger corpora.
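For reference, a minimal sketch of the kind of Noisy-Or model described above, using ProbLog's t(_) placeholder syntax for parameters to be learned by LFI (the names topic1, topic2, w1, w2 are illustrative, not taken from the actual program):

```prolog
% Learnable prior for each topic; t(_) marks a parameter
% to be estimated by LFI.
t(_)::class(topic1).
t(_)::class(topic2).

% Noisy-Or structure: each topic independently "causes" a word
% through its own learnable rule; multiple rules with the same
% head combine as a noisy-or.
t(_)::word(w1) :- class(topic1).
t(_)::word(w1) :- class(topic2).
t(_)::word(w2) :- class(topic1).
t(_)::word(w2) :- class(topic2).

% Optional leak probability: a word may occur without any topic.
t(_)::word(w1).
t(_)::word(w2).
```

Each training example then consists of evidence facts such as evidence(class(topic1),true). and evidence(word(w1),false)., with the roughly 1000 examples kept separate in the evidence file (in ProbLog's LFI format, examples are typically separated by a line of dashes, though this should be checked against the version in use).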