"Let me know what you'd like to do next" repeats after answering the question and doesn't stop #259
Comments
I'm also experiencing this issue with Ubuntu 22.04 and Python 3.10.12.
Well, at least I'm not the only one. Have you tried rolling back to a previous version? This is a problem for me because I have an awesome idea for a "device case" that I want to demo, but I need a working server first. Let me know if you figure something out @highb; I'll do the same.
Unfortunately, I haven't had time to tinker with it recently. The Discord has some discussions about similar issues if you want to check them out.
I'm having the same issue, granted I'm only trying to run locally while I troubleshoot, which might be part of my problem. I'm running either command-r (not plus), Llama70B, or Mixtral8x7B-V2.8, plus local whisper, plus piper, plus ollama on a separate server, plus the mobile app (which I got working!), but it's in development and I never expected it to just WORK, especially since I'm trying to just create a shortcut for my side button.

I get similar results, but I think part of that is the system prompt that ollama might be passing, since I have OpenWebUI running on that same server to craft models:

"It seems we're encountering similar issues, which I initially attributed to a poorly configured system or corrupted memory. Specifically, it appears to reference a Windows 7 computer, suggesting it may be necessary for utilizing the computer module."

Then I had to convince it that it had just finished... on my Mac... (I don't even own a Windows 7 computer, nor have I had a reason to put one in ANY context window.)

Getting the context window right or wrong makes a dramatic impact; if you pass too much, it tends to misbehave no matter the model (I've tried a bunch). If you're trying to showcase it, I'd say keep it simple, but be specific about the simplicity, if that makes any bit of sense. I do like that you can do %save_message and %load_message, but I think I'm running into passing too much context again. I'm currently trying to get it to teach itself how to use AIFS/Chroma, which should just work, but it seems to be an ongoing issue and conversation. Not here, just between me and the LLM. Ha!
I finally tried to start from scratch again. Blew away the install, made a new conda env, installed Python 3.10, downloaded everything again, and now it seems to work fine. I wish I had some insight I could provide, but I am currently no longer experiencing this issue.
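For anyone hitting the same loop, the clean-reinstall steps described above can be sketched roughly like this. This is a hedged sketch, not an official procedure: the environment name is arbitrary, and the exact install and run commands may differ between 01 versions, so follow the repo's README if yours disagrees.

```shell
# Rebuild from scratch in a fresh conda environment with Python 3.10
conda create -n 01-clean python=3.10 -y
conda activate 01-clean

# Re-download the project (the server code lives under software/)
git clone https://github.com/OpenInterpreter/01.git
cd 01/software

# Install dependencies and start the server, then retest with the spacebar
# to see whether the "Let me know what you'd like to do next" loop is gone
poetry install
poetry run 01
```

Blowing away the old environment rather than upgrading in place rules out stale dependencies (e.g. an older open-interpreter pinned in the previous env) as the cause.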
I found out it was an open-interpreter issue. The
Describe the bug
I was previously running an older version of the 01 server and updated everything a few days ago when the latest (0.2.5) was released. After updating, I first tested using my Atom device. It answers the initial question, but I can see in the terminal output that it gets into a loop after the question and just starts outputting like this until I kill it:
This (above) is the output from retesting using just the spacebar (to eliminate the device as a possible source of the issue). Not sure what to try next. Rebooted, installed again in a new conda environment, same issue.
Any help or suggestions appreciated!