Adding self-guided context discovery - looking for feedback #1210
Replies: 2 comments
-
I'm considering trying out the Cline Memory Bank and the Roo Code Memory Bank for improved project context in VS Code; they seem potentially useful based on what you've described. I haven't tested them yet. Has anyone here tried them and can share their experience?
-
Hi @dmanresa-saes! I haven't personally tried either of those memory bank extensions yet, but they look interesting and relevant to what I was suggesting.

From what I can see, my proposal is a bit different: instead of just storing memories/context, I'm thinking about having the AI assistant actively discover what context it needs and seek it out in the codebase automatically.

If you end up trying either of these extensions, I'd love to hear about your experience! It would be really helpful to know how they perform with larger codebases and how much manual work is still required to provide proper context. Are you working on particularly large projects where context management is becoming an issue?
-
Hey all!
I've been using Roo Code for a few weeks now and I'm really impressed with what you've built. It's definitely become a part of my daily workflow!
While working with it on some larger projects, I noticed something that could be a cool addition. Right now we have to manually add files to the context, but what if the AI could figure out what context it needs on its own?
I've been playing around with some ideas that might help:
- Having the AI identify when it doesn't have enough info and proactively search the codebase for what it needs: when it mentions a function but doesn't have the implementation, it could say "hang on, let me find that for you" and actually go look for it
- Smarter context management: ranking files by importance to the current question and dropping less relevant ones when we hit token limits
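To make the first idea concrete, here's a minimal sketch of self-guided discovery in plain Python. Everything here (the `find_missing_definitions` helper, the example codebase) is hypothetical, just to illustrate the flow: spot a function that's called but not defined in the current context, then search the codebase for its definition.

```python
import re

def find_missing_definitions(snippet: str, codebase: dict[str, str]) -> dict[str, str]:
    """Find functions the snippet calls but doesn't define, and locate
    where they're defined elsewhere in the codebase (path -> source)."""
    called = set(re.findall(r"(\w+)\s*\(", snippet))
    defined = set(re.findall(r"def\s+(\w+)", snippet))
    missing = called - defined

    found = {}
    for name in missing:
        pattern = re.compile(rf"def\s+{re.escape(name)}\s*\(")
        for path, source in codebase.items():
            if pattern.search(source):
                found[name] = path
                break  # first definition found is good enough for the sketch
    return found

# Usage: the assistant's context contains a call to parse_config but not
# its body, so it goes looking for the file that defines it.
codebase = {
    "app/main.py": "from config import parse_config\n\nresult = parse_config('app.toml')\n",
    "app/config.py": "def parse_config(path):\n    return {}\n",
}
print(find_missing_definitions("result = parse_config('app.toml')", codebase))
```

A real implementation would of course use the language server or an AST instead of regexes, but the loop is the same: detect the gap, search, pull the result into context.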
I've been experimenting with LangChain for this kind of stuff in side projects, and it seems to have good components for this - especially for memory management and document handling.
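And for the ranking idea, a rough stand-alone sketch (no LangChain dependency — keyword overlap stands in for a real relevance model like embeddings, and the 4-chars-per-token estimate is just a crude heuristic; all names are made up for illustration):

```python
import re

def rank_and_trim(question: str, files: dict[str, str], token_budget: int) -> list[str]:
    """Rank files by a crude relevance score against the question, then
    keep the top-ranked ones until the approximate token budget runs out."""
    query_words = set(re.findall(r"\w+", question.lower()))

    def score(source: str) -> int:
        # keyword overlap with the question; swap in embeddings for real use
        return len(set(re.findall(r"\w+", source.lower())) & query_words)

    def approx_tokens(source: str) -> int:
        return len(source) // 4  # rough heuristic: ~4 characters per token

    ranked = sorted(files, key=lambda path: score(files[path]), reverse=True)
    kept, used = [], 0
    for path in ranked:
        cost = approx_tokens(files[path])
        if used + cost <= token_budget:
            kept.append(path)
            used += cost
    return kept

# Usage (hypothetical file contents): only the most relevant file fits
# under a tight token budget.
files = {
    "auth.py": "def login(user, password): ...",
    "billing.py": "def charge(card): ...",
    "README.md": "project overview",
}
print(rank_and_trim("how does login check the user password", files, 8))
```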
Just wondering if this sounds like a direction worth exploring.
I'd be happy to prototype something if you think it could be valuable! Not looking to mess with your architecture, just genuinely think this could be a cool enhancement.
Thanks for making such a useful tool!