Performance improvement #4
This language server currently has poor performance.
Comments
What can be explicitly improved?
Actually, I have no idea. Currently, coala is run in a separate process and prints its results to a temp file; when the process exits, the language server reads the results back from the temp file and parses them, just like coala-sublime does. I have tried to call coala from Python directly instead of running it in a separate process, but failed. That approach would mean replicating some code and state inside the language server, but I think it could improve performance. This is only an idea. 😄
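For context, here is a minimal sketch of the subprocess-plus-temp-file pattern described above, assuming coala's `--json` output mode (the real coalashim code differs in its details):

```python
import json
import os
import subprocess
import tempfile

def run_coala_subprocess(file_path):
    """Run coala in a child process and read its results from a temp file.

    Every call pays for a fresh interpreter start-up plus a full import
    of all bears, which is the overhead discussed in this thread.
    """
    fd, out_path = tempfile.mkstemp(suffix='.json')
    try:
        with os.fdopen(fd, 'w') as out:
            # '--json' makes coala emit machine-readable results.
            subprocess.run(['coala', '--json', '--files', file_path],
                           stdout=out, check=False)
        with open(out_path) as result_file:
            return json.load(result_file)
    finally:
        os.remove(out_path)
```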
Maybe you can keep a Python instance alive, so you don't need to start Python again and again and have it import all bears every time. When necessary, you just signal this Python daemon to trigger an analysis, and the daemon calls into the coala main function.
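A rough sketch of that suggestion; note that `analyse` is a hypothetical placeholder for a direct call into coalib, not an actual coala API:

```python
import queue
import threading

def analyse(file_path):
    """Hypothetical placeholder for a direct call into coalib
    (e.g. the execute_section approach discussed below)."""
    return []

# Requests are queued so editor events never block on analysis.
requests = queue.Queue()

def worker():
    # The heavy imports (coalib plus all bears) would happen once here,
    # not on every lint request as with a fresh `coala` subprocess.
    while True:
        file_path, on_done = requests.get()
        on_done(analyse(file_path))

threading.Thread(target=worker, daemon=True).start()
# Signalling the daemon is then just a queue put:
requests.put(('example.py', print))
```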
Eh, I'm not sure I understand. The language server is already a long-running task. Could this server act as the daemon?
Nvm then ;)
I ran coala on a sample file and here are some stats, though this is not based on detailed profiling.
To improve performance, the coala language server could have a custom entry point into coalib instead of hacking around the CLI interface. From what I have observed, execute_section (https://github.com/coala/coala/blob/0b1abe0808bcaa4e0930bf5276c4d5e6e0b43a41/coalib/coala_main.py#L173) does the primary job of running the bears and collecting and returning their results. As @gaocegege pointed out, we could have the long-running language server load the bears (https://github.com/coala/coala/blob/0b1abe0808bcaa4e0930bf5276c4d5e6e0b43a41/coalib/coala_main.py#L139) and reuse them in each cycle. Since the language server's job is limited to linting the code, not fixing it, we could run multiple sections in parallel on threads over the same file (coala supports parallel jobs, but I am not currently aware whether it supports sections running in parallel). And since we can take the results directly from execute_section, only one JSON encoding cycle would be needed instead of the three we currently require; see the sketch below. Compared to the current implementation of coalashim, however, this approach would be harder to maintain if coala's API changes.
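A minimal sketch of that flow. The `execute_section` stub below only marks where the call into coalib would go; its real signature varies across coala releases and should be taken from the pinned source linked above:

```python
import json

_BEARS = {}  # section name -> loaded bears, filled once per server lifetime

def execute_section(section_name, bears, file_path):
    """Stub marking where coalib's execute_section would be called;
    check the pinned coalib source for the actual signature."""
    return []

def load_bears(section_name):
    # Stands in for the bear-loading step linked above: done once,
    # so later requests skip the expensive bear imports entirely.
    return object()

def lint(file_path, section_names):
    results = {}
    for name in section_names:
        bears = _BEARS.setdefault(name, load_bears(name))
        results[name] = execute_section(name, bears, file_path)
    # Raw results are encoded exactly once, instead of the three
    # encode/decode cycles the current shim goes through.
    return json.dumps(results, default=str)

print(lint('example.py', ['python', 'flakes']))
```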
I would like to work on this.
Re parallelisation, I suggest chatting to @Makman2 on gitter before going down that path. No doubt he would love to have help improving the parallelisation in the coala core, which will benefit all users rather than just coala-vs-code. Tighter coupling with coala is IMO the way to go.
Parallel execution of sections is currently not officially supported. It might be possible to call some API directly and circumvent this, but I don't recommend that. |
If you have additional ideas to improve the performance of this language-server (or such daemon-like integrations in general), please tell me so I can improve the NextGen-Core further :) |
Btw: Just filed coala/coala#5334 which is also relevant for this application. |