This repository has been archived by the owner on Feb 1, 2019. It is now read-only.

Performance improvement #4

Open · gaocegege opened this issue Jan 25, 2017 · 11 comments

gaocegege (Member) commented Jan 25, 2017

Right now this language server performs poorly.

Makman2 (Member) commented Jan 25, 2017

What exactly can be improved?

gaocegege (Member, Author) commented Jan 26, 2017

Actually, I have no idea.

Currently, coala runs in a separate process and prints its results to a temp file. When the process exits, the language server reads the results back from the temp file and parses them, just like coala-sublime does.

I tried calling coala from within Python instead of running it in a separate process, but failed. That approach would mean duplicating some code and hosting state inside the language server, but I think it could improve performance.

This is only an idea. 😄
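A minimal sketch of the current flow might look like the following (assuming coala's `--json` and `--files` CLI flags; the real shim goes through a temp file rather than stdout, as described above):

```python
import json
import subprocess

def lint_file(path, cwd):
    # Spawn coala as a child process: every request pays the full
    # interpreter-startup and bear-import cost again.
    proc = subprocess.run(
        ['coala', '--json', '--files', path],
        cwd=cwd,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    # coala exits non-zero when it finds results, so parse the output
    # regardless of the return code.
    return json.loads(proc.stdout.decode('utf-8')).get('results', {})
```

The per-request process spawn and bear imports are the overhead the ideas below try to eliminate.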

Makman2 (Member) commented Jan 26, 2017

Maybe you can keep a Python instance alive, so you don't need to start Python again and again and have it re-import all the bears each time. When necessary, you just signal this Python daemon to trigger an analysis, and the daemon calls into the coala main function.
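A bare-bones sketch of that daemon idea; `run_analysis` is a hypothetical stand-in for whatever entry point into coala's main function gets used:

```python
import queue

def daemon_main(run_analysis, requests):
    # All imports (including bear loading) happened once at startup;
    # this loop just serves analysis requests until told to stop.
    while True:
        path = requests.get()   # block until a file needs analysis
        if path is None:        # a None sentinel shuts the daemon down
            break
        run_analysis(path)

if __name__ == '__main__':
    requests = queue.Queue()
    requests.put('some_file.py')
    requests.put(None)  # shut down after one request, for demonstration
    daemon_main(print, requests)  # print stands in for the analysis call
```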

gaocegege (Member, Author) commented

Eh, I'm not sure I understand. The language server is already a long-running task. Could this server act as the daemon?

Makman2 (Member) commented Jan 26, 2017

> The language server is already a long-running task.

Nvm then ;)

ksdme (Member) commented Apr 6, 2018

I ran coala on a sample file and here are some stats (not based on detailed profiling, though).

```
section: python     | Elapsed Time: 0.34254980087280273 sec
section: flakes     | Elapsed Time: 1.7597076892852783 sec
section: autopep8   | Elapsed Time: 1.8842582702636719 sec
section: linelength | Elapsed Time: 0.3533940315246582 sec
Total time          | Elapsed Time @main: 9.874181509017944 sec
```

Note that the four sections together account for only about 4.3 of the 9.87 seconds; the remainder presumably goes to startup, bear collection, and other overhead.

To improve performance, the coala language server could have a custom entry point into coalib instead of depending on hacks around the CLI interface. From what I have observed, execute_section (https://github.com/coala/coala/blob/0b1abe0808bcaa4e0930bf5276c4d5e6e0b43a41/coalib/coala_main.py#L173) does the primary job of running the bears and collecting and returning their results. As @gaocegege pointed out, we could have the long-running language server load the bears (https://github.com/coala/coala/blob/0b1abe0808bcaa4e0930bf5276c4d5e6e0b43a41/coalib/coala_main.py#L139) once and reuse them in each cycle.
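A rough sketch of what that could look like; `load_bears_for` and `run_section` are hypothetical stubs standing in for coalib's bear collection and execute_section, whose real signatures differ:

```python
def load_bears_for(section_name):
    """Hypothetical stand-in for coalib's bear-collection step."""
    return []

def run_section(section_name, bears, path):
    """Hypothetical stand-in for coalib's execute_section."""
    return []

class CoalaSession:
    """Long-lived session: bears are loaded once and reused per request."""

    def __init__(self, section_names):
        self.section_names = section_names
        # Done once at server startup instead of on every lint request.
        self.bears = {name: load_bears_for(name) for name in section_names}

    def lint(self, path):
        # Results come back as Python objects, so only a single JSON
        # encoding step is needed before sending them to the client.
        return {name: run_section(name, self.bears[name], path)
                for name in self.section_names}
```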

Since the language server's job is limited to linting the code, not fixing it, we could run multiple sections in parallel on threads over the same file (coala supports parallel jobs, but I am currently not aware whether it supports running sections in parallel). And since we would get the results directly from execute_section, only one JSON encoding cycle would be needed, unlike the three we currently require.
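Reusing the CoalaSession and the hypothetical run_section stub from the sketch above, the parallel-sections idea might look like this (again only a sketch; whether coala's bears are actually thread-safe is an open question here):

```python
from concurrent.futures import ThreadPoolExecutor

def lint_parallel(session, path):
    # Each section's bears run in their own thread over the same file;
    # safe here only because the server lints without applying fixes.
    with ThreadPoolExecutor() as pool:
        futures = {
            name: pool.submit(run_section, name, session.bears[name], path)
            for name in session.section_names
        }
        return {name: future.result() for name, future in futures.items()}
```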

Compared to the current implementation of coalashim, though, this approach would be harder to maintain if coala's API changes.

ksdme (Member) commented Apr 6, 2018

I would like to work on this.

jayvdb (Member) commented Apr 6, 2018

Re parallelisation: I suggest chatting to @Makman2 on Gitter before going down that path. No doubt he would love to have help improving the parallelisation in the coala core, which would benefit all users rather than just coala-vs-code.

Tighter coupling with coala is IMO the way to go.
If coala doesn't provide the ideal API, let's investigate how we can improve coala to allow it to be embedded in a language server without unnecessary overhead. I think https://github.com/coala/coala-quickstart/ will also need something like that for the 'green mode'.

Makman2 (Member) commented Apr 7, 2018

Parallel execution of sections is currently not officially supported. It might be possible to call some API directly and circumvent this, but I don't recommend that.
But as with many other issues, the NextGen-Core is the solution. It's capable of running bears belonging to different sections in parallel, and running the core is also far easier than it is now (using coalib.core.run()). I would suggest we try the new core; I don't want to hack around the old one any more, as that is wasted time given it's going to be replaced.
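To illustrate, a hedged sketch of what a lint request on the NextGen-Core might look like; the import path, run() parameters, and bear instantiation are assumptions based only on the description above, not verified API:

```python
# NOTE: import path and signature are assumed from the description
# above, not verified against coala's NextGen-Core source.
from coalib.core.Core import run

def on_result(result):
    # Hypothetical callback: collect or serialize each result as the
    # core produces it.
    print(result)

# `bears` would be instantiated bear objects, potentially from several
# sections; the new core is said to schedule them in parallel itself.
bears = []
run(bears, on_result)
```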

Makman2 (Member) commented Apr 7, 2018

If you have additional ideas for improving the performance of this language server (or of such daemon-like integrations in general), please tell me so I can improve the NextGen-Core further :)

Makman2 (Member) commented Apr 7, 2018

Btw: just filed coala/coala#5334, which is also relevant for this application.
