Servo TSC Meeting February 2025

Agenda

  • Status update
  • New self-hosted runners
  • WebView API
  • AI policy review
  • Governance
  • Outreach
  • AOB

Notes

Attending:

  • TSC members: delan, emilio, gterzian, jschwe, Loirooriol, mrego, mrobinson, mukilan, nicoburns, wusyong
  • Other: JoshuaBrest, matlu, Sid Askary, ststimac, Taym95

Status update

rego: We published the update for January / February, a little late.

  • Lots of updates for CSS Tables. We pass more tests than Chrome there.
  • Lots of work on Shadow DOM. Very close to enabling it by default.
  • Improvements to ReadableStreams. Merged.
  • The FontFace API now has a prototype implementation.
  • Lots of refactoring improvements to the script module.
  • Preferences have been revamped and now have a different syntax.
  • Lots of work on the WebView API, with the goal of having a minimal example embedding Servo with it. Several changes to different ports of Servo to adapt to the new WebView API.
  • Servoshell is being migrated to egui dialogs from tinyfiledialogs as part of the Outreachy internship.

New self-hosted runners

rego: #127 (comment)

delan: gives update

  • Switched to using more self-hosted runners.
  • Improved build speed.

delan: Other things were hinted at in the proposal. The additional capacity will allow us to host other builds we aren't handling there yet: OHOS / Android builds, runtime benchmarking, WPT. WPT is the most important, as it's on the critical path for try jobs.

delan: If we want to self-host WPT runs we need to ask for more capacity than just the three servers we have.

nico: Thanks for all the work on the self-hosted runners, because it's made a huge difference to Servo. Not exactly sure, but a full WPT run now takes about 20 minutes (an hour before). Everyday builds take 5 minutes rather than 30. On WPT tests, I wonder how much it's worth prioritizing that, as it might take a lot of resources to improve on the 20 GitHub runners we get. If we can sort out the pull request builds, it could make a bigger difference. Around WPT, I think there was a discussion before about optimizing the runner itself.

delan: For running WPT tests, fewer jobs might be required because the new runners might run the jobs (tests) faster.

delan: Thanks for the kind words. It has indeed been a lot of work. For WPT runs, it depends on the research we are doing. With the faster servers, we may not need 20 jobs any longer, because each job might be faster. Still doing research there. You're right though, it may or may not be worth it. For PR builds, I'd like to make them faster. I've got to be honest though, I think it's unlikely to work out: with self-hosted PR builds, the build has access to all of the secrets, and you have to rely on a human not to make a mistake, or we leak all of the secrets for the whole repo. I suspect we won't be able to do it while using GitHub Actions. Need to look more closely.

nico: It might be useful to run smaller subsets of the WPT tests; maybe we can save a bunch of time.

delan: Good idea. Probably easy to implement.

WebView API

rego: Lots of work on the WebView API. Yu Wei is asking what the plan is.

martin: the goal of the webview work is to have an embedding API similar to the WebKit one, because it is easy to use and has a lot of features

martin: we're in the middle, so maybe some things are weird during the transition period

martin: the idea would be to have a webview widget you can use, hopefully with very few lines of code

martin: ability to have multiple windows and web views. The goal is around 50 lines of code to create a simple web view.

martin: longer term we have other ideas, such as exposing a C API, which would allow us to ship Servo as a shared object so that people can use Servo without having to build it

martin: which would be very nice, since people wouldn't have to set up a more complex environment
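
For illustration, a minimal sketch of what a delegate-style webview embedding along these lines could look like is below. None of these names come from libservo; `WebView`, `WebViewDelegate`, and `PrintingDelegate` are stand-ins defined inline so the sketch is self-contained, meant only to show the rough shape of "a webview widget plus a handful of callbacks".

```rust
// Illustrative only: stand-in types showing the rough shape of a
// delegate-based webview API. These are NOT libservo's real types.

// The embedder implements a delegate trait to receive events from the engine.
trait WebViewDelegate {
    fn title_changed(&self, title: &str);
    fn load_finished(&self, url: &str);
}

// A stand-in webview handle owned by the embedding application.
struct WebView<D: WebViewDelegate> {
    delegate: D,
}

impl<D: WebViewDelegate> WebView<D> {
    fn new(delegate: D) -> Self {
        WebView { delegate }
    }

    // A real engine would start a navigation here; this stub just
    // notifies the delegate so the sketch runs end to end.
    fn load(&self, url: &str) {
        self.delegate.title_changed("Example Domain");
        self.delegate.load_finished(url);
    }
}

// A minimal embedder: print events as they arrive.
struct PrintingDelegate;

impl WebViewDelegate for PrintingDelegate {
    fn title_changed(&self, title: &str) {
        println!("title: {title}");
    }

    fn load_finished(&self, url: &str) {
        println!("finished loading {url}");
    }
}

fn main() {
    let webview = WebView::new(PrintingDelegate);
    webview.load("https://example.org/");
}
```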

yu wei: I really like the current direction we are working in. I plan to give it a try in the GTK ecosystem, and also try to wrap it and propose a GObject API around it to officially support the GTK ecosystem. I wonder what the plan is: push the WebView API as a library, or become more like a browser?

yu wei: If we want to propose servoshell as a browser, I might propose other changes.

martin: I don't really know the answer to the question about whether Servo should be a browser or a webview. I know there are hybrid models like WPE that expose an embedding API and a minimal browser (which is useful for signage use cases)

martin: regarding the GObject API, I think it's an interesting idea; perhaps that could even be our C API, something to think about

delan: On the question about whether Servo wants to be a browser, I'm not sure either. I think it's interesting and I see it as something that we'd likely want to explore, mostly because one of the problems we have to solve is changing the messaging around Servo. There's a pervasive belief in the general public that Servo is only a web engine and not a browser.

delan: Interesting to hear that there are also technical challenges.

nico: I like the direction we are going with the API. With regard to a browser, I think we might want to explore a browser that's more fleshed out than servoshell. I think it's good for marketing, and there's lots of buzz around Ladybird. Not sure if you can download it, but you can build it, and they have 3-4 UIs because they are doing native UI on each platform. If we are designing an embedding API, it's a good driver and test case: it requires us to make sure the API is rich enough, it makes us sort out the UI, and it makes sure the embedding API works well with at least one toolkit.

yu wei: What nico says is exactly my concern right now. We use egui and are working on servoshell now; working with egui shouldn't block figuring out how it works with other embedding APIs. I think Firefox and Chromium render their own UI. If Servo wants to grow into a full web browser, you need to figure out how to define this kind of thing. I would push for decoupling the dependency on servoshell.

rego: Thanks for input. I guess this is evolving.

jonathan: For the OpenHarmony use case, we are now starting to actively look into using Servo as a backend for the webview. For that use case we would also be looking into making a C API on top of the libservo embedding API. Since the Servo embedding API is currently still being worked on, I'm not sure if we should do that in-tree, or if we should do that in a different repo for now. Regarding the browser discussion, I personally think it might be a bit early to start focusing on a browser, since the engine is still quite far from being "ready" and it might stretch our resources. I was also previously under the impression that Verso would be focusing on the browser part, so I'm wondering if we really need to do more in Servo, and couldn't leave that up to Verso.

yu wei: After a few months, Verso is more like a personal project. I have a vision for what I want to push; I have a lot of wild ideas that might not fit what Servo wants to do. Maybe it's a bit different from what Servo wants to do. If Servo wants to build a browser, how do we render the chrome UI / decouple it from egui? My priorities might be a bit different: how to do a C API / grow into a browser project.

martin: Regarding the C API, it's a good idea to have it upstream. My only two thoughts are to add it when the webview API is more stable, so that we don't have to keep fixing the C API, and that I want the C API to match the Rust one as much as possible (aside from reference counting and the like). And if there is a way, we should avoid duplicating the effort (in OpenHarmony).
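
As an illustration of the general pattern being discussed (not Servo's actual FFI), here is a minimal sketch of exposing a Rust type through a C API, where explicit create/free calls replace Rust's ownership at the boundary. All of the `servo_webview_*` names are hypothetical.

```rust
// Illustrative only: the usual pattern for exposing a Rust type through a
// C API. The servo_webview_* names are hypothetical, not Servo's real FFI.
use std::ffi::{c_char, CStr};

// The Rust-side type that the C API wraps; C callers only see an opaque pointer.
pub struct WebView {
    url: String,
}

#[no_mangle]
pub extern "C" fn servo_webview_new() -> *mut WebView {
    Box::into_raw(Box::new(WebView { url: String::new() }))
}

#[no_mangle]
pub unsafe extern "C" fn servo_webview_load(view: *mut WebView, url: *const c_char) {
    // Safety: the caller promises `view` came from servo_webview_new and
    // `url` is a valid NUL-terminated string.
    unsafe {
        (*view).url = CStr::from_ptr(url).to_string_lossy().into_owned();
    }
}

// An explicit free call takes the place of Rust's automatic ownership,
// which is the "aside from reference counting" point mentioned above.
#[no_mangle]
pub unsafe extern "C" fn servo_webview_free(view: *mut WebView) {
    if !view.is_null() {
        // Safety: reclaims the Box allocated in servo_webview_new.
        unsafe { drop(Box::from_raw(view)) };
    }
}
```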

AI policy review

rego: https://gist.github.com/gterzian/26d07e24d7fc59f5c713ecff35d68f01

rego: summarizes

rego: Last year we set up a policy, discussed and approved, that doesn't allow AI when contributing to Servo, in response to noisy PRs. Gregory has been experimenting with this and feels it could be useful for Servo. We agreed to allow an initial experiment; he used Copilot. He proposes a new policy for how to use Copilot in Servo.

gregory: I've been experimenting. Used it in one PR. Asked for permission to use the commits in the PR so that they could be merged into Servo. I think I've described why I found it useful, though only in the TSC chat. I found using it useful once I had set up the basic structure of the code: I implement the spec and document the code by pasting prose from the spec, then writing code that implements it. Copilot is aware of the spec and suggests an implementation. It works well in the sense that it saves you looking up which method you want to call, and it prevents typos and wrong naming conventions. Before Copilot I would get a compilation error and have to look it up and fix the name; Copilot knows this from the codebase. It helps with implementing web APIs and gives an incentive to document things properly. The nice thing about Copilot is that it only works well if you do that: it only works if the code is good. As a result of the experiment, I'm now proposing that we change the policy for implementing web APIs. I've written a protocol for how to do it; 90% is how to implement a web API and the rest is how to use Copilot.
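
As a small self-contained illustration of that workflow (spec prose pasted as comments, then code implementing each step): the algorithm below is paraphrased from the WHATWG Infra standard, and the function is a stand-in rather than Servo's actual code.

```rust
// Hypothetical illustration of the workflow described above: paste the spec
// prose as comments, then implement it. The function name and signature are
// stand-ins, not Servo's actual code.

/// https://infra.spec.whatwg.org/#strip-leading-and-trailing-ascii-whitespace
fn strip_leading_and_trailing_ascii_whitespace(input: &str) -> &str {
    // Spec (paraphrased): to strip leading and trailing ASCII whitespace from
    // input, remove all ASCII whitespace at the start or the end of input.
    // ASCII whitespace is U+0009 TAB, U+000A LF, U+000C FF, U+000D CR, or U+0020 SPACE.
    input.trim_matches(|c: char| matches!(c, '\t' | '\n' | '\u{0C}' | '\r' | ' '))
}

fn main() {
    assert_eq!(strip_leading_and_trailing_ascii_whitespace(" \t hello \n"), "hello");
    println!("ok");
}
```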

gregory: GitHub has a feature where you can block suggestions that match public code.

martin: Thanks for the info. Question: does it only work well if you paste the spec as comments? What does it look like if you don't? Is [the code] okay? Really broken? Could a reviewer tell?

gregory: GitHub Copilot gives a suggestion and you can accept it by pressing tab. I tried just pressing tab repeatedly, and it would keep giving suggestions and generate 20 lines of complete nonsense. If you use it properly, you use it one line at a time. I've never used ChatGPT; that's not how Copilot works, it gives you one or a few lines at a time. If you don't use it in a disciplined way, you just get a bunch of nonsense that would never compile or looks really strange. It wouldn't look plausible, the way it would if you asked an LLM to generate a bunch of code.

martin: How do we know whether we are getting changes where people pressed tab 20 times? It is nonsense, but it doesn't look like it.

gregory: What I mean by nonsense is that it isn't code at all. If I copy-paste a line from the spec and then ask for another line, it gives comments, and sometimes it gives you a whole series of lines of comments. It doesn't look like plausible code.

nico: I wanted to raise a possibility. We could have people declare that they've used AI as part of the pull request, and the reviewer could see that and use it as part of their judgement. Someone could just ignore that directive; we have no real way of knowing, but that's the same as with the current directive.

gregory: It could be a good compromise. In my experience, when I review a PR I always have the spec open next to the PR. I look at whether it follows the spec or not. Sometimes it's that they didn't understand what the spec meant, but if it were complete nonsense you'd see very quickly that it doesn't follow the spec.

gregory: Personally I'm not concerned about it producing plausible but wrong code. I'm excited about how it could produce better code generally.

rego: One big concern shared in the TSC was the copyright issues. I was asking the Linux Foundation about that; they only have a general policy, generic and abstract. I see that in this protocol you are suggesting it only for Copilot and only for part of the code, so the copyright issue is maybe smaller. It's a bit weird that the protocol is only for a particular tool and not a generic thing, but I understand that this is only the first step and we're only experimenting.

sid: I wanted to say, as an interested party, that we are having the same discussions in the Rust community. There are several layers to this discussion, as someone pointed out, from tagging the code to having bots come in to take over parts of the repository. In that community they are working on a policy and maybe we could collaborate. We should not be [inaudible].

rego: Looks like a good idea. Maybe we can work together. asks for link

sid: https://rust-lang.zulipchat.com/#narrow/channel/392734-council/topic/AI.20policy

delan: First of all, I want to thank gregory for bringing the proposal forward and putting a whole bunch of effort into trying it out and documenting how it went.

delan: I found the experiment helpful; it helped me understand the details of Copilot better. Personally, there are still concerns I have about correctness, because of how these tools work fundamentally: they're designed and optimized for producing text that looks plausible, which isn't the same thing as being correct. I still have concerns about moving from being an author to being a verifier. I still have the legal and ethical concerns from the original policy; I don't think they've been addressed. The details aren't important as time is limited. The way I see it is that there are these risks and concerns that I still have, and on the other hand we have these potentially useful tools. All of the work that can be done with AI assistance can be, and has been, done without these tools. It seems to me that the safest option is to continue with the status quo. My personal position, not speaking for my colleagues, is to continue with the policy. We should strengthen it by adding a checkbox for contributors to declare they've followed the policy (you'd either have to not know about it or lie), hoping this will make it more reliable. I also think we should add docs to the book on how to configure your tools to follow the policy. If we keep the current policy, we'd have docs showing how to keep these tools disabled.

gregory: My perspective is different. Even though you can say we've been doing a good job on Servo, I think there is a lot of potential ... 10-20 percent of the PR was done with Copilot. You don't have to keep looking things up, such as spelling. These are the kinds of mistakes you make and keep having to look up; Copilot takes away those pain points.

gregory: On Zulip I pasted a link to a paper that followed the use of Copilot on an open-source project, and they saw that people were able to write more code and explain things to each other. It reduces some of that friction. They thought it was a large benefit for people who are less experienced with the project; it makes it easier for people to contribute. We are talking about a large potential benefit. We are doing a good job of documenting contribution, so I don't think we should set aside this potential.

gregory: I think we should address risk and concerns in a practical way, but try to harness potential.

martin: The research paper was good, but we should bring in research from people who are not selling the product.

nico: Recommend people read the link from the Rust community.

sid: Regarding AI, one last thing: we're not alone in this. AI has taken oxygen and money from lots of organizations, non-profit and for-profit. It's not lost on me, as someone trying to keep an open-source project funded. I don't want to shut the door on AI; shutting the door might hurt funding. Please keep the door open to AI.

Governance

rego: Charter update: #118

rego: After the discussion we are working on updating the charter. Planning to meet with LF Europe this week.

Outreach

rego: I'm not aware of any upcoming events. Martin gave a talk at the Barcelona Free Software Meetup in January. Not sure if people will be at Rust Week. In June we have the Web Engines Hackfest, an opportunity to work together.

rego: Martin's talk at Barcelona Free Software meetup: https://www.youtube.com/watch?v=s0MIHKv45C0

rego: Web Engines Hackfest: https://blogs.igalia.com/mrego/announcing-the-web-engines-hackfest-2025/

sid: I don't know if my colleagues brought it up, but we are thinking of having a browser summit at Rust Week, on the Saturday unconference day. It would be nice for people who are coming to Rust Week to attend that meeting.

nico: Who is planning to go to Rust Week other than myself?

[In chat]: Jonathan, Taym, Mats, Martin (maybe)

AOB

rego: Update on donations: money we have received and how we have spent it.

rego: https://opencollective.com/servo/updates/servo-collective-2024-in-review1