new builds are not picking up new commits #157
It looks like the build has been broken since last October :-/

Following the README, I was able to complete a build in a local docker container. My built image reproduced the bug above--it did not have any of the latest commits, just like the latest images built by Travis and deployed to our build server. As far as I can tell, the most recent commit in any published firmware build is 619e263.

The problem appears to be that the sudowrt firmware image files in the sudowrt-firmware docker image are not getting cleaned out during the build step. There is probably an OpenWrt clean command we can run to properly clean out the build directories or force the files to be overwritten.
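For reference, these are the stock OpenWrt buildroot clean targets; exactly where they would slot into our build script (and the path of the openwrt checkout inside the container) is an assumption on my part, but running one of them should force the image files to be regenerated:

```sh
# Standard OpenWrt buildroot clean targets (run from the openwrt/ checkout;
# the location inside our docker image is a guess).
cd openwrt

make clean      # removes bin/ and build_dir/ -- compiled packages and images
make dirclean   # also removes staging_dir/ and the built toolchain
# make distclean  # wipes everything, including .config, dl/ and feeds -- probably overkill here
```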
Oh. Maybe the bug is simply that we are running
I verified that

I imagine there is a way to write a build-lite script that just copies the files into the image instead of rebuilding a bunch of tooling that is already built, but

Next steps are:
If there were a way to indicate that files have not changed, then new compiling/file-generation would not happen and just
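For what it's worth, the OpenWrt buildroot is an ordinary make project, so it already skips anything whose inputs haven't changed; a "lite" rebuild could be as simple as refreshing the overlay and re-running make. The overlay path below is a guess at how our build script stages things, not something I've verified:

```sh
# Hypothetical incremental rebuild inside the builder container.
# Because make is timestamp-driven, the toolchain and already-built packages
# are skipped; only steps whose inputs changed get redone.
cd openwrt
cp -r ../files/* files/        # assumed location of our rootfs overlay
make -j"$(nproc)"              # re-runs only the out-of-date steps
```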
@bennlich you are on the right track. This appears to be caused by a mistake on my part.
However, older, presumably working commits had
One problem,

Sorry about that confusion; it seems like I just got mixed up about the build process somewhere in all my commits.
I'm confused about what the issue is that requires bringing
The idea of

What I realized, IIRC, is that the issue with build times wasn't an OpenWrt issue, but an issue with how we were managing our docker containers. That is, if you build your images using the same computer (or docker container) over and over again, the first build takes ~40 minutes, but each build after that only takes a few minutes, depending on the specs of your machine.

What we previously suggested was spinning up a new docker container every time you wanted to build the firmware. Instead, I suggested that we provide an "already baked" docker image in which someone (i.e. @paidforby) had already run a build once.

That being said, I think
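To make the "already baked" idea concrete, the workflow would look roughly like this; the image name, tag, and build invocation are placeholders, not the actual published image:

```sh
# Hypothetical pre-baked-builder workflow. "sudomesh/sudowrt-builder" is a
# placeholder name; the point is that the initial ~40min build already
# happened inside this image before it was published.
docker pull sudomesh/sudowrt-builder:latest

# Re-running the build inside that image only redoes what changed since the
# baked-in build, so it finishes in minutes rather than ~40.
docker run --rm sudomesh/sudowrt-builder:latest ./build ar71xx
```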
So I've been messing around with this a little. It's all starting to come back.

The reason you should not run

I also discovered an error in the docker image itself. For some reason, I had only run build_pre in the image, rather than the full build. I fixed this in a new docker image here. This image also fixes some outdated packages that were baked into it.

You can see that I slightly improved build times with my last few commits: https://travis-ci.org/sudomesh/sudowrt-firmware/builds

Also, someone should test the latest builds (3934f7 is the most recent) at http://builds.sudomesh.org/sudowrt-firmware/latest/2019-12-04/ar71xx/ to see if they have pulled in the changes and effectively resolved this issue (I would check myself, but I don't have an N600 on hand).
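If someone with an N600 wants to try it, the smoke test would look something like the following; the sysupgrade filename is only inferred from the factory-image naming pattern elsewhere in this thread, and <node-ip> is whatever address your node is reachable at:

```sh
# Hypothetical smoke test on an N600 that is already running sudowrt.
# The filename below is a guess based on our naming convention, not verified.
wget http://builds.sudomesh.org/sudowrt-firmware/latest/2019-12-04/ar71xx/openwrt-ar71xx-generic-mynet-n600-squashfs-sysupgrade-3934f7.bin

scp openwrt-ar71xx-generic-mynet-n600-squashfs-sysupgrade-3934f7.bin root@<node-ip>:/tmp/
ssh root@<node-ip> "sysupgrade -n /tmp/openwrt-ar71xx-generic-mynet-n600-squashfs-sysupgrade-3934f7.bin"
# -n discards existing config, so the node comes back up with the image defaults
```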
Thank you for reliving this again @paidforby! And for documenting all your thinking--it's super helpful! 🐬 🐬 🐬

So, to recap, if I understand correctly:
Follow-ups:

A) Do you think there's any more work to be done to speed up the builds?

Z) I'm now wondering what is the difference between
Good recap @bennlich. To address your questions:

A) I'm sure there are tricks to further optimize OpenWrt builds, though I'm pretty sure the limitations on TravisCI build times are mostly bandwidth and computing power. Downloading the 7.7GB, pre-baked OpenWrt builder image takes at least 7 minutes, while the build itself takes 13 minutes. However, if I download the same image and execute the same entrypoint on my laptop, it only takes 1 minute and 11 seconds to build. As long as TravisCI builds complete before timing out, it doesn't matter how long they take, and I think 1-minute local builds are more than adequate. (The Travis flow is sketched at the end of this comment.)

Z) I think you, @bennlich, have a grasp of how the build process works, but it definitely can be a little confusing since there have been so many cooks in the kitchen, so to speak. For posterity's sake, I'll give a quick history of our build script development and a breakdown of the current files and the thought process behind it all (maybe this is something that should be in the README?).

TL;DR,
Long version (skip to the end for a description of files)

I'm not 100% sure how it started, but I believe it began with two bash scripts. One called

In January 2017, we introduced Docker into the equation. Using a Dockerfile, you no longer needed to install a bunch of dependencies on your personal laptop. The Dockerfile would fetch a plain version of Ubuntu 14.04 and then install all the necessary dependencies. This also created the need for

Then, in November 2017, we introduced a little bash script called

Also, around this time, TravisCI was finally correctly set up to run passing builds (it had existed in the repo since 2015, but was never working?). In December 2017, Travis only built the docker image, i.e. executed the Dockerfile (which ran

Finally, in August 2018, I reapproached some old issues like #116 and #111 with the idea of making Travis more useful. This led us to our current point, where Travis actually runs the whole build by first downloading a completed build image and re-running the build, instead of only running

To recap by listing files by approximate creation date, with a description of their current use,
Hopefully, this history and information is of use to someone. I just came to the realization that I may be the only person involved who remembers/knows any of this, so I figured I would dump my brain here.
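To make the A) point above concrete, the current Travis flow boils down to roughly the following shell steps; the image name and build invocation are placeholders rather than the exact contents of our .travis.yml:

```sh
# Rough sketch of what the Travis job does (names are placeholders).
# 1. Pull the pre-baked builder image (the ~7.7GB download, at least 7 minutes).
docker pull sudomesh/sudowrt-builder:latest

# 2. Re-run the build inside it; only out-of-date steps are redone
#    (~13 minutes on Travis, ~1 minute on a decent laptop).
docker run --rm sudomesh/sudowrt-builder:latest ./build ar71xx

# 3. Copy the finished images out and deploy them to builds.sudomesh.org.
```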
First clue: retrieve_ip was still deleting itself even though eenblam pushed a commit months ago to leave it in place. Just saw this on a node flashed with https://builds.sudomesh.org/sudowrt-firmware/latest/2019-04-28/ar71xx/openwrt-ar71xx-generic-mynet-n600-squashfs-factory-ede0ea.bin
I currently don't understand how this is possible, unless the image we're flashing somehow does not have @eenblam 's changes from 0e6d123.
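One way to sanity-check this without flashing hardware: the image filename ends in a short commit hash, which I'm reading as the sudowrt-firmware commit the image was built from, so git can tell us whether that commit contains @eenblam's fix:

```sh
# Ask git whether the commit baked into the image name (ede0ea) contains
# the retrieve_ip fix (0e6d123). Exit status 0 means the fix is included.
git merge-base --is-ancestor 0e6d123 ede0ea \
  && echo "fix included" \
  || echo "fix missing"
```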