Setting up Azure pipelines #1
Checking in, that all sounds good. 👍
@DeeDeeG Could you set up the CI? I saw you had Azure Pipelines running on your personal account. How did you do this?
Someone at upstream was kind enough to put my branches up at upstream and run CI on them themselves. I'll look into whether there is a free tier of Azure Pipelines, though, so we can set it up here without paying for some sort of subscription. Edit: This looks like the place to do it, "Start free with GitHub": https://azure.microsoft.com/en-us/services/devops/pipelines/ It kind of makes sense now that Microsoft owns GitHub...
There is a free tier, but how do we get the same configuration that upstream is using? Is there a duplicate button or something, or is there a config file we need to use somewhere? I could not find anything in the repository itself.
Ah yeah, the config is in a pretty obscure place. I only found it because it mentions Python, and I did some PRs about being ready for Python 3. https://github.com/atom/atom/tree/master/script/vsts
According to upstream's nice […]
Particularly here: https://github.com/atom/atom/tree/master/script/vsts/platforms
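For orientation, those per-platform files can be pulled into a pipeline as job templates. The snippet below is only a rough sketch of that layout, not upstream's actual entry-point file; the template paths are just the files in the directory linked above, and the trigger is an assumption.

```yaml
# Hypothetical top-level pipeline (not upstream's real file) that stitches the
# per-platform templates together as jobs.
trigger:
- master

jobs:
- template: script/vsts/platforms/linux.yml
- template: script/vsts/platforms/windows.yml
- template: script/vsts/platforms/macos.yml
```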
Editing this as I go to be as complete as I can manage: […]
I have created the repository: https://dev.azure.com/atomcommunity/atomcommunity
I'm curious what happens if you add a pipeline from the side nav bar area, after authenticating to GitHub with OAuth so it can officially link up with and access this repo. (I'm trying to write out the instructions in my above comment, but that's where I get stuck.)
I'm trying it on my own personal fork now to see how far I can get and update my instructions/steps above.
I added pull requests and release builds for now: https://dev.azure.com/atomcommunity/atomcommunity/_build.
I'm a bit confused about what this means, but there are "Missing tasks" required to run the CI, and supposedly these are installable via https://marketplace.visualstudio.com/, according to the error message.
Disregard, I'll update if I figure something new out.
I created the "starter pipeline" with no errors.

Full starter pipeline YAML (click to expand):

```yaml
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'
```

I can't tell if it's stalled waiting on an Azure Pipelines worker/VM, or whether it never finishes because there are no actual steps in the script... https://dev.azure.com/DeeDeeG/b/_build/results?buildId=3&view=logs&j=12f1170f-54f2-53f3-20dd-22fc7dff55f9

Now to read the docs and see if I can make one that actually does something. I'm working toward eventually running the Linux/Windows/macOS tests from upstream Atom.
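Not upstream's config, just a minimal sketch of how one pipeline can fan the same steps out to Linux, Windows, and macOS agents. The script names (script/bootstrap, script/test) come from upstream Atom, but treat them and everything else here as assumptions.

```yaml
# Minimal cross-platform sketch (assumed script names), not the real Atom pipeline.
strategy:
  matrix:
    Linux:
      imageName: 'ubuntu-latest'
    macOS:
      imageName: 'macOS-latest'
    Windows:
      imageName: 'windows-latest'

pool:
  vmImage: $(imageName)

steps:
- script: node script/bootstrap
  displayName: 'Bootstrap (install dependencies)'
- script: node script/test
  displayName: 'Run tests'
```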
Did you see #3? I need to look into the error message.
I think this needs to be installed to the Azure DevOps org: https://marketplace.visualstudio.com/items?itemName=1ESLighthouseEng.PipelineArtifactCaching. It's referenced in all three platforms' pipeline (CI) steps.
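For reference, once that extension is installed, its tasks get referenced in the pipeline steps roughly like the sketch below. The key file, target folder, and feed variable are placeholders, not the values from upstream's templates.

```yaml
# Rough sketch of the caching extension's tasks; all inputs are placeholders.
- task: RestoreCache@1
  displayName: 'Restore cached node_modules'
  inputs:
    keyfile: 'package.json'        # file(s) whose contents form the cache key
    targetfolder: 'node_modules'   # folder to restore into / save from
    vstsFeed: '$(CACHE_FEED_ID)'   # Azure Artifacts feed ID (placeholder)

# ... bootstrap steps run here ...

- task: SaveCache@1
  displayName: 'Save node_modules cache'
  inputs:
    keyfile: 'package.json'
    targetfolder: 'node_modules'
    vstsFeed: '$(CACHE_FEED_ID)'
```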
Now I'm getting […]
I started the "Pull Requests" pipeline. Running CI on my personal fork: https://dev.azure.com/DeeDeeG/b/_build/results?buildId=7&view=results
Now I'm getting an error building @atom/watcher. Good news: CI is basically up and running for me on my fork. I'm not sure why that package doesn't build at my fork at the moment, but it's still progress, I suppose.

Full error (click to expand):

```text
watcher.target.mk:145: recipe for target 'Release/obj.target/watcher/src/binding.o' failed
make: *** [Release/obj.target/watcher/src/binding.o] Error 1
make: Leaving directory '/home/vsts/work/1/s/node_modules/@atom/watcher/build'
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (events.js:315:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:275:12)
gyp ERR! System Linux 4.15.0-1089-azure
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/vsts/work/1/s/node_modules/@atom/watcher
gyp ERR! node -v v12.18.1
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! @atom/[email protected] install: `prebuild-install || node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the @atom/[email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/vsts/.npm/_logs/2020-07-03T02_37_00_415Z-debug.log
##[error]Bash exited with code '1'.
Finishing: npm install
```

(Sorry, my Azure Pipelines project was set to "private" before; it should be "public" now.)
Using Azure is pretty cool. It allows directly editing the code on its website. I was wondering if we could use the same setup and a Docker image or something to streamline development, since building Atom locally takes time. At the very least, we should upload the bootstrapped repo for people to use; if it is already bootstrapped, they can run it using the atom-dev thing.
I am kind of reluctant to use a lot of platforms and CI stuff, because I like to verify what I'm seeing locally. If you can get what you describe working, though, I don't see why not to try it. 🤷
Uploading the repo after bootstrap allows fast development. Doing it locally is hard or takes time. For example, on Windows, bootstrapping requires an installation of Visual Studio 15, which is not desirable in 2020.
I'm not sure where we could upload that many files? The bootstrapped repo can be multiple gigabytes, and it would be a bit slow to download. (Would it just be a zip/tarball hosted somewhere?) Also, keep in mind that there are some native packages, so at minimum it would be one version of the bootstrapped repo per OS, and maybe more than that if other stuff in the platform changes within a given OS. So I have trouble picturing it, but I am open to seeing it happen.
I set the Pull Request pipeline to run every night on master. I didn't want to trigger the Pull Request pipeline every time we merge into master.
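For the record, the YAML equivalent of that setup looks roughly like the sketch below (the cron time is an arbitrary example, and this may instead have been configured in the web UI):

```yaml
# Nightly schedule sketch; the cron time is arbitrary, not the actual setting.
schedules:
- cron: '0 6 * * *'        # every day at 06:00 UTC
  displayName: 'Nightly run'
  branches:
    include:
    - master
  always: true             # run even if master has not changed

trigger: none              # do not run on every merge/push to master
pr:
  branches:
    include:
    - '*'                  # still run for pull requests
```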
We can optimize the implementation of Cache@2. (Basically, split "bootstrap" and "build" into two jobs; see the sketch below.) This should cause the cache to be saved immediately after bootstrapping. It will then be available even if building fails for whatever reason. This eliminates the main downside of Cache@2. The alternative: revert to the old SaveCache@1 approach.
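A rough sketch of that two-job split, assuming Cache@2 and the upstream-style script names; the key parts, paths, and job layout are illustrative assumptions, not our actual templates.

```yaml
# Sketch only: split bootstrap and build so Cache@2's automatic post-job save
# runs as soon as bootstrapping succeeds, even if the later build job fails.
jobs:
- job: Bootstrap
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - task: Cache@2
    inputs:
      key: 'bootstrap | "$(Agent.OS)" | package.json'   # key parts are placeholders
      path: 'node_modules'
  - script: node script/bootstrap
    displayName: 'Bootstrap'
  # Cache@2 appends a post-job step here that saves the cache when the job succeeds.

- job: Build
  dependsOn: Bootstrap
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - task: Cache@2
    inputs:
      key: 'bootstrap | "$(Agent.OS)" | package.json'
      path: 'node_modules'
  - script: node script/build
    displayName: 'Build'
```

Whether a cache saved by the Bootstrap job is visible to the Build job within the same run is something to verify; handing the bootstrapped tree over as a pipeline artifact would be the safer variant, at the cost of upload/download time.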
Caching a bootstrap that fails building does not make sense; there is a good reason behind this approach. We don't want to cache a faulty configuration. However, using #31 and #46, we can put the tests in a separate job very easily. In #31, tests run separately from the build step, which allows caching to happen once building passes.
I respectfully disagree. There is more than one reason for a build failure, and only one of them is problems with dependencies. The project has its own code, which can introduce errors/CI failures.

Caching the bootstrap saves about 15 minutes per OS/arch on the next run. If the bootstrap process finishes with no errors, it is a valid bootstrap result. It can be re-used for testing code tweaks that are outside of, and a separate concern from, the dependencies. (If the dependencies in the […])

Recall that at the time I'm writing this, nothing from the build beyond the bootstrap process is saved in the cache. Upstream has had this all figured out for some time, and to deviate from it (without providing an enhanced use of the cache of some kind to justify it, e.g. restoring stuff from after the successful and finished build in a reasonable way) would introduce a regression in the availability of our cache without a concrete benefit.

Edit to add: Theoretically, there could be some dependency tweaking in the scripts or VSTS templates that would change the build dependencies but would not cause cache misses. If so, we should add those files to the cache identifier.
This might sound possible in theory, but it does not work in practice. Besides ending up with a cache that might not be able to build Atom, we will waste a lot of time. Imagine we have a suitable bootstrap cache. It takes ~5 min to prepare a system, then we spend ~2-3 min restoring the cache, and then the bootstrap step is skipped. Now we have wasted ~8 min firing up a system and restoring a cache that is not used anywhere. Considering 4 operating systems, this becomes half an hour of waste (4*8 = 32 min). In the build job, we again spend another ~8 min preparing the systems for building. This becomes 1 hour in total! If we instead use the 8 min we have spent so far to actually build Atom, we save 1 hour for each CI run.
You may be right. In terms of Cache@2: if there is a way to save the cache without having to close the agent and spin up a new one, then that's more what I was thinking. I can't think of a way to do that at the moment, but I am keeping a close eye on the documentation I read in case something makes this possible. It is what upstream was doing with SaveCache@1, though. I think SaveCache@1 might be a bit slower to read/write the cache, by a very small amount, but I like that you can determine exactly when the save occurs. I am not going to start a long discussion about that here, because it's not a huge thing, but I have come to prefer the old caching strategy by a bit.

As a minor point to clear up (some build time numbers): I don't think the 4*8 minutes figure is the relevant amount of time, because we do not wait for runs serially; those times are in parallel, so we do not experience the wait that way. Back to parallel times: indeed, 8 minutes * 2 for a total of 16 minutes is still a meaningful amount of time, so adding that time to builds would be bad. I agree this is bad; I just think the numbers are not as extreme as one hour.
Now that #46 is merged, and the node/npm install parts of CI aren't in […], we should probably fold the relevant files into the cache key (see the sketch below). This is for the hypothetical case where we have updated Node, NPM, or some environment variables or config relevant to how the bootstrap and build should proceed.
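If we do that, Cache@2 lets the key be built from several parts, including files, so touching any of them invalidates the bootstrap cache. The extra paths below are made-up examples, not a decided list.

```yaml
# Illustrative only: extra key parts (hypothetical file list) so that changing
# them busts the bootstrap cache.
- task: Cache@2
  inputs:
    key: 'bootstrap | "$(Agent.OS)" | package.json | script/vsts/platforms/linux.yml'
    path: 'node_modules'
```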
CI isn't passing on master.
This is fixed in #63.
Glad that CI is working. We should disable or delete the CI steps that try to publish artifacts to Amazon S3 buckets, since we don't own an Amazon S3 account, and that step has been erroring out at the very end, making our "Release Branch Build" pipeline always look red. Other than that, CI is functionally working (tests passing), so that is great, thank you! Edit: This is a PR now: #66
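One hedged option (not necessarily what #66 does) is to make the upload step conditional on the S3 credentials being present instead of deleting it; the script path and variable name below are assumptions for illustration.

```yaml
# Sketch: skip the S3 upload when no credentials are configured, instead of
# failing the pipeline. Script path and variable name are assumptions.
- script: node script/vsts/upload-artifacts.js
  displayName: 'Upload release artifacts to S3'
  condition: and(succeeded(), ne(variables['ATOM_RELEASES_S3_KEY'], ''))
```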
Doing […]

Interesting discussion here: […]

If possible/if it's okay with you, I'd like to revert #3 (or explicitly set it to use Linux) for the […] pipeline.

Status: Will make a PR once I'm done with the stuff from my comment directly above this one.
That sounds like a PR. We will not revert things on master anymore. It is OK if you want to do it in your PR.
The Linux Build in CI has been failing by exiting out after the two package formats are apparently successfully built. I also had a similar experience outside of CI on my personal machine, where the […]. I think this is flakiness, not a hard "100% of the time" issue, but it's still weird.
Also, just a heads up that there are some more hard-coded URLs pointing to the upstream atom/atom repository. I do think that if we fix the hard-coded URLs like we did in #70, then this build failure scenario should also go away once we're using non-hard-coded URLs pointing to our own repos.
I will close this, as it is mostly done.
We need to set up Azure pipelines to get CI similar to upstream.