Workspace (default) lock creation for no apparent reason #2200
Any idea when a new release might land with this fix in it?
As a side note, I saw this in my environment when I set:
in my
I'm seeing this happening on
Edit: I just realised I'm getting a different error message, so probably not related.
I think it might still be related to #2131. I just haven't opened a new issue about it yet. @pauloconnor do you want to open an issue for it or should I?
Go for it
Is this still happening with
Any news? I'm using version 0.22.2 and it has the same bug.
I haven't seen this in a while. @adrianocanofre are you using Atlantis as a GitHub App? How are the webhooks set up? Can you check your Atlantis logs to see if it gets a double webhook call?
Not sure if this is exactly the same issue, but if I do two comments at the same time with `atlantis plan -d aws/thing` and `atlantis plan -d aws/other`, the *first* plan fails with the following error message: `Wait until the previous command is complete and try again.`
They're separate projects with separate state so I was hoping to run both in parallel
And do you have parallel plan enabled? What version of Atlantis?

> Not sure if this is exactly the same issue, but if I do two comments at the same time with `atlantis plan -d aws/thing` and `atlantis plan -d aws/other` the *first* plan fails with the following error message: `Wait until the previous command is complete and try again.` They're separate projects with separate state so I was hoping to run both in parallel
It may be better to enable
`parallel_plan` is off (or at least not set) and it's happening on v0.22.2
Is it the same in 0.21.0?

> parallel_plan is off (or at least not set) and it's happening on v0.22.2
Yes, before upgrading we saw it happen on
So it sounds like it's happening with or without `parallel_plan`.
It sounds like this is a regression. Would anyone be able to keep testing versions below 0.19.2 in order to find when this feature last worked? This would allow us to better pinpoint where this breaking change was introduced. Any other details would help too. OP shows us the config, but there are no debug logs included. We're always looking for maintainers to resolve issues and add tests. Please consider contributing if you want this fixed. 🙏
I use v0.22.2, and it's happening.
@nitrocode
Nevermind, my problem was the same as #1880: webhooks were set up in both the app and the repo.
@davidsielert glad you figured it out. Thank you for closing the loop on that. @yohannafrans there is no update here unfortunately on this issue. The last request was to try to reproduce this issue with older versions to see if/when this regression was introduced. If you're willing to propose a PR, we'd be happy to review it.
I rolled all the way back to 0.19.0 and the lock creation issue went away. Unfortunately it seems to be missing a lot of features that I would want, such as `repo_config_file` (introduced in v0.22.0, it seems...), as I have infra code in a big monorepo and I'd rather not have atlantis.yaml stuck at the root... Any ideas what's happening?
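For reference, if the server-side `repo_config_file` setting is what's meant here, a sketch of pointing Atlantis at a non-root repo config might look like this (the repo id and path are illustrative, not taken from this thread):

```yaml
# repos.yaml (server-side repo config) -- illustrative sketch
repos:
  - id: github.com/myorg/monorepo
    # Tell Atlantis to read the repo-level config from a subdirectory
    # instead of atlantis.yaml at the repo root.
    repo_config_file: infra/atlantis.yaml
```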
That is interesting. Now the big challenge is to find which change between all those releases caused the issue. @krrrr38 do you have any ideas about this? I think at some point you mentioned this issue.
I'm not familiar with this issue. It may be caused by a duplicated gh-app and a manually configured webhook. @victor-chan-groundswell What kind of integration do you use: gh-app, gh-user, gitlab-user, and so on? Could you share your webhook event like this?
I am using gh-user... Looking at the logs on 0.19.0, I do not see that specific log on the event, but I do see logs of it executing what it's supposed to do... Logs have been modified to remove sensitive info...
Just for fun and games, I tried using 0.22.0 just to see what happens... Atlantis errored out by... well... not doing anything... and eventually dropped this error message:

```json
{"level":"error","ts":"2023-04-18T05:53:22.926Z","caller":"logging/simple_logger.go:163","msg":"invalid key: e2e64c02-ae7b-4f2b-8bdb-ff890f611bd5","json":{},"stacktrace":"github.com/runatlantis/atlantis/server/logging.(*StructuredLogger).Log\n\tgithub.com/runatlantis/atlantis/server/logging/simple_logger.go:163\ngithub.com/runatlantis/atlantis/server/controllers.(*JobsController).respond\n\tgithub.com/runatlantis/atlantis/server/controllers/jobs_controller.go:92\ngithub.com/runatlantis/atlantis/server/controllers.(*JobsController).getProjectJobsWS\n\tgithub.com/runatlantis/atlantis/server/controllers/jobs_controller.go:70\ngithub.com/runatlantis/atlantis/server/controllers.(*JobsController).GetProjectJobsWS\n\tgithub.com/runatlantis/atlantis/server/controllers/jobs_controller.go:83\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2109\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\tgithub.com/gorilla/[email protected]/mux.go:210\ngithub.com/urfave/negroni/v3.Wrap.func1\n\tgithub.com/urfave/negroni/[email protected]/negroni.go:59\ngithub.com/urfave/negroni/v3.HandlerFunc.ServeHTTP\n\tgithub.com/urfave/negroni/[email protected]/negroni.go:33\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/[email protected]/negroni.go:51\ngithub.com/runatlantis/atlantis/server.(*RequestLogger).ServeHTTP\n\tgithub.com/runatlantis/atlantis/server/middleware.go:70\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/[email protected]/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Recovery).ServeHTTP\n\tgithub.com/urfave/negroni/[email protected]/recovery.go:210\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/[email protected]/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Negroni).ServeHTTP\n\tgithub.com/urfave/negroni/[email protected]/negroni.go:111\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2947\nnet/http.(*conn).serve\n\tnet/http/server.go:1991"}
```
I believe this is related to #2253, which reverted #2180. #2180 was a fix for #2131, which is not reverted. So I'm going to make the initial assumption it is related. I'll be digging into this in more depth as this is probably our biggest regression and I have a good test case (Autodesk runs Terragrunt in a mono-repo).
A good example of this is the change in #2131 implementing path in the locker.
When you have multiple projects trigger in the same PR, they hit the same path in the locker.
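To make the collision concrete, here is a minimal sketch (not the actual Atlantis code; the type names and key format are illustrative) of a working-dir locker whose key includes the project path. If the key omitted the path, every project in the same PR on the `default` workspace would share one key, and the second caller would fail exactly as reported in this thread.

```go
package main

import (
	"fmt"
	"sync"
)

// Locker is a toy in-memory lock table, keyed per repo/pull/workspace/path.
type Locker struct {
	mu    sync.Mutex
	locks map[string]bool
}

func NewLocker() *Locker { return &Locker{locks: map[string]bool{}} }

// TryLock returns false if the key is already held by another command.
func (l *Locker) TryLock(repo string, pull int, workspace, path string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	// Including path in the key keeps projects independent; without it,
	// every project in the PR would collide on repo/pull/workspace alone.
	key := fmt.Sprintf("%s/%d/%s/%s", repo, pull, workspace, path)
	if l.locks[key] {
		return false
	}
	l.locks[key] = true
	return true
}

func main() {
	l := NewLocker()
	fmt.Println(l.TryLock("org/repo", 1, "default", "aws/thing")) // true
	fmt.Println(l.TryLock("org/repo", 1, "default", "aws/other")) // true: distinct paths lock independently
	fmt.Println(l.TryLock("org/repo", 1, "default", "aws/thing")) // false: key already held
}
```

With the path in the key, the two projects plan independently; drop the path from the key and the second `TryLock` call above returns false instead.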
So, after I started working more on Atlantis, I realized that it might be an error on my end... Here's what I had to do: I reworked my yaml file to ensure that each project uses a different workspace and name. I am one of the folks who use this to generate the Atlantis file... I just used the default option, which does not generate distinct workspace names for each project; generating a distinct project id is not the default option... Not blaming them but rather myself for not knowing Atlantis better... live and learn...
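For anyone hitting the same thing, a minimal sketch of an `atlantis.yaml` with distinct project names and workspaces might look like this (directory and workspace names are illustrative, not taken from this thread):

```yaml
# atlantis.yaml (repo root) -- illustrative sketch
version: 3
projects:
  - name: aws-thing          # distinct name per project
    dir: aws/thing
    workspace: aws-thing     # distinct workspace avoids every project colliding on "default"
  - name: aws-other
    dir: aws/other
    workspace: aws-other
```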
I do not believe it is your fault @victor-chan-groundswell. There was a PR merged long ago that broke the locks logic and introduced a regression affecting this use case. We are analyzing the whole locking code to make sure the fix is what we expect.
Well, I spoke too soon... after messing around with it more, even with distinct project names and distinct workspace names for each project, I'm running into the same issue again...
Hello, any news on this issue? I'm affected by this as well and the only fix is to downgrade to
@carlitos081 I built my own Docker container (I had to do it anyway since I use a custom workflow with terragrunt and I have to have that in the container as well) and overwrote the default container's version of terraform. That should at least work around that blocker.
@victor-chan-groundswell can you share your Dockerfile? Did you install terraform and alter the symlink in the container? This is mine; I only install terragrunt:
Thanks
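As a hypothetical illustration (the base-image tag, terragrunt version, and download URL below are assumptions, not taken from this thread), a custom Atlantis image adding terragrunt might look like:

```dockerfile
# Hypothetical sketch: extend the official Atlantis image and add terragrunt.
# The tag and terragrunt version are illustrative.
FROM ghcr.io/runatlantis/atlantis:v0.22.2
USER root
RUN curl -fsSL -o /usr/local/bin/terragrunt \
      https://github.com/gruntwork-io/terragrunt/releases/download/v0.45.0/terragrunt_linux_amd64 \
    && chmod +x /usr/local/bin/terragrunt
USER atlantis
```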
I think I have a slight hint (not sure at all) that this happens when the
I configured terragrunt-atlantis-config, which runs a
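For context, terragrunt-atlantis-config is typically wired in via a pre-workflow hook in the server-side repo config; a sketch of that setup might look like the following (the exact flags are illustrative assumptions):

```yaml
# repos.yaml (server-side repo config) -- illustrative sketch
repos:
  - id: /.*/
    pre_workflow_hooks:
      # Regenerate the repo-level atlantis.yaml from terragrunt modules
      # before Atlantis builds its project commands.
      - run: terragrunt-atlantis-config generate --output atlantis.yaml --autoplan
```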
I was able to confirm from the Atlantis log that this is caused by the pre-workflow hook. This is reproducible by making commits and pushing them sequentially to the open PR. Atlantis log (sensitive information replaced): {
"level": "error",
"ts": "2023-05-10T09:42:08.454Z",
"caller": "events/command_runner.go:169",
"msg": "Error running pre-workflow hooks The default workspace at path . is currently locked by another command that is running for this pull request.\nWait until the previous command is complete and try again.. Proceeding with plan command.",
"json": {
"repo": "Org/repo",
"pull": "1"
},
"stacktrace": "github.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunAutoplanCommand\n\t/home/runner/work/atlantis/atlantis/server/events/command_runner.go:169"
}
{
"level": "warn",
"ts": "2023-05-10T09:42:08.727Z",
"caller": "events/project_command_builder.go:323",
"msg": "workspace was locked",
"json": {
"repo": "Org/repo",
"pull": "1"
},
"stacktrace": "github.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).buildAllCommandsByCfg\n\t/home/runner/work/atlantis/atlantis/server/events/project_command_builder.go:323\ngithub.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).BuildAutoplanCommands\n\t/home/runner/work/atlantis/atlantis/server/events/project_command_builder.go:215\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildAutoplanCommands.func1\n\t/home/runner/work/atlantis/atlantis/server/events/instrumented_project_command_builder.go:29\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats\n\t/home/runner/work/atlantis/atlantis/server/events/instrumented_project_command_builder.go:71\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildAutoplanCommands\n\t/home/runner/work/atlantis/atlantis/server/events/instrumented_project_command_builder.go:26\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).runAutoplan\n\t/home/runner/work/atlantis/atlantis/server/events/plan_command_runner.go:85\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).Run\n\t/home/runner/work/atlantis/atlantis/server/events/plan_command_runner.go:288\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunAutoplanCommand\n\t/home/runner/work/atlantis/atlantis/server/events/command_runner.go:174"
}
{
"level": "error",
"ts": "2023-05-10T09:42:08.727Z",
"caller": "events/instrumented_project_command_builder.go:75",
"msg": "Error building auto plan commands: The default workspace at path . is currently locked by another command that is running for this pull request.\nWait until the previous command is complete and try again.",
"json": {},
"stacktrace": "github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats\n\t/home/runner/work/atlantis/atlantis/server/events/instrumented_project_command_builder.go:75\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildAutoplanCommands\n\t/home/runner/work/atlantis/atlantis/server/events/instrumented_project_command_builder.go:26\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).runAutoplan\n\t/home/runner/work/atlantis/atlantis/server/events/plan_command_runner.go:85\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).Run\n\t/home/runner/work/atlantis/atlantis/server/events/plan_command_runner.go:288\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunAutoplanCommand\n\t/home/runner/work/atlantis/atlantis/server/events/command_runner.go:174"
}
{
"level": "error",
"ts": "2023-05-10T09:42:09.338Z",
"caller": "events/pull_updater.go:17",
"msg": "The default workspace at path . is currently locked by another command that is running for this pull request.\nWait until the previous command is complete and try again.",
"json": {
"repo": "Org/repo",
"pull": "1"
},
"stacktrace": "github.com/runatlantis/atlantis/server/events.(*PullUpdater).updatePull\n\t/home/runner/work/atlantis/atlantis/server/events/pull_updater.go:17\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).runAutoplan\n\t/home/runner/work/atlantis/atlantis/server/events/plan_command_runner.go:90\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).Run\n\t/home/runner/work/atlantis/atlantis/server/events/plan_command_runner.go:288\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunAutoplanCommand\n\t/home/runner/work/atlantis/atlantis/server/events/command_runner.go:174"
}
EDIT: Since we don't use auto plan, I was able to fix this behaviour by setting
Thanks for confirming. Please try versions prior to 0.19.2 to see where this regression began. It will make it easier to identify a fix for this. |
I'm afraid I can't test older versions on this environment, this is being used by multiple teams on a daily basis. |
As stated in the new ADR to address locks, it's due to conflicting
I'm working to engage the community on the best solution going forward in regards to how and why we need to lock. You can read more in #3345
Community Note
Overview of the Issue
I have set up Atlantis and configured multiple projects. I am not using workspaces (therefore, for each project only the `default` workspace should be applicable). However, when creating a GitHub Pull Request that includes changes to multiple projects, I get the following error(s).
This is despite the fact that the docs state:
Reproduction Steps
workspaces
Logs
Environment details
If not already included, please provide the following:
- Atlantis server-side config file:
- Repo `atlantis.yaml` file:
- Any other information you can provide about the environment/deployment.
Additional Context