Run plan/apply for multiple projects in parallel #260

Open
lkysow opened this issue Sep 7, 2018 · 48 comments
Labels
feature New functionality/enhancement help wanted Good feature for contributors

Comments

@lkysow
Member

lkysow commented Sep 7, 2018

Via @psalaberria002: they would like Atlantis to run plan and apply in parallel when running a command that needs to run on multiple projects. This would make things much faster.

@jolexa
Contributor

jolexa commented Mar 5, 2019

We have some repos that work on >10 AWS accounts (each account is a project in atlantis.yaml). When making a change to all the accounts, Atlantis is painfully slow while it sequentially works through every account. We expect this usage to increase, by the way.

@majormoses
Contributor

I can certainly see the value in this, but I think we need to be careful about how it is introduced. In the case of the AWS provider, rate limits are largely mitigated because most of its API limits are gated at the account level and rate-limit awareness is built in, but this is not true of all (or even most) providers. For example, we use the GitHub provider to manage teams, repos, etc. We have something like 30+ states/projects, and if we ran them all at once it would almost certainly fail.

@lkysow lkysow added the feature New functionality/enhancement label Apr 4, 2019
@mwarkentin
Contributor

Could you make this opt-in? atlantis apply --parallel

I'm dealing with about 50 RDS instances right now, each in their own projects - and need to enable multi-az, change instance type, etc. in different steps. This is incredibly slow waiting for each to happen 1 at a time.

My other alternative would be to make 50 separate PRs to "fake" the parallelization but that's tricky to manage as well.

@majormoses
Contributor

To better understand the use cases where this is important: would it make sense as ad hoc on PRs, as repo config, as server config, or all of the above? I am leaning towards ad hoc on PRs as needed, or repo config.

@mwarkentin
Contributor

I’m not too familiar with the distinction between repo config and server config, but it seems it would make sense to have a way to set the default behavior as well as enable or disable it on an ad hoc basis within a PR.

@darrylb-github

👍 - was just about to ask about this. I have a repo with 4 projects and am currently waiting on Atlantis to finish planning as it runs them sequentially.

@Fauzyy
Contributor

Fauzyy commented Jul 22, 2019

We've been running a simple repo config implementation of parallel plans for a couple weeks in our fork and it's working well: https://github.com/runatlantis/atlantis/compare/v0.8.3..segmentio:v0.8.3-segment0.2.1

Caveat: because Atlantis has internal locks on the workspace name, this only works if all projects in the repo use Terraform Workspaces.

Example config:

version: 3
automerge: true
parallel_plans: true
...

This is for plans only; I feel that making applies parallel could have unintended side effects.

There's also a flag to set the server-side cap on the number of concurrent goroutines that run per request, via --parallel-plans-pool-size=5 (default 10)

Would love to get this or something similar into upstream

@YesYouKenSpace
Contributor

Will this be considered?

@lkysow
Member Author

lkysow commented Jan 20, 2020

Yeah, that looks like a good solution since it avoids the need for a lot of refactoring by only applying when using different workspaces. Segment's branch looks pretty diverged though, so someone would need to make a clean PR.

@psalaberria002
Contributor

Does that mean we won't be able to parallelize it when not using workspaces?

@lkysow
Member Author

lkysow commented Jan 20, 2020

Does that mean we won't be able to parallelize it when not using workspaces?

Yes. But it's a first step. For the non-workspace case, the locking needs to be figured out so we don't modify a file system that has mutations in progress.

@Lowess

Lowess commented Jun 2, 2020

@lkysow quick question, as #926 was recently released in 0.13.0.

I am running Atlantis with a custom Terragrunt integration (really similar to the docs example).

I wanted to leverage the newly introduced parallel feature, which is unfortunately not supported for Terragrunt workflows (yet) due to the workspace limitation.

I am wondering if I could fool Atlantis by setting different workspaces (even though in Terragrunt you don't need them) to make it run plans and applies in parallel for prod/stage configs.

Is this a bad idea, or do you think it could be a valid workaround? Looking at the plan output from Terragrunt it looks good to me, but I was wondering if there is something else I should worry about?

  • Here is my Terragrunt layout:
.
├── modules
└── terragrunt
    ├── account-1
    │   ├── account.hcl
    │   └── virginia
    │       ├── prod
    │       ├── region.hcl
    │       └── stage
    └── terragrunt.hcl
  • Here is a sample from my atlantis.yaml:
---
version: 3

parallel_plan: true
parallel_apply: true
automerge: true

projects:
  # Project Anchor
  - &terragrunt
    name: template
    dir: '.null'
    workflow: terragrunt
    autoplan:
      enabled: true
      when_modified:
        - "./terraform/modules/**/*.tf"
        - "**/*.tf"
        - "**/terragrunt.hcl"


  - <<: *terragrunt
    name: account-1-virginia-prod-app-A
    dir: ./terraform/terragrunt/account-1/virginia/prod/app-A
    workspace: prod

  - <<: *terragrunt
    name: account-1-virginia-stage-app-A
    dir: ./terraform/terragrunt/account-1/virginia/stage/app-A
    workspace: stage

  [....]

Thanks for your feedback!

@lkysow
Member Author

lkysow commented Jun 2, 2020

Hi @Lowess, the reason this only works for workspaces is that Atlantis clones the repo into a separate directory for each workspace, so there is no contention. If you set different workspaces then Atlantis will clone into separate directories, so it should work.
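
For example, a minimal sketch (project names and paths below are placeholders) where two projects share a workflow but pin different workspaces, so each gets its own clone directory:

version: 3
parallel_plan: true
parallel_apply: true
projects:
  - name: app-prod
    dir: terragrunt/account-1/virginia/prod
    workflow: terragrunt
    workspace: prod # not a real Terraform workspace here; it just forces a separate clone
  - name: app-stage
    dir: terragrunt/account-1/virginia/stage
    workflow: terragrunt
    workspace: stage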

@angeloskaltsikis

@Lowess I wonder whether you have tried using Terragrunt with Terraform workspaces, and how that experiment went?
We have multiple Terragrunt files and are looking for ways to make the {plan,apply} commands run in parallel, as it may take Atlantis a couple of hours to run a single command on hundreds of Terragrunt leaves.

@ghostsquad

ghostsquad commented Oct 12, 2020

Having the same problem: I have multiple projects in the same repo, all using different backends, etc. I'm not using any "workspaces" (in the Terraform sense), so everything is in the "default" workspace. Parallel plan/apply doesn't work because Atlantis thinks there's a shared lock in place. The first of 6 projects runs successfully, and the rest report that they cannot run because they are locked.

2020/10/12 22:57:04+0000 [INFO] myorg/terraform-stuff#43: Running plans in parallel
2020/10/12 22:57:04+0000 [INFO] myorg/terraform-stuff#43: Acquired lock with id "myorg/terraform-stuff/deploy/environments/account-1/us-west-2/prod/default"
2020/10/12 22:57:04+0000 [INFO] myorg/terraform-stuff#43: Acquired lock with id "myorg/terraform-stuff/deploy/environments/account-2/us-west-2/nonprod/default"
2020/10/12 22:57:04+0000 [INFO] myorg/terraform-stuff#43: Acquired lock with id "myorg/terraform-stuff/deploy/environments/account-3/us-west-2/prod/default"
2020/10/12 22:57:04+0000 [INFO] myorg/terraform-stuff#43: Acquired lock with id "myorg/terraform-stuff/deploy/environments/account-3/us-west-2/nonprod/default"
2020/10/12 22:57:04+0000 [INFO] myorg/terraform-stuff#43: Acquired lock with id "myorg/terraform-stuff/deploy/environments/account-1/us-west-2/nonprod/default"
2020/10/12 22:57:04+0000 [INFO] myorg/terraform-stuff#43: Acquired lock with id "myorg/terraform-stuff/deploy/environments/account-3/us-west-2/tools/default"
2020/10/12 22:57:06+0000 [INFO] myorg/terraform-stuff#43: Successfully ran "cd \"${PWD%\"${REPO_REL_DIR}\"}\" && jb install\n" in "/atlantis/data/repos/myorg/terraform-stuff/43/default/deploy/environments/account-1/us-west-2/prod"
2020/10/12 22:57:06+0000 [INFO] myorg/terraform-stuff#43: Successfully ran "jsonnet -J \"${PWD%\"${REPO_REL_DIR}\"}/vendor\" terrasonnet.tf.jsonnet > terrasonnet.tf.json\n" in "/atlantis/data/repos/myorg/terraform-stuff/43/default/deploy/environments/account-1/us-west-2/prod"
2020/10/12 22:57:10+0000 [INFO] myorg/terraform-stuff#43: Successfully ran "/atlantis/data/bin/terraform0.13.4 init -input=false -no-color -upgrade" in "/atlantis/data/repos/myorg/terraform-stuff/43/default/deploy/environments/account-1/us-west-2/prod"
2020/10/12 22:57:10+0000 [INFO] myorg/terraform-stuff#43: Successfully ran "/atlantis/data/bin/terraform0.13.4 workspace show" in "/atlantis/data/repos/myorg/terraform-stuff/43/default/deploy/environments/account-1/us-west-2/prod"
2020/10/12 22:57:13+0000 [EROR] myorg/terraform-stuff#43: Running "/atlantis/data/bin/terraform0.13.4 plan -input=false -refresh -no-color -out \"/atlantis/data/repos/myorg/terraform-stuff/43/default/deploy/environments/account-1/us-west-2/prod/account-1-prod-us-west-2-default.tfplan\"" in "/atlantis/data/repos/myorg/terraform-stuff/43/default/deploy/environments/account-1/us-west-2/prod": exit status 1
2020/10/12 22:58:14+0000 [INFO] myorg/terraform-stuff#43: Successfully ran "/atlantis/data/bin/terraform0.13.4 plan -input=false -refresh -no-color" in "/atlantis/data/repos/myorg/terraform-stuff/43/default/deploy/environments/account-1/us-west-2/prod"
2020/10/12 23:00:30+0000 [INFO] server: Parsed comment as command="unlock" verbose=false dir="" workspace="" project="" flags=""

@richstokes
Contributor

+1, it would be awesome if we could plan multiple projects in parallel, assuming they are all on different backends.

@Sebor

Sebor commented Apr 30, 2021

I've faced the same issue. I use Terragrunt without workspaces and I'd like to plan/apply in parallel.
Does anybody know a workaround?

@mwarkentin
Contributor

Finally got a chance to try this out as we added a subset of projects which use Terraform Workspaces. It looks like I missed this caveat though:

Caveat: because Atlantis has internal locks on the workspace name, this only works if all projects in the repo use Terraform Workspaces.

I had assumed that it would allow us to plan the subset in parallel and other projects would continue to function as before; however, when we opened a PR with multiple (standard) projects each using the default workspace, we ended up with "The default workspace is currently locked by another command that is running for this pull request. Wait until the previous command is complete and try again." for all but one of the plans.

Would it be possible to support this use case? Maybe by supporting the parallel_[plan|apply] configuration at the project level instead of at the top level of the Atlantis config?
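
To sketch what I mean (the per-project parallel key below is hypothetical and does not exist in Atlantis today):

version: 3
projects:
  - dir: workspaced-project
    workspace: staging
    parallel: true # hypothetical per-project opt-in
  - dir: standard-project
    workspace: default # would keep today's serial behavior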

@oliverisaac
Contributor

If we enable parallel_plan, it would be good to have a configuration item for how many plans to run in parallel. Even 2x parallelism would be a huge relief for us.

@ImIOImI

ImIOImI commented Aug 25, 2021

A plan for us takes upwards of 30 minutes to execute because we have so many different environments. This isn't conducive to a healthy CI/CD pipeline where we want to fail fast.

@virgofx

virgofx commented Nov 16, 2021

This would be an excellent feature to add. Similar to others, when Atlantis manages multiple separate roots with different backends, the serial nature increases execution time tremendously. Even an opt-in feature, where Terraform roots/projects could add some sort of -parallel option so that it stays 100% backwards compatible, would be a great addition.

@chenrui333 chenrui333 added the help wanted Good feature for contributors label Dec 30, 2021
@mihaiturcu

Any news here? :)

@Dawnflash
Contributor

This would be great; in our use case all projects differ in TF state and can absolutely run in parallel. This could be a project-level opt-out (parallel: false). It would save us lots of time.

@hussainbani

Do we have any news on this? We want this feature.

@voidlily

Did they retract 0.19.3? That's the version I'm running and it works for me

@nitrocode
Member

Isn't this available now via https://www.runatlantis.io/docs/repo-level-atlantis-yaml.html#run-plans-and-applies-in-parallel?

# atlantis.yaml in repo
parallel_plan: true
parallel_apply: true

cc: @voidlily @nwsparks @hussainbani @Dawnflash @jamengual @lkysow

I would prefer enabling this in the server configuration instead of in each Terraform repo, but a repo-level config for this is better than serial plans and serial applies.
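
For completeness, a repo-level file also needs the version key; a minimal sketch (dirs and workspaces are placeholders, and per the caveats in this thread each project needs a distinct workspace for the parallelism to actually take effect):

version: 3
parallel_plan: true
parallel_apply: true
projects:
  - dir: project-1
    workspace: one
  - dir: project-2
    workspace: two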

@jamengual
Contributor

I can't recall now, but there was an issue and a PR (since reverted) to allow a parallel plan per directory in each project, which is an issue for repos with many folders per env, etc.

@jamengual
Contributor

jamengual commented Oct 6, 2022

if someone has cycles:
#2253

The code is there, the issue is there, and the suggested solution is documented... I mean... you can't get a better head start.

@lachlanmulcahy

Isn't this available now via https://www.runatlantis.io/docs/repo-level-atlantis-yaml.html#run-plans-and-applies-in-parallel?

This only works if you've implemented multiple workspaces. Atlantis will attempt to run all your projects in the 'default' workspace in parallel, and most of them will fail because "the default workspace is locked."

@roulettedares
Contributor

Running atlantis:v0.20.1, parallel_plan: true works for me without locking issues, but I get the below error when trying to run atlantis apply -d environments/myorg-preprd/preprd. All environments use the default workspace.

atlantis-0 atlantis {"level":"info","ts":"2022-10-11T20:30:50.743Z","caller":"models/shell_command_runner.go:156","msg":"successfully ran \"/usr/local/bin/terraform0.13.7 plan -input=false -refresh -out \\\"/atlantis-data/repos/myorg/terraform/423/default/environments/myorg-preprd/preprd/default.tfplan\\\"\" in \"/atlantis-data/repos/myorg/terraform/423/default/environments/myorg-preprd/preprd\"","json":{"repo":"myorg/terraform","pull":"423"}}
atlantis-0 atlantis {"level":"info","ts":"2022-10-11T20:30:51.085Z","caller":"events/instrumented_project_command_runner.go:53","msg":"plan success. output available at: https://github.com/myorg/terraform/pull/423","json":{"repo":"myorg/terraform","pull":"423"}}
atlantis-0 atlantis {"level":"info","ts":"2022-10-11T20:31:36.239Z","caller":"events/events_controller.go:533","msg":"parsed comment as command=\"apply\" verbose=false dir=\"\" workspace=\"\" project=\"\" flags=\"\"","json":{"gh-request-id":"X-Github-Delivery=b3539690-49a3-11ed-942d-b98d8f8893e6"}}
atlantis-0 atlantis {"level":"error","ts":"2022-10-11T20:31:38.253Z","caller":"events/command_runner.go:427","msg":"PANIC: runtime error: invalid memory address or nil pointer dereference\nruntime/panic.go:260 (0x44c995)\nruntime/signal_unix.go:835 (0x44c965)\ngithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:329 (0xb8be0b)\ngithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:398 (0xb8c3d7)\ngithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:179 (0xb92dcb)\ngithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:72 (0xb947c4)\ngithub.com/runatlantis/atlantis/server/events/vcs/pull_status_fetcher.go:28 (0xb95045)\ngithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108 (0xd74c15)\ngithub.com/runatlantis/atlantis/server/events/command_runner.go:296 (0xd79823)\nruntime/asm_amd64.s:1594 (0x467ca0)\n","json":{"repo":"myorg/terraform","pull":"423"},"stacktrace":"github.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).logPanics\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:427\nruntime.gopanic\n\truntime/panic.go:890\nruntime.panicmem\n\truntime/panic.go:260\nruntime.sigpanic\n\truntime/signal_unix.go:835\ngithub.com/runatlantis/atlantis/server/events/vcs.(*GithubClient).GetCombinedStatusMinusApply\n\tgithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:329\ngithub.com/runatlantis/atlantis/server/events/vcs.(*GithubClient).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:398\ngithub.com/runatlantis/atlantis/server/events/vcs.(*InstrumentedClient).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:179\ngithub.com/runatlantis/atlantis/server/events/vcs.(*ClientProxy).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:72\ngithub.com/runatlantis/atlantis/server/events/vcs.(*pullReqStatusFetcher).FetchPullStatus\n\tgithub.com/runatlantis/atlantis/server/events/vcs/pull_status_fetcher.go:28\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:296"}
atlantis-0 atlantis {"level":"info","ts":"2022-10-11T20:33:21.288Z","caller":"events/events_controller.go:533","msg":"parsed comment as command=\"apply\" verbose=false dir=\"\" workspace=\"\" project=\"\" flags=\"\"","json":{"gh-request-id":"X-Github-Delivery=f1dc0aa0-49a3-11ed-9c31-8f78a882a809"}}
atlantis-0 atlantis {"level":"error","ts":"2022-10-11T20:33:23.747Z","caller":"events/command_runner.go:427","msg":"PANIC: runtime error: invalid memory address or nil pointer dereference\nruntime/panic.go:260 (0x44c995)\nruntime/signal_unix.go:835 (0x44c965)\ngithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:329 (0xb8be0b)\ngithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:398 (0xb8c3d7)\ngithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:179 (0xb92dcb)\ngithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:72 (0xb947c4)\ngithub.com/runatlantis/atlantis/server/events/vcs/pull_status_fetcher.go:28 (0xb95045)\ngithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108 (0xd74c15)\ngithub.com/runatlantis/atlantis/server/events/command_runner.go:296 (0xd79823)\nruntime/asm_amd64.s:1594 (0x467ca0)\n","json":{"repo":"myorg/terraform","pull":"423"},"stacktrace":"github.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).logPanics\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:427\nruntime.gopanic\n\truntime/panic.go:890\nruntime.panicmem\n\truntime/panic.go:260\nruntime.sigpanic\n\truntime/signal_unix.go:835\ngithub.com/runatlantis/atlantis/server/events/vcs.(*GithubClient).GetCombinedStatusMinusApply\n\tgithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:329\ngithub.com/runatlantis/atlantis/server/events/vcs.(*GithubClient).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:398\ngithub.com/runatlantis/atlantis/server/events/vcs.(*InstrumentedClient).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:179\ngithub.com/runatlantis/atlantis/server/events/vcs.(*ClientProxy).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:72\ngithub.com/runatlantis/atlantis/server/events/vcs.(*pullReqStatusFetcher).FetchPullStatus\n\tgithub.com/runatlantis/atlantis/server/events/vcs/pull_status_fetcher.go:28\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:296"}
atlantis-0 atlantis {"level":"info","ts":"2022-10-11T20:34:33.993Z","caller":"events/events_controller.go:533","msg":"parsed comment as command=\"apply\" verbose=false dir=\"environments/devtools/primary\" workspace=\"\" project=\"\" flags=\"\"","json":{"gh-request-id":"X-Github-Delivery=1d4308b0-49a4-11ed-8c8a-18570fd37789"}}
atlantis-0 atlantis {"level":"error","ts":"2022-10-11T20:34:36.121Z","caller":"events/command_runner.go:427","msg":"PANIC: runtime error: invalid memory address or nil pointer dereference\nruntime/panic.go:260 (0x44c995)\nruntime/signal_unix.go:835 (0x44c965)\ngithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:329 (0xb8be0b)\ngithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:398 (0xb8c3d7)\ngithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:179 (0xb92dcb)\ngithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:72 (0xb947c4)\ngithub.com/runatlantis/atlantis/server/events/vcs/pull_status_fetcher.go:28 (0xb95045)\ngithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108 (0xd74c15)\ngithub.com/runatlantis/atlantis/server/events/command_runner.go:296 (0xd79823)\nruntime/asm_amd64.s:1594 (0x467ca0)\n","json":{"repo":"myorg/terraform","pull":"423"},"stacktrace":"github.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).logPanics\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:427\nruntime.gopanic\n\truntime/panic.go:890\nruntime.panicmem\n\truntime/panic.go:260\nruntime.sigpanic\n\truntime/signal_unix.go:835\ngithub.com/runatlantis/atlantis/server/events/vcs.(*GithubClient).GetCombinedStatusMinusApply\n\tgithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:329\ngithub.com/runatlantis/atlantis/server/events/vcs.(*GithubClient).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:398\ngithub.com/runatlantis/atlantis/server/events/vcs.(*InstrumentedClient).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:179\ngithub.com/runatlantis/atlantis/server/events/vcs.(*ClientProxy).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:72\ngithub.com/runatlantis/atlantis/server/events/vcs.(*pullReqStatusFetcher).FetchPullStatus\n\tgithub.com/runatlantis/atlantis/server/events/vcs/pull_status_fetcher.go:28\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:296"}atlantis-0 atlantis {"level":"info","ts":"2022-10-11T20:30:50.743Z","caller":"models/shell_command_runner.go:156","msg":"successfully ran \"/usr/local/bin/terraform0.13.7 plan -input=false -refresh -out \\\"/atlantis-data/repos/myorg/terraform/423/default/environments/myorg-preprd/preprd/default.tfplan\\\"\" in \"/atlantis-data/repos/myorg/terraform/423/default/environments/myorg-preprd/preprd\"","json":{"repo":"myorg/terraform","pull":"423"}}
atlantis-0 atlantis {"level":"info","ts":"2022-10-11T20:30:51.085Z","caller":"events/instrumented_project_command_runner.go:53","msg":"plan success. output available at: https://github.com/myorg/terraform/pull/423","json":{"repo":"myorg/terraform","pull":"423"}}
atlantis-0 atlantis {"level":"info","ts":"2022-10-11T20:31:36.239Z","caller":"events/events_controller.go:533","msg":"parsed comment as command=\"apply\" verbose=false dir=\"\" workspace=\"\" project=\"\" flags=\"\"","json":{"gh-request-id":"X-Github-Delivery=b3539690-49a3-11ed-942d-b98d8f8893e6"}}
atlantis-0 atlantis {"level":"error","ts":"2022-10-11T20:31:38.253Z","caller":"events/command_runner.go:427","msg":"PANIC: runtime error: invalid memory address or nil pointer dereference\nruntime/panic.go:260 (0x44c995)\nruntime/signal_unix.go:835 (0x44c965)\ngithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:329 (0xb8be0b)\ngithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:398 (0xb8c3d7)\ngithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:179 (0xb92dcb)\ngithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:72 (0xb947c4)\ngithub.com/runatlantis/atlantis/server/events/vcs/pull_status_fetcher.go:28 (0xb95045)\ngithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108 (0xd74c15)\ngithub.com/runatlantis/atlantis/server/events/command_runner.go:296 (0xd79823)\nruntime/asm_amd64.s:1594 (0x467ca0)\n","json":{"repo":"myorg/terraform","pull":"423"},"stacktrace":"github.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).logPanics\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:427\nruntime.gopanic\n\truntime/panic.go:890\nruntime.panicmem\n\truntime/panic.go:260\nruntime.sigpanic\n\truntime/signal_unix.go:835\ngithub.com/runatlantis/atlantis/server/events/vcs.(*GithubClient).GetCombinedStatusMinusApply\n\tgithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:329\ngithub.com/runatlantis/atlantis/server/events/vcs.(*GithubClient).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:398\ngithub.com/runatlantis/atlantis/server/events/vcs.(*InstrumentedClient).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:179\ngithub.com/runatlantis/atlantis/server/events/vcs.(*ClientProxy).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:72\ngithub.com/runatlantis/atlantis/server/events/vcs.(*pullReqStatusFetcher).FetchPullStatus\n\tgithub.com/runatlantis/atlantis/server/events/vcs/pull_status_fetcher.go:28\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:296"}
atlantis-0 atlantis {"level":"info","ts":"2022-10-11T20:33:21.288Z","caller":"events/events_controller.go:533","msg":"parsed comment as command=\"apply\" verbose=false dir=\"\" workspace=\"\" project=\"\" flags=\"\"","json":{"gh-request-id":"X-Github-Delivery=f1dc0aa0-49a3-11ed-9c31-8f78a882a809"}}
atlantis-0 atlantis {"level":"error","ts":"2022-10-11T20:33:23.747Z","caller":"events/command_runner.go:427","msg":"PANIC: runtime error: invalid memory address or nil pointer dereference\nruntime/panic.go:260 (0x44c995)\nruntime/signal_unix.go:835 (0x44c965)\ngithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:329 (0xb8be0b)\ngithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:398 (0xb8c3d7)\ngithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:179 (0xb92dcb)\ngithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:72 (0xb947c4)\ngithub.com/runatlantis/atlantis/server/events/vcs/pull_status_fetcher.go:28 (0xb95045)\ngithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108 (0xd74c15)\ngithub.com/runatlantis/atlantis/server/events/command_runner.go:296 (0xd79823)\nruntime/asm_amd64.s:1594 (0x467ca0)\n","json":{"repo":"myorg/terraform","pull":"423"},"stacktrace":"github.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).logPanics\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:427\nruntime.gopanic\n\truntime/panic.go:890\nruntime.panicmem\n\truntime/panic.go:260\nruntime.sigpanic\n\truntime/signal_unix.go:835\ngithub.com/runatlantis/atlantis/server/events/vcs.(*GithubClient).GetCombinedStatusMinusApply\n\tgithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:329\ngithub.com/runatlantis/atlantis/server/events/vcs.(*GithubClient).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:398\ngithub.com/runatlantis/atlantis/server/events/vcs.(*InstrumentedClient).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:179\ngithub.com/runatlantis/atlantis/server/events/vcs.(*ClientProxy).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:72\ngithub.com/runatlantis/atlantis/server/events/vcs.(*pullReqStatusFetcher).FetchPullStatus\n\tgithub.com/runatlantis/atlantis/server/events/vcs/pull_status_fetcher.go:28\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:296"}
atlantis-0 atlantis {"level":"info","ts":"2022-10-11T20:34:33.993Z","caller":"events/events_controller.go:533","msg":"parsed comment as command=\"apply\" verbose=false dir=\"environments/devtools/primary\" workspace=\"\" project=\"\" flags=\"\"","json":{"gh-request-id":"X-Github-Delivery=1d4308b0-49a4-11ed-8c8a-18570fd37789"}}
atlantis-0 atlantis {"level":"error","ts":"2022-10-11T20:34:36.121Z","caller":"events/command_runner.go:427","msg":"PANIC: runtime error: invalid memory address or nil pointer dereference\nruntime/panic.go:260 (0x44c995)\nruntime/signal_unix.go:835 (0x44c965)\ngithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:329 (0xb8be0b)\ngithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:398 (0xb8c3d7)\ngithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:179 (0xb92dcb)\ngithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:72 (0xb947c4)\ngithub.com/runatlantis/atlantis/server/events/vcs/pull_status_fetcher.go:28 (0xb95045)\ngithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108 (0xd74c15)\ngithub.com/runatlantis/atlantis/server/events/command_runner.go:296 (0xd79823)\nruntime/asm_amd64.s:1594 (0x467ca0)\n","json":{"repo":"myorg/terraform","pull":"423"},"stacktrace":"github.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).logPanics\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:427\nruntime.gopanic\n\truntime/panic.go:890\nruntime.panicmem\n\truntime/panic.go:260\nruntime.sigpanic\n\truntime/signal_unix.go:835\ngithub.com/runatlantis/atlantis/server/events/vcs.(*GithubClient).GetCombinedStatusMinusApply\n\tgithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:329\ngithub.com/runatlantis/atlantis/server/events/vcs.(*GithubClient).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/github_client.go:398\ngithub.com/runatlantis/atlantis/server/events/vcs.(*InstrumentedClient).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:179\ngithub.com/runatlantis/atlantis/server/events/vcs.(*ClientProxy).PullIsMergeable\n\tgithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:72\ngithub.com/runatlantis/atlantis/server/events/vcs.(*pullReqStatusFetcher).FetchPullStatus\n\tgithub.com/runatlantis/atlantis/server/events/vcs/pull_status_fetcher.go:28\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:296"}

jamengual pushed a commit that referenced this issue Nov 24, 2022
@nitrocode
Member

Ah so the biggest limitation is that if you use parallel plans, it only works for the workspace: default.

If you use -p instead of -d, and define all your projects, does it work with the correct workspace in your in-repo atlantis.yaml file?
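
Something like this sketch (names and the second dir are inferred from the log above, so treat them as placeholders), which would let you comment atlantis apply -p myorg-preprd instead of using -d:

version: 3
parallel_plan: true
parallel_apply: true
projects:
  - name: myorg-preprd
    dir: environments/myorg-preprd/preprd
    workspace: preprd # distinct workspace to avoid the default-workspace lock
  - name: devtools-primary
    dir: environments/devtools/primary
    workspace: devtools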

@patr00n

patr00n commented Jan 16, 2023

Still not working even with parallel_plan: true.

The default workspace at path terraform/ is currently locked by another command that is running for this pull request.
Wait until the previous command is complete and try again.

Our use case is reusing the same code but separating states with the -backend-config argument. For example, we pass -backend-config=./dev.tfbackend, where the file contains:

key = "dev.tfstate"

and -backend-config=./prod.tfbackend, where the file contains:

key = "prod.tfstate"

@Fabianoshz
Contributor

Hi @patr00n, I'm currently working on fixing the hooks behavior so we can enable parallel runs for plans and applies here: #2882. But before that, this PR needs to get merged: #2921, so we can reduce the number of clones instead of increasing it because of the hooks.

@brandon-fryslie

brandon-fryslie commented Apr 3, 2023

We also have a folder structure that uses the same Terraform root modules for different environments, and we pass -backend-config=backend-env1.tfvars and -var-file=env1.tfvars CLI args to control which environment to target (similar to some users above). We have a large amount of existing infrastructure and cannot switch to Terraform workspaces for various reasons. It's unfortunate that parallel plans and applies do not seem to work for use cases beyond Terraform workspaces.

I am setting TF_DATA_DIR in my projects in the repo config, but it looks like it is Atlantis's own locking mechanism that is breaking the ability to parallelize things. If there were simply a way to configure the Atlantis lockfile path in the project config (possibly via an environment variable, or another method), I think that would solve the issue entirely. We can already run all of the plans in parallel locally by setting TF_DATA_DIR so there is no interference in the .terraform directory. We're also generating our repo config based on the directory structure, so it would be trivial to set an additional option to control the lockfile path so it matches the environment name.

Is there any known workaround at this point for running parallel plans/applies without using Terraform workspaces, or do we need to wait for additional functionality to be implemented? Atlantis is a wonderful tool and we really appreciate all the hard work that has gone into it!

@brandon-fryslie

OK, I was able to come up with a workaround that seems to be working for my use case (which is described in the comment above):

  • Set the TF_DATA_DIR environment variable in your custom workflow config so the .terraform directory doesn't conflict between the same module running against different environments (I use the environment name, e.g. dev, staging, etc)
  • Set the workspace option in the project config so Atlantis thinks it is running in a different workspace
  • In your custom workflow, you will need to use a custom run step rather than the built-in plan or apply provided by Atlantis. This prevents Atlantis from attempting to switch to the new workspace when running the plan/apply. I'm also setting TF_WORKSPACE=default in the workflow, but this likely isn't necessary

Example project config:

name: proj1_dev
workspace: dev # this will be ignored when overriding the default plan/apply behavior
dir: terraform/proj1
workflow: dev

The custom workflow for plan in environment dev would be something like this (do similarly for apply, and for each environment. I recommend generating it via a script):

- env:
    name: ENV_NAME
    value: dev
- env:
    name: TF_DATA_DIR
    value: .terraform_dev # pattern I'm using is .terraform_${ENV_NAME}
- init:
    extra_args:
      - "-backend-config"
      - "environments/${ENV_NAME}/backend.tfvars"
- run: "terraform plan -input=false -refresh -out $PLANFILE -var-file environments/${ENV_NAME}/terraform.tfvars"

This is not heavily tested, but at least initially it seems to be working just fine, allowing parallel plans when running the same module with multiple environments using different backend configurations (without using Terraform workspaces).

You will see the wrong workspace name in the Atlantis UI and PR comments, but if you're not using Terraform workspaces that is likely unimportant. Tagging @patr00n since your setup seems to be similar to mine.
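
Putting the pieces together, an untested end-to-end sketch of the workaround for one environment (the workflows block has to live wherever your setup allows custom workflows, e.g. a server-side repos.yaml):

version: 3
parallel_plan: true
parallel_apply: true
projects:
  - name: proj1_dev
    dir: terraform/proj1
    workspace: dev # only affects Atlantis's lock name once the built-in steps are overridden
    workflow: dev
workflows:
  dev:
    plan:
      steps:
        - env:
            name: TF_DATA_DIR
            value: .terraform_dev
        - init:
            extra_args: ["-backend-config", "environments/dev/backend.tfvars"]
        - run: terraform plan -input=false -refresh -out $PLANFILE -var-file environments/dev/terraform.tfvars
    apply:
      steps:
        - env:
            name: TF_DATA_DIR
            value: .terraform_dev
        - run: terraform apply -input=false $PLANFILE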

@nitrocode
Member

Good workaround. So it seems that the escape hatch for doing parallel plans & applies for non-workspaces in the same root dir can be done by setting a fictitious workspace and then explicitly setting the workspace.

I'd imagine changes would need to be made to Atlantis to support Atlantis locking reusing a root dir without workspaces. For each lock it places, it should check whether a lock is already placed by the same PR for the same dir and the same Atlantis run, and if so, it shouldn't throw an error. Seems like a bug.

@brandon-fryslie

brandon-fryslie commented Apr 4, 2023

setting a fictitious workspace and then explicitly setting the workspace

It is not necessary to explicitly set a workspace as long as you're using the default workspace. But it is necessary to override Atlantis's default behavior of switching to that workspace (by overriding the built-in plan/apply functionality via run commands).

I'd imagine changes would need to be made to atlantis to support atlantis locking to reuse a root dir without workspaces.

All that would be necessary is to allow setting the lockfile name to something other than the workspace name. In the workaround, we're effectively using the workspace parameter to override the Atlantis lockfile name, with hopefully minimal other side effects. Of course, it's also important to use TF_DATA_DIR to override the location of the .terraform directory, or there will be issues there.

We are not using Terraform workspaces at all. We have multiple AWS accounts, with a 'hub' account having IAM permissions to assume a specific role (with permissions to plan/apply Terraform) in each 'spoke' account. Each individual account contains the state resources (S3 bucket/DynamoDB table). That setup does not work well with Terraform workspaces (i.e., having a workspace per account), where state resources are reused across workspaces. Terraform workspaces are compatible with this setup but don't provide any value, so we aren't using them.

@jamengual
Contributor

I use one monorepo with workspaces per module, and each remote state is one per account.
The workspace is equal to the namingConvention-var.name of the instantiated module. So, for example, a vpc module will be pepe-dev-uw2-vpc, and that will be the workspace for that instantiation of the module in the dev account's remote state.
The chances of colliding locks are VERY low and it works well.
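
As a sketch of that convention, each module instantiation gets its own workspace string, so Atlantis locks almost never collide:

projects:
  - name: pepe-dev-uw2-vpc
    dir: modules/vpc # placeholder path
    workspace: pepe-dev-uw2-vpc # namingConvention-var.name of the instantiated module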

@george-zubrienko

george-zubrienko commented Sep 8, 2023

A side note here: is there any issue or plan to support launching plan/apply jobs as k8s Jobs? That would solve all the issues with parallelisation across multiple projects/workspaces. It would require an RWM PV to store plan files, but that's a minor thing IMO, and it could all be configured from the Helm chart.
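
To make that concrete, a hypothetical Job for a single plan could look roughly like this (nothing like it exists in Atlantis today; the image, naming, labels, and the shared ReadWriteMany claim for plan files are all assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: atlantis-plan-pr123-proj1 # hypothetical naming: PR number + project
  labels:
    atlantis/pull: "123" # labels could link the Job back to the PR
    atlantis/project: proj1
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: plan
          image: ghcr.io/runatlantis/atlantis:latest # assumed image
          command: ["terraform", "plan", "-input=false", "-out", "/plans/proj1.tfplan"]
          volumeMounts:
            - name: plans
              mountPath: /plans
      volumes:
        - name: plans
          persistentVolumeClaim:
            claimName: atlantis-plans # the shared RWM PV mentioned above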

@jamengual
Contributor

jamengual commented Sep 8, 2023 via email

@george-zubrienko

but then you still need to have a way to sync the status of the jobs back to the original PR etc. I do not think it is that simple; it will require a lot of work.

I agree it is not a one-day feat. However, it would resolve issues with provider version conflicts, allow an almost unlimited level of parallelization, etc.
Job status tracking is indeed tricky, especially when considering eviction as a possibility. But linking a job to a PR is not something a few labels can't solve, IMO. I'd be more worried about failure recovery, to be fair.
But say there are people brave enough to try: would this be something the Atlantis community accepts?

@jamengual
Contributor

We would love such a contribution, but if you are serious and you have support (company/time), we will have to sync up to discuss how this could be built to match the future roadmap for Atlantis 1.0.
@GenPage

Take a look at this to understand more about the locks issue:

#3345

@george-zubrienko

We would love such a contribution, but if you are serious and you have support (company/time), we will have to sync up to discuss how this could be built to match the future roadmap for Atlantis 1.0.

I might be able to negotiate some time to get this feature in. We run most of our org's workloads using k8s Jobs built from code/consumed from CRDs, so this is the kind of feature we have experience with. I can also put forward an architecture proposal for discussion. Before the actual implementation I need a week or two to sort out time allocation for an engineer, and one more thing: I am not very good with lawyer stuff, but I know for sure some commercial IaC vendors use k8s Jobs for exactly this. Even though the way I see this implemented in Atlantis would most likely be different, I want to ask internally, and here as well, whether this could be a problem?

@jamengual
Contributor

Anyone can use k8s Jobs, no? As long as we do not use an app that has an incompatible licence, we should be good.

One thing to keep in mind is that we will need a way to expand this solution to non-k8s workloads, since Atlantis supports both, so any code changes will have to be backwards compatible.

@george-zubrienko

george-zubrienko commented Sep 12, 2023

One thing to keep in mind is that we will need a way to expand this solution to non-k8s workloads, since Atlantis supports both, so any code changes will have to be backwards compatible.

I was thinking of adding this as an opt-in feature. If users do not enable it in Helm values, then it's vanilla mode with the current functionality. Wdyt?
I'll go ahead and create an issue with a design proposal, so we can move this discussion there and start the process of aligning with the 1.0 release.
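
For example, the opt-in could be a hypothetical block in the chart's values.yaml (illustrative keys only, not part of the current chart):

# hypothetical values.yaml keys, not implemented
executionMode: kubernetes-jobs # default would stay "in-process" (current behavior)
jobs:
  parallelism: 10
  planVolumeClaim: atlantis-plans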

@GenPage
Member

GenPage commented Sep 12, 2023

Yes, please put together a design proposal for things like this. We are already attempting to fix the issue with locking/parallel plans with #3345.

That mainly focuses on the file-system side of things. Adding support for K8s Jobs as "workers" is completely different. While we do want to trend in a more decentralized manner, it must be done in a way that is carefully architected so that K8s doesn't become a requirement.
