
Added support for s390x and ppc64le via catalog source #214

Closed
wants to merge 7 commits into from

Conversation

R3hankhan123

Updated the build-image workflow to prepare multi-arch support for authorino-operator-catalog. Tested locally and deployed via the steps given in the README.

[Screenshots of the local deployment attached]

This PR was co-authored by @Deepali1999 and @R3hankhan123.

@R3hankhan123 (Author)

@guicassolato @jasonmadigan @willthames Please review the changes for supporting the authorino-operator-catalog image for s390x and ppc64le.

@guicassolato (Collaborator) left a comment

Hi @R3hankhan123.

First and foremost, thank you for this PR! Indeed we haven't been building the catalog images for s390x and ppc64le arches – only the operator and bundle images.

I do have a few concerns with the proposed approach nonetheless, starting with the substitution of buildah-build for a custom script. This would make Authorino the only component of Kuadrant not using buildah-build for any of its container images in CI.

Provided Buildah supports s390x and ppc64le, any particular reason why not to keep it, maybe by adding the two arches here? Did you have any issues with the current approach and supporting these arches?

I've left other comments as well. Hopefully we'll be able to iterate over those together and continue the work on this PR, because it is indeed very much needed.
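For reference, adding the two arches to a buildah-based multi-arch build might look roughly like this. The image name and Dockerfile path are placeholders, not taken from this repo:

```
# Illustrative only: a buildah multi-arch build covering all four arches
buildah build \
  --platform linux/amd64,linux/arm64,linux/s390x,linux/ppc64le \
  --manifest quay.io/example/authorino-operator-catalog:latest \
  -f catalog.Dockerfile .

# Push the manifest list with all per-arch images attached
buildah manifest push --all \
  quay.io/example/authorino-operator-catalog:latest \
  docker://quay.io/example/authorino-operator-catalog:latest
```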

Comment on lines 17 to 18

```sh
for tag in "${tags[@]}"
do
```
Collaborator:

This will detach the tags into separate builds (separate manifests). The other container images (operator and OLM bundle) are built just once with multiple tags. Breaking with this pattern may affect automation that depends on the link from a single manifest build.

Author:

opm index add doesn't support building with multiple tags; that's why we have to iterate. buildah has the capability to build once with multiple tags, which is not the case with opm.
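As a sketch of that constraint, opm index add accepts a single --tag per invocation, so producing two tags means two invocations. The bundle and catalog refs below are placeholders:

```
# opm index add takes one --tag per invocation, hence the loop
tags=("latest" "${GITHUB_SHA}")
for tag in "${tags[@]}"; do
  ./bin/opm index add \
    --bundles quay.io/example/authorino-operator-bundle:latest \
    --tag "quay.io/example/authorino-operator-catalog:${tag}" \
    --container-tool docker
done
```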

Collaborator:

I understand. As I mentioned before, I'd rather not break the link between tags built from the exact same version of the code and build args, but keep them all linked as part of the same manifest instead.

This not being possible here, I guess we'll have to wait until we get confirmation that no automation currently depends on linked tags.

In the meantime, maybe we could build the catalogs for s390x and ppc64le separately from amd64 and arm64? We'd keep the current ones as they are, based on buildah, and add another step for s390x and ppc64le. WDYT?

Author:

I am afraid that's not possible, as it overwrites the images as we push them into the Quay repo.

[Screenshot of the Quay repository attached]

Comment on lines -80 to -88
## Deploy authorino operator using operator-sdk
1. Install operator-sdk bin
```sh
make operator-sdk
```
2. Run operator-sdk bundle command
```sh
./bin/operator-sdk run bundle quay.io/kuadrant/authorino-operator-bundle:latest
```
Collaborator:

Why remove these instructions?

Author:

Initially, catalog image support was not there for s390x and ppc64le, so this instruction was added so users could install the operator using operator-sdk. Now that we have added catalog support, users can follow the generic approach.

```yaml
- name: Run make catalog (main)
  if: ${{ github.ref_name == env.MAIN_BRANCH_NAME }}
  run: |
    make catalog \
```
Collaborator:

This make target has been recently refactored to replace the deprecated SQLite-based generation of the catalog with file-based builds (#201).

Also, I'd rather stick with a single way of doing it between the local dev env and CI, if possible. So I wish there were a way for us to keep relying on the make target here.

Author:

opm index add generates Operator index container images from pre-existing Operator bundles, and builds them as well. Since it does both jobs, we removed the make catalog option.

Collaborator:

I'm not sure if we can roll back to using opm index add for generating the catalog. I'm gonna have to ask people smarter than me on this one, I'm afraid.

cc @didierofrivia @eguzki

Member:

If that means going back to SQLite-based catalogs, I'd say it's not a good idea.

Author:

@didierofrivia opm index add doesn't mean it will fall back to SQLite-based catalogs. The snapshot below was taken from the OpenShift docs. It says that the default Red Hat-provided Operator catalogs release in the file-based catalog format from 4.11, and we have used version 4.14.4 of opm.

[Screenshot of the OpenShift docs attached]

Member:

For what it's worth, inspecting one of the pushed images (for example quay.io/r3hankhan/authorino-operator-catalog:latest-s390x), I'm pretty sure they're SQLite-based and not file-based catalogs (looking at the last layer).
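One hedged way to check this from the layer contents, assuming the opm conventions that a file-based catalog ships declarative config (typically under /configs) while a SQLite catalog ships /database/index.db:

```
# Export the image filesystem and look for catalog-format markers:
# configs/... suggests file-based, database/index.db suggests SQLite
ctr=$(podman create quay.io/r3hankhan/authorino-operator-catalog:latest-s390x)
podman export "$ctr" | tar -t | grep -E '^configs/|^database/index.db'
podman rm "$ctr" >/dev/null
```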

```sh
# * `docker`
# * `opm`
```
Collaborator:

I think it's OK to assume docker, but opm should be installed to the ./bin directory if the process depends on it. The make targets currently used to build the catalog images take care of that. We should favour that approach, if possible.

Author:

Currently the script is invoked only from the GitHub workflow, which handles opm binary creation automatically. In case a user wants to run the script directly, they have to run the make target for opm as a prerequisite. If you want, we can add that in the comments.


R3hankhan123 commented Sep 25, 2024

> (quoting @guicassolato's review comment above)

By adding s390x and ppc64le to the buildah arguments, the catalog image is generated with the x86 opm binary. That's the reason we are using the opm binary for each architecture to generate the catalog image in the script.

@R3hankhan123 (Author)

Hi @guicassolato, can you provide further comments based on the reply?

@R3hankhan123 (Author)

@guicassolato, is there any Slack channel where we can discuss your concerns regarding the PR, as it's being dragged on?

@guicassolato (Collaborator)

> @guicassolato is there any slack channel where we can discuss your concerns regarding the PR as its being dragged on

@R3hankhan123, you can find us all at the #kuadrant channel in the Kubernetes Slack workspace.

I think the 2 main concerns at this point are:

  1. Ensure we're not rolling back to SQLite-based catalogs. I see you've replied to that, though I cannot confirm whether that OpenShift deprecation notice means what you say it does. I need somebody with more experience on that than myself to open the generated catalog images and approve the PR regarding this point of the catalog format.
  2. Currently, even if only for the amd64 and arm64 architectures, we cannot push latest and SHA-tagged catalog images to the registry in two separate image manifests. These two image tags must be linked into a single image manifest in the registry, until the head of main changes and latest is rebuilt and linked again to another SHA. There is automation in QE that depends on this link.
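Point 2 amounts to building once and attaching both tags to the same image, which buildah supports directly. A sketch with placeholder refs:

```
# Build once, tag twice: both tags point at the same manifest
buildah bud -f catalog.Dockerfile \
  -t quay.io/example/authorino-operator-catalog:latest \
  -t "quay.io/example/authorino-operator-catalog:${GITHUB_SHA}" .
```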


R3hankhan123 commented Nov 4, 2024

Hi @guicassolato, I have made the catalog source file-based, and now the tags are linked as well.

[Screenshots of the file-based catalog and linked tags attached]

Co-authored-by: Rehan Khan <[email protected]>
Co-authored-by: Deepali Kushwah <[email protected]>
Signed-off-by: Rehan Khan <[email protected]>
@R3hankhan123 (Author)

> (quoting @guicassolato's comment above)

Both points have been addressed in the latest changes.

@R3hankhan123 (Author)

Hi @guicassolato, any more changes to be made?

@didierofrivia (Member)

@R3hankhan123 Hey, thanks for your time and the interest in our project! I'm a bit confused by your approach; you say you can't use buildah because the catalog image is generated with the x86 opm binary...

> (quoting @guicassolato's review comment and @R3hankhan123's reply above)

The resulting Docker image created with opm references the latest tag by default, which is built for the needed multi-archs. Is that not enough? How is it failing when you use the catalog image resulting from buildah with s390x and ppc64le? Do you have them hosted somewhere? Have you checked the buildah multi-arch examples?

In case one needs to directly specify the base opm image in the catalog Dockerfile, I would craft the catalog passing the arch as a matrix strategy in the GH workflow, but still rely on buildah and the scripts we have in place: open the make catalog target to accept an arch env var so it can add the -i flag accordingly, then use buildah in the next step to build the image as part of the matrix iteration. What I mean is that you're proposing to execute a script which might not work the same on GH workflows as locally, and which radically changes the way we run our pipeline jobs.

Probably something like this: #222. Would that be OK for your use case?
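A rough sketch of that matrix approach, where the step names, ARCH variable, and action inputs are assumptions rather than the actual workflow:

```yaml
jobs:
  build-catalog:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        arch: [amd64, arm64, s390x, ppc64le]
    steps:
      - uses: actions/checkout@v4
      # Hypothetical: make catalog would accept the arch and pass opm's -i flag
      - name: Run make catalog
        run: make catalog ARCH=${{ matrix.arch }}
      - name: Build catalog image
        uses: redhat-actions/buildah-build@v2
        with:
          image: authorino-operator-catalog
          tags: latest-${{ matrix.arch }}
          archs: ${{ matrix.arch }}
          dockerfiles: |
            catalog.Dockerfile
```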

@R3hankhan123 (Author)

> (quoting @didierofrivia's comment above)

Hi @didierofrivia, with the PR you raised, when I ran the workflow I noticed that the images were being overwritten, i.e. amd64's image was being overwritten by arm64's, and so on.

[Screenshot of the Quay repository attached]

```
rehankhan@Rehans-MacBook-Pro Documents % podman pull quay.io/r3hankhan/authorino-operator-catalog:latest
Trying to pull quay.io/r3hankhan/authorino-operator-catalog:latest...
Getting image source signatures
Copying blob sha256:aa97670cb9dee5ec6b3056e0789422f2adc072a268fd881fbd6fb1151162ce57
Copying blob sha256:ae475359a3fb8fe5e3dff0626f0c788b94340416eb5c453339abc884ac86b671
Copying blob sha256:1cd0595314a53d179ddaf68761c9f40c4d9d1bcd3f692d1c005938dac2993db6
Copying blob sha256:7062267a99d0149e4129843a9e3257b882920fb8554ec2068a264b37539768bc
Copying blob sha256:f89d6e2463fc5fff8bba9e568ec28a6030076dbc412bd52dff6dbf2b5897a59d
Copying blob sha256:25d123725cf91c20b497ca9dae8e0a6e8dedd8fe64f83757f3b41f6ac447eac0
Copying config sha256:93f8d113d3d8962d3f235f9c1792aca0bec1730d2429a037e81f7cf0f3581212
Writing manifest to image destination
WARNING: image platform (linux/amd64) does not match the expected platform (linux/arm64)
93f8d113d3d8962d3f235f9c1792aca0bec1730d2429a037e81f7cf0f3581212
```


didierofrivia commented Nov 5, 2024

Hi @R3hankhan123, have you tried specifying the arch when you pull the image? As in

```sh
podman pull quay.io/r3hankhan/authorino-operator-catalog:latest --arch=linux/arm64
```

or

```sh
docker pull quay.io/r3hankhan/authorino-operator-catalog:latest --platform linux/arm64
```

Let me know how it goes.


R3hankhan123 commented Nov 5, 2024

> (quoting @didierofrivia's suggestion above)

@didierofrivia I am getting this:

```
rehankhan@Rehans-MacBook-Pro Documents % podman pull quay.io/r3hankhan/authorino-operator-catalog:test --arch=linux/arm64
Trying to pull quay.io/r3hankhan/authorino-operator-catalog:test...
Getting image source signatures
Copying blob sha256:c8022d07192eddbb2a548ba83be5e412f7ba863bbba158d133c9653bb8a47768
Copying blob sha256:2e4cf50eeb92ac3a7afe75e15d96a26dee99449f86b46c75b5d95f4418a5bca0
Copying blob sha256:4d4401f0320bd6f39c22d9a4a0eba68686c97d1928363283fc47ba8a8dee6382
Copying blob sha256:6f4cfee9177b9f884e8d86b48261a25094b2fcea1a7920919f47ea00712dbee8
Copying blob sha256:0f8b424aa0b96c1c388a5fd4d90735604459256336853082afb61733438872b5
Copying blob sha256:d858cbc252ade14879807ff8dbc3043a26bbdb92087da98cda831ee040b172b3
Copying blob sha256:d557676654e572af3e3173c90e7874644207fda32cd87e9d3d66b5d7b98a7b21
Copying blob sha256:1069fc2daed1aceff7232f4b8ab21200dd3d8b04f61be9da86977a34a105dfdc
Copying blob sha256:b40161cd83fc5d470d6abe50e87aa288481b6b89137012881d74187cfbf9f502
Copying blob sha256:3f4e2c5863480125882d92060440a5250766bce764fee10acdbac18c872e4dc7
Copying blob sha256:80a8c047508ae5cd6a591060fc43422cb8e3aea1bd908d913e8f0146e2297fea
Copying blob sha256:c57d5d2ad6083e98b301e2cb9283489173321ca77397fb6c150bfda946173fdb
Copying blob sha256:ba63ca86b039286f74b2529ac458da1de06035b23bd9b74f2fae2484c30700c0
Copying blob sha256:c072e97f89853830431248238603055ecfb103c6ad3386e3ff8fd46fc9beace6
Copying blob sha256:31b3e74066eb1f3153fceede0b1299ee0492c1573dcf5d6a8181b12b5b3bc788
Copying blob sha256:90f7e50e911c59ef9f96acedce3ac546a20147adf3c8fe2855e8010ac3ba227e
Copying config sha256:b5a7ac7e7cfaf4f837600c1f244ec9c3715ba9c445c37cf1fedf06711e1e3f0b
Writing manifest to image destination
WARNING: image platform (linux/ppc64le) does not match the expected platform (linux/linux/arm64)
b5a7ac7e7cfaf4f837600c1f244ec9c3715ba9c445c37cf1fedf06711e1e3f0b
```
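An aside on the warning above: the doubled prefix in `expected platform (linux/linux/arm64)` suggests the flag usage rather than the image may be at fault. As far as I know, podman's `--arch` expects the bare architecture name (podman prepends the OS itself), while the full os/arch pair goes to `--platform`. A sketch, using the same image ref as above:

```
# --arch takes only the architecture; podman prepends the OS
podman pull --arch=arm64 quay.io/r3hankhan/authorino-operator-catalog:test

# --platform takes the full os/arch pair
podman pull --platform=linux/arm64 quay.io/r3hankhan/authorino-operator-catalog:test
```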

@didierofrivia (Member)

@R3hankhan123 Could you first try `podman pull quay.io/kuadrant/authorino-operator-catalog:test-builda-multiarchs --arch=linux/ppc64le`, or the arch you want to try out (corresponding to #224)... and if that doesn't work, there are also the following images: `quay.io/kuadrant/authorino-operator-catalog:building-multiarch-catalogs-{ARCH}` from this PR: #222.

Cheers!

@R3hankhan123 (Author)

@didierofrivia when I am trying to pull the image corresponding to #224, I am getting the following:

```
podman pull quay.io/kuadrant/authorino-operator-catalog:test-builda-multiarchs --arch=linux/s390x
Trying to pull quay.io/kuadrant/authorino-operator-catalog:test-builda-multiarchs...
Error: choosing an image from manifest list docker://quay.io/kuadrant/authorino-operator-catalog:test-builda-multiarchs: no image found in manifest list for architecture linux, variant "s390x", OS linux
```

But I am able to pull the image corresponding to #222.

@R3hankhan123 (Author)

> (quoting @didierofrivia's comment above)

@didierofrivia @guicassolato, now the workflows will build catalog images using buildah.

[Screenshot of the buildah build step attached]

The workflow will be something like this:

[Screenshot of the workflow run attached]


codecov-commenter commented Nov 6, 2024

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 61.78%. Comparing base (d40dba0) to head (7a7a37c).
Report is 14 commits behind head on main.

Additional details and impacted files

```
@@           Coverage Diff           @@
##             main     #214   +/-   ##
=======================================
  Coverage   61.78%   61.78%
=======================================
  Files           2        2
  Lines         785      785
=======================================
  Hits          485      485
  Misses        249      249
  Partials       51       51
```

| Flag | Coverage Δ |
|------|------------|
| unit | 61.78% <ø> (ø) |

Flags with carried forward coverage won't be shown.

@didierofrivia (Member)

@R3hankhan123 Can you confirm that #222 solves your needs then?


R3hankhan123 commented Nov 6, 2024

> @R3hankhan123 Can you confirm that #222 solves your needs then?

@didierofrivia PR #222 meets our needs, except that the common manifests were not being created, but I have resolved that in my latest code change.

@R3hankhan123 (Author)

After discussing with @didierofrivia, PR #222 will work for our use case. Cheers!
