Thoughts on using a configuration management framework? #37
Hi, Justin, great to see your interest and your willingness to contribute! Would you elaborate a little bit more on the problem that you're trying to solve? Also, have you already checked the FAQ section "I'm running SageMaker in a VPC. Do I need to make extra configuration?"? It shows example Dockerfiles where everything comes pre-installed, so you don't need to "configure" anything extra, change URLs, etc.
The particular use case is connecting SageMaker Studio's Jupyter server app to the kernel gateway apps, to enable interactive plotting libraries that need a web server running. Similar to the web VNC example. I did see the Dockerfiles. Building them in an environment without direct internet access isn't possible (same issue as running the scripts directly). There are a couple of specific things I thought a config manager could help address.
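For context, the use case above roughly amounts to forwarding a web-server port out of a kernel gateway app over SSM. A minimal sketch of that idea follows; the managed-instance id, the ports, and the helper function are all made-up placeholders, not part of the project:

```shell
#!/bin/bash
# Hypothetical sketch: assemble the SSM port-forwarding command that would
# expose a web server (e.g. Bokeh/Dash on port 8888) running inside a
# kernel gateway app to the local machine. The target id is a placeholder.
build_forward_cmd() {
  local target="$1" remote_port="$2" local_port="$3"
  printf '%s' "aws ssm start-session --target ${target}" \
    " --document-name AWS-StartPortForwardingSession" \
    " --parameters portNumber=${remote_port},localPortNumber=${local_port}"
}

CMD="$(build_forward_cmd mi-0123456789abcdef0 8888 8888)"
echo "$CMD"   # inspect before running; actually executing it requires AWS credentials
```

The command is only printed here, not executed, since running it needs a real managed instance and AWS credentials.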
Thank you, @jmahlik, I will take a look at your concerns. As to the last point, which version of the library do you use on the client and on the remote? The pipefail option has been added to some scripts in the latest version. Which ones do you think still need this option turned on?
I ended up doing the same thing, @jmahlik. On the local side, I also refactored some of the code into a Python install to integrate with VSCode for our Windows users.
Hi, @DrJeckyl, do you also have no internet access during the build of the custom image, and do you need to download tools like the AWS CLI and SSM Agent from internal locations?
No, @ivan-khvostishkov - we use a code pipeline with internet access when building the custom images. Admittedly, we are a few versions behind and should update to see what's different now.
Hi, @DrJeckyl, did you have a chance to try the latest version? If you have internet access during the build pipeline, then you can just add this command to your Dockerfile:
It will download and install all the libraries, so later, when you run the lifecycle configuration script, it will detect that everything is already configured and won't try to install anything from the internet. In this case you don't need to patch the locations of the libraries.

I understand that you want to patch the lifecycle script with a specific value. Of course, you need to modify the scripts a little bit to call the Systems Manager API, and you are encouraged to do so, because this repository is sample code. But is there any logic that you propose to become part of the main branch? If we add a new lifecycle configuration script that fetches the user IDs from Parameter Store, will that help resolve your issue? Let me know your thoughts.
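A minimal sketch of the Parameter Store idea, assuming a parameter name like /sagemaker/ssh-helper/user-ids (the name and the fallback value are made up for illustration, not part of the repository):

```shell
#!/bin/bash
# Hypothetical sketch: a lifecycle-configuration fragment that fetches the
# allowed SSM user ids from Parameter Store instead of hard-coding them in
# the script. Parameter name and fallback value are assumptions.
fetch_user_ids() {
  aws ssm get-parameter \
    --name "/sagemaker/ssh-helper/user-ids" \
    --query "Parameter.Value" \
    --output text 2>/dev/null \
  || echo "AIDACKCEVSQ6C2EXAMPLE"   # fallback when the call (or the CLI) fails
}

SSH_USER_IDS="$(fetch_user_ids)"
echo "Configuring SSH access for: ${SSH_USER_IDS}"
```

The fallback keeps the script usable in environments where the parameter (or AWS access) is missing, which matches the restricted-internet scenario discussed above.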
@jmahlik Following up on your original post, could you please help me understand in which part you propose to run Ansible? As part of the lifecycle configuration script, or elsewhere? You mentioned that you're already in the process of creating the playbook; have you succeeded with it? It would be great if you shared your learnings.
It's pretty hard to get this up and running in an account that has restricted internet access.
I had to fork and refactor almost all of the bash scripts. This was quite a challenge, as they are a little unwieldy (it is bash, after all). So I had a thought based on how I handle setting up dev environments on Linux boxes.
Moving the install/run functionality to a declarative configuration management system would make maintaining, extending and using the project easier.
What would your thoughts be on managing the installs and configuration via something like Ansible? I recommended Ansible since it's lightweight and easy to work with. It's a Python package, so it only needs Python, which we already have. But it could be any config system.
The user experience could remain the same: the bash scripts would become thin shims around the config manager. The setup could likely be simplified too, with fewer steps to get up and running. You'd just run one command that brings the system into the desired state, instead of having to nohup a bunch of bash scripts.
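As a rough illustration of the shim idea (the playbook name, the environment variable, and the fallback message are assumptions, and the fallback branch stands in for today's bash-only path):

```shell
#!/bin/bash
# Hypothetical shim: keep the existing entry point and its interface, but
# delegate the real install/configure work to a config-management run when
# one is available. Playbook name and env var are made up for illustration.
PLAYBOOK="${SSH_HELPER_PLAYBOOK:-setup-ssh-helper.yml}"

run_setup() {
  if command -v ansible-playbook >/dev/null 2>&1; then
    # Declarative path: Ansible converges the machine to the desired state
    ansible-playbook --connection=local --inventory "localhost," "$PLAYBOOK"
  else
    # Fallback path: this is where the current bash install scripts would run
    echo "ansible-playbook not found; would run legacy bash setup for $PLAYBOOK"
  fi
}

# Tolerate failure so the shim degrades gracefully in either branch
run_setup || echo "setup did not complete (expected without a real playbook)"
```

The point is that callers keep invoking one script with the same name, while the heavy lifting moves into a declarative, idempotent definition that is easier to maintain than nested bash.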
It'd be easier to:
I'd be willing to contribute work towards this, since maintaining a copy of the bash scripts is quite painful. I'm already in the process of exploring a playbook for starting the SSH helper.