
Commit

Built site for gh-pages

Quarto GHA Workflow Runner committed Nov 16, 2023
1 parent 3edf7f0 commit fec3d7f
Showing 5 changed files with 45 additions and 43 deletions.
2 changes: 1 addition & 1 deletion .nojekyll
@@ -1 +1 @@
-7c0a989e
+075003d9
6 changes: 3 additions & 3 deletions search.json
@@ -102,7 +102,7 @@
"href": "tutorials/data-access/earthdata-search.html",
"title": "Earthdata Search",
"section": "",
"text": "Earthdata Search\nThis tutorial guides you through how to use Earthdata Search for NASA Earth observations search and discovery, and how to connect the search output (e.g. download or access links) to a programmatic workflow (locally or from within the cloud).\n\nStep 1. Go to Earthdata Search and Login\nGo to Earthdata Search https://search.earthdata.nasa.gov and use your Earthdata login credentials to log in. If you do not have an Earthdata account, please see the Workshop Prerequisites for guidance.\n\n\nStep 2. Search for dataset of interest\nUse the search box in the upper left to type key words. In this example we are interested in the GEDI or ICESAT-2 which is managed by the LP DAAC and made available from the NASA Earthdata Cloud archive hosted in AWS cloud.\nType GEDI in the search bar Click on the “Available from AWS Cloud” filter option on the left.\nWe can click on the (i) icon for the dataset to read more details, including the dataset shortname (helpful for programmatic workflows) just below the dataset name; here ECO_L2T_LSTE.\n\n\nStep 3. Explore the dataset details, including Cloud Access information\nOnce we clicked the (i), scrolling down the info page for the dataset we will see Cloud Access information, such as:\n\nwhether the dataset is available in the cloud\n\nthe cloud Region (all NASA Earthdata Cloud data is/will be in us-west-2 region)\n\nthe S3 storage bucket and object prefix where this data is located\n\nlink that generates AWS S3 Credentials for in-cloud data access (we will cover this in the Direct Data Access Tutorials)\n\nlink to documentation describing the In-region Direct S3 Access to Buckets. Note: these will be unique depending on the DAAC where the data is archived. (We will show examples of direct in-region access in Tutorial 3.)\n\n\n\n\nFigure caption: Cloud access info in EDS\n\n\n\n\n\nFigure caption: Documentation describing the In-region Direct S3 Access to Buckets\n\n\nNote: Clicking on “For Developers” to exapnd will provide programmatic endpoints such as those for the CMR API, and more.\nFor now, let’s say we are intersted in getting download link(s) or access link(s) for specific data files (granules) within this collection.\nAt the top of the dataset info section, click on Search Results, which will take us back to the list of datasets matching our search parameters. Clicking on the dataset (ECOSTRESS ECO_L2T_LSTE) we now see a list of files (granules) that are part of the dataset (collection).\n\n\nStep 4a. Download or data access for a single granule\nTo download files for a granule click the download arrow on the card (or list row)\n\n\n\nFigure caption: Download granules\n\n\nYou can also get the S3 information (e.g., AWS region, bucket, temperary credentials for S3 access, and file names) by selecting the AWS S3 Access tab.\n\n\n\nFigure caption: S3 access for granules\n\n\n\nStep 4b. Download or data access for multiple granule\nTo download multiple granules, click on the green + symbol to add files to our project. Click on the green button towards the bottom that says “Download”. This will take us to another page with options to customize our download or access link(s).\n\n\n\nFigure caption: Select granules and click download\n\n\nOn the next page click the Direct Download option and click the green Download Data on the bottom left side of the page.\n\n\n\nFigure caption: Direct download multiple granules\n\n\nWe’re now taked to the final page for instructions to download and links for data access in the cloud. 
You should see three tabs: Download Files, AWS S3 Access, Download Script:\n\n\n\nFigure caption: Download to local\n\n\n\n\n\nFigure caption: Direct S3 access\n\n\nThe Download Files tab provides the https:// links for downloading the files locally\nThe AWS S3 Access tab provides the S3:// links, which is what we would use to access the data directly in-region (us-west-2) within the AWS cloud."
"text": "Earthdata Search\nThis tutorial guides you through how to use Earthdata Search for NASA Earth observations search and discovery, and how to connect the search output (e.g. download or access links) to a programmatic workflow (locally or from within the cloud).\n\nStep 1. Go to Earthdata Search and Login\nGo to Earthdata Search https://search.earthdata.nasa.gov and use your Earthdata login credentials to log in. If you do not have an Earthdata account, please see the Workshop Prerequisites for guidance.\n\n\nStep 2. Search for dataset of interest\nUse the search box in the upper left to type key words. In this example we are interested in the GEDI or ICESAT-2 which is managed by the LP DAAC and made available from the NASA Earthdata Cloud archive hosted in AWS cloud.\nType GEDI in the search bar Click on the “Available from AWS Cloud” filter option on the left.\nWe can click on the (i) icon for the dataset to read more details, including the dataset shortname (helpful for programmatic workflows) just below the dataset name; here we are using a keyword GEDI.\n\n\nStep 3. Explore the dataset details, including Cloud Access information\nOnce we clicked the (i), scrolling down the info page for the dataset we will see Cloud Access information, such as:\n\nwhether the dataset is available in the cloud\n\nthe cloud Region (all NASA Earthdata Cloud data is/will be in us-west-2 region)\n\nthe S3 storage bucket and object prefix where this data is located\n\nlink that generates AWS S3 Credentials for in-cloud data access (we will cover this in the Direct Data Access Tutorials)\n\nlink to documentation describing the In-region Direct S3 Access to Buckets. Note: these will be unique depending on the DAAC where the data is archived. (We will show examples of direct in-region access in Tutorial 3.)\n\n\n\n\nFigure caption: Documentation describing the In-region Direct S3 Access to Buckets\n\n\nNote: Clicking on “For Developers” to exapnd will provide programmatic endpoints such as those for the CMR API, and more.\nFor now, let’s say we are intersted in getting download link(s) or access link(s) for specific data files (granules) within this collection.\nAt the top of the dataset info section, click on Search Results, which will take us back to the list of datasets matching our search parameters. Clicking on the dataset (ECOSTRESS ECO_L2T_LSTE) we now see a list of files (granules) that are part of the dataset (collection).\n\n\nStep 4a. Download or data access for a single granule\nTo download files for a granule click the download arrow on the card (or list row)\n\n\n\nFigure caption: Download granules\n\n\nYou can also get the S3 information (e.g., AWS region, bucket, temperary credentials for S3 access, and file names) by selecting the AWS S3 Access tab.\n\n\n\nFigure caption: S3 access for granules\n\n\n\nStep 4b. Download or data access for multiple granule\nTo download multiple granules, click on the green + symbol to add files to our project. Click on the green button towards the bottom that says “Download”. This will take us to another page with options to customize our download or access link(s).\n\n\n\nFigure caption: Select granules and click download\n\n\nOn the next page click the Direct Download option and click the green Download Data on the bottom left side of the page. 
In this page some datasets will provide a data customization service, this is really important if we don’t want to download the full record but just the variables and region of interest.\n\n\n\nFigure caption: Direct download multiple granules\n\n\nIf we pick the direct download option, we’ll be redirected to the final page for instructions to download and links for data access in the cloud. You should see three tabs: Download Files, AWS S3 Access, Download Script:\n\n\n\nFigure caption: Download to local\n\n\nIf we select the data customization service we’ll be able to subset by region of interest, variables and temporal parameters.\n\n\n\nFigure caption: Harmony subsetter service\n\n\nVariables can be selected too.\n\n\n\nFigure caption: Harmony subsetter service\n\n\nThe Download Files tab provides the https:// links for downloading the files locally\nThe AWS S3 Access tab provides the S3:// links, which is what we would use to access the data directly in-region (us-west-2) within the AWS cloud."
},
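The tutorial text above describes connecting Earthdata Search output (download or s3:// links) to a programmatic workflow. For reference, here is a minimal sketch of that same search-then-access flow using the earthaccess Python library covered elsewhere in these tutorials; it assumes an Earthdata Login account, and the shortname GEDI02_A is an assumption used purely for illustration (substitute the shortname shown under the dataset title in Earthdata Search).

```python
import earthaccess

# Authenticate with Earthdata Login (reads ~/.netrc if present, otherwise prompts).
auth = earthaccess.login()

# Search the collection for a few cloud-hosted granules.
# "GEDI02_A" is an assumed shortname for illustration only.
results = earthaccess.search_data(
    short_name="GEDI02_A",
    cloud_hosted=True,
    count=5,
)

# Option A: download the files locally
# (equivalent to the https:// links on the "Download Files" tab).
files = earthaccess.download(results, local_path="./data")

# Option B: open the granules directly in-region via their s3:// links
# (equivalent to the "AWS S3 Access" tab; requires compute in us-west-2).
file_objects = earthaccess.open(results)
```

Option A mirrors the Download Files tab and works anywhere; Option B mirrors the AWS S3 Access tab and only makes sense from compute running in the AWS us-west-2 region, since the temporary S3 credentials are restricted to in-region access.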
{
"objectID": "tutorials/index.html",
@@ -193,7 +193,7 @@
"href": "tutorials/jupyterhub_demo/jupyterhub_demo.html",
"title": "Demo JupyterHub",
"section": "",
"text": "Author: Tasha Snow\nnacunmsidsvo Learning Objectives - **Learn how to access and use the Openscapes JupyterHub** - **Open the JupyterHub and clone the Openscapes Espacio and Sostenibilidad Colloquium repository**"
"text": "Author: Tasha Snow\nbryoakxxlfos Learning Objectives - **Learn how to access and use the Openscapes JupyterHub** - **Open the JupyterHub and clone the Openscapes Espacio and Sostenibilidad Colloquium repository**"
},
{
"objectID": "tutorials/jupyterhub_demo/jupyterhub_demo.html#access-the-cryocloud-powerpoint-whenever-you-need-to-reference-it",
@@ -207,7 +207,7 @@
"href": "tutorials/jupyterhub_demo/jupyterhub_demo.html#open-cryocloud",
"title": "Demo JupyterHub",
"section": "Open CryoCloud",
"text": "Open CryoCloud\n\nScroll through the server sizes. Stick with the 3.7Gb server (the default).\n\n```nacunmsidsvo Tip Be realistic about the max memory you will need. The amount you select, you are guaranteed, but if you use more you risk crashing your server for you and anyone else who is sharing with you. If you crash the server, it just requires logging out and reopening it, but it could be annoying for everyone.\nCheck your memory usage at the bottom in the middle of the screen.\n\n2) Choose the Python programming language.\n\n3) Sit back and learn about each of the tools!\n - JupyterHub options and viewing setup\n - Github\n - Virtual Linux desktop\n - SyncThing\n - Viewing and editing of different files\n\nNow after the demo...\n\n## Task: Clone the Espacio and Sostenibilidad Colloquium jupyterbook\n\nWe will import the [NASA Openscapes Espacio and Sostenibilidad Colloquium Github repository](https://github.com/NASA-Openscapes/2023-ssc.git).\n\nTo do this: \n1. Select the plus (`+`) sign above the `File Browser` to the left, which will bring up a `Launcher` window. \n\n2. Click the `terminal` button under Other to open it. This is your command line like you would have on any computer. \n\nBefore cloning the repo, you have the option to switch to another file folder using the _change directory_ terminal command: `cd folder` if you do not want the Hackweek repo in your current directory (you can check which directory you are currently in using _print working directory_ command: `pwd`).\ncd yourfoldername\n\n3. Now clone the hackweek code into your current directory: \ngit clone https://github.com/NASA-Openscapes/2023-ssc.git\n\n4. You will see the folder pop into your `File Browser` on the left if you have the current directory open. Click on the folder to navigate through the files. \n\n5. To open this tutorial, click on the `book` subdirectory > `tutorials` > `jupyterhub_demo` > and double click on `jupyterhub_demo`. This should open up this tutorial in case you want to review it in the future. \n\n## Shutting down your JupyterHub\n\n```{admonition} TIP\n**Best Practice: Shut down the Openscapes server when you are done to save us money.**\n\n**If you only close your tab or click log out, your server will continue running for 90 minutes.**\nWhenever you are done, it is best to shut down your server when you sign out to save money. Time on the JupyterHub costs money and there are systems in place to make sure your server doesn’t run indefinitely if you forget about it. After 90 minutes of no use, it will shut down. We prefer you shut down the server when so we save that 90 minutes of computing cost. To do so:\n\nIn upper left, click on File > Hub Control Panel, which will open another tab\nClick the Stop Server button. Once this button disappears after you clicked it, your server is off.\nClick Log Out in the top right of your screen and you will be logged out, or you can start a new server\nYou can now close this tab and the other tab where you were just working"
"text": "Open CryoCloud\n\nScroll through the server sizes. Stick with the 3.7Gb server (the default).\n\n```bryoakxxlfos Tip Be realistic about the max memory you will need. The amount you select, you are guaranteed, but if you use more you risk crashing your server for you and anyone else who is sharing with you. If you crash the server, it just requires logging out and reopening it, but it could be annoying for everyone.\nCheck your memory usage at the bottom in the middle of the screen.\n\n2) Choose the Python programming language.\n\n3) Sit back and learn about each of the tools!\n - JupyterHub options and viewing setup\n - Github\n - Virtual Linux desktop\n - SyncThing\n - Viewing and editing of different files\n\nNow after the demo...\n\n## Task: Clone the Espacio and Sostenibilidad Colloquium jupyterbook\n\nWe will import the [NASA Openscapes Espacio and Sostenibilidad Colloquium Github repository](https://github.com/NASA-Openscapes/2023-ssc.git).\n\nTo do this: \n1. Select the plus (`+`) sign above the `File Browser` to the left, which will bring up a `Launcher` window. \n\n2. Click the `terminal` button under Other to open it. This is your command line like you would have on any computer. \n\nBefore cloning the repo, you have the option to switch to another file folder using the _change directory_ terminal command: `cd folder` if you do not want the Hackweek repo in your current directory (you can check which directory you are currently in using _print working directory_ command: `pwd`).\ncd yourfoldername\n\n3. Now clone the hackweek code into your current directory: \ngit clone https://github.com/NASA-Openscapes/2023-ssc.git\n\n4. You will see the folder pop into your `File Browser` on the left if you have the current directory open. Click on the folder to navigate through the files. \n\n5. To open this tutorial, click on the `book` subdirectory > `tutorials` > `jupyterhub_demo` > and double click on `jupyterhub_demo`. This should open up this tutorial in case you want to review it in the future. \n\n## Shutting down your JupyterHub\n\n```{admonition} TIP\n**Best Practice: Shut down the Openscapes server when you are done to save us money.**\n\n**If you only close your tab or click log out, your server will continue running for 90 minutes.**\nWhenever you are done, it is best to shut down your server when you sign out to save money. Time on the JupyterHub costs money and there are systems in place to make sure your server doesn’t run indefinitely if you forget about it. After 90 minutes of no use, it will shut down. We prefer you shut down the server when so we save that 90 minutes of computing cost. To do so:\n\nIn upper left, click on File > Hub Control Panel, which will open another tab\nClick the Stop Server button. Once this button disappears after you clicked it, your server is off.\nClick Log Out in the top right of your screen and you will be logged out, or you can start a new server\nYou can now close this tab and the other tab where you were just working"
},
{
"objectID": "tutorials/jupyterhub_demo/jupyterhub_demo.html#summary",
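One more programmatic note on the Earthdata Search tutorial above: the “For Developers” section it mentions exposes CMR API endpoints. As a rough sketch, assuming the public CMR endpoint and, again, the illustrative shortname GEDI02_A, granule metadata with its download and access links can be queried directly:

```python
import requests

# Query NASA's Common Metadata Repository (CMR) for granule metadata.
# "GEDI02_A" is an assumed shortname used purely for illustration.
resp = requests.get(
    "https://cmr.earthdata.nasa.gov/search/granules.json",
    params={"short_name": "GEDI02_A", "page_size": 5},
)
resp.raise_for_status()

for entry in resp.json()["feed"]["entry"]:
    # Each entry carries a title and a list of related links, among them
    # https:// download URLs and (for cloud-hosted data) s3:// URIs.
    print(entry["title"])
    for link in entry.get("links", []):
        print("  ", link.get("href"))
```

This is the same metadata that backs the Download Files and AWS S3 Access tabs; Earthdata Search is, in essence, a UI over these CMR queries.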
40 changes: 20 additions & 20 deletions sitemap.xml
@@ -2,82 +2,82 @@
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/index.html</loc>
-<lastmod>2023-11-16T10:06:04.353Z</lastmod>
+<lastmod>2023-11-16T11:03:20.913Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/how-tos/authentication/NASA_Earthdata_Authentication.html</loc>
-<lastmod>2023-11-16T10:06:03.633Z</lastmod>
+<lastmod>2023-11-16T11:03:20.249Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/how-tos/data-access/Intro_xarray_hvplot.html</loc>
-<lastmod>2023-11-16T10:06:02.117Z</lastmod>
+<lastmod>2023-11-16T11:03:19.045Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/data-access/icepyx.html</loc>
-<lastmod>2023-11-16T10:06:01.125Z</lastmod>
+<lastmod>2023-11-16T11:03:17.833Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/data-access/earthdata-search.html</loc>
-<lastmod>2023-11-16T10:06:00.153Z</lastmod>
+<lastmod>2023-11-16T11:03:16.705Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/index.html</loc>
-<lastmod>2023-11-16T10:05:59.013Z</lastmod>
+<lastmod>2023-11-16T11:03:15.569Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/schedule.html</loc>
-<lastmod>2023-11-16T10:05:58.261Z</lastmod>
+<lastmod>2023-11-16T11:03:14.833Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/cloud/cloud-paradigm.html</loc>
-<lastmod>2023-11-16T10:05:57.569Z</lastmod>
+<lastmod>2023-11-16T11:03:14.181Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/GEDI_data_SSC23.html</loc>
-<lastmod>2023-11-16T10:05:56.633Z</lastmod>
+<lastmod>2023-11-16T11:03:13.377Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/further-resources.html</loc>
-<lastmod>2023-11-16T10:05:54.277Z</lastmod>
+<lastmod>2023-11-16T11:03:10.913Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/prerequisites.html</loc>
-<lastmod>2023-11-16T10:05:53.321Z</lastmod>
+<lastmod>2023-11-16T11:03:09.977Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/jupyterhub_demo/jupyterhub_demo.html</loc>
-<lastmod>2023-11-16T10:05:54.633Z</lastmod>
+<lastmod>2023-11-16T11:03:11.273Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/science/Intro_xarray_hvplot.html</loc>
-<lastmod>2023-11-16T10:05:57.117Z</lastmod>
+<lastmod>2023-11-16T11:03:13.889Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/cloud/index.html</loc>
-<lastmod>2023-11-16T10:05:57.893Z</lastmod>
+<lastmod>2023-11-16T11:03:14.461Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/setup.html</loc>
-<lastmod>2023-11-16T10:05:58.701Z</lastmod>
+<lastmod>2023-11-16T11:03:15.277Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/data-access/index.html</loc>
-<lastmod>2023-11-16T10:05:59.753Z</lastmod>
+<lastmod>2023-11-16T11:03:16.305Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/tutorials/data-access/earthaccess.html</loc>
-<lastmod>2023-11-16T10:06:00.845Z</lastmod>
+<lastmod>2023-11-16T11:03:17.397Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/how-tos/data-access/Earthdata_Cloud__Single_File__Direct_S3_Access_Clip_COG_Example.html</loc>
-<lastmod>2023-11-16T10:06:01.633Z</lastmod>
+<lastmod>2023-11-16T11:03:18.545Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/how-tos/data-discovery/Data_Discovery_CMR_API.html</loc>
-<lastmod>2023-11-16T10:06:02.781Z</lastmod>
+<lastmod>2023-11-16T11:03:19.697Z</lastmod>
</url>
<url>
<loc>https://nasa-openscapes.github.io/2023-ssc/how-tos/authentication/NASA_Earthdata_Login_Token.html</loc>
-<lastmod>2023-11-16T10:06:04.033Z</lastmod>
+<lastmod>2023-11-16T11:03:20.601Z</lastmod>
</url>
</urlset>
