+```
+
+* Requires `models_dnn/` or `models_xgb/` folder in the root directory containing the pre-trained models for DNN and XGBoost, respectively.
+* In a `preds_dnn` or `preds_xgb` directory, creates a single `.parquet` (and optionally `.csv`) file containing all IDs of the field as rows and inference scores for the different classes as columns.
+* If running inference on specific ids instead of a field/ccd/quad (e.g. on GCN sources), run `./get_all_preds.sh specific_ids`
+
+## Handling different file formats
+When our manipulations of `pandas` dataframes are complete, we want to save the results in an appropriate file format with the desired metadata. Our code works with multiple formats, each of which has advantages and drawbacks:
+
+- Comma Separated Values (CSV, .csv): in this format, data are plain text and columns are separated by commas. While this format offers a high level of human readability, it also takes more space to store and a longer time to write and read than other formats.
+
+ `pandas` offers the `read_csv()` function and `to_csv()` method to perform I/O operations with this format. Metadata must be included as plain text in the file.
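+
+  As a minimal sketch of one convention (illustrative, not a SCoPe standard), metadata can be written as commented header lines that `read_csv()` is told to skip:
+
+  ```python
+  import pandas as pd
+
+  df = pd.DataFrame({"id": [1, 2], "score": [0.9, 0.1]})
+
+  # Write commented metadata lines, then append the dataframe
+  with open("preds.csv", "w") as f:
+      f.write("# scope_version: 0.9\n")
+      df.to_csv(f, index=False)
+
+  # comment="#" drops the metadata lines on read
+  df_back = pd.read_csv("preds.csv", comment="#")
+  ```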
+
+- Hierarchical Data Format (HDF5, .h5): this format stores data in binary form, so it is not human-readable. It takes up less space on disk than CSV files, and it writes/reads faster for numerical data. HDF5 does not serialize data columns containing structures like a `numpy` array, so file size improvements over CSV can be diminished if these structures exist in the data.
+
+ `pandas` includes `read_hdf()` and `to_hdf()` to handle this format, and they require a package like [`PyTables`](https://www.pytables.org/) to work. `pandas` does not currently support the reading and writing of metadata using the above function and method. See `scope/utils.py` for code that handles metadata in HDF5 files.
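+
+  As an illustration of the general approach (a minimal sketch, not the exact implementation in `scope/utils.py`), metadata can be attached to the stored node's attributes via `pandas.HDFStore`:
+
+  ```python
+  import pandas as pd
+
+  df = pd.DataFrame({"id": [1, 2], "score": [0.9, 0.1]})
+
+  # Store the dataframe, then attach metadata to the node's attributes
+  with pd.HDFStore("preds.h5") as store:
+      store.put("df", df)
+      store.get_storer("df").attrs.metadata = {"scope_version": "0.9"}
+
+  # Read both back
+  with pd.HDFStore("preds.h5") as store:
+      df_back = store["df"]
+      metadata = store.get_storer("df").attrs.metadata
+  ```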
+
+- Apache Parquet (.parquet): this format stores data in binary form like HDF5, so it is not human-readable. Like HDF5, Parquet also offers significant disk space savings over CSV. Unlike HDF5, Parquet supports structures like `numpy` arrays in data columns.
+
+ While `pandas` offers `read_parquet()` and `to_parquet()` to support this format (requiring e.g. [`PyArrow`](https://arrow.apache.org/docs/python/) to work), these again do not support the reading and writing of metadata associated with the dataframe. See `scope/utils.py` for code that reads and writes metadata in Parquet files.
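+
+  As a minimal sketch of the same idea (again, see `scope/utils.py` for the actual implementation), custom metadata can be attached to the Parquet schema metadata with `PyArrow`:
+
+  ```python
+  import json
+
+  import pandas as pd
+  import pyarrow as pa
+  import pyarrow.parquet as pq
+
+  df = pd.DataFrame({"id": [1, 2], "score": [0.9, 0.1]})
+  metadata = {"scope_version": "0.9"}
+
+  # Merge custom metadata into the schema metadata pandas already wrote
+  table = pa.Table.from_pandas(df)
+  merged = {**(table.schema.metadata or {}), b"scope": json.dumps(metadata).encode()}
+  pq.write_table(table.replace_schema_metadata(merged), "preds.parquet")
+
+  # Read both back
+  restored = pq.read_table("preds.parquet")
+  df_back = restored.to_pandas()
+  metadata_back = json.loads(restored.schema.metadata[b"scope"])
+  ```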
+
+## Mapping between column names and Fritz taxonomies
+The column names of training set files and Fritz taxonomy classifications are not the same by default. Training sets may also contain columns that are not meant to be uploaded to Fritz. To address both of these issues, we use a 'taxonomy mapper' file to connect local data and Fritz taxonomies.
+
+This file must currently be generated manually, entry by entry. Each entry's key corresponds to a column name in the local file. The set of all keys is used to establish the columns of interest for upload or download. For example, if the training set includes columns that are not classifications, like RA and Dec, these columns should not be included among the entries in the mapper file. The code will then ignore these columns for the purpose of classification.
+
+The fields associated with each key are `fritz_label` (containing the associated Fritz classification name) and `taxonomy_id` identifying the classification's taxonomy system. The mapper must have the following format, also demonstrated in `golden_dataset_mapper.json` and `DNN_AL_mapper.json`:
+
+```
+{
+  "variable": {
+    "fritz_label": "variable",
+    "taxonomy_id": 1012
+  },
+
+  "periodic": {
+    "fritz_label": "periodic",
+    "taxonomy_id": 1012
+  },
+
+  ... [add more entries here]
+
+  "CV": {
+    "fritz_label": "Cataclysmic",
+    "taxonomy_id": 1011
+  }
+}
+```
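+
+To illustrate how such a mapper connects local columns to Fritz classifications (a hypothetical sketch; the file and column names here are assumptions), the keys select the classification columns and each entry supplies the Fritz label and taxonomy:
+
+```python
+import json
+
+import pandas as pd
+
+with open("golden_dataset_mapper.json") as f:
+    mapper = json.load(f)
+
+df = pd.read_parquet("training_set.parquet")
+
+# Only columns that appear as mapper keys are treated as classifications;
+# columns like RA and Dec are ignored automatically.
+for column in (c for c in df.columns if c in mapper):
+    fritz_label = mapper[column]["fritz_label"]
+    taxonomy_id = mapper[column]["taxonomy_id"]
+    print(f"{column} -> {fritz_label} (taxonomy {taxonomy_id})")
+```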
+
+## Generating features
+Code has been adapted from [ztfperiodic](https://github.com/mcoughlin/ztfperiodic) and other sources to calculate basic and Fourier stats for light curves along with other features. This allows new features to be generated with SCoPe, both locally and using GPU cluster resources. The feature generation script is run using the `generate-features` command.
+
+Currently, the basic stats are calculated via `tools/featureGeneration/lcstats.py`, and a host of period-finding algorithms are available in `tools/featureGeneration/periodsearch.py`. Among the CPU-based period-finding algorithms, there is not yet support for `AOV_cython`. For the `AOV` algorithm to work, run `source build.sh` in the `tools/featureGeneration/pyaov/` directory, then copy the newly created `.so` file (`aov.cpython-310-darwin.so` or similar) to `lib/python3.10/site-packages/` or equivalent within your environment. The GPU-based algorithms require CUDA support (so Mac GPUs are not supported).
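+
+The build-and-copy steps above look like the following in practice (a sketch; the paths and the `.so` filename will vary with your platform and Python version):
+
+```bash
+cd tools/featureGeneration/pyaov/
+source build.sh
+
+# Copy the built extension into your environment's site-packages
+cp aov.cpython-310-darwin.so "$(python -c 'import site; print(site.getsitepackages()[0])')"
+```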
+
+inputs:
+1. --source-catalog* : name of Kowalski catalog containing ZTF sources (str)
+2. --alerts-catalog* : name of Kowalski catalog containing ZTF alerts (str)
+3. --gaia-catalog* : name of Kowalski catalog containing Gaia data (str)
+4. --bright-star-query-radius-arcsec : maximum angular distance from ZTF sources to query nearby bright stars in Gaia (float)
+5. --xmatch-radius-arcsec : maximum angular distance from ZTF sources to match external catalog sources (float)
+6. --limit : maximum number of sources to process in batch queries / statistics calculations (int)
+7. --period-algorithms* : period algorithms to run, normally specified in the config. If provided here, pass a list of algorithm names (list)
+8. --period-batch-size : maximum number of sources to simultaneously perform period finding (int)
+9. --doCPU : flag to run config-specified CPU period algorithms (bool)
+10. --doGPU : flag to run config-specified GPU period algorithms (bool)
+11. --samples-per-peak : number of samples per periodogram peak (int)
+12. --doScaleMinPeriod : for period finding, scale min period based on min-cadence-minutes (bool). Otherwise, set --max-freq to desired value
+13. --doRemoveTerrestrial : remove terrestrial frequencies from period-finding analysis (bool)
+14. --Ncore : number of CPU cores to parallelize queries (int)
+15. --field : ZTF field to run (int)
+16. --ccd : ZTF ccd to run (int)
+17. --quad : ZTF quadrant to run (int)
+18. --min-n-lc-points : minimum number of points required to generate features for a light curve (int)
+19. --min-cadence-minutes : minimum cadence between light curve points. Higher-cadence data are dropped except for the first point in the sequence (float)
+20. --dirname : name of generated feature directory (str)
+21. --filename : prefix of each feature filename (str)
+22. --doCesium : flag to compute config-specified cesium features in addition to default list (bool)
+23. --doNotSave : flag to avoid saving generated features (bool)
+24. --stop-early : flag to stop feature generation before entire quadrant is run. Pair with --limit to run small-scale tests (bool)
+25. --doQuadrantFile : flag to use a generated file containing [jobID, field, ccd, quad] columns instead of specifying --field, --ccd and --quad (bool)
+26. --quadrant-file : name of quadrant file in the generated_features/slurm directory or equivalent (str)
+27. --quadrant-index : number of job in quadrant file to run (int)
+28. --doSpecificIDs: flag to perform feature generation for ztf_id column in config-specified file (bool)
+29. --skipCloseSources: flag to skip removal of sources too close to bright stars via Gaia (bool)
+30. --top-n-periods: number of (E)LS, (E)CE periods to pass to (E)AOV if using (E)LS_(E)CE_(E)AOV algorithm (int)
+31. --max-freq: maximum frequency [1 / days] to use for period finding (float). Overridden by --doScaleMinPeriod
+32. --fg-dataset*: path to parquet, hdf5 or csv file containing specific sources for feature generation (str)
+33. --max-timestamp-hjd*: maximum timestamp of queried light curves, HJD (float)
+
+output:
+feature_df : dataframe containing generated features
+
+\* - specified in config.yaml
+
+### Example usage
+The following is an example of running the feature generation script locally:
+
+```
+generate-features --field 301 --ccd 2 --quad 4 --source-catalog ZTF_sources_20230109 --alerts-catalog ZTF_alerts --gaia-catalog Gaia_EDR3 --bright-star-query-radius-arcsec 300.0 --xmatch-radius-arcsec 2.0 --query-size-limit 10000 --period-batch-size 1000 --samples-per-peak 10 --Ncore 4 --min-n-lc-points 50 --min-cadence-minutes 30.0 --dirname generated_features --filename gen_features --doCPU --doRemoveTerrestrial --doCesium
+```
+
+Setting `--doCPU` will run the config-specified CPU period algorithms on each source. Setting `--doGPU` instead will do likewise with the specified GPU algorithms. If neither of these keywords is set, the code will assign a value of `1.0` to each period and compute Fourier statistics using that number.
+
+Below is an example of running the script using a job/quadrant file (containing [job id, field, ccd, quad] columns) instead of specifying field/ccd/quad directly:
+
+```
+generate-features --source-catalog ZTF_sources_20230109 --alerts-catalog ZTF_alerts --gaia-catalog Gaia_EDR3 --bright-star-query-radius-arcsec 300.0 --xmatch-radius-arcsec 2.0 --query-size-limit 10000 --period-batch-size 1000 --samples-per-peak 10 --Ncore 20 --min-n-lc-points 50 --min-cadence-minutes 30.0 --dirname generated_features_DR15 --filename gen_features --doGPU --doRemoveTerrestrial --doCesium --doQuadrantFile --quadrant-file slurm.dat --quadrant-index 5738
+```
+
+### Slurm scripts
+For large-scale feature generation, `generate-features` is intended to be run on a high-performance computing cluster. These clusters often require jobs to be scheduled with a utility like `slurm` (Simple Linux Utility for Resource Management) using submission scripts. These scripts specify the type, amount and duration of computing resources to allocate to the user.
+
+SCoPe's `generate-features-slurm` code creates two slurm scripts: (1) runs a single instance of `generate-features`, and (2) runs `generate-features-job-submission`, which submits multiple jobs in parallel, periodically checking whether additional jobs can be started. See below for more information about these components of feature generation.
+
+`generate-features-slurm` can receive all of the arguments used by `generate-features`. These arguments are passed to the instances of feature generation begun by running slurm script (1). There are also additional arguments specific to cluster resource management:
+
+inputs:
+1. --job-name : name of submitted jobs (str)
+2. --cluster-name : name of HPC cluster (str)
+3. --partition-type : cluster partition to use (str)
+4. --nodes : number of nodes to request (int)
+5. --gpus : number of GPUs to request (int)
+6. --memory-GB : amount of memory to request in GB (int)
+7. --submit-memory-GB : Memory allocation to request for job submission (int)
+8. --time : amount of time before instance times out (str)
+9. --mail-user: user's email address for job updates (str)
+10. --account-name : name of account having HPC allocation (str)
+11. --python-env-name : name of Python environment to activate before running `generate_features.py` (str)
+12. --generateQuadrantFile : flag to map fields/ccds/quads containing sources to job numbers, save file (bool)
+13. --field-list : space-separated list of fields for which to generate quadrant file. If None, all populated fields included (int)
+14. --max-instances : maximum number of HPC instances to run in parallel (int)
+15. --wait-time-minutes : amount of time to wait between status checks in minutes (float)
+16. --doSubmitLoop : flag to run loop initiating instances until out of jobs (hard on Kowalski)
+17. --runParallel : flag to run jobs in parallel using slurm [recommended]. Otherwise, run in series on a single instance
+18. --user : if using slurm, your username. This will be used to periodically run `squeue` and list your running jobs (str)
+19. --submit-interval-minutes : Time to wait between job submissions, minutes (float)
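+
+For orientation, these options translate into standard `slurm` directives roughly like the following (an illustrative sketch with hypothetical values, not the exact script SCoPe generates):
+
+```bash
+#!/bin/bash
+#SBATCH --job-name=ztf_fg          # --job-name
+#SBATCH --partition=gpu            # --partition-type
+#SBATCH --nodes=1                  # --nodes
+#SBATCH --gpus=1                   # --gpus
+#SBATCH --mem=64G                  # --memory-GB
+#SBATCH --time=48:00:00            # --time
+#SBATCH --mail-user=user@example.com
+#SBATCH --account=example_account  # --account-name
+
+# Activate the environment named by --python-env-name, then run one job
+source activate scope-env
+generate-features --doGPU --doQuadrantFile --quadrant-file slurm.dat --quadrant-index 5738
+```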
+
+## Feature definitions
+### Selected phenomenological feature definitions
+
+| name | definition |
+| ---- | ---------- |
+|ad | Anderson-Darling statistic |
+|chi2red | Reduced chi^2 after mean subtraction |
+|f1_BIC | Bayesian information criterion of best-fitting series (Fourier analysis) |
+|f1_a | a coefficient of best-fitting series (Fourier analysis) |
+|f1_amp | Amplitude of best-fitting series (Fourier analysis) |
+|f1_b | b coefficient of best-fitting series (Fourier analysis) |
+|f1_phi0 | Zero-phase of best-fitting series (Fourier analysis) |
+|f1_power | Normalized chi^2 of best-fitting series (Fourier analysis) |
+|f1_relamp1 | Relative amplitude, first harmonic (Fourier analysis) |
+|f1_relamp2 | Relative amplitude, second harmonic (Fourier analysis) |
+|f1_relamp3 | Relative amplitude, third harmonic (Fourier analysis) |
+|f1_relamp4 | Relative amplitude, fourth harmonic (Fourier analysis) |
+|f1_relphi1 | Relative phase, first harmonic (Fourier analysis) |
+|f1_relphi2 | Relative phase, second harmonic (Fourier analysis) |
+|f1_relphi3 | Relative phase, third harmonic (Fourier analysis) |
+|f1_relphi4 | Relative phase, fourth harmonic (Fourier analysis) |
+|i60r | Mag ratio between 20th, 80th percentiles |
+|i70r | Mag ratio between 15th, 85th percentiles |
+|i80r | Mag ratio between 10th, 90th percentiles |
+|i90r | Mag ratio between 5th, 95th percentiles |
+|inv_vonneumannratio | Inverse of Von Neumann ratio |
+|iqr | Mag ratio between 25th, 75th percentiles |
+|median | Median magnitude |
+|median_abs_dev | Median absolute deviation of magnitudes |
+|norm_excess_var | Normalized excess variance |
+|norm_peak_to_peak_amp | Normalized peak-to-peak amplitude |
+|roms | Root of mean magnitudes squared |
+|skew | Skew of magnitudes |
+|smallkurt | Kurtosis of magnitudes |
+|stetson_j | Stetson J coefficient |
+|stetson_k | Stetson K coefficient |
+|sw | Shapiro-Wilk statistic |
+|welch_i | Welch I statistic |
+|wmean | Weighted mean of magnitudes |
+|wstd | Weighted standard deviation of magnitudes |
+|dmdt | Magnitude-time histograms (26x26) |
+
+### Selected ontological feature definitions
+| name | definition |
+| ---- | ---------- |
+| mean_ztf_alert_braai | Mean significance of ZTF alerts for this source |
+| n_ztf_alerts | Number of ZTF alerts for this source |
+| period | Period determined by subscripted algorithms (e.g. ELS_ECE_EAOV) |
+| significance | Significance of period |
+| AllWISE_w1mpro | AllWISE W1 mag |
+| AllWISE_w1sigmpro | AllWISE W1 mag error |
+| AllWISE_w2mpro | AllWISE W2 mag |
+| AllWISE_w2sigmpro | AllWISE W2 mag error |
+| AllWISE_w3mpro | AllWISE W3 mag |
+| AllWISE_w4mpro | AllWISE W4 mag |
+| Gaia_EDR3__parallax | Gaia parallax |
+| Gaia_EDR3__parallax_error | Gaia parallax error |
+| Gaia_EDR3__phot_bp_mean_mag | Gaia BP mag |
+| Gaia_EDR3__phot_bp_rp_excess_factor | Gaia BP-RP excess factor |
+| Gaia_EDR3__phot_g_mean_mag | Gaia G mag |
+| Gaia_EDR3__phot_rp_mean_mag | Gaia RP mag |
+| PS1_DR1__gMeanPSFMag | PS1 g mag |
+| PS1_DR1__gMeanPSFMagErr | PS1 g mag error |
+| PS1_DR1__rMeanPSFMag | PS1 r mag |
+| PS1_DR1__rMeanPSFMagErr | PS1 r mag error |
+| PS1_DR1__iMeanPSFMag | PS1 i mag |
+| PS1_DR1__iMeanPSFMagErr | PS1 i mag error |
+| PS1_DR1__zMeanPSFMag | PS1 z mag |
+| PS1_DR1__zMeanPSFMagErr | PS1 z mag error |
+| PS1_DR1__yMeanPSFMag | PS1 y mag |
+| PS1_DR1__yMeanPSFMagErr | PS1 y mag error |
+
+## Running automated analyses
+The primary deliverable of SCoPe is a catalog of variable source classifications across all of ZTF. Since ZTF contains billions of light curves, this catalog requires significant compute resources to assemble. We may still want to study ZTF's expansive collection of data with SCoPe before the classification catalog is complete. For example, SCoPe classifiers can be applied to the realm of transient follow-up.
+
+It is useful to know the classifications of any persistent ZTF sources that are close to transient candidates on the sky. Once SCoPe's primary deliverable is complete, obtaining these classifications will involve a straightforward database query. Presently, however, we must run the SCoPe workflow on a custom list of sources repeatedly to account for the rapidly changing landscape of transient events. See "Guide for Fritz Scanners" for a more detailed explanation of the workflow itself. This section continues with a discussion of how the automated analysis in `gcn_cronjob.py` is implemented using `cron`.
+
+### `cron` job basics
+`cron` runs scripts at specific time intervals in a simple environment. While this simplicity fosters compatibility between different operating systems, the trade-off is that some extra steps are required to run scripts compared to more familiar coding environments (e.g. within `scope-env` for this project).
+
+To set up a `cron` job, first run `EDITOR=emacs crontab -e`. You can replace `emacs` with your text editor of choice as long as it is installed on your machine. This command will open a text file in which to place `cron` commands. An example command is as follows:
+```bash
+0 */2 * * * cd scope && ~/miniforge3/envs/scope-env/bin/python ~/scope/gcn_cronjob.py > ~/scope/log_gcn_cronjob.txt 2>&1
+
+```
+Above, the `0 */2 * * *` means that this command will run every two hours, on minute 0 of that hour. Time increments increase from left to right; in this example, the five numbers are minute, hour, day (of month), month, day (of week). The `*/2` means that the hour has to be divisible by 2 for the job to run. Check out [crontab.guru](https://crontab.guru) to learn more about `cron` timing syntax.
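+
+A few more timing patterns for reference (fields are minute, hour, day-of-month, month, day-of-week):
+
+```
+0 */2 * * *     every 2 hours, at minute 0
+30 4 * * 1      04:30 every Monday
+*/10 * * * *    every 10 minutes
+0 0 1 * *       midnight on the 1st of each month
+```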
+
+Next in the line, we change directories to `scope` in order for the code to access our `config.yaml` file located in this directory. Then, `~/miniforge3/envs/scope-env/bin/python ~/scope/gcn_cronjob.py` is the command that gets run (using the Python environment installed in `scope-env`). The `>` character forwards the command's standard output (e.g. what your script prints) into a log file in a specific location (here `~/scope/log_gcn_cronjob.txt`). Finally, `2>&1` redirects error output (stderr) to the same log file; since no output is left over, this also suppresses the status 'emails' `cron` would otherwise send about your job.
+
+Save the text file once you finish modifying it to install the cron job. **Ensure that the last line of your file is a newline to avoid issues when running.** Your computer may pop up a window to which you should respond in the affirmative in order to successfully initialize the job. To check which `cron` jobs have been installed, run `crontab -l`. To uninstall your jobs, run `crontab -r`.
+
+### Additional details for `cron` environment
+Because `cron` runs in a simple environment, the usual details of environment setup and paths cannot be overlooked. In order for the above job to work, we need to add more information when we run `EDITOR=emacs crontab -e`. The lines below will produce a successful run (if SCoPe is installed in your home directory):
+
+```
+PYTHONPATH = /Users/username/scope
+
+0 */2 * * * /opt/homebrew/bin/gtimeout 2h ~/miniforge3/envs/scope-env/bin/python scope-gcn-cronjob > ~/scope/log_gcn_cronjob.txt 2>&1
+
+```
+In the first line above, the `PYTHONPATH` environment variable is defined to include the `scope` directory. Without this line, any code that imports from `scope` will throw an error, since the user's usual `PYTHONPATH` variable is not accessed in the `cron` environment.
+
+The second line begins with the familiar `cron` timing pattern described above. It continues by specifying a maximum runtime of 2 hours before timing out using the `gtimeout` command. On a Mac, this can be installed with `homebrew` by running `brew install coreutils`. Note that the full path to `gtimeout` must be specified. After the timeout comes the call to the `gcn_cronjob.py` script. Note that the usual `#!/usr/bin/env python` line at the top of SCoPe's Python scripts does not work within the `cron` environment. Instead, `python` must be explicitly specified, and in order to have access to the modules and scripts installed in `scope-env` we must provide a full path like the one above (`~/miniforge3/envs/scope-env/bin/python`). The line concludes by sending the script's output to a dedicated log file. This file gets overwritten each time the script runs.
+
+### Check if `cron` job is running
+It can be useful to know whether the script within a cron job is currently running. One way to do this for `gcn_cronjob.py` is to run the command `ps aux | grep gcn_cronjob.py`. This will always return one item (representing the command you just ran), but if the script is currently running you will see more than one item.
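+
+A standard shell refinement (not SCoPe-specific) avoids the self-match entirely by writing the pattern so it cannot match the `grep` command itself:
+
+```sh
+ps aux | grep '[g]cn_cronjob.py'
+```
+
+With this pattern, any output at all means the script is currently running.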
+
+## Local feature generation/inference
+SCoPe contains a script that runs local feature generation and inference on sources specified in an input file. Example input files are contained within the `tools` directory (`local_scope_radec.csv` and `local_scope_ztfid.csv`). After receiving either ra/dec coordinates or ZTF light curve IDs (plus an object ID for each entry), the `run-scope-local` script will generate features and run inference using existing trained models, saving the results to timestamped directories. This script accepts most arguments from `generate-features` and `scope-inference`. Additional inputs specific to this script are listed below.
+
+inputs:
+1. --path-dataset : path (from base scope directory or fully qualified) to parquet, hdf5 or csv file containing specific sources (str)
+2. --cone-radius-arcsec : radius of cone search query for ZTF lightcurve IDs, if inputting ra/dec (float)
+3. --save-sources-filepath : path to parquet, hdf5 or csv file to save specific sources (str)
+4. --algorithms : ML algorithms to run (currently dnn/xgb)
+5. --group-names : group names of trained models (with order corresponding to --algorithms input)
+
+output:
+current_dt : formatted datetime string used to label output directories
+
+### Example usage
+```
+run-scope-local --path-dataset tools/local_scope_ztfid.csv --doCPU --doRemoveTerrestrial --scale_features min_max --group-names DR16_stats nobalance_DR16_DNN_stats --algorithms xgb
+
+run-scope-local --path-dataset tools/local_scope_radec.csv --doCPU --write_csv --doRemoveTerrestrial --group-names DR16_stats nobalance_DR16_DNN_stats --algorithms xgb dnn
+```
+
+## scope-download-classification
+inputs:
+1. --file : CSV file containing obj_id and/or ra dec coordinates. Set to "parse" to download sources by group id.
+2. --group-ids : target group id(s) on Fritz for download, space-separated (if CSV file not provided)
+3. --start : Index or page number (if in "parse" mode) to begin downloading (optional)
+4. --merge-features : Flag to merge features from Kowalski with downloaded sources
+5. --features-catalog : Name of features catalog to query
+6. --features-limit : Limit on number of sources to query at once
+7. --taxonomy-map : Filename of taxonomy mapper (JSON format)
+8. --output-dir : Name of directory to save downloaded files
+9. --output-filename : Name of file containing merged classifications and features
+10. --output-format : Output format of saved files if no extension is given in (9). Must be one of parquet, h5, or csv.
+11. --get-ztf-filters : Flag to add ZTF filter IDs (separate catalog query) to default features
+12. --impute-missing-features : Flag to impute missing features using scope.utils.impute_features
+13. --update-training-set : if downloading an active learning sample, update the training set with the new classification based on votes
+14. --updated-training-set-prefix : Prefix to add to updated training set file
+15. --min-vote-diff : Minimum number of net votes (upvotes - downvotes) to keep an active learning classification. Caution: if zero, all classifications of reviewed sources will be added
+
+process:
+1. if CSV file provided, query by object ids or ra, dec
+2. if CSV file not provided, bulk query based on group id(s)
+3. get the classification/probabilities/periods of the objects in the dataset from Fritz
+4. append these values as new columns on the dataset, save to new file
+5. if merge_features, query Kowalski and merge sources with features, saving new CSV file
+6. Fritz sources with multiple associated ZTF IDs will generate multiple rows in the merged feature file
+7. To skip the source download part of the code, provide an input CSV file containing columns named 'obj_id', 'classification', 'probability', 'period_origin', 'period', 'ztf_id_origin', and 'ztf_id'.
+8. Set `--update-training-set` to read the config-specified training set and merge new sources/classifications from an active learning group
+
+output: data with new columns appended.
+
+```sh
+scope-download-classification --file sample.csv --group-ids 360 361 --start 10 --merge-features True --features-catalog ZTF_source_features_DR16 --features-limit 5000 --taxonomy-map golden_dataset_mapper.json --output-dir fritzDownload --output-filename merged_classifications_features --output-format parquet --get-ztf-filters --impute-missing-features
+```
+
+## scope-download-gcn-sources
+inputs:
+1. --dateobs: unique dateObs of GCN event (str)
+2. --group-ids: group ids to query sources, space-separated [all if not specified] (list)
+3. --days-range: max days past event to search for sources (float)
+4. --radius-arcsec: radius [arcsec] around new sources to search for existing ZTF sources (float)
+5. --save-filename: filename to save source ids/coordinates (str)
+
+process:
+1. query all sources associated with GCN event
+2. get Fritz names, RAs and Decs for each page of sources
+3. save json file in a useful format to use with `generate-features --doSpecificIDs`
+
+```sh
+scope-download-gcn-sources --dateobs 2023-05-21T05:30:43
+```
+
+## scope-upload-classification
+inputs:
+1. --file : path to CSV, HDF5 or Parquet file containing ra, dec, period, and labels
+2. --group-ids : target group id(s) on Fritz for upload, space-separated
+3. --classification : Name(s) of input file columns containing classification probabilities (one column per label). Set this to "read" to automatically upload all classes specified in the taxonomy mapper at once.
+4. --taxonomy-map : Filename of taxonomy mapper (JSON format)
+5. --comment : Comment to post (if specified)
+6. --start : Index to start uploading (zero-based)
+7. --stop : Index to stop uploading (inclusive)
+8. --classification-origin: origin of classifications. If 'SCoPe' (default), Fritz will apply custom color-coding
+9. --skip-phot : flag to skip photometry upload (skips for existing sources only)
+10. --post-survey-id : flag to post an annotation for the Gaia, AllWISE or PS1 id associated with each source
+11. --survey-id-origin : Annotation origin name for survey_id
+12. --p-threshold : Probability threshold for posted classifications (values must be greater than or equal to this number to post)
+13. --match-ids : flag to match input and existing survey_id values during upload. It is recommended to instead match obj_ids (see next line)
+14. --use-existing-obj-id : flag to use existing source names in a column named 'obj_id' (a coordinate-based ID is otherwise generated by default)
+15. --post-upvote : flag to post an upvote to newly uploaded classifications. Not recommended when posting automated classifications for active learning.
+16. --check-labelled-box : flag to check the 'labelled' box for each source when uploading classifications. Not recommended when posting automated classifications for active learning.
+17. --write-obj-id : flag to output a copy of the input file with an 'obj_id' column containing the coordinate-based IDs for each posted object. Use this file as input for future uploads to add to this column.
+18. --result-dir : name of directory where upload results file is saved. Default is 'fritzUpload' within the tools directory.
+19. --result-filetag: name of tag appended to the result filename. Default is 'fritzUpload'.
+20. --result-format : result file format; one of csv, h5 or parquet. Default is parquet.
+21. --replace-classifications : flag to delete each source's existing classifications before posting new ones.
+22. --radius-arcsec: photometry search radius for uploaded sources.
+23. --no-ml: flag to post classifications that do not originate from an ML classifier.
+24. --post-phot-as-comment: flag to post photometry as a comment on the source (bool)
+25. --post-phasefolded-phot: flag to post phase-folded photometry as comment in addition to time series (bool)
+26. --phot-dirname: name of directory in which to save photometry plots (str)
+27. --instrument-name: name of instrument used for observations (str)
+
+process:
+0. include Kowalski host, port, protocol, and token or username+password in config.yaml
+1. check if each input source exists by comparing input and existing obj_ids and/or survey_ids
+2. save the objects to Fritz group if new
+3. in batches, upload the classifications of the objects in the dataset to target group on Fritz
+4. duplicate classifications will not be uploaded to Fritz. If n classifications are manually specified, probabilities will be sourced from the last n columns of the dataset.
+5. post survey_id annotations
+6. (post comment to each uploaded source)
+
+```sh
+scope-upload-classification --file sample.csv --group-ids 500 250 750 --classification variable flaring --taxonomy-map map.json --comment confident --start 35 --stop 50 --skip-phot --p-threshold 0.9 --write-obj-id --result-format csv --use-existing-obj-id --post-survey-id --replace-classifications
+```
+
+## scope-manage-annotation
+inputs:
+1. --action : one of "post", "update", or "delete"
+2. --source : ZTF ID or path to .csv file with multiple objects (ID column "obj_id")
+3. --group-ids : target group id(s) on Fritz, space-separated
+4. --origin : origin name of the annotation
+5. --key : key (name) of the annotation
+6. --value : value of annotation (required for "post" and "update" - if source is a .csv file, value will auto-populate from `source[key]`)
+
+process:
+1. for each source, find existing annotations (for "update" and "delete" actions)
+2. interact with API to make desired changes to annotations
+3. confirm changes with printed messages
+
+```sh
+scope-manage-annotation --action post --source sample.csv --group-ids 200 300 400 --origin revisedperiod --key period
+```
+
+## Scope Upload Disagreements (deprecated)
+inputs:
+1. dataset
+2. group id on Fritz
+3. gloria object
+
+process:
+1. read in the csv dataset to pandas dataframe
+2. get high scoring objects on DNN or on XGBoost from Fritz
+3. get objects that have high confidence on DNN but low confidence on XGBoost and vice versa
+4. get different statistics of those disagreeing objects and combine to a dataframe
+5. filter those disagreeing objects that are contained in the training set and remove them
+6. upload the remaining disagreeing objects to target group on Fritz
+
+```sh
+./scope_upload_disagreements.py -file dataset.d15.csv -id 360 -token sample_token
+```
diff --git a/_sources/zenodo.md.txt b/_sources/zenodo.md.txt
new file mode 100644
index 0000000..1619512
--- /dev/null
+++ b/_sources/zenodo.md.txt
@@ -0,0 +1,31 @@
+# Data Releases on Zenodo
+
+As more ZTF fields receive SCoPe classifications, we make them public on Zenodo. This page describes the data products available and provides a guide to preparing/publishing new data releases. The permanent link for the SCoPe III repository on Zenodo is [https://zenodo.org/doi/10.5281/zenodo.8410825](https://zenodo.org/doi/10.5281/zenodo.8410825). This link will always resolve to the latest data release.
+
+## Data product description
+
+The most recent data release contains these components:
+
+| file | description |
+| -------------- | ------------ |
+| field_296_100rows.csv | Demo predictions file containing 100 classified light curves |
+| fields.json | List of fields in classification catalog, generated when running `combine-preds` |
+| predictions_dnn_xgb_*N*_fields.zip | Zip file containing classification catalog (contains *N* combined DNN/XGB prediction files in CSV format) |
+| SCoPe_classification_demo.ipynb | Notebook interacting with demo predictions |
+| trained_dnn_models.zip | Zip file containing trained DNN models |
+| trained_xgb_models.zip | Zip file containing trained XGB models |
+| training_set.parquet | Parquet file containing training set |
+
+## Preparing a new release
+
+To begin a new release draft, click "New version" on the current release. The main differences between releases are the classification catalog Zip file, which adds more ZTF fields over time, and the accompanying `fields.json`. Other unchanged elements of the release may be imported from the previous release by clicking "Import files".
+
+## Publishing a new release
+
+To publish a new release after uploading the latest classification catalog, first ensure that all desired files are in place. Then, click "Get a DOI now!" to reserve a version-specific DOI for this release. Additionally, specify the publication date and version in `vX.Y.Z` format.
+
+Finally, create release notes by clicking "Add description" and choosing "Notes" under the "Type" dropdown. These release notes should specify what has changed from one version to the next.
+
+## Adding new users
+
+The "ZTF Source Classification Project" community [ztf-scope](https://zenodo.org/communities/ztf-scope) contains the current data repository. Inviting additional Zenodo users to this community as Curators will allow them to manage the repository and create new versions.
diff --git a/_static/basic.css b/_static/basic.css
new file mode 100644
index 0000000..30fee9d
--- /dev/null
+++ b/_static/basic.css
@@ -0,0 +1,925 @@
+/*
+ * basic.css
+ * ~~~~~~~~~
+ *
+ * Sphinx stylesheet -- basic theme.
+ *
+ * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS.
+ * :license: BSD, see LICENSE for details.
+ *
+ */
+
+/* -- main layout ----------------------------------------------------------- */
+
+div.clearer {
+ clear: both;
+}
+
+div.section::after {
+ display: block;
+ content: '';
+ clear: left;
+}
+
+/* -- relbar ---------------------------------------------------------------- */
+
+div.related {
+ width: 100%;
+ font-size: 90%;
+}
+
+div.related h3 {
+ display: none;
+}
+
+div.related ul {
+ margin: 0;
+ padding: 0 0 0 10px;
+ list-style: none;
+}
+
+div.related li {
+ display: inline;
+}
+
+div.related li.right {
+ float: right;
+ margin-right: 5px;
+}
+
+/* -- sidebar --------------------------------------------------------------- */
+
+div.sphinxsidebarwrapper {
+ padding: 10px 5px 0 10px;
+}
+
+div.sphinxsidebar {
+ float: left;
+ width: 230px;
+ margin-left: -100%;
+ font-size: 90%;
+ word-wrap: break-word;
+ overflow-wrap : break-word;
+}
+
+div.sphinxsidebar ul {
+ list-style: none;
+}
+
+div.sphinxsidebar ul ul,
+div.sphinxsidebar ul.want-points {
+ margin-left: 20px;
+ list-style: square;
+}
+
+div.sphinxsidebar ul ul {
+ margin-top: 0;
+ margin-bottom: 0;
+}
+
+div.sphinxsidebar form {
+ margin-top: 10px;
+}
+
+div.sphinxsidebar input {
+ border: 1px solid #98dbcc;
+ font-family: sans-serif;
+ font-size: 1em;
+}
+
+div.sphinxsidebar #searchbox form.search {
+ overflow: hidden;
+}
+
+div.sphinxsidebar #searchbox input[type="text"] {
+ float: left;
+ width: 80%;
+ padding: 0.25em;
+ box-sizing: border-box;
+}
+
+div.sphinxsidebar #searchbox input[type="submit"] {
+ float: left;
+ width: 20%;
+ border-left: none;
+ padding: 0.25em;
+ box-sizing: border-box;
+}
+
+
+img {
+ border: 0;
+ max-width: 100%;
+}
+
+/* -- search page ----------------------------------------------------------- */
+
+ul.search {
+ margin: 10px 0 0 20px;
+ padding: 0;
+}
+
+ul.search li {
+ padding: 5px 0 5px 20px;
+ background-image: url(file.png);
+ background-repeat: no-repeat;
+ background-position: 0 7px;
+}
+
+ul.search li a {
+ font-weight: bold;
+}
+
+ul.search li p.context {
+ color: #888;
+ margin: 2px 0 0 30px;
+ text-align: left;
+}
+
+ul.keywordmatches li.goodmatch a {
+ font-weight: bold;
+}
+
+/* -- index page ------------------------------------------------------------ */
+
+table.contentstable {
+ width: 90%;
+ margin-left: auto;
+ margin-right: auto;
+}
+
+table.contentstable p.biglink {
+ line-height: 150%;
+}
+
+a.biglink {
+ font-size: 1.3em;
+}
+
+span.linkdescr {
+ font-style: italic;
+ padding-top: 5px;
+ font-size: 90%;
+}
+
+/* -- general index --------------------------------------------------------- */
+
+table.indextable {
+ width: 100%;
+}
+
+table.indextable td {
+ text-align: left;
+ vertical-align: top;
+}
+
+table.indextable ul {
+ margin-top: 0;
+ margin-bottom: 0;
+ list-style-type: none;
+}
+
+table.indextable > tbody > tr > td > ul {
+ padding-left: 0em;
+}
+
+table.indextable tr.pcap {
+ height: 10px;
+}
+
+table.indextable tr.cap {
+ margin-top: 10px;
+ background-color: #f2f2f2;
+}
+
+img.toggler {
+ margin-right: 3px;
+ margin-top: 3px;
+ cursor: pointer;
+}
+
+div.modindex-jumpbox {
+ border-top: 1px solid #ddd;
+ border-bottom: 1px solid #ddd;
+ margin: 1em 0 1em 0;
+ padding: 0.4em;
+}
+
+div.genindex-jumpbox {
+ border-top: 1px solid #ddd;
+ border-bottom: 1px solid #ddd;
+ margin: 1em 0 1em 0;
+ padding: 0.4em;
+}
+
+/* -- domain module index --------------------------------------------------- */
+
+table.modindextable td {
+ padding: 2px;
+ border-collapse: collapse;
+}
+
+/* -- general body styles --------------------------------------------------- */
+
+div.body {
+ min-width: 360px;
+ max-width: 800px;
+}
+
+div.body p, div.body dd, div.body li, div.body blockquote {
+ -moz-hyphens: auto;
+ -ms-hyphens: auto;
+ -webkit-hyphens: auto;
+ hyphens: auto;
+}
+
+a.headerlink {
+ visibility: hidden;
+}
+
+a:visited {
+ color: #551A8B;
+}
+
+h1:hover > a.headerlink,
+h2:hover > a.headerlink,
+h3:hover > a.headerlink,
+h4:hover > a.headerlink,
+h5:hover > a.headerlink,
+h6:hover > a.headerlink,
+dt:hover > a.headerlink,
+caption:hover > a.headerlink,
+p.caption:hover > a.headerlink,
+div.code-block-caption:hover > a.headerlink {
+ visibility: visible;
+}
+
+div.body p.caption {
+ text-align: inherit;
+}
+
+div.body td {
+ text-align: left;
+}
+
+.first {
+ margin-top: 0 !important;
+}
+
+p.rubric {
+ margin-top: 30px;
+ font-weight: bold;
+}
+
+img.align-left, figure.align-left, .figure.align-left, object.align-left {
+ clear: left;
+ float: left;
+ margin-right: 1em;
+}
+
+img.align-right, figure.align-right, .figure.align-right, object.align-right {
+ clear: right;
+ float: right;
+ margin-left: 1em;
+}
+
+img.align-center, figure.align-center, .figure.align-center, object.align-center {
+ display: block;
+ margin-left: auto;
+ margin-right: auto;
+}
+
+img.align-default, figure.align-default, .figure.align-default {
+ display: block;
+ margin-left: auto;
+ margin-right: auto;
+}
+
+.align-left {
+ text-align: left;
+}
+
+.align-center {
+ text-align: center;
+}
+
+.align-default {
+ text-align: center;
+}
+
+.align-right {
+ text-align: right;
+}
+
+/* -- sidebars -------------------------------------------------------------- */
+
+div.sidebar,
+aside.sidebar {
+ margin: 0 0 0.5em 1em;
+ border: 1px solid #ddb;
+ padding: 7px;
+ background-color: #ffe;
+ width: 40%;
+ float: right;
+ clear: right;
+ overflow-x: auto;
+}
+
+p.sidebar-title {
+ font-weight: bold;
+}
+
+nav.contents,
+aside.topic,
+div.admonition, div.topic, blockquote {
+ clear: left;
+}
+
+/* -- topics ---------------------------------------------------------------- */
+
+nav.contents,
+aside.topic,
+div.topic {
+ border: 1px solid #ccc;
+ padding: 7px;
+ margin: 10px 0 10px 0;
+}
+
+p.topic-title {
+ font-size: 1.1em;
+ font-weight: bold;
+ margin-top: 10px;
+}
+
+/* -- admonitions ----------------------------------------------------------- */
+
+div.admonition {
+ margin-top: 10px;
+ margin-bottom: 10px;
+ padding: 7px;
+}
+
+div.admonition dt {
+ font-weight: bold;
+}
+
+p.admonition-title {
+ margin: 0px 10px 5px 0px;
+ font-weight: bold;
+}
+
+div.body p.centered {
+ text-align: center;
+ margin-top: 25px;
+}
+
+/* -- content of sidebars/topics/admonitions -------------------------------- */
+
+div.sidebar > :last-child,
+aside.sidebar > :last-child,
+nav.contents > :last-child,
+aside.topic > :last-child,
+div.topic > :last-child,
+div.admonition > :last-child {
+ margin-bottom: 0;
+}
+
+div.sidebar::after,
+aside.sidebar::after,
+nav.contents::after,
+aside.topic::after,
+div.topic::after,
+div.admonition::after,
+blockquote::after {
+ display: block;
+ content: '';
+ clear: both;
+}
+
+/* -- tables ---------------------------------------------------------------- */
+
+table.docutils {
+ margin-top: 10px;
+ margin-bottom: 10px;
+ border: 0;
+ border-collapse: collapse;
+}
+
+table.align-center {
+ margin-left: auto;
+ margin-right: auto;
+}
+
+table.align-default {
+ margin-left: auto;
+ margin-right: auto;
+}
+
+table caption span.caption-number {
+ font-style: italic;
+}
+
+table caption span.caption-text {
+}
+
+table.docutils td, table.docutils th {
+ padding: 1px 8px 1px 5px;
+ border-top: 0;
+ border-left: 0;
+ border-right: 0;
+ border-bottom: 1px solid #aaa;
+}
+
+th {
+ text-align: left;
+ padding-right: 5px;
+}
+
+table.citation {
+ border-left: solid 1px gray;
+ margin-left: 1px;
+}
+
+table.citation td {
+ border-bottom: none;
+}
+
+th > :first-child,
+td > :first-child {
+ margin-top: 0px;
+}
+
+th > :last-child,
+td > :last-child {
+ margin-bottom: 0px;
+}
+
+/* -- figures --------------------------------------------------------------- */
+
+div.figure, figure {
+ margin: 0.5em;
+ padding: 0.5em;
+}
+
+div.figure p.caption, figcaption {
+ padding: 0.3em;
+}
+
+div.figure p.caption span.caption-number,
+figcaption span.caption-number {
+ font-style: italic;
+}
+
+div.figure p.caption span.caption-text,
+figcaption span.caption-text {
+}
+
+/* -- field list styles ----------------------------------------------------- */
+
+table.field-list td, table.field-list th {
+ border: 0 !important;
+}
+
+.field-list ul {
+ margin: 0;
+ padding-left: 1em;
+}
+
+.field-list p {
+ margin: 0;
+}
+
+.field-name {
+ -moz-hyphens: manual;
+ -ms-hyphens: manual;
+ -webkit-hyphens: manual;
+ hyphens: manual;
+}
+
+/* -- hlist styles ---------------------------------------------------------- */
+
+table.hlist {
+ margin: 1em 0;
+}
+
+table.hlist td {
+ vertical-align: top;
+}
+
+/* -- object description styles --------------------------------------------- */
+
+.sig {
+ font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace;
+}
+
+.sig-name, code.descname {
+ background-color: transparent;
+ font-weight: bold;
+}
+
+.sig-name {
+ font-size: 1.1em;
+}
+
+code.descname {
+ font-size: 1.2em;
+}
+
+.sig-prename, code.descclassname {
+ background-color: transparent;
+}
+
+.optional {
+ font-size: 1.3em;
+}
+
+.sig-paren {
+ font-size: larger;
+}
+
+.sig-param.n {
+ font-style: italic;
+}
+
+/* C++ specific styling */
+
+.sig-inline.c-texpr,
+.sig-inline.cpp-texpr {
+ font-family: unset;
+}
+
+.sig.c .k, .sig.c .kt,
+.sig.cpp .k, .sig.cpp .kt {
+ color: #0033B3;
+}
+
+.sig.c .m,
+.sig.cpp .m {
+ color: #1750EB;
+}
+
+.sig.c .s, .sig.c .sc,
+.sig.cpp .s, .sig.cpp .sc {
+ color: #067D17;
+}
+
+
+/* -- other body styles ----------------------------------------------------- */
+
+ol.arabic {
+ list-style: decimal;
+}
+
+ol.loweralpha {
+ list-style: lower-alpha;
+}
+
+ol.upperalpha {
+ list-style: upper-alpha;
+}
+
+ol.lowerroman {
+ list-style: lower-roman;
+}
+
+ol.upperroman {
+ list-style: upper-roman;
+}
+
+:not(li) > ol > li:first-child > :first-child,
+:not(li) > ul > li:first-child > :first-child {
+ margin-top: 0px;
+}
+
+:not(li) > ol > li:last-child > :last-child,
+:not(li) > ul > li:last-child > :last-child {
+ margin-bottom: 0px;
+}
+
+ol.simple ol p,
+ol.simple ul p,
+ul.simple ol p,
+ul.simple ul p {
+ margin-top: 0;
+}
+
+ol.simple > li:not(:first-child) > p,
+ul.simple > li:not(:first-child) > p {
+ margin-top: 0;
+}
+
+ol.simple p,
+ul.simple p {
+ margin-bottom: 0;
+}
+
+aside.footnote > span,
+div.citation > span {
+ float: left;
+}
+aside.footnote > span:last-of-type,
+div.citation > span:last-of-type {
+ padding-right: 0.5em;
+}
+aside.footnote > p {
+ margin-left: 2em;
+}
+div.citation > p {
+ margin-left: 4em;
+}
+aside.footnote > p:last-of-type,
+div.citation > p:last-of-type {
+ margin-bottom: 0em;
+}
+aside.footnote > p:last-of-type:after,
+div.citation > p:last-of-type:after {
+ content: "";
+ clear: both;
+}
+
+dl.field-list {
+ display: grid;
+ grid-template-columns: fit-content(30%) auto;
+}
+
+dl.field-list > dt {
+ font-weight: bold;
+ word-break: break-word;
+ padding-left: 0.5em;
+ padding-right: 5px;
+}
+
+dl.field-list > dd {
+ padding-left: 0.5em;
+ margin-top: 0em;
+ margin-left: 0em;
+ margin-bottom: 0em;
+}
+
+dl {
+ margin-bottom: 15px;
+}
+
+dd > :first-child {
+ margin-top: 0px;
+}
+
+dd ul, dd table {
+ margin-bottom: 10px;
+}
+
+dd {
+ margin-top: 3px;
+ margin-bottom: 10px;
+ margin-left: 30px;
+}
+
+.sig dd {
+ margin-top: 0px;
+ margin-bottom: 0px;
+}
+
+.sig dl {
+ margin-top: 0px;
+ margin-bottom: 0px;
+}
+
+dl > dd:last-child,
+dl > dd:last-child > :last-child {
+ margin-bottom: 0;
+}
+
+dt:target, span.highlighted {
+ background-color: #fbe54e;
+}
+
+rect.highlighted {
+ fill: #fbe54e;
+}
+
+dl.glossary dt {
+ font-weight: bold;
+ font-size: 1.1em;
+}
+
+.versionmodified {
+ font-style: italic;
+}
+
+.system-message {
+ background-color: #fda;
+ padding: 5px;
+ border: 3px solid red;
+}
+
+.footnote:target {
+ background-color: #ffa;
+}
+
+.line-block {
+ display: block;
+ margin-top: 1em;
+ margin-bottom: 1em;
+}
+
+.line-block .line-block {
+ margin-top: 0;
+ margin-bottom: 0;
+ margin-left: 1.5em;
+}
+
+.guilabel, .menuselection {
+ font-family: sans-serif;
+}
+
+.accelerator {
+ text-decoration: underline;
+}
+
+.classifier {
+ font-style: oblique;
+}
+
+.classifier:before {
+ font-style: normal;
+ margin: 0 0.5em;
+ content: ":";
+ display: inline-block;
+}
+
+abbr, acronym {
+ border-bottom: dotted 1px;
+ cursor: help;
+}
+
+.translated {
+ background-color: rgba(207, 255, 207, 0.2)
+}
+
+.untranslated {
+ background-color: rgba(255, 207, 207, 0.2)
+}
+
+/* -- code displays --------------------------------------------------------- */
+
+pre {
+ overflow: auto;
+ overflow-y: hidden; /* fixes display issues on Chrome browsers */
+}
+
+pre, div[class*="highlight-"] {
+ clear: both;
+}
+
+span.pre {
+ -moz-hyphens: none;
+ -ms-hyphens: none;
+ -webkit-hyphens: none;
+ hyphens: none;
+ white-space: nowrap;
+}
+
+div[class*="highlight-"] {
+ margin: 1em 0;
+}
+
+td.linenos pre {
+ border: 0;
+ background-color: transparent;
+ color: #aaa;
+}
+
+table.highlighttable {
+ display: block;
+}
+
+table.highlighttable tbody {
+ display: block;
+}
+
+table.highlighttable tr {
+ display: flex;
+}
+
+table.highlighttable td {
+ margin: 0;
+ padding: 0;
+}
+
+table.highlighttable td.linenos {
+ padding-right: 0.5em;
+}
+
+table.highlighttable td.code {
+ flex: 1;
+ overflow: hidden;
+}
+
+.highlight .hll {
+ display: block;
+}
+
+div.highlight pre,
+table.highlighttable pre {
+ margin: 0;
+}
+
+div.code-block-caption + div {
+ margin-top: 0;
+}
+
+div.code-block-caption {
+ margin-top: 1em;
+ padding: 2px 5px;
+ font-size: small;
+}
+
+div.code-block-caption code {
+ background-color: transparent;
+}
+
+table.highlighttable td.linenos,
+span.linenos,
+div.highlight span.gp { /* gp: Generic.Prompt */
+ user-select: none;
+ -webkit-user-select: text; /* Safari fallback only */
+ -webkit-user-select: none; /* Chrome/Safari */
+ -moz-user-select: none; /* Firefox */
+ -ms-user-select: none; /* IE10+ */
+}
+
+div.code-block-caption span.caption-number {
+ padding: 0.1em 0.3em;
+ font-style: italic;
+}
+
+div.code-block-caption span.caption-text {
+}
+
+div.literal-block-wrapper {
+ margin: 1em 0;
+}
+
+code.xref, a code {
+ background-color: transparent;
+ font-weight: bold;
+}
+
+h1 code, h2 code, h3 code, h4 code, h5 code, h6 code {
+ background-color: transparent;
+}
+
+.viewcode-link {
+ float: right;
+}
+
+.viewcode-back {
+ float: right;
+ font-family: sans-serif;
+}
+
+div.viewcode-block:target {
+ margin: -1px -10px;
+ padding: 0 10px;
+}
+
+/* -- math display ---------------------------------------------------------- */
+
+img.math {
+ vertical-align: middle;
+}
+
+div.body div.math p {
+ text-align: center;
+}
+
+span.eqno {
+ float: right;
+}
+
+span.eqno a.headerlink {
+ position: absolute;
+ z-index: 1;
+}
+
+div.math:hover a.headerlink {
+ visibility: visible;
+}
+
+/* -- printout stylesheet --------------------------------------------------- */
+
+@media print {
+ div.document,
+ div.documentwrapper,
+ div.bodywrapper {
+ margin: 0 !important;
+ width: 100%;
+ }
+
+ div.sphinxsidebar,
+ div.related,
+ div.footer,
+ #top-link {
+ display: none;
+ }
+}
\ No newline at end of file
diff --git a/_static/doctools.js b/_static/doctools.js
new file mode 100644
index 0000000..d06a71d
--- /dev/null
+++ b/_static/doctools.js
@@ -0,0 +1,156 @@
+/*
+ * doctools.js
+ * ~~~~~~~~~~~
+ *
+ * Base JavaScript utilities for all Sphinx HTML documentation.
+ *
+ * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS.
+ * :license: BSD, see LICENSE for details.
+ *
+ */
+"use strict";
+
+const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([
+ "TEXTAREA",
+ "INPUT",
+ "SELECT",
+ "BUTTON",
+]);
+
+const _ready = (callback) => {
+ if (document.readyState !== "loading") {
+ callback();
+ } else {
+ document.addEventListener("DOMContentLoaded", callback);
+ }
+};
+
+/**
+ * Small JavaScript module for the documentation.
+ */
+const Documentation = {
+ init: () => {
+ Documentation.initDomainIndexTable();
+ Documentation.initOnKeyListeners();
+ },
+
+ /**
+ * i18n support
+ */
+ TRANSLATIONS: {},
+ PLURAL_EXPR: (n) => (n === 1 ? 0 : 1),
+ LOCALE: "unknown",
+
+ // gettext and ngettext don't access this so that the functions
+ // can safely bound to a different name (_ = Documentation.gettext)
+ gettext: (string) => {
+ const translated = Documentation.TRANSLATIONS[string];
+ switch (typeof translated) {
+ case "undefined":
+ return string; // no translation
+ case "string":
+ return translated; // translation exists
+ default:
+ return translated[0]; // (singular, plural) translation tuple exists
+ }
+ },
+
+ ngettext: (singular, plural, n) => {
+ const translated = Documentation.TRANSLATIONS[singular];
+ if (typeof translated !== "undefined")
+ return translated[Documentation.PLURAL_EXPR(n)];
+ return n === 1 ? singular : plural;
+ },
+
+ addTranslations: (catalog) => {
+ Object.assign(Documentation.TRANSLATIONS, catalog.messages);
+ Documentation.PLURAL_EXPR = new Function(
+ "n",
+ `return (${catalog.plural_expr})`
+ );
+ Documentation.LOCALE = catalog.locale;
+ },
+
+ /**
+ * helper function to focus on search bar
+ */
+ focusSearchBar: () => {
+ document.querySelectorAll("input[name=q]")[0]?.focus();
+ },
+
+ /**
+ * Initialise the domain index toggle buttons
+ */
+ initDomainIndexTable: () => {
+ const toggler = (el) => {
+ const idNumber = el.id.substr(7);
+ const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`);
+ if (el.src.substr(-9) === "minus.png") {
+ el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`;
+ toggledRows.forEach((el) => (el.style.display = "none"));
+ } else {
+ el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`;
+ toggledRows.forEach((el) => (el.style.display = ""));
+ }
+ };
+
+ const togglerElements = document.querySelectorAll("img.toggler");
+ togglerElements.forEach((el) =>
+ el.addEventListener("click", (event) => toggler(event.currentTarget))
+ );
+ togglerElements.forEach((el) => (el.style.display = ""));
+ if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler);
+ },
+
+ initOnKeyListeners: () => {
+ // only install a listener if it is really needed
+ if (
+ !DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS &&
+ !DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS
+ )
+ return;
+
+ document.addEventListener("keydown", (event) => {
+ // bail for input elements
+ if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return;
+ // bail with special keys
+ if (event.altKey || event.ctrlKey || event.metaKey) return;
+
+ if (!event.shiftKey) {
+ switch (event.key) {
+ case "ArrowLeft":
+ if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break;
+
+ const prevLink = document.querySelector('link[rel="prev"]');
+ if (prevLink && prevLink.href) {
+ window.location.href = prevLink.href;
+ event.preventDefault();
+ }
+ break;
+ case "ArrowRight":
+ if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break;
+
+ const nextLink = document.querySelector('link[rel="next"]');
+ if (nextLink && nextLink.href) {
+ window.location.href = nextLink.href;
+ event.preventDefault();
+ }
+ break;
+ }
+ }
+
+ // some keyboard layouts may need Shift to get /
+ switch (event.key) {
+ case "/":
+ if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break;
+ Documentation.focusSearchBar();
+ event.preventDefault();
+ }
+ });
+ },
+};
+
+// quick alias for translations
+const _ = Documentation.gettext;
+
+_ready(Documentation.init);
diff --git a/_static/documentation_options.js b/_static/documentation_options.js
new file mode 100644
index 0000000..7e4c114
--- /dev/null
+++ b/_static/documentation_options.js
@@ -0,0 +1,13 @@
+const DOCUMENTATION_OPTIONS = {
+ VERSION: '',
+ LANGUAGE: 'en',
+ COLLAPSE_INDEX: false,
+ BUILDER: 'html',
+ FILE_SUFFIX: '.html',
+ LINK_SUFFIX: '.html',
+ HAS_SOURCE: true,
+ SOURCELINK_SUFFIX: '.txt',
+ NAVIGATION_WITH_KEYS: false,
+ SHOW_SEARCH_SUMMARY: true,
+ ENABLE_SEARCH_SHORTCUTS: true,
+};
\ No newline at end of file
diff --git a/_static/file.png b/_static/file.png
new file mode 100644
index 0000000..a858a41
Binary files /dev/null and b/_static/file.png differ
diff --git a/_static/language_data.js b/_static/language_data.js
new file mode 100644
index 0000000..250f566
--- /dev/null
+++ b/_static/language_data.js
@@ -0,0 +1,199 @@
+/*
+ * language_data.js
+ * ~~~~~~~~~~~~~~~~
+ *
+ * This script contains the language-specific data used by searchtools.js,
+ * namely the list of stopwords, stemmer, scorer and splitter.
+ *
+ * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS.
+ * :license: BSD, see LICENSE for details.
+ *
+ */
+
+var stopwords = ["a", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in", "into", "is", "it", "near", "no", "not", "of", "on", "or", "such", "that", "the", "their", "then", "there", "these", "they", "this", "to", "was", "will", "with"];
+
+
+/* Non-minified version is copied as a separate JS file, is available */
+
+/**
+ * Porter Stemmer
+ */
+var Stemmer = function() {
+
+ var step2list = {
+ ational: 'ate',
+ tional: 'tion',
+ enci: 'ence',
+ anci: 'ance',
+ izer: 'ize',
+ bli: 'ble',
+ alli: 'al',
+ entli: 'ent',
+ eli: 'e',
+ ousli: 'ous',
+ ization: 'ize',
+ ation: 'ate',
+ ator: 'ate',
+ alism: 'al',
+ iveness: 'ive',
+ fulness: 'ful',
+ ousness: 'ous',
+ aliti: 'al',
+ iviti: 'ive',
+ biliti: 'ble',
+ logi: 'log'
+ };
+
+ var step3list = {
+ icate: 'ic',
+ ative: '',
+ alize: 'al',
+ iciti: 'ic',
+ ical: 'ic',
+ ful: '',
+ ness: ''
+ };
+
+ var c = "[^aeiou]"; // consonant
+ var v = "[aeiouy]"; // vowel
+ var C = c + "[^aeiouy]*"; // consonant sequence
+ var V = v + "[aeiou]*"; // vowel sequence
+
+ var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0
+ var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1
+ var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1
+ var s_v = "^(" + C + ")?" + v; // vowel in stem
+
+ this.stemWord = function (w) {
+ var stem;
+ var suffix;
+ var firstch;
+ var origword = w;
+
+ if (w.length < 3)
+ return w;
+
+ var re;
+ var re2;
+ var re3;
+ var re4;
+
+ firstch = w.substr(0,1);
+ if (firstch == "y")
+ w = firstch.toUpperCase() + w.substr(1);
+
+ // Step 1a
+ re = /^(.+?)(ss|i)es$/;
+ re2 = /^(.+?)([^s])s$/;
+
+ if (re.test(w))
+ w = w.replace(re,"$1$2");
+ else if (re2.test(w))
+ w = w.replace(re2,"$1$2");
+
+ // Step 1b
+ re = /^(.+?)eed$/;
+ re2 = /^(.+?)(ed|ing)$/;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ re = new RegExp(mgr0);
+ if (re.test(fp[1])) {
+ re = /.$/;
+ w = w.replace(re,"");
+ }
+ }
+ else if (re2.test(w)) {
+ var fp = re2.exec(w);
+ stem = fp[1];
+ re2 = new RegExp(s_v);
+ if (re2.test(stem)) {
+ w = stem;
+ re2 = /(at|bl|iz)$/;
+ re3 = new RegExp("([^aeiouylsz])\\1$");
+ re4 = new RegExp("^" + C + v + "[^aeiouwxy]$");
+ if (re2.test(w))
+ w = w + "e";
+ else if (re3.test(w)) {
+ re = /.$/;
+ w = w.replace(re,"");
+ }
+ else if (re4.test(w))
+ w = w + "e";
+ }
+ }
+
+ // Step 1c
+ re = /^(.+?)y$/;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ re = new RegExp(s_v);
+ if (re.test(stem))
+ w = stem + "i";
+ }
+
+ // Step 2
+ re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ suffix = fp[2];
+ re = new RegExp(mgr0);
+ if (re.test(stem))
+ w = stem + step2list[suffix];
+ }
+
+ // Step 3
+ re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ suffix = fp[2];
+ re = new RegExp(mgr0);
+ if (re.test(stem))
+ w = stem + step3list[suffix];
+ }
+
+ // Step 4
+ re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
+ re2 = /^(.+?)(s|t)(ion)$/;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ re = new RegExp(mgr1);
+ if (re.test(stem))
+ w = stem;
+ }
+ else if (re2.test(w)) {
+ var fp = re2.exec(w);
+ stem = fp[1] + fp[2];
+ re2 = new RegExp(mgr1);
+ if (re2.test(stem))
+ w = stem;
+ }
+
+ // Step 5
+ re = /^(.+?)e$/;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ re = new RegExp(mgr1);
+ re2 = new RegExp(meq1);
+ re3 = new RegExp("^" + C + v + "[^aeiouwxy]$");
+ if (re.test(stem) || (re2.test(stem) && !(re3.test(stem))))
+ w = stem;
+ }
+ re = /ll$/;
+ re2 = new RegExp(mgr1);
+ if (re.test(w) && re2.test(w)) {
+ re = /.$/;
+ w = w.replace(re,"");
+ }
+
+ // and turn initial Y back to y
+ if (firstch == "y")
+ w = firstch.toLowerCase() + w.substr(1);
+ return w;
+ }
+}
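+
+// illustrative usage (not part of the upstream file): the stemmer maps
+// inflected forms to a shared stem, e.g.
+//   new Stemmer().stemWord("relational")  // -> "relat"
+//   new Stemmer().stemWord("relate")      // -> "relat"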
+
diff --git a/_static/minus.png b/_static/minus.png
new file mode 100644
index 0000000..d96755f
Binary files /dev/null and b/_static/minus.png differ
diff --git a/_static/plus.png b/_static/plus.png
new file mode 100644
index 0000000..7107cec
Binary files /dev/null and b/_static/plus.png differ
diff --git a/_static/pygments.css b/_static/pygments.css
new file mode 100644
index 0000000..f4af905
--- /dev/null
+++ b/_static/pygments.css
@@ -0,0 +1,79 @@
+pre { line-height: 125%; }
+td.linenos .normal { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; }
+span.linenos { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; }
+td.linenos .special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; }
+span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; }
+.highlight .hll { background-color: #4f424c }
+.highlight { background: #2f1e2e; color: #e7e9db }
+.highlight .c { color: #776e71 } /* Comment */
+.highlight .err { color: #ef6155 } /* Error */
+.highlight .k { color: #815ba4 } /* Keyword */
+.highlight .l { color: #f99b15 } /* Literal */
+.highlight .n { color: #e7e9db } /* Name */
+.highlight .o { color: #5bc4bf } /* Operator */
+.highlight .p { color: #e7e9db } /* Punctuation */
+.highlight .ch { color: #776e71 } /* Comment.Hashbang */
+.highlight .cm { color: #776e71 } /* Comment.Multiline */
+.highlight .cp { color: #776e71 } /* Comment.Preproc */
+.highlight .cpf { color: #776e71 } /* Comment.PreprocFile */
+.highlight .c1 { color: #776e71 } /* Comment.Single */
+.highlight .cs { color: #776e71 } /* Comment.Special */
+.highlight .gd { color: #ef6155 } /* Generic.Deleted */
+.highlight .ge { font-style: italic } /* Generic.Emph */
+.highlight .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */
+.highlight .gh { color: #e7e9db; font-weight: bold } /* Generic.Heading */
+.highlight .gi { color: #48b685 } /* Generic.Inserted */
+.highlight .gp { color: #776e71; font-weight: bold } /* Generic.Prompt */
+.highlight .gs { font-weight: bold } /* Generic.Strong */
+.highlight .gu { color: #5bc4bf; font-weight: bold } /* Generic.Subheading */
+.highlight .kc { color: #815ba4 } /* Keyword.Constant */
+.highlight .kd { color: #815ba4 } /* Keyword.Declaration */
+.highlight .kn { color: #5bc4bf } /* Keyword.Namespace */
+.highlight .kp { color: #815ba4 } /* Keyword.Pseudo */
+.highlight .kr { color: #815ba4 } /* Keyword.Reserved */
+.highlight .kt { color: #fec418 } /* Keyword.Type */
+.highlight .ld { color: #48b685 } /* Literal.Date */
+.highlight .m { color: #f99b15 } /* Literal.Number */
+.highlight .s { color: #48b685 } /* Literal.String */
+.highlight .na { color: #06b6ef } /* Name.Attribute */
+.highlight .nb { color: #e7e9db } /* Name.Builtin */
+.highlight .nc { color: #fec418 } /* Name.Class */
+.highlight .no { color: #ef6155 } /* Name.Constant */
+.highlight .nd { color: #5bc4bf } /* Name.Decorator */
+.highlight .ni { color: #e7e9db } /* Name.Entity */
+.highlight .ne { color: #ef6155 } /* Name.Exception */
+.highlight .nf { color: #06b6ef } /* Name.Function */
+.highlight .nl { color: #e7e9db } /* Name.Label */
+.highlight .nn { color: #fec418 } /* Name.Namespace */
+.highlight .nx { color: #06b6ef } /* Name.Other */
+.highlight .py { color: #e7e9db } /* Name.Property */
+.highlight .nt { color: #5bc4bf } /* Name.Tag */
+.highlight .nv { color: #ef6155 } /* Name.Variable */
+.highlight .ow { color: #5bc4bf } /* Operator.Word */
+.highlight .pm { color: #e7e9db } /* Punctuation.Marker */
+.highlight .w { color: #e7e9db } /* Text.Whitespace */
+.highlight .mb { color: #f99b15 } /* Literal.Number.Bin */
+.highlight .mf { color: #f99b15 } /* Literal.Number.Float */
+.highlight .mh { color: #f99b15 } /* Literal.Number.Hex */
+.highlight .mi { color: #f99b15 } /* Literal.Number.Integer */
+.highlight .mo { color: #f99b15 } /* Literal.Number.Oct */
+.highlight .sa { color: #48b685 } /* Literal.String.Affix */
+.highlight .sb { color: #48b685 } /* Literal.String.Backtick */
+.highlight .sc { color: #e7e9db } /* Literal.String.Char */
+.highlight .dl { color: #48b685 } /* Literal.String.Delimiter */
+.highlight .sd { color: #776e71 } /* Literal.String.Doc */
+.highlight .s2 { color: #48b685 } /* Literal.String.Double */
+.highlight .se { color: #f99b15 } /* Literal.String.Escape */
+.highlight .sh { color: #48b685 } /* Literal.String.Heredoc */
+.highlight .si { color: #f99b15 } /* Literal.String.Interpol */
+.highlight .sx { color: #48b685 } /* Literal.String.Other */
+.highlight .sr { color: #48b685 } /* Literal.String.Regex */
+.highlight .s1 { color: #48b685 } /* Literal.String.Single */
+.highlight .ss { color: #48b685 } /* Literal.String.Symbol */
+.highlight .bp { color: #e7e9db } /* Name.Builtin.Pseudo */
+.highlight .fm { color: #06b6ef } /* Name.Function.Magic */
+.highlight .vc { color: #ef6155 } /* Name.Variable.Class */
+.highlight .vg { color: #ef6155 } /* Name.Variable.Global */
+.highlight .vi { color: #ef6155 } /* Name.Variable.Instance */
+.highlight .vm { color: #ef6155 } /* Name.Variable.Magic */
+.highlight .il { color: #f99b15 } /* Literal.Number.Integer.Long */
\ No newline at end of file
diff --git a/_static/searchtools.js b/_static/searchtools.js
new file mode 100644
index 0000000..7918c3f
--- /dev/null
+++ b/_static/searchtools.js
@@ -0,0 +1,574 @@
+/*
+ * searchtools.js
+ * ~~~~~~~~~~~~~~~~
+ *
+ * Sphinx JavaScript utilities for the full-text search.
+ *
+ * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS.
+ * :license: BSD, see LICENSE for details.
+ *
+ */
+"use strict";
+
+/**
+ * Simple result scoring code.
+ */
+if (typeof Scorer === "undefined") {
+ var Scorer = {
+ // Implement the following function to further tweak the score for each result
+ // The function takes a result array [docname, title, anchor, descr, score, filename]
+ // and returns the new score.
+ /*
+ score: result => {
+ const [docname, title, anchor, descr, score, filename] = result
+ return score
+ },
+ */
+
+ // query matches the full name of an object
+ objNameMatch: 11,
+ // or matches in the last dotted part of the object name
+ objPartialMatch: 6,
+ // Additive scores depending on the priority of the object
+ objPrio: {
+ 0: 15, // used to be importantResults
+ 1: 5, // used to be objectResults
+ 2: -5, // used to be unimportantResults
+ },
+ // Used when the priority is not in the mapping.
+ objPrioDefault: 0,
+
+ // query found in title
+ title: 15,
+ partialTitle: 7,
+ // query found in terms
+ term: 5,
+ partialTerm: 2,
+ };
+}
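+
+// a theme or page can predefine `Scorer` (optionally with a custom `score`
+// function, used below) before this file loads to override these weights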
+
+const _removeChildren = (element) => {
+ while (element && element.lastChild) element.removeChild(element.lastChild);
+};
+
+/**
+ * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#escaping
+ */
+const _escapeRegExp = (string) =>
+ string.replace(/[.*+\-?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string
+
+const _displayItem = (item, searchTerms, highlightTerms) => {
+ const docBuilder = DOCUMENTATION_OPTIONS.BUILDER;
+ const docFileSuffix = DOCUMENTATION_OPTIONS.FILE_SUFFIX;
+ const docLinkSuffix = DOCUMENTATION_OPTIONS.LINK_SUFFIX;
+ const showSearchSummary = DOCUMENTATION_OPTIONS.SHOW_SEARCH_SUMMARY;
+ const contentRoot = document.documentElement.dataset.content_root;
+
+ const [docName, title, anchor, descr, score, _filename] = item;
+
+ let listItem = document.createElement("li");
+ let requestUrl;
+ let linkUrl;
+ if (docBuilder === "dirhtml") {
+ // dirhtml builder
+ let dirname = docName + "/";
+ if (dirname.match(/\/index\/$/))
+ dirname = dirname.substring(0, dirname.length - 6);
+ else if (dirname === "index/") dirname = "";
+ requestUrl = contentRoot + dirname;
+ linkUrl = requestUrl;
+ } else {
+ // normal html builders
+ requestUrl = contentRoot + docName + docFileSuffix;
+ linkUrl = docName + docLinkSuffix;
+ }
+ let linkEl = listItem.appendChild(document.createElement("a"));
+ linkEl.href = linkUrl + anchor;
+ linkEl.dataset.score = score;
+ linkEl.innerHTML = title;
+ if (descr) {
+ listItem.appendChild(document.createElement("span")).innerHTML =
+ " (" + descr + ")";
+ // highlight search terms in the description
+ if (SPHINX_HIGHLIGHT_ENABLED) // set in sphinx_highlight.js
+ highlightTerms.forEach((term) => _highlightText(listItem, term, "highlighted"));
+ }
+ else if (showSearchSummary)
+ fetch(requestUrl)
+ .then((responseData) => responseData.text())
+ .then((data) => {
+ if (data)
+ listItem.appendChild(
+ Search.makeSearchSummary(data, searchTerms)
+ );
+ // highlight search terms in the summary
+ if (SPHINX_HIGHLIGHT_ENABLED) // set in sphinx_highlight.js
+ highlightTerms.forEach((term) => _highlightText(listItem, term, "highlighted"));
+ });
+ Search.output.appendChild(listItem);
+};
+const _finishSearch = (resultCount) => {
+ Search.stopPulse();
+ Search.title.innerText = _("Search Results");
+ if (!resultCount)
+ Search.status.innerText = Documentation.gettext(
+ "Your search did not match any documents. Please make sure that all words are spelled correctly and that you've selected enough categories."
+ );
+ else
+ Search.status.innerText = _(
+ `Search finished, found ${resultCount} page(s) matching the search query.`
+ );
+};
+const _displayNextItem = (
+ results,
+ resultCount,
+ searchTerms,
+ highlightTerms,
+) => {
+ // results left, load the summary and display it
+ // this is intended to be dynamic (don't sub resultsCount)
+ if (results.length) {
+ _displayItem(results.pop(), searchTerms, highlightTerms);
+ setTimeout(
+ () => _displayNextItem(results, resultCount, searchTerms, highlightTerms),
+ 5
+ );
+ }
+ // search finished, update title and status message
+ else _finishSearch(resultCount);
+};
+
+/**
+ * Default splitQuery function. Can be overridden in ``sphinx.search`` with a
+ * custom function per language.
+ *
+ * The regular expression works by splitting the string on consecutive characters
+ * that are not Unicode letters, numbers, underscores, or emoji characters.
+ * This is the same as ``\W+`` in Python, preserving the surrogate pair area.
+ */
+if (typeof splitQuery === "undefined") {
+ var splitQuery = (query) => query
+ .split(/[^\p{Letter}\p{Number}_\p{Emoji_Presentation}]+/gu)
+ .filter(term => term) // remove remaining empty strings
+}
+
+/**
+ * Search Module
+ */
+const Search = {
+ _index: null,
+ _queued_query: null,
+ _pulse_status: -1,
+
+ htmlToText: (htmlString) => {
+ const htmlElement = new DOMParser().parseFromString(htmlString, 'text/html');
+ htmlElement.querySelectorAll(".headerlink").forEach((el) => { el.remove() });
+ const docContent = htmlElement.querySelector('[role="main"]');
+    // querySelector returns null (not undefined) when no node matches
+    if (docContent !== null) return docContent.textContent;
+    console.warn(
+      "Content block not found. Sphinx search tries to obtain it via '[role=main]'. Please check your theme or template."
+ );
+ return "";
+ },
+
+ init: () => {
+ const query = new URLSearchParams(window.location.search).get("q");
+ document
+ .querySelectorAll('input[name="q"]')
+ .forEach((el) => (el.value = query));
+ if (query) Search.performSearch(query);
+ },
+
+ loadIndex: (url) =>
+ (document.body.appendChild(document.createElement("script")).src = url),
+
+ setIndex: (index) => {
+ Search._index = index;
+ if (Search._queued_query !== null) {
+ const query = Search._queued_query;
+ Search._queued_query = null;
+ Search.query(query);
+ }
+ },
+
+ hasIndex: () => Search._index !== null,
+
+ deferQuery: (query) => (Search._queued_query = query),
+
+ stopPulse: () => (Search._pulse_status = -1),
+
+ startPulse: () => {
+ if (Search._pulse_status >= 0) return;
+
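+    // animate progress dots after the "Searching" title: cycle 0-3 dots
+    // roughly every 500 ms while the search is running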
+ const pulse = () => {
+ Search._pulse_status = (Search._pulse_status + 1) % 4;
+ Search.dots.innerText = ".".repeat(Search._pulse_status);
+ if (Search._pulse_status >= 0) window.setTimeout(pulse, 500);
+ };
+ pulse();
+ },
+
+ /**
+ * perform a search for something (or wait until index is loaded)
+ */
+ performSearch: (query) => {
+ // create the required interface elements
+ const searchText = document.createElement("h2");
+ searchText.textContent = _("Searching");
+ const searchSummary = document.createElement("p");
+ searchSummary.classList.add("search-summary");
+ searchSummary.innerText = "";
+ const searchList = document.createElement("ul");
+ searchList.classList.add("search");
+
+ const out = document.getElementById("search-results");
+ Search.title = out.appendChild(searchText);
+ Search.dots = Search.title.appendChild(document.createElement("span"));
+ Search.status = out.appendChild(searchSummary);
+ Search.output = out.appendChild(searchList);
+
+ const searchProgress = document.getElementById("search-progress");
+ // Some themes don't use the search progress node
+ if (searchProgress) {
+ searchProgress.innerText = _("Preparing search...");
+ }
+ Search.startPulse();
+
+ // index already loaded, the browser was quick!
+ if (Search.hasIndex()) Search.query(query);
+ else Search.deferQuery(query);
+ },
+
+ /**
+ * execute search (requires search index to be loaded)
+ */
+ query: (query) => {
+ const filenames = Search._index.filenames;
+ const docNames = Search._index.docnames;
+ const titles = Search._index.titles;
+ const allTitles = Search._index.alltitles;
+ const indexEntries = Search._index.indexentries;
+
+ // stem the search terms and add them to the correct list
+ const stemmer = new Stemmer();
+ const searchTerms = new Set();
+ const excludedTerms = new Set();
+ const highlightTerms = new Set();
+ const objectTerms = new Set(splitQuery(query.toLowerCase().trim()));
+ splitQuery(query.trim()).forEach((queryTerm) => {
+ const queryTermLower = queryTerm.toLowerCase();
+
+ // maybe skip this "word"
+ // stopwords array is from language_data.js
+ if (
+ stopwords.indexOf(queryTermLower) !== -1 ||
+ queryTerm.match(/^\d+$/)
+ )
+ return;
+
+ // stem the word
+ let word = stemmer.stemWord(queryTermLower);
+ // select the correct list
+ if (word[0] === "-") excludedTerms.add(word.substr(1));
+ else {
+ searchTerms.add(word);
+ highlightTerms.add(queryTermLower);
+ }
+ });
+
+ if (SPHINX_HIGHLIGHT_ENABLED) { // set in sphinx_highlight.js
+ localStorage.setItem("sphinx_highlight_terms", [...highlightTerms].join(" "))
+ }
+
+ // console.debug("SEARCH: searching for:");
+ // console.info("required: ", [...searchTerms]);
+ // console.info("excluded: ", [...excludedTerms]);
+
+ // array of [docname, title, anchor, descr, score, filename]
+ let results = [];
+ _removeChildren(document.getElementById("search-progress"));
+
+ const queryLower = query.toLowerCase();
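+    // heuristic: only count a title match when the query is at least half
+    // as long as the title it matched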
+ for (const [title, foundTitles] of Object.entries(allTitles)) {
+ if (title.toLowerCase().includes(queryLower) && (queryLower.length >= title.length/2)) {
+ for (const [file, id] of foundTitles) {
+ let score = Math.round(100 * queryLower.length / title.length)
+ results.push([
+ docNames[file],
+ titles[file] !== title ? `${titles[file]} > ${title}` : title,
+ id !== null ? "#" + id : "",
+ null,
+ score,
+ filenames[file],
+ ]);
+ }
+ }
+ }
+
+ // search for explicit entries in index directives
+ for (const [entry, foundEntries] of Object.entries(indexEntries)) {
+ if (entry.includes(queryLower) && (queryLower.length >= entry.length/2)) {
+ for (const [file, id] of foundEntries) {
+ let score = Math.round(100 * queryLower.length / entry.length)
+ results.push([
+ docNames[file],
+ titles[file],
+ id ? "#" + id : "",
+ null,
+ score,
+ filenames[file],
+ ]);
+ }
+ }
+ }
+
+ // lookup as object
+ objectTerms.forEach((term) =>
+ results.push(...Search.performObjectSearch(term, objectTerms))
+ );
+
+ // lookup as search terms in fulltext
+ results.push(...Search.performTermsSearch(searchTerms, excludedTerms));
+
+ // let the scorer override scores with a custom scoring function
+ if (Scorer.score) results.forEach((item) => (item[4] = Scorer.score(item)));
+
+ // now sort the results by score (in opposite order of appearance, since the
+ // display function below uses pop() to retrieve items) and then
+ // alphabetically
+ results.sort((a, b) => {
+ const leftScore = a[4];
+ const rightScore = b[4];
+ if (leftScore === rightScore) {
+ // same score: sort alphabetically
+ const leftTitle = a[1].toLowerCase();
+ const rightTitle = b[1].toLowerCase();
+ if (leftTitle === rightTitle) return 0;
+ return leftTitle > rightTitle ? -1 : 1; // inverted is intentional
+ }
+ return leftScore > rightScore ? 1 : -1;
+ });
+
+ // remove duplicate search results
+ // note the reversing of results, so that in the case of duplicates, the highest-scoring entry is kept
+ let seen = new Set();
+ results = results.reverse().reduce((acc, result) => {
+ let resultStr = result.slice(0, 4).concat([result[5]]).map(v => String(v)).join(',');
+ if (!seen.has(resultStr)) {
+ acc.push(result);
+ seen.add(resultStr);
+ }
+ return acc;
+ }, []);
+
+ results = results.reverse();
+
+ // for debugging
+ //Search.lastresults = results.slice(); // a copy
+ // console.info("search results:", Search.lastresults);
+
+ // print the results
+ _displayNextItem(results, results.length, searchTerms, highlightTerms);
+ },
+
+ /**
+ * search for object names
+ */
+ performObjectSearch: (object, objectTerms) => {
+ const filenames = Search._index.filenames;
+ const docNames = Search._index.docnames;
+ const objects = Search._index.objects;
+ const objNames = Search._index.objnames;
+ const titles = Search._index.titles;
+
+ const results = [];
+
+ const objectSearchCallback = (prefix, match) => {
+ const name = match[4]
+ const fullname = (prefix ? prefix + "." : "") + name;
+ const fullnameLower = fullname.toLowerCase();
+ if (fullnameLower.indexOf(object) < 0) return;
+
+ let score = 0;
+ const parts = fullnameLower.split(".");
+
+ // check for different match types: exact matches of full name or
+ // "last name" (i.e. last dotted part)
+ if (fullnameLower === object || parts.slice(-1)[0] === object)
+ score += Scorer.objNameMatch;
+ else if (parts.slice(-1)[0].indexOf(object) > -1)
+ score += Scorer.objPartialMatch; // matches in last name
+
+ const objName = objNames[match[1]][2];
+ const title = titles[match[0]];
+
+ // If more than one term searched for, we require other words to be
+ // found in the name/title/description
+ const otherTerms = new Set(objectTerms);
+ otherTerms.delete(object);
+ if (otherTerms.size > 0) {
+ const haystack = `${prefix} ${name} ${objName} ${title}`.toLowerCase();
+ if (
+ [...otherTerms].some((otherTerm) => haystack.indexOf(otherTerm) < 0)
+ )
+ return;
+ }
+
+ let anchor = match[3];
+ if (anchor === "") anchor = fullname;
+ else if (anchor === "-") anchor = objNames[match[1]][1] + "-" + fullname;
+
+ const descr = objName + _(", in ") + title;
+
+ // add custom score for some objects according to scorer
+ if (Scorer.objPrio.hasOwnProperty(match[2]))
+ score += Scorer.objPrio[match[2]];
+ else score += Scorer.objPrioDefault;
+
+ results.push([
+ docNames[match[0]],
+ fullname,
+ "#" + anchor,
+ descr,
+ score,
+ filenames[match[0]],
+ ]);
+ };
+ Object.keys(objects).forEach((prefix) =>
+ objects[prefix].forEach((array) =>
+ objectSearchCallback(prefix, array)
+ )
+ );
+ return results;
+ },
+
+ /**
+ * search for full-text terms in the index
+ */
+ performTermsSearch: (searchTerms, excludedTerms) => {
+ // prepare search
+ const terms = Search._index.terms;
+ const titleTerms = Search._index.titleterms;
+ const filenames = Search._index.filenames;
+ const docNames = Search._index.docnames;
+ const titles = Search._index.titles;
+
+ const scoreMap = new Map();
+ const fileMap = new Map();
+
+ // perform the search on the required terms
+ searchTerms.forEach((word) => {
+ const files = [];
+ const arr = [
+ { files: terms[word], score: Scorer.term },
+ { files: titleTerms[word], score: Scorer.title },
+ ];
+ // add support for partial matches
+ if (word.length > 2) {
+ const escapedWord = _escapeRegExp(word);
+ Object.keys(terms).forEach((term) => {
+ if (term.match(escapedWord) && !terms[word])
+ arr.push({ files: terms[term], score: Scorer.partialTerm });
+ });
+ Object.keys(titleTerms).forEach((term) => {
+ if (term.match(escapedWord) && !titleTerms[word])
+            arr.push({ files: titleTerms[term], score: Scorer.partialTitle });
+ });
+ }
+
+ // no match but word was a required one
+ if (arr.every((record) => record.files === undefined)) return;
+
+ // found search word in contents
+ arr.forEach((record) => {
+ if (record.files === undefined) return;
+
+ let recordFiles = record.files;
+ if (recordFiles.length === undefined) recordFiles = [recordFiles];
+ files.push(...recordFiles);
+
+ // set score for the word in each file
+ recordFiles.forEach((file) => {
+ if (!scoreMap.has(file)) scoreMap.set(file, {});
+ scoreMap.get(file)[word] = record.score;
+ });
+ });
+
+ // create the mapping
+ files.forEach((file) => {
+ if (fileMap.has(file) && fileMap.get(file).indexOf(word) === -1)
+ fileMap.get(file).push(word);
+ else fileMap.set(file, [word]);
+ });
+ });
+
+ // now check if the files don't contain excluded terms
+ const results = [];
+ for (const [file, wordList] of fileMap) {
+ // check if all requirements are matched
+
+ // as search terms with length < 3 are discarded
+ const filteredTermCount = [...searchTerms].filter(
+ (term) => term.length > 2
+ ).length;
+ if (
+ wordList.length !== searchTerms.size &&
+ wordList.length !== filteredTermCount
+ )
+ continue;
+
+ // ensure that none of the excluded terms is in the search result
+ if (
+ [...excludedTerms].some(
+ (term) =>
+ terms[term] === file ||
+ titleTerms[term] === file ||
+ (terms[term] || []).includes(file) ||
+ (titleTerms[term] || []).includes(file)
+ )
+ )
+        continue; // skip this file only; later results may still be valid
+
+ // select one (max) score for the file.
+ const score = Math.max(...wordList.map((w) => scoreMap.get(file)[w]));
+ // add result to the result list
+ results.push([
+ docNames[file],
+ titles[file],
+ "",
+ null,
+ score,
+ filenames[file],
+ ]);
+ }
+ return results;
+ },
+
+ /**
+ * helper function to return a node containing the
+ * search summary for a given text. keywords is a list
+ * of stemmed words.
+ */
+ makeSearchSummary: (htmlText, keywords) => {
+ const text = Search.htmlToText(htmlText);
+ if (text === "") return null;
+
+ const textLower = text.toLowerCase();
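+    // center the summary on the last matching keyword, keeping about 120
+    // characters of context before it and 240 characters in total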
+ const actualStartPosition = [...keywords]
+ .map((k) => textLower.indexOf(k.toLowerCase()))
+ .filter((i) => i > -1)
+ .slice(-1)[0];
+ const startWithContext = Math.max(actualStartPosition - 120, 0);
+
+ const top = startWithContext === 0 ? "" : "...";
+ const tail = startWithContext + 240 < text.length ? "..." : "";
+
+ let summary = document.createElement("p");
+ summary.classList.add("context");
+ summary.textContent = top + text.substr(startWithContext, 240).trim() + tail;
+
+ return summary;
+ },
+};
+
+_ready(Search.init);
diff --git a/_static/sphinx_highlight.js b/_static/sphinx_highlight.js
new file mode 100644
index 0000000..8a96c69
--- /dev/null
+++ b/_static/sphinx_highlight.js
@@ -0,0 +1,154 @@
+/* Highlighting utilities for Sphinx HTML documentation. */
+"use strict";
+
+const SPHINX_HIGHLIGHT_ENABLED = true
+
+/**
+ * highlight a given string on a node by wrapping it in
+ * span elements with the given class name.
+ */
+const _highlight = (node, addItems, text, className) => {
+ if (node.nodeType === Node.TEXT_NODE) {
+ const val = node.nodeValue;
+ const parent = node.parentNode;
+ const pos = val.toLowerCase().indexOf(text);
+ if (
+ pos >= 0 &&
+ !parent.classList.contains(className) &&
+ !parent.classList.contains("nohighlight")
+ ) {
+ let span;
+
+ const closestNode = parent.closest("body, svg, foreignObject");
+ const isInSVG = closestNode && closestNode.matches("svg");
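+      // HTML <span> styling does not apply inside SVG; wrap the match in a
+      // <tspan> and queue a class-tagged <rect> (via addItems) as the highlight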
+ if (isInSVG) {
+ span = document.createElementNS("http://www.w3.org/2000/svg", "tspan");
+ } else {
+ span = document.createElement("span");
+ span.classList.add(className);
+ }
+
+ span.appendChild(document.createTextNode(val.substr(pos, text.length)));
+ const rest = document.createTextNode(val.substr(pos + text.length));
+ parent.insertBefore(
+ span,
+ parent.insertBefore(
+ rest,
+ node.nextSibling
+ )
+ );
+ node.nodeValue = val.substr(0, pos);
+ /* There may be more occurrences of search term in this node. So call this
+ * function recursively on the remaining fragment.
+ */
+ _highlight(rest, addItems, text, className);
+
+ if (isInSVG) {
+ const rect = document.createElementNS(
+ "http://www.w3.org/2000/svg",
+ "rect"
+ );
+ const bbox = parent.getBBox();
+ rect.x.baseVal.value = bbox.x;
+ rect.y.baseVal.value = bbox.y;
+ rect.width.baseVal.value = bbox.width;
+ rect.height.baseVal.value = bbox.height;
+ rect.setAttribute("class", className);
+ addItems.push({ parent: parent, target: rect });
+ }
+ }
+ } else if (node.matches && !node.matches("button, select, textarea")) {
+ node.childNodes.forEach((el) => _highlight(el, addItems, text, className));
+ }
+};
+const _highlightText = (thisNode, text, className) => {
+ let addItems = [];
+ _highlight(thisNode, addItems, text, className);
+ addItems.forEach((obj) =>
+ obj.parent.insertAdjacentElement("beforebegin", obj.target)
+ );
+};
+
+/**
+ * Small JavaScript module for the documentation.
+ */
+const SphinxHighlight = {
+
+ /**
+ * highlight the search words provided in localstorage in the text
+ */
+ highlightSearchWords: () => {
+ if (!SPHINX_HIGHLIGHT_ENABLED) return; // bail if no highlight
+
+ // get and clear terms from localstorage
+ const url = new URL(window.location);
+ const highlight =
+ localStorage.getItem("sphinx_highlight_terms")
+ || url.searchParams.get("highlight")
+ || "";
+ localStorage.removeItem("sphinx_highlight_terms")
+ url.searchParams.delete("highlight");
+ window.history.replaceState({}, "", url);
+
+ // get individual terms from highlight string
+ const terms = highlight.toLowerCase().split(/\s+/).filter(x => x);
+ if (terms.length === 0) return; // nothing to do
+
+ // There should never be more than one element matching "div.body"
+ const divBody = document.querySelectorAll("div.body");
+ const body = divBody.length ? divBody[0] : document.querySelector("body");
+ window.setTimeout(() => {
+ terms.forEach((term) => _highlightText(body, term, "highlighted"));
+ }, 10);
+
+ const searchBox = document.getElementById("searchbox");
+ if (searchBox === null) return;
+ searchBox.appendChild(
+ document
+ .createRange()
+ .createContextualFragment(
+          '<p class="highlight-link">' +
+            '<a href="javascript:SphinxHighlight.hideSearchWords()">' +
+            _("Hide Search Matches") +
+            "</a></p>"
+        )
+    );
+ },
+
+ /**
+ * helper function to hide the search marks again
+ */
+ hideSearchWords: () => {
+ document
+ .querySelectorAll("#searchbox .highlight-link")
+ .forEach((el) => el.remove());
+ document
+ .querySelectorAll("span.highlighted")
+ .forEach((el) => el.classList.remove("highlighted"));
+ localStorage.removeItem("sphinx_highlight_terms")
+ },
+
+ initEscapeListener: () => {
+ // only install a listener if it is really needed
+ if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) return;
+
+ document.addEventListener("keydown", (event) => {
+ // bail for input elements
+ if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return;
+ // bail with special keys
+ if (event.shiftKey || event.altKey || event.ctrlKey || event.metaKey) return;
+ if (DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS && (event.key === "Escape")) {
+ SphinxHighlight.hideSearchWords();
+ event.preventDefault();
+ }
+ });
+ },
+};
+
+_ready(() => {
+ /* Do not call highlightSearchWords() when we are on the search page.
+ * It will highlight words from the *previous* search query.
+ */
+ if (typeof Search === "undefined") SphinxHighlight.highlightSearchWords();
+ SphinxHighlight.initEscapeListener();
+});
diff --git a/_static/taxonomy.html b/_static/taxonomy.html
new file mode 100644
index 0000000..7c90393
--- /dev/null
+++ b/_static/taxonomy.html
@@ -0,0 +1,270 @@
+Time-Domain Taxonomy
+2024-04-09 01:11:14.680776 Version: 0.1.6