Squashed commit of the following:
commit 877aac1
Author: Jeremy Sadler <[email protected]>
Date:   Thu Oct 3 01:14:21 2024 +0000

    Cleaning up integrated changes

commit 6dfd2c0
Author: Jeremy Sadler <[email protected]>
Date:   Fri Sep 27 22:32:48 2024 +0000

    Making blocks more generic

commit 164b179
Author: Jeremy Sadler <[email protected]>
Date:   Wed Sep 18 23:54:45 2024 +0000

    Cleaned up some unnecessary methods.

commit d42cae2
Author: Jeremy Sadler <[email protected]>
Date:   Sun Aug 18 00:00:00 2024 +0000

    Fixing indexed variables

commit 6c648ca
Author: Jeremy Sadler <[email protected]>
Date:   Fri Aug 2 19:52:09 2024 +0000

    Removing Julia pieces (for now) and more mypy cleanup

commit 905e528
Author: Jeremy Sadler <[email protected]>
Date:   Thu Aug 1 20:06:31 2024 +0000

    Factory classes for vars and constraints

commit 8b18e97
Author: Jeremy Sadler <[email protected]>
Date:   Mon Jul 22 23:41:25 2024 +0000

    Improving test coverage

commit 069c3ef
Author: Jeremy Sadler <[email protected]>
Date:   Mon Jul 15 21:49:18 2024 +0000

    Adding tests

commit fe433a3
Author: Jeremy Sadler <[email protected]>
Date:   Thu Jul 11 22:40:17 2024 +0000

    Moving JuMP objects into their own file

commit 79af205
Author: Jeremy Sadler <[email protected]>
Date:   Tue Jul 9 21:20:36 2024 +0000

    Fixing an issue with linear trees

commit 8c459f1
Author: Jeremy Sadler <[email protected]>
Date:   Tue Jul 9 19:43:38 2024 +0000

    Making block-level modelling language choices percolate through generated variables and constraints

commit e01a352
Author: Jeremy Sadler <[email protected]>
Date:   Mon Jul 8 23:58:06 2024 +0000

    Including OmltExpr and OmltConstraints, spreading Omlt classes throughout the codebase.

commit c5b866b
Author: Jeremy Sadler <[email protected]>
Date:   Thu Jun 6 20:56:23 2024 +0000

    linting (2)

commit 98518f8
Author: Jeremy Sadler <[email protected]>
Date:   Thu Jun 6 20:42:30 2024 +0000

    linting (1)

commit 6e5292d
Author: Jeremy Sadler <[email protected]>
Date:   Thu Jun 6 13:33:16 2024 -0700

    Delete .github/workflows/python-package.yml

commit 7adf6e4
Author: Jeremy Sadler <[email protected]>
Date:   Thu Jun 6 20:22:56 2024 +0000

    adding abstract methods to expression interface

commit 1a8c124
Author: Jeremy Sadler <[email protected]>
Date:   Thu Jun 6 19:06:17 2024 +0000

    further fixing

commit 2764df1
Author: Jeremy Sadler <[email protected]>
Date:   Thu Jun 6 18:58:51 2024 +0000

    fixing variable initialization

commit dd69394
Author: Jeremy Sadler <[email protected]>
Date:   Thu Jun 6 18:47:01 2024 +0000

    tidying var.py

commit 6e141d4
Author: Jeremy Sadler <[email protected]>
Date:   Thu Jun 6 18:22:30 2024 +0000

    cleanup in expression.py

commit ef0885b
Author: Jeremy Sadler <[email protected]>
Date:   Wed Jun 5 20:01:17 2024 +0000

    Including OmltExpr expressions for the OmltVars

commit a64f6d7
Author: Jeremy Sadler <[email protected]>
Date:   Fri May 17 11:33:14 2024 -0700

    Update setup.cfg

commit 83ccaef
Author: Jeremy Sadler <[email protected]>
Date:   Sun Apr 21 18:05:47 2024 -0700

    Update setup.cfg

commit 63f0e5f
Author: Jeremy Sadler <[email protected]>
Date:   Mon Mar 18 22:41:10 2024 -0700

    Create python-package.yml

commit ea1154c
Author: Jeremy Sadler <[email protected]>
Date:   Fri May 17 11:45:42 2024 -0700

    Update main.yml

commit 7eecd26
Author: Jeremy Sadler <[email protected]>
Date:   Fri May 17 11:44:36 2024 -0700

    Update main.yml

commit f844c2d
Author: Jeremy Sadler <[email protected]>
Date:   Fri May 17 11:42:42 2024 -0700

    Update main.yml

commit 9ab7fc3
Author: Jeremy Sadler <[email protected]>
Date:   Fri May 17 11:40:35 2024 -0700

    Update main.yml

commit ab25542
Author: Jeremy Sadler <[email protected]>
Date:   Fri May 17 11:39:03 2024 -0700

    Update setup.cfg for Keras version

commit 61c8daf
Author: Jeremy Sadler <[email protected]>
Date:   Fri May 17 11:38:30 2024 -0700

    Update Python versions in main.yml

commit 0ae5b75
Author: Jeremy Sadler <[email protected]>
Date:   Mon Apr 22 00:35:06 2024 +0000

    Fixing some whitespace linting

commit cbcefcb
Author: Jeremy Sadler <[email protected]>
Date:   Mon Apr 22 00:20:48 2024 +0000

    restoring action workflow file

commit c911bb0
Author: Jeremy Sadler <[email protected]>
Date:   Mon Apr 22 00:16:13 2024 +0000

    removing tweaked action file

commit c929d54
Author: Jeremy Sadler <[email protected]>
Date:   Sat Apr 20 23:21:18 2024 -0700

    Fix Keras version at 2.9

    Keras 3 requires models to have the .keras file format. Going forward we should probably update the test models to use this format, but to unblock I'm holding back the Keras version.

commit 738f7fd
Author: Jeremy Sadler <[email protected]>
Date:   Sat Apr 20 23:01:19 2024 -0700

    Use tensorflow-cpu for testing to save space

commit c4ab257
Author: Jeremy Sadler <[email protected]>
Date:   Fri Apr 19 17:43:43 2024 -0700

    Make test for JuMP variables conditional on presence of JuMP

commit 09c9945
Author: Jeremy Sadler <[email protected]>
Date:   Fri Apr 19 17:35:36 2024 -0700

    Update var.py

commit 991dd37
Author: Jeremy Sadler <[email protected]>
Date:   Fri Apr 19 17:29:08 2024 -0700

    Update var.py

commit b57848a
Author: Jeremy Sadler <[email protected]>
Date:   Fri Apr 19 16:58:01 2024 -0700

    Getting dependencies lined up correctly

commit 1490f42
Author: Jeremy Sadler <[email protected]>
Date:   Fri Apr 19 16:52:08 2024 -0700

    Removing duplicate line

commit ef42ba3
Author: Jeremy Sadler <[email protected]>
Date:   Fri Apr 19 19:19:29 2024 +0000

    Cleaning up variables - MOI dependency

commit fa62661
Author: Jeremy Sadler <[email protected]>
Date:   Fri Apr 19 19:19:29 2024 +0000

    Cleaning up variables - MOI dependency

commit 5dae012
Author: Jeremy Sadler <[email protected]>
Date:   Fri Apr 19 19:19:29 2024 +0000

    Cleaning up variables

commit 29b89bc
Author: Jeremy Sadler <[email protected]>
Date:   Mon Apr 8 18:28:49 2024 +0000

    Implementing JuMP format scalar and indexed
    variables.

commit 3c20611
Author: Jeremy Sadler <[email protected]>
Date:   Tue Mar 19 00:58:12 2024 -0700

    Removing ipopt from CI workflow

commit 6e36c47
Author: Jeremy Sadler <[email protected]>
Date:   Mon Mar 18 22:16:04 2024 -0700

    Create main.yml

    copying CI workflow over

commit 9178a1b
Author: Jeremy Sadler <[email protected]>
Date:   Tue Mar 19 02:05:50 2024 +0000

    OmltVar wrapper class

commit 0e86c9f
Author: Jeremy Sadler <[email protected]>
Date:   Tue Mar 19 02:05:50 2024 +0000

    OmltVar wrapper class

commit 7515f57
Author: Jeremy Sadler <[email protected]>
Date:   Mon Jun 24 05:29:48 2024 +0000

    Fixing mypy typing errors

commit 7bb6f0d
Author: Jeremy Sadler <[email protected]>
Date:   Mon Jun 24 05:29:48 2024 +0000

    Fixing mypy typing errors

commit ce6a944
Author: Jeremy Sadler <[email protected]>
Date:   Sun Jun 23 00:27:31 2024 +0000

    Fixing ruff linting errors.

commit 8a44751
Author: Jeremy Sadler <[email protected]>
Date:   Thu Jun 13 22:08:18 2024 +0000

    Fixing initial batch of ruff errors

commit 0379ec6
Author: Lukas Turcani <[email protected]>
Date:   Thu May 30 18:32:52 2024 +0100

    Add back for mypy

commit 2b0f991
Author: Lukas Turcani <[email protected]>
Date:   Thu May 30 18:31:55 2024 +0100

    remove unnecessary things

commit 551530d
Author: Lukas Turcani <[email protected]>
Date:   Thu May 30 18:21:20 2024 +0100

    wip

commit 4c40b8b
Author: Lukas Turcani <[email protected]>
Date:   Thu May 30 18:20:48 2024 +0100

    thing

commit 8042185
Author: Lukas Turcani <[email protected]>
Date:   Thu May 30 18:20:00 2024 +0100

    wip

commit fd4ef72
Author: Lukas Turcani <[email protected]>
Date:   Thu May 30 18:18:08 2024 +0100

    add link

commit 5492ddb
Author: Lukas Turcani <[email protected]>
Date:   Thu May 30 18:03:26 2024 +0100

    wip

commit 3d056e9
Author: Lukas Turcani <[email protected]>
Date:   Thu May 30 12:57:50 2024 +0100

    Add thing

commit 0790eb8
Author: Lukas Turcani <[email protected]>
Date:   Thu May 30 11:37:53 2024 +0100

    Thing

commit e735707
Author: Lukas Turcani <[email protected]>
Date:   Thu May 30 11:34:35 2024 +0100

    Add conda

commit cc07a30
Author: Lukas Turcani <[email protected]>
Date:   Wed May 29 16:16:38 2024 +0100

    wip

commit c58ec68
Author: Lukas Turcani <[email protected]>
Date:   Tue May 28 22:41:59 2024 +0100

    wip

commit 0a2671f
Author: Lukas Turcani <[email protected]>
Date:   Tue May 28 22:20:10 2024 +0100

    Add workflows

commit 321a2e2
Author: Jiří Němeček <[email protected]>
Date:   Sat Aug 24 19:15:47 2024 +0200

    Fixing 404 errors of links to notebooks in the documentation (cog-imperial#143)

    I assume that the notebooks have been moved, but the documentation links
    did not reflect that.

    **Legal Acknowledgement**\
    By contributing to this software project, I agree my contributions are
    submitted under the BSD license.
    I represent I am authorized to make the contributions and grant the
    license.
    If my employer has rights to intellectual property that includes these
    contributions,
    I represent that I have received permission to make contributions and
    grant the required license on behalf of that employer.

commit caebfc4
Author: Andrew Lee <[email protected]>
Date:   Thu Aug 22 13:28:24 2024 -0400

    Replace _BlockData with BlockData (cog-imperial#144)

    Pyomo recently made ComponentData classes public
    (Pyomo/pyomo#3221) which will be part of the
    upcoming release. Currently, this causes the following error to occur in
    OMLT:

    ```
    TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
    ```

    The Pyomo team is working to address this issue; however, OMLT
    should update its code regardless, since deprecation warnings will
    otherwise be emitted when using the old class names.

    The fix is to replace all instances of `_BlockData` with `BlockData`
    (just removing the underscore) - this applies to any other instance of
    Pyomo component data objects as well (although I could only find 2
    instances of these in the OMLT code).

    **Legal Acknowledgement**\
    By contributing to this software project, I agree my contributions are
    submitted under the BSD license.
    I represent I am authorized to make the contributions and grant the
    license.
    If my employer has rights to intellectual property that includes these
    contributions,
    I represent that I have received permission to make contributions and
    grant the required license on behalf of that employer.

    Co-authored-by: jalving <[email protected]>

commit c6d274f
Author: Emma Johnson <[email protected]>
Date:   Thu Aug 22 10:56:10 2024 -0400

    Add tolerance to enforce strict inequalities in linear tree formulations (cog-imperial#163)

    This PR adds a tolerance at which to enforce "strict" inequalities in
    linear model trees: That is, the right branch will require that the
    feature value be greater than or equal to the bound plus this tolerance
    (epsilon). This means that users can tune epsilon in order to ensure
    that the MIP solution will match the tree prediction.
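
    As a rough illustration (the names and epsilon value below are
    hypothetical, not the PR's actual code), one node's branching
    disjunction looks like:

    ```python
    import pyomo.environ as pyo
    from pyomo.gdp import Disjunct, Disjunction

    epsilon = 1e-4  # user-tunable tolerance
    split = 0.5  # threshold learned at this tree node

    m = pyo.ConcreteModel()
    m.x = pyo.Var(bounds=(0.0, 1.0))

    # Left branch: feature value at or below the split.
    m.left = Disjunct()
    m.left.c = pyo.Constraint(expr=m.x <= split)

    # Right branch: "strictly" greater, enforced as split + epsilon so the
    # MIP solution cannot sit exactly on the threshold and disagree with
    # the tree prediction.
    m.right = Disjunct()
    m.right.c = pyo.Constraint(expr=m.x >= split + epsilon)

    m.branch = Disjunction(expr=[m.left, m.right])
    pyo.TransformationFactory("gdp.bigm").apply_to(m)
    ```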

    Additionally, the PR simplifies the implementation of the hybrid bigm
    linear tree formulation by using two modern pyomo.gdp transformations.
    This does mean that the linear tree formulations will rely on
    pyomo>=6.7.1 though, if that's okay.

    **Legal Acknowledgement**\
    By contributing to this software project, I agree my contributions are
    submitted under the BSD license.
    I represent I am authorized to make the contributions and grant the
    license.
    If my employer has rights to intellectual property that includes these
    contributions,
    I represent that I have received permission to make contributions and
    grant the required license on behalf of that employer.

    ---------

    Co-authored-by: Emma Johnson <[email protected]>

commit d43643a
Author: Lukas Turcani <[email protected]>
Date:   Tue Aug 20 23:53:51 2024 +0100

    Clean up package boilerplate (cog-imperial#149)

    This PR does a couple of things to clean up the boilerplate related to
    packaging OMLT, see sections below for detailed explanations of the
    changes.

    * Remove `setup.cfg` , `setup.py`, `docs/requirements.txt`, `tox.ini` in
    favour of `pyproject.toml`.
    * Place `conda` requirements into `environment.yml`
    * Create new workflows `tests.yml` and `publish_release.yml`
    * Add quality checks using `ruff`, `mypy`, `doctest`
    * Use `just` for developer experience
    * Update the `Development` section of `README` to talk about `just`
    * Clean up `conf.py`
    * Move `pull_request_template.md`
    * Allow publishing of package to pypi by pushing a new version tag

    # Other comments

    * consider internal package structure
    * force squash merge of PRs - this keeps git history for the `main`
    branch nice and clean

    # Using `pyproject.toml`

    `pyproject.toml` is the simplest way to provide package metadata for a
    Python package. It is easy to read and also provides sections for
    configuring tools such as `pytest`, `ruff` and `mypy` all in one
    place. It works seamlessly with the modern Python ecosystem.

    I set up `pyproject.toml` to automatically detect the version of the
    code from git tags. No need to duplicate version numbers across the
    repo. Just add a new tag and everything will be updated. In addition,
    when a new git tag is pushed to the GitHub repo, the new
    `publish_release` workflow will be triggered and a new PYPI version
    released. (See more on this below).

    I also set it up so that the version is automatically added to a file
    called `src/omlt/_version.py` which holds the `__version__` variable.
    This file is autogenerated and therefore added to `.gitignore`. The
    `__version__` variable is then re-exported in `src/omlt/__init__.py` so
    that our users have access to it.
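
    For illustration, the re-export amounts to something like this (the
    exact generated contents depend on the build backend; the version
    string here is a made-up example):

    ```python
    # src/omlt/_version.py -- autogenerated at build time, gitignored
    __version__ = "1.2.0"  # illustrative value, derived from the git tag

    # src/omlt/__init__.py -- re-export so users can read omlt.__version__
    from omlt._version import __version__
    ```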

    I tried to preserve all the information stored in the `setup.cfg` and
    other deleted files -- let me know if there is something I missed!

    ## Optional dependencies

    The `pyproject.toml` file allows the creation of optional dependencies.
    For example, our users can install

    ```bash
    pip install omlt[keras]
    # or
    pip install omlt[torch]
    # or
    pip install omlt[linear-tree,keras-gpu]
    ```
    Of course, any combination of optional dependencies is valid too. This allows
    our users to install the dependencies specific to their use case. Note
    that:

    * I made `onnx` and `onnxruntime` a required dependency because from my
    understanding it is almost always used
    * I added an optional dependency set called `dev` which developers can
    use to install all developer tools and all dependencies -- you need this
    to run all the tests for example
    * There is also `dev-gpu` which installs the GPU version of tensorflow
    in case the developer has a GPU

    The available optional dependencies are:

    * `linear-tree`, installs the linear tree dependency
    * `keras`, installs tensorflow and keras
    * `keras-gpu`, installs tensorflow for the gpu and keras
    * `torch`, installs torch and torch geometric
    * `dev-tools` - this is not to be used directly but allows easy re-use
    of dev tools in other optional dependencies, namely dev and dev-gpu
    * `docs` - installs dependencies required to compile docs
    * `dev` - dependencies needed for developing the project, such as tooling
    * `dev-gpu` - same as dev but installed with gpu support

    Our documentation probably needs to be updated to tell users to
    install omlt with some combination of the `linear-tree`, `keras`,
    `keras-gpu`, and `torch` optional dependencies, depending on which
    features of the package they are using.

    # Quality checks with `ruff`, `mypy` and `doctest`

    I've enabled `ruff`, `mypy` and `doctest`. Currently there are no
    doctests, but it's good to have it set up so that it runs in case any are
    added in the future.

    Both `ruff` and `mypy` are failing because there are a number of things
    which need to be fixed. For both `ruff` and `mypy` I have disabled some
    checks which it would be good to enable eventually but are probably a
    fair amount of work to fix -- these have comments in `pyproject.toml`.
    The remaining failing checks are ones which I would recommend fixing
    ASAP. There are two approaches: merge now and fix these errors later, or
    keep a separate branch where these are incrementally fixed. Up to you to
    decide what you prefer.

    I told ruff to check for `google` style docstrings. I think these are
    the best because they have good readability and work the best with type
    hints, in my opinion.
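
    For example, a google-style docstring with type hints might look like
    this (an illustrative function, not one from the codebase):

    ```python
    def scale_input(value: float, factor: float = 1.0) -> float:
        """Scale a raw input value.

        Args:
            value: The raw value to scale.
            factor: Multiplier applied to ``value``.

        Returns:
            The scaled value.
        """
        return value * factor
    ```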

    # Using `just` instead of `tox`

    https://github.com/casey/just is a simple command runner. It allows the
    developers to define and re-use common operations, for example I can
    define a `check` recipe and then run

    ```bash
    just check
    ```

    in my command line and it will run all the tests. The beauty of this is
    that `just` is extremely simple. If you read the file, it's basically a
    sequence of bash instructions for each recipe. This makes the recipes
    transparent, easy to understand, and useful as code-as-documentation.
    Users can just read a recipe and run its commands one by one to get the
    same effect without having `just` installed. There is no magic, which
    helps with debugging issues. It's also language agnostic. `just` comes
    as a small stand-alone binary with no dependencies, which makes it a
    very non-intrusive tool to have on your computer.

    The downside is that it does not provide automatic management of Python
    environments, which I believe tox does provide. The other side of this
    is that we allow developers to use their favorite tools for managing
    venvs rather than prescribing certain tools for this repo. (The
    difference with `just` is that it is an essentially optional tool that
    also serves as documentation.)

    I may be overly opinionated on this one, so feel free to push back.

    # Cleaning up `docs/conf.py`

    I removed a bunch of the commented out code. This makes it easier to see
    what the configuration is and also prevents the commented out options
    from becoming out of date when a new release of sphinx is made.

    # Moving `pull_request_template.md`

    I moved this into the `.github` folder because it is GitHub
    configuration. Very optional, but makes more sense to me.

    # `readthedocs` automated action

    This guide
    https://docs.readthedocs.io/en/stable/guides/pull-requests.html shows
    how to set it up. It requires admin permissions on readthedocs -- I can
    jump on a call to help with this.

    # publishing to `PYPI` with a git tag

    For this, an API key for PYPI needs to be created and added to the
    repo's secrets -- I can jump on a call to help with this.

    # consider `_internal` package structure

    One way to make it easier to manage private vs public code in a
    repository is to create an `_internal` folder where all the code goes.
    This way all code can be shared easily and moved between modules, and it
    is private by default, so changes to internal code do not break users.
    Public modules then just re-export code in the `_internal` submodules.
    You can see an example of this structure here:
    https://github.com/lukasturcani/stk. Not a huge issue, but I find it
    very helpful for managing what is actually exposed to users as the
    code-base grows.
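
    A sketch of the layout (module names are hypothetical):

    ```python
    # src/omlt/_internal/block.py -- implementation, private by default
    class OmltBlockData:
        """Implementation detail; free to change between releases."""


    # src/omlt/block.py -- thin public module re-exporting the supported API
    from omlt._internal.block import OmltBlockData

    __all__ = ["OmltBlockData"]
    ```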

    **Legal Acknowledgement**\
    By contributing to this software project, I agree my contributions are
    submitted under the BSD license.
    I represent I am authorized to make the contributions and grant the
    license.
    If my employer has rights to intellectual property that includes these
    contributions,
    I represent that I have received permission to make contributions and
    grant the required license on behalf of that employer.

    ---------

    Co-authored-by: Jeremy Sadler <[email protected]>
jezsadler committed Oct 3, 2024
1 parent e8857ff commit ac0912c
Showing 43 changed files with 1,316 additions and 937 deletions.
2 changes: 1 addition & 1 deletion docs/api_doc/omlt.block.rst
@@ -8,7 +8,7 @@ OMLT Block
:show-inheritance:

.. note::
- `OmltBlock` is the name used to declare the custom Pyomo block which is exposed to the user. The block functionality is given by `OmltBlockData` which inherits from Pyomo `_BlockData`.
+ `OmltBlock` is the name used to declare the custom Pyomo block which is exposed to the user. The block functionality is given by `OmltBlockData` which inherits from Pyomo `BlockData`.

.. autoclass:: omlt.block.OmltBlockData
:members:
4 changes: 2 additions & 2 deletions docs/notebooks.rst
@@ -14,7 +14,7 @@ The first set of notebooks demonstrates the basic mechanics of OMLT and shows ho

* `index_handling.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/index_handling.ipynb>`_ shows how to use `IndexMapper` to handle the mappings between indexes.

- * `bo_with_trees.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/bo_with_trees.ipynb>`_ incorporates gradient-boosted trees into a Bayesian optimization loop to optimize the Rosenbrock function.
+ * `bo_with_trees.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/trees/bo_with_trees.ipynb>`_ incorporates gradient-boosted trees into a Bayesian optimization loop to optimize the Rosenbrock function.

* `linear_tree_formulations.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/trees/linear_tree_formulations.ipynb>`_ showcases the different linear model decision tree formulations available in OMLT.

@@ -24,7 +24,7 @@ The second set of notebooks gives application-specific examples:

* `mnist_example_convolutional.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/mnist_example_convolutional.ipynb>`_ trains a convolutional neural network on MNIST and uses OMLT to find adversarial examples.

- * `graph_neural_network_formulation.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/graph_neural_network_formulation.ipynb>`_ transforms graph neural networks into OMLT and builds formulation to solve optimization problems.
+ * `graph_neural_network_formulation.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/graph_neural_network_formulation.ipynb>`_ transforms graph neural networks into OMLT and builds formulation to solve optimization problems.

* `auto-thermal-reformer.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/auto-thermal-reformer.ipynb>`_ develops a neural network surrogate (using sigmoid activations) with data from a process model built using `IDAES-PSE <https://github.com/IDAES/idaes-pse>`_.

4 changes: 1 addition & 3 deletions docs/notebooks/data/build_sin_quadratic_csv.py
@@ -9,9 +9,7 @@
rng = np.random.default_rng()
sin_quads = pd.DataFrame(x, columns=["x"])
sin_quads["y"] = (
- np.sin(w * x)
- + x**2
- + np.array([rng.uniform() * 0.1 for _ in range(n_samples)])
+ np.sin(w * x) + x**2 + np.array([rng.uniform() * 0.1 for _ in range(n_samples)])
)

plt.plot(sin_quads["x"], sin_quads["y"])
109 changes: 68 additions & 41 deletions docs/notebooks/neuralnet/auto-thermal-reformer-relu.ipynb
@@ -78,20 +78,20 @@
],
"source": [
"import os\n",
"os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # suppress CUDA warnings from tensorflow\n",
"\n",
"os.environ[\"TF_CPP_MIN_LOG_LEVEL\"] = \"2\" # suppress CUDA warnings from tensorflow\n",
"\n",
"# import the necessary packages\n",
"from omlt import OmltBlock, OffsetScaling\n",
"from omlt.io.keras import load_keras_sequential\n",
"from omlt.neuralnet import ReluBigMFormulation\n",
"from omlt.base import OmltConstraint\n",
"import pyomo.environ as pyo\n",
"import pandas as pd\n",
"import tensorflow.keras as keras\n",
"from tensorflow.keras.models import Sequential\n",
"import pyomo.environ as pyo\n",
"from tensorflow import keras\n",
"from tensorflow.keras.layers import Dense\n",
"from tensorflow.keras.models import Sequential\n",
"from tensorflow.keras.optimizers import Adam\n",
"from tensorflow.keras.callbacks import ModelCheckpoint"
"\n",
"from omlt import OffsetScaling, OmltBlock\n",
"from omlt.io.keras import load_keras_sequential\n",
"from omlt.neuralnet import ReluBigMFormulation"
]
},
{
@@ -152,10 +152,23 @@
],
"source": [
"# read in our csv data\n",
"columns = ['Bypass Fraction', 'NG Steam Ratio', 'Steam Flow',\n",
" 'Reformer Duty','AR', 'C2H6', 'C3H8', 'C4H10',\n",
" 'CH4', 'CO', 'CO2', 'H2', 'H2O', 'N2']\n",
"df = pd.read_csv('../data/reformer.csv', usecols=columns)\n",
"columns = [\n",
" \"Bypass Fraction\",\n",
" \"NG Steam Ratio\",\n",
" \"Steam Flow\",\n",
" \"Reformer Duty\",\n",
" \"AR\",\n",
" \"C2H6\",\n",
" \"C3H8\",\n",
" \"C4H10\",\n",
" \"CH4\",\n",
" \"CO\",\n",
" \"CO2\",\n",
" \"H2\",\n",
" \"H2O\",\n",
" \"N2\",\n",
"]\n",
"df = pd.read_csv(\"../data/reformer.csv\", usecols=columns)\n",
"print(df)"
]
},
@@ -170,9 +183,21 @@
"outputs": [],
"source": [
"# separate the data into inputs and outputs\n",
"inputs = ['Bypass Fraction', 'NG Steam Ratio']\n",
"outputs = [ 'Steam Flow', 'Reformer Duty','AR', 'C2H6', 'C3H8', 'C4H10',\n",
" 'CH4', 'CO', 'CO2', 'H2', 'H2O', 'N2']\n",
"inputs = [\"Bypass Fraction\", \"NG Steam Ratio\"]\n",
"outputs = [\n",
" \"Steam Flow\",\n",
" \"Reformer Duty\",\n",
" \"AR\",\n",
" \"C2H6\",\n",
" \"C3H8\",\n",
" \"C4H10\",\n",
" \"CH4\",\n",
" \"CO\",\n",
" \"CO2\",\n",
" \"H2\",\n",
" \"H2O\",\n",
" \"N2\",\n",
"]\n",
"dfin = df[inputs]\n",
"dfout = df[outputs]"
]
@@ -199,8 +224,8 @@
"\n",
"# capture the minimum and maximum values of the scaled inputs\n",
"# so we don't use the model outside the valid range\n",
"scaled_lb = dfin.min()[inputs].values\n",
"scaled_ub = dfin.max()[inputs].values"
"scaled_lb = dfin.min()[inputs].to_numpy()\n",
"scaled_ub = dfin.max()[inputs].to_numpy()"
]
},
{
@@ -223,13 +248,13 @@
],
"source": [
"# create our Keras Sequential model\n",
"nn = Sequential(name='reformer_relu_4_20')\n",
"nn.add(Dense(units=10, input_dim=len(inputs), activation='relu'))\n",
"nn.add(Dense(units=10, activation='relu'))\n",
"nn.add(Dense(units=10, activation='relu'))\n",
"nn.add(Dense(units=10, activation='relu'))\n",
"nn = Sequential(name=\"reformer_relu_4_20\")\n",
"nn.add(Dense(units=10, input_dim=len(inputs), activation=\"relu\"))\n",
"nn.add(Dense(units=10, activation=\"relu\"))\n",
"nn.add(Dense(units=10, activation=\"relu\"))\n",
"nn.add(Dense(units=10, activation=\"relu\"))\n",
"nn.add(Dense(units=len(outputs)))\n",
"nn.compile(optimizer=Adam(), loss='mse')"
"nn.compile(optimizer=Adam(), loss=\"mse\")"
]
},
{
@@ -450,8 +475,8 @@
],
"source": [
"# train our model\n",
"x = dfin.values\n",
"y = dfout.values\n",
"x = dfin.to_numpy()\n",
"y = dfout.to_numpy()\n",
"\n",
"history = nn.fit(x, y, epochs=100)"
]
@@ -469,7 +494,7 @@
"# save the model to disk\n",
"# While not technically necessary, this shows how we can load a previously saved model into\n",
"# our optimization formulation)\n",
"nn.save('reformer_nn_relu.keras')"
"nn.save(\"reformer_nn_relu.keras\")"
]
},
{
Expand Down Expand Up @@ -523,22 +548,24 @@
"outputs": [],
"source": [
"# load the Keras model\n",
"nn_reformer = keras.models.load_model('reformer_nn_relu.keras', compile=False)\n",
"nn_reformer = keras.models.load_model(\"reformer_nn_relu.keras\", compile=False)\n",
"\n",
"# Note: The neural network is in the scaled space. We want access to the\n",
"# variables in the unscaled space. Therefore, we need to tell OMLT about the\n",
"# scaling factors\n",
"scaler = OffsetScaling(\n",
" offset_inputs={i: x_offset[inputs[i]] for i in range(len(inputs))},\n",
" factor_inputs={i: x_factor[inputs[i]] for i in range(len(inputs))},\n",
" offset_outputs={i: y_offset[outputs[i]] for i in range(len(outputs))},\n",
" factor_outputs={i: y_factor[outputs[i]] for i in range(len(outputs))}\n",
" )\n",
" offset_inputs={i: x_offset[inputs[i]] for i in range(len(inputs))},\n",
" factor_inputs={i: x_factor[inputs[i]] for i in range(len(inputs))},\n",
" offset_outputs={i: y_offset[outputs[i]] for i in range(len(outputs))},\n",
" factor_outputs={i: y_factor[outputs[i]] for i in range(len(outputs))},\n",
")\n",
"\n",
"scaled_input_bounds = {i: (scaled_lb[i], scaled_ub[i]) for i in range(len(inputs))}\n",
"\n",
"# create a network definition from the Keras model\n",
"net = load_keras_sequential(nn_reformer, scaling_object=scaler, scaled_input_bounds=scaled_input_bounds)\n",
"net = load_keras_sequential(\n",
" nn_reformer, scaling_object=scaler, scaled_input_bounds=scaled_input_bounds\n",
")\n",
"\n",
"# create the variables and constraints for the neural network in Pyomo\n",
"m.reformer.build_formulation(ReluBigMFormulation(net))"
Expand All @@ -555,8 +582,8 @@
"outputs": [],
"source": [
"# now add the objective and the constraints\n",
"h2_idx = outputs.index('H2')\n",
"n2_idx = outputs.index('N2')\n",
"h2_idx = outputs.index(\"H2\")\n",
"n2_idx = outputs.index(\"N2\")\n",
"m.obj = pyo.Objective(expr=m.reformer.outputs[h2_idx], sense=pyo.maximize)\n",
"m.con = pyo.Constraint(expr=m.reformer.outputs[n2_idx] <= 0.34)"
]
@@ -572,7 +599,7 @@
"outputs": [],
"source": [
"# now solve the optimization problem (this may take some time)\n",
"solver = pyo.SolverFactory('cbc')\n",
"solver = pyo.SolverFactory(\"cbc\")\n",
"status = solver.solve(m, tee=False)"
]
},
Expand All @@ -597,10 +624,10 @@
}
],
"source": [
"print('Bypass Fraction:', pyo.value(m.reformer.inputs[0]))\n",
"print('NG Steam Ratio:', pyo.value(m.reformer.inputs[1]))\n",
"print('H2 Concentration:', pyo.value(m.reformer.outputs[h2_idx]))\n",
"print('N2 Concentration:', pyo.value(m.reformer.outputs[n2_idx]))"
"print(\"Bypass Fraction:\", pyo.value(m.reformer.inputs[0]))\n",
"print(\"NG Steam Ratio:\", pyo.value(m.reformer.inputs[1]))\n",
"print(\"H2 Concentration:\", pyo.value(m.reformer.outputs[h2_idx]))\n",
"print(\"N2 Concentration:\", pyo.value(m.reformer.outputs[n2_idx]))"
]
}
],