Testing updates #494

Merged: 14 commits, merged Oct 12, 2023. Showing changes from 8 commits.
22 changes: 22 additions & 0 deletions .github/workflows/capgen_unit_tests.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,22 @@
name: Capgen Unit Tests
Comment (Collaborator):

@gold2718 @climbfuji @mwaxmonsky @peverwhee
It would be nice if we had a CI test in the feature_capgen branch that ran the ccpp_prebuild.

We do run ccpp_prebuild as part of the ccpp-scm and ccpp-physics CI (e.g. https://github.com/ufs-community/ccpp-physics/blob/ufs/dev/.github/workflows/ci_scm_ccpp_prebuild.yml). One could easily modify this to run ccpp_prebuild using the feature_capgen fork, which would let us know if capgen development breaks the existing prebuild.
Thoughts? I can add this test to this PR if there is interest. Personally, I'd like to see this.

Comment (Collaborator):

I personally think this is a great idea! Our original thought was that if we could "unify" the testing system first then it would make unifying the actual framework easier. Of course @peverwhee and @mwaxmonsky are the ones actually responsible for all of this so I'm happy to let them make the final decision (at least for the NCAR side).

Comment (Collaborator):

Note that we already have ccpp_prebuild.py tests in main:

  1. In stub to build a CCPP stub that exercises the framework. See README.md in this directory. Annoyingly, this seems to be broken right now, at least on my macOS (I'll look into fixing this later today).
  2. In subdirectory tests (note the s at the end; the test directory is for capgen). These can be run via
cd tests
PYTHONPATH="$PWD/../scripts:$PWD/../scripts/parse_tools:$PYTHONPATH" python3 test_mkstatic.py
PYTHONPATH="$PWD/../scripts:$PWD/../scripts/parse_tools:$PYTHONPATH" python3 test_metadata_parser.py

Comment (Collaborator):

I have a local workflow file that can do this similar to ccpp-physics but after discussing with @peverwhee, we are going to add that in a separate PR because it fails in the current state but successfully runs when merged with the main branch.

Comment (Collaborator, PR author):

Created issue #500 for prebuild CI integration.


on:
climbfuji marked this conversation as resolved.
workflow_dispatch:
pull_request:
types: [opened, synchronize, reopened]
push:
branches:
#Trigger workflow on push to any branch or branch hierarchy:
- '**'

jobs:
unit_tests:
if: github.event_name == 'pull_request' || github.event_name == 'workflow_dispatch' || github.repository == 'NCAR/ccpp-framework'
Comment (Collaborator):

I am admittedly weak on the GitHub CI stuff, but would this activate on something that is not a PR because of the logical OR statements?
This logic is repeated in other files, so I will just ask this once since it is probably something I just don't understand (yet).

Comment (Collaborator):

The first two statements make sense. Either trigger on a PR, or when someone runs the workflow manually from the "Actions" tab (workflow dispatch). The last statement doesn't make sense to me, because it basically says to always run when the repo is the NCAR repo. Was the intent to trigger on one of the two preceding statements only if it's the NCAR repo? Then I would have expected something like (pseudo-syntax, because I actually don't know whether GitHub Actions accepts complex logical expressions)

if (pull_request || workflow_dispatch) && repo == ncar
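For what it's worth, GitHub Actions `if:` expressions do accept grouped boolean logic with parentheses, so the pseudo-syntax above could be written directly. A sketch of that condition (illustrative only, not a change made in this PR):

```yaml
jobs:
  unit_tests:
    # Trigger only for PRs or manual dispatches, and only in the NCAR repo.
    if: >-
      (github.event_name == 'pull_request' ||
       github.event_name == 'workflow_dispatch') &&
      github.repository == 'NCAR/ccpp-framework'
    runs-on: ubuntu-latest
```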

Comment (Collaborator):

I am still fairly new to GitHub Actions, so this could possibly be simplified, but the intent was to:

  1. Run the action on any change to the NCAR repo
  2. Run the action on pull requests (This may need to be updated as the workflows take longer)
  3. Allow manually running actions on forked repositories outside of pull requests.
    If there are better ways to support this in the workflow files, just let me know!

Comment (Collaborator):

@mwaxmonsky I don't think you need to worry about adding logic, just set "on" to pull_request and workflow_dispatch. This will run the test when any change is introduced via PR and allow for manual running.
There are some CI examples in the NCAR:ccpp-scm repository. Some tests are more exhaustive and only triggered for PR, other small tests are run with every push.

Comment (Collaborator):

Hi All,

I was the one who told @mwaxmonsky and @peverwhee to add this logic. The reason is that we want these tests to trigger whenever a push occurs, but only in the NCAR repo (not forks). The logic above basically says "Run these tests if it is a pull request event, if the workflow is triggered manually, OR if it is a push event and the push is occurring in the NCAR repo".

This is because folks might sometimes forget to merge their branch to the head of the NCAR branch, and thus when they "merge" the PR in, it might result in a test failure that we wouldn't catch unless the tests are run one last time after the PR is merged (which in GitHub terms counts as a "push"). The other situation where this might matter is if someone directly pushes to the NCAR repo. I realize that is generally a no-no, but I've seen it happen at least once in every repo I have worked on (usually by accident).

Of course an alternative would be to have these tests run for every push event, regardless of where, or to have the push event only occur on certain branches (e.g. feature/capgen). However, I personally don't like having tests run all the time when I am doing development on my own fork (especially if I know I am breaking things), and making it branch-specific might potentially cause issues down the road when we unify the framework. So this was the solution we landed on.

Anyways, I'm happy to change this logic if folks would like a different set of triggering events, but just wanted to try and explain it here first. Thanks!

Comment (Collaborator):

I don't think it's needed. Whenever you "push" a commit to a PR, which includes merging the latest develop in (on the command line, then push, or via the button on the PR page), the tests are triggered with the pull_request condition.

Comment (Collaborator):

Simply enforce in the branch protection that PRs need to be up to date with develop and you're done.

Comment (Collaborator):

That's a good point about the branch protection rules! If we implement that along with rules preventing any direct pushes to the NCAR repo (which may already exist?) then I agree that this can all go away.

Comment (Collaborator):

Happy to go with you through the setup this afternoon or later.

Comment (Collaborator, PR author):

@climbfuji Jesse checked and doesn't have permission to do so, so if you could instead walk me through it, I would appreciate that!

runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: update repos and install dependencies
run: sudo apt-get update && sudo apt-get install -y build-essential gfortran cmake python3 git
- name: Run unit tests
run: cd test && ./run_fortran_tests.sh

45 changes: 36 additions & 9 deletions .github/workflows/python.yaml
@@ -1,28 +1,55 @@
name: Python package

on: [push]
on:
climbfuji marked this conversation as resolved.
workflow_dispatch:
pull_request:
types: [opened, synchronize, reopened]
push:
branches:
#Trigger workflow on push to any branch or branch hierarchy:
- '**'

jobs:
build:

if: github.event_name == 'pull_request' || github.event_name == 'workflow_dispatch' || github.repository == 'NCAR/ccpp-framework'
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7]
python-version: ['3.7', '3.8', '3.9', '3.10', '3.11']

steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8 pytest
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
pip install pytest
- name: Test with pytest
if: github.repository == 'NCAR/ccpp-framework' # Only run on main repo
run: |
export PYTHONPATH=$(pwd)/scripts:$(pwd)/scripts/parse_tools
pytest
pytest -v

doctest:
if: github.event_name == 'pull_request' || github.event_name == 'workflow_dispatch' || github.repository == 'NCAR/ccpp-framework'
Comment (Collaborator):

Do we still need the complicated logic here?

Comment (Collaborator, PR author):

good catch! removed.

runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.7', '3.8', '3.9', '3.10', '3.11']

steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pytest
- name: Doctest
run: |
export PYTHONPATH=$(pwd)/scripts:$(pwd)/scripts/parse_tools
pytest -v scripts/ --doctest-modules
4 changes: 1 addition & 3 deletions pytest.ini
@@ -1,4 +1,2 @@
[pytest]
addopts = -ra -q --ignore=tests/test_capgen.py
testpaths =
tests
addopts = -ra --ignore=scripts/metadata2html.py --ignore-glob=test/**/test_reports.py
10 changes: 5 additions & 5 deletions scripts/ccpp_state_machine.py
@@ -3,11 +3,11 @@
# CCPP framework imports
from state_machine import StateMachine

_INIT_ST = r"(?:(?i)init(?:ial(?:ize)?)?)"
_FINAL_ST = r"(?:(?i)final(?:ize)?)"
_RUN_ST = r"(?:(?i)run)"
_TS_INIT_ST = r"(?:(?i)timestep_init(?:ial(?:ize)?)?)"
_TS_FINAL_ST = r"(?:(?i)timestep_final(?:ize)?)"
_INIT_ST = r"(?:init(?:ial(?:ize)?)?)"
_FINAL_ST = r"(?:final(?:ize)?)"
_RUN_ST = r"(?:run)"
_TS_INIT_ST = r"(?:timestep_init(?:ial(?:ize)?)?)"
_TS_FINAL_ST = r"(?:timestep_final(?:ize)?)"
Comment (Collaborator):

This is a dup of #493, is it needed in both places?
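The likely motivation for dropping the inline `(?i)` groups (here and in state_machine.py in this same PR) is that Python 3.11, which this PR adds to the CI matrix, turned global inline flags placed anywhere but the start of a pattern from a deprecation warning into an error. A minimal sketch of the equivalent rewrite (illustrative, not the framework's exact code):

```python
import re

# Pre-3.11 style embedded the flag inside the pattern, e.g.
# r"(?:(?i)init(?:ial(?:ize)?)?)". Python 3.11 rejects (?i) that is not
# at the very start of the pattern, so the flag moves to compile() instead.
_INIT_ST = r"(?:init(?:ial(?:ize)?)?)"
init_re = re.compile(_INIT_ST + r"$", re.IGNORECASE)

print(bool(init_re.match("INITIALIZE")))  # → True (case-insensitive)
print(bool(init_re.match("run")))         # → False
```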


# Allowed CCPP transitions
# pylint: disable=bad-whitespace
27 changes: 9 additions & 18 deletions scripts/code_block.py
@@ -13,7 +13,7 @@
class CodeBlock(object):
"""Class to store a block of code and a method to write it to a file
>>> CodeBlock([]) #doctest: +ELLIPSIS
<__main__.CodeBlock object at 0x...>
<code_block.CodeBlock object at 0x...>
>>> CodeBlock(['hi mom']) #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
ParseInternalError: Each element of <code_list> must contain exactly two items, a code string and a relative indent
@@ -24,14 +24,21 @@ class CodeBlock(object):
Traceback (most recent call last):
ParseInternalError: Each element of <code_list> must contain exactly two items, a code string and a relative indent
>>> CodeBlock([('hi mom', 1)]) #doctest: +ELLIPSIS
<__main__.CodeBlock object at 0x...>
<code_block.CodeBlock object at 0x...>
>>> from fortran_tools import FortranWriter
>>> outfile_name = "__code_block_temp.F90"
>>> outfile = FortranWriter(outfile_name, 'w', 'test file', 'test_mod')
>>> CodeBlock([('hi mom', 1)]).write(outfile, 1, {})

>>> CodeBlock([('hi {greet} mom', 1)]).write(outfile, 1, {}) #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
ParseInternalError: 'greet' missing from <var_dict>
>>> CodeBlock([('hi {{greet}} mom', 1)]).write(outfile, 1, {})
>>> CodeBlock([('{greet} there mom', 1)]).write(outfile, 1, {'greet':'hi'})
>>> outfile.__exit__()
False
>>> import os
>>> os.remove(outfile_name)
"""

__var_re = re.compile(r"[{][ ]*([A-Za-z][A-Za-z0-9_]*)[ ]*[}]")
@@ -110,19 +117,3 @@ def write(self, outfile, indent_level, var_dict):
# end for

###############################################################################
if __name__ == "__main__":
# pylint: disable=ungrouped-imports
import doctest
import os
import sys
from fortran_tools import FortranWriter
# pylint: enable=ungrouped-imports
outfile_name = "__code_block_temp.F90"
with FortranWriter(outfile_name, 'w', 'test file', 'test_mod') as outfile:
fail, _ = doctest.testmod()
# end with
if os.path.exists(outfile_name):
os.remove(outfile_name)
# end if
sys.exit(fail)
# end if
16 changes: 4 additions & 12 deletions scripts/fortran_tools/parse_fortran.py
@@ -665,6 +665,10 @@ def parse_fortran_var_decl(line, source, run_env):
'(8)'
>>> _VAR_ID_RE.match("foo(::,a:b,a:,:b)").group(2)
'(::,a:b,a:,:b)'
>>> from framework_env import CCPPFrameworkEnv
>>> _DUMMY_RUN_ENV = CCPPFrameworkEnv(None, ndict={'host_files':'', \
'scheme_files':'', \
'suites':''})
>>> parse_fortran_var_decl("integer :: foo", ParseSource('foo.F90', 'module', ParseContext()), _DUMMY_RUN_ENV)[0].get_prop_value('local_name')
'foo'
>>> parse_fortran_var_decl("integer :: foo = 0", ParseSource('foo.F90', 'module', ParseContext()), _DUMMY_RUN_ENV)[0].get_prop_value('local_name')
@@ -826,15 +830,3 @@ def parse_fortran_var_decl(line, source, run_env):
########################################################################

########################################################################

if __name__ == "__main__":
# pylint: disable=ungrouped-imports
import doctest
# pylint: enable=ungrouped-imports
from framework_env import CCPPFrameworkEnv
_DUMMY_RUN_ENV = CCPPFrameworkEnv(None, ndict={'host_files':'',
'scheme_files':'',
'suites':''})
fail, _ = doctest.testmod()
sys.exit(fail)
# end if
18 changes: 5 additions & 13 deletions scripts/metadata_table.py
@@ -504,14 +504,18 @@ def table_start(cls, line):

class MetadataSection(ParseSource):
"""Class to hold all information from a metadata header
>>> from framework_env import CCPPFrameworkEnv
>>> _DUMMY_RUN_ENV = CCPPFrameworkEnv(None, {'host_files':'', \
'scheme_files':'', \
'suites':''})
>>> MetadataSection("footable", "scheme", _DUMMY_RUN_ENV, \
parse_object=ParseObject("foobar.txt", \
["name = footable", "type = scheme", "module = foo", \
"[ im ]", "standard_name = horizontal_loop_extent", \
"long_name = horizontal loop extent, start at 1", \
"units = index | type = integer", \
"dimensions = () | intent = in"])) #doctest: +ELLIPSIS
<__main__.MetadataSection foo / footable at 0x...>
<metadata_table.MetadataSection foo / footable at 0x...>
>>> MetadataSection("footable", "scheme", _DUMMY_RUN_ENV, \
parse_object=ParseObject("foobar.txt", \
["name = footable", "type = scheme", "module = foobar", \
@@ -1267,15 +1271,3 @@ def is_scalar_reference(test_val):
return check_fortran_ref(test_val, None, False) is not None

########################################################################

if __name__ == "__main__":
# pylint: enable=ungrouped-imports
import doctest
import sys
# pylint: disable=ungrouped-imports
from framework_env import CCPPFrameworkEnv
_DUMMY_RUN_ENV = CCPPFrameworkEnv(None, {'host_files':'',
'scheme_files':'',
'suites':''})
fail, _ = doctest.testmod()
sys.exit(fail)
12 changes: 2 additions & 10 deletions scripts/metavar.py
@@ -1379,11 +1379,11 @@ class VarDictionary(OrderedDict):
>>> VarDictionary('bar', _MVAR_DUMMY_RUN_ENV, variables={})
VarDictionary(bar)
>>> VarDictionary('baz', _MVAR_DUMMY_RUN_ENV, variables=Var({'local_name' : 'foo', 'standard_name' : 'hi_mom', 'units' : 'm s-1', 'dimensions' : '()', 'type' : 'real', 'intent' : 'in'}, ParseSource('vname', 'scheme', ParseContext()), _MVAR_DUMMY_RUN_ENV)) #doctest: +ELLIPSIS
VarDictionary(baz, [('hi_mom', <__main__.Var hi_mom: foo at 0x...>)])
VarDictionary(baz, [('hi_mom', <metavar.Var hi_mom: foo at 0x...>)])
>>> print("{}".format(VarDictionary('baz', _MVAR_DUMMY_RUN_ENV, variables=Var({'local_name' : 'foo', 'standard_name' : 'hi_mom', 'units' : 'm s-1', 'dimensions' : '()', 'type' : 'real', 'intent' : 'in'}, ParseSource('vname', 'scheme', ParseContext()), _MVAR_DUMMY_RUN_ENV))))
VarDictionary(baz, ['hi_mom'])
>>> VarDictionary('qux', _MVAR_DUMMY_RUN_ENV, variables=[Var({'local_name' : 'foo', 'standard_name' : 'hi_mom', 'units' : 'm s-1', 'dimensions' : '()', 'type' : 'real', 'intent' : 'in'}, ParseSource('vname', 'scheme', ParseContext()), _MVAR_DUMMY_RUN_ENV)]) #doctest: +ELLIPSIS
VarDictionary(qux, [('hi_mom', <__main__.Var hi_mom: foo at 0x...>)])
VarDictionary(qux, [('hi_mom', <metavar.Var hi_mom: foo at 0x...>)])
>>> VarDictionary('boo', _MVAR_DUMMY_RUN_ENV).add_variable(Var({'local_name' : 'foo', 'standard_name' : 'hi_mom', 'units' : 'm s-1', 'dimensions' : '()', 'type' : 'real', 'intent' : 'in'}, ParseSource('vname', 'scheme', ParseContext()), _MVAR_DUMMY_RUN_ENV), _MVAR_DUMMY_RUN_ENV)

>>> VarDictionary('who', _MVAR_DUMMY_RUN_ENV, variables=[Var({'local_name' : 'foo', 'standard_name' : 'hi_mom', 'units' : 'm s-1', 'dimensions' : '()', 'type' : 'real', 'intent' : 'in'}, ParseSource('vname', 'scheme', ParseContext()), _MVAR_DUMMY_RUN_ENV)]).prop_list('local_name')
@@ -1982,11 +1982,3 @@ def new_internal_variable_name(self, prefix=None, max_len=63):
_MVAR_DUMMY_RUN_ENV)])

###############################################################################
if __name__ == "__main__":
# pylint: disable=ungrouped-imports
import doctest
import sys
# pylint: enable=ungrouped-imports
fail, _ = doctest.testmod()
sys.exit(fail)
# end if
11 changes: 1 addition & 10 deletions scripts/parse_tools/parse_object.py
@@ -10,7 +10,7 @@ class ParseObject(ParseContext):
"""ParseObject is a simple class that keeps track of an object's
place in a file and safely produces lines from an array of lines
>>> ParseObject('foobar.F90', []) #doctest: +ELLIPSIS
<__main__.ParseObject object at 0x...>
<parse_tools.parse_object.ParseObject object at 0x...>
>>> ParseObject('foobar.F90', []).filename
'foobar.F90'
>>> ParseObject('foobar.F90', ["##hi mom",], line_start=1).curr_line()
@@ -170,12 +170,3 @@ def __del__(self):
# end try

########################################################################

if __name__ == "__main__":
# pylint: disable=ungrouped-imports
import doctest
import sys
# pylint: enable=ungrouped-imports
fail, _ = doctest.testmod()
sys.exit(fail)
# end if
14 changes: 3 additions & 11 deletions scripts/parse_tools/parse_source.py
@@ -201,10 +201,10 @@ def __getitem__(self, index):
class ParseContext(object):
"""A class for keeping track of a parsing position
>>> ParseContext(32, "source.F90") #doctest: +ELLIPSIS
<__main__.ParseContext object at 0x...>
<parse_tools.parse_source.ParseContext object at 0x...>
>>> ParseContext("source.F90", 32)
Traceback (most recent call last):
CCPPError: ParseContext linenum must be an int
parse_tools.parse_source.CCPPError: ParseContext linenum must be an int
>>> ParseContext(32, 90) #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
CCPPError: ParseContext filenum must be a string
@@ -382,7 +382,7 @@ class ParseSource(object):
"""
A simple object for providing source information
>>> ParseSource("myname", "mytype", ParseContext(13, "foo.F90")) #doctest: +ELLIPSIS
<__main__.ParseSource object at 0x...>
<parse_tools.parse_source.ParseSource object at 0x...>
>>> ParseSource("myname", "mytype", ParseContext(13, "foo.F90")).type
'mytype'
>>> ParseSource("myname", "mytype", ParseContext(13, "foo.F90")).name
@@ -413,11 +413,3 @@ def context(self):
return self._context

########################################################################

if __name__ == "__main__":
# pylint: disable=ungrouped-imports
import doctest
# pylint: enable=ungrouped-imports
fail, _ = doctest.testmod()
sys.exit(fail)
# end if
18 changes: 2 additions & 16 deletions scripts/parse_tools/xml_tools.py
@@ -41,6 +41,8 @@ def call_command(commands, logger, silent=False):
###############################################################################
"""
Try a command line and return the output on success (None on failure)
>>> _LOGGER = init_log('xml_tools')
>>> set_log_to_null(_LOGGER)
>>> call_command(['ls', 'really__improbable_fffilename.foo'], _LOGGER) #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
CCPPError: Execution of 'ls really__improbable_fffilename.foo' failed:
@@ -350,19 +352,3 @@ def write(self, file, encoding="us-ascii", xml_declaration=None,
# end with

##############################################################################

if __name__ == "__main__":
_LOGGER = init_log('xml_tools')
set_log_to_null(_LOGGER)
try:
# First, run doctest
# pylint: disable=ungrouped-imports
import doctest
# pylint: enable=ungrouped-imports
fail, _ = doctest.testmod()
sys.exit(fail)
except CCPPError as cerr:
print("{}".format(cerr))
sys.exit(fail)
# end try
# end if
7 changes: 4 additions & 3 deletions scripts/state_machine.py
@@ -30,7 +30,7 @@ class StateMachine:
>>> StateMachine([('ab','a','b','a')]).final_state('ab')
'b'
>>> StateMachine([('ab','a','b','a')]).transition_regex('ab')
re.compile('a$')
re.compile('a$', re.IGNORECASE)
Comment (Collaborator):

Changes in this file also seem to mirror #493 but I don't see an obvious relationship between the branches since dc6458e.
Might make merging later more fun?

Comment (Collaborator):

Hopefully those changes were cherry-picked or the two PRs started from the same branch, in which case there won't be any merge conflicts. But in any case #493 should be merged first, then the update from feature/capgen pulled into this branch, and these diffs should go away magically.

Comment (Collaborator, PR author):

@climbfuji is correct. I had included them in the cleanup PR but then needed the updates in this PR to get the tests to pass. After I merge #493, I will confirm these diffs go away.

>>> StateMachine([('ab','a','b','a')]).function_match('foo_a', transition='ab')
('foo', 'a', 'ab')
>>> StateMachine([('ab','a','b',r'ax?')]).function_match('foo_a', transition='ab')
@@ -162,8 +162,9 @@ def __setitem__(self, key, value):
if len(value) != 3:
raise ValueError("Invalid transition ({}), should be of the form (inital_state, final_state, regex).".format(value))
# end if
regex = re.compile(value[2] + r"$")
function = re.compile(FORTRAN_ID + r"_(" + value[2] + r")$")
regex = re.compile(value[2] + r"$", re.IGNORECASE)
function = re.compile(FORTRAN_ID + r"_(" + value[2] + r")$",
re.IGNORECASE)
self.__stt__[key] = (value[0], value[1], regex, function)
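These compiled patterns peel the transition suffix off a scheme's entry-point name; adding `re.IGNORECASE` lets uppercase Fortran symbol names match too, consistent with Fortran identifiers being case-insensitive. A rough sketch of the behavior, with `FORTRAN_ID` stood in by a plain identifier pattern (an assumption, not the framework's actual definition):

```python
import re

# Hypothetical stand-in for the framework's FORTRAN_ID pattern.
FORTRAN_ID = r"([A-Za-z][A-Za-z0-9_]*)"

# Mirrors the compile in __setitem__ for a 'run' transition.
function = re.compile(FORTRAN_ID + r"_(run)$", re.IGNORECASE)

m = function.match("FOO_RUN")
print(m.group(1), m.group(2))  # → FOO RUN

# Without re.IGNORECASE, "FOO_RUN" would fail to match the lowercase
# 'run' part of the pattern.
```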

def __delitem__(self, key):