Simulation-related changes required for M4.5.7 #125
Conversation
So that we can run both OpenCOR-related and oSPARC-related simulations.
With /start_simulation and /check_simulation endpoints, so that we can start and check a longish simulation.
Needed for datasets 4 and 17.
Since we replaced our /simulation endpoint with the /start_simulation and /check_simulation endpoints.
@agarny is this PR still a draft? I could use some help on how I should test this PR. I tried running the oSPARC tests and all of them passed except two.
I hadn't even noticed that it had been marked as a draft, but no, it's not a draft anymore. As for the failing tests, sorry about that; let me have a quick look.
@Tehsurfer, I have just run the SPARC API tests and they are all fine for me:
Ok, time to remind myself what those tests actually do and see why they pass for me and fail for you.
By the way, @Tehsurfer, you mentioned test
To run just the oSPARC tests:
So, all good indeed...
@Tehsurfer, what kind of error message are you getting? FWIW, I have just modified
So, yes, I really can't see what might be wrong for you.
This test relies on a file in dataset 135 that has changed.
@alan-wu and @Tehsurfer, I have just updated this PR to be in sync with the
@agarny yeah I get the error:
My guess is that it comes from this line: temp_input_file = tempfile.NamedTemporaryFile(mode="w+"), but I'm not too sure.
It still looks like a Windows-related issue to me, not to mention that that code is already used in v1.9.0 of the SPARC API (see here).
Ok I'll try testing by sending requests from here: |
@agarny Yeah I managed to fix that permission error by changing
But now I am getting this error:
I am not sure we want
As for the
Ok, I figured it out: I needed oSPARC keys, and I found the fix here: https://stackoverflow.com/questions/23212435/permission-denied-to-write-to-my-temporary-file
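For the record, the usual fix from that StackOverflow thread (a sketch, assuming the change made here was along these lines): on Windows, a NamedTemporaryFile cannot be reopened by name while it is still open, so delete=False plus manual cleanup is used instead.

```python
import json
import os
import tempfile

# Sketch only: on Windows, NamedTemporaryFile(delete=True) keeps the file
# locked, so a second open by name fails with a permission error.
temp_config_file = tempfile.NamedTemporaryFile(mode="w+", delete=False)
try:
    json.dump({"example": "config"}, temp_config_file)
    temp_config_file.close()  # close before anything else reopens it by name
    # ... hand temp_config_file.name to whatever needs the file ...
finally:
    os.remove(temp_config_file.name)  # manual cleanup since delete=False
```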
Worked for me!
Mostly it looked pretty good, but I found the if statements a bit hard to follow at times.
app/main.py (Outdated)
@@ -685,13 +688,14 @@ def build_filetypes_table(osparc_viewers):
@app.route("/sim/dataset/<id_>")
def sim_dataset(id_):
@app.route("/sim/dataset/<id_>/<debug_>")
Do you think using DEPLOY_ENV would be better than having a route for debugging? i.e.:

except ClientError:
    # If the file is not under folder 'files', check under folder 'packages'.
    if Config.DEPLOY_ENV != "production":  # 'is not' compares identity, so '!=' is needed here
        logging.warning(
            "Required file template.json was not found under /files folder, trying under /packages..."
        )
I don't like having a route for debugging, BUT that warning (and also the error further down the code) is only relevant for SIM-Core based simulation datasets (depending on what you do with the /sim/dataset/<id_> endpoint), not CellML-based simulation datasets.

Here, I am trying to reuse the /sim/dataset/<id_> endpoint, and that warning (and error) is not relevant to me, no matter whether it's a SIM-Core based simulation or a CellML-based one. So, I just don't want to see that message at all, be it in production or in my development environment. It clutters my console for no good reason.

I could have copied/pasted most of that code, but that would have been a bit silly, hence the debugging route approach.
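For reference, a sketch of that reuse with Flask's stacked routes (the debug_ default and the inject_template_data() signature are assumptions; app comes from app/main.py):

```python
@app.route("/sim/dataset/<id_>")
@app.route("/sim/dataset/<id_>/<debug_>")
def sim_dataset(id_, debug_="debug"):
    # /sim/dataset/135 behaves as before, while /sim/dataset/135/nodebug
    # suppresses the template.json warning, which is only relevant to
    # SIM-Core based simulation datasets.
    return inject_template_data(id_, debug_ != "nodebug")
```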
A debug flag could be set up using an environment variable instead. The debug route is not useful to the person calling it unless they are also running the server.
> A debug flag could be set up using an environment variable instead. The debug route is not useful to the person calling it unless they are also running the server.
Sure, although such an environment variable would normally be set IF we wanted debug information, whereas here debug information is provided by default (which is wrong in the first place).

Anyway, what should such an environment variable be called? What about SPARC_API_DEBUGGING? If it's not set or set to 1 (or ON?) then debugging would be on, and if it is set to 0 (or OFF?) then debugging would be off.
SPARC_API_DEBUGGING is good. Having it default to on is consistent with DEPLOY_ENV = development, so that is fine. TRUE or FALSE is probably more consistent with the standard, although it is going to be a string comparison regardless.
> SPARC_API_DEBUGGING is good. Having it default to on is consistent with DEPLOY_ENV = development, so that is fine. TRUE or FALSE is probably more consistent with the standard, although it is going to be a string comparison regardless.
Ok, going to use SPARC_API_DEBUGGING with unset meaning TRUE.
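A minimal sketch of that convention, assuming it lives alongside the rest of the config (the exact location and parsing are not settled here):

```python
import os

# Unset (or anything other than FALSE) means debugging is on, per the
# agreement above; only an explicit FALSE turns it off.
SPARC_API_DEBUGGING = os.environ.get("SPARC_API_DEBUGGING", "TRUE").upper() != "FALSE"
```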
from time import sleep

OPENCOR_SOLVER = "simcore/services/comp/opencor"
DATASET_4_SOLVER = "simcore/services/comp/rabbit-ss-0d-cardiac-model"
DATASET_17_SOLVER = "simcore/services/comp/human-gb-0d-cardiac-model"
Would switching to a dictionary remove some of the if statements?
Something like:
SOLVER_LOOKUP = {
    "OPENCOR_SOLVER": "simcore/services/comp/opencor",
    "DATASET_4_SOLVER": "simcore/services/comp/rabbit-ss-0d-cardiac-model",
    ...
}
I can't see how it would. Here, we really want to distinguish between CellML-based simulation datasets (which require OPENCOR_SOLVER to be solved) and SIM-Core based simulation datasets (which require another solver). In the latter case, having DATASET_4_SOLVER, DATASET_17_SOLVER, and DATASET_78_SOLVER is just so that we can provide more accurate user feedback, should we ever try to run a simulation using a solver that is not supported.
    password=Config.OSPARC_API_SECRET
))
solvers_api = osparc.SolversApi(api_client)
job_id = data["job_id"]
Do we need to check the keys from data? Or are they always correct?
data comes from oSPARC and, unless they change their JSON schema (which would break many things for them, I would imagine), we should be fine. Also, this is the reason we do all of that stuff in a try...except statement. If something goes wrong, we report an error.
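In other words, the pattern is roughly as follows (names and payloads are illustrative, not the actual main.py code):

```python
def check_simulation(data):
    try:
        job_id = data["job_id"]  # raises KeyError if oSPARC's schema changed
        # ... query oSPARC for the status of that job ...
        return {"status": "ok", "job_id": job_id}
    except Exception as e:
        # If anything goes wrong, report an error rather than crash.
        return {"status": "err", "description": str(e)}
```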
The data is checked on line 1145 in main.py.
> The data is checked on line 1145 in main.py.
Oh yes, I had forgotten that I had done that check. :)
Thanks @alan-wu! I missed that.
def run_simulation(model_url, json_config):
    with tempfile.NamedTemporaryFile(mode="w+") as temp_config_file:
        json.dump(json_config, temp_config_file)
def start_simulation(data):
Is it worth splitting start_simulation up into a few more functions? The flow is kind of hard for me to follow with all of the if statements.
I tried to think of a way to split this function but couldn't come up with a simple solution, not least since it relies on the use of exceptions which would need to be propagated.
Compare 94fb198 to c7fa94a
Looks good.
To be merged in ONLY IF datasets 135 and 157 have been published to the portal.
Description
Updated the SPARC API so that it can address the needs of M4.5.7. This meant removing the /simulation endpoint and replacing it with new /start_simulation and /check_simulation endpoints. This was needed to account for simulations that take quite some time to run. The previous /simulation endpoint would try to start and check a simulation in one go, but if the simulation took too long then the API would time out. Now, we can start a simulation and check on its progress until it has completed or errored. The tests have been changed accordingly.

Also updated the /sim/dataset/<id_> endpoint to make it optional to output debug information (see inject_template_data()), which doesn't apply to simulation datasets. The default behaviour hasn't changed, but with something like /sim/dataset/135/nodebug we no longer get that warning from inject_template_data().
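For illustration, a hypothetical client-side flow against the two new endpoints (the host, payload fields, and status values are assumptions, not the actual API schema):

```python
import time

import requests

API_URL = "http://localhost:5000"  # assumed local instance of the SPARC API

# Start a potentially long-running simulation (payload shape is hypothetical).
start = requests.post(f"{API_URL}/start_simulation",
                      json={"solver": "simcore/services/comp/opencor"}).json()

# Poll /check_simulation until the simulation has completed or errored,
# instead of holding a single request open until it times out.
while True:
    check = requests.post(f"{API_URL}/check_simulation", json=start).json()
    if check.get("status") in ("finished", "errored"):
        break
    time.sleep(1)
```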
Type of change
Delete those that don't apply.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
Checklist: