I use different graders for final grading versus the testing available to students during development. I would like to be able to leave the students' original submissions intact in the original assessment and create a new assessment for final grading so that they can compare their expected score with their achieved score (and perhaps motivate them to test better).
Workflow would look like this:
1. Students would submit normally while the project is in progress, and the installed autograder would give them scores.
2. After the project deadline has passed, I would perform a one-time clone to a new assessment, which would contain each student's final submission as its only submission.
3. I would install a new autograder Makefile and tarball on the new assessment and regrade all (see the sketch after this list).
4. Students could compare their results from the new assessment with their results from the original assessment.
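For step 3, the new tarball's grading driver only has to report scores in whatever format the installed autograder expects. Here is a minimal sketch, assuming an Autolab-style convention where the last line of autograder output is a JSON object under a `scores` key; the test names, commands, and file names below are placeholders I made up for illustration, not part of my actual setup:

```python
#!/usr/bin/env python3
"""Sketch of a final-grading driver. Assumes the harness parses the last
line of output as JSON of the form {"scores": {...}}."""

import json
import subprocess

# Hypothetical hidden tests used only for final grading.
FINAL_TESTS = {
    "correctness": ["./student_binary", "--run-hidden-tests"],
    "robustness": ["./student_binary", "--run-stress-tests"],
}


def run_test(cmd) -> float:
    """Run one test command; award full credit on exit status 0."""
    try:
        result = subprocess.run(cmd, capture_output=True, timeout=60)
        return 100.0 if result.returncode == 0 else 0.0
    except (subprocess.TimeoutExpired, OSError):
        return 0.0


def main() -> None:
    scores = {name: run_test(cmd) for name, cmd in FINAL_TESTS.items()}
    # The JSON scores line must be the last thing printed so the
    # autograder harness can pick it up.
    print(json.dumps({"scores": scores}))


if __name__ == "__main__":
    main()
```

Swapping only this tarball on the cloned assessment would leave the original assessment's grader, and the students' development-time scores, untouched.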
This is currently doable, just not with a different assignment: if you want to do this, just swap the grader and regrade the latest submission. With imports, this should be simple.
I've never been able to use the regrade button, because tarball submissions don't regrade.
Longer term, I'd like students to be able to see their submission and its score, as well as the final score from the final grader, which requires something like a new assignment. Short term, if I could push Regrade All and have it actually do anything, that would be fine.