Stopping a test when validation fails #619
I've considered a mechanism for specifying that a measurement failing is equivalent to the phase returning a STOP. However, this is also possible today by checking the measurement's status and returning if it's a failure:

```python
@htf.measures(htf.Measurement('voltage').in_range(3, 4))
def phase(test):
  test.measurements.voltage = 5
  voltage = test.measurements._measurements['voltage']
  voltage.validate()
  if voltage.outcome == openhtf.measurements.Outcome.FAIL:
    return htf.PhaseResult.STOP
```

This is definitely reaching into the internals, so the alternatives would be to improve the ability to retrieve the current phase's measurements/outcomes (option A) or to put something more automatic in the measurements themselves (option B). For option A, here are some strawman APIs:

```python
test.get_measurement_outcome('voltage') == htf.measurements.Outcome.FAIL
test.measurements.get('voltage').outcome == ...
```

For option B:

```python
@htf.measures(
    htf.Measurement('voltage').in_range(3, 4).required_to_continue(),
    htf.Measurement('voltage').in_range(3, 4).stop_on_failure(),
    htf.Measurement('voltage').in_range(3, 4).on_failure(htf.PhaseResult.STOP),
    htf.Measurement('voltage').in_range(3, 4).on_failure(htf.PhaseResult.REPEAT),
    htf.Measurement('voltage').in_range(3, 4).on_failure(htf.PhaseResult.SKIP),
)
```

What would you like? |
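To make the option B strawmen concrete, here is a minimal, self-contained sketch of the semantics `on_failure(...)` could have: the measurement carries the phase result to return when validation fails, and the runner uses it instead of the phase's own return value. All names here mirror OpenHTF's `Outcome`/`PhaseResult` but are stand-alone illustrations, not the real library.

```python
# Hypothetical sketch of option B: a measurement that knows which PhaseResult
# a failed validation should map to. Not OpenHTF's actual API.
import enum

class Outcome(enum.Enum):
    PASS = 'PASS'
    FAIL = 'FAIL'

class PhaseResult(enum.Enum):
    CONTINUE = 'CONTINUE'
    STOP = 'STOP'
    REPEAT = 'REPEAT'
    SKIP = 'SKIP'

class Measurement:
    def __init__(self, name):
        self.name = name
        self.validators = []
        self.on_fail_result = None  # set by on_failure()
        self.outcome = None

    def in_range(self, lo, hi):
        self.validators.append(lambda v: lo <= v <= hi)
        return self

    def on_failure(self, result):
        # Strawman API: which PhaseResult a failed validation maps to.
        self.on_fail_result = result
        return self

    def set_and_validate(self, value):
        ok = all(v(value) for v in self.validators)
        self.outcome = Outcome.PASS if ok else Outcome.FAIL
        # A runner would return this in place of the phase's own result.
        if not ok and self.on_fail_result is not None:
            return self.on_fail_result
        return PhaseResult.CONTINUE

m = Measurement('voltage').in_range(3, 4).on_failure(PhaseResult.STOP)
print(m.set_and_validate(5))    # out of range -> PhaseResult.STOP
print(m.set_and_validate(3.5))  # in range -> PhaseResult.CONTINUE
```

The appeal of this shape is that the failure behavior lives next to the validator declaration, which is the clarity argument made later in this thread.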
Yeah, I was imagining something like `htf.Measurement('voltage').in_range(3, 4).fatal()`. I was also thinking about `htf.Measurement('voltage').in_range(3, 4, on_fail=STOP)`, but it seems like maybe validators are too decoupled from phases to make that happen? |
For option A, what about:

```python
test.measurements.voltage = 5
return on_fail('voltage', STOP)
```
|
Bah, that doesn't work if there's more than one measurement. |
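One way around the multiple-measurement problem is to validate everything at the end of the phase and stop if anything failed. The sketch below uses stand-in classes (`FakeMeasurement`, `stop_if_any_failed` are hypothetical names, not OpenHTF API) purely to illustrate the shape:

```python
# Sketch: at the end of a phase, validate every measurement and stop if any
# failed. FakeMeasurement is a minimal stand-in, not an OpenHTF class.
FAIL, PASS = 'FAIL', 'PASS'

class FakeMeasurement:
    def __init__(self, name, validator):
        self.name = name
        self.validator = validator
        self.value = None
        self.outcome = None

    def validate(self):
        self.outcome = PASS if self.validator(self.value) else FAIL

def stop_if_any_failed(measurements):
    """Validate all measurements; return 'STOP' if any failed, else 'CONTINUE'."""
    for m in measurements:
        m.validate()
    if any(m.outcome == FAIL for m in measurements):
        return 'STOP'
    return 'CONTINUE'

volts = FakeMeasurement('voltage', lambda v: 3 <= v <= 4)
amps = FakeMeasurement('current', lambda v: v < 1)
volts.value, amps.value = 5, 0.5
print(stop_if_any_failed([volts, amps]))  # voltage out of range -> STOP
```

The trade-off, as noted later in the thread, is that you can't stop *between* measurements this way; everything in the phase runs before the check.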
Farz's suggestion is what was originally intended, actually. It looks a bit internal-reaching, but it's more that it's not a super common thing to do, so it's a bit buried. I'm not sure you have to explicitly call validate(), but I may be wrong. We've waffled a couple times on when validation happens internally. I recommend going with Farz's approach :)

~madsci
|
This looks like a duplicate of #522, so I'll close 522. Once implemented, adding usage in all_the_things.py would be helpful too. |
From a clarity standpoint, it makes sense to me to have the failure behavior colocated with the validation decorator, as in @fahhem's option B. The downside is that then you can't stop between measurements. Or is that wrong? From @grybmadsci's comment it sounds like validators don't have to wait until the end of the phase to run, but I don't understand how a decorator could have that power. Is there some mechanism I'm not seeing? |
The code below confirms that, within a phase, the measurement isn't validated until `validate()` is called explicitly:

```python
@htf.measures(htf.Measurement('val1').equals(1))
def phase1(test):
  """When does validation happen?"""
  val1 = test.measurements._measurements['val1']
  print("before assignment, val1 = {}".format(val1))
  test.measurements.val1 = 0
  print("before validation, val1 = {}".format(val1))
  val1.validate()
  print("after validation, val1 = {}".format(val1))
```
|
Are there arguments against this (aside from the work to implement it)? |
It seems @grybmadsci might object, but for me it's just the lack of time to implement it and make sure it doesn't complicate the rest of the code. |
Stopping tests when phases fail is very common for us, so adding that functionality to validators would make a lot of sense. |
Do you actually need to specify different behavior for specific validators? Or is your goal mostly to cut down on wasted test time running phases for a test run that we already know is going to fail? I think #816 might achieve 95% of your goal, and it's a pretty easy concept to understand. |
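For context, the idea referenced here (as I read it) is a runner-level option that skips the remaining phases once any phase has failed, rather than per-validator behavior. A minimal sketch of that shape, with purely illustrative names (`run_phases`, `stop_on_first_failure` are hypothetical, not OpenHTF API):

```python
# Sketch of a runner-level "stop on first failing phase" option. Each phase
# here is just a callable returning True (pass) or False (fail).
def run_phases(phases, stop_on_first_failure=True):
    results = []
    for phase in phases:
        ok = phase()
        results.append((phase.__name__, ok))
        if stop_on_first_failure and not ok:
            break  # skip later phases; the test outcome is already known
    return results

def phase_a(): return True
def phase_b(): return False  # fails: overall test can no longer pass
def phase_c(): return True   # never runs when stop_on_first_failure=True

print(run_phases([phase_a, phase_b, phase_c]))
# -> [('phase_a', True), ('phase_b', False)]
```

This covers the "don't waste time finishing a doomed run" goal without per-measurement configuration, at the cost of no fine-grained choice between STOP/REPEAT/SKIP per validator.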
Is there a way to specify that a validation failure should stop subsequent phases from running? There are cases where finishing a test is pointless and very time-consuming.