
Stopping a test when validation fails #619

Open

DanLipsitt opened this issue Jul 27, 2017 · 12 comments

@DanLipsitt (Contributor)
Is there a way to specify that a validation failure should stop subsequent phases from running? There are cases where finishing a test is pointless and very time-consuming.

@fahhem (Collaborator) commented Jul 27, 2017

I've considered a mechanism for specifying that a measurement failing is equivalent to the phase returning a STOP. However, I believe this is also possible by checking the measurement's status and returning if it's a failure:

import openhtf as htf

@htf.measures(htf.Measurement('voltage').in_range(3, 4))
def phase(test):
  test.measurements.voltage = 5  # out of range, so validation will fail
  # Reach into the internal measurement record to check its outcome.
  voltage = test.measurements._measurements['voltage']
  voltage.validate()
  if voltage.outcome == htf.measurements.Outcome.FAIL:
    return htf.PhaseResult.STOP

This is definitely reaching into the internals, so the alternatives would be to improve the ability to retrieve the current phase's measurements/outcomes or to put something more automatic in the measurements themselves.

For option A, here are some strawman APIs:

test.get_measurement_outcome('voltage') == htf.measurements.Outcome.FAIL
test.measurements.get('voltage').outcome == ...
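To illustrate what such an accessor might look like in practice, here is a toy sketch of option A. This is not OpenHTF's actual API; the `TestApi`, `set_measurement`, and `get_measurement_outcome` names are hypothetical stand-ins:

```python
import enum

class Outcome(enum.Enum):
    UNSET = 'UNSET'
    PASS = 'PASS'
    FAIL = 'FAIL'

class _Record:
    """Internal per-measurement record: a validator plus the latest outcome."""
    def __init__(self, validator):
        self.validator = validator
        self.outcome = Outcome.UNSET

class TestApi:
    """Toy test object exposing a public outcome accessor (option A sketch)."""
    def __init__(self, validators):
        self._records = {name: _Record(v) for name, v in validators.items()}

    def set_measurement(self, name, value):
        record = self._records[name]
        record.outcome = Outcome.PASS if record.validator(value) else Outcome.FAIL

    def get_measurement_outcome(self, name):
        return self._records[name].outcome

test = TestApi({'voltage': lambda v: 3 <= v <= 4})
test.set_measurement('voltage', 5)
print(test.get_measurement_outcome('voltage'))  # out of range -> Outcome.FAIL
```

The point of the sketch is that a phase could then check the outcome through a public method instead of reaching into `test.measurements._measurements`.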

For option B:

@htf.measures(
    htf.Measurement('voltage').in_range(3, 4).required_to_continue(),
    htf.Measurement('voltage').in_range(3, 4).stop_on_failure(),
    htf.Measurement('voltage').in_range(3, 4).on_failure(htf.PhaseResult.STOP),
    htf.Measurement('voltage').in_range(3, 4).on_failure(htf.PhaseResult.REPEAT),
    htf.Measurement('voltage').in_range(3, 4).on_failure(htf.PhaseResult.SKIP),
)
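A rough sketch of how a chainable `on_failure()` could hang together, using a toy model rather than OpenHTF's real classes (the `PhaseResult` values mirror the names above; `check()` is a hypothetical helper):

```python
import enum

class PhaseResult(enum.Enum):
    CONTINUE = 'CONTINUE'
    STOP = 'STOP'
    REPEAT = 'REPEAT'
    SKIP = 'SKIP'

class Measurement:
    """Toy Measurement with chainable validators and a failure policy."""
    def __init__(self, name):
        self.name = name
        self._validators = []
        self._on_failure = None  # PhaseResult to return when validation fails

    def in_range(self, lo, hi):
        self._validators.append(lambda v: lo <= v <= hi)
        return self  # returning self makes the calls chainable

    def on_failure(self, result):
        self._on_failure = result
        return self

    def check(self, value):
        """Validate a value; return the configured PhaseResult on failure."""
        if all(validate(value) for validate in self._validators):
            return PhaseResult.CONTINUE
        return self._on_failure or PhaseResult.CONTINUE

m = Measurement('voltage').in_range(3, 4).on_failure(PhaseResult.STOP)
print(m.check(5))    # out of range -> the configured failure result
print(m.check(3.5))  # in range -> PhaseResult.CONTINUE
```

In the real framework the phase executor, not the phase body, would presumably call something like `check()` after the phase finishes and act on the returned result.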

What would you like?

@DanLipsitt (Author)

Yeah, I was imagining something like

htf.Measurement('voltage').in_range(3, 4).fatal()

but your on_failure() examples are better.

I was also thinking about

htf.Measurement('voltage').in_range(3, 4, on_fail=STOP)

but it seems like maybe validators are too decoupled from phases to make that happen?

@DanLipsitt commented Jul 27, 2017

For option A, what about

test.measurements.voltage = 5 
return on_fail('voltage', STOP)

@DanLipsitt

Bah, that doesn't work if there's more than one measurement.

@grybmadsci (Collaborator) commented Jul 27, 2017 via email

@wallacbe (Collaborator)

This looks like a duplicate of #522, so I'll close that one. Once this is implemented, adding usage to all_the_things.py would be helpful too.

@DanLipsitt commented Jul 27, 2017

From a clarity standpoint, it makes sense to me to have the failure behavior colocated with the validation decorator as in @fahhem's option B. The downside is that then you can't stop between measurements. Or is that wrong? From @grybmadsci's comment it sounds like validators don't have to wait until the end of the phase to run, but I don't understand how a decorator could have that power. Is there some __setattr__() magic going on?
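For what it's worth, a decorator can get that power by routing assignments through a collection object that overrides `__setattr__`. Here is a minimal sketch of the idea as a toy model; it is not the actual OpenHTF implementation, just a demonstration that assignment can trigger validation immediately:

```python
class Measurements:
    """Toy measurement collection that validates on assignment via __setattr__."""
    def __init__(self, validators):
        # Use object.__setattr__ for internal state so we don't recurse
        # into our own __setattr__ override.
        object.__setattr__(self, '_validators', validators)
        object.__setattr__(self, 'outcomes', {})

    def __setattr__(self, name, value):
        validator = self._validators.get(name)
        if validator is not None:
            # Validation runs the moment the phase assigns the measurement.
            self.outcomes[name] = 'PASS' if validator(value) else 'FAIL'
        object.__setattr__(self, name, value)

meas = Measurements({'voltage': lambda v: 3 <= v <= 4})
meas.voltage = 5  # assignment triggers validation immediately
print(meas.outcomes['voltage'])  # 'FAIL'
```

Under this model the decorator only needs to register validators on the collection object up front; after that, every `test.measurements.foo = ...` assignment can validate eagerly, which would explain why validation doesn't have to wait for the end of the phase.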

@DanLipsitt

The code below confirms that calling validate() manually is unnecessary. Can we rely on this?

@htf.measures(htf.Measurement('val1').equals(1))
def phase1(test):
    """When does validation happen?"""
    val1 = test.measurements._measurements['val1']
    print("before assignment, val1 = {}".format(val1))
    test.measurements.val1 = 0
    print("before validation, val1 = {}".format(val1))
    val1.validate()
    print("after validation,  val1 = {}".format(val1))

@DanLipsitt

> I've considered a mechanism for specifying that a measurement failing is equivalent to the phase returning a STOP.

Are there arguments against this (aside from the work to implement it)?

@fahhem (Collaborator) commented Jul 29, 2017

It seems @grybmadsci might object, but for me it's just a lack of time to implement it and make sure it doesn't complicate the PhaseExecutor (or wherever the logic goes) too much.

@DanLipsitt

Stopping tests when phases fail is very common for us, so adding that functionality to validators would make a lot of sense.

@kdsudac (Collaborator) commented Aug 28, 2018

Do you actually need to specify different behavior for specific validators, or is your goal mostly to cut down on wasted test time running phases for a test run that we already know is going to fail?

I think #816 might achieve 95% of your goal and is a pretty easy-to-understand concept.
