dhylands wrote: We could write out the complete set of configuration options in a python module that the tests could import.
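To picture that suggestion, it could look something like this (module and option names here are made up, just a sketch of the idea):

```python
# upy_options.py -- hypothetical module generated at build time,
# listing which uPy configuration options this build enables.
OPTIONS = {
    "float": True,
    "complex": False,
    "set": True,
}

def has(feature):
    # Unknown feature names are treated as disabled.
    return OPTIONS.get(feature, False)
```

A test would then do something like `if not has("complex"): ...skip...` at the top.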
I guess it's a trade-off - we could either have each test-script itself dynamically determine if it should be run, or we could just have 'static meta-data' in each test-script and have run-tests determine which tests should be run.
The former obviously allows for much more flexibility ("this particular test-script relies on features 'X and (Y or Z)' and this particular subtest also depends on feature W"), but makes the individual test-scripts much more complex, and risks duplicating lots of logic across many scripts (which would also make them harder to update if the available options change). OTOH it would allow related tests to be grouped together into single files where sensible, instead of being split out into multiple separate files.
Hmmm, we could allow both options of course - if the 'magic meta-data' is present, run-tests itself will decide whether to run the test or not, and if no meta-data is present it will be up to the script itself which tests to run.
Hmmm2, obviously if you're running the tests on the pyboard (which is the most likely scenario where you'll need to enable or disable specific options) the scripts won't be able to import the "set of configuration options in a python module" (unless run-tests first copied it to flash or something?!?!)
Hmmm3, "set of configuration options in a python module that the tests could import" won't work if the import function has been compiled out
Hmmm4, I guess allowing each test to determine if it should run itself would also lead to python NameErrors / SyntaxErrors if it tried to use keywords / features that had been disabled in this particular build?
Hmmm5, I guess another option would be to have CPython run each test-script first (and importing the list of current uPy options from an external file), and if it printed out a special string (e.g. "MICROPY_SKIP_TEST") then don't try running the test-script under uPy, which would fix the "hmm3" and "hmm4" problems.
For the sake of simplicity I'd vote for just static meta-data inside comments at the top of each test file.

(if it turns out to be too limiting I guess we could allow limited boolean combination of required-options, but again evaluated by run-tests rather than the script itself)
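If it came to that, run-tests could evaluate such expressions itself without the target being involved at all; a rough sketch, treating any feature name not in the enabled set as disabled:

```python
def requirement_met(expr, enabled):
    # Evaluate an expression like "X and (Y or Z)" against the set of
    # enabled feature names. eval() is acceptable here because run-tests
    # controls the expressions; this is not untrusted input.
    class Defaults(dict):
        def __missing__(self, key):
            return False  # unknown feature names count as disabled
    names = Defaults({name: True for name in enabled})
    return bool(eval(expr, {"__builtins__": {}}, names))
```

Piggy-backing on Python's own `and`/`or`/`not` and parentheses keeps the meta-data syntax familiar while the evaluation still happens on the host.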
It's a bit late so apologies if any of that comes across as gibberish...