
Re: Wanted: Travis CI integration

Posted: Sat Apr 19, 2014 11:57 pm
by lurch
Kinda related to both this discussion and , rather than 'bloating' micropython with the run-time ability to detect what options it was compiled with, maybe a simpler option would be to just have the build-process spit out some kind of meta-file alongside the final uPy binary (whether that's unix, stmhal, windows, etc.) that specifies which options this particular binary was compiled with.
And then maybe there could be some additional meta-info contained inside each of the test-scripts (perhaps inside a comment-block), saying what 'features' the tests in that particular script rely on.

And then tying both these things together, run-tests could read in both sets of data, and then selectively choose which tests to run, based on what features are or aren't compiled into the particular micropython that is being tested; i.e. it would allow you to double-check that disabling one feature wouldn't accidentally break another still-enabled feature.
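
A minimal sketch of how run-tests might tie the two sets of metadata together. Everything here is hypothetical: the meta-file name (`micropython.features`, one feature name per line) and the `# features:` comment convention are made-up placeholders for whatever the build process and test scripts would actually emit.

```python
# Hypothetical sketch: run-tests reads a build-generated meta-file
# listing the features this binary was compiled with, plus a
# "# features:" comment near the top of each test script, then skips
# any test whose requirements aren't all present.

def read_build_features(path):
    # One feature name per line, e.g. "float", "complex", "import".
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def read_test_features(test_path):
    # Look for a comment like "# features: float, complex".
    with open(test_path) as f:
        for line in f:
            if line.startswith("# features:"):
                names = line[len("# features:"):].split(",")
                return {n.strip() for n in names if n.strip()}
    return set()  # no metadata: test has no special requirements

def should_run(test_path, build_features):
    # Run the test only if every required feature was compiled in.
    return read_test_features(test_path) <= build_features
```

Tests with no metadata would always run, so existing test scripts would keep working unchanged.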

Does that make sense, or would it be too much effort and be impractical? :? OTOH it would be a really good way of testing if or when it ever gets implemented :?:

Maybe I'm thinking too far ahead and going a bit over the top like I did in :oops:

Re: Wanted: Travis CI integration

Posted: Sun Apr 20, 2014 12:03 am
by dhylands
I think that's a sensible approach, especially for testing.

We could write out the complete set of configuration options in a python module that the tests could import.

For the most part, I'm not sure how much information a typical running python program would need. I guess that will come up when we start writing some libraries and stuff, and then add things as needed.
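
For illustration, such a generated module might look like the sketch below. The file name and option names are assumptions for the example; in practice the Makefile would emit whatever MICROPY_* defines the binary was actually built with.

```python
# Hypothetical build-generated module, e.g. tests/mpconfig.py,
# emitted by the build process from the compile-time defines.
# The option names here are illustrative only.

MICROPY_ENABLE_FLOAT = True
MICROPY_ENABLE_SLICE = True
MICROPY_ENABLE_FROZENSET = False

# A test could then guard itself at run time, e.g.:
#
#   import mpconfig
#   if not mpconfig.MICROPY_ENABLE_FROZENSET:
#       print("SKIP")
#       raise SystemExit
```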

Re: Wanted: Travis CI integration

Posted: Sun Apr 20, 2014 12:46 am
by lurch
dhylands wrote:We could write out the complete set of configuration options in a python module that the tests could import.
I guess it's a trade-off - we could either have each test-script itself dynamically determine if it should be run, or we could just have 'static meta-data' in each test-script and have run-tests determine which tests should be run.

The former obviously allows for much more flexibility ("this particular test-script relies on features 'X and (Y or Z)' and this particular subtest also depends on feature W"), but makes the individual test-scripts much more complex, and risks having lots of duplicate logic across many scripts (and therefore also harder to update if the available options change?). OTOH it would allow related tests to be grouped together into single files where sensible, instead of being split out into multiple separate files.

Hmmm, we could allow both options of course - if the 'magic meta-data' is present, run-tests itself will decide whether to run the test or not, and if no meta-data is present it will be up to the script itself which tests to run.

Hmmm2, obviously if running tests on the pyboard (which is the most likely scenario where you'll need to enable or disable specific options) it won't be able to import the "set of configuration options in a python module" (unless run-tests first copied it to flash or something?!?!)

Hmmm3, "set of configuration options in a python module that the tests could import" won't work if the import function has been compiled out :lol:

Hmmm4, I guess allowing each test to determine for itself whether it should run would also lead to python NameErrors / SyntaxErrors if it tried to use keywords / features that had been disabled in this particular build?

Hmmm5, I guess another option would be to have CPython run each test-script first (importing the list of current uPy options from an external file), and if it printed out a special string (e.g. "MICROPY_SKIP_TEST") then don't try running the test-script under uPy, which would fix the "Hmmm3" and "Hmmm4" problems.
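
A rough sketch of that CPython pre-run idea, as run-tests might implement it. The sentinel string and function name are just placeholders for the example:

```python
# Hypothetical sketch: run the test under CPython first (which can
# always import the generated options module), and only run it under
# MicroPython if the CPython run did not print the skip sentinel.

import subprocess
import sys

SKIP_SENTINEL = b"MICROPY_SKIP_TEST"

def cpython_says_skip(test_path):
    # Execute the test script with the current CPython interpreter
    # and check its stdout for the sentinel.
    result = subprocess.run([sys.executable, test_path],
                            capture_output=True)
    return SKIP_SENTINEL in result.stdout
```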

For the sake of simplicity I'd vote for just static meta-data inside comments at the top of each test file :) (if it turns out to be too limiting I guess we could allow limited boolean combination of required-options, but again evaluated by run-tests rather than the script itself)
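
If boolean combinations ever turned out to be needed, run-tests could evaluate them without any logic living in the test scripts themselves. A minimal sketch, assuming requirements are written as plain expressions like "X and (Y or Z)" over feature names:

```python
import re

def requirement_met(expr, build_features):
    # Replace each feature name in the expression with True/False
    # depending on whether it was compiled in, then evaluate the
    # resulting boolean expression with no builtins available.
    # Only 'and', 'or', 'not' and parentheses are expected.
    def sub(match):
        word = match.group(0)
        if word in ("and", "or", "not"):
            return word
        return str(word in build_features)
    safe = re.sub(r"[A-Za-z_][A-Za-z0-9_]*", sub, expr)
    return eval(safe, {"__builtins__": {}})
```

For example, with features {X, Z} compiled in, "X and (Y or Z)" evaluates to True while "Y" evaluates to False.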

It's a bit late so apologies if any of that comes across as gibberish...

Re: Wanted: Travis CI integration

Posted: Tue Apr 29, 2014 5:33 am
by dhylands
Just saw that Travis now supports Python 3.4: ... ython-3-4/