...

Each configuration key is described by its value type, its default, how values combine across the property-file hierarchy, and an example.

Key: ignore
Value type: Bool
Default: True
Combination: Override
Description: Ignored model files / directories will not be included in the automatic testing process.
Example: ignore = false

Key: confidential
Value type: Bool
Default: False
Combination: Override
Description: If set, the tests / benchmarks will not publish any information about the content or meta-data of the model.
Example: confidential = true

Key: modelFileExtension
Value type: Comma-separated list of strings
Default: Empty
Combination: Override
Description: A list of file name suffixes (file extensions) for identifying model files.
Example: modelFileExtension = sct

Key: traceFileExtension
Value type: Comma-separated list of strings
Default: Empty
Combination: Override
Description: A list of file name suffixes (file extensions) for identifying test trace files.
Example: traceFileExtension = eso, .trace

Key: resourceSetID (deprecated)
Value type: String
Default: Empty
Combination: Not propagated
Description: A globally unique identifier which causes the associated model files to be loaded into one resource set, allowing cross-references between model files to be resolved.
Example: resourceSetID = my-unique-id
Key: modelProperties
Value type: Comma-separated list of strings
Default: Empty
Combination: Combined
Description: A list of model-specific categories (tags) that should be assigned to the model. The categories are handled as a set, where each property file in the hierarchy can add new tags or remove existing ones (using !).
Example: modelProperties = tiny-model, !broken
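The add/remove semantics of modelProperties can be sketched as follows. This is an illustrative helper, not part of the tool itself; the function name and signature are assumptions.

```python
def merge_model_properties(inherited, entry):
    """Merge a modelProperties value from one property file into the tag
    set inherited from property files higher in the hierarchy.

    Tags prefixed with '!' are removed from the set; all others are added.
    """
    tags = set(inherited)
    for token in (t.strip() for t in entry.split(",")):
        if not token:
            continue
        if token.startswith("!"):
            tags.discard(token[1:])  # '!' removes an inherited tag
        else:
            tags.add(token)          # plain entries add a tag
    return tags

# A deeper property file can both add and remove tags in one entry:
merged = merge_model_properties({"broken", "legacy"}, "tiny-model, !broken")
```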
Key: Other properties
Value type: String
Default: Empty
Combination: Override
Description: Any user-specific property.
Example: complexity = 9001
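Putting several of the keys above together, a single property file might look like this (the values are the illustrative examples from the table, not recommended settings):

```
ignore = false
confidential = true
modelFileExtension = sct
traceFileExtension = eso, .trace
modelProperties = tiny-model, !broken
complexity = 9001
```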

...

Status: WIP

Benchmarks run similarly to tests: they run the models in a models repository, provided in the same way as described above.

Local Benchmarks

To run the benchmarks locally, you first have to provide the models repository (including the environment variable mentioned above). Then you need the appropriate plug-ins in your runtime configuration.

...

Where project_name specifies the project to save the results into. The benchmarks will create the project if it does not exist and will write a JSON file containing the benchmark results into it.
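Since the results are plain JSON, they can be consumed by any tooling. A minimal sketch for loading a results file follows; the file name and the structure of its contents are assumptions for illustration, as the actual layout is chosen by the benchmark run.

```python
import json
from pathlib import Path

def load_benchmark_results(path):
    """Load a benchmark results JSON file from the results project.

    The path is supplied by the caller; this helper makes no assumption
    about the internal structure of the results beyond it being valid JSON.
    """
    return json.loads(Path(path).read_text())
```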