The following is the proposal for the information model (i.e. the required bits) for the test results and tooling used in the badging program.

Each parameter below is listed with its status (Mandatory, Optional, or TBD) and a description.

For each badge submission

This information would be collected from the submitter when the results are submitted to the program for review.  A likely approach is the use of a template in the merge request comment that makes the specific fields clear to the submitter (a sketch of the resulting record follows the field lists below).

Applicant Information (Mandatory)
  • Submitting Vendor Name
  • Product Name
  • Product Version
Badge Information (Mandatory)

The following are "pick from list" type values, where there are only specific values allowed, i.e. the in-force program / badge types and releases.

  • Badge Type (i.e. Cloud Native or NFV, Infrastructure or Workload)
  • Badge Sub-Type (e.g. ONAP, CNCF, etc.)
  • Badge Version / Release
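
The following is a minimal sketch, as an illustration only, of how the per-submission fields above could be captured as a structured record (for example by a script that parses the merge request template). The class, field, and enum names are hypothetical and not part of the proposal; the allowed badge types shown are assumptions derived from the list above.

    from dataclasses import dataclass
    from enum import Enum


    class BadgeType(Enum):
        # Hypothetical "pick from list" values; the in-force badge types
        # would be maintained by the program.
        CLOUD_NATIVE_INFRASTRUCTURE = "cloud-native-infrastructure"
        CLOUD_NATIVE_WORKLOAD = "cloud-native-workload"
        NFV_INFRASTRUCTURE = "nfv-infrastructure"
        NFV_WORKLOAD = "nfv-workload"


    @dataclass
    class BadgeSubmission:
        # Applicant Information (Mandatory)
        vendor_name: str
        product_name: str
        product_version: str
        # Badge Information (Mandatory), restricted to in-force values
        badge_type: BadgeType
        badge_sub_type: str   # e.g. "ONAP", "CNCF"
        badge_release: str    # badge version / release
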
For each set of results generated
Results ID (Mandatory)
  • Uniquely identifies the set of test results generated during the test run.
  • Must include:
    • Date of test run
Test Tool Details (Mandatory)
  • Must list the tool name, including the project originator / owner (e.g. ONAP or Anuket).
  • Version of the tool.
  • This must contain enough information that a user could reproduce the exact test run with the same test tooling.
SUT Information (TBD)
  • Product Name
  • Product Version 
  • Inventory

Notes:

  • Need to confirm what information is collected / stored by the tooling.
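
A similar illustrative sketch for the per-results-set fields above, again with hypothetical names. It shows one possible way a Results ID could embed the date of the test run, which is the only element the proposal currently requires it to include.

    from dataclasses import dataclass, field
    from datetime import date, datetime, timezone
    import uuid


    @dataclass
    class ResultsSet:
        # Test Tool Details (Mandatory): enough to reproduce the exact run
        tool_project: str    # tool name plus originator / owner, e.g. "ONAP VNFSDK"
        tool_version: str
        # SUT Information (TBD)
        sut_product_name: str
        sut_product_version: str
        sut_inventory: dict = field(default_factory=dict)
        # Results ID (Mandatory): unique per run and includes the run date
        run_date: date = field(default_factory=lambda: datetime.now(timezone.utc).date())
        results_id: str = ""

        def __post_init__(self):
            if not self.results_id:
                # e.g. "2021-03-15-3f2a9c1e": run date plus a random suffix
                self.results_id = f"{self.run_date.isoformat()}-{uuid.uuid4().hex[:8]}"
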
For each test case included in submitted results
Test Case ID (Mandatory)
  • A unique identifier for the test case which has been executed.
  • This must be a unique identifier across all projects and tests.
Requirement ID (Optional)
  • Reference to requirement ID from the base project (e.g. VNF-REQTS).
  • Requirements and test case implementations are driven by the individual communities.
Test Case Result (Mandatory)
  • Pass / Fail indication
Test Execution Log (TBD)
  • One or more files that capture logging and debugging output from the test run. 
  • The specific format of the files is undefined, and determined by the test tool. 
  • The link between the Test Case ID and log files MUST be clear, through either file naming or path locations.
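
As an illustration of the per-test-case record and of one way to make the link between a Test Case ID and its log files clear, here is a minimal sketch. The record layout and the "one log directory per test case" convention are assumptions, not something the proposal mandates.

    from dataclasses import dataclass, field
    from pathlib import Path
    from typing import List


    @dataclass
    class TestCaseResult:
        test_case_id: str       # unique across all projects and tests
        result: str             # "PASS" or "FAIL"
        requirement_ids: List[str] = field(default_factory=list)  # e.g. VNF-REQTS references
        log_files: List[Path] = field(default_factory=list)


    def logs_for_test_case(results_dir: Path, test_case_id: str) -> List[Path]:
        # Illustrative path convention: <results_dir>/logs/<test_case_id>/...
        # so the Test Case ID to log file link is clear from the path alone.
        return sorted((results_dir / "logs" / test_case_id).glob("*"))
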

Comments

  1. Lincoln Lavoie Marc Price

    My understanding of the above is that we are looking to define a common set of agreed information to be provided in test results regardless of the project that is involved (i.e. R*1, R*2, ONAP, etc.).  Assuming this to be true, my comments are as follows:


    1) Test Case ID - A unique identifier for the test case which has been executed.  It should be possible to use the Test Case ID to trace back to the specific requirement(s) that are covered by the unique test case.

    2) Test Case Result - A pass or fail indication such as "P" or "F"

    3) Test Execution Log - There is currently no description in Kanagaraj Manickam's proposal.

    4) CNF URL - There is currently no description in Kanagaraj Manickam's proposal.

    5) Test Tool Details - The purpose of this is not clear from the description, Kanagaraj Manickam.

    6) LFN Badge ID - A unique identifier for the specific badge for which the test is being conducted

    I would also think that we should have information about whether a test is a Mandatory or Optional Test.  A proposal would be:

    7) Test Case Requirement - An indication of whether passing or failing (Test Case Result) is deemed mandatory or optional.  Uses an indication such as "M" or "O" to clarify.


    Olivier

    1. On Item 6 and Item 7, the thing I don't like about those is that if the tool is generating these results, the tools have to be updated with each badge to internalize the badge ID and the requirements.  It might be easier if those are external review tools that can verify a set of results meets the requirements for badge A or B.

    2. Hi Olivier, pls find my inputs

      #3. As part of the test execution, test cases produce logs. This part is modelled to capture that info.

      #4. Public URL for the CNF product page

      #5. The tool used to run the CNF test cases, such as ONAP VTP.


      1. Thanks Kanagaraj Manickam

        #3 - Ok, but is there any expectation of what is contained in the log?  We are creating a common information model that says a log should be included.  Is that enough, or do we expect certain information in those logs?

        #4 - I agree with Lincoln Lavoie.  Is this really something we should include in the testing tool?

        #5 - Lincoln Lavoie made the suggestion that it would be the testing project / tool name and version (see his comment further below).

  2. Lincoln Lavoie I do see your point for item 6.  For Item 7, my thought was how will Anuket Assured know whether a test case that has passed or failed is mandatory or optional, unless it is included in the tool/results.  I am open to suggestions, but I would think the project (i.e. R*1, R*2, ONAP, etc.) will know which tests are mandatory vs. optional and could accommodate that information in the tool.  The alternative would be that someone who validates the results in CVC has to look it up in a document.  Other alternatives?

  3. Olivier Smith my thinking was, if there is a separate tool (basically some shell scripts or something similarly lightweight) that is maintained in the review repo, it could be specific to each badge and badge release.  So the testing tools focus on running the tests, and don't have to internalize any information about each badge (a sketch of this idea follows at the end of this comment).

    Kanagaraj Manickam  for Item #4, why would the testing tool know about the URL for the product page?  This would mean the tester has to enter that at test time, and it would be "spread" throughout the results set.  However, usually the marketing folks at companies have strong feelings about where information like that lives and is pointed to.  I don't think it's necessary in the results set.

    For Item #5, I think we just need to define what should be included in this field.  My suggestion would be it should be the tool project name (e.g. ONAP VNFSDK) and the version of the tool.
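
    To make the "light-weight review tool" idea above concrete, here is a minimal sketch, assuming each badge release keeps a plain list of its mandatory test case IDs in the review repo and that the results set is available as a simple JSON list of test case records. The file formats and names are hypothetical; the point is that the badge-specific knowledge lives in the review repo, not in the test tools.

      import json
      from pathlib import Path


      def check_badge(results_file: Path, badge_spec_file: Path) -> bool:
          # results_file: JSON list of {"test_case_id": ..., "result": ...} records
          # badge_spec_file: JSON list of mandatory test case IDs for one badge release
          results = {r["test_case_id"]: r["result"]
                     for r in json.loads(results_file.read_text())}
          mandatory = json.loads(badge_spec_file.read_text())

          missing = [tc for tc in mandatory if tc not in results]
          failed = [tc for tc in mandatory if tc in results and results[tc] != "PASS"]

          for tc in missing:
              print(f"MISSING mandatory test case: {tc}")
          for tc in failed:
              print(f"FAILED mandatory test case: {tc}")
          return not missing and not failed
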

  4. Hi. I would strongly encourage us not to reinvent the wheel for the last part, "For each test case included in submitted results".

    It must conform to the TestDB format as agreed by OPNFV, Anuket, and ONAP; see the well-known zipped dump http://artifacts.opnfv.org/functest/MELRCMNUJO9N.zip
    Here is a CNTT RC1 run as a good example: https://build.opnfv.org/ci/view/functest/job/functest-jerma-daily/487/

    Otherwise, Anuket Assured is free to ask for additional data when this archive is submitted: "For each badge submission" or "For each set of results generated".