With the function (Working With Function Requirements) and design (Working on the Design Level) requirements configuration in place in SystemWeaver, the test design can be implemented. The SystemWeaver Test Solution provides powerful tools for many common test tasks such as creating tests from requirements, traceability, test execution, requirements coverage, test system management, test analysis and test scripts. And in SystemWeaver, it is all customizable. The test solution is centered around two phases: test development and test execution.



Test design encompasses creating test specifications, including test cases that cover the requirements for the system under specification. SystemWeaver provides tools for authoring test cases while maintaining traceability to requirements and keeping track of requirements coverage. Test execution entails executing the test specifications and reporting results. Integrated change management facilitates reporting defects directly to development while maintaining traceability to the circumstances under which the defect occurred.


This article describes working with the Test Solution in SystemWeaver. You can also watch a demonstration.


Designing the Test Structure

Below is a sample test model.


Test architecture: The top item of the test specifications that contains the test scopes and test scope groups (A)

Test scope: A set of test cases that defines a “test project” organised as a set of test specifications. A test scope has a part relation called specification, which points to the item under test (scope). This item could be a vehicle function, ECU, software component, etc. Example: Park Assist (B)

Test specification: A set of test cases that are related, i.e., have something in common. A test specification has two part relations: test specification item and test specification requirement. Examples: Audio, Cruise control (C)

Test case: Describes a method for testing the test object. A test case has a part relation called test case requirement. Examples: Phone audio mode, Cruise control from standstill. (D)
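
To make the hierarchy above concrete, below is a minimal, illustrative sketch of how the test items contain each other. The class and attribute names are invented for this example only; the actual SystemWeaver test meta model is hard-coded in the tool and uses its own item and part types (see Test Meta Model below).

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative names only; not the actual SystemWeaver meta model.

@dataclass
class Requirement:
    name: str
    version: int = 1

@dataclass
class TestCase:                      # (D) describes a method for testing the test object
    name: str
    requirements: List[Requirement] = field(default_factory=list)   # test case requirement parts

@dataclass
class TestSpecification:             # (C) a set of related test cases, e.g. "Cruise control"
    name: str
    specification_items: List[str] = field(default_factory=list)    # items under test
    requirements: List[Requirement] = field(default_factory=list)   # test specification requirement parts
    test_cases: List[TestCase] = field(default_factory=list)

@dataclass
class TestScope:                     # (B) a "test project", e.g. "Park Assist"
    name: str
    specification: str = ""          # the item under test: vehicle function, ECU, ...
    test_specifications: List[TestSpecification] = field(default_factory=list)

@dataclass
class TestArchitecture:              # (A) top item containing scopes and scope groups
    name: str
    test_scopes: List[TestScope] = field(default_factory=list)
```

The point of the sketch is only the containment: an architecture holds scopes, a scope holds specifications, and a specification holds both the requirements to be tested and the test cases that cover them.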






Test Meta Model



The meta model for test development and test execution is hard-coded, and the views described below are dependent on it. For specifics on the test meta model, see Test Meta Model.


Test Views

As is the case with all aspects of requirements management in SystemWeaver, various views can be created to make the testing process more efficient. They can be made accessible via the Items ribbon and are designed to assist the user in performing process steps, such as Create Test. Below is an example Test menu. 



Creating Test Scopes

By creating a relationship between the test scope and the requirement specification, you define the complete set of requirements that needs to be tested.


To create a test scope, simply right-click on the top-level test item and select Add > Test Domains. In the Open items dialog, select the test specifications scope to be added, e.g., Function test specifications, Component test specifications, etc. 



Then, so that testing can easily track the related requirements, add the requirement specifications (e.g., ECU, Attribute, Software Component, Function, etc.) to the test model by right-clicking on the test scope and selecting New > Specification. In the below example, the Function requirements have been made accessible when viewing the Function test specifications. 



Creating Test Specifications

With the test scopes now defined, the test specifications can be created. There will likely be many test specifications which focus on different aspects of the testing. Each test specification refers to a part of the requirement specification. The below example shows how two test specifications focus on testing the same group of requirements. However, the requirements that need to be tested are divided between the two specifications.


To add a test specification, right-click on the test scope and select New > Test Specification. In the below example, the Component test specifications scope has one test specification of Cruise control. 




Adding Test Specification Items

The next step is to tie the test specifications to the items that are being tested. This can be done in two ways - via the Structure Tree menu or the Requirements view. 


Via the Structure Tree Menu

Right-click a test specification and select Add > Test Specification Item. In the below example, a test specification item is being added for the Radio test specification:



In the Open item of type item dialog, click General search, select the type of item that the specification is going to test, and then select the specific test specification item from the Search Result list. In the below example, the search was for a Function specification and Radio is being selected as the test specification item:



Via The Requirements View

Select the test specification and click Requirements on the Items ribbon. 

In the Requirements view, click Set and search for and select the test specification item in the same manner as described above.


In the below example, the test specification requirement groups "Functional requirements" and "Non-Functional" have now been added for Radio: 


Adding Requirements to the Test Specifications

The Requirements view is used to connect requirements to the test specification. In the structure tree, select the specification item that you added above and switch to the Requirements view. The far right-hand Specifying Item pane should display the full set of requirements to select from. (If it does not, check with your system's architect, as the Test Requirements structure tree view setting may be missing; see Requirements don't show up on test views.) The Test Specification Requirements left-hand pane lists the selected requirements. In the below example, no requirements have been selected yet: 



To select the requirements that need to be tested for the selected specification item, right-click on a requirement in the Specifying Item list in the right pane and select Include Requirements.



The requirement will appear to the left in the Test Specification Requirements pane. In the above example for Radio, the "Preset listings" requirement has already been added and "RDS functions" is in the process of being added. 


Inclusion Status

The Requirements view will tell you if there have been version changes to the requirements so that you can verify that the specification and test specification are aligned. The status is displayed visually in the structure tree for each instance and in the Requirements view's panes.


The inclusion status/inconsistency definitions are as follows:

  • Versions match: The specification and test specification requirements' versions are the same. In the below example, the specification's requirement version is the same as the test specification requirement version for "RDS functions".
  • Version inconsistency: There is a version inconsistency between the specification requirement and the test specification requirement. In the below example, the test requirement is still version 1 while the specification requirement is version 2, and the changes can also be viewed in the Description. To update, right-click on the requirement on the right side and select Update Requirement to same version. The requirement in the test case will also need to be synchronized, which is covered in "Traceability to Requirements in Test Case" later in this article.
  • Out of Scope: A test specification requirement highlighted in red indicates that it is "Out of Scope", i.e., the requirement is not included in the specification requirements list. For example, "Maintain cruise control" is not in the complete list of functional requirements for Radio on the right side. It was likely added as a test specification requirement via the structure tree, outside of the scope of the function specifications in this case. To address it, you can either remove it or add it as a specification requirement.
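
Conceptually, the inclusion status comes down to comparing the requirement version referenced by the specification with the version referenced by the test specification, and checking that the requirement is in scope at all. The sketch below only illustrates that logic with invented names; it is not how SystemWeaver implements the check.

```python
# Illustrative only: classify a test specification requirement by comparing
# the version referenced by the specification with the version referenced
# by the test specification.

def inclusion_status(spec_versions: dict, test_spec_versions: dict, req: str) -> str:
    if req not in spec_versions:
        return "Out of Scope"            # shown in red in the Requirements view
    if spec_versions[req] != test_spec_versions.get(req):
        return "Version inconsistency"   # fix with "Update Requirement to same version"
    return "Versions match"

spec = {"RDS functions": 2, "Preset listings": 1}
test_spec = {"RDS functions": 1, "Preset listings": 1, "Maintain cruise control": 1}
for r in test_spec:
    print(r, "->", inclusion_status(spec, test_spec, r))
```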



Creating Test Cases

A test case item has to be created for each requirement in the test specification. The test case is the item to which the test result is linked. If you link two requirements to the same test case, the test result will apply to both requirements during a test. 


Test cases are created and managed using the Test Case Manager view. The menu option can be found in the Test menu group on the Items ribbon. In the below example, there are four requirements and none of them has been added to a test case yet. There are also no existing test cases for Radio.




To create a test case, right-click on a requirement in the Test Case manager view and select Create Test Case...

Alternatively, you can add a requirement to an existing test case/test case script by selecting Add Req to existing Test Case instead. Since there are no existing test cases in this example, the option is grayed out. 


Tip: Multi-select of requirements to batch create test cases is also supported. An individual test case will be created for each selected requirement. Each test case will be named after its requirement, and can be modified afterwards if needed.
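
Conceptually, batch creation produces one test case per selected requirement, named after that requirement. The "<requirement name> Test Case" naming in the sketch below simply mirrors the Radio example that follows and is only an assumption about the default; this is not SystemWeaver's actual implementation.

```python
# Illustrative only: one test case per selected requirement,
# each named after its requirement.

def batch_create_test_cases(requirement_names):
    return {name: f"{name} Test Case" for name in requirement_names}

print(batch_create_test_cases(["Preset listings", "RDS functions"]))
# {'Preset listings': 'Preset listings Test Case', 'RDS functions': 'RDS functions Test Case'}
```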
                                         

In the below example, test cases have been created for two of the four requirements for Radio - Preset listing Test Case and RDS functions Test Case. They are listed in the Test case manager view and are also displayed in the structure tree: 


The new test case items at this point have no content. You will design and edit them using the Test case editor as explained below.


Designing Test Cases

Once a test case is created, it is ready to be designed or edited using the Test case editor. This is where the test's steps are defined. The editor is easily accessible with one click via the Test menu group on the Items ribbon. For the purpose of this overview, we will focus on test cases where the test case steps are defined manually in the Description using free text or sequences. However, permutation and automated test cases can also be created. 


When a test case is displayed, the editor will always provide you with a view of the requirements that the test case is being developed for. 



To specify the test cases using free text, type the steps directly into the Description field in the Test case editor: 



To add the steps more formally as sequences, click Add sequence



In the Sequence section, you can then add each step of the test. A sequence toolbar provides you with the tools to reorder, add, and delete steps. You can also hit the Tab key to add a step. Note that Expected Result text is saved as string values in XML; images cannot be stored there.

 

Before demonstrating how test cases are defined and run, the traceability and coverage view features in the test solution will be introduced. These two tools help test managers keep track of the testing process.


Traceability to Requirements in Test Case

The Test case manager offers a traceability feature for requirements. If a test case appears yellow in the Test case manager, the version of its requirement is inconsistent with the version of the requirement that needs to be tested. In other words, the requirement has changed and the test manager is visually alerted that the requirement needs to be updated within the test case itself. Below is an example showing two test cases with requirements that need to be synchronized.



Updating a requirement in a test case is simple. Just expand the test case, find and right-click the inconsistent requirement, and select Synchronize version.



Once you synchronize, you will see that the versions now match: 



Tip: As in any grid-type display throughout SystemWeaver, you can right-click on a column header and Export to Excel. 



Using the Coverage View to Verify Requirements Coverage

At any given time, a test manager needs to be able to see an overview of the requirement coverage in testing. The Coverage view allows you to track the status of requirements coverage. 

Select a test scope or test specification and select the Coverage view. On the Pies tab, the Requirement coverage section displays the breakdown of current coverage using six different states (a simple classification sketch follows the list):

  • Not in test case or test spec
  • In test case (but not in any test spec)
  • Both in test case and test spec
  • Both in test case and test spec but "Not Testable"
  • In test spec but "Not Testable"
  • In test spec (but not in any test case)
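
As an illustration only (not SystemWeaver's implementation), each requirement's state can be thought of as a function of three facts: whether it appears in a test specification, whether it appears in a test case, and whether it is marked "Not Testable".

```python
# Illustrative only: map a requirement to one of the six coverage states above.

def coverage_state(in_test_spec: bool, in_test_case: bool, not_testable: bool) -> str:
    if not in_test_spec and not in_test_case:
        return "Not in test case or test spec"
    if in_test_case and not in_test_spec:
        return "In test case (but not in any test spec)"
    if in_test_spec and in_test_case:
        return ('Both in test case and test spec but "Not Testable"'
                if not_testable else "Both in test case and test spec")
    # remaining case: in a test spec, but not in any test case
    return ('In test spec but "Not Testable"' if not_testable
            else "In test spec (but not in any test case)")

print(coverage_state(in_test_spec=True, in_test_case=False, not_testable=False))
# In test spec (but not in any test case)
```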



The pie chart is interactive: when you click on a slice, the corresponding state in the legend is displayed in red. You can also right-click on a slice and export a list of the corresponding requirements to Excel:




Specifying Test Execution Settings

Once you have created the test specifications and test cases, you can set up tests. This is when you define which test system to use, for example a rig, and which build to test, so that you can be sure the results obtained are from the specific build that needs to be tested. Open the Create test view for the selected test specification and select the test system and build items via Open Item. This has been done for the Radio test specification in the example below:




Once completed, click Create Test and name the new test item. The new test suite item will display, and the tests are now ready to be performed.


Performing Tests and Reporting Test Results

To perform a test, select the test and the Result grid view. Run the tests and set the Test Case Status for each test case to report back the results. 
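
Because a test case's result applies to every requirement linked to it (see "Creating Test Cases" above), reporting one status per test case is enough to derive a result per requirement. The sketch below is illustrative only; the "OK"/"Not OK" values are example statuses taken from the Result history example later in this article, not necessarily the exact Test Case Status values in your configuration.

```python
# Illustrative only: derive per-requirement results from per-test-case statuses.

# Which requirements are linked to which test case (example data).
test_case_requirements = {
    "Preset listings Test Case": ["Preset listings"],
    "RDS functions Test Case": ["RDS functions"],
}

# Statuses reported in the Result grid view (example values).
test_case_status = {
    "Preset listings Test Case": "OK",
    "RDS functions Test Case": "Not OK",
}

requirement_results = {
    requirement: test_case_status[case]
    for case, requirements in test_case_requirements.items()
    for requirement in requirements
}
print(requirement_results)  # {'Preset listings': 'OK', 'RDS functions': 'Not OK'}
```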

 


You can get an overview of the test results using the Coverage view's Test case status panel to see how well the requirements have been tested. 



Note: Test results are stored as node attributes. As such, you will not find them as part attributes, and they are not visible by default in any views in the swExplorer. However, you can add them to existing grid views via the right-click context menu using the More > Add node attribute column option, or by including them in grid configurations.



Viewing Result History (to see how test results have changed over time)

Several test executions can be created for the same test specification. Below is an example showing two test executions for Radio.



Once they have been performed, the results from each run can be compared using the Result history view. The view offers both a Table and a Chart display. In the example below, you can see that Test run 1 had two test cases whose results were not OK, but in the second run, those same two were successful.