Combinatorial testing, random testing, and adaptive random testing for detecting interaction triggered failures

Pairwise testing brings a number of benefits and challenges to a software QA strategy: it is an opportunity to detect the maximum number of bugs with the minimum number of checks. N is the variable used to control the interaction degree (strength) of the experiment.

When these software systems consist of many different objects, components and services, different parts may interact to cause a failure, called an interaction triggered failure (ITF). An ITF can occur when certain inputs or features are tested together; the value combination of factors causing the failure can be modeled as a Minimal Failure-causing Schema (MFS). An MFS is often hidden in software, especially when the system has many inputs and features. Our approach generates covering arrays of larger size than other tools, particularly when the strength is higher than 2. As future work, we intend to apply a squashing technique to reduce redundancy in the generated covering arrays, and to examine the behavior of our proposed method at higher strengths, 4 and 5.

Describing Combinatorial Testing:

Lastly, our current algorithm is sufficiently fast at strengths 2 and 3, but it may become less efficient at strength 4 or greater. It is known that bugs can be triggered by interactions of up to 6 parameters (Kuhn, Kacker & Lei, 2016). Therefore, to improve the practical applicability of our approach at high strengths, the algorithm needs further improvement. In summary, our approach can enhance the applicability of the CIT technique for software whose specifications are typically considered too complex for ACTS or too large for JCUnit. JCUnit has the highest capability in handling various data types and constraints, and its notation is the most readable among the three tools.

The technique can therefore be described as dividing the whole set of input/output data into such partitions. If, for example, you have a set of about 100 elements that can be divided into 5 partitions, you can decrease the number of test cases to 5. Design of experiments (factorial designed experiments), very simply explained, is a systematic approach to running experiments in which multiple parameters are varied simultaneously. This allows a great deal to be learned from very few experiments, including how parameters interact. DoE principles are at the core of how Hexawise software creates effective software test plans.
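As a sketch of the partitioning idea above (the input domain, its boundaries, and the partition names here are illustrative assumptions, not taken from any particular system):

```python
# A minimal sketch of equivalence partitioning: a hypothetical input
# domain of ~100 integer values falls into 5 equivalence classes.
def partition(value):
    """Map an input value to its (assumed) equivalence class."""
    if value < 0:
        return "negative"
    if value == 0:
        return "zero"
    if value <= 9:
        return "single-digit"
    if value <= 99:
        return "double-digit"
    return "out-of-range"

# Instead of testing every raw input, pick one representative per class.
inputs = range(-5, 105)
classes = {partition(v) for v in inputs}
representatives = {}
for v in inputs:
    representatives.setdefault(partition(v), v)

print(len(classes))  # 5 partitions instead of 110 raw inputs
```

One representative value per partition is enough to exercise each equivalence class once, which is where the reduction from ~100 cases to 5 comes from.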

Constrained locating arrays for combinatorial interaction testing

But trying to over-simplify performance testing removes much of its value. Another form of performance testing is done on sub-components of a system to determine which solutions may be best. These tests likely do not depend on individual user conditions but can be affected by other factors: for example, under normal usage option 1 performs well, but under larger load option 1 slows down a great deal and option 2 is better. Focusing on these sub-component tests runs the risk of sub-optimization, where optimizing individual sub-components results in less than optimal overall performance. Performance testing sub-components is important, but most important is testing the performance of the overall system.

Section 2 presents formal definitions and gives some metrics for measuring the quality of a test suite in hitting MFS. Section 3 undertakes a systematic review of RT, ART and CT, and defines algorithms for each to capture the MFS. Section 4 builds a framework and constructs experiments to compare RT, ART and CT. Section 5 analyzes and compares existing studies on CT, RT and ART with our work.

Combinatorial interaction testing, which is a technique to verify a system with numerous input parameters, employs a mathematical object called a covering array as a test input. This technique generates a limited number of test cases while guaranteeing a given combinatorial coverage. Although this area has been studied extensively, handling constraints among input parameters remains a major challenge, which may significantly increase the cost to generate covering arrays.
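To make the idea concrete, here is a minimal greedy sketch of strength-2 (pairwise) covering array construction. It is not the algorithm of any tool discussed here, and the parameter model is a made-up example:

```python
from itertools import combinations, product

def pairwise_cover(params):
    """Greedy strength-2 covering array construction (a sketch, not a
    production algorithm): repeatedly pick the full test case that
    covers the most still-uncovered value pairs."""
    names = list(params)
    # Every pair of parameters x every pair of their values must appear.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add((i, va, j, vb))
    tests = []
    while uncovered:
        best, best_gain = None, -1
        for case in product(*(params[n] for n in names)):
            gain = sum(1 for (i, va, j, vb) in uncovered
                       if case[i] == va and case[j] == vb)
            if gain > best_gain:
                best, best_gain = case, gain
        tests.append(best)
        uncovered -= {(i, va, j, vb) for (i, va, j, vb) in uncovered
                      if best[i] == va and best[j] == vb}
    return tests

# Illustrative model: three parameters with two values each.
model = {"os": ["linux", "mac"],
         "browser": ["ff", "chrome"],
         "proto": ["http", "https"]}
suite = pairwise_cover(model)
print(len(suite))  # 4 rows, versus 8 exhaustive combinations
```

For this toy model the greedy loop settles on 4 rows, half of the 8 exhaustive combinations; the savings grow much faster as parameters and values are added, which is what makes the guaranteed combinatorial coverage cheap.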

To answer RQ1, we measure the execution time of our algorithm, including the preprocessing needed to prepare the input data for a desired input model. The preprocessing may include covering array generation, since our algorithm does not generate a covering array itself but takes two covering arrays as input. This is compared with the execution time needed to generate a covering array for the same desired output model using a conventional method. Thus, we can construct a new CCA from existing CCAs without inspecting either the semantics of the constraints or the forbidden tuples defined for the input arrays. This allows users to employ an approach in which different CIT tools construct the input covering arrays, which are later combined into one.

Hence a pair of test parameters is chosen, and all possible value pairs of these two parameters are sent as input parameters for testing purposes. We first examine these assumptions and further clarify the conditions under which combinatorial join can reduce overall testing cost. For simplicity, in this discussion, we model the testing effort as two phases, “component level testing” and “system level testing”. We then evaluate the method proposed in this paper against those conditions. Our approach allows us to reuse test cases defined as input covering arrays, but the reusability of test oracles along with the input covering arrays is an independent question. To answer RQ3, we will extend our previous work (Ukai et al., 2019) by examining various scenarios in which test oracles may or may not be reusable.

For example, if you have a set of variables which are hard to remember and manage, a decision table will help to organize them and simplify identification of the right cases. Of course, the techniques are not a silver bullet; they do not magically turn a specification or code into test cases. You still need to analyze each aspect of the functionality very carefully, and choose and apply the techniques wisely. Access Rights define the actions someone you share a Hexawise project with is able to take. The rights are set at the project level, for all test plans in that project. Model-Driven Development is a paradigm that prescribes building conceptual models that abstractly represent the system and generating code from these models through transformation rules.

Covering array constructors: An experimental analysis of their interaction coverage and fault detection

Thus, a combinatorial technique for picking test cases, like all-pairs testing, is a useful cost-benefit compromise that enables a significant reduction in the number of test cases without drastically compromising functional coverage. Covering arrays are used to select combinations of configuration values or parameters, possibly with the same tests run against all selected configuration combinations. Another intuitive tool for performing combinatorial testing lets factors, values, and constraints simply be written in an editor, from which test configurations are generated. This tool has an extremely fast and efficient algorithm and can generate about 15 test cases in 1 second.

Combinatorial testing can be applied to virtually any software and at different levels of abstraction.

  • We can consider the list box values as 0 and “others”, since 0 is neither positive nor negative.
  • When the VSCA’s strengths are 2 and 3, at degree 20 the size penalty is −46% to 15%, and it becomes −43% to 8.8% when the degree increases up to 380.
  • Specifically, in strength 2, our approach reduces the test suite generation time of synthetic systems by 33%–95%, while it reduces the generation time by 84%–99% in strength 3.
  • As mentioned already, we measure the generation time and size of the output covering arrays for various sets of settings with different numbers of parameters.
  • It includes various sets of parameter models taken from real-world projects, and we selected the following data sets for our evaluation.

Though pairwise testing dramatically reduces the number of combinations, it remains highly effective at fault detection and is indeed a smart test design technique that promises the best test effort and exceptional effectiveness. Different CIT tools have different characteristics in terms of generation time, output size, and especially constraint-describing capability. Second, our proposed method accelerates an existing covering array generation algorithm by combining its outputs, at the cost of an increase in output size (i.e., a size penalty). In general, the size penalty becomes smaller and the time reduction greater when the constraint set is more complex and the degree is larger. Specifically, the generation time reduction varies from 13% to 99% depending on the generation scenarios and the degrees of the method’s output. Although the increase in size is large in some cases and may seem a concern when applying the approach, it is still beneficial, from different aspects in different situations, as discussed below.

All-pairs within a testing suite

The idea is to catch bugs that are based on the interaction between two parameter values. A specific test case will include a value for each parameter.

Moreover, most bugs are triggered by single input parameters or are caused by the interaction between two parameters. Bugs caused by interactions between three or more parameters are usually very uncommon and provide little justification for the greater investment in searching for them. The sizes of the generated covering arrays at each variable strength are shown below.

Generated covering array size

We used the “CASA” benchmark models, which are widely referenced in the CIT area, in our evaluation. In this section, we describe how we conduct the evaluation to answer each research question, and we illustrate how each research question relates to the covering array generation process in Fig.

Weaken-product-based Combinatorial Join Technique

Generating a covering array is an expensive task, especially when it is executed under complex constraints, at a strength higher than two, and/or with a large number of parameters. Since a large software system can have a complex internal structure and hundreds or more parameters, a divide-and-conquer approach is desirable. If the time of covering array generation grows non-linearly with the number of parameters n (e.g., n², n³), this approach may accelerate the overall generation because the set of parameters can be divided into multiple groups.
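The complexity argument can be illustrated with a back-of-the-envelope sketch (the quadratic cost model and the numbers are assumptions for illustration, not measured properties of any generator):

```python
# Sketch of the divide-and-conquer argument: if generation time grows
# like n**exponent in the number of parameters n, splitting the n
# parameters into k groups and generating one array per group divides
# the dominant cost by roughly k (for a quadratic model).
def cost(n, exponent=2):
    """Assumed generation-cost model: polynomial in parameter count."""
    return n ** exponent

n, k = 100, 4
whole = cost(n)            # one generation over all 100 parameters
split = k * cost(n // k)   # four generations over 25 parameters each
print(whole, split)        # 10000 vs 2500
```

The join step that recombines the per-group arrays adds its own cost and a size penalty, which is exactly the trade-off the evaluation above measures.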

Specifically, “resource conflict” refers to a type of bug that is triggered by conflicting usage of resources shared among multiple components. Project(C, factors) contains all the rows found in R, and all the rows in it are contained by R; likewise, Project(C, factors) contains all the rows found in L, and all the rows in it are contained by L. Factors is a function that returns the set of factors on which a given array is constructed. We can also define a constraint that checks whether values satisfy a certain formula using mathematical operators such as +, −, ∗, and /. We have evaluated how the size of the generated test suite behaves under various conditions.
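The Project/Factors conditions above can be sketched as follows, assuming rows are represented as dicts from factor names to values (the arrays L, R, C and their factor names are illustrative, not taken from the paper's benchmarks):

```python
# Sketch of the Project/Factors conditions for a combinatorial join.
def factors(array):
    """Return the set of factors a (non-empty) array is built on."""
    return set(array[0])

def project(array, facs):
    """Restrict every row of `array` to the factors in `facs`."""
    return {tuple(sorted((f, row[f]) for f in facs)) for row in array}

def rows(array):
    """Rows of an array in a comparable, order-independent form."""
    return {tuple(sorted(r.items())) for r in array}

# Two input arrays L and R sharing factor "b", and a joined array C.
L = [{"a": 0, "b": 0}, {"a": 1, "b": 1}]
R = [{"b": 0, "c": 1}, {"b": 1, "c": 0}]
C = [{"a": 0, "b": 0, "c": 1}, {"a": 1, "b": 1, "c": 0}]

# Projecting C onto the factors of L (or R) must reproduce exactly
# the rows of L (or R) - containment in both directions.
print(project(C, factors(L)) == rows(L))  # True
print(project(C, factors(R)) == rows(R))  # True
```

The two equality checks encode both containment directions quoted in the text: the projection contains all rows of the input array, and every projected row is contained by it.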

Many software bugs are caused by either a single input parameter or an interaction between a pair of parameters. Bugs involving interactions between more than two parameters are less common and more expensive to discover. Software testing efforts therefore reach their limits when you want to explore all possible input interactions. The output of a software application depends on many factors, e.g., input parameters, state variables, and environment configurations. Techniques like boundary value analysis and equivalence partitioning can be useful to identify the possible values for individual factors. But it is impractical to test all possible combinations of values for all those factors.
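A quick back-of-the-envelope calculation shows why exhaustive combination testing is impractical (the parameter and value counts are hypothetical):

```python
from math import comb

# Hypothetical system: 10 parameters, 4 possible values each.
parameters, values = 10, 4

# Exhaustive testing must cover every full combination.
exhaustive = values ** parameters
print(exhaustive)  # 1048576 test cases

# Pairwise testing only has to cover every value pair for every pair
# of parameters; this is the number of distinct pairs to cover, and
# a single test case covers comb(parameters, 2) = 45 of them at once.
value_pairs = comb(parameters, 2) * values ** 2
print(value_pairs)  # 720 pairs, coverable by a few dozen test cases
```

Each test case covers 45 pairs simultaneously, which is why pairwise suites stay in the tens of cases while the exhaustive suite exceeds a million.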

Hence, it is beneficial to apply a “divide-and-conquer” approach to the generation of a covering array so that we can utilize multiple covering array generators in combination. We will answer RQ4 by examining the details of the procedure needed to implement the approach with this technique. There is another approach that constructs a new covering array from existing ones (Zamansky et al., 2017).
