11th International Workshop on Combinatorial Testing (IWCT 2022)
IWCT 2022 is to be held in conjunction with ICST 2022 (going virtual), focusing on combinatorial testing. The workshop welcomes academic research submissions, as well as industrial experience reports.
Combinatorial Testing (CT), or Combinatorial Interaction Testing (CIT), is a widely applicable generic methodology and technology for software verification and validation, considered a testing best practice. In a combinatorial test plan, all interactions between parameters up to a certain level are covered. For example, in pairwise testing, for every pair of parameters, every combination of values for those two parameters appears in at least one test. Studies show that CT is more efficient and effective than random testing.
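As a hypothetical illustration of the pairwise criterion (the parameter names, values, and test plan below are invented, not taken from any particular tool), the following sketch checks that a 4-test plan covers every value pair of every parameter pair for a system that would otherwise need 8 exhaustive tests:

```python
from itertools import combinations, product

# Invented example model: three parameters, two values each.
params = {
    "os": ["linux", "windows"],
    "browser": ["chrome", "firefox"],
    "db": ["mysql", "postgres"],
}

# A 4-row pairwise plan; exhaustive testing would need 2*2*2 = 8 tests.
tests = [
    {"os": "linux",   "browser": "chrome",  "db": "mysql"},
    {"os": "linux",   "browser": "firefox", "db": "postgres"},
    {"os": "windows", "browser": "chrome",  "db": "postgres"},
    {"os": "windows", "browser": "firefox", "db": "mysql"},
]

def is_pairwise_covered(params, tests):
    """True iff every value pair of every parameter pair appears in some test."""
    for p1, p2 in combinations(params, 2):
        needed = set(product(params[p1], params[p2]))
        seen = {(t[p1], t[p2]) for t in tests}
        if needed - seen:
            return False
    return True

print(is_pairwise_covered(params, tests))
```

Dropping any one of the four tests breaks coverage, which is what makes such plans both small and systematic.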
CT has gained significant interest in recent years, both in research and in practice. However, many issues still remain unresolved, and much research is still needed in the field. For example, while pairwise testing is a well recognized and popular test planning method, investigations of actual failures in a range of software systems convincingly show that pairwise testing is usually not sufficient, so high strength CT (i.e., t-way for t > 2) may be needed.
In addition, the combinatorial test suites need to exclude invalid combinations of test values that cannot be executed, which limits the degrees of freedom the algorithms have, thus complicating the problem. Moreover, modeling languages and tools for easily capturing the input test space are also required for real-life applicability of CT. Other obstacles for wide acceptance of CT in industry are the gap between the generated test plans and executable tests, and the difficulty in determining expected results for the generated tests. Finally, empirical studies on CT, as well as thorough comparison with other methods are also required.
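To make the constraint issue concrete, here is a minimal sketch under invented assumptions (a made-up two-parameter model and a common greedy heuristic, not any specific tool's algorithm) of how excluding invalid combinations shrinks the set of pairs a generator may cover:

```python
from itertools import combinations, product

# Invented model: "safari" only runs on "mac", so some value
# combinations are invalid and must never appear in a generated test.
params = {
    "os": ["linux", "mac"],
    "browser": ["chrome", "safari"],
}
names = list(params)

def valid(test):
    # Constraint: safari requires mac.
    return not (test["browser"] == "safari" and test["os"] != "mac")

def pairs_of(test):
    """All parameter-pair/value-pair interactions a single test covers."""
    return {(p1, test[p1], p2, test[p2]) for p1, p2 in combinations(names, 2)}

# Only pairs occurring in at least one valid full test are coverable;
# the pair (os=linux, browser=safari) is excluded by the constraint.
all_tests = [dict(zip(names, vals)) for vals in product(*params.values())]
valid_tests = [t for t in all_tests if valid(t)]
coverable = set().union(*(pairs_of(t) for t in valid_tests))

# Greedy generation: repeatedly pick the valid test covering the most
# still-uncovered pairs.
plan, remaining = [], set(coverable)
while remaining:
    best = max(valid_tests, key=lambda t: len(remaining & pairs_of(t)))
    plan.append(best)
    remaining -= pairs_of(best)

print(len(plan), "tests cover", len(coverable), "valid pairs")
```

Because every candidate test must satisfy `valid`, the constraint both removes one pair from the coverage target and reduces the generator's freedom in choosing tests, which is exactly what complicates the optimization problem in practice.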
In this workshop, we plan to bring together researchers actively working on combinatorial testing, and create a productive and creative environment for sharing and collaboration. Since there is no other venue dedicated to CT, yet there are many researchers working in the field, we expect, as in previous years, strong interest in taking part in the workshop. Researchers attending the workshop will have an opportunity to publish their work in a dedicated venue, create new collaborations, and take an active part in the growing community of researchers working in the field.
The workshop will also be a meeting place between academia and industry, thus uniting academic excellence with industrial experience and needs. This will allow participants from academia to learn about industrial experience in the practical application of CT to real-life testing problems, and, together with colleagues from industry, identify the difficulties that are obstacles to wider application of CT and should be addressed in future research. Industrial participants will have an opportunity to meet the leading scientists in the field, and hear about the latest advances and innovations.
IWCT will host a competition among tools for combinatorial testing. We are still working on the details; some preliminary information is available at https://fmselab.github.io/ct-competition/. If you are interested in helping us define the rules, please let us know.
Call for Papers
We invite submissions of high-quality papers presenting original work on both theoretical and experimental aspects of combinatorial testing.
Papers should not exceed 8 pages for full papers or 4 pages for short experience and position papers, excluding references. This is not a strict limit; if you need more space, contact the chairs. Each submitted paper must conform to the IEEE two-column publication format. Papers will be reviewed by at least three members of the program committee. Accepted papers will be published in the IEEE Digital Library.
Participants in the CT competition (https://fmselab.github.io/ct-competition/) are invited to submit a paper discussing their tool and its results against the benchmark provided in advance of the competition.
The aim of journal-first papers in this category is to further enrich the program of IWCT, as well as to provide an overall more flexible path to publication and dissemination of original research in combinatorial testing. The published journal paper must adhere to the following three criteria:
- It should be clearly within the scope of the workshop.
- It should be recent: it should have been accepted and made publicly available in a journal (online or in print) on or after 1 January 2020.
- It has not been presented at, and is not under consideration for, journal-first tracks of other conferences or workshops.
The 2-page submission should provide a concise summary of the published journal paper.
Journal-first submissions must be marked as such in the submission’s title, and must explicitly include full bibliographic details (including a DOI) of the journal publication they are based on. Submissions will be judged on the basis of the above criteria, but also considering how well they would complement the workshop’s technical program.
Topics of interest for full and short papers include, but are not limited to:
- Modeling the input space for CT – the input to CT algorithms is a set of parameters, their respective values, and constraints on value combinations. This input should correctly capture the points of variability in testing the system. While this input is crucial for the effectiveness of CT, constructing it is a difficult problem.
- Efficient algorithms to generate small test suites for t-way testing with t > 2, including support for constraints on which combinations are possible.
- Determination of expected system behavior for each test case – while the test cases are automatically generated by the CT algorithm, determining the expected system behavior is currently usually a manual task.
- Executing CT test suites – the result of CT algorithms is a list of tests, where a test is represented by a value for each parameter. There is still significant effort in transforming this representation into actual tests that a tester or testing tool can execute.
- Combinatorial testing based fault localization.
- Implementation of CT in existing testing infrastructures.
- Handling changes in test requirements – current CT methods focus on one-time generation of a test plan from a given set of requirements. However, test requirements are almost never static, and tend to change between different releases and versions. On the other hand, generating a new set of tests for each release may not be practical either.
- Empirical studies and feedback from practical applications of CT.
- Evaluation and return of investment metrics to assess the degree of usefulness of CT.
- Methodology used for test space modeling and determination of interaction coverage requirements.
- Discussion of challenges and open problems in the application of CT in industrial settings.
- Combinatorial testing for AI-based systems.
- Comparison and combination of CT with other dynamic verification methods.
- Investigation of historical records of failures to determine the kind of CT that may have detected the underlying faults.
- Combinatorial testing for concurrent and real-time systems.
- CT for testing cloud computing systems and use of combinatorial methods in cloud architecture.
- Application of CT in other domains, e.g., information security, gene-regulation studies and other biotechnology applications, and mechanical engineering.
- Combinatorial testing of feature models for software product lines.
- Combinatorial analysis of existing test suites – analyze a test suite not generated by a CT algorithm in light of a combinatorial test space.
- Test plan reduction and completeness – both under stable, and under changing test requirements.
- CT and coverage metrics – combining the two, and studying the relation between them.