The application of AI techniques in software testing is in its early stages. In recent years, the software industry and practitioners have increasingly adopted novel techniques to ease the software development cycle, particularly the testing phase, through autonomous testing and the automation of repetitive and tedious activities.
AI can transform software testing by supporting more efficient tests and increased automation, which ultimately reduces testing costs and improves software quality. The AIST workshop aims to gather researchers and industry practitioners to present, discuss, and foster collaboration on novel and up-to-date R&D focused on the application of AI in software testing, shaped by the multiplicity of perspectives and topics that fall under the AI umbrella.
Call for Papers
We invite novel papers from academia and industry on AI applied to software testing that cover, but are not limited to, the following aspects:
- AI for test case design, test generation, test prioritization, and test reduction.
- AI for load testing and performance testing.
- AI for monitoring running systems or optimizing those systems.
- Explainable AI for software testing.
- Case studies, experience reports, benchmarking, and best practices.
- New ideas, emerging results, and position papers.
- Industrial case studies with lessons learned or practical guidelines.
Papers can be of one of the following types:
- Full papers (max. 8 pages): Papers presenting mature research results or industrial practices.
- Short papers (max. 4 pages): Papers presenting new ideas, preliminary results, position statements, or open challenges.
All submissions must be original, unpublished, and not under consideration for publication elsewhere. For each accepted paper, at least one author must register for the workshop and present the paper. AIST will be held online.
Papers presented at AIST will be published through the IEEE digital library.