Sat 23 Apr 2022
Innopolis, Kazan, Russia
The Second International Conference on Code Quality (ICCQ) was a one-day computer science event focused on static analysis, program verification, bug detection, and software maintenance; ICCQ was organized in cooperation with the IEEE Computer Society and Innopolis University.
Watch all presentations on YouTube and subscribe to our channel so that you don’t miss the next event!
The Proceedings of ICCQ were published in IEEE Xplore.
Keynote
Charles Zhang
HKUST
The general research interest of Dr. Zhang centers on the use of both static and dynamic program analysis techniques for making complex software systems more secure and reliable. Dr. Zhang is an Associate Professor and the director of the Cybersecurity Lab at HKUST. His research has received an ICSE and a PLDI distinguished paper award, as well as the ACM SIGSOFT Doctoral Dissertation Award and IBM PhD fellowships. He co-founded and served as the chairman of Sourcebrella, a static analysis tool vendor.
Steering Committee
Hou Rui
Director of Huawei MRC
Alexander Tormasov
Rector of Innopolis University
Program Committee
Giancarlo Succi (Chair)
Innopolis University
And in alphabetical order:
Karim Ali
University of Alberta
Luciano Baresi
Politecnico di Milano
Carl Friedrich Bolz-Tereick
Heinrich-Heine-Universität Düsseldorf
William J. Bowman
University of British Columbia
Laura M. Castro
Universidade da Coruña
Shigeru Chiba
University of Tokyo
Daniele Cono D’Elia
Sapienza University of Rome
Christian Hammer
University of Passau
Mats Heimdahl
University of Minnesota
Robert Hirschfeld
University of Potsdam
Alexandra Jimborean
University of Murcia
David H. Lorenz
Open University of Israel
Hidehiko Masuhara
Tokyo Institute of Technology
Hausi A. Müller
University of Victoria
Yongjun Park
Hanyang University
Gennady Pekhimenko
University of Toronto
Yulei Sui
University of Technology Sydney
Laurie Williams
North Carolina State University
Tuba Yavuz
University of Florida
Keynotes and Invited Talks
Enterprise-scale static analysis: A Pinpoint experience Charles Zhang
Despite years of research and practice, modern static analysis techniques still cannot detect some of the oldest and best-understood software bugs, such as Heartbleed, one of the most “spectacular” security flaws of the past decade. A remedy, which we have attempted through the successful commercialization of the Pinpoint platform (PLDI 18), is to make static program analysis aware of the basic characteristics of modern enterprise-scale software systems. The talk discusses these characteristics, how Pinpoint addresses them pragmatically, and the platform's future directions. Pinpoint is an LLVM-based cross-language static analysis platform deployed at major Chinese tech companies such as Tencent, Baidu, Huawei, and Alibaba.
Accepted Papers
We received 11 submissions, of which four papers were accepted; each paper received at least three reviews from PC members. One accepted paper was later withdrawn.
To What Extent Can Code Quality be Improved by Eliminating Test Smells? Haitao Wu, Ruidi Yin, Jianhua Gao, Zijie Huang and Huajun Huang
Software testing is a key activity for guaranteeing software reliability and maintainability. However, developers often neglect the maintenance of test code when trading off code quality against release deadlines. Moreover, there is little research quantifying the relationship between test code quality and production code quality. As a result, test quality degrades due to the lack of appropriate refactoring plans. This paper fills the gap by evaluating to what extent code quality can be improved by eliminating test smells. First, we detect the presence of test smells in 119 historical releases of 10 open-source projects. Afterward, we evaluate code quality in two aspects, i.e., defect-proneness and change-proneness. Finally, we exploit the odds ratio and the Mann-Whitney test to quantify the extent of variation in code quality. Results show that the odds-ratio (OR) values for both test code and production code are much greater than 1, which shows that test smells are indeed a risk factor that increases the defect-proneness of code. Moreover, the change-proneness of the test code and the associated production code decreases significantly after elimination. The experiments also reveal that Assertion Roulette is the riskiest smell for degrading production code quality.
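To make the statistics concrete, the following is a minimal Python sketch, with invented counts that do not come from the paper, of how an odds ratio and a Mann-Whitney test can relate test smells to defect- and change-proneness:

    # Minimal sketch of the two statistics used in the study; all numbers
    # below are made up for illustration only.
    from scipy.stats import mannwhitneyu

    # Hypothetical 2x2 contingency table:
    # rows = file contains a test smell or not,
    # columns = file was defective or not in the following release.
    smelly_defective, smelly_clean = 40, 60
    clean_defective, clean_clean = 15, 85

    # Odds ratio: how much more likely a smelly file is to be defective.
    # OR > 1 means the smell is a risk factor for defect-proneness.
    odds_ratio = (smelly_defective * clean_clean) / (smelly_clean * clean_defective)
    print(f"odds ratio = {odds_ratio:.2f}")  # ~3.78 for these made-up counts

    # Hypothetical change counts per file before and after smell elimination.
    changes_before = [12, 9, 15, 7, 11, 14, 10, 8]
    changes_after = [6, 4, 9, 3, 5, 7, 6, 4]

    # Mann-Whitney U: a non-parametric test of whether change-proneness dropped.
    result = mannwhitneyu(changes_before, changes_after, alternative="greater")
    print(f"U = {result.statistic}, p = {result.pvalue:.4f}")

An odds ratio well above 1, as in the invented table above, is the kind of signal the abstract reports for both test and production code.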
Method Name Prediction for Automatically Generated Unit Tests Maxim Petukhov, Evelina Gudauskayte, Arman Kaliyev, Mikhail Oskin, Dmitry Ivanov and Qianxiang Wang
Writing intuitively understandable method names is an important aspect of good programming practice. Method names have to summarize the code's behavior so that software engineers can easily understand its purpose. Modern automatic testing tools are able to generate a potentially unlimited number of unit tests for a project under test. However, these tests suffer from unintelligible names, as it is quite difficult to understand what each test triggers and checks. This inspired us to adapt state-of-the-art method name prediction approaches to automatically generated unit tests. We have developed a graph extraction pipeline with prediction models based on Graph Neural Networks (GNNs). The extracted graphs contain information about the structure of unit tests and the functions they call. The experimental results show that the proposed approach outperforms other models, with precision = 0.48, recall = 0.42, and F1 = 0.45. The dataset and source code are released for public access.
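As a hypothetical Python illustration of the naming problem (the Account class and both test names are invented, and the authors' GNN model is not shown), compare a tool-generated test name with the kind of descriptive name a prediction model should suggest:

    # Minimal domain class so the example is self-contained; invented for
    # illustration, not taken from the paper or its dataset.
    class Account:
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            # Withdrawals beyond the current balance are silently rejected.
            if amount <= self.balance:
                self.balance -= amount

    # The kind of name an automatic test generator typically produces:
    # it says nothing about the behaviour being checked.
    def test_0():
        account = Account(balance=100)
        account.withdraw(150)
        assert account.balance == 100

    # The kind of name a prediction model should suggest instead.
    def test_withdraw_more_than_balance_keeps_balance_unchanged():
        account = Account(balance=100)
        account.withdraw(150)
        assert account.balance == 100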
Quasi-Dominators and Random Selection in Mutation Testing Rowland Pitts
Mutation testing is a powerful approach to detecting bugs and assessing code quality; however, software developers may be reluctant to embrace the technique due to the monstrous quantity of redundant mutants it generates. In spite of their large numbers, redundant mutants are relatively innocuous: recent research indicates that they affect a test engineer's work effort only slightly, whereas equivalent mutants have a direct linear impact. Moreover, the time invested in analyzing equivalent mutants produces no unit tests. Dominator mutants seek to address the redundancy problem, but they require the identification of all subsumption relationships, which consequently reveals all equivalent mutants. This paper introduces the notion of quasi-dominator mutants, which augment dominator mutants in significant numbers, enhancing their performance, and provides new insight into why random mutant selection performs so well.
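As a hypothetical Python sketch of the terminology (not taken from the paper), mutating a single comparison operator shows the difference between a mutant a test suite can kill and an equivalent mutant that only wastes analysis effort:

    # Hypothetical illustration of killed vs. equivalent mutants.

    def max_of(a, b):
        """Original program under test."""
        return a if a >= b else b

    def max_of_mutant_1(a, b):
        """Mutant: '>=' replaced by '<'; behaviour changes, so a test can kill it."""
        return a if a < b else b

    def max_of_mutant_2(a, b):
        """Equivalent mutant: '>=' replaced by '>'; when a == b both branches
        return the same value, so no test can ever distinguish it."""
        return a if a > b else b

    def suite_passes(fn):
        """A tiny test suite run against any variant of max_of."""
        return fn(3, 2) == 3 and fn(2, 3) == 3 and fn(2, 2) == 2

    assert suite_passes(max_of)               # the original passes
    assert not suite_passes(max_of_mutant_1)  # mutant 1 is killed
    assert suite_passes(max_of_mutant_2)      # mutant 2 survives every suite;
                                              # analysing it yields no new test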
Partners
Academia:
Industry:
Yandex, a Russian intelligent technology company
Huawei, a global provider of ICT infrastructure and smart devices
B.TECH, a Center of Excellence for Electronic Markets technology of BNP Paribas CIB
Kaspersky, a global cybersecurity company and multinational provider of security solutions
Organizers
These people made ICCQ’22 happen:
Yegor Bugayenko (Chair)