Computer Engineering
Permanent URI for this collection
Browse
Browsing Computer Engineering by Author "Jiang, Zhen Ming"
Now showing 1 - 2 of 2
Results Per Page
Sort Options
Item Open Access An Empirical Assessment on the Techniques Used in Load Testing(2017-07-27) Gao, Ruoyu; Jiang, Zhen MingThere are two main problems associated with load testing research: (1) the testing environment might not be realistic and (2) lack of empirical research. To address the first problem, we systematically assess the performance behavior of the system with various realistic environment changes. Results show that environment changes can have a clear performance impact on the system. Different scenarios react differently to the changes in the computing resources. When predicting the performance of the system under new environment changes, our ensemble-based models significantly out-perform the baseline models. To address the second problem, we have empirically evaluated 23 test analysis techniques. We have found all the evaluated techniques can effectively build performance models using data from both buggy or non-buggy tests and flag the performance deviations. It is more cost-effective to train models using two recent previous tests collected under longer sampling intervals.Item Open Access Characterizing and Improving Logging Practices in Java-based Open Source Software Projects - A Large-scale Case Study in Apache Software Foundation(2017-07-27) Chen, Boyuan; Jiang, Zhen MingLog messages (generated by logging code) contain rich information about the runtime behavior of software systems. Although more logging code can provide more context of the system's behavior, it is undesirable to include too much logging code. Yuan et al. performed the first empirical study on characterizing the logging. In the first part of the thesis, we conduct a large-scale replication study on characterizing the logging practices on Java-based open source projects. A significantly higher portion of log updates are for enhancing the quality rather than co-changes with feature implementations. However, there are no well-defined coding guidelines for performing effective logging. In the second part, we studied the problem of characterizing and detecting the anti-patterns in the logging code. We have encoded these anti-patterns into a static code analysis tool, LCAnalyzer. Case studies show that LCAnalyzer has an average recall of 95% and precision of 60% .