Examining the Effectiveness of Generative Artificial Intelligence for the Identification of Defeaters in Assurance Cases
Abstract
Assurance cases are structured arguments used to verify that a system correctly implements its non-functional requirements (e.g., safety, security, reliability), thereby helping to prevent system failures that may result in loss of life, severe injuries, large-scale environmental damage, property destruction, and major economic loss. Assurance cases also support the certification of systems in compliance with industrial standards (e.g., DO-178C, ISO 26262). However, the presence of assurance weakeners, i.e., deficits and logical fallacies, signals gaps in evidence and reasoning. To address this, our research presents a comprehensive taxonomy for categorizing these assurance weakeners, alongside proposed management strategies. The taxonomy divides weakeners into four categories of uncertainty: aleatory, epistemic, ontological, and argumentation. It also organizes management approaches into representation, identification, and mitigation. A critical aspect of strengthening assurance cases is identifying argumentation uncertainty, i.e., defeaters. To automate this process, we explore the capabilities of GPT-4 Turbo, a sophisticated large language model by OpenAI, focusing on its application to detecting defeaters within assurance cases represented in Eliminative Argumentation (EA) notation. Our initial evaluation assesses GPT-4 Turbo's proficiency in understanding and applying this notation, a key prerequisite for effectively generating defeaters. The results indicate that GPT-4 Turbo is highly adept in EA notation and can generate a diverse range of defeaters, thereby enhancing the robustness and reliability of assurance cases. We then used GPT-4 Turbo to identify defeaters in assurance cases, where it also demonstrated effective proficiency.
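To illustrate the kind of workflow the abstract describes, the following is a minimal sketch, not the authors' actual pipeline, of how GPT-4 Turbo might be prompted through the OpenAI Python SDK to propose defeaters for a claim expressed in EA notation. The prompt wording, the example claim and evidence, and the EA fragment format are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): asking GPT-4 Turbo to propose
# defeaters for a single Eliminative Argumentation (EA) claim.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Illustrative EA fragment; the claim text and labeling scheme are assumptions.
ea_fragment = """
C1 (Claim): The autonomous braking system engages within 200 ms of obstacle detection.
E1 (Evidence): Bench tests on 500 simulated obstacle scenarios show a mean latency of 120 ms.
"""

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "You are an assurance-case reviewer familiar with Eliminative "
                "Argumentation (EA) notation. Given a claim and its supporting "
                "evidence, list plausible defeaters (rebutting, undermining, "
                "or undercutting) as short EA-style statements."
            ),
        },
        {"role": "user", "content": ea_fragment},
    ],
    temperature=0.7,
)

# Print the model's candidate defeaters for manual review.
print(response.choices[0].message.content)
```

In practice, the model's output would still need to be reviewed by an assurance-case expert before any defeater is incorporated into the argument.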