Human-AI Comparative Evaluation of Test Case Generation in Scenario-Oriented Software Testing
No. 142 · Access: attendees only · Updated: 2025-12-23 13:18:11

Presentation start: December 30, 2025, 12:05 (Asia/Amman)

Presentation length: 10 min

Session: [S9] / [S9-1] Track 5: Emerging Trends of AI/ML


Abstract
This research evaluates the effectiveness of AI-generated test cases (using GPT-4) against test cases constructed with conventional manual approaches in scenario-driven software testing. Manual test cases were developed by applying established black-box testing methods, while GPT-4 generated test cases from structured prompts. Three scenarios (easy, moderate, and complex) were used to conduct the evaluation under equivalent conditions. The study compared defect detection capability, test coverage, execution efficiency, and scenario relevance. The results indicate that AI-generated test cases provide broader coverage, are faster to produce, and detect edge-case faults more effectively, notably in the complex scenario. Manual testing was found to be stronger in contextual reasoning and in safety-critical interpretation. Overall, this research concludes that AI-generated testing complements manual testing rather than replacing it. The results support a hybrid approach for modern software testing and quality assurance.
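To illustrate the "structured prompts" methodology mentioned above, the following is a minimal, hypothetical sketch of how such a prompt might be assembled before being sent to a model like GPT-4. The template, field names, and function name are illustrative assumptions, not the authors' actual prompts.

```python
# Hypothetical sketch: assembling a structured prompt for LLM-based
# test case generation. The template below is an assumption for
# illustration; the paper's real prompts are not shown in the abstract.

def build_test_case_prompt(scenario: str, complexity: str, num_cases: int = 5) -> str:
    """Return a structured prompt asking an LLM to generate
    scenario-driven test cases, including edge cases."""
    return (
        "You are a software test engineer.\n"
        f"Scenario ({complexity} complexity): {scenario}\n"
        f"Generate {num_cases} test cases, one per line, in the form:\n"
        "ID | Preconditions | Steps | Expected result\n"
        "Cover normal flows, boundary values, and edge cases."
    )

# Example usage for the "easy" scenario tier described in the abstract.
prompt = build_test_case_prompt(
    scenario="User logs in with email and password",
    complexity="easy",
)
```

The resulting string would then be passed as the user message in a chat-completion request; keeping the output format machine-parseable (here, a pipe-delimited line per case) makes coverage and defect-detection comparisons against manual test suites easier to automate.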
Keywords
Artificial Intelligence; GPT-4; Software Testing; Test Case Generation; Scenario-Driven Testing; Quality Assurance
Presenter
Rawan Habarneh
Student, Zarqa University

Authors
Hamed Fawareh, Zarqa University
Rawan Habarneh, Zarqa University
Important Dates
  • Conference dates: December 29–31, 2025
  • December 30, 2025: Presentation submission deadline
  • February 10, 2026: Initial draft deadline
  • February 10, 2026: Registration deadline

Organized by: 国际科学联合会
Hosted by: Zarqa University