Textual API description standard as a tool for improving software testing quality

Authors

DOI:

https://doi.org/10.34185/1562-9945-4-165-2026-17

Keywords:

API documentation standard, MADS, software testing quality, test coverage metrics, boundary value analysis, REST API, ISTQB, RAG, Model Context Protocol

Abstract

The rapid adoption of microservice architecture has made application programming interfaces (APIs) the primary integration mechanism in modern software systems. Accordingly, the quality of API testing depends directly on the completeness and structure of API specifications available to testing engineers. In practice, however, the majority of projects document their APIs as informal plain text in corporate knowledge management systems - Confluence, Google Docs, Notion - without adhering to any unified standard. A systematic analysis of four current ISTQB syllabuses (CTFL v4.0.1, CTAL-TAE v2.0, CT-TAS v1.0, CT-AI v1.0) reveals that none of them defines the minimum required content for a textual endpoint description, despite recognising documentation quality as a measurable characteristic (FL-BO4). Existing research confirms the problem: Uddin and Robillard identified "incompleteness" as the most prevalent failure mode across API documentation, while Murphy et al. reported that specifications are "frequently missing, vague, or outdated" in real development teams. Machine-readable formats such as OpenAPI Specification address a different audience and assume technical knowledge of YAML or JSON, leaving the gap in informal human-readable documentation unresolved.

The purpose of this study is to develop and validate the Minimal API Description Standard (MADS) - a structured 10-field template for plain-text API endpoint descriptions in corporate documentation tools - and to demonstrate its impact on software testing quality.

MADS organises ten fields into four functional blocks: endpoint identification (HTTP method, URL pattern, operation name), input data (request parameters with types and constraints, request body), output data (successful response structure, error codes with conditions), and security context (authentication model, preconditions and business rules, API version). Fields are classified as mandatory or recommended. Each field is justified through convergent evidence from the scientific literature and practical security requirements (OWASP API Security Top 10).
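As an illustration, the ten-field template can be modelled as a structured record with a trivial completeness check. The concrete field names and the mandatory/recommended split below are an assumption inferred from this abstract, not the normative MADS definitions; a minimal Python sketch:

```python
# Hypothetical MADS field set, grouped by the four functional blocks
# named in the abstract (identification, input, output, security context).
MANDATORY_FIELDS = {
    "http_method", "url_pattern", "operation_name",   # identification
    "request_parameters", "request_body",             # input data
    "success_response", "error_codes",                # output data
    "authentication",                                 # security context
}
RECOMMENDED_FIELDS = {"preconditions", "api_version"}

def missing_mandatory(description: dict) -> set:
    """Return the mandatory MADS fields absent from an endpoint description."""
    return MANDATORY_FIELDS - description.keys()

# Illustrative endpoint description for an order management service.
create_order = {
    "http_method": "POST",
    "url_pattern": "/orders",
    "operation_name": "Create order",
    "request_parameters": {"quantity": {"type": "int", "min": 1, "max": 100}},
    "request_body": "list of order items",
    "success_response": {"status": 201, "body": "created order id"},
    "error_codes": {400: "invalid quantity", 401: "missing token"},
    "authentication": "Bearer token",
    "preconditions": "customer account is active",
    "api_version": "v1",
}

assert missing_mandatory(create_order) == set()
```

A check of this kind is also the natural starting point for the automated MADS compliance validation the article names as future work.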

Empirical evaluation was conducted across three REST API endpoints of a typical order management service. Test cases were designed using two ISTQB-standard techniques: Boundary Value Analysis (BVA) and Equivalence Partitioning (EP). Three indicators were measured for both an unstructured description (UD) and a MADS-compliant description: the standard Requirement Coverage metric (RC, per ISTQB CTFL v4.0.1 section 5.3.1 and IEEE 829), the applicability of BVA and EP as a binary indicator per parameter, and the total number of test cases. Results show that RC increases from 23% (UD) to 100% (MADS), BVA/EP applicability rises from 25% to 100% of parameters, and the test case count grows from 5 to 26, a 5.2-fold increase achieved exclusively through structured documentation, without additional development resources. Response Code Coverage was zero for all three endpoints under the unstructured condition, meaning negative test scenarios were entirely absent. The study further demonstrates that MADS serves as a structural prerequisite for reliable LLM-based test generation pipelines: structured MADS chunks improve RAG retrieval accuracy and enable deterministic resource access in Model Context Protocol (MCP) agentic architectures.
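To make the mechanism concrete: once a parameter's type and range are stated explicitly, as MADS requires, BVA and EP test values can be derived mechanically rather than guessed. A hypothetical sketch (not the authors' tooling), assuming a quantity parameter constrained to the range [1, 100]:

```python
# Two-point Boundary Value Analysis: the values just outside and exactly
# on each boundary of a closed integer range [lo, hi].
def bva_values(lo: int, hi: int) -> list:
    return [lo - 1, lo, hi, hi + 1]

# Equivalence Partitioning: one representative value per partition
# (below the range, inside it, above it).
def ep_values(lo: int, hi: int) -> list:
    return [lo - 1, (lo + hi) // 2, hi + 1]

# Assumed constraint taken from a MADS description: quantity in [1, 100].
assert bva_values(1, 100) == [0, 1, 100, 101]
assert ep_values(1, 100) == [0, 50, 101]
```

Without the documented constraint, neither technique is applicable to the parameter at all, which is what the 25% applicability figure for unstructured descriptions reflects.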

The article proposes that the ISTQB Foundation Level Working Group consider incorporating minimum requirements for informal textual API descriptions into a future revision of the CTFL syllabus. Future research directions include automated MADS compliance validation, empirical correlation studies between MADS adoption and post-release defect rates, and extension of the standard to GraphQL and gRPC APIs.

References

Unified.to. (2024). 2024 State of SaaS APIs: API Specifications and Documentation. https://unified.to/blog/2024_state_of_saas_apis_api_specifications_and_documentation

Hrytsyuk, Yu. I., & Mukha, T. O. (2020). Metody vyznachennia yakosti prohramnoho zabezpechennia [Methods for determining software quality]. Naukovyi visnyk NLTU Ukrainy, 30(1), 158–167. https://doi.org/10.36930/40300127

Hrytsyuk, Yu. I. (2022). Systema kompleksnoho otsiniuvannia yakosti prohramnoho zabezpechennia [Comprehensive software quality assessment system]. Naukovyi visnyk NLTU Ukrainy, 32(2), 81–95. https://doi.org/10.36930/40320213

Hrytsyuk, P. Yu., Ivanyshyn, A. V., & Hrytsyuk, Yu. I. (2023). Zabezpechennia yakosti prohramnoho produktu za standartom IEEE 730-2014 [Software product quality assurance per IEEE 730-2014]. Naukovyi visnyk NLTU Ukrainy, 33(2), 101–117. https://doi.org/10.36930/40330214

International Software Testing Qualifications Board. (2024). Certified Tester Foundation Level Syllabus v4.0.1. https://istqb.org

Torskyi, O. I., & Hrytsyuk, Yu. I. (2025). Zastosuvannia mashynnoho navchannia modelei dlia pidvyshchennia efektyvnosti avtomatyzovanoho testuvannia [Application of ML models for improving automated testing efficiency]. Scientific Bulletin of UNFU, 35(4), 142–149. https://doi.org/10.36930/40350416

Trofymenko, O. H., & Dyka, A. I. (2024). Testuvannia ta zabezpechennia yakosti prohramnykh system [Testing and QA of software systems]. Feniks. https://doi.org/10.32837/11300.27717

Natsionalnyi universytet «Lvivska politekhnika». (2025). Formuvannia tekhnichnoi dokumentatsii IT proiektiv [Formation of technical documentation of IT projects]. Information Systems and Networks, 18, 261–270.

Uddin, G., & Robillard, M. P. (2015). How API documentation fails. IEEE Software, 32(4), 68–75. https://doi.org/10.1109/MS.2014.80

Meng, M., Steinhardt, S., & Schubert, A. (2018). Application Programming Interface Documentation: What Do Software Developers Want? Journal of Technical Writing and Communication, 48(3), 295–330. https://doi.org/10.1177/0047281617721853

Zibran, M. F., Nabi, N., Roy, C. K., & Bhavsar, V. C. (2019). What Should I Document? A Preliminary Systematic Mapping Study (arXiv:1907.13260). arXiv. https://arxiv.org/abs/1907.13260

Golmohammadi, A., Zhang, M., & Arcuri, A. (2023). Testing RESTful APIs: A Survey. ACM Transactions on Software Engineering and Methodology, 33(1). https://doi.org/10.1145/3617175

Coblenz, M., Guo, W., Voozhian, K., & Foster, J. S. (2023). A Qualitative Study of REST API Design and Specification Practices. 2023 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), 148–157. https://doi.org/10.1109/VL-HCC57772.2023.00025

Kim, M., Corradini, D., Sinha, S., Orso, A., Pasqua, M., Tzoref-Brill, R., & Ceccato, M. (2023). Enhancing REST API Testing with NLP Techniques. Proceedings of ISSTA 2023, 1232–1243. https://doi.org/10.1145/3597926.3598131

Sohan, S. M., Anslow, C., & Maurer, F. (2017). A study of the effectiveness of usage examples in REST API documentation. IEEE VL/HCC, 53–61. https://doi.org/10.1109/VLHCC.2017.8103450

Google Cloud. (2024). What is Model Context Protocol (MCP)? A guide. https://cloud.google.com/discover/what-is-model-context-protocol

International Software Testing Qualifications Board. (2024). CTAL-TAE Syllabus v2.0. https://istqb.org

International Software Testing Qualifications Board. (2024). CT-TAS Syllabus v1.0. https://istqb.org

International Software Testing Qualifications Board. (2021). CT-AI Syllabus v1.0. https://astqb.org

OpenAPI Initiative. (2024). OpenAPI Specification v3.1.1. https://spec.openapis.org/oas/v3.1.1.html

OWASP Foundation. (2023). OWASP API Security Top 10 – 2023. https://owasp.org/API-Security

Krepych, S. Ya., & Spivak, I. Ya. (Eds.). (2020). Yakist prohramnoho zabezpechennia ta testuvannia [Software quality and testing]. FOP Palianytsia V. A.

Gao, Y., et al. (2023). Retrieval-Augmented Generation for Large Language Models: A Survey (arXiv:2312.10997). arXiv. https://arxiv.org/abs/2312.10997

Lewis, P., et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Advances in NeurIPS, 33, 9459–9474.

Sun, Z., et al. (2024). Retrieval-Augmented Test Generation: How Far Are We? (arXiv:2409.12682). arXiv. https://arxiv.org/abs/2409.12682

Sheffer, T. (2024). RAG for a Codebase with 10k Repos. Qodo Engineering Blog. https://www.qodo.ai/blog/rag-for-large-scale-code-repos

Anthropic. (2024). Introducing the Model Context Protocol. https://www.anthropic.com/news/model-context-protocol

Model Context Protocol. (2025). Wikipedia. https://en.wikipedia.org/wiki/Model_Context_Protocol

IBM. (2024). What is Model Context Protocol (MCP)? https://www.ibm.com/think/topics/model-context-protocol

Published

2026-04-30