Zero Deviation Lifecycle gives requirements engineering and software modeling a refresh
Modeling of requirements and solutions in software development has remained a core part of systems engineering and embedded components in product manufacturing, but has had mixed-to-poor results in breaking into mainstream software development. The last major effort, the Object Management Group's executable UML, arrived at the same time as the Agile methodologies (Scrum and Extreme Programming) began their rise in adoption, and lost out to these lighter-weight processes for various reasons, including the lack of a standard solution. However, Qualit-e, a Cognizant Technology Solutions venture, is introducing a new concept to modeling, the Zero Deviation Lifecycle (ZDLC), with the potential to give modeling a fresh opportunity to break into the mainstream. ZDLC embraces the agile iterative approach to development while using executable models to reduce the gap that can arise between requirements and the built system. Ovum believes that ZDLC has enormous potential in software development; the modeling initiatives used are based on open source projects that will create definitive standard solutions.
The emphasis of ZDLC is to reduce the mismatch between the required system and the built system
Qualit-e's ZDLC starts by addressing the requirements of the architecture transformation from "as is" to "to be", employing five levels of House of Quality (HoQ) transformation matrices. The first matrix maps stakeholders (the who) to business requirements (the what). This map is then drilled into with a second HoQ matrix relating the business requirements (the what) to the user requirements (the how). A third drill-down HoQ matrix maps the user requirements (the new what) to technical requirements (the new how). At each HoQ level the matrix entries are prioritized, for example using a traffic-light scheme.
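The cascade described above can be sketched in code. This is a hypothetical illustration of the HoQ drill-down pattern, not the ZDLC tooling itself: each matrix relates a "what" to a "how" with a traffic-light priority, and the "how" of one level becomes the "what" of the next. All requirement names are invented.

```python
PRIORITY = ("red", "amber", "green")  # traffic-light scheme

def hoq_matrix(whats, hows, cells):
    """Build a HoQ matrix: cells maps (what, how) -> priority."""
    matrix = {}
    for (w, h), p in cells.items():
        assert w in whats and h in hows and p in PRIORITY
        matrix[(w, h)] = p
    return matrix

# Level 1: stakeholders (the who) -> business requirements (the what)
level1 = hoq_matrix(
    whats=["head_of_sales", "compliance_officer"],
    hows=["faster_quotes", "audit_trail"],
    cells={
        ("head_of_sales", "faster_quotes"): "red",      # top priority
        ("compliance_officer", "audit_trail"): "amber",
    },
)

# Level 2 drill-down: the previous "how" becomes the new "what",
# here relating business requirements to user requirements.
level2 = hoq_matrix(
    whats=["faster_quotes", "audit_trail"],
    hows=["quote_in_under_1h", "log_every_change"],
    cells={
        ("faster_quotes", "quote_in_under_1h"): "red",
        ("audit_trail", "log_every_change"): "amber",
    },
)
```

Levels three through five follow the same pattern, each time re-using the previous level's "how" column as the new "what" rows.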
The next step is to create a fourth-level HoQ matrix relating the technical requirements to processes and entities. At this point the Goal Question Metric (GQM) methodology is used to extract metrics that will demonstrate whether the goals are being achieved. GQM is an approach that has been developed over the years at institutes such as NASA and in industry (for example, BMW). GQM yields measurable quality attributes of the non-functional requirements. With this information the final, fifth HoQ matrix is produced, relating the processes and entities (the how) to non-functional requirements (the how well).
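GQM's structure can be illustrated with a minimal sketch: a goal is refined into questions, and each question is answered by measurable metrics. The goal, questions, and metrics below are invented for illustration; the report does not specify ZDLC's internal representation.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    unit: str
    target: float  # the measurable quality attribute to check against

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

# A hypothetical non-functional (performance) goal refined via GQM.
goal = Goal(
    purpose="Improve quote turnaround",
    questions=[
        Question(
            text="How long does a quote take end to end?",
            metrics=[Metric("median_quote_time", "minutes", 60.0)],
        ),
        Question(
            text="How often do quotes miss the target?",
            metrics=[Metric("sla_miss_rate", "percent", 5.0)],
        ),
    ],
)

# The leaf metrics are the measurable attributes fed into the fifth
# HoQ matrix (the "how well" column).
all_metrics = [m.name for q in goal.questions for m in q.metrics]
```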
The next stage of ZDLC is to take the HoQ analysis of functional and non-functional requirements and create a single model based on Petri nets and pi-calculus using Cognizant's Testable Integration Architecture (TiA) methodology. The key aspect of ZDLC models is that they can be executed to test the requirements through simulation. The modeling phase results are then fed back to the HoQ levels, and the process described above is iterated several times to refine the requirements as inconsistencies are flushed out.
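What it means for a requirements model to be "executable" can be shown with a generic, textbook-style Petri-net interpreter; this is a minimal sketch, not Cognizant's TiA implementation. Places hold tokens, and a transition fires when every input place has a token, so running the net simulates the modeled process.

```python
def enabled(transitions, marking):
    """Transitions whose every input place holds at least one token."""
    return [t for t, (ins, outs) in transitions.items()
            if all(marking.get(p, 0) > 0 for p in ins)]

def fire(t, transitions, marking):
    """Consume one token from each input place, add one to each output."""
    ins, outs = transitions[t]
    m = dict(marking)
    for p in ins:
        m[p] -= 1
    for p in outs:
        m[p] = m.get(p, 0) + 1
    return m

# A toy order-handling process: receive -> validate -> approve.
# Each transition maps to (input places, output places).
transitions = {
    "receive":  (["start"],     ["received"]),
    "validate": (["received"],  ["validated"]),
    "approve":  (["validated"], ["done"]),
}
marking = {"start": 1}  # initial token

trace = []
while (ts := enabled(transitions, marking)):
    t = ts[0]           # deterministic here; real nets may have choices
    trace.append(t)
    marking = fire(t, transitions, marking)
# trace now records the simulated behavior of the modeled process
```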
Once these iterations are complete, the modeling tool is used to generate design artifacts and specifications in formats that can drive code generation: BPMN, WSDL, state charts, and so on. After the system is built, it can be reverse-engineered to create a model of the system as actually built. Cognizant's Systemic Defect Profiler then runs both the original requirements-based model and the actual model in tandem and compares them to identify any differences.
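The tandem comparison can be sketched as follows. The Systemic Defect Profiler is proprietary, so this hedged example simply drives two hypothetical state-machine models with the same stimulus and diffs the resulting traces; all model contents are invented.

```python
def simulate(model, event):
    """model: dict of state -> {event: next_state}; returns visited states."""
    state, visited = "start", ["start"]
    while event in model.get(state, {}):
        state = model[state][event]
        visited.append(state)
        if state == "done":
            break
    return visited

# Requirements-based model vs. the model reverse-engineered from the
# built system; the latter contains an extra "shipped" step (a deviation).
required_model = {"start": {"order": "validated"},
                  "validated": {"order": "done"}}
actual_model   = {"start": {"order": "validated"},
                  "validated": {"order": "shipped"},
                  "shipped": {"order": "done"}}

req_trace = simulate(required_model, "order")
act_trace = simulate(actual_model, "order")

# Report every position where the two traces diverge, for root-cause analysis.
deviations = [(i, r, a) for i, (r, a)
              in enumerate(zip(req_trace, act_trace)) if r != a]
```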
Benefits of ZDLC include a strong tie of requirements into constructible code artifacts, with full traceability back to stakeholders; a testable architecture simulated by an executable model; and a reverse-engineering capability that allows the actual model to be compared with the required model. Any deviations and defects are reported by the solution to inform root-cause analysis and correction. The finished product can thus be shown to conform to the business requirements.
The modeling element in ZDLC is based on the Savara project and the Eclipse BPMN2 Modeler
A key element in ZDLC is the Systemic Defect Profiler modeling capability, and this has contributions from two open source projects: JBoss Savara (a joint venture between Cognizant and Red Hat) and the Eclipse BPMN2 Modeler. Savara, currently in version 2, aims to build tools for enterprise and solution architects based on TiA. The Eclipse BPMN2 Modeler project, tooling for BPMN 2 (a standard for executable business process modeling), feeds into Savara. TiA uses the Web Services Choreography Description Language (WS-CDL) and BPMN2 to describe an architecture, in particular SOA, and test it against requirements. The formal basis of TiA is pi-calculus, which formally defines asynchronous and serial or parallel communication between entities. Savara supports not just BPMN2 but also WS-BPEL, WSDL, WS-CDL, and Service Component Architecture.
The executable UML 2 standard and the concept of the software factory were initiatives meant to take modeling to the next level. While correctly addressing genuine needs in software development, these approaches failed to achieve their aims; the lack of a standard solution played some part in that. As a community project, Savara promises to provide a core standard available to the mass developer community, as well as a platform around which vendors may offer premium products, such as ZDLC from Qualit-e.
It has always been our view that modeling, which proves so essential in systems software engineering, has a part to play in mainstream application development; there is a good chance that ZDLC will achieve that goal.
Michael Azoff, Principal Analyst, Software Solutions Group
All Rights Reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the publisher, Ovum (an Informa business).
The facts of this report are believed to be correct at the time of publication but cannot be guaranteed. Please note that the findings, conclusions, and recommendations that Ovum delivers will be based on information gathered in good faith from both primary and secondary sources, whose accuracy we are not always in a position to guarantee. As such Ovum can accept no liability whatever for actions taken based on any information that may subsequently prove to be incorrect.