both equally dangerous: One, the product can pass testing with flying colors, but can turn out to be a poor product because it was developed and tested using wrong or inaccurate requirements; two, it may fail testing completely, throwing up scores of bugs. In both instances, rework will eat into budgets and time to market. Experience has shown that the cost of rework is over 50% in most large projects. Can this rework be brought down?
Here is an example of ambiguity from a requirements document: “Shut off the pump if the water level remains above 100 meters for more than 4 seconds.”1 From this requirement statement it is not clear whether “water level” refers to the mean, median, root-mean-square, or minimum level over that interval. The interpretation of the requirement is a function of the reader’s background.
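To see how much the interpretation matters, here is a minimal sketch (the threshold, sampling rate, and sample data are illustrative assumptions) comparing two plausible readings of the pump requirement: one where every per-second sample in the 4-second window must exceed 100 meters, and one where only the mean over the window must.

```python
from collections import deque

THRESHOLD_M = 100.0  # assumed shutoff threshold from the requirement
WINDOW_S = 4         # assumed 1 sample per second, 4-second window

def shutoff_instantaneous(samples):
    """Reading 1: every sample in the last 4 s is above 100 m."""
    window = deque(maxlen=WINDOW_S)
    for level in samples:
        window.append(level)
        if len(window) == WINDOW_S and all(l > THRESHOLD_M for l in window):
            return True
    return False

def shutoff_mean(samples):
    """Reading 2: the mean level over the last 4 s is above 100 m."""
    window = deque(maxlen=WINDOW_S)
    for level in samples:
        window.append(level)
        if len(window) == WINDOW_S and sum(window) / WINDOW_S > THRESHOLD_M:
            return True
    return False

# A choppy level that dips below 100 m but averages above it:
readings = [103, 99, 104, 99, 105, 99]
```

For the sample readings above, the mean interpretation shuts the pump off while the instantaneous one never does; two developers implementing the same sentence would ship different behavior.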
Many of the problems in requirements capture have become accepted norms in development. But what if requirements could be articulated unambiguously, reducing the number of bugs introduced during development? In other words, what if we could automate requirements capture?
New tools are becoming available that run through text (written and spoken) and figure out ambiguities. Once these are flagged by the tool, they can be taken back to the client and restated in more structured language. These tools surface ambiguities at scale and speed, both of which are of critical importance in today’s competitive environment.
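A simplified sketch of the flagging step such tools perform: scan a requirement for “weak words” that commonly signal ambiguity. The lexicon below is a small, hypothetical sample; real tools ship much larger, curated lists and add grammatical analysis on top.

```python
import re

# Hypothetical lexicon of weak words that often signal ambiguity.
AMBIGUOUS_TERMS = [
    "appropriate", "adequate", "as required",
    "etc", "fast", "user-friendly",
]

def flag_ambiguities(requirement: str):
    """Return (term, character position) pairs for each weak word found."""
    hits = []
    for term in AMBIGUOUS_TERMS:
        pattern = r"\b" + re.escape(term) + r"\b"
        for m in re.finditer(pattern, requirement, re.IGNORECASE):
            hits.append((term, m.start()))
    return sorted(hits, key=lambda h: h[1])

req = "The system shall respond fast and log errors as required."
```

Each flagged term becomes a question to take back to the client: how fast, required by whom, under what conditions?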
Automatic code generation: the new challenge
We have tools that automate requirements gathering. It is the next step, coding, that needs equal attention. Code automation has not been fully developed: usable models, templates, tools, libraries in common/target languages, and the like that can automate development are still viewed with caution. But some methodologies, when used intelligently, can make developers more productive and reduce the risk of bugs. The challenge is to figure out which automation tools work, and which don’t, in specific instances and environments.
In integration projects, the process of creating a spreadsheet mapping spec (done by a business analyst) and then converting that into code (by a developer) amounts to major duplication of effort. Typically, over 50% of interfaces developed in an integration process are simple in nature. In such instances, the simple interface receives an XML (eXtensible Markup Language) file that is run through an XSLT (Extensible Stylesheet Language Transformations) which transforms the input XML according to the logic defined in the mapping spec. What if the XSLT and other output files could be created using the input and output XSDs (XML Schema Definition)? The code that will be generated as a result is then run through an automated testing framework. We would then have an automated assembly line for SDLC starting from requirements gathering right up to the deployment of tested code to the production environments.
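The generation step can be sketched as follows. Assuming the analyst’s spreadsheet reduces to source-path/target-element pairs (the mapping, element names, and schema below are illustrative, not a real spec), a simple flat mapping can be turned directly into an XSLT stylesheet:

```python
# A toy mapping spec as a business analyst might record it in a
# spreadsheet: source XPath -> target element name. Illustrative only.
MAPPING_SPEC = {
    "Order/CustomerName": "BuyerName",
    "Order/Total":        "InvoiceAmount",
}

XSLT_HEADER = (
    '<?xml version="1.0"?>\n'
    '<xsl:stylesheet version="1.0" '
    'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">\n'
    '  <xsl:template match="/">\n'
    '    <Invoice>\n'
)
XSLT_FOOTER = (
    '    </Invoice>\n'
    '  </xsl:template>\n'
    '</xsl:stylesheet>\n'
)

def generate_xslt(mapping: dict) -> str:
    """Emit a flat element-to-element XSLT from the mapping spec."""
    rows = "".join(
        f'      <{target}><xsl:value-of select="{source}"/></{target}>\n'
        for source, target in mapping.items()
    )
    return XSLT_HEADER + rows + XSLT_FOOTER
```

Real interfaces also need type conversion, defaults, and conditional logic, which is exactly the information the input and output XSDs would supply; the point of the sketch is that for the simple majority of interfaces, the mapping spec already contains everything the code does.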
Such an approach will not always result in zero errors. But over a period of time, using machine learning on the corrections it receives, the system will learn enough to approach near-complete accuracy.