The Object Management Group will approve as standards new measures, created by the Consortium for IT Software Quality, for evaluating the quality characteristics of software.

The CISQ Quality Characteristic Measures cover the areas of reliability, security, performance efficiency and maintainability, and also automate function point measurement, according to Bill Curtis, executive director of CISQ.

The measures cover 86 software engineering rules, which the consortium said are designed to automatically identify flaws and vulnerabilities in software, and to produce a score based on the ratio of violations to software size.

(Related: CISQ’s mission to ensure industry-wide quality standards)

The work was necessary, Curtis said, because systems integrators approached OMG with metrics from their service-level agreements, and OMG saw that each customer had a different definition of such things as security and reliability. Curtis was asked to head the effort, joined by 24 international companies, to define how to measure those characteristics in code.

“We started with the ISO definition of those characteristics,” he said. “They deal with external behaviors, like uptime and such, but don’t get down to the code. We focused on violations of code architecture and coding practices that cause these problems.”

Models that assess process but don’t look at code quality can’t truly be used to evaluate whether the process is working, Curtis said. “I hear from CIOs that they’re at CMM Level 5, but the software crashed when they loaded it. Even with an excellent process, there can still be problems in the code. Companies are paying for code—for a product—not for a process model.”

Companies can measure themselves against the CISQ specifications, and the specifications themselves can serve as the equivalent of a service-level agreement in many cases, Curtis said. He added that organizations have begun to use quality gates that incorporate code analysis and testing to ensure the code is not harmful and will meet the agreed-upon service levels before it goes into operation.

“The use of measures is to evaluate risk,” Curtis said. “There are things in code that create vulnerabilities, the code itself can degrade… These measures give organizations better insight into what’s going on with the code. It provides governance over risk.”
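The scoring approach described above — a score derived from the ratio of rule violations to software size — can be sketched as follows. This is a minimal illustration, not the CISQ formula: the function name, the 0–100 scale, and the calibration cap are all assumptions for the example.

```python
# Hypothetical sketch of scoring quality from a violation-to-size ratio.
# The scale and the density cap are illustrative assumptions, not CISQ numbers.

def quality_score(violations: int, size_in_function_points: float) -> float:
    """Return a 0-100 score: fewer violations per unit of size scores higher."""
    if size_in_function_points <= 0:
        raise ValueError("size must be positive")
    density = violations / size_in_function_points
    # Cap density at 1.0 violation per function point, then invert onto 0-100
    # so that a violation-free system scores 100 and a saturated one scores 0.
    return max(0.0, 100.0 * (1.0 - min(density, 1.0)))

# Example: 100 violations in a 400-function-point system
print(quality_score(100, 400))  # → 75.0
```

A score like this gives a size-normalized number that can be compared across releases or vendors, which is what makes it usable as a service-level-agreement threshold or a quality-gate criterion.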