When it comes to creating secure applications, nothing beats focusing on the basics: secure coding in development and then testing the application for security defects. Part of the testing regime should always include an in-depth application pen test. But how do organizations know they are getting the full benefit from such assessments?
What goes (or should go) into developing secure applications is well known. Developers should have their code vetted in their development environment. Their code should go through a series of quality and security tests in the development pipeline. Applications should be vetted again right after deployment. And, after all of that, it’s still very likely that more vulnerabilities exist in the application that have yet to be uncovered.
Finding those stubborn flaws is where periodic application penetration tests come in; this is when an application is poked and prodded to see if its security controls work as intended and if it’s vulnerable to attack. Research firm MarketsandMarkets predicts that the entire penetration testing market will grow to $1.7 billion by 2021, up from $595 million in 2016, and that the web application penetration testing segment had the largest market size in 2016.
The difficulty in finding bugs throughout development, and as applications run in production, is why application security pen tests remain a critical part of any security program. These tests are how latent vulnerabilities such as cross-site scripting, SQL injection, remote code execution, and poor authentication are identified and hopefully sent for remediation. But what does a successful penetration test look like, and how should enterprises measure success, so that they can improve their results and get more value in the future?
In a study of all the application penetration tests it conducted in 2016, Cobalt set out to detail how metrics can be used to measure the effectiveness of application penetration tests within security programs and third-party pen test engagements. After all, the only way to improve is to assess the current situation, establish goals, and measure progress against those goals.
With that in mind, it’s reasonable to ask why an enterprise would conduct an application penetration test to begin with. Caroline Wong, vice president of security strategy at Cobalt, says the most common reason, by far, is when companies need to prove due diligence to customers as part of a business transaction – or to prove that uncovered security deficiencies were remedied. “This is the most common use case we see,” she says. The second, says Wong: security compliance.
Chris Blow, security risk technologist at Liberty Mutual Insurance, agrees. “The majority of application penetration tests I’ve seen professionally were compliance or customer driven,” he says. “There are the rare cases when a company is making a big push to be more secure, and they want to see where their applications stand, but most of the time it’s compliance driven, such as meeting a PCI requirement, or a customer is doing due diligence.”
Cobalt wanted to determine how organizations could learn to get the most out of application penetration tests. “The idea was to help define the key metrics that we think are needed to determine the impact and/or the ROI for today’s modern penetration testing program,” says Wong. “Organizations are spending a lot of money on pen testing. What are they getting for their money?”
Application penetration testing metrics can help to answer that question, contends Wong, who is also the author of Security Metrics: A Beginner’s Guide. “Measurement provides visibility, educates and provides a common way for understanding the security program, and it enables the best possible management of the security program through improved planning and decision making,” she says.
Experienced red team professionals generally agree that it’s difficult to determine the value of application penetration tests without looking at other aspects of the application’s security program. “Suppose an application penetration test is conducted, and it identifies 10 high-, 20 medium-, and 50 low-risk vulnerabilities in January. The organization remedies all of those vulnerabilities in 60 days. But when they conduct another test in March, they find a new, even larger batch of vulnerabilities: 20 high, 30 medium, and 50 low,” Blow says.
“Was the first application penetration test a waste?” he continues. “No, as flaws were fixed. But results like this indicate that more developer training is in order and perhaps more testing in the development pipeline.”
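Blow’s hypothetical can be made concrete with a small calculation. The sketch below (the numbers come from his example; the function name and structure are illustrative, not from Cobalt’s study) compares two tests’ findings by severity to show whether remediation is keeping pace with newly introduced flaws:

```python
# Severity counts from the hypothetical January and March tests above.
january = {"high": 10, "medium": 20, "low": 50}
march = {"high": 20, "medium": 30, "low": 50}

def severity_delta(before: dict, after: dict) -> dict:
    """Net change in findings per severity between two pen tests."""
    return {sev: after.get(sev, 0) - before.get(sev, 0)
            for sev in set(before) | set(after)}

delta = severity_delta(january, march)
# High and medium counts rose even though every January finding was
# fixed -- new flaws are being introduced faster than they are removed,
# which points to developer training and more pipeline testing.
print(delta)
```

A positive delta in the high and medium buckets after full remediation is exactly the signal Blow describes: the tests are working, but the development process upstream needs attention.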
Beyond using metrics to inform a secure application development program, what other metrics are worth tracking? Examples include the frequency of application assessments, how long it takes to fix critical vulnerabilities, and the reduction of certain classes of flaws. “The important thing is to start measuring based on what is important to you and use these measurements to improve over time,” says Wong.
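One of the metrics Wong mentions, time to fix critical vulnerabilities, is simple to track once discovery and fix dates are recorded. A minimal sketch, with invented dates purely for illustration:

```python
from datetime import date

# Hypothetical remediation records: (found, fixed) date pairs for
# critical vulnerabilities. All dates here are invented examples.
records = [
    (date(2016, 1, 10), date(2016, 2, 9)),
    (date(2016, 3, 1), date(2016, 3, 31)),
    (date(2016, 5, 15), date(2016, 7, 14)),
]

def mean_time_to_fix(records) -> float:
    """Average number of days between discovery and remediation."""
    return sum((fixed - found).days for found, fixed in records) / len(records)

print(f"Mean time to fix critical flaws: {mean_time_to_fix(records):.1f} days")
# -> Mean time to fix critical flaws: 40.0 days
```

Tracked release over release, a number like this shows whether remediation is speeding up or slowing down, which is the kind of trend Wong argues organizations should measure against their own goals.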
When it came to the periodicity of application penetration tests, the Cobalt study found that most organizations opt for an annual test. Of those organizations Cobalt examined, about 46 percent conducted an annual pen test, 39 percent semiannual, and 15 percent quarterly. As for the criticality of the vulnerabilities uncovered? It turned out that for the enterprises in the study, 9 percent of vulnerabilities uncovered in the penetration tests were critical, 8 percent were high criticality, and 14 percent were medium. And at 72 percent, the clear majority of flaws uncovered were ranked as low criticality.
While it may seem at first blush that the more application penetration tests the better, that isn’t necessarily so, as different organizations will have different approaches to technology and their deployment of new internally developed applications and features. “A lot of it depends on the nature of your security program and the nature of your application development efforts,” says Blow.
For instance, with the pressure that DevOps and agile development workflows place on teams to ship new code, it is important to have the right testing tools in place so that software can be vetted during the development and deployment process. Teams can also run automated application scans continuously against their apps in production. “In cases like this it may make sense to have an application pen test every six months or right after a major release,” Blow says.
When it comes to choosing which applications to spend the most time assessing and remediating, experts agree that it all comes down to business criticality. “You want to focus on the most important business applications and those that pose the most security and regulatory risks, and you want to fix the most critical flaws in those apps first,” says Wong.
This story, “What makes a good application pen test? Metrics” was originally published by