Defcon 16 Talk Review: The Pentest is Dead, Long Live the Pentest

This insightful presentation at the Defcon 16 conference in Las Vegas covered the history of the pentest, what worked and what didn’t, and the direction in which, in the speakers’ eyes, the pentest should be moving today.

The speakers, Taylor Banks and Carric, gave a warning at the start of the presentation that no punches would be pulled and that things which they felt were wrong with the industry and the people in it would be freely discussed.

They laid out what pentesting used to be like and where it came from, what it became and the problems it still faces, then looked at what really adds value to a pentest and how we should be developing it as a service. Much of what they said rang true with MWR’s experience and current goals, and it was certainly interesting to see these ideas laid out. In this article I hope to capture much of what they were expressing in that talk.

The talk harked back to the early days of the pentest and its links with hacker culture. This is an aspect often missed in today’s professional world, but acknowledging it allows for a great perspective on why pentesting evolved in the way it did, and contrasting it against the directions it took later helps us keep the good, throw out the bad, and improve the neglected. Just as hacker culture evolved, so did the pentest, as did business use of technology.

The talk then described the dark days of early pentesting, when it was considered a “black art” which, much like security itself, was little understood by those slowly beginning to adopt technology.

The only places early pentesters could learn from were small, underground communities. If these sound like the same places hackers learnt their stuff, that’s because they probably were. There was no large security industry attracting practitioners; pentesters did what they did for the sheer love of it. Indeed, to begin with, the term penetration tester didn’t even exist, and people would sometimes simply seek to “hire a hacker”.

Early pentesting, then, was a much more obscure process in which someone was essentially let loose on the network to play hacker and see what they could turn up. People paying for pentests understood little about the arcane art and had to place their trust in whatever was delivered at the end. This certainly gave the customer a perspective on the safety of their networks and systems that they could not have gotten on their own, but the lack of any kind of accepted methodology brought with it many shortcomings.

But strong competition tended to incline people away from accepted methodologies. To succeed, a pentester or pentesting company needed to deliver the best report with the most findings, so revealing the details of any methodology that gave you an edge would only give competitors the chance to produce the same results. In the still immature security industry there was little at the time to differentiate a good pentest from a bad one other than how many findings were in the report.

The other major problem with this ad-hoc approach is that pentests were rarely repeatable. The presenters made the point that “if it ain’t repeatable, it ain’t a pentest… it’s just a hack”. But what is the difference between a hack and a pentest? An attacker only needs to be successful once to get in, but a pentest ought to cover as many weaknesses as possible. A pentest is different from a hack in that it is not just about “getting in”; it is about evaluating how easily other people might get in. That means the people fixing the systems need to know exactly how security was bypassed, and for effective retests to be possible a pentester, perhaps a different tester from the one who originally performed the test, needs to be able to reproduce the “hack” exactly.
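
In practice, that repeatability comes down to how findings are recorded. The sketch below is a minimal illustration in Python of the kind of structured finding record that would let a second tester reproduce the original result; the fields, names and example data are assumptions made up for the illustration, not any standard schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Finding:
        """One reproducible result from a penetration test (illustrative fields only)."""
        title: str                  # short name, e.g. "SQL injection in login form"
        affected_asset: str         # host, URL or component that was tested
        severity: str               # e.g. "low" / "medium" / "high" / "critical"
        impact: str                 # what an attacker gains in this client's context
        reproduction_steps: List[str] = field(default_factory=list)  # exact steps, in order
        evidence: List[str] = field(default_factory=list)            # logs, screenshots, etc.

        def is_repeatable(self) -> bool:
            # Without recorded steps and evidence this is "just a hack": a second
            # tester could not reliably reproduce or retest it.
            return bool(self.reproduction_steps) and bool(self.evidence)

    # Example: the record a different tester would work from during a retest.
    finding = Finding(
        title="Weak default credentials on management interface",
        affected_asset="https://admin.example.internal/",
        severity="high",
        impact="Full administrative control of the customer portal",
        reproduction_steps=[
            "Browse to the management login page",
            "Authenticate with the vendor's default credentials",
            "Confirm access to the user administration panel",
        ],
        evidence=["login-request.txt", "admin-panel.png"],
    )
    assert finding.is_repeatable()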

As organisations grow, the ability for one person to do everything disappears rapidly. The ability to effectively share information among testers, and to communicate relevant information to the customer, is therefore vital to a good penetration test. The experience of seasoned pentesters should also not go to waste. No longer are we restricted to small underground communities trading bits of information. Progress in this area has been made with schemes such as CHECK and OWASP.

As the millennium approached, penetration testing started to go more mainstream. The computing world grew massively, technology was being used in ever more sensitive areas, and computers became accessible to many more people. As such, the hacker community grew, and with it the pentesting community. Many tools had been developed to allow scanning and automated assessment, and this again made pentesting a more accessible profession.

Unfortunately, these tools do not solve the security problem. The bar is not raised very high if all that is performed is an automated scan with the results blindly handed on to a customer. It is easy to fall into the trap of thinking that a methodology is a simple checklist which can in time be completely automated with the tools that become available. But these do not give nearly as useful a picture as a test with expert, creative human input.

This issue is found in the various training courses that can now be taken in “hacking” or pentesting. They tend to focus heavily on using the tools, specific nuances of technology and tricks of the trade. But a good pentest is also about understanding the process of an attack: how can a random collection of vulnerabilities and tools be combined to create a greater threat, and how do you go about finding all these weaknesses in a system in a reliable way?

What’s more, we do not only have to deal with software vulnerabilities or open ports. Architectural or systemic problems in the way a system is designed can be a huge threat as well. Such things cannot easily be rooted out with automated tools, but by following an intelligent process, applied with experience and creativity, the flaws in a system’s underlying concepts can be exposed.

So while methodologies capture past experience and allow known issues to be methodically tested for, which is certainly better than a purely ad-hoc approach based on the experience and disposition of whatever pentester you happened to get, it is equally important that these methodologies are flexible. Room must always be left for the human mind to do what it does best and creatively explore ways in which a given security system might be bypassed.

After all, this is precisely what malicious hackers do; they bypass the very restrictions that were, in theory, supposed to keep them out. So confirming that the restrictions you thought were in place are indeed in place is only a low level of assurance. In reality, a full-blown penetration test needs to go deeper than that and look at the things you didn’t think of, not just test the things you already put in place.

So a large part of the value of a pentest is in the people performing the test. But knowing this does not by itself help evaluate the difference between good testers and bad testers, and it may seem that we are still stuck with the size of the report being the only differentiator.

What can we do to fix this and take pentesting to the next level?

One thing we can do is put some of that human creativity into reports. Reports should not just be a dry list of vulnerabilities, offering a stark view of your network with no guidance on what to do about it or context about what it all means in your environment. It is far better to include personalised descriptions of the security weaknesses found, detailing their potential impact on the client based on the pentester’s knowledge of what the client does. This means good communication between the client and the pentester is vital, and the service should not stop with the report. Ongoing support and advice is also an important way to add value to a penetration testing service.

Metrics are also useful in evaluating both the value of a pentest and the relative risk to which you may be exposed at a given time. A simple vulnerability total is not necessarily enough here: some vulnerabilities carry much higher risk than others, some are more practical to exploit, and some relate to core failures in the security architecture.

Metrics are certainly not the whole picture, but they do allow tests to be compared and patterns to be seen over time. Is the security posture improving, staying the same, or actually getting worse? And in what areas are these trends found? Perhaps application-level issues are being dealt with better than the deployment of new infrastructure, and if so, perhaps a certain team in the client organisation needs some further training. Such things provide constructive business information that can really be used to help improve security as a process over time.
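
As a rough illustration of the kind of metric that can support this, the sketch below weights findings by severity and compares the per-area scores of successive tests to show where risk is rising or falling. It is only a sketch in Python; the severity weights, area names and example data are assumptions chosen for the illustration, not figures from the talk.

    from collections import defaultdict
    from typing import Dict, List, Tuple

    # Illustrative severity weights -- not an industry-standard scale.
    SEVERITY_WEIGHTS = {"low": 1, "medium": 3, "high": 7, "critical": 15}

    def risk_score(findings: List[Tuple[str, str]]) -> Dict[str, int]:
        """Sum weighted severities per area (e.g. 'application', 'infrastructure')."""
        scores: Dict[str, int] = defaultdict(int)
        for area, severity in findings:
            scores[area] += SEVERITY_WEIGHTS[severity]
        return dict(scores)

    def trend(previous: Dict[str, int], current: Dict[str, int]) -> Dict[str, int]:
        """Positive values mean weighted risk in that area has risen since the last test."""
        areas = set(previous) | set(current)
        return {area: current.get(area, 0) - previous.get(area, 0) for area in areas}

    # Two hypothetical assessments of the same estate, a year apart.
    test_2007 = [("application", "high"), ("application", "medium"), ("infrastructure", "critical")]
    test_2008 = [("application", "low"), ("infrastructure", "critical"), ("infrastructure", "high")]

    print(trend(risk_score(test_2007), risk_score(test_2008)))
    # e.g. {'application': -9, 'infrastructure': 7}: application issues improving,
    # newly deployed infrastructure getting worse.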

So what are the ingredients for a good modern pentest?

  • Repeatability
  • Process driven not tool driven
  • Creative, expert input
  • Strategically focussed, partnering closely with the client in an ongoing relationship