I think the point is that automated tools are a nice way of directing your pen test, sort of like using binoculars to scan your surroundings while hiking to spot interesting terrain you might want to explore.
This is the idea that scares me the most, because while it sounds great, it has an obvious flip side: wherever the scanner doesn't "focus" you is getting less attention.
Could you not do your normal practice, and then have someone else run the automated scanners as a sort of double-check, just to make sure you didn't miss anything obvious? Sure, it would likely never find anything, since it's your job not to miss anything obvious, but if they're basically automated and free, it seems odd to dismiss them outright.
EDIT: Furthermore, how embarrassing would it be if someone hired you and then ran one of these scanners themselves and did find something you missed? Given the magnitude of the potential downside, and the marginal cost of using them as described, it'd almost be more of a business insurance tactic than anything else.
First, we're a pretty big company, and I'm only one of 3 founders and one of 4 practice managers, so you can imagine we've debated this pretty thoroughly.
Last point first:
Try real hard not to miss stuff. Seriously, that's it. Appsec is a competitive field. Forget scanners: many clients are going to hire a different firm for their next test, and some of those firms are really, really good. When you're testing an app, that's what animates your work: trying not to leave nuggets for the next team to find and shame you with. Think we're worried about missing things sqlmap finds? Try worrying about what iSec Partners is going to find.
To your first point: this is hard to articulate well, but let me take a stab at it. It's very hard to double back over terrain a scanner has already covered. Think of security testing like a treasure hunt. Have you ever been on a fun treasure hunt? Can you get a bead on the feeling you had when you started out hunting for stuff? Now imagine trying to summon the same focus and motivation if, before the hunt, the organizers announced, "We sent a bunch of people out ahead of you to make sure there's no treasure."
Would you consider it a perfectly good procedure to run some automated tests against the web app and glance at the results? It's only a problem if you use those results as the focus of your investigation, right?
What it seems like you're saying is that you have found that scanners are almost always a distraction at best, misleading at worst, and that they offer zero useful information about the app or about the MO of the app developer.
You (matasano) are good enough that any information those scanners could reveal about the app or the developer, you'd probably turn up independently. Not everyone has those skills, and the results of a scan might include something that a lesser practitioner wouldn't have noticed any other way.
What I'm saying is that I have found that scanners are almost always a distraction at best, misleading at worst, and that they offer very little useful information about the app that can't be acquired more easily and efficiently by a tester reasoning about the behavior of the app for themselves.
I'm not against automation. I don't think testers need to hand-type every fiddly little query they use to gauge behavior. What I'm against are tools that run some unspecified number of fiddly little queries, do some bogus analysis, and then spit out a "YES XSS" or "NO XSS" answer.
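To make the objection concrete, here's a toy sketch (my own illustration, not the logic of any real scanner) of what a boolean reflected-XSS verdict amounts to: inject a marker, check whether it comes back unencoded, and answer yes or no. Everything here, including the probe string and the stand-in endpoints, is invented for illustration.

```python
import html

# An unlikely-to-occur marker standing in for an XSS probe payload.
PROBE = '<zz9xss>'

def naive_xss_check(render):
    """render: a function taking user input and returning the page HTML.
    Returns True ("YES XSS") if the probe is reflected unencoded."""
    return PROBE in render(PROBE)

# Two toy "endpoints" standing in for a real app:
def vulnerable(q):
    return '<p>Results for ' + q + '</p>'               # reflects raw input

def encoded(q):
    return '<p>Results for ' + html.escape(q) + '</p>'  # encodes output

print(naive_xss_check(vulnerable))  # True  -> "YES XSS"
print(naive_xss_check(encoded))     # False -> "NO XSS"
```

The verdict is a single bit: it says nothing about stored XSS, DOM-based sinks, or output contexts (attributes, JavaScript strings, URLs) where `html.escape` alone is insufficient. That's the gap between a yes/no answer and a tester reasoning about the app's behavior.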
Is this something you've found in practice, or just a theory/fear you've developed? I would think tools like sqlmap are helpful in building that initial recon/inventory of items to explore. You should still be doing the due diligence of manual exploration. I suppose if one were to get lazy, stop doing the manual exploration, and just let the mapper tell them where to look, then the trap you describe is definitely real. But people can get lazy without automated tools and fall into the same trap.
I'm going through the exercise of building out an appsec practice, and for what it's worth, I've adopted the same approach (we're not using automated scanners).
You tell yourself that, but what I think is that you miss a lot of medium-hanging fruit, and you find the same number of "logic bugs". Meanwhile: your firm and our firm empirically bill a similar number of hours (if you're at a competent firm; if you're at a body shop, you probably bill 1.5x to 2x more hours than we do).
Reasonable people can disagree on this point, but we're a pretty large, well-established practice and our belief isn't coming out of nowhere.
Your argument is literally the first thing anyone who wants to convince us to use scanners brings up. It is the point we've thought about and debated most. I just don't think it pans out in the real world:
(1) The bugs you find "because" your scanner took the low-hanging fruit will be bugs any good tester will find;
(2) meanwhile, the extra scrutiny you're not giving the app to find that low-hanging fruit is costing you insight that would reveal still more bugs...
(3) also, your scanner is missing bugs, probably in the neighborhood of 20-30%, and:
(4) you're not making up for that because it's very difficult to force yourself to focus on terrain that a scanner has covered and flagged bugs in.
No, I've decreed that after a decade breaking applications professionally, there isn't an application scanner I've used (and I've used very very many) that is worth anything.
I am not as philosophically opposed to scanners as I think Thomas is; I've just found that they provide nearly no useful value. For many years I argued that although they provided no value, they "didn't hurt", so there was no harm in also running them at the end of a test to make sure there wasn't any low-hanging fruit a tester may have missed.
What started to turn me around was noticing an increasing number of tests performed by my team where the scanner wasn't just failing to provide value but was actively causing problems.
In the best-case scenario, you now have to take the time to validate your scanner's findings (all of which are things you would, and should, have found anyway but are relying on the scanner to do for you).
Under the scenarios I've witnessed play out, people assume that the scanners will actually find the low-hanging fruit, and they slack off on that part of the assessment (because, hey, the scanner will cover it, and now they can spend more time looking for logic bugs). Then the scanner doesn't find something trivial (which happens about one in every...oh, I don't know...actually it happens in nearly every test).
I'm happy you've found that scanners don't make your work product worse, but that's not what I've found at all.