Auditing Code
Auditing code is a major part of any software project, since for some reason people have a tendency to write code with security problems. Most projects take a reactive position, fixing problems as they come to light (oftentimes after someone finds exploit code floating around). Some projects, like OpenBSD, take an extremely proactive stance. For example, format string attacks have become fashionable in the last few months, and the OpenBSD team has done an extensive audit of their source code, fixing many problems for the upcoming 2.8 release. In any event, auditing code manually takes a large amount of effort and some degree of expertise. You must understand secure programming techniques, and you must understand the software you are auditing.
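To illustrate the class of bug behind those format string attacks, here is a minimal, hypothetical C fragment (not taken from any audited project): passing untrusted input directly as a format string lets an attacker smuggle in conversions such as %x or %n, while the safe version treats the input as plain data.

    #include <stdio.h>

    /* Hypothetical logging helpers illustrating a format string bug. */
    static void log_bad(const char *user_input)
    {
        /* BAD: user_input becomes the format string, so conversions
         * like %x or %n in the input are interpreted by printf. */
        printf(user_input);
    }

    static void log_good(const char *user_input)
    {
        /* GOOD: the format string is a constant; the untrusted input
         * is only ever printed as ordinary character data. */
        printf("%s\n", user_input);
    }

    int main(void)
    {
        const char *input = "%x %x %n";   /* attacker-controlled data */
        log_good(input);                  /* prints the literal text */
        (void)log_bad;                    /* the unsafe variant, shown only for contrast */
        return 0;
    }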
Enter the automated software auditing tools. To be honest, there's really only one that's worth using: ITS4 (It's The Software Stupid) by Cigital (formerly Reliable Software Technologies). Some people will argue that these automated tools are not as comprehensive or as safe as a good manual code audit, and they are generally correct. However, an automated code audit is much better than no code audit, especially with a reasonably advanced tool such as ITS4, which will catch many of the common problems that have resulted in root exploits. The following is an interview with John Viega, author of ITS4.
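As a rough, hypothetical illustration of the kind of call site such a scanner flags, consider the unbounded string copy below; the bounded rewrite is the sort of fix an auditor would apply (the function names here are invented for the example).

    #include <stdio.h>
    #include <string.h>

    /* Flagged by scanners: strcpy() has no idea how big buf is, so a
     * name longer than 31 bytes overflows the stack buffer. */
    static void greet(const char *name)
    {
        char buf[32];
        strcpy(buf, name);
        printf("hello, %s\n", buf);
    }

    /* The usual fix: bound the copy to the destination's size.  A
     * scanner may still report the call, but a human auditor can rule
     * it out as a false positive. */
    static void greet_safely(const char *name)
    {
        char buf[32];
        snprintf(buf, sizeof(buf), "%s", name);
        printf("hello, %s\n", buf);
    }

    int main(void)
    {
        greet_safely("world");
        (void)greet;   /* the unsafe variant, kept only for comparison */
        return 0;
    }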
Why did you create ITS4?
The short answer is because it didn't exist. But that's not very interesting. Here's the long answer: I'd been working on statically analyzing C code for common security problems as part of my research agenda for about a year and a half. We've built some pretty impressive tools that are far more accurate than ITS4; the reduction in the number of false positives is almost 70%. The problem with that technology is there wasn't very much practical about it at the time. The analysis was taking several days, and could only be run on strictly conforming ANSI C code. Meanwhile, the company I work for (Cigital, formerly Reliable Software Technologies) continued to do a lot of software security audits, and I often participated. When it came to auditing C and C++ code, it was unpleasant to have to do that by hand. Unfortunately, there were no tools out there at the time that could meet my needs. The l0pht had long been advertising a code scanner called "SLINT" on their web page, but you couldn't get it from them. I think they were trying to sell it as a core technology to some company that might want to put it in a development environment, and weren't really looking to license it to end users. From what I know about SLINT, it was definitely worth a company paying for until ITS4 came out, because it encoded a heck of a lot of security knowledge that just wasn't gathered in any one place in a format that auditors could easily use.

ITS4 was built to be a stopgap measure until bleeding-edge technology could become more practical. Hopefully, the tool will be completely rewritten with a different technology behind it in the next 6 months. There are a couple of problems that may prevent that in some cases. First, ITS4 wants to analyze the entire program, not just a particular build. However, any good analysis technology is only going to operate on a single build; anything else is way too complex. We did a study in which we found that, in networked, cross-platform C programs, it is typical for over 25% of the program (excluding comments) to get excluded from the actual build. That code gets preprocessed out. If a developer does a security scan for his own environment only, then a large chunk of the code won't be analyzed. If you make him do multiple scans, that can get really inconvenient with some packages I've seen. Second, full C++ support is pretty challenging if you're going to do a full analysis, because C++ is such a difficult language to parse correctly. Currently, ITS4 parses heuristically. A full analysis would require a real compiler front end. I do work on ITS4, but it's primarily a side project; it'll be a while before I have time to undertake something that complex that's outside the scope of the research.
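A small, hypothetical example of the build-exclusion problem he describes: whichever branch the preprocessor drops for a given platform is invisible to a scan of that single build, even if it contains the call an auditor most needs to see.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical cross-platform helper.  Scanning only the Windows
     * build never reveals the unbounded strcpy() in the other branch. */
    static void copy_name(char *dst, size_t dstlen, const char *src)
    {
    #ifdef _WIN32
        /* Seen only when the Windows build is analyzed. */
        strncpy(dst, src, dstlen - 1);
        dst[dstlen - 1] = '\0';
    #else
        /* Seen only when the Unix build is analyzed -- and this is the
         * branch with the risky, unbounded copy. */
        (void)dstlen;
        strcpy(dst, src);
    #endif
    }

    int main(void)
    {
        char out[16];
        copy_name(out, sizeof(out), "example");
        puts(out);
        return 0;
    }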
What is the purpose of ITS4? Do you have any comments on recommended usage?
ITS4 doesn't find bugs, per se. It was never intended to do so. Its original purpose was to be a starting point for a good security audit. The idea is that ITS4 identifies places in code that cannot be ruled out for having a security problem. While ITS4 can definitely rule some things out, it still usually provides tons of false positives. A knowledgeable person or group of people needs to use that as a starting point. The real purpose of ITS4 in a good audit is to save time, by narrowing down the search to a finite number of starting points. As a side effect, it also serves to remind auditors of what they should be looking for. If you're a developer, ITS4 can still be useful. Just assume that everything it flags is a problem, and act accordingly. When using ITS4 to audit, here's my recommended strategy:
Currently ITS4 does C and C++. Are there any plans to support other languages such as Java or Perl?
I know some people who are very interested in porting ITS4 to handle Perl. The big problem with supporting Perl is that it's so difficult to parse… the language was hacked together. There's no grammar for it; there's just the very disgusting source code. You could use heuristics, just like the current ITS4 does. However, the people who are trying are going for the gold there. As for Java, Tom Mutdosch built a very nice Java security scanner for me about 2 years ago. Unfortunately, the scanner searches for some really high-level stuff, so it isn't very useful (see http://www.securingjava.com/chapter-seven/chapter-seven-1.html for the list of stuff). The problem is that, so far as I know, there aren't any constructs at the implementation level where, if you use them, it's a red flag that you had a good chance of inadvertently introducing a security bug. That says something pretty good about Java. If we had anything useful to put into such a scanner, we'd definitely release a Java version of ITS4 in a hurry. As is, I don't want to have to maintain code for a tool that isn't useful.
Have you done any surveys, such as OpenBSD vs. NetBSD, or various Linux distributions?
No. To actually go through all the output of ITS4 for an entire operating system would take a lot more time than I'm willing to spend, never mind more than one. However, now that you mention it, it'd be a neat exercise to at least see how much output each OS gives in proportion to the number of lines of code. I don't have any plans to do that, but I'll bet that OpenBSD will win, hands down.
What do you think of code audits, such as OpenBSD’s? Are they a good return on investment?
It depends on the entity producing the software. I think companies should embrace the notion of risk management in their software projects, and treat security as one of their larger concerns. Address the biggest risks you see, and keep addressing risks for as long as it's economically feasible. Certainly, implementation problems are a pretty big deal in C programs, because if you do something that's obviously stupid to a security expert (which isn't usually obvious to the average developer), and people notice, then you're a sitting duck. However, I've done security audits of many, many software systems in my day, and I've found that language problems making their way into the implementation aren't so much a big deal. Often, products have gaping holes in their design, making them fundamentally flawed from a security standpoint. When your design is fundamentally flawed, you'll probably get a whole lot more bang for the buck on fixing that. When you've got a design without problems that's at least fairly bulletproof, then it's probably worth a code audit. A lot of people skip that step because it's so difficult and time consuming. Hopefully, as some sort of minimum bar, people writing C and C++ code will at least run ITS4 and fix the stuff it finds. You don't have to take the time to figure out whether something is a problem or not.
There are other software scanners such as Pscan (http://www.striker.ottawa.on.ca/~aland/pscan/) and BFBTester (http://my.ispchannel.com/~mheffner/bfbtester/, not ported to Linux yet). What do you think of them? Do you have any plans to join forces or otherwise collaborate with them? There are also software testing tools, such as Fuzz (http://fuzz.sourceforge.net). What do you think of them?
I'd missed the announcement of pscan. I've looked at it now, though. It basically seems like a primitive version of ITS4; I didn't see anything it does that ITS4 doesn't also do well. ITS4 is at least ten times the size of pscan, and pscan's database only has a dozen or so functions, whereas ITS4's has well over 100. The ITS4 database also comes with a brief description of the problem and suggested solutions, although they aren't always as detailed as I'd like (pscan doesn't give any output other than that something is flagged). Another thing is that ITS4 does some heuristic analyses to weed out a lot of false positives. Pscan isn't doing that. Collaborating is a good idea, though. I'll definitely approach the guy who did it. His tool is definitely reasonable.

BFBTester is cool; I use it. It's definitely capable of some of the same things. I don't really consider it to be in the same class of tools, though. It's a dynamic testing tool, not a static analysis tool. As a dynamic tool, BFBTester is good for finding some of the more obvious problems without having the code, and without having to really look at the code. The problem is that it's a testing tool like any other… it can only be so thorough. All testing tools have this issue. Consider a typical code coverage tool, which tries to figure out how much of your code has run, given a set of test cases and some metric. If it were even close to feasible to get 100% path coverage in an average program (generally it is not), you could still miss every single buffer overflow by failing to test with any really long inputs. If you pull out Fuzz (which basically sends well-crafted garbage input into programs), then it'll catch any buffer overflows the test cases happen to tickle, but miss all the race conditions. If you use BFBTester, it'll let you know about tmpfiles in use during any test runs. That might not be all tmpfile usage, and it may not indicate an exploitable race condition, but it's better than nothing. None of these tools are perfect. Neither is ITS4. They all will miss tons of things. They all have environments in which they don't work well or at all. In reality, coverage does an okay job, and things like Fuzz and BFBTester end up doing pretty well. I use them as a supplement to static detection.
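To make the coverage point concrete, here is a hypothetical function where two short test inputs reach every line and both branches, yet the overflow only fires on an input longer than the buffer, exactly the case that coverage alone never forces you to try.

    #include <stdio.h>
    #include <string.h>

    /* Overflows buf only when src holds 16 or more bytes. */
    static void store_token(const char *src, int verbose)
    {
        char buf[16];

        if (verbose)
            printf("storing: %s\n", src);

        strcpy(buf, src);
    }

    int main(void)
    {
        /* These two calls yield complete line and branch coverage... */
        store_token("short", 1);
        store_token("tiny", 0);

        /* ...yet never exercise the overflow that a long, fuzz-style
         * input would trigger:
         * store_token("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", 0); */
        return 0;
    }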
L0pht makes a tool called SLINT. What do you think about it? I have tried on several occasions in the past to get ahold of it, but L0pht has not answered any email regarding it, and I know several people that had the same experience.
I can confirm that it exists. I too know a ton of people who wanted to get ahold of it and never could. While I can't say for sure, I think it's because the l0pht was hoping to sell it as a big piece of IP (intellectual property). Giving it out (even selling it) would make that IP less valuable. I definitely thought about not releasing ITS4… even though, as you said, no one had ever been able to get SLINT, and it looked like it'd probably be a long time before someone could. I really had no desire to accidentally piss off the l0pht; they'd always been very nice to me. But the code base took me about a weekend. The database behind it was dredged up from a bunch of public sources, largely by Tom O'Connor and myself, and was the result of maybe two days' work. I felt like we weren't doing anything that someone else couldn't do, although we were probably in a position to do it better than most. I figure whatever IP was unique to SLINT is still in there; it would be things like the Windows vulnerabilities and other problems that the l0pht knows about, but no one else does.
If you had three wishes regarding software security, what would they be? (Anything goes.)
My first wish would be that no software would ever contain a security flaw. Then my second wish would be for a job to replace my current one, since wish one would make me pretty useless in my current role. The third wish would be for sale to the highest bidder.

Knowing that's unrealistic, I'll also answer the question from the standpoint of what I think is possible. First, I'd like to see languages/environments where programmers can get all the benefits of C without the security risks. That's the direction my research has been taking lately. Second, I'd like to see material to educate people on how to design and build more secure software. Currently, the material out there is pretty lacking. Thankfully, I know of a few projects that hope to fix this problem. One is a book that I'm writing (along with Gary McGraw) for Addison-Wesley which should be out in mid-2001, entitled Building Secure Software. Then I believe Ross Anderson and Avi Rubin are both working on books on secure design that promise to be quite good. Third, I'd like to see developers be more open to securing their systems. I see plenty of developers who are afraid of losing face if they were to admit there might be security problems in their software. People should realize that security bugs are just bugs; no one would ever expect you to write bug-free software the first time.

Another aspect of this problem is that developers are often unwilling to treat real risks with the severity required. They live in the ivory tower of an ideal world. For example, there's the notion that all you have to do is add cryptography. That one's been debunked pretty well recently, especially in Schneier's new book. Then there's the notion, "well, we've got a corporate firewall, so who cares about protecting the stuff that's not visible from the outside?" People seem to think that firewalls don't have bugs, and that nothing visible through the firewall could possibly give an attacker an entry. And everyone seems to be willing to ignore the threat of insider attacks, even though they're probably the most widespread. If everyone's staff were as trustworthy as their managers think, who would be responsible for all the attacks we see in the statistics?
About John Viega
John Viega is a Senior Research Associate and Consultant at Cigital (formerly Reliable Software Technologies), where he is the principal investigator on a DARPA-funded grant in the area of software security. He's currently writing two books, Building Secure Software (with Gary McGraw) to be published by Addison-Wesley, and Java Enterprise Architecture (with George Reese) to be published by O'Reilly. He also co-authors a column on software security for IBM's DeveloperWorks magazine, along with Gary McGraw.
John is the author of ITS4, Mailman (the GNU mailing list manager) and several other widely used pieces of software. He is also a member of the Shmoo Group.
Related Links
ITS4
http://www.cigital.com/its4/