OVAL Working Group on Unauthenticated Tests

Teleconference - 10 February 2005

Attendees

Raffael Marty, ArcSight
Jay Beale, Bastille Linux
Richard Reiner, FSC Internet Corp
David Proulx, MITRE
Matthew Wojcik, MITRE -- OVAL Moderator
Michael Murray, nCircle -- Working Group Chair
Anton Chuvakin, netForensics
Sudhakar Govindavajhala, Princeton
Gerhard Eschelbeck, Qualys

OVAL Working Group on Unauthenticated Tests - Discussion

Murray: Welcome everyone, and thanks for participating. To introduce the working group: OVAL has become a standard for authenticated checks on its supported platforms, but there has been no OVAL standard for unauthenticated tests. A lot of the product space is in unauthenticated, network-based testing, and the idea for this working group arose in conversations about how to add that kind of testing to OVAL.

Govindavajhala: Could you explain what we mean by unauthenticated vs. authenticated?

Murray: Authenticated tests are any that need credentials to access the machine in order to work. Take the OVAL Windows tests for files or registry data--you have to log in to the box to get the data. When we talk about unauthenticated, we're talking about connecting to the service remotely and probing it in some way to get the information you're looking for about vulnerable or not vulnerable.

Govindavajhala: Something like nmap?

Murray: Yes, or the unauthenticated side of Nessus. When you don't enter credentials into Nessus, it's doing things like reading system banners to tell you about vulnerable or not vulnerable. So we started talking about putting together a schema and writing definitions for OVAL using unauthenticated remote tests. The idea for this group came out of that work, because there are a number of questions that are hard to answer.
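[For illustration, a minimal Python sketch of the kind of banner check described here. The service, banner pattern, and address are hypothetical placeholders, not a real OVAL test.]

    import re
    import socket

    # Hypothetical signature: flag an SSH service based solely on the
    # version banner it volunteers (illustrative pattern, not a real check).
    VULNERABLE_BANNER = re.compile(rb"^SSH-2\.0-ExampleSSH_1\.[0-2]\b")

    def banner_check(host, port=22, timeout=5.0):
        """Connect, read the service's greeting, and match it against a pattern."""
        with socket.create_connection((host, port), timeout=timeout) as sock:
            banner = sock.recv(256)  # SSH, SMTP, FTP, etc. send a greeting first
        return bool(VULNERABLE_BANNER.match(banner))

    print("vulnerable" if banner_check("192.0.2.10") else "not vulnerable")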

In the authenticated space, there's a fairly small set of target spaces you can work in: you're looking at file versions, registry information, or some other small set of possible conditions that can indicate vulnerability. On the unauthenticated side, by contrast, the way different vendors or products test for an issue probably differs significantly, because the set of possible criteria for making that test is quite large.

So how to define a standard way of testing is a very difficult question. There are also a lot of other questions about what this kind of schema should look like, and what criteria to include. This call is to set up the infrastructure to start answering those questions. Everybody involved probably has some intuition about the questions that will come up when we talk about connecting to a remote host and standardizing what a vulnerability check looks like using unauthenticated tests.

The OVAL Developer list will host the discussion for this group; I'll be sending email to that list to open the discussion. I think we need this working group because there are so many questions that will take collaboration to answer.

Reiner: One question: there are other efforts out there which are either partially in this space, or are in early stages and look like they're going to try to address this. I have in mind AVDL, which seems to be partially in this space, and EVDL which seems to have grown out of the WAS group and looks like it wants to address some of this.

Chuvakin: I was on the AVDL mailing list for a while, and they seem to be just in the web application space. There was some talk about a new AVDL version that would move to general vulnerabilities, but I don't think that has happened.

Govindavajhala: What are some other similar projects people know about?

Wojcik: There are a number of efforts out there, and it's clear we need to be aware of what's being done.

Eschelbeck: I think that's the goal of this project: there are many languages out there; every vendor probably has their own language to express these signatures. The effort here really is to try to figure out if there's a way to standardize it.

I'm convinced of the need; I have some difficulty in thinking about how to express the unauthenticated tests in a standardized fashion, because the complexity is enormous in comparison to trusted scanning, when you look at all the protocols and ways to probe different things. Just looking at our scanner, there are so many different ways you can scan for things. Trying to standardize them is the challenge here. I'm not saying it's impossible, or that it's the wrong thing to do; I definitely think it's worth doing, but it will be difficult.

Wojcik: To get back to [Reiner's] specific question, I've been in on some WAS teleconferences, and we at MITRE have spent a lot of time trying to figure out whether OVAL overlaps with AVDL, and with WAS. I think in the past the answer has been no, but as OVAL moves into unauthenticated tests, there's more potential for overlap.

I know the folks at Citadel, including [OVAL Board Member] Kent Landfield, are very involved in both AVDL and WAS, as well as OVAL. I'll be in touch with them continually, to try to make sure we are keeping track of those. By all means, we want to make sure we're not replicating effort; we want to learn from other efforts or contribute to them.

[Other initiatives mentioned as worth looking into included IODEF and IDMEF.]

Another question that obviously comes up is, why not use NASL [the Nessus scripting language] as the standard? It's been out there for a long time, and it's open source. A couple of things come up immediately: with Tenable moving toward commercializing their tests, it seems they won't be a repository of open, standardized tests. There have been questions about the variable quality of Nessus tests. Also, while I haven't spent a lot of time with NASL or Nessus, NASL is clearly a very different approach structurally from OVAL--for one thing, it's procedural where OVAL is declarative.

Reiner: The first question about NASL or anything else is, is the semantics right? Is it sufficiently general?

Wojcik: Absolutely. It's something that I know I'm going to have to read up on, because there are definitely lessons to be learned.

Beale: Another factor with NASL is that with the introduction of the knowledge base, the language became something other than straight procedural; you could ask, "do we have this information about the system?" One thing that does make me uncomfortable about it, as much as I like Perl, is that there are so many ways you can test for a vulnerability, even when you're trying to gather the exact same piece of information. It seems to me like a very general procedural language.

Reiner: There's a real question about whether the right approach is declarative or procedural.

Murray: And that's a question we definitely have to answer. We may find that a procedural language actually lends itself better. As Gerhard said, the complexity is significantly higher, and something procedural may represent that complexity better.

Chuvakin: I have a question about the difference between authenticated and unauthenticated tests. A lot of unauthenticated tests are like, "If you send these 10 bits in this order against this port, and you get a shell as a result, you have a vulnerability." Essentially an exploit. Does MITRE really want to get into the exploit-writing business? In many cases it's unavoidable.

Wojcik: It's a good question. As background, the idea of this process is to let the OVAL community drive the language in the direction that it wants, to encourage someone who has energy and knowledge to extend OVAL for a new operating system, or a whole new area such as this.

This working group started because there is a perceived need to standardize unauthenticated testing, and interest in working on the problem (as shown by the participation in this call). The working group is meant to be largely autonomous, make its investigation, and produce a set of proposals of additions or changes to OVAL.

It's clear that the political question of whether MITRE can add exploit-like information to OVAL needs to be answered early, so this group doesn't invest work that won't be added to OVAL. But in general, I envision the group going along and doing its work, and producing proposals which will be rolled into the language. Does that make sense?

Chuvakin: It does to me, but the answer to the question needs to be there. Of course, you can always rephrase the exploit as a check where you send some binary data, and you get some string in return. The string might be "Windows Command Shell."
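[For illustration, a minimal Python sketch of an exploit recast as a send-and-match check, as Chuvakin describes. The payload and signature bytes are placeholders, not a real attack.]

    import socket

    PAYLOAD = b"\x90" * 16             # stand-in probe bytes, not a real exploit
    SIGNATURE = b"Microsoft Windows"   # e.g., the greeting of a spawned command shell

    def probe(host, port, timeout=5.0):
        """Send the probe payload and check the response for the signature string."""
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(PAYLOAD)
            response = sock.recv(1024)
        return SIGNATURE in response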

Murray: In terms of creating the language, it could be possible to mark individual tests as invasive or not. We don't want to limit the language--we want to allow people to write these tests--but we don't necessarily want the default set of tests to be going out and breaking into all kinds of stuff. Like Nessus's safe-check mode, where you turn off that kind of test.

Chuvakin: I don't think the safe checks mean they don't use exploits, they're just not known to crash stuff. But we can verify that later.

Beale: Nessus's safe checks are specifically there to avoid crashing the service; some of them may still use exploit code.

Murray: Maybe we want to formalize the different definitions, the different levels of invasiveness or possible crashing. Maybe there's a continuum.

Wojcik: Yeah, there isn't just one definition of "safe."

Reiner: There's more than one dimension. There's the dimension of the potential effects on the machine under test, and there's the dimension of whether it uses exploit-like code--something where, if the payload were modified, it would be an exploit.

Wojcik: This goes back to the complexity; there are more options for how to test for things. Another thought I've had is that these will be a whole separate set of checks for a vulnerability, apart from the authenticated tests we already have. And just on the unauthenticated side, there will be multiple ways to test: a non-invasive banner test, an exploit attempt. We'll have to decide when to lump these together and when to separate them out.

Beale: We're going to want to keep our definitions as practical as possible. This group will have a lot more experience in writing and using these tests than a lot of the end users.

We'll need to define categories of risk: this test is non-invasive; this test has a good chance of somehow harming a service on a production server; this test will crash a machine.

I don't know exactly how we'll define it, but we'll want to push the user to go for the highest level of accuracy possible while preserving whatever level of safety they want to preserve. I think that may at times make us a little less abstract in the way we choose these categories.
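[For illustration, a Python sketch of how such categories might be formalized, combining Beale's impact levels with Reiner's earlier point that exploit-likeness is a separate dimension. The names and granularity are hypothetical, not an agreed taxonomy.]

    from dataclasses import dataclass
    from enum import IntEnum

    class Effect(IntEnum):
        """Hypothetical ordered scale of potential impact on the target."""
        NONE = 0          # non-invasive (e.g., passive banner read)
        SERVICE_RISK = 1  # good chance of harming a service on a production box
        CRASH_RISK = 2    # may crash the machine

    @dataclass(frozen=True)
    class TestRisk:
        """Two dimensions: impact, and whether the probe is exploit-like."""
        effect: Effect
        exploit_like: bool

    def allowed(risk, max_effect, permit_exploit_like):
        """A scanner would run only tests within the user's stated safety policy."""
        return risk.effect <= max_effect and (permit_exploit_like or not risk.exploit_like)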

Wojcik: Another thing about whether to include exploit-like code is something I say a lot: an exploit is a very good positive indicator of vulnerability, but not necessarily a good negative indicator. There are many reasons an exploit might fail against a service on a particular machine on a given day even though the underlying vulnerability is really present.

Eschelbeck: I think you have some of the same issues in trusted scanning methods too. Rather than exploits, I prefer to talk about active testing, where you negotiate specific protocols and make queries against the system. If we don't limit ourselves to exploits, I think we can be just as accurate as you can with trusted scanning. So I'm less worried about that part.

Reiner: I do think there's a distinction to be made between active tests which probe some aspects of a system's response profile that are not the same as the vulnerability mechanism, versus going part of the way towards triggering the vulnerability.

The distinction between passive and active tests is important, but so is the one within the space of active testing between looking for something correlated with the issue versus those that go at the issue directly.

Murray: Right, and I think you're talking about a continuum that goes all the way from something like "the service is available" down to actually exploiting the problem. You can walk that continuum and land at a lot of different places along that line.

Reiner: There's also a continuum of outcomes. The majority of the tools that are out there, when they apply a check to a target, they return yes or no, and I think there's more meaningful reporting to be done.

Murray: I totally agree.

Wojcik: That's a very good point when we talk about how this fits into OVAL. The current version--including Version 4, which is almost finished; this working group will not be making changes to Version 4--can return zero or one for an issue and nothing else. Of course, OVAL reports 0 or 1 for each individual test in a definition, which gets you something towards a spectrum of an answer, but we might need to make changes to the OVAL language to support richer reporting.
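[For illustration, a Python sketch of what a richer result set might look like compared to today's bare 0/1. These states are hypothetical, not part of OVAL.]

    from enum import Enum

    class CheckResult(Enum):
        """Hypothetical result spectrum for an unauthenticated check."""
        VULNERABLE = "vulnerable"            # positive evidence, e.g., exploit succeeded
        LIKELY_VULNERABLE = "likely"         # correlated evidence, e.g., banner match
        NOT_VULNERABLE = "not_vulnerable"    # positive evidence of absence
        SERVICE_UNREACHABLE = "unreachable"  # a "zero" meaning nothing was actually tested
        INCONCLUSIVE = "inconclusive"        # the test ran but the evidence was ambiguous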

Reiner: Especially for what's normally reported as a zero by most tools, where there is a wide range of ways you can get a negative result.

Murray: Which reminds me of the mention of the knowledge base in Nessus. OVAL tests currently can't pass information between themselves, which would be a useful capability to have. Because of the complexity, there are a whole lot of new capabilities we may want to explore.

Wojcik: I do want to say that the ability to pass data between tests is definitely planned for a future version of OVAL. We don't have a time frame for that yet, but it is planned.

Govindavajhala: What kind of information are you talking about passing from one test to another?

Wojcik: It could be something like what port the webserver is running on, or the value of a parameter. Currently in OVAL, you can only link together the truth values of the individual tests; you can't retrieve a parameter with one test and use it as input to another test.
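[For illustration, a Python sketch of the chaining being described: one test discovers the webserver's port, and a second test consumes that value rather than just a truth value. The hosts and ports are placeholders.]

    import socket

    def find_web_port(host, candidates=(80, 8080, 8000)):
        """Test 1: discover which port the webserver answers on."""
        for port in candidates:
            try:
                with socket.create_connection((host, port), timeout=2.0):
                    return port
            except OSError:
                continue
        return None

    def check_server_header(host, port):
        """Test 2: reuse the discovered port as an input parameter."""
        request = b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n"
        with socket.create_connection((host, port), timeout=5.0) as sock:
            sock.sendall(request)
            return sock.recv(1024)

    port = find_web_port("192.0.2.10")
    if port is not None:
        print(check_server_header("192.0.2.10", port))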

Chuvakin: So what other challenges are there for remote tests? I still think we'll have to go deeper than just banners. I think what's available from the outside is pretty much just banners, or prodding the system in some unusual way--and that's pretty close to an exploit.

Murray: I think there's a whole continuum between a banner check and an exploit condition. There's a large gray area in between, with a lot to discuss about accurately obtaining information on the system.

I think this call has reinforced my belief that there are a lot of big questions that we'll need to answer, which is why I thought we needed a working group. We have a lot of people who have a lot of experience with these questions; if we can leverage that experience, we can turn this into something that the community can really use.

Wojcik: Do you have thoughts on how we should proceed?

Murray: The first thing I'll do is try to summarize some of these questions and send them out to the OVAL Developer list to get the discussion started. Whether we want to set up more regular calls, or how we want to take that first step, I think will come out of that first discussion. I don't think anybody's going to answer these in the first two weeks, so I'd like to see how that evolves, and structure the work based on that.

I think the first step is to figure out what the issues are. Then we can address them once they're out on the table. I've written a few already, but I'm sure more will come up as we discuss the first five.

Reiner: I'd suggest adding one more thing to the list: what are the use cases for the end product? How do we expect this to be used?

Chuvakin: That goes back to the motivation for this working group--there was some talk about it, but does the world need one more freeware scanner to do this? Isn't that what would be produced? What's the other motivation?

Murray: You can ask the same thing about the entire OVAL project. From my view, OVAL provides a standard: we can all talk about the same thing in the same way. I'll let Matt speak to the motivation side, but having a standard so we can all discuss things in the same terms, and being able to see how OVAL does something, is a very valuable thing for this industry.

I know that if I were to sit down with any of the other people in the vulnerability management space and try to talk about how we're doing things--given the intricacies even between how nCircle does things and how Nessus does things, and whatever I'm sure the other vendors are doing--those discussions would be very difficult to have. So having a common language to talk in is very useful. It reminds me of what CVE did just for referring to vulnerabilities.

Wojcik: I agree with what Mike said, but also, the goal is not really to produce another open-source scanner. If we produce a reference implementation, or a SourceForge project that implements the unauthenticated OVAL material, that's great. But more importantly, it's a standard format for other people to code to. Hopefully we'll formalize the answers to some of the questions we've come up with just on this call.

Clearly, even with the experience we have on this call, there are open questions: What's the right way to categorize these things? What's the right way to deal with different unauthenticated approaches to probing for the same vulnerability? Shouldn't there be a better spectrum of reporting than just "yes we found it," "no we didn't," or "we're not sure"?

So that's what I see. It's not so much "is there going to be another open-source tool that comes out of this." It's about allowing the kind of communication Mike was talking about. Are we going to discover, by talking about the problem, that there are questions none of us have been able to answer yet, and get closer to answering them? That will in turn be better for everyone's customers, and better for the community at large.

Govindavajhala: I think the most valuable part of OVAL is that you have a formal description of a vulnerability that you can reason about. If you look at Bugtraq, you have textual descriptions of problems, and you can't do any high-level analysis. We need to convince the community to describe their vulnerabilities formally using OVAL.

Reiner: It's obviously very early, but if we think about what the final work product might be, there might be more value in a set of transforms from OVAL unauthenticated tests to NASL, Product A's language, and Product B's language than in another interpreter.

[General agreement.]

Wojcik: Absolutely, and that is one route to OVAL compatibility with the authenticated material that's out there right now. The Reference Interpreter is not, by any means, the product of OVAL. The real products are the language and the standardized body of tests.

Chuvakin: I think I'm convinced about the motivation. I never questioned the value of the authenticated host-based checks, but I did have some small doubts about the remote tests.

Wojcik: They're clearly for very different purposes; there will always be times when it's necessary to scan a large number of machines without access to authentication tokens or the ability to install code on each machine. I think no one would dispute that the quality of information is very different, but there are times when unauthenticated is the route you have to go. And it's clear from this conversation that there are questions out there, and a desire to get some of this standardized.

Murray: Are there any more questions? I think I know what the first steps are on my end, and if everyone makes sure they're signed up for the OVAL Developer list, that's where we'll host the discussion.

Chuvakin: One other action item I'd suggest is that we all make sure we are familiar with what's going on in some of the other projects we mentioned. We should especially read up on NASL, because we can borrow ideas from them.

Murray: Absolutely, there are lessons we can learn from all of these projects. We'll meet online, and start thinking about these problems more in depth.
