Introduction

NX protection seems great; it stops viruses dead in their tracks and eliminates those pesky buffer overflows we have been hearing so much about for the last 15 years. Well, maybe not. In fact, NX provides a certain amount of false security: it stops only some buffer overflows, and whether it stops any viruses at all remains to be seen.

Although AMD and Intel both like to spin their newest NX/XD "Virus Protection" as a new feature of x86 technology, in reality NX behaves more like an emergency patch for an easily exploitable architecture. To explore how NX benefits us, and where it provides a false sense of security, we must first understand one of the scenarios it supposedly protects against.

The simplest explanation of a buffer overflow is a program executing memory it never intended to. Typically, this happens when a program writes something into memory that it has not properly allocated, sometimes writing over pieces of the same program already in memory.

For example, let's take a look at the following program code:

#include <stdio.h>

int main(int argc, char **argv)
{
    char line[512];     /* 512 bytes allocated on the stack */
    line[0] = 0;
    gets(line);         /* reads input with no bounds checking */
    /* ... */
    return 0;
}

Undoubtedly, even those who have taken only rudimentary programming classes are having a good chuckle at the monstrosity above. However, this code is copied nearly verbatim from the original UNIX fingerd program circa 1988. In UNIX (or any other operating system), the entire contents of the program are loaded into memory as machine code. A simple, although not necessarily proper, way to think about the code above is as an array of several hundred bytes in memory: 512 bytes allocated for the character array line by its declaration.

As the program starts flipping through the code in its array (which is actually called a stack), it reaches the function gets(). gets(), a horrible, ancient function, reads input and places it into the line array. But line has only enough memory allocated for 512 characters! It would take a malicious user only a few seconds to realize that writing more than 512 characters to the fingerd program lets him write data past the end of the line array. And since line exists inside a stack in memory, writing past the end of line actually starts to overwrite critical pieces of the program! If a user were to write 512 characters to fingerd followed by machine code, the user could essentially take control of the entire machine.

This fingerd hole was exploited by the first real Internet worm, the Morris worm of 1988, via exactly this kind of buffer overflow. Worms like Nimda and Code Red rely on the same basic technique as the original fingerd worm, so unfortunately, 16 years later, imprudent programming and exploitable hardware are still to blame.
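
To make the overwrite concrete, here is a minimal sketch of the same class of bug (the names and sizes are illustrative, and modern compilers' stack protectors will usually catch it):

#include <stdio.h>
#include <string.h>

/* a deliberately unsafe copy into a small stack buffer */
void vulnerable(const char *input)
{
    char buf[16];
    strcpy(buf, input);     /* no bounds check, just like gets() */
    printf("copied: %s\n", buf);
}

int main(void)
{
    char attack[64];
    /* 48 bytes of 'A': the first 16 fill buf, the rest spill into
       adjacent stack memory, including the saved return address */
    memset(attack, 'A', 48);
    attack[48] = '\0';
    vulnerable(attack);     /* typically crashes on return */
    return 0;
}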

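For contrast, the fix for this class of bug is a one-line change; a sketch of a bounds-checked version using fgets() (not the actual 1988 fix):

#include <stdio.h>

int main(int argc, char **argv)
{
    char line[512];
    line[0] = 0;
    /* fgets() reads at most sizeof(line) - 1 characters, so long
       input is truncated instead of overrunning the buffer */
    if (fgets(line, sizeof(line), stdin) == NULL)
        return 1;
    return 0;
}
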
Comments

  • Andyvan - Saturday, November 20, 2004 - link

    I have three quibbles about the article:

    1) "As the program starts flipping through the code in its array (which is actually called a stack), it reaches the function gets()."

    I think you've mixed a couple of concepts together here. The code is *not* in the stack. They're two different sections of memory.

    I would reword this as:

    "As the program starts executing the code of the main routine, it reaches the function gets()."

    2) "And since line exists inside a stack in memory, writing past the end of line actually starts to overwrite critical pieces of the program!"

    This may be more of a nit: I would say that it will overwrite data critical to the program. The code proper is not in the stack, and so you can't actually overwrite the *code* when you go off the end of 'line'.

    I'll understand if you don't accept this quibble.

    3) "When a program begins to load an array, a special byte after the array (called a return address) tells the computer to go back to the code segment and continue to run the program."

    I don't believe the return address is actually a byte, as that would only support an address with an 8-bit range. Isn't it a four byte value?
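
    A quick check of the pointer size (a trivial sketch; return addresses are pointer-sized):

        #include <stdio.h>

        int main(void)
        {
            /* a return address is pointer-sized: 4 bytes on 32-bit x86,
               8 bytes on x86-64 */
            printf("%zu\n", sizeof(void *));
            return 0;
        }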

    -- Andyvan
  • sluramod - Wednesday, November 3, 2004 - link

    Maybe it is off-topic, but...

    What are the good reasons for abandoning segmentation and segment protection on x86? Sorry if this is a stupid question...

    Alexei.
  • emboss - Sunday, October 24, 2004 - link

    If you're going to be picky about terminology, at least get it right :) Segmented memory hasn't been used since Windows 3. The correct term to use is sections ("All memory is divided up into several SECTIONS").

    Also, the 286 most definitely did not support an NX bit for pages, primarily because the 286 didn't do paging. What was introduced with the 286 (and still remains on modern CPUs) is segment protection (executable, writable, readable, plus a few less easily explained attributes). However, since segmentation has been abandoned (with good reason) and the flat address space model has taken over, this "protection" is of no use.
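
    To make the paging point concrete, a minimal sketch of where NX actually lives on today's chips (bit positions per the x86-64 page-table entry format; the helper function is illustrative):

        #include <stdint.h>

        /* NX lives in the page tables: on x86-64, bit 63 of a
           page-table entry is the execute-disable (NX/XD) bit.
           The paging-less 286 had no place to store such a bit. */
        #define PTE_PRESENT  (1ULL << 0)
        #define PTE_WRITABLE (1ULL << 1)
        #define PTE_NX       (1ULL << 63)

        static inline int page_is_executable(uint64_t pte)
        {
            return (pte & PTE_PRESENT) && !(pte & PTE_NX);
        }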

    Finally, a system with NX ideally shouldn't be any more stable than a system without it. User-space programs are isolated from the kernel (and from other user-space programs), so any damage they do is limited to themselves. In which case, terminating the program immediately just speeds up the process of ending the program when something gets corrupted :)

    And if a kernel-mode component gets killed because it violated NX protection, then you're gonna get a BSOD or equivalent, so again, you're not much better off.

    The one thing it MAY help with is stopping corrupted data from getting written to disk. But overall, I don't think it would have much of an effect from this point of view either.
  • Bitpower - Sunday, October 24, 2004 - link

    What I was trying to say was that a computer system with NX would be more 'stable' than one without it, since another important side effect of NX is that, besides protecting you against viruses, it also protects you against certain program crashes where there is a buffer overrun.
  • Pax Team - Sunday, October 24, 2004 - link

    software bugs vs. exploit techniques:

    in a reply above you suggested that instead of relying on 200,000 programmers to fix their bugs we should rely on NX and similar hardware features. i.e., it appeared to me as if fixing bugs and intrusion prevention techniques were mutually exclusive (wouldn't be the first time i've misunderstood someone ;-).

    as for my comment: software bugs (or let's just talk about memory corruption bugs) come in many flavours, such as various forms of buffer overflows (stack or heap based, linear or non-linear, single or multiple), integer handling bugs (that can create buffer overflows in turn), user-supplied format strings, etc. these bugs can be fixed (for good) by reading the code, finding them and modifying the code properly. this is what needs many eyes (and lots of time and expertise) and in practice it's never been 100% efficient, hence the need for intrusion prevention technologies that try to prevent or mitigate the effects of bugs.
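
    for instance, the user-supplied format string bug mentioned above is a one-liner; a sketch (not taken from any particular program):

        #include <stdio.h>

        /* untrusted input used as the format argument lets an attacker
           read the stack with %x/%s and even write memory with %n */
        void log_message(const char *user_input)
        {
            printf(user_input);              /* bug */
            /* safe form: printf("%s", user_input); */
        }

        int main(void)
        {
            char buf[64];
            if (fgets(buf, sizeof buf, stdin))
                log_message(buf);            /* try input like "%x %x %n" */
            return 0;
        }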

    orthogonal to bugs there are exploits that make use of the bugs by exploiting the level of access they (inadvertently) give to the attacked program. memory corruption bugs can give at most arbitrary read-write access to the target program's memory (e.g., a user-supplied format string bug comes very close to giving this level of access while a 'traditional' strcpy() based stack overflow provides only limited write access).

    an exploit technique describes the way a given bug is made use of. note that this implies that a given bug can be exploited by different techniques and a given technique can make use of different kinds of bugs - hence my saying that these categories (bugs vs. exploit techniques) are orthogonal (and one can and should attack this two-dimensional problem space from either dimension).

    what exploit techniques can we speak of? this normally depends on how your defense mechanisms (intrusion prevention system) work. in PaX we assume arbitrary read-write bugs (i.e., the most powerful category, every memory corruption bug falls within), and this allows us to split the exploit techniques into 3 main categories only, you can read about them at http://pax.grsecurity.net/docs/pax.txt .

    PS: your guess about Intel/AMD beginning to work on NX in mid-2003 is not quite correct, amd64 (the architecture) has always had NX, at least since 2000 i guess. linux itself didn't make use of it until late 2002 though: http://www.x86-64.org/lists/discuss/msg03016.html .

    PS2: contact email address is on the PaX homepage.
  • Bitpower - Thursday, October 21, 2004 - link

    Just curious, but why doesn't the author address, in the above article, the other very important advantage of NX, which has nothing to do with viruses? I would think that one of the most important advantages of NX is that it would make your computer a lot more stable. And this advantage is totally separate from virus protection.

    Have you ever had an application or game cause your computer to totally crash and lock up, where the only way out was to reboot? From the description of NX above, it would also protect you against certain crashes. So it would also act like a "crash guard", preventing poorly written or buggy programs from accidentally overwriting their own code with random junk and crashing your machine.
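
    To illustrate the mechanism being described here, a minimal sketch of "jumping into data" (the object-to-function-pointer cast is a common compiler extension, not strict ISO C):

        #include <string.h>

        int main(void)
        {
            unsigned char data[16];
            memset(data, 0xC3, sizeof data);   /* 0xC3 is the x86 'ret' opcode */

            /* treat the data buffer as code and call it; on a CPU/OS
               honoring NX this faults immediately instead of quietly
               executing whatever bytes happen to be there */
            void (*fn)(void) = (void (*)(void))(void *)data;
            fn();
            return 0;
        }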

    I think its ability to act like a crash guard is as important, if not more important, than its ability to protect against viruses. So while virus protection is a very important ability of NX, I would also think NX's ability to act like a "crash guard" for your system is just as important.

    Am I the only person who thought of this second advantage of NX, as a crash guard? Myself, I don't care about the virus protection since I constantly scan my computer. But I would be very interested in the ability to use NX for crash guarding and increasing the stability of my machine against buggy programs.
  • KristopherKubicki - Thursday, October 21, 2004 - link

    Pax Team:

    Correct about W^X and ExecShield; the same goes for NX as well.

    "Prescott supports the NX - for 'no execute' -- feature that blocks worms and viruses from executing code after creating a buffer overflow on the machine", said Paul Otellini, Intel's COO.

    The scope of the article was to show that NX doesn't do what even Intel's COO believes it does.

    I am a little confused about your second paragraph though, can you please elaborate? Thanks for the feedback. Please email me when you get a chance.

    Kristopher
  • Pax Team - Sunday, October 17, 2004 - link

    OpenBSD's W^X is only a subset of what PaX has implemented for 4 years now, but thanks for the praise anyway ;-). As for ExecShield, it doesn't implement W^X, data and bss sections in libraries remain both writable and executable. And both OpenBSD and ExecShield are vulnerable to executing existing code that can trivially circumvent W^X separation and execute injected shellcode.

    Also, you're mixing up software bugs (that your 200,000+ programmers can fix) with exploit techniques (some of which properly used hardware features can prevent). The two sets are orthogonal, not mutually exclusive.

    As for MIPS, as far as i know, none of them has true hardware NX bit support, although on some models it might be possible to simulate the behaviour, but it's lots of hacking and is mostly an academic exercise only, not useful for production.
  • KristopherKubicki - Saturday, October 16, 2004 - link

    WarcraftIII: Although I agree with your comments on programming error, let's look at the last 20 years of programming as an example. If we have the ability to stop buffer overflows, do we rely on 4 processor makers or on 200,000+ C programmers to fix the problem, when either could fix the errors with equal amounts of work?

    NX is OK; I am just illustrating that it doesn't do what it says. It might have stopped some older worms, but when the next worm hits that moves the EIP somewhere it shouldn't and "buffer overflows" all those NX-protected machines, don't you think people are going to be upset?

    Most RISC and MIPS processors have utilized some form of NX protection since their inception, by the way.

    Finally, I would like to add that OpenBSD's W^X protection (writable xor execute; no execute on any writable segment) is probably the most elegant solution yet to the buffer overflow issue. It is still vulnerable to maliciously modified return pointers, but it doesn't advertise otherwise.
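
    A sketch of what that policy means in practice (POSIX mmap; under a strict W^X kernel, asking for memory that is both writable and executable is refused or has PROT_EXEC stripped):

        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
            /* request a page that is writable AND executable; a strict
               W^X policy denies this combination outright */
            void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                perror("mmap");              /* expected under W^X */
            return 0;
        }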

    Kristopher
