Lock down your kernel with grsecurity

Harden your system against attacks with the grsecurity kernel patch.

Hardening a Unix system can be a difficult process. It typically involves setting up all the services that the system will run in the most secure fashion possible, as well as locking down the system to prevent local compromises. However, putting effort into securing the services that you're running does little for the rest of the system and for unknown vulnerabilities. Luckily, even though the standard Linux kernel provides few features for proactively securing a system, there are patches available that can help the enterprising system administrator do so. One such patch is grsecurity (http://www.grsecurity.net).

grsecurity started out as a port of the OpenWall patch (http://www.openwall.com) to the 2.4.x series of Linux kernels. This patch added features such as nonexecutable stacks, some filesystem security enhancements, restrictions on access to /proc, as well as some enhanced resource limits. These features helped to protect the system against stack-based buffer overflow attacks, prevented filesystem attacks involving race conditions on files created in /tmp, limited a user to only seeing his own processes, and even enhanced Linux's resource limits to perform more checks. Since its inception, grsecurity has grown to include many features beyond those provided by the OpenWall patch. grsecurity now includes many additional memory address space protections to prevent buffer overflow exploits from succeeding, as well as enhanced chroot() jail restrictions, increased randomization of process and IP IDs, and increased auditing features that enable you to track every process executed on a system. grsecurity adds a sophisticated access control list (ACL) system that makes use of Linux's capabilities system. This ACL system can be used to limit the privileged operations that individual processes are able to perform on a case-by-case basis.

Configuration of ACLs is handled through the gradm utility.

To compile a kernel with grsecurity, you will need to download the patch that corresponds to your kernel version and apply it to your kernel using the patch utility.

For example, if you are running Linux 2.4.24:

# cd /usr/src/linux-2.4.24

# patch -p1 < ~andrew/grsecurity-1.9.13-2.4.24.patch

While the command is running, you should see a line for each kernel source file that is being patched. After the command has finished, you can make sure that the patch applied cleanly by looking for any files that end in .rej. The patch program creates these when it cannot apply the patch cleanly to a file. A quick way to see if there are any .rej files is to use the find command:

# find ./ -name \*.rej

If there are any rejected files, they will be listed on the screen. If the patch applied cleanly, you should be returned back to the shell prompt without any additional output.

After the patch has been applied, you can configure the kernel to enable grsecurity's features by running make config to use text prompts, make menuconfig for a curses-based interface, or make xconfig to use a Tk-based GUI. If you went the graphical route and used make xconfig, you should then see a dialog similar to Figure 1-1. If you ran make menuconfig or make config, the relevant kernel options have the same name as the menu options described in this example.


To configure which grsecurity features will be enabled in the kernel, click the button labeled Grsecurity. After doing that, you should see a dialog listing the grsecurity options.
To enable grsecurity, click the y radio button. After you've done that, you can enable predefined sets of features with the Security Level drop-down list, or set it to Custom and go through the menus to pick and choose which features to enable.

Choosing Low is safe for any system and should not affect any software's normal operation. Using this setting will enable linking restrictions in directories with mode 1777. This prevents race conditions in /tmp from being exploited, by only following symlinks to files that are owned by the process following the link. Similarly, users won't be able to write to FIFOs that they do not own if they are within a directory with permissions of 1777.

In addition to the tighter symlink and FIFO restrictions, the Low setting increases the randomness of process and IP IDs. This helps to prevent attackers from using remote detection techniques to correctly guess the operating system your machine is running (as in [Hack #40] ), and it also makes it difficult to guess the process ID of a given program. The Low security level also forces programs that use chroot( ) to change their current working directory to / after the chroot() call. Otherwise, if a program left its working directory outside of the chroot environment, it could be used to break out of the sandbox. Choosing the Low security level also prevents nonroot users from using dmesg, a utility that can be used to view recent kernel messages.

Choosing Medium enables all of the same features as the Low security level, but this level also includes features that make chroot()-based sandboxed environments more secure. The ability to mount filesystems, call chroot( ), write to sysctl variables, or create device nodes within a chrooted environment are all restricted, thus eliminating much of the risk involved in running a service in a sandboxed environment under Linux. In addition, TCP source ports will be randomized, and failed fork() calls, changes to the system time, and segmentation faults will all be logged. Enabling the Medium security level will also restrict total access to /proc to those who are in the wheel group. This hides each user's processes from other users and denies writing to /dev/kmem, /dev/mem, and /dev/port. This makes it more difficult to patch kernel-based root kits into the running kernel. Also, process memory address space layouts are randomized, making it harder for an attacker to successfully exploit buffer overrun attacks. Because of this, information on process address space layouts is removed from /proc as well. Because of these /proc restrictions, you will need to run your identd daemon (if you are running one) as an account that belongs to the wheel group. According to the grsecurity documentation, none of these features should affect the operation of your software, unless it is very old or poorly written.

To enable nearly all of grsecurity's features, you can choose the High security level. In addition to the features provided by the lower security levels, this level implements additional /proc restrictions by limiting access to device and CPU information to users who are in the wheel group. Sandboxed environments are also further restricted by disallowing chmod to set the SUID or SGID bit when operating within such an environment. Additionally, applications that are running within such an environment will not be allowed to insert loadable modules, perform raw I/O, configure network devices, reboot the system, modify immutable files, or change the system's time. Choosing this security level will also cause the kernel's stack to be laid out randomly, to prevent kernel-based buffer overrun exploits from succeeding. In addition, the kernel's symbols will be hidden—making it even more difficult for an intruder to install Trojan code into the running kernel—and filesystem mounting, remounting, and unmounting will be logged.

The High security level also enables grsecurity's PaX code, which enables nonexecutable memory pages. Enabling this will cause many buffer overrun exploits to fail, since any code injected into the stack through an overrun will be unable to execute. However, it is still possible to exploit a program with buffer overrun vulnerabilities, although this is made much more difficult by grsecurity's address space layout randomization features. PaX can also carry with it some performance penalties on the x86 architecture, although they are said to be minimal. In addition, some programs, such as XFree86, wine, and Java virtual machines, expect the memory addresses returned by malloc() to be executable. Unfortunately, PaX breaks this behavior, so enabling it will cause those programs and others that depend on it to fail. Luckily, PaX can be disabled on a per-program basis with the chpax utility (http://chpax.grsecurity.net).

To disable PaX for a program, you can run a command similar to this one:

# chpax -ps /usr/bin/java

There are also programs that make use of special GCC features, such as trampoline functions. A trampoline lets a programmer define a small function within a function, so that the inner function is visible only within the scope of the function in which it is defined. Unfortunately, GCC puts the trampoline function's code on the stack, so PaX will break any program that relies on this. However, PaX can provide emulation for trampoline functions, which can also be enabled on a per-program basis with chpax, using the -E switch.
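
For example, if a program that relies on trampolines were breaking under PaX, you could enable emulation for just that binary; the path here is only a placeholder for whatever program is affected:

# chpax -E /usr/local/bin/someprogram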

If you do not like the sets of features that are enabled with any of the predefined security levels, you can just set the kernel option to "custom" and enable only the features you need.

After you've set a security level or enabled the specific options you want to use, just recompile your kernel and modules as you normally would. You can do that with commands similar to these:

# make dep clean && make bzImage 
# make modules && make modules_install

Then reboot with your new kernel. In addition to the kernel restrictions already in effect, you can now use gradm to set up ACLs for your system.
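
Setting up the ACL system itself is beyond the scope of this hack, but a minimal sketch of getting started with gradm looks something like this (the exact options vary between grsecurity releases, so check the gradm documentation for your version):

# gradm -P      # set the password used to authorize ACL administration
# gradm -E      # enable the ACL system
# gradm -D      # disable it again (prompts for the admin password)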

As you can see, grsecurity is a complex but tremendously useful modification of the Linux kernel. For more detailed information on installing and configuring the patches, consult the extensive documentation at http://www.grsecurity.net/papers.php

Make compilers extinguish buffer overflows

In C and C++, memory for local variables is allocated in a chunk of memory called the stack. Information pertaining to the control flow of a program is also maintained on the stack. If an array is allocated on the stack and that array is overrun (that is, more values are pushed into the array than the available space provides), an attacker can overwrite the control flow information that is also stored on the stack. This type of attack is often referred to as a stack-smashing attack.

Stack-smashing attacks are a serious problem, since an otherwise innocuous service (such as a web server or FTP server) can be made to execute arbitrary commands. Several technologies have been developed that attempt to protect programs against these attacks. Some are implemented in the compiler, such as IBM's ProPolice (http://www.trl.ibm.com/projects/security/ssp/) and the Stackguard (http://www.immunix.org/stackguard.html) versions of GCC. Others are dynamic runtime solutions, such as LibSafe (http://www.research.avayalabs.com/project/libsafe/). While recompiling the source gets to the heart of the buffer overflow attack, runtime solutions can protect programs when the source isn't available or recompiling simply isn't feasible.

All of the compiler-based solutions work in much the same way, although there are some differences in the implementations. They work by placing a "canary" (which is typically some random value) on the stack between the control flow information and the local variables. The code that is normally generated by the compiler to return from the function is modified to check the value of the canary on the stack; if it is not what it is supposed to be, the program is terminated immediately.

The idea behind using a canary is that an attacker attempting to mount a stack-smashing attack will have to overwrite the canary to overwrite the control flow information. By choosing a random value for the canary, the attacker cannot know what it is and thus cannot include it in the data used to "smash" the stack.

When a program is distributed in source form, the developer of the program cannot enforce the use of StackGuard or ProPolice, because they are both nonstandard extensions to the GCC compiler. It is the responsibility of the person compiling the program to make use of one of these technologies.
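
For example, with a GCC that has the ProPolice patch applied, the protection is typically requested at compile time with a flag along these lines (the flag name is taken from the ProPolice documentation; StackGuard-patched compilers enable their checks differently):

$ gcc -fstack-protector -o myprogram myprogram.c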

For Linux systems, Avaya Labs's LibSafe technology is not implemented as a compiler extension, but instead takes advantage of a feature of the dynamic loader that causes a dynamic library to be preloaded with every executable. Using LibSafe does not require the source code for the programs it protects, and it can be deployed on a system-wide basis.
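
In practice this is done through the loader's preload mechanism. Here is a sketch of both the per-process and system-wide approaches; the library path is an assumption, so adjust it to wherever your LibSafe package installed the shared object:

$ LD_PRELOAD=/lib/libsafe.so.2 ./someprogram         # protect a single invocation
# echo '/lib/libsafe.so.2' >> /etc/ld.so.preload     # preload it for every dynamically linked binary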

LibSafe replaces the implementation of several standard functions that are known to be vulnerable to buffer overflows, such as gets(), strcpy(), and scanf(). The replacement implementations attempt to compute the maximum possible size of a statically allocated buffer used as a destination buffer for writing, using a GCC built-in function that returns the address of the frame pointer. That address is normally the first piece of information on the stack following local variables. If an attempt is made to write more than the estimated size of the buffer, the program is terminated.

Unfortunately, there are several problems with the approach taken by LibSafe. First, it cannot accurately compute the size of a buffer; the best it can do is limit the size of the buffer to the difference between the start of the buffer and the frame pointer. Second, LibSafe's protections will not work with programs that were compiled using the -fomit-frame-pointer flag to GCC, an optimization that causes the compiler not to put a frame pointer on the stack. Although it yields little real benefit, this is a popular optimization for programmers to employ. Finally, LibSafe will not work on SUID binaries without static linking or a similar trick.

In addition to providing protection against conventional stack-smashing attacks, the newest versions of LibSafe also provide some protection against format-string attacks. The format-string protection also requires access to the frame pointer because it attempts to filter out arguments that are not pointers into either the heap or the local variables on the stack.

MySQL authentication for proftpd

Lock down FTP access by authenticating users against a MySQL database instead of the system password file.

proftpd is a powerful FTP daemon with a configuration syntax much like Apache. It has a whole slew of options not available in most FTP daemons, including ratios, virtual hosting, and a modularized design that allows people to write their own modules.

One such module is mod_sql, which allows proftpd to use a SQL database as its back-end authentication source. Currently, mod_sql supports MySQL and PostgreSQL. This can be a good way to help lock down access to your server, as inbound users will authenticate against the database (and therefore not require an actual shell account on the server). In this hack, we'll get proftpd authenticating against a MySQL database.

First, download and build the source to proftpd and mod_sql:

~$ bzcat proftpd-1.2.6.tar.bz2 | tar xf -

~/proftpd-1.2.6/contrib$ tar zvxf ../../mod_sql-4.08.tar.gz 
~/proftpd-1.2.6/contrib$ cd ..

~/proftpd-1.2.6$ ./configure --with-modules=mod_sql:mod_sql_mysql \
  --with-includes=/usr/local/mysql/include/ \
  --with-libraries=/usr/local/mysql/lib/

(Naturally, substitute the path to your MySQL install, if it isn't in /usr/local/mysql/.) Now, build the code and install it:

~/proftpd-1.2.6$ make && sudo make install

Next, create a database for proftpd to use (assuming that you already have mysql up and running):

$ mysqladmin create proftpd

Then, permit read-only access to it from proftpd:

$ mysql -e "grant select on proftpd.* to proftpd@localhost \

    identified by 'secret';"

Create two tables in the database, with this schema:

CREATE TABLE users (
  userid varchar(30) NOT NULL default '',
  password varchar(30) NOT NULL default '',
  uid int(11) default NULL,
  gid int(11) default NULL,
  homedir varchar(255) default NULL,
  shell varchar(255) default NULL,
  UNIQUE KEY uid (uid),
  UNIQUE KEY userid (userid)
) TYPE=MyISAM;

CREATE TABLE groups (
  groupname varchar(30) NOT NULL default '',
  gid int(11) NOT NULL default '0',
  members varchar(255) default NULL
) TYPE=MyISAM;

One quick way to create the tables is to save this schema to a file called proftpd.schema and run a command like mysql proftpd < proftpd.schema.
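
For example, assuming the schema file is in the current directory:

$ mysql proftpd < proftpd.schema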

Now we need to tell proftpd to use this database for authentication. Add the following lines to /usr/local/etc/proftpd.conf:

SQLConnectInfo proftpd proftpd secret

SQLAuthTypes crypt backend

SQLMinUserGID 111

SQLMinUserUID 111

The SQLConnectInfo line takes the form database user password. You could also specify a database on another host (even on another port) with something like:

SQLConnectInfo proftpd@dbhost:5678 somebody somepassword

The SQLAuthTypes line lets you create users with passwords stored in the standard Unix crypt format or in the format produced by MySQL's PASSWORD() function. Be warned that if you're using mod_sql's logging facilities, the password may be exposed in plain text, so keep those logs private.

The SQLAuthTypes line as specified won't allow blank passwords; if you need that functionality, also include the empty keyword. The SQLMinUserGID and SQLMinUserUID lines specify the minimum group and user ID that proftpd will permit on login. It's a good idea to make this greater than 0 (to prohibit root logins), but it should be as low as you need to allow proper permissions in the filesystem. On this system, we have a user and group called www, with both its uid and gid set to 111. As we'll want web developers to be able to log in with these permissions, we'll need to set the minimum values to 111.

Finally, we're ready to create users in the database. This will create the user jimbo, with effective user rights as www/www, and dump him in the /usr/local/apache/htdocs/ directory at login:

mysql -e "insert into users values ('jimbo',PASSWORD('sHHH'),'111', \

  '111', '/usr/local/apache/htdocs','/bin/bash');" proftpd

The password for jimbo is encrypted with MySQL's PASSWORD() function before being stored. The /bin/bash entry is passed to proftpd only to satisfy proftpd's RequireValidShell directive; it has no bearing on granting actual shell access to the user jimbo.
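
Depending on how you want group lookups to resolve, you may also want a matching row in the groups table. Here is a sketch using the www group and gid 111 from above; adjust the values to your own setup:

$ mysql -e "insert into groups values ('www', 111, 'jimbo');" proftpd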

At this point, you should be able to fire up proftpd and log in as user jimbo, with a password of sHHH. If you are having trouble getting connected, try running proftpd in the foreground with debugging on, like this:

# proftpd -n -d 5

Watch the messages as you attempt to connect, and you should be able to track down the source of difficulty. In my experience, it's almost always due to a failure to set something properly in proftpd.conf, usually regarding permissions.

The mod_sql module can do far more than I've shown here; it can connect to existing mysql databases with arbitrary table names, log all activity to the database, modify its user lookups with an arbitrary WHERE clause, and much more.
See Also

    * The mod_sql home page at http://www.lastditcheffort.org/~aah/proftpd/mod_sql/
    * The proftpd home page at http://www.proftpd.org/

Chrooting / jailing applications

Mitigate system damage by keeping service compromises contained.

Sometimes keeping up with the latest patches just isn't enough to prevent a break-in. Often, a new exploit will circulate in private circles long before an official advisory is issued, during which time your servers may be open to unexpected attack. With this in mind, it's wise to take extra preventative measures to contain the aftermath of a compromised service. One way to do this is to run your services in sandbox environments. Ideally, this lets the service be compromised while minimizing the effects on the overall system.

Most Unix and Unix-like systems include some sort of system call or other mechanism for sandboxing that offers various levels of isolation between the host and the sandbox. The least restrictive and easiest to set up is a chroot() environment, which is available on nearly all Unix and Unix-like systems. In addition to chroot(), FreeBSD includes another mechanism called jail( ), which provides a few more restrictions beyond those provided by chroot().

chroot() very simply changes the root directory of a process and all of its children. While this is a powerful feature, there are many caveats to using it. Most importantly, there should be no way for anything running within the sandbox to change its effective UID (EUID) to 0, which is root's UID. Naturally, this implies that you don't want to run anything as root within the jail. If an attacker is able to gain root privileges within the sandbox, then all bets are off. While the attacker will not be able to directly break out of the sandbox environment, it does not prevent him from running functions inside the exploited processes' address space that will let him break out. There are many ways to break out of a chroot( ) sandbox. However, they all rely on being able to get root privileges within the sandboxed environment. The Achilles heel of chroot() is possession of UID 0 inside the sandbox.

There are a few services that support chroot() environments by calling the function within the program itself, but many services do not. To run these services inside a sandboxed environment using chroot(), we need to make use of the chroot command. The chroot command simply calls chroot() with the first command-line argument and attempts to execute the program specified in the second argument. If the program is a statically linked binary, all you have to do is copy the program to somewhere within the sandboxed environment; but if the program is dynamically linked, you will need to copy all of its supporting libraries to the environment as well.

See how this works by setting up bash in a chroot() environment. First we'll try to run chroot without copying any of the libraries bash needs:

# mkdir -p /chroot_test/bin

# cp /bin/bash /chroot_test/bin/

# chroot /chroot_test /bin/bash

chroot: /bin/bash: No such file or directory

Now we'll find out what libraries bash needs, which you can do with the ldd command, and attempt to run chroot again:

# ldd /bin/bash

libtermcap.so.2 => /lib/libtermcap.so.2 (0x4001a000)

libdl.so.2 => /lib/libdl.so.2 (0x4001e000)

libc.so.6 => /lib/tls/libc.so.6 (0x42000000)

/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

# mkdir -p /chroot_test/lib/tls && \
> (cd /lib; \
> cp libtermcap.so.2 libdl.so.2 ld-linux.so.2 /chroot_test/lib; \
> cd tls; cp libc.so.6 /chroot_test/lib/tls)

# chroot /chroot_test /bin/bash

bash-2.05b#

bash-2.05b# echo /*

/bin /lib

Setting up a chroot environment mostly involves trial and error in getting permissions right and all of the library dependencies in order. Be sure to consider the implications of having other programs such as mknod or mount available in the chroot environment. If these were available, the attacker could possibly create device nodes to access memory directly or to remount filesystems, thus breaking out of the sandbox and gaining total control of the overall system. This threat can be mitigated by putting the directory on a filesystem mounted with options that prohibit the use of device files , but that isn't always convenient. It is advisable to make as many of the files and directories in the chrooted directory as possible owned by root and writable only by root, in order to make it impossible for a process to modify any supporting files (this includes files such as libraries and configuration files). In general it is best to keep permissions as restrictive as possible, and to relax them only when necessary (for example, if the permissions prevent the daemon from working properly).
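
For example, if the chroot directories live on their own partition, an /etc/fstab entry along these lines would disallow device files and set-uid binaries there (the device name and mount point are hypothetical):

/dev/hda7   /chroot   ext3   defaults,nodev,nosuid   1 2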

The best candidates for a chroot() environment are services that do not need root privileges at all. For instance, MySQL listens for remote connections on port 3306 by default. Since this port is above 1024, mysqld can be started without root privileges and therefore doesn't pose the risk of being used to gain root access. Other daemons that need root privileges can include an option to drop these privileges after completing all the operations for which it needs root access (e.g., binding to a port below 1024), but care should be taken to ensure that the program drops its privileges correctly. If a program uses seteuid() rather than setuid() to drop its privileges, it is still possible to gain root access when exploited by an attacker. Be sure to read up on current security advisories for programs that will run only with root privileges.

You might think that simply not putting compilers, a shell, or utilities such as mknod in the sandbox environment may protect them in the event of a root compromise within the restricted environment. In reality, attackers can accomplish the same functionality by changing their code from calling system("/bin/sh") to calling any other C library function or system call that they desire. If you can mount the filesystem that the chrooted program runs from using the read-only flag [Hack #1], you can make it more difficult for attackers to install their own code, but this is still not quite bulletproof. Unless the daemon you need to run within the environment can meet the criteria discussed earlier, you might want to look into using a more powerful sandboxing mechanism.

One such mechanism is available under FreeBSD and is implemented through the jail() system call. jail() provides many more restrictions in isolating the sandbox environment from the host system and provides additional features, such as assigning IP addresses from virtual interfaces on the host system. Using this functionality, you can create a full virtual server or just run a single service inside the sandboxed environment.

Just as with chroot(), the system provides a jail command that uses the jail( ) system call. The basic form of the jail command is:

jail newroot hostname ipaddr command

where ipaddr is the IP address of the machine on which the jail is running. Try it out by running a shell inside a jail:

# mkdir -p /jail_test/bin

# cp /bin/sh /jail_test/bin/sh

# jail /jail_test jail_test 192.168.0.40 /bin/sh

# echo /*

/bin

This time, no libraries needed to be copied, because FreeBSD's /bin/sh is statically linked.

On the opposite side of the spectrum, we can build a jail that can function as a nearly full-function virtual server with its own IP address. The steps to do this basically involve building FreeBSD from source and specifying the jail directory as the install destination.

You can do this by running the following commands:

# mkdir /jail_test

# cd /usr/src

# make world DESTDIR=/jail_test

# cd /usr/src/etc && make distribution DESTDIR=/jail_test -DNO_MAKEDEV_RUN

# cd /jail_test/dev && sh MAKEDEV jail

# cd /jail_test && ln -s dev/null kernel

However, if you're planning to run just one service from within the jail, this is definitely overkill. Note that in the real world you'll most likely need to create /dev/null and /dev/log device nodes in your sandbox environment for most daemons to work correctly.
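
For a Linux chroot() environment like the one built earlier, the null device can be created with mknod; /dev/log is actually a Unix socket created by syslogd, which (with the stock sysklogd) can be told to listen on an extra socket inside the sandbox. A sketch, assuming sysklogd's -a option is available on your system:

# mkdir /chroot_test/dev
# mknod -m 666 /chroot_test/dev/null c 1 3      # major 1, minor 3 is the Linux null device
# syslogd -a /chroot_test/dev/log               # add a log socket inside the sandbox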

Automatic signature verification

Use scripting and key servers to automate the chore of checking software authenticity.

One of the most important things you can do for the security of your system is to be familiar with the software you are installing. You probably will not have the time, knowledge, or resources to actually go through the source code for all of the software that you are installing. However, verifying that the software you are compiling and installing is what the authors intended it to be can go a long way toward preventing the widespread distribution of Trojan horses. Recently, several pivotal pieces of software (such as tcpdump, LibPCap, Sendmail, and OpenSSH) have had Trojaned versions distributed. Since this is an increasingly popular vector for attack, verifying your software is critically important.

Why is this even an issue? Unfortunately, it takes a little bit of effort to verify software before installing it. Either through laziness or ignorance, many system administrators overlook this critical step. This is a classic example of "false" laziness, as it will likely lead to more work for the sysadmin in the long run. This problem is difficult to solve because it relies on the programmers and distributors to get their acts together. Then there's the laziness aspect: many times, software packages don't even come with a signature to use for verifying the legitimacy of what you've downloaded. Often, signatures are available right along with the source code, but in order to verify the code, you must then hunt through the site for the public key that was used to create the signature. After finding the public key, you have to download it, verify that the key is genuine, add it to your keyring, and finally check the signature of the code.

Here is what this would look like when checking the signature for Version 1.3.28 of the Apache web server using GnuPG (http://www.gnupg.org):

# gpg --import KEYS

# gpg --verify apache_1.3.28.tar.gz.asc apache_1.3.28.tar.gz

gpg: Signature made Wed Jul 16 13:42:54 2003 PDT using DSA key ID 08C975E5

gpg: Good signature from "Jim Jagielski <jim@zend.com>"

gpg:                 aka "Jim Jagielski <jim@apache.org>"

gpg:                 aka "Jim Jagielski <jim@jaguNET.com>"

gpg: WARNING: This key is not certified with a trusted signature!

gpg:          There is no indication that the signature belongs to the owner.

Fingerprint: 8B39 757B 1D8A 994D F243  3ED5 8B3A 601F 08C9 75E5

As you can see, it's not terribly difficult to do, but this step is often overlooked when you are in a hurry. This is where this hack comes to the rescue. We'll use a little bit of shell scripting and what are known as key servers to reduce the number of steps to perform this process.

Key servers are a part of a public-key cryptography infrastructure that allows you to retrieve keys from a trusted third party. A nice feature of GnuPG is its ability to query key servers for a key ID and to download the result into a local keyring. To figure out which key ID to ask for, we rely on the fact that the error message generated by GnuPG tells us which key ID it was unable to find locally when trying to verify the signature.
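
For instance, the key used in the Apache example above could be fetched by hand like this:

# gpg --keyserver search.keyserver.net --recv-keys 0x08C975E5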

In the previous example, if the key that GnuPG was looking for had not been imported prior to verifying the signature, it would have generated an error like this:

gpg: Signature made Wed Jul 16 13:42:54 2003 PDT using DSA key ID 08C975E5

gpg: Can't check signature: public key not found

The following script takes advantage of that error:

#!/bin/sh
VENDOR_KEYRING=vendors.gpg
KEYSERVER=search.keyserver.net
KEYID="0x`gpg --verify $1 $2 2>&1 | grep 'key ID' | awk '{print $NF}'`"
gpg --no-default-keyring --keyring $VENDOR_KEYRING --recv-key \
  --keyserver $KEYSERVER $KEYID
gpg --keyring $VENDOR_KEYRING --verify $1 $2

The first line of the script specifies the keyring in which the result from the key server query will be stored. You could use pubring.gpg (which is the default keyring for GnuPG), but using a separate file will make managing vendor public keys easier. The second line of the script specifies which key server to query (the script uses search.keyserver.net; another good one is pgp.mit.edu). The third line attempts (and fails) to verify the signature without first consulting the key server. It then takes the key ID from the resulting error message, prepends 0x to it, and uses it to query the key server on the next line. Finally, GnuPG attempts to verify the signature, specifying the keyring in which the query result was stored.

This script has shortened the verification process by eliminating the need to search for and import the public key that was used to generate the signature. Going back to the example of verifying the Apache 1.3.28 source code, you can see how much more convenient it is to verify the package's authenticity:

# checksig apache_1.3.28.tar.gz.asc apache_1.3.28.tar.gz

gpg: requesting key 08C975E5 from HKP keyserver search.keyserver.net

gpg: key 08C975E5: public key imported

gpg: Total number processed: 1

gpg:               imported: 1

gpg: Warning: using insecure memory!

gpg: please see http://www.gnupg.org/faq.html for more information

gpg: Signature made Wed Jul 16 13:42:54 2003 PDT using DSA key ID 08C975E5

gpg: Good signature from "Jim Jagielski <jim@zend.com>"

gpg:                 aka "Jim Jagielski <jim@apache.org>"

gpg:                 aka "Jim Jagielski <jim@jaguNET.com>"

gpg: checking the trustdb

gpg: no ultimately trusted keys found

gpg: WARNING: This key is not certified with a trusted signature!

gpg:          There is no indication that the signature belongs to the owner.

Fingerprint: 8B39 757B 1D8A 994D F243  3ED5 8B3A 601F 08C9 75E5

With this small and quick script, both the number of steps and the amount of time needed to verify a source package have been reduced. As with any good shell script, it should help you to be lazy in a good way: by doing more work properly, but with less effort on your part.

Check for listening services

Find out whether unneeded services are listening and looking for possible backdoors.

One of the first things that should be done after a fresh operating system install is to see what services are running, and remove any unneeded services from the system startup process. You could use a port scanner (such as nmap [Hack #42] ) and run it against the host, but if one didn't come with the operating system install, you'll likely have to connect your fresh (and possibly insecure) machine to the network to download one. Also, nmap can be fooled if the system is using firewall rules. With proper firewall rules, a service can be completely invisible to nmap unless certain criteria (such as the source IP address) also match. When you have shell access to the server itself, it is usually more efficient to find open ports using programs that were installed with the operating system. One program that will do what we need is netstat, a program that will display various network-related information and statistics.

To get a list of listening ports and their owning processes under Linux, run this:

# netstat -luntp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address Foreign Address  State   PID/Program name

tcp        0      0 0.0.0.0:22    0.0.0.0:*        LISTEN  1679/sshd

udp        0      0 0.0.0.0:68    0.0.0.0:*                1766/dhclient

From the output, you can see that this machine is probably a workstation rather than a network server: the only listeners are a DHCP client and an SSH daemon for remote access. The ports in use are listed after the colon in the Local Address column (22 for sshd and 68 for dhclient).

Unfortunately, the BSD version of netstat does not let us list the processes and the process IDs (PIDs) that own the listening port. Nevertheless, the BSD netstat command is still useful for listing the listening ports on your system.

To get a list of listening ports under FreeBSD, run this command:

# netstat -a -n | egrep 'Proto|LISTEN'

Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)

tcp4       0      0  *.587                  *.*                    LISTEN

tcp4       0      0  *.25                   *.*                    LISTEN

tcp4       0      0  *.22                   *.*                    LISTEN

tcp4       0      0  *.993                  *.*                    LISTEN

tcp4       0      0  *.143                  *.*                    LISTEN

tcp4       0      0  *.53                   *.*                    LISTEN

Again, the ports in use are listed in the Local Address column. Many seasoned system administrators have memorized the common port numbers for popular services, and can see that this server is running SSH, SMTP, DNS, IMAP, and IMAP+SSL services. If you are ever in doubt about which services typically run on a given port, either drop the -n switch from netstat (so that it resolves service and host names, which can take much longer because of the DNS lookups involved) or manually grep the /etc/services file:

# grep -w 993 /etc/services

imaps           993/udp     # imap4 protocol over TLS/SSL

imaps           993/tcp     # imap4 protocol over TLS/SSL

Also notice that, unlike the output of netstat on Linux, we don't get the PIDs of the daemons themselves. You might also notice that no UDP ports were listed for DNS. This is because UDP sockets do not have a LISTEN state in the same sense that TCP sockets do. In order to display UDP sockets, you must add udp4 to the argument for egrep, thus making it 'Proto|LISTEN|udp4'. However, due to the way UDP works, not all UDP sockets will necessarily be associated with a daemon process.
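
Putting that together, the full command looks like this:

# netstat -a -n | egrep 'Proto|LISTEN|udp4'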

Under FreeBSD, there is another command that will give us just what we want. The sockstat command performs only a small subset of what netstat can do, and is limited to just listing information on both Unix domain sockets and Inet sockets.

To get a list of listening ports and their owning processes with sockstat, run this command:

# sockstat -4 -l

USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS        FOREIGN ADDRESS   

root     sendmail  1141    4 tcp4   *:25                 *:*

root     sendmail  1141    5 tcp4   *:587                *:*

root     sshd      1138    3 tcp4   *:22                 *:*

root     inetd     1133    4 tcp4   *:143                *:*

root     inetd     1133    5 tcp4   *:993                *:*

named    named     1127   20 tcp4   *:53                 *:*

named    named     1127   21 udp4   *:53                 *:*

named    named     1127   22 udp4   *:1351               *:*

Once again, we see that sshd, SMTP, DNS, IMAP, and IMAP+SSL services are running, but now we have the process that owns the socket plus its PID. We can now see that the IMAP services are being spawned from inetd instead of standalone daemons, and that sendmail and named are providing the SMTP and DNS services.

For most other Unix-like operating systems you can use the lsof utility (http://ftp.cerias.purdue.edu/pub/tools/unix/sysutils/lsof/). lsof is short for "list open files" and, as the name implies, allows you to list files that are open on a system, in addition to the processes and PIDs that have them open. Since sockets and files work the same way under Unix, lsof can also be used to list open sockets. This is done with the -i command-line option.

To get a list of listening ports and the processes that own them using lsof, run this command:

# lsof -i -n | egrep 'COMMAND|LISTEN'

COMMAND   PID   USER FD  TYPE     DEVICE SIZE/OFF NODE NAME

named    1127 named  20u IPv4 0xeb401dc0      0t0  TCP *:domain (LISTEN)

inetd    1133  root   4u IPv4 0xeb401ba0      0t0  TCP *:imap (LISTEN)

inetd    1133  root   5u IPv4 0xeb401980      0t0  TCP *:imaps (LISTEN)

sshd     1138  root   3u IPv4 0xeb401760      0t0  TCP *:ssh (LISTEN)

sendmail 1141  root   4u IPv4 0xeb41b7e0      0t0  TCP *:smtp (LISTEN)

sendmail 1141  root   5u IPv4 0xeb438fa0      0t0  TCP *:submission (LISTEN)

Again, you can change the argument to egrep to display UDP sockets. However, this time use UDP instead of udp4, which makes the argument 'COMMAND|LISTEN|UDP'. As mentioned earlier, not all UDP sockets will necessarily be associated with a daemon process.
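
In other words:

# lsof -i -n | egrep 'COMMAND|LISTEN|UDP'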

Sudo crash course

The sudo utility can help you delegate some system responsibilities to other people, without giving away full root access. It is a setuid root binary that executes commands on an authorized user's behalf, after she has entered her current password.

As root, run /usr/sbin/visudo to edit the list of users who can call sudo. The default sudo list looks something like this:

root ALL=(ALL) ALL

Unfortunately, many system administrators tend to use this entry as a template and grant unrestricted root access to all other admins unilaterally:

root ALL=(ALL) ALL

rob ALL=(ALL) ALL

jim ALL=(ALL) ALL

david ALL=(ALL) ALL

While this may allow you to give out root access without giving away the root password, this method is truly useful only when all of the sudo users can be completely trusted. When properly configured, the sudo utility provides tremendous flexibility for granting access to any number of commands, run as any arbitrary uid.

The syntax of the sudo line is:

user machine=(effective user) command 

The first column specifies the sudo user. The next column defines the hosts on which this sudo entry is valid. This allows you to easily use a single sudo configuration across multiple machines.

For example, suppose you have a developer who needs root access on a development machine, but not on any other server:

peter beta.oreillynet.com=(ALL) ALL

The next column (in parentheses) specifies the effective user that may run the commands. This is very handy for allowing users to execute code as users other than root:

peter lists.oreillynet.com=(mailman) ALL

Finally, the last column specifies all of the commands that this user may run:

david ns.oreillynet.com=(bind) /usr/sbin/rndc,/usr/sbin/named

If you find yourself specifying large lists of commands (or, for that matter, users or machines), then take advantage of sudo's Alias syntax. An Alias can be used in place of its respective entry on any line of the sudo configuration:

User_Alias ADMINS=rob,jim,david

User_Alias WEBMASTERS=peter,nancy

Runas_Alias DAEMONS=bind,www,smmsp,ircd

Host_Alias WEBSERVERS=www.oreillynet.com,www.oreilly.com,www.perl.com

Cmnd_Alias PROCS=/bin/kill,/bin/killall,/usr/bin/skill,/usr/bin/top

Cmnd_Alias APACHE=/usr/local/apache/bin/apachectl

WEBMASTERS WEBSERVERS=(www) APACHE

ADMINS ALL=(DAEMONS) ALL

It is also possible to specify system groups in place of the user specification, to allow any user who belongs to that group to execute commands. Just preface the group with a %, like this:

%wwwadmin WEBSERVERS=(www) APACHE

Now any user who is part of the wwwadmin group can execute apachectl as the www user on any of the web server machines.

One very useful feature is the NOPASSWD: flag. When present, the user won't have to enter a password before executing the command:

rob ALL=(ALL) NOPASSWD: PROCS

This will allow the user rob to execute kill, killall, skill, and top on any machine, as any user, without entering a password.
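
An easy way to sanity-check an entry like this is to have the user list what sudo will actually let him run:

$ sudo -l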

Finally, sudo can be a handy alternative to su for running commands at startup out of the system rc files:

(cd /usr/local/mysql; sudo -u mysql ./bin/safe_mysqld &)

sudo -u www /usr/local/apache/bin/apachectl start

For that to work at boot time, the default line root ALL=(ALL) ALL must be present.

Use sudo with the usual caveats that apply to setuid binaries. Particularly if you allow sudo to execute interactive commands (like editors) or any sort of compiler or interpreter, you should assume that it is possible that the sudo user will be able to execute arbitrary commands as the effective user. Still, under most circumstances this isn't a problem, and it's certainly preferable to giving away undue access to root privileges.

—Rob Flickenger

Append-only logfiles (FreeBSD/Linux)

Use file attributes to prevent intruders from removing traces of their break-in.

In the course of an intrusion, an attacker will more than likely leave telltale signs of his actions in various system logs. This is a valuable audit trail that should be well protected. Without reliable logs, it can be very difficult to figure out how the attacker got in, or where the attack came from. This information is crucial in analyzing the incident and then responding to it by contacting the appropriate parties involved [Hack #100] . However, if the break-in attempt is successful and the intruder gains root privileges, what's to stop him from removing the traces of his misbehavior?

This is where file attributes come in to save the day (or at least make it a little better). Both Linux and the BSDs have the ability to assign extra attributes to files and directories. This is different from the standard Unix permissions scheme in that the attributes set on a file apply universally to all users of the system, and they affect file accesses at a much deeper level than file permissions or ACLs [Hack #4]. In Linux you can see and modify the attributes that are set for a given file by using the lsattr and chattr commands, respectively. Under the BSDs, ls -lo can be used to view the attributes, and chflags can be used to modify them. At the time of this writing, file attributes in Linux are available only when using the ext2 and ext3 filesystems. There are also kernel patches available for attribute support in XFS and reiserfs.

One useful attribute for protecting log files is append-only. When this attribute is set, the file cannot be deleted, and writes are only allowed to append to the end of the file.

To set the append-only flag under Linux, run this command:

# chattr +a filename

Under the BSDs, use this:

# chflags sappnd filename

See how the +a attribute works by creating a file and setting its append-only attribute:

# touch /var/log/logfile

# echo "append-only not set" > /var/log/logfile

# chattr +a /var/log/logfile

# echo "append-only set" > /var/log/logfile

bash: /var/log/logfile: Operation not permitted

The second write attempt failed, since it would overwrite the file. However, appending to the end of the file is still permitted:

# echo "appending to file" >> /var/log/logfile

# cat /var/log/logfile

append-only not set

appending to file
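
On Linux you can confirm that the flag is set by listing the file's attributes; an a in the attribute field indicates append-only:

# lsattr /var/log/logfile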

Obviously, an intruder who has gained root privileges could realize that file attributes are being used and just remove the append-only flag from our logs by running chattr -a. To prevent this, we need to disable the ability to remove the append-only attribute. To accomplish this under Linux, use its capabilities mechanism. Under the BSDs, use its securelevel facility.

The Linux capabilities model divides up the privileges given to the all-powerful root account and allows you to selectively disable them. In order to prevent a user from removing the append-only attribute from a file, we need to remove the CAP_LINUX_IMMUTABLE capability. When present in the running system, this capability allows the append-only attribute to be modified. To modify the set of capabilities available to the system, we will use a simple utility called lcap (http://packetstormsecurity.org/linux/admin/lcap-0.0.3.tar.bz2).

To unpack and compile the tool, run this command:

# tar xvfj lcap-0.0.3.tar.bz2 && cd lcap-0.0.3 && make

Then, to disallow modification of the append-only flag, run:

# ./lcap CAP_LINUX_IMMUTABLE

# ./lcap CAP_SYS_RAWIO

The first command removes the ability to change the append-only flag, and the second command removes the ability to do raw I/O. This is needed so that the protected files cannot be modified by accessing the block device they reside on. It also prevents access to /dev/mem and /dev/kmem, which would provide a loophole for an intruder to reinstate the CAP_LINUX_IMMUTABLE capability. To remove these capabilities at boot, add the previous two commands to your system startup scripts (e.g., /etc/rc.local). You should ensure that capabilities are removed late in the boot order, to prevent problems with other startup scripts. Once lcap has removed kernel capabilities, they can be reinstated only by rebooting the system.

The BSDs accomplish the same thing through the use of securelevels. The securelevel is a kernel variable that can be set to disallow certain functionality. Raising the securelevel to 1 is functionally the same as removing the two previously discussed Linux capabilities. Once the securelevel has been set to a value greater than 0, it cannot be lowered. By default, OpenBSD will raise the securelevel to 1 when in multiuser mode. In FreeBSD, the securelevel is -1 by default.

To change this behavior, add the following line to /etc/sysctl.conf:

kern.securelevel=1

Before doing this, you should be aware that adding append-only flags to your log files will most likely cause log rotation scripts to fail. However, doing this will greatly enhance the security of your audit trail, which will prove invaluable in the event of an incident.

Access control lists: advanced permissions in Linux

Most of the time, the traditional Unix file permission system fits the bill just fine. But in a highly collaborative environment with multiple people needing access to files, this scheme can become unwieldy. Access control lists, otherwise known as ACLs (pronounced to rhyme with "hackles"), are a feature that is relatively new to the Linux operating system, but has been available in FreeBSD and Solaris for some time. While ACLs do not inherently add "more security" to a system, they do reduce the complexity of managing permissions. ACLs provide new ways to apply file and directory permissions without resorting to the creation of unnecessary groups.

ACLs are stored as extended attributes within the filesystem metadata. As the name implies, they allow you to define lists that either grant or deny access to a given file based on the criteria you provide. However, ACLs do not abandon the traditional permission system completely. ACLs may be specified for both users and groups and are still separated into the realms of read, write, and execute access. In addition, a control list may be defined for any user or group that does not correspond to any of the user or group ACLs, much like the "other" mode bits of a file. Access control lists also have what is called an ACL mask, which acts as a permission mask for all ACLs that specifically mention a user and a group. This is similar to a umask, but not quite the same. For instance, if you set the ACL mask to r--, any ACLs that pertain to a specific user or group and are looser in permissions (e.g., rw-) will effectively become r--. Directories also may contain a default ACL, which specifies the initial ACLs of files and subdirectories created within them.

To modify or remove ACLs, use the setfacl command. To modify an ACL, the -m option is used, followed by an ACL specification and a filename or list of filenames. You can delete an ACL by using the -x option and specifying an ACL or list of ACLs.

There are three general forms of an ACL: one for users, another for groups, and one for others. Let's look at them here:

# User ACL

u:[user]:<mode>

# Group ACL

g:[group]:<mode>

# Other ACL

o:<mode>

Notice that in the user and group ACLs, the actual user and group names that the ACL applies to are optional. If these are omitted, it means that the ACL will apply to the base ACL, which is derived from the file's mode bits. Thus, if you modify these, the mode bits will be modified and vice versa.

See for yourself by creating a file and then modifying its base ACL:

$ touch myfile

$ ls -l myfile

-rw-rw-r--    1 andrew   andrew          0 Oct 13 15:57 myfile

$ setfacl -m u::---,g::---,o:--- myfile

$ ls -l myfile

----------    1 andrew   andrew          0 Oct 13 15:57 myfile

From this example, you can also see that multiple ACLs can be listed by separating them with commas.

You can also specify ACLs for an arbitrary number of groups or users:

$ touch foo

$ setfacl -m u:jlope:rwx,g:wine:rwx,o:--- foo

$ getfacl foo

# file: foo

# owner: andrew

# group: andrew

user::rw-

user:jlope:rwx

group::---

group:wine:rwx

mask::rwx

other::---

Now if you changed the mask to r--, the ACLs for jlope and wine would effectively become r-- as well:

$ setfacl -m m:r-- foo

$ getfacl foo

# file: foo

# owner: andrew

# group: andrew

user::rw-

user:jlope:rwx                  #effective:r--

group::---

group:wine:rwx                  #effective:r--

mask::r--

other::---
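
To drop one of these entries entirely, rather than just masking it, use the -x option mentioned earlier. For example, to remove jlope's user entry from foo:

$ setfacl -x u:jlope foo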

As mentioned earlier, directories can have default ACLs that will automatically be applied to files that are created within the directory. Default ACLs are set by prepending a d: to the ACL that you want to set:

$ mkdir mydir

$ setfacl -m d:u:jlope:rwx mydir

$ getfacl mydir

# file: mydir

# owner: andrew

# group: andrew

user::rwx

group::---

other::---

default:user::rwx

default:user:jlope:rwx

default:group::---

default:mask::rwx

default:other::---



$ touch mydir/bar

$ getfacl mydir/bar

# file: mydir/bar

# owner: andrew

# group: andrew

user::rw-

user:jlope:rwx                  #effective:rw-

group::---

mask::rw-

other::---

As you may have noticed from the previous examples, you can list ACLs by using the getfacl command. This command is pretty straightforward and has only a few options. The most useful is the -R option, which allows you to list ACLs recursively and works very much like ls -R.

Loose directory permissions and the sticky bit

Group- and world-writable directories can let one user tamper with another user's files. To find all such directories on your system, run:

# find / -type d \( -perm -g+w -o -perm -o+w \) -exec ls -lad {} \;

Any directories that are listed in the output should have the sticky bit set, which is denoted by a t in the directory's permission bits. A world-writable directory with the sticky bit set ensures that even though anyone may create files in the directory, they may not delete or modify another user's files. If you see a directory in the output that does not contain a sticky bit, consider whether it really needs to be world-writable or whether the use of groups or ACLs [Hack #4] will work better for your situation. If you really do need the directory to be world-writable, set the sticky bit on it using chmod +t.
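
For example, to fix a hypothetical world-writable upload directory:

# chmod +t /srv/uploads        # /srv/uploads is just an example path
# ls -ld /srv/uploads          # the mode should now end in a "t" (e.g., drwxrwxrwt)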

To get a list of the directories that don't have their sticky bit set, run this:

# find / -type d \( -perm -g+w -o -perm -o+w \) \
  -not -perm -a+t -exec ls -lad {} \;

If you're using a system that creates a unique group for each user (e.g., you create a user andrew, which in turn creates a group andrew as the primary group), you may want to modify the commands to not scan for group-writable directories. (Otherwise, you will get a lot of output that really isn't pertinent.) To do this, run the command without the -perm -g+w -o portion.