Apache security – SSL, suEXEC

Help secure your web applications with mod_ssl and suEXEC.

Web server security is a very important issue these days, especially since people are always finding new and creative ways to put the Web to use. If you’re using any sort of web application that needs to handle authentication or provides some sort of restricted information, you should seriously consider installing a web server with SSL capabilities. Without SSL, any authentication information your users send to the web server is sent over the network in the clear, and any information that clients can access can be viewed by anyone with a sniffer. If you are already using Apache, you can easily add SSL capabilities with mod_ssl (http://www.modssl.org).

In addition, if your web server serves up dynamic content for multiple users, you may want to enable Apache’s suEXEC functionality. suEXEC allows your web server to execute server-side scripts as the user that owns them, rather than as the account under which the web server is running. Otherwise, any user could create a script and run code as the account the web server is running under. This is a bad thing, particularly on a multiuser web server. If you don’t review the scripts that your users write before allowing them to be run, they could very well write code that allows them to access other users’ data or other sensitive information, such as database accounts and passwords.

To compile Apache with mod_ssl, download the appropriate mod_ssl source distribution for the version of Apache that you’ll be using. (If you don’t want to add mod_ssl to an existing Apache source tree, you will also need to download and unpack the Apache source.) After you’ve done that, unpack the mod_ssl distribution and go into the directory that it created. Then run a command like this:

# ./configure \
--with-apache=../apache_1.3.29 \
--with-ssl=SYSTEM \
--prefix=/usr/local/apache \
--enable-module=most \
--enable-module=mmap_static \
--enable-module=so \
--enable-shared=ssl \
--disable-rule=SSL_COMPAT \
--server-uid=www \
--server-gid=www \
--enable-suexec \
--suexec-caller=www \
--suexec-uidmin=500 \
--suexec-gidmin=500

This will both patch the Apache source tree with extensions provided with mod_ssl and configure Apache for the build process.

You will probably need to change a number of options in order to build Apache. The directory specified in the --with-apache switch should point to the directory that contains the Apache source code for the version that you are building. In addition, if you want to use a version of OpenSSL that has not been installed yet, specify the location of its build tree with the --with-ssl switch. If you elect to do that, you should configure and build OpenSSL in the specified directory before attempting to build Apache and mod_ssl. The --server-uid and --server-gid switches are used to specify what user and group the web server will run under. Apache defaults to the “nobody” account. However, many programs that can be configured to drop their privileges also default to the nobody account; if you end up accepting these defaults with every program, the nobody account can become quite privileged. So, it is recommended that you create a separate account for every program that provides this option.
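
If you don’t already have such an account, a minimal sketch of creating a dedicated www user and group on a Linux system might look like this (the UID/GID of 80 and the nologin shell path are illustrative; pick values that suit your system):

# groupadd -g 80 www
# useradd -u 80 -g www -d /usr/local/apache/htdocs -s /sbin/nologin www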

The remaining options enable and configure Apache’s suEXEC support. To provide this functionality, Apache uses a SUID wrapper program to execute users’ scripts. The wrapper makes several checks before it will allow a program to execute. One thing it checks is the UID of the process that invoked it: if that is not the account specified with the --suexec-caller option, execution of the user’s script is aborted. Since the suEXEC wrapper is called by the web server, this option should be set to the same value as --server-uid. Additionally, since privileged accounts and groups usually have UIDs and GIDs below a certain value, the wrapper also refuses to run a script as any user or group whose UID or GID falls below the thresholds set with --suexec-uidmin and --suexec-gidmin, so you must specify values appropriate for your system. In this example, Apache and mod_ssl are being built on a Red Hat system, which starts regular user accounts and groups at UID and GID 500. In addition to these checks, suEXEC performs a multitude of others, such as ensuring that the script is writable only by its owner, that the owner is not root, and that the script is not SUID or SGID.

After the configure script completes, change to the directory that contains the Apache source code and run make and make install. You can run make certificate if you would like to generate an SSL certificate to test out your installation. You can also run make certificate TYPE=custom to generate a certificate-signing request to be signed by either a commercial Certificate Authority or your own CA. See [Hack #45] if you would like to run your own Certificate Authority.
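
Assuming the directory layout from the configure example above, the remaining build steps might look like this (a sketch; adjust the path to match where you unpacked Apache):

# cd ../apache_1.3.29
# make
# make certificate
# make install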

After installing Apache, you can start it by running this command:

# /usr/local/apache/bin/apachectl startssl

If you want to start out by testing it without SSL, run this:

# /usr/local/apache/bin/apachectl start

You can then verify that suEXEC support is enabled by running this command:

# grep suexec /usr/local/apache/logs/error_log

[Thu Jan 1 16:48:17 2004] [notice] suEXEC mechanism enabled (wrapper:
/usr/local/apache/bin/suexec)

Now add a Directory entry similar to this to enable CGI scripts for user directories:

<Directory /home/*/public_html>
    AllowOverride FileInfo AuthConfig Limit
    Options MultiViews Indexes SymLinksIfOwnerMatch Includes ExecCGI
    <Limit GET POST OPTIONS PROPFIND>
        Order allow,deny
        Allow from all
    </Limit>
    <LimitExcept GET POST OPTIONS PROPFIND>
        Order deny,allow
        Deny from all
    </LimitExcept>
</Directory>

In addition, add this line to enable CGI scripts outside of the ScriptAlias directories:

AddHandler cgi-script .cgi

After you’ve done that, you can restart Apache by running this:

# /usr/local/apache/bin/apachectl restart

Now test out suEXEC with a simple script that runs the id command, which will print out information about the user the script is executed as:

#!/bin/sh
echo -e "Content-Type: text/plain\r\n\r\n"
/usr/sbin/id

Put this script in a directory such as /usr/local/apache/cgi-bin, name it suexec-test.cgi, and make it executable. Now enter the URL for the script (i.e., http://webserver/cgi-bin/suexec-test.cgi) into your favorite web browser. You should see something like this:

uid=80(www) gid=80(www) groups=80(www)

As you can see, it is being executed as the same user that the web server runs as.

Now copy the script into a user’s public_html directory:

$ mkdir ~/public_html && chmod 711 ~/ ~/public_html
$ cp /usr/local/apache/cgi-bin/suexec-test.cgi ~/public_html/

After you’ve done that, enter the URL for the script (i.e., http://webserver/~user/suexec-test.cgi) in your web browser. You should see something similar to this:

uid=500(andrew) gid=500(andrew) groups=500(andrew)

In addition to handling scripts in users’ private HTML directories, suEXEC can also execute scripts as another user within a virtual host. However, to do this, you will need to create all of your virtual host’s directories beneath the web server’s document root (i.e., /usr/local/apache/htdocs). When doing this, you can configure what user and group the script will execute as by using the User and Group configuration directives within the VirtualHost statement.

For example:

<VirtualHost>
    User myuser
    Group mygroup
    DocumentRoot /usr/local/apache/htdocs/mysite
</VirtualHost>

Unfortunately, suEXEC is incompatible with mod_perl and mod_php, because those modules run within the Apache process itself rather than as a separate program. Since the Apache process runs as a nonroot user, it cannot change the UID under which the scripts execute. suEXEC works by having Apache call a special SUID wrapper (e.g., /usr/local/apache/bin/suexec) that can only be invoked by Apache processes. If you care to make the security/performance trade-off of using suEXEC but still need to run Perl scripts, you can do so through the standard CGI interface. Just as with Perl, you can also run PHP programs through the CGI interface, but you’ll have to create a standalone php binary and specify it as the interpreter in all the PHP scripts you wish to execute through suEXEC. You can also execute your scripts through mod_perl or mod_php by locating them outside the directories where suEXEC operates.
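
For example, a PHP script meant to run through suEXEC might look like the following (a sketch; /usr/local/bin/php is an assumed path to a CGI-capable php binary, and the script must be owned by the user and marked executable, just like the shell example above):

#!/usr/local/bin/php
<?php
// Show which user this script runs as; under suEXEC it should be the file's owner
system("/usr/sbin/id");
?>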


OS Fingerprint protection

Keep outsiders on a need-to-know basis regarding your operating systems.

When performing network reconnaissance, one very valuable piece of information for would-be attackers is the operating system running on each system discovered in their scans. From an attacker’s point of view, this is very helpful in figuring out what vulnerabilities the system might have or which exploits may work on a system. Combined with the knowledge of open ports found during a port-scan, this information can be devastating. After all, an RPC exploit for SPARC Solaris isn’t very likely to work for x86 Linux—the code for the portmap daemon isn’t common to both systems, and they have different processor architectures. Armed with the knowledge of a given server’s platform, attackers can very efficiently try the techniques most likely to grant them further access without wasting time on exploits that cannot work.

Traditionally, individuals performing network reconnaissance would simply connect to any services detected by their port-scan, to see which operating system the remote system is running. This works because many daemons, such as Sendmail, Telnet, and even FTP, readily announce the underlying operating system, as well as their own version numbers. Even though this method is easy and straightforward, it is now seen as intrusive since it’s easy to spot someone connecting in the system log files. Additionally, most services can be configured not to disclose this sensitive information. In response, more sophisticated methods were developed that do not require a full connection to the target system to determine which operating system it is running. These methods rely on the eccentricities of the host operating system’s TCP/IP stack and its behavior when responding to certain types of packets. Since individual operating systems respond to these packets in a particular way, it is possible to make a very good guess at what OS a particular server is running based on how it responds to probe packets, which normally don’t show up in log files. Luckily, such probe packets can be blocked at the firewall to circumvent any operating system detection attempts that deploy methods like this.

One popular tool that employs such OS detection methods is Nmap (http://www.insecure.org/nmap/), which not only allows you to detect the operating system running on a remote system, but also perform various types of port-scans.

Attempting to detect an operating system with Nmap is as simple as running it with the -O switch. Here are the results of scanning an OpenBSD 3.3 system:

# nmap -O puffy

Starting nmap 3.48 ( http://www.insecure.org/nmap/ ) at 2003-12-02 19:14 MST

Interesting ports on puffy (192.168.0.42):

(The 1653 ports scanned but not shown below are in state: closed)

PORT STATE SERVICE

13/tcp open daytime

22/tcp open ssh

37/tcp open time

113/tcp open auth

Device type: general purpose

Running: OpenBSD 3.X

OS details: OpenBSD 3.0 or 3.3

Nmap run completed — 1 IP address (1 host up) scanned in 24.873 seconds

To thwart Nmap’s efforts, we can employ firewall rules that block the packets used for operating-system probes. These are fairly easy to spot, since several of them have invalid combinations of TCP flags. Some of Nmap’s tests cannot be caught by simple block rules, however, because they rely on TCP options, which PF cannot match on; those probes are only stopped if stateful filtering and a default deny policy have been implemented in the ruleset.

To block these fingerprinting attempts with OpenBSD’s PF, we can put rules similar to these in our /etc/pf.conf:

set block-policy return
block in log quick proto tcp flags FUP/WEUAPRSF
block in log quick proto tcp flags WEUAPRSF/WEUAPRSF
block in log quick proto tcp flags SRAFU/WEUAPRSF
block in log quick proto tcp flags /WEUAPRSF
block in log quick proto tcp flags SR/SR
block in log quick proto tcp flags SF/SF

This also has the side effect of logging any attempts to the pflog0 interface. Even if we can’t block all of Nmap’s tests, we can at least log some of the more unique attempts, and possibly confuse it by providing an incomplete picture of our operating system’s TCP stack behavior. Packets that have triggered these rules can be viewed with tcpdump by running the following commands:

# ifconfig pflog0 up

# tcpdump -n -i pflog0

Now let’s look at the results of an Nmap scan after enabling these rules:

# nmap -O puffy

Starting nmap 3.48 ( http://www.insecure.org/nmap/ ) at 2003-12-02 22:56 MST

Interesting ports on puffy (192.168.0.42):

(The 1653 ports scanned but not shown below are in state: closed)

PORT STATE SERVICE

13/tcp open daytime

22/tcp open ssh

37/tcp open time

113/tcp open auth

No exact OS matches for host (If you know what OS is running on it, see

http://www.insecure.org/cgi-bin/nmap-submit.cgi).

TCP/IP fingerprint:

SInfo(V=3.48%P=i686-pc-linux-gnu%D=12/2%Time=3FCD7B3F%O=13%C=1)

TSeq(Class=TR%IPID=RD%TS=2HZ)

T1(Resp=Y%DF=Y%W=403D%ACK=S++%Flags=AS%Ops=MNWNNT)

T2(Resp=Y%DF=Y%W=0%ACK=S%Flags=AR%Ops=)

T3(Resp=Y%DF=Y%W=0%ACK=O%Flags=AR%Ops=)

T4(Resp=Y%DF=Y%W=4000%ACK=O%Flags=R%Ops=)

T5(Resp=Y%DF=Y%W=0%ACK=S++%Flags=AR%Ops=)

T6(Resp=Y%DF=Y%W=0%ACK=O%Flags=R%Ops=)

T7(Resp=Y%DF=Y%W=0%ACK=S++%Flags=AR%Ops=)

PU(Resp=Y%DF=N%TOS=0%IPLEN=38%RIPTL=134%RID=E%RIPCK=F%UCK=E%ULEN=134%DAT=E)

Nmap run completed — 1 IP address (1 host up) scanned in 27.028 seconds

As you can see, this time the attempt was unsuccessful. But if you are feeling particularly devious, simply confusing Nmap attempts may not be enough. What if you want to actually trick would-be attackers into believing that a server is running a different OS entirely? For example, this could be useful when setting up a honeypot [Hack #94] to attract miscreants away from your critical servers. If this sounds like fun to you, read on.


Installing Nessus

# lynx -source http://install.nessus.org | sh

——————————————————————————–
NESSUS INSTALLATION SCRIPT
——————————————————————————–

This script will retrieve the latest version of Nessus via CVS, and
will compile and install it on your system.

To run this script, you must know the root password of this host
and you need to be able to establish outgoing connections to port
2401/tcp or 80/tcp (through a proxy or directly)

Press a key to continue <ENTER>

——————————————————————————–
Nessus installation : installation location
——————————————————————————–

Where do you want the whole Nessus package to be installed ?
[/usr/local] <ENTER>

——————————————————————————–
Nessus installation : branch selection
——————————————————————————–

Nessus is currently made up of two branches:
– the STABLE branch is Nessus 2.0.x. It is now considered as being
bug-free.

– the DEVEL branch is Nessus 2.1x. It is considered as a being in
developement, and therefore may prove to be unstable.

Which branch do you wish to install (STABLE or DEVEL) ?
[STABLE] <ENTER>

——————————————————————————–
Nessus installation : download method
——————————————————————————–

There are two ways to download Nessus :
. From cvs, the download will be slower but you’ll have the latest version
. From www, the download will be faster, but you may not get the nightly
changes. However, www is updated every 24 hours

Which download method do you want ? (cvs or www) [www] <ENTER>

——————————————————————————–
Nessus installation : final step
——————————————————————————–

Nessus will now be installed on this host. The packages will first be
downloaded from nessus.org, then they will be compiled and installed

Press a key to continue <ENTER>

Are you behind a web proxy ? [y/n] n <ENTER>

– Now it downloads and installs the software

——————————————————————————–
Nessus installation : Finished
——————————————————————————–

Nessus is now installed on this host
. Create a certificate for nessusd using /usr/local/sbin/nessus-mkcert
. Add a user by typing /usr/local/sbin/nessus-adduser
. Then start nessusd by typing /usr/local/sbin/nessusd -D

. Remember to invoke 'nessus-update-plugins' periodically to update your plugins

Press a key to quit <ENTER>

Then run: /usr/local/sbin/nessus-mkcert

——————————————————————————-
Creation of the Nessus SSL Certificate
——————————————————————————-

This script will now ask you the relevant information to create the SSL
certificate of Nessus. Note that this information will *NOT* be sent to
anybody (everything stays local), but anyone with the ability to connect to your
Nessus daemon will be able to retrieve this information.

CA certificate life time in days [1460]:
Server certificate life time in days [365]:
Your country (two letter code) [FR]: DK
Your state or province name [none]: Denmark
Your location (e.g. town) [Paris]: Stoholm
Your organization [Nessus Users United]: Unifix Security

——————————————————————————-
Creation of the Nessus SSL Certificate
——————————————————————————-

Congratulations. Your server certificate was properly created.

/usr/local/etc/nessus/nessusd.conf updated

The following files were created :

. Certification authority :
Certificate = /usr/local/com/nessus/CA/cacert.pem
Private key = /usr/local/var/nessus/CA/cakey.pem

. Nessus Server :
Certificate = /usr/local/com/nessus/CA/servercert.pem
Private key = /usr/local/var/nessus/CA/serverkey.pem

Press [ENTER] to exit

– Create as many users as needed!

localhost root # /usr/local/sbin/nessus-adduser
Using /var/tmp as a temporary file holder

Add a new nessusd user
———————-

Login : mike
Authentication (pass/cert) [pass] : <enter>
Login password : <password>
Login password (again) : <password>

User rules
———-
nessusd has a rules system which allows you to restrict the hosts
that mike has the right to test. For instance, you may want
him to be able to scan his own host only.

Please see the nessus-adduser(8) man page for the rules syntax

Enter the rules for this user, and hit ctrl-D once you are done :
(the user can have an empty rules set)
<CTRL-D>

Login : mike
Password : ********
DN :
Rules :

Is that ok ? (y/n) [y] y
user added.

Finally, start the daemon:
localhost root # /usr/local/sbin/nessusd -D

Then run nessus from a client host and connect it to the daemon.


Set up TLS-enabled SMTP (encryption)

Protect your users’ in-transit email from eavesdroppers.

If you have set up encrypted POP and IMAP services [Hack #47], your users’ incoming email is protected from others once it reaches your servers, but what about their outgoing email? You can protect outgoing email quickly and easily by setting up your MTA to use Transport Layer Security (TLS) encryption. Virtually all modern email clients support TLS—enable it by simply checking a box in the email account preferences.

If you're using Sendmail, you can check whether it has TLS support compiled in by running this command:

$ sendmail -bt -d0.1

This will print out the options that your sendmail binary was compiled with. If you see a line that says STARTTLS, then all you need to do is supply some additional configuration information to get TLS support working. However, if you don’t see this line, you’ll need to recompile sendmail.
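
For example, the following one-liner prints the relevant line if TLS support is present (a quick sketch; -bt test mode simply exits when its input is closed):

$ sendmail -bt -d0.1 < /dev/null | grep STARTTLS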

Before recompiling sendmail, you will need to go into the directory containing sendmail’s source code and add the following lines to devtools/Site/site.config.m4:

APPENDDEF(`conf_sendmail_ENVDEF', `-DSTARTTLS')
APPENDDEF(`conf_sendmail_LIBS', `-lssl -lcrypto')

If this file doesn’t exist, simply create it. The build process will automatically include the file once you create it. The first line in the example will cause TLS support to be compiled into the sendmail binary, and the second line will link the binary with libssl.so and libcrypto.so.

After adding these lines, you can recompile and reinstall sendmail by running this command:

# ./Build -c && ./Build install

After you’ve done this, you will need to create a certificate and key pair to use with sendmail [Hack #45] . Then you’ll need to reconfigure sendmail to use the certificate and key that you created. You can do this by editing the file your sendmail.cf file is generated from, which is usually /etc/mail/sendmail.mc. Once you’ve located the file, add lines, similar to the following, that point to your Certificate Authority’s certificate as well as the certificate and key you generated earlier:

define(`confCACERT_PATH', `/etc/mail/certs')
define(`confCACERT', `/etc/mail/certs/cacert.pem')
define(`confSERVER_CERT', `/etc/mail/certs/cert.pem')
define(`confSERVER_KEY', `/etc/mail/certs/key.pem')
define(`confCLIENT_CERT', `/etc/mail/certs/cert.pem')
define(`confCLIENT_KEY', `/etc/mail/certs/key.pem')

The first line tells sendmail where your Certificate Authority is located, and the second one tells it where to find the CA certificate itself. The next two lines tell sendmail which certificate and key to use when it is acting as a server (i.e., accepting mail from a MUA or another mail server). The last two lines tell sendmail which certificate and key to use when it is acting as a client (i.e., relaying mail to another mail server). Usually you can then rebuild your sendmail.cf by typing make sendmail.cf while inside the /etc/mail directory. Now kill sendmail and then restart it.
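
A sketch of that rebuild-and-restart cycle (the m4 fallback, the PID file location, and the queue interval are assumptions that vary between systems):

# cd /etc/mail && make sendmail.cf      # or: m4 sendmail.mc > sendmail.cf
# kill `head -1 /var/run/sendmail.pid`
# /usr/sbin/sendmail -bd -q30m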

After you’ve restarted sendmail, you can check whether TLS is set up correctly by connecting to it:

# telnet localhost smtp

Trying 127.0.0.1…

Connected to localhost.

Escape character is ‘^]’.

220 mail.example.com ESMTP Sendmail 8.12.9/8.12.9; Sun, 11 Jan 2004 12:07:43 -0800 (PST)

ehlo localhost

250-mail.example.com Hello IDENT:6l4ZhaGP3Qczqknqm/KdTFGsrBe2SCYC@localhost [127.0.0.1], pleased to meet you

250-ENHANCEDSTATUSCODES

250-PIPELINING

250-EXPN

250-VERB

250-8BITMIME

250-SIZE

250-DSN

250-ETRN

250-AUTH DIGEST-MD5 CRAM-MD5

250-STARTTLS

250-DELIVERBY

250 HELP

QUIT

221 2.0.0 mail.example.com closing connection

Connection closed by foreign host.
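
To exercise the TLS handshake itself, rather than just confirming that STARTTLS is advertised, you can use OpenSSL's built-in client (a quick sketch):

$ openssl s_client -starttls smtp -connect localhost:25

If the negotiation succeeds, s_client prints the server's certificate chain and you can continue the SMTP dialog over the encrypted channel.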

When sendmail relays mail to another TLS-enabled mail server, your mail will be encrypted. Now all you need to do is configure your mail client to use TLS when connecting to your mail server, and your users’ email will be protected all the way to the MTA.

While there isn’t enough room in this hack to cover every MTA available, nearly all support some variant of TLS. If you are running Exim (http://www.exim.org) or Courier (http://www.courier-mta.org), you can build TLS support straight out of the box. Postfix (http://www.postfix.org) has TLS support and is designed to be used in conjunction with Cyrus-SASL (see the HOWTO at http://postfix.state-of-mind.de/patrick.koetter/smtpauth/). Qmail has an RFC 2487 (TLS) patch available at http://inoa.net/qmail-tls/. With TLS support in virtually all MTAs and email clients, there is no longer any good reason to send email “in the clear.”


Encrypt IMAP and POP with SSL

Keep your email safe from prying eyes while also protecting your POP and IMAP passwords.

Having your email available on an IMAP server is invaluable when you have to access your email from multiple locations. Unlike POP, IMAP stores all your email and any folders you create on the server, so you can access all of your email from whatever email client you decide to use. You can even set up a web-based email client so that messages can be accessed from literally any machine with an Internet connection and a web browser. But more than likely, you will need to cross untrusted networks along the way. How do you protect your email account password and email from others with less than desirable intentions? You use encryption, of course!

If you already have an IMAP or POP daemon installed that does not have the ability to use SSL natively, you can use stunnel [Hack #76] to wrap the service in an SSL tunnel. If you’re starting from scratch, you have the luxury of choosing a daemon that has SSL support compiled directly into the binary.

One daemon that supports SSL out of the box is the University of Washington’s IMAP daemon, otherwise known as UW-IMAP (http://www.washington.edu/imap/). A POP daemon (ipopd) is included in the same software distribution.

To compile and install the IMAP daemon, download the compressed tar archive and run commands similar to these:

$ tar xfz imap.tar.Z
$ cd imap-2002e

$ make lnp SSLDIR=/usr SSLCERTS=/usr/share/ssl/certs

The Makefile target specifies what type of system you are building for. In this case, lnp stands for Linux-PAM. Other popular Makefile targets are bsf for FreeBSD, bso for OpenBSD, osx for Mac OS X, sol for Solaris, and gso for Solaris with GCC. The SSLDIR variable is used to set the base directory for your OpenSSL installation. By default, the Makefile is set to use /usr/local/ssl, which would cause it to look for the libraries in /usr/local/ssl/lib and the headers in /usr/local/ssl/include. If a version of OpenSSL came installed with your operating system and you want to use that, you will most likely need to use SSLDIR=/usr as shown in the example. The SSLCERTS variable tells imapd and ipopd where to find their SSL certificates.

If the compile aborts due to errors, look for a message similar to this:

In file included from /usr/include/openssl/ssl.h:179,

from osdep.c:218:

/usr/include/openssl/kssl.h:72:18: krb5.h: No such file or directory

In file included from /usr/include/openssl/ssl.h:179,

from osdep.c:218:

This means that the compiler cannot find the Kerberos header files, a known issue with newer versions of Red Hat Linux. This happens because the files are located in /usr/kerberos/include, which is a nonstandard directory on the system.

To tell the compiler where to find the headers, use the EXTRACFLAGS variable. The make command from the previous example will now look like this:

$ make lnp SSLDIR=/usr SSLCERTS=/usr/share/ssl/certs \
EXTRACFLAGS=-I/usr/kerberos/include

After the binaries have been built, become root and copy them to a suitable place:

# cp imapd/imapd ipopd/ipop3d /usr/local/bin

Next, to create self-signed certificates, run these two commands:

$ openssl req -new -x509 -nodes \
-out /usr/share/ssl/certs/imapd.pem \
-keyout /usr/share/ssl/certs/imapd.pem -days 3650
$ openssl req -new -x509 -nodes \
-out /usr/share/ssl/certs/ipopd.pem \
-keyout /usr/share/ssl/certs/ipopd.pem -days 3650

Alternatively, you can sign the certificates with your own Certificate Authority [Hack #45] . However, if you go this route, you must change the certificates’ names to imapd.pem and ipopd.pem.

All that’s left to do now is edit your /etc/inetd.conf file so that inetd will listen on the correct ports and spawn imapd and ipopd when a client connects. To do this, add the following lines at the end of the file:

imaps stream tcp nowait root /usr/libexec/tcpd /usr/local/bin/imapd
pop3s stream tcp nowait root /usr/libexec/tcpd /usr/local/bin/ipop3d

Now tell inetd to reload its configuration:

# kill -HUP `ps -ax | grep inetd | grep -v grep | awk '{print $1}'`

That’s the final task for the server end of things. All you need to do now is configure your email clients to connect to the secure version of the service that you were using. Usually, there will be a Use Encryption, Use SSL, or some other similarly named checkbox in the incoming mail settings for your client. Just check the box, reconnect, and you should be using SSL now. Be sure your client trusts your CA cert, or you will be nagged with annoying (but important!) trust warnings.
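
If you want to verify the server side independently of any mail client, you can also check the SSL-wrapped ports directly with OpenSSL's client (a sketch; substitute your own server name):

$ openssl s_client -connect mail.example.com:993
$ openssl s_client -connect mail.example.com:995

A successful connection prints the certificate details followed by the daemon's greeting banner.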


Distribute your CA to clients

Be sure all of your clients trust your new Certificate Authority.

Once you have created a Certificate Authority (CA) [Hack #45], any certificates that are signed by your CA will be trusted by any program that trusts your CA. To establish this trust, you need to distribute your CA’s certificate to each program that needs to trust it. This could include email programs, IPSec installations, or web browsers.

Since SSL uses public key cryptography, there is no need to keep the certificate a secret. You can simply install it on a web server and download it to your clients over plain old HTTP. While the instructions for installing a CA cert are different for every program, this hack will show you a quick and easy way to install your CA on web browsers.

There are two possible formats that browsers will accept for new CA certs: pem and der. You can generate a der from your existing pem with a single openssl command:

$ openssl x509 -in demoCA/cacert.pem -outform DER -out cacert.der

Also, add the following line to the conf/mime.types file in your Apache installation:

application/x-x509-ca-cert der pem crt

Now restart Apache for the change to take effect. You should now be able to place both the cacert.der and demoCA/cacert.pem files anywhere on your web server and have clients install the new cert by simply clicking on either link.
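
If you want to double-check the DER copy before publishing it, you can have openssl dump its subject and issuer (a quick sketch):

$ openssl x509 -in cacert.der -inform DER -noout -subject -issuer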

Early versions of Netscape expected pem format, but recent versions will accept either. Internet Explorer is just the opposite (early IE would accept only der format, but recent versions will take both). Other browsers will generally accept either format.

You will get a dialog box in your browser when downloading the new Certificate Authority, asking if you’d like to continue. Accept the certificate, and that’s all there is to it. Now SSL certs that are signed by your CA will be accepted without warning the user.

Keep in mind that Certificate Authorities aren’t to be taken lightly. If you accept a new CA in your browser, you had better trust it completely—a mischievous CA manager could sign all sorts of certs that you should never trust, but your browser would never complain (since you claimed to trust the CA when you imported it). Be very careful about who you extend your trust to when using SSL-enabled browsers. It’s worth looking around in the CA cache that ships with your browser to see exactly who you trust by default.

For example, did you know that AOL/Time Warner has its own CA? How about GTE? Or VISA? CA certs for all of these entities (and many others) ship with Netscape 7.0 for Linux, and are all trusted authorities for web sites, email, and application add-ons by default. Keep this in mind when browsing to SSL-enabled sites: if any of the default authorities have signed online content, your browser will trust it without requiring operator acknowledgment.

If you value your browser’s security (and, by extension, the security of your client machine), then make it a point to review your trusted CA relationships.


Create your own certificate authority

Sign your own certificates to use in securing your network.

SSL certificates are usually thought of as being used for secure communications over the HTTP protocol. However, they are also useful in providing both a means for authentication and a means for initiating key exchange for a myriad of other services where encryption is desired, such as POP and IMAP [Hack #47], SMTP [Hack #48], IPSec (see Chapter 6), and, of course, SSL tunnels [Hack #76] . To make the best use of SSL, you will need to properly manage your own certificates.

If an SSL client needs to verify the authenticity of an SSL server, the cert used by the server needs to be signed by a Certificate Authority (CA) that is already trusted by the client. Well-known Certificate Authorities (such as Thawte and VeriSign) exist to serve as an authoritative, trusted third party for authentication. They are in the business of signing SSL certificates that are used on sites dealing with sensitive information (such as account numbers or passwords). If a site’s SSL certificate is signed by a trusted authority, then presumably it is possible to verify the identity of a server supplying that cert’s credentials. However, for anything other than e-commerce applications, a self-signed certificate is usually sufficient for gaining all of the security advantages that SSL provides. But even a self-signed cert must be signed by an authority that the client recognizes.

OpenSSL, a free SSL implementation, is perfectly capable of generating everything you need to run your own Certificate Authority. The CA.pl utility makes the process very simple.

In these examples, type the commands shown at the prompts, and enter passwords wherever appropriate (they don’t echo to the screen). To establish your new Certificate Authority, first change to the misc/ directory under wherever OpenSSL is installed (/System/Library/OpenSSL/ on Mac OS X; /usr/ssl/ or /usr/local/ssl/ on most Linux systems). Then use these commands:

$ ./CA.pl -newca

CA certificate filename (or enter to create)

Making CA certificate …

Generating a 1024 bit RSA private key

……….++++++

…………………++++++

writing new private key to ‘./demoCA/private/cakey.pem’

Enter PEM pass phrase:

Verifying – Enter PEM pass phrase:

—–

You are about to be asked to enter information that will be incorporated

into your certificate request.

What you are about to enter is what is called a Distinguished Name or a DN.

There are quite a few fields but you can leave some blank

For some fields there will be a default value,

If you enter ‘.’, the field will be left blank.

—–

Country Name (2 letter code) []:US

State or Province Name (full name) []:Colorado

Locality Name (eg, city) []:Denver

Organization Name (eg, company) []:NonExistant Enterprises

Organizational Unit Name (eg, section) []:IT Services

Common Name (eg, fully qualified host name) []:ca.nonexistantdomain.com

Email Address []:certadmin@nonexistantdomain.com

Note that you don’t necessarily need root permissions, but you will need write permissions on the current directory.

Congratulations! You’re the proud owner of your very own Certificate Authority. Take a look around:

$ ls -l demoCA/
total 16
-rw-r--r--  1 andrew  andrew  1399  3 Dec 19:52 cacert.pem
drwxr-xr-x  2 andrew  andrew    68  3 Dec 19:49 certs
drwxr-xr-x  2 andrew  andrew    68  3 Dec 19:49 crl
-rw-r--r--  1 andrew  andrew     0  3 Dec 19:49 index.txt
drwxr-xr-x  2 andrew  andrew    68  3 Dec 19:49 newcerts
drwxr-xr-x  3 andrew  andrew   102  3 Dec 19:49 private
-rw-r--r--  1 andrew  andrew     3  3 Dec 19:49 serial

The public key for your new Certificate Authority is contained in cacert.pem, and the private key is in private/cakey.pem. You can now use this private key to sign other SSL certs.

By default, CA.pl will create keys that are good for only one year. To change this behavior, edit CA.pl and change the line that reads:

$DAYS="-days 365";

Alternatively, you can forego CA.pl altogether and generate the public and private keys manually with a command like this:

$ openssl req -new -x509 -keyout cakey.pem -out cakey.pem -days 3650

This will create a key pair that is good for the next 10 years, which can of course be changed by using a different argument to the -days switch. Additionally, you should change the private key’s permissions to 600, to ensure that it is protected from being read by anyone.
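
For example (use demoCA/private/cakey.pem instead if you created the CA with CA.pl):

$ chmod 600 cakey.pem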

So far, we have only created the Certificate Authority. To actually create keys that you can use with your services, you need to create a certificate-signing request and a key. Again, this can be done easily with CA.pl. First, a certificate-signing request is created:

$ ./CA.pl -newreq-nodes

Generating a 1024 bit RSA private key

…++++++

………………………………………..++++++

writing new private key to ‘newreq.pem’

—–

You are about to be asked to enter information that will be incorporated

into your certificate request.

What you are about to enter is what is called a Distinguished Name or a DN.

There are quite a few fields but you can leave some blank

For some fields there will be a default value,

If you enter ‘.’, the field will be left blank.

—–

Country Name (2 letter code) [AU]:US

State or Province Name (full name) [Some-State]:Colorado

Locality Name (eg, city) []:Denver

Organization Name (eg, company) [Internet Widgits Pty Ltd]:NonExistant Enterprises

Organizational Unit Name (eg, section) []:IT Services

Common Name (eg, YOUR name) []:mail.nonexistantdomain.com

Email Address []:postmaster@nonexistantdomain.com

Please enter the following ‘extra’ attributes

to be sent with your certificate request

A challenge password []:

An optional company name []:NonExistant Enterprises

Request (and private key) is in newreq.pem

If you wish to encrypt the private key, you can use the -newreq switch in place of -newreq-nodes. However, if you encrypt the private key, you will have to enter the password for it each time the service that uses it is started. If you decide not to use an encrypted private key, be extremely cautious with your private key, as anyone who can obtain a copy of it can impersonate your server.

Now, to actually sign the request and generate the signed certificate:

$ ./CA.pl -sign

Using configuration from /System/Library/OpenSSL/openssl.cnf

Enter pass phrase for ./demoCA/private/cakey.pem:

Check that the request matches the signature

Signature ok

Certificate Details:

Serial Number: 1 (0x1)

Validity

Not Before: Dec 3 09:05:08 2003 GMT

Not After : Dec 3 09:05:08 2004 GMT

Subject:

countryName = US

stateOrProvinceName = Colorado

localityName = Denver

organizationName = NonExistant Enterprises

organizationalUnitName = IT Services

commonName = mail.nonexistantdomain.com

emailAddress = postmaster@nonexistantdomain.com

X509v3 extensions:

X509v3 Basic Constraints:

CA:FALSE

Netscape Comment:

OpenSSL Generated Certificate

X509v3 Subject Key Identifier:

94:0F:E9:F5:22:40:2C:71:D0:A7:5C:65:02:3E:BC:D8:DB:10:BD:88

X509v3 Authority Key Identifier:

keyid:7E:AF:2D:A4:39:37:F5:36:AE:71:2E:09:0E:49:23:70:61:28:5F:4A

DirName:/C=US/ST=Colorado/L=Denver/O=NonExistant Enterprises/OU=IT Services/

CN=Certificate Administration/emailAddress=certadmin@nonexistantdomain.com

serial:00

Certificate is to be certified until Dec 7 09:05:08 2004 GMT (365 days)

Sign the certificate? [y/n]:y

1 out of 1 certificate requests certified, commit? [y/n]y

Write out database with 1 new entries

Data Base Updated

Signed certificate is in newcert.pem
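
The signed certificate ends up in newcert.pem and, because -newreq-nodes was used, the matching unencrypted private key is still in newreq.pem. Here is a sketch of installing them where the sendmail TLS example earlier in this document expects them (the target paths are simply the ones used there; newreq.pem also contains the certificate request, which is harmless, but you can extract just the key with openssl rsa if you prefer):

# mkdir -p /etc/mail/certs
# cp newcert.pem /etc/mail/certs/cert.pem
# cp newreq.pem /etc/mail/certs/key.pem
# cp demoCA/cacert.pem /etc/mail/certs/cacert.pem
# chmod 600 /etc/mail/certs/key.pem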

Now you can set up keys in this manner for each server that needs to provide an SSL-encrypted service. It is easier to do this if you designate a single workstation to maintain the certificate authority and all the files associated with it. Don’t forget to distribute your CA cert to programs that need to trust it [Hack #46] .


Fool remote operating system detection software on Linux with iptables

Evade remote OS detection attempts by disguising your TCP/IP stack.

Another method to thwart operating system detection attempts is to modify the behavior of your system’s TCP/IP stack and make it emulate the behavior of another operating system. This may sound difficult, but can be done fairly easily in Linux by patching your kernel with code available from the IP Personality project (http://ippersonality.sourceforge.net). This code extends the kernel’s built-in firewalling system, Netfilter, as well as its user-space component, the iptables command.

To set up IP Personality, download the package that corresponds to your kernel. If you can’t find the correct one, visit the SourceForge patches page for the project (http://sourceforge.net/tracker/?group_id=7557&atid=307557), which usually has more recent kernel patches available.

To patch your kernel, unpack the IP Personality source distribution and go to the directory containing your kernel source; then run the patch command:

# cd /usr/src/linux

# patch -p1 < \
../ippersonality-20020819-2.4.19/patches/ippersonality-20020819-linux-2.4.19.diff

If you are using a patch downloaded from the patches page, just substitute it in your patch command. To verify that the patch has been applied correctly, you can run this command:

# find ./ -name \*.rej

If the patch was applied correctly, this command should not find any files.

Now that the kernel is patched, you will need to configure the kernel for IP Personality support. As mentioned in [Hack #13], running make xconfig, make menuconfig, or even make config while you are in the kernel source’s directory will allow you to configure your kernel. Regardless of the method you choose, the menu options will remain the same.

First, be sure that “Prompt for development and/or incomplete code/drivers” is enabled under “Code maturity level options”. Under Networking Options, find and enable the option for Netfilter Configuration.

The list displayed by make xconfig is shown in Figure 3-7. Find the option labeled “IP Personality Support”, and either select y to statically compile it into your kernel or select m to build it as a dynamically loaded module.
Figure 3-7. Enable IP Personality Support

After you have configured in support for IP Personality, save your configuration. Now compile the kernel and modules, and install them by running commands similar to these:

# make dep && make clean

# make bzImage && make modules

# cp arch/i386/boot/bzImage /boot/vmlinuz

# make modules_install

Now reboot with your new kernel. In addition to patching your kernel, you’ll also need to patch the user-space portion of Netfilter, the iptables command. To do this, go to the Netfilter web site (http://www.netfilter.org) and download the version specified by the patch that came with your IP Personality package. For instance, the iptables patch included in ippersonality-20020819-2.4.19.tar.gz is for Netfilter Version 1.2.2.

After downloading the proper version and unpacking it, you will need to patch it with the patch included in the IP Personality package. Then build and install it in the normal way:

# tar xfj iptables-1.2.2.tar.bz2

# cd iptables-1.2.2

# patch -p1 < \
../ippersonality-20020819-2.4.19/patches/ippersonality-20020427-iptables-1.2.2.diff

patching file pers/Makefile

patching file pers/example.conf

patching file pers/libipt_PERS.c

patching file pers/pers.h

patching file pers/pers.l

patching file pers/pers.y

patching file pers/pers_asm.c

patching file pers/perscc.c

# make KERNEL_DIR=/usr/src/linux && make install

This will install the modified iptables command, its supporting libraries, and the manpage under the /usr/local hierarchy. If you would like to change the default installation directories, you can edit the Makefile and change the values of the BINDIR, LIBDIR, MANDIR, and INCDIR macros. Be sure to set KERNEL_DIR to the directory containing the kernel sources you built earlier.

If you are using Red Hat Linux, you can replace the iptables command that is installed by changing the macros to these values:

LIBDIR:=/lib
BINDIR:=/sbin
MANDIR:=/usr/share/man
INCDIR:=/usr/include

In addition to running make install, you may also want to create a directory for the operating system personality configuration files. These files are located in the samples/ directory within the IP Personality distribution. For example, you could create a directory called /etc/personalities and copy them there.
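
A minimal sketch of that step (assuming you are in the unpacked IP Personality source directory and that the sample files use a .conf extension):

# mkdir /etc/personalities
# cp samples/*.conf /etc/personalities/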

Before setting up IP Personality, try running Nmap against the machine to see which operating system it detects:

# nmap -O colossus

Starting nmap 3.48 ( http://www.insecure.org/nmap/ ) at 2003-12-12 18:36 MST

Interesting ports on colossus (192.168.0.64):

(The 1651 ports scanned but not shown below are in state: closed)

PORT STATE SERVICE

22/tcp open ssh

25/tcp open smtp

111/tcp open rpcbind

139/tcp open netbios-ssn

505/tcp open mailbox-lm

631/tcp open ipp

Device type: general purpose

Running: Linux 2.4.X|2.5.X

OS details: Linux Kernel 2.4.0 – 2.5.20

Uptime 3.095 days (since Tue Dec 9 16:19:55 2003)

Nmap run completed — 1 IP address (1 host up) scanned in 7.375 seconds

If your machine has an IP address of 192.168.0.64 and you want it to pretend that it’s running Mac OS 9, you can run iptables commands like these:

# iptables -t mangle -A PREROUTING -d 192.168.0.64 -j PERS \
--tweak dst --local --conf /etc/personalities/macos9.conf
# iptables -t mangle -A OUTPUT -s 192.168.0.64 -j PERS \
--tweak src --local --conf /etc/personalities/macos9.conf

Now run Nmap again:

# nmap -O colossus

Starting nmap 3.48 ( http://www.insecure.org/nmap/ ) at 2003-12-12 18:47 MST

Interesting ports on colossus (192.168.0.64):

(The 1651 ports scanned but not shown below are in state: closed)

PORT STATE SERVICE

22/tcp open ssh

25/tcp open smtp

111/tcp open rpcbind

139/tcp open netbios-ssn

505/tcp open mailbox-lm

631/tcp open ipp

Device type: general purpose

Running: Apple Mac OS 9.X

OS details: Apple Mac OS 9 – 9.1

Uptime 3.095 days (since Tue Dec 9 16:19:55 2003)

Nmap run completed — 1 IP address (1 host up) scanned in 5.274 seconds

You can of course emulate other operating systems that aren’t provided with the IP Personality package. All you need is a copy of Nmap’s operating system fingerprints file, nmap-os-fingerprints, and then you can construct your own IP Personality configuration file for any operating system Nmap knows about.


MAC Filtering with iptables

Keep unwanted machines off your network with MAC address whitelisting.

Media Access Control (MAC) address filtering is a well-known method for protecting wireless networks. This type of filtering works on the default deny principle: you specify the hosts that are allowed to connect, and everything else is denied. MAC addresses are unique 48-bit numbers assigned to every Ethernet device ever manufactured, including 802.11 devices, and are usually written as six two-digit hexadecimal octets separated by colons.

In addition to IP packet filtering, Linux’s Netfilter also provides MAC address filtering functionality. While many of the wireless access points on the market today already support this, there are many older ones that do not. MAC filtering is also important if your access point is actually the Linux machine itself, using a wireless card. If you have a Linux-based firewall already set up, it’s a trivial modification to enable it to filter at the MAC level. MAC address filtering with iptables is very much like IP-based filtering, and is just as easy to do.

This example demonstrates how to allow a particular MAC address if your firewall policy is set to DROP [Hack #33] :

iptables -A FORWARD -m state --state NEW \
-m mac --mac-source 00:DE:AD:BE:EF:00 -j ACCEPT

This command will allow any traffic sent from the network interface with the address 00:DE:AD:BE:EF:00. Using rules like this one along with a default deny policy enables you to create a whitelist of the MAC addresses that you want to allow through your gateway. To create a blacklist, you can instead employ a default accept policy and change the MAC address matching rule’s target to DROP.
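
For instance, a blacklist entry under a default ACCEPT policy might look like this (a sketch; the MAC address is, of course, a placeholder):

iptables -A FORWARD -m mac --mac-source 00:DE:AD:BE:EF:00 -j DROP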

This is all pretty straightforward if you already know the MAC addresses for which you want to create rules, but what if you don’t? If you have access to the system, you can find out the MAC address of an interface by using the ifconfig command:

$ ifconfig eth0

eth0 Link encap:Ethernet HWaddr 00:0C:29:E2:2B:C1

inet addr:192.168.0.41 Bcast:192.168.0.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:132893 errors:0 dropped:0 overruns:0 frame:0

TX packets:17007 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:100

RX bytes:46050011 (43.9 Mb) TX bytes:1601488 (1.5 Mb)

Interrupt:10 Base address:0x10e0

Here you can see that the MAC address for this interface is 00:0C:29:E2:2B:C1. The output of ifconfig is somewhat different on other operating systems, but they are all similar to some degree (this output was from a Linux system).

Finding the MAC address of a system remotely is slightly more involved and can be done by using the arp and ping commands. By pinging the remote system, its IP address will be resolved to a MAC address, which can then be looked up using the arp command.

For example, to look up the MAC address that corresponds to the IP address 192.168.0.61, you could run the following commands:

$ ping -c 1 192.168.0.61

$ /sbin/arp 192.168.0.61 | awk '{print $3}'

Or you could use this very small and handy shell script:

#!/bin/sh
ping -c 1 "$1" >/dev/null && /sbin/arp "$1" | awk '{print $3}' \
| grep -v HWaddress

When implementing MAC address filtering, please be aware that it is not foolproof. Under many circumstances, it is quite trivial to change the MAC address that an interface uses by simply instructing the driver to do so. It is also possible to send out link-layer frames with forged MAC addresses by using raw link-layer sockets. Thus, MAC address filtering should be considered only an additional measure that you can use to protect your network. Treat MAC filtering more as a “Keep Out” sign than as a good deadbolt.


Authenticated Gateway with OpenBSD

Use PF to keep unauthorized users off the network.

Firewalling gateways have traditionally been used to block traffic from specific services or machines. Instead of watching IP addresses and port numbers, an authenticated gateway allows you to regulate traffic to or from machines based on a user’s credentials. With an authenticated gateway, a user will have to log in and authenticate himself to the gateway in order to gain access to the protected network. This can be useful in many situations, such as restricting Internet access or restricting a wireless segment to be used only by authorized users.

With the release of OpenBSD 3.1, you can implement this functionality through the use of PF and the authpf shell. Using authpf also provides an audit trail by logging usernames, originating IP addresses, and the time that they authenticated with the gateway, as well as when they logged off the network.

To set up authentication with authpf, you’ll first need to create an account on the gateway for each user. Specify /usr/sbin/authpf as the shell, and be sure to add authpf as a valid shell to /etc/shells. When a user logs in through SSH, authpf will obtain the user’s name and IP address through the environment. After doing this, a template file containing NAT and filter rules is read in, and the username and IP address are applied to it. The resulting rules are then added to the running configuration. When the user logs out (i.e., types ^C), the rules that were created are unloaded from the current ruleset. For user-specific rule templates, authpf looks in /etc/authpf/users/$USER/authpf.rules. Global rule templates are stored in /etc/authpf/authpf.rules. Similarly, NAT entries are stored in authpf.nat, in either of these two directories. When a user-specific template is present for the user who has just authenticated, the template completely replaces the global rules, instead of just adding to them. When loading the templates, authpf will expand the $user_ip macro to the user’s current IP address.
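
A sketch of that account setup for a user named andrew (the useradd flags are illustrative; OpenBSD's adduser script works just as well):

# echo /usr/sbin/authpf >> /etc/shells
# useradd -m -s /usr/sbin/authpf andrew
# passwd andrew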

For example, a rule template might contain:

pass in quick on wi0 proto { tcp, udp } from $user_ip to any \
keep state flags S/SA

This particular rule will pass in all traffic on the wireless interface from the newly authenticated user’s IP address. This works particularly well with a default deny policy, where only the initial SSH connection to the gateway and DNS have been allowed from the authenticating IP address.

You could be much more restrictive and allow only HTTP-, DNS-, and email-related traffic through the gateway:

pass in quick on wi0 proto tcp from $user_ip to any \
port { smtp, www, https, pop3, pop3s, imap, imaps } \
keep state flags S/SA
pass in quick on wi0 proto udp from $user_ip to any port domain

After the template files have been created, you must then provide an entry point into pf.conf for the rules that authpf will create for evaluation by PF. These entry points are added to your pf.conf with the various anchor keywords:

nat-anchor authpf
rdr-anchor authpf
binat-anchor authpf
anchor authpf

Note that each anchor point needs to be added to the section it applies to—you cannot just put them all at the end or beginning of your pf.conf. Thus the nat-anchor, rdr-anchor, and binat-anchor entries must go into the address translation section of the pf.conf. Likewise, the anchor entry, which applies only to filtering rules, should be added to the filtering section.

When a user logs into the gateway, he should now be presented with a message like this:

Hello andrew, You are authenticated from host “192.168.0.61”

The user will also see the contents of /etc/authpf/authpf.message if it exists and is readable.

If you examine /var/log/daemon, you should also see log messages similar to these for when a user logs in and out:

Dec 3 22:36:31 zul authpf[15058]: allowing 192.168.0.61, user andrew
Dec 3 22:47:21 zul authpf[15058]: removed 192.168.0.61, user andrew - duration 650 seconds

Note that since it is present in /etc/shells, any user that has a local account is capable of changing his shell to authpf. If you want to ensure that the user cannot do this, you can create a file named after his username and put it in the /etc/authpf/banned directory. The contents of this file will be displayed when he logs into the gateway. On the other hand, you can also explicitly allow users by listing their usernames, one per line, in /etc/authpf/authpf.allow. However, any bans that have been specified in /etc/authpf/banned take precedence over entries in authpf.allow.
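
For example (a sketch; baduser and andrew are placeholder usernames):

# echo "Your network access has been suspended. Contact the helpdesk." > /etc/authpf/banned/baduser
# echo andrew >> /etc/authpf/authpf.allow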

Since authpf relies on the SSH session to determine when the rules pertaining to a particular user are to be unloaded, care should be taken in configuring your SSH daemon to time out connections. Timeouts should happen fairly quickly, to revoke access as soon as possible once a connection has gone stale. This also helps prevent connections to systems outside the gateway from being held open by those conducting ARP spoof attacks.

You can set up OpenSSH to guard against this by adding these two lines to your sshd_config:

ClientAliveInterval 15
ClientAliveCountMax 3

This will ensure that the SSH daemon will send a request for a client response 15 seconds after it has received no data from the client. The ClientAliveCountMax option specifies that this can happen three times without a response before the client is disconnected. Thus, after a client has become unresponsive, it will be disconnected after 45 seconds. These keepalive packets are sent automatically by the SSH client software and don’t require any intervention on the part of the user.

Authpf is very powerful in its flexibility and integration with PF, OpenBSD’s native firewalling system. It is easy to set up and has very little performance overhead, since it relies on SSH and the operating system to do authentication and manage sessions.
