Snort sensors

Use SnortCenter’s easy-to-use web interface to manage your NIDS sensors.

Managing an IDS sensor and keeping track of the alerts it generates can be a daunting task, and even more so when you’re dealing with multiple sensors. One way to unify all your IDS management tasks into a single application is to use SnortCenter (http://users.pandora.be/larc/), a management system for Snort.

SnortCenter consists of a web-based console and sensor agents that run on each machine in your NIDS infrastructure, letting you unify all of your management and monitoring duties in a single application. SnortCenter has its own user authentication scheme and supports encrypted communication between the web-based management console and the individual sensor agents. This lets you securely update multiple sensors with new Snort rules, or create new rules of your own and push them out to your sensors. SnortCenter also allows you to start and stop your sensors remotely through its management interface. To monitor the alerts from your sensors, SnortCenter can integrate with ACID [Hack #83] .

To set up SnortCenter, you’ll first need to install the management console on a web server that has both PHP support and access to a MySQL database server where SnortCenter can store its configuration database. To install the management console, download the distribution from the download page (http://users.pandora.be/larc/download/) and unpack it. This will create a directory called www (so be sure not to unpack it where there’s already a www directory) containing SnortCenter’s PHP scripts, graphics, and SQL schemas. Then, copy the contents of the www directory to a suitable location within your web server’s document root.

For example:

# tar xfz snortcenter-v1.0-RC1.tar.gz

# cp -R www /var/www/htdocs/snortcenter

In order for SnortCenter to communicate with your database, you’ll need to install ADODB (http://php.weblogs.com/adodb) as well. This is a PHP package that provides database abstraction functionality. After you’ve downloaded the ADODB code, unpack it into your document root (e.g., /var/www/htdocs).

You’ll also need to install curl (http://curl.haxx.se). Download the source distribution, unpack it, and run ./configure && make install. Alternatively, it might be available with your operating system (Red Hat has a curl RPM, and *BSD includes it in the ports tree).

After that’s out of the way, you’ll need to edit SnortCenter’s config.php (e.g., /var/www/htdocs/snortcenter/config.php) and change these variables to similar values that fit your situation:

$DBlib_path = "../adodb/";

$DBtype = "mysql";

$DB_dbname = "SNORTCENTER";

$DB_host = "localhost";

$DB_port = "";

$DB_user = "snortcenter";

$DB_password = "snortcenterpass";

$hidden_key_num = 1823701983719312;

This configuration tells SnortCenter to look for the ADODB code in the adodb directory located at the same directory level as the one containing SnortCenter. It also tells SnortCenter to connect to a MySQL database called SNORTCENTER on the local machine, as the user snortcenter with the password snortcenterpass. Since it is connecting to a MySQL server on the local machine, there is no need to specify a port. If you want to connect to a database running on another system, specify 3306 for the port, which is the default used by MySQL. Set $hidden_key_num to a random number.

After you’re done editing config.php, you’ll need to create the database and user you specified and set the proper password for it:

$ mysql -u root -p mysql

Enter password:

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 27 to server version: 3.23.55

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create database SNORTCENTER;

Query OK, 1 row affected (0.01 sec)

mysql> GRANT SELECT,INSERT,UPDATE,DELETE ON SNORTCENTER.* TO \

snortcenter@localhost IDENTIFIED BY 'snortcenterpass';

Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;

Query OK, 0 rows affected (0.02 sec)

mysql> exit

Bye

Now create the database tables:

$ mysql -u root -p SNORTCENTER < snortcenter_db.mysql

Congratulations, it’s time to try out SnortCenter! To do this, go to the URL that corresponds to where you installed it within your document root (e.g., http://example.com/snortcenter/). You should see something like Figure 7-5.
Figure 7-5. The SnortCenter login page

Enter the default login/password (admin/change), and then click the Login button. After you do that, you should see a page similar to Figure 7-6.
Figure 7-6. The initial SnortCenter main page

Now that you know that the management console has been installed successfully, you can move on to installing the agent. But before doing that, you should change the password for the admin account. To do this, click on the Admin button, then click on the User Administration menu item that appears. After that, click on View Users. You should then see a page like Figure 7-7.
Figure 7-7. SnortCenter’s user listing page

Clicking on the icon to the left of the username should bring you to a page similar to Figure 7-8; here you can edit the admin account’s information, including the password.
Figure 7-8. Changing the admin account’s password and email address

Now you can go on to set up your sensor agents (really, I’m serious this time).

SnortCenter’s sensor agents are written in Perl and require the Net::SSLeay module to communicate with the management console through a secure channel. If you have Perl’s CPAN module installed, you can install Net::SSLeay easily by running the following command:

# perl -MCPAN -e "install Net::SSLeay"

To install the sensor code, you’ll first need to unpack it. This will create a directory called sensor containing all of the sensor agent code. Then copy that directory to a suitable permanent location.

For example:

# tar xfz /tmp/snortcenter-agent-v1.0-RC1.tar.gz

# cp -R sensor /usr/local/snortcenter

Next you’ll need to create an SSL certificate for the sensor. You can do this by running the following command:

# cd /usr/local/snortcenter

# mkdir conf

# openssl req -new -x509 -days 3650 -nodes \

-out conf/sensor.pem -keyout conf/sensor.pem

Alternatively, you can create a signed certificate [Hack #45] and use that.

After you’ve done that, run the sensor agent’s setup script:

# sh setup.sh

****************************************************************************

* Welcome to the SnortCenter Sensor Agent setup script, version 1.0 RC1 *

****************************************************************************

Installing Sensor in /usr/local/snortcenter …

****************************************************************************

The Sensor Agent uses separate directories for configuration files and log files.

Unless you want to place them in a other directory, you can just accept the defaults.

Config file directory [/usr/local/snortcenter/conf]:

This script will prompt you for several pieces of information, such as the sensor agent’s configuration file and log directories, the full path to the perl binary (e.g., /usr/bin/perl), as well as the location of your snort binary and rules. In addition, it will ask you questions about your operating system, what port and IP address you want the sensor agent to listen on (the default is TCP port 2525), and what IP addresses are allowed to connect to the agent. In particular, it will ask you to set a login and password that the management console will use for logging into the agent. After it has prompted you for all the information it needs, it will start the sensor agent on the port and IP address specified in the configuration file. You can now test out the sensor agent by accessing it with your web browser (be sure to use https instead of http). You should see a page similar to Figure 7-9 after entering the login information contained in the setup script.
Figure 7-9. The sensor agent direct console page

Now you can go back to the main management console and add the sensor to it. To do this, log back into the management console and select Add Sensor from the Sensor Console menu. After doing this, you should see something similar to Figure 7-10.
Figure 7-10. Adding a sensor agent

Fill in the information that you used when running the setup script and click the Save button. When the next page loads, the sensor that you just added should appear in the sensor list. You can push a basic configuration to the sensor by opening the Admin menu, then selecting the Import/Update Rules item, and then Update from Internet. After you’ve done that, go back to the sensor list by clicking View Sensors in the Sensor Consoles menu, and then click the Push hyperlink for the sensor. To start Snort on that particular sensor, click the Start link. After you’ve done that, you should see a page similar to Figure 7-11.
Figure 7-11. SnortCenter’s sensor list after starting a sensor

You can now configure your sensor by using the Sensor Config and Resources menus. Once you’ve created a configuration you’re satisfied with, you can push it to your sensor(s) by going back to the sensor list and selecting Push.


Real-time Snort monitoring: yet another GUI

Use Sguil’s advanced GUI to monitor and analyze IDS events in a timely manner.

One thing that’s crucial when analyzing your IDS events is to be able to correlate all your audit data from various sources, to determine the exact trigger for the alert and what actions should be taken. This could involve anything from simply querying a database for similar alerts to viewing TCP stream conversations. One tool to help facilitate this is Sguil (http://sguil.sourceforge.net), the Snort GUI for Lamerz. In case you’re wondering, Sguil is pronounced “sgweel” (to rhyme with “squeal”).

Sguil is a graphical analysis console written in Tcl/Tk that brings together the power of such tools as Ethereal (http://www.ethereal.com), TcpFlow (http://www.circlemud.org/~jelson/software/tcpflow/), and Snort’s portscan and TCP stream decoding processors into a single unified application, where it correlates all the data from each of these sources. Sguil uses a client/server model and is made up of three parts: a plug-in for Barnyard (op_guil), a server (sguild), and a client (sguil.tk). Agents installed on each of your NIDS sensors are used to report back information to the Sguil server. The server takes care of collecting and correlating all the data from the sensor agents, and handles information and authentication requests from the GUI clients.

Before you begin, you’ll need to download the Sguil distribution from the project’s web site and unpack it somewhere. This will create a directory that reflects the package and its version number (e.g., sguil-0.3.0).

The first step in setting up Sguil is creating a MySQL database for storing its information. You should also create a user that Sguil can use to access the database:

$ mysql -u root -p
Enter password:

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 546 to server version: 3.23.55

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> CREATE DATABASE SGUIL;

Query OK, 1 row affected (0.00 sec)

mysql> GRANT SELECT,INSERT,UPDATE,DELETE ON SGUIL.* \

TO sguil IDENTIFIED BY 'sguilpass';
Query OK, 0 rows affected (0.06 sec)
Query OK, 0 rows affected (0.06 sec)

mysql> FLUSH PRIVILEGES;

Query OK, 0 rows affected (0.06 sec)

mysql>

Now you’ll need to create Sguil’s database tables. To do this, locate the create_sguildb.sql file. It should be in the server/sql_scripts subdirectory of the directory that was created when you unpacked the Sguil distribution. You’ll need to feed this as input to the mysql command like this:

$ mysql -u root -p SGUIL < create_sguildb.sql

sguild requires several Tcl packages in order to run. The first is Tclx (http://tclx.sourceforge.net), an extension library for Tcl. The second is mysqltcl (http://www.xdobry.de/mysqltcl/). Both can be installed with the standard ./configure && make install routine.

You can verify that they were installed correctly by running the following commands:

$ tcl

tcl>package require Tclx

8.3

tcl>package require mysqltcl

2.40

tcl>

If you want to use SSL to encrypt the traffic between the GUI and the server, you will also need to install tcltls (http://sourceforge.net/projects/tls/). After installing it, you can verify that it was installed correctly by running this command:

$ tcl

tcl>package require tls

1.41

tcl>

Now you'll need to configure sguild. First, create a directory suitable for holding its configuration files (e.g., /etc/sguild). Then copy sguild.users, sguild.conf, sguild.queries, and autocat.conf to the directory that you created.

For example:

# mkdir /etc/sguild

# cd server

# cp autocat.conf sguild.conf sguild.queries \

sguild.users /etc/sguild

This assumes that you’re in the directory that was created when you unpacked the Sguil distribution. You’ll also want to copy the sguild script to somewhere more permanent, such as /usr/local/sbin or something similar.

Now edit sguild.conf and tell it how to access the database you created. If you used the database commands shown previously to create the database and user for Sguil, you would set these variables to the following values:

set DBNAME SGUIL

set DBPASS sguilpass

set DBHOST localhost

set DBPORT 3306

set DBUSER sguil

In addition, sguild requires access to the Snort rules used on each sensor in order for it to correlate the different pieces. You can tell sguild where to look for these by setting the RULESDIR variable.

For instance, the following line will tell sguild to look for rules in /etc/snort/rules:

set RULESDIR /etc/snort/rules

However, sguild needs to find rules for each sensor that it monitors here, so this is really just the base directory for the rules. When looking up rules for a specific host it will look for them in a directory corresponding to the hostname within the directory that you specified (e.g., zul’s rules would be in /etc/snort/rules/zul).
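As a sketch, creating the per-sensor rule directories for two sensors named zul and gw-ext0 (the sensor names and the rule source path are assumptions for illustration) might look like this:

```shell
# sguild looks for a sensor's rules in $RULESDIR/<sensor hostname>/
RULESDIR=/etc/snort/rules
for sensor in zul gw-ext0; do
    mkdir -p "$RULESDIR/$sensor"
    # then copy that sensor's Snort rules into place, e.g.:
    # cp /path/to/rules-for-$sensor/*.rules "$RULESDIR/$sensor/"
done
```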

Optionally, if you want to use SSL to encrypt sguild’s traffic (which you should), you’ll need to create an SSL certificate and key pair [Hack #45] . After you’ve done that, move them to /etc/sguild/certs and make sure they’re named sguild.key and sguild.pem.

Next, you’ll need to add users for accessing sguild from the Sguil GUI. To do this, use a command similar to this:

# sguild -adduser andrew

Please enter a passwd for andrew:

Retype passwd:

User 'andrew' added successfully

You can test out the server at this point by connecting to it with the GUI client. All you need to do is edit the sguil.conf file and change the SERVERHOST variable to point to the machine on which sguild is installed. In addition, if you want to use SSL, you’ll need to change the following variables to values similar to these:

set OPENSSL 1

set TLS_PATH /usr/lib/tls1.4/libtls1.4.so
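The SERVERHOST setting mentioned above might look like this (the hostname is an assumption):

```
set SERVERHOST sguilserver.example.com
```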

Now test out the client and server by running sguil.tk. After a moment you should see a login window like Figure 7-3.
Figure 7-3. The Sguil login dialog

Enter the information that you used when you created the user and click OK. After you've done that, you should see a dialog like Figure 7-4.
Figure 7-4. Sguil’s no available sensors dialog

Since you won’t have any sensors to monitor yet, click Exit.

To set up a Sguil sensor, you’ll need to patch your Snort source code. You can find the patches that you’ll need in the sensor/snort_mods/2_0/ subdirectory of the Sguil source distribution. Now change to the directory that contains the Snort source code, go to the src/preprocessors subdirectory, and patch spp_portscan.c and spp_stream4.c.

For example:

$ cd ~/snort-2.0.5/src/preprocessors

$ patch spp_portscan.c < \

~/sguil-0.3.0/sensor/snort_mods/2_0/spp_portscan_sguil.patch
patching file spp_portscan.c

$ patch spp_stream4.c < \

~/sguil-0.3.0/sensor/snort_mods/2_0/spp_stream4_sguil.patch
patching file spp_stream4.c

Hunk #9 succeeded at 988 (offset -5 lines).

Hunk #11 succeeded at 3324 (offset -5 lines).

Hunk #13 succeeded at 3674 (offset -5 lines).

Then compile Snort just as you normally would [Hack #82] . After you’ve done that, edit your snort.conf and enable the portscan and stream4 preprocessors:

preprocessor portscan: $HOME_NET 4 3 /var/log/snort/portscans gw-ext0

preprocessor stream4: detect_scans, disable_evasion_alerts, keepstats db \

/var/log/snort/ssn_logs

The first line enables the portscan preprocessor and tells it to trigger a portscan alert when connections to four different ports have been received from the same host within a three-second interval. The portscan preprocessor keeps its logs in /var/log/snort/portscans, and the last field on the line is the name of the sensor. The second line enables the stream4 preprocessor, directing it to detect stealth portscans and not to alert on overlapping TCP datagrams, and tells it to keep its session logs in /var/log/snort/ssn_logs.

You’ll also need to set up Snort to use its unified output format, so that you can use Barnyard to handle logging Snort’s alert and log events:

output alert_unified: filename snort.alert, limit 128

output log_unified: filename snort.log, limit 128

Next, create a crontab entry for the log_packets.sh script that comes with Sguil. This script starts an instance of Snort solely to log packets. This crontab line will have the script restart the Snort logging instance every hour:

00 0-23/1 * * * /usr/local/bin/log_packets.sh restart

You should also edit the variables at the beginning of the script and change them to suit your needs. These variables tell the script where to find the Snort binary (SNORT_PATH), where Snort should log packets (LOG_DIR), what interface to sniff on (INTERFACE), and what command-line options to use (OPTIONS). Pay special attention to the OPTIONS variable: this is where you tell Snort what user and group to run as, and the default won't work unless you've created a sguil user and group. In addition, you can specify which traffic not to log by setting the FILTER variable to a BPF (i.e., tcpdump-style) filter.
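For instance, the top of log_packets.sh might be edited as follows; every value shown is an assumption to adapt to your site:

```shell
# Hypothetical settings at the top of log_packets.sh
SNORT_PATH="/usr/local/bin/snort"    # where the Snort binary lives
LOG_DIR="/var/log/snort/dailylogs"   # where the logging instance writes packets
INTERFACE="eth0"                     # interface to sniff on
OPTIONS="-u sguil -g sguil"          # run as the sguil user and group
FILTER="not port 22"                 # BPF filter: don't log SSH traffic
```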

Next, you’ll need to compile and install Barnyard [Hack #92], but only run the configure step for now. After that, patch in the op_sguil output plug-in provided by Sguil. To do this, copy sensor/barnyard_mods/op_sguil.* to the output-plugins directory in the Barnyard source tree.

For instance:

$ cd ~/barnyard-0.1.0/src/output-plugins

$ cp ~/sguil-0.3.0/sensor/barnyard_mods/op_sguil.* .

Now edit the Makefile in that directory to add op_sguil.c and op_sguil.h to the libop_a_SOURCES variable, and add op_sguil.o to the libop_a_OBJECTS variable.
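After the edit, the two variables might look something like the following sketch (the ellipses stand for the entries already present in the Makefile):

```
libop_a_SOURCES = ... op_acid_db.c op_acid_db.h op_sguil.c op_sguil.h
libop_a_OBJECTS = ... op_acid_db.o op_sguil.o
```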

After you’ve done that, edit op_plugbase.c and look for a line that says:

#include "op_acid_db.h"

Add another line below it so that it becomes:

#include "op_acid_db.h"

#include "op_sguil.h"

Now look for another line like this:

AcidDbOpInit();

and add another line below it so that it looks like this:

AcidDbOpInit();

SguilOpInit();

Now run make from the current directory; when that completes, change to the top-level directory of the source distribution and run make install. To configure Barnyard to use the Sguil output plug-in, add a line similar to this one to your barnyard.conf:

output sguil: mysql, sensor_id 0, database SGUIL, server localhost, user sguil, \

password sguilpass, sguild_host localhost, sguild_port 7736

Now you can start Barnyard as you would normally. After you do that, you’ll need to set up Sguil’s sensor agent script, sensor_agent.tcl, which can be found in the sensor directory of the source distribution. Before running the script, you’ll need to edit several variables to fit your situation:

set SERVER_HOST localhost

set SERVER_PORT 7736

set HOSTNAME gw-ext0

set PORTSCAN_DIR /var/log/snort/portscans

set SSN_DIR /var/log/snort/ssn_logs

set WATCH_DIR /var/log/snort

The PORTSCAN_DIR and SSN_DIR variables should be set to where the Snort portscan and stream4 preprocessors log to.

Now all you need to do is set up xscriptd on the same system that you installed sguild on. This script is responsible for collecting the packet dumps from each sensor, pulling out the requested information, and then sending it back to the GUI client. Before running it, you’ll need to edit some variables in this script too:

set LOCALSENSOR 1

set LOCAL_LOG_DIR /var/log/snort/archive

set REMOTE_LOG_DIR /var/log/snort/dailylogs

If you’re running xscriptd on the same host as the sensor, set LOCALSENSOR to 1. Otherwise, set it to 0. The LOCAL_LOG_DIR variable sets where xscriptd will archive the data it receives when it queries the sensor, and REMOTE_LOG_DIR sets where xscriptd will look on the remote host for the packet dumps. If you’re installing xscriptd on a host other than the sensor agent, you’ll need to set up SSH client keys [Hack #73] in order for it to retrieve data from the sensors. You’ll also need to install tcpflow (http://www.circlemud.org/~jelson/software/tcpflow/) and p0f (http://www.stearns.org/p0f/) on the host that you install xscriptd on.

Now that everything’s set up, you can start sguild and xscriptd with commands similar to these:

# sguild -O /usr/lib/tls1.4/libtls1.4.so

# xscriptd -O /usr/lib/tls1.4/libtls1.4.so

If you’re not using SSL, you should omit the -O /usr/lib/tls1.4/libtls1.4.so portions of the commands. Otherwise, you should make sure that the argument to -O points to the location of libtls on your system.

Getting Sguil running isn't trivial, but it is well worth the effort. Once everything is running, you will have a very good overview of precisely what is happening on your network. Sguil presents data from many sources simultaneously, giving you a view of the big picture that is sometimes impossible to see when simply looking at your NIDS logs.


Web frontend to Snort

Use ACID to make sense of your IDS logs.

Once you have set up Snort to log information to your database [Hack #82], you may find it hard to cope with all the data it generates. Very busy and high-profile sites can generate a huge number of Snort warnings that eventually need to be tracked down. One way to alleviate the problem is to install ACID (http://acidlab.sourceforge.net).

ACID, otherwise known as the Analysis Console for Intrusion Databases, is a web-based frontend to databases that contain alerts from intrusion detection systems. It can search for alerts based on a variety of criteria, such as alert signature, time of detection, source and destination addresses and ports, and payload or flag values. ACID can display the packets that triggered the alerts and decode their layer-3 and layer-4 information. It also contains alert management features that let you group alerts by incident, delete acknowledged or false-positive alerts, email alerts, or archive them to another database. Finally, ACID provides many different statistics on the alerts in your database based on time, the sensor they were generated by, signature, and packet-related attributes such as protocol, address, or port.

To install ACID, you’ll first need a web server and a working installation of PHP (e.g., Apache and mod_php), as well as a Snort installation that has been configured to log to a database (e.g., MySQL). You will also need a couple of PHP code libraries: ADODB (http://php.weblogs.com/adodb) for database abstraction and either PHPlot (http://www.phplot.com) or JPGraph (http://www.aditus.nu/jpgraph) for graphics rendering.

After you have downloaded these packages, unpack them into a directory that can be used to execute PHP content on the web server. Next, change to the directory that was created by unpacking the ACID distribution (i.e., ./acid) and edit the acid_conf.php file. Here you will need to tell ACID where to find ADODB and JPGraph, as well as how to connect to your Snort database.

You can do this by changing these variables to similar values that fit your situation:

$Dblib_path = "../adodb";

$Dbtype = "mysql";

$alert_dbname = "SNORT";

$alert_host = "localhost";

$alert_port = "";

$alert_user = "snort";

$alert_password = "snortpass";

This tells ACID to look for the ADODB code in the adodb directory at the same directory level as the acid directory. It also tells ACID to connect to a MySQL database called SNORT on the local machine, using the user snort with the password snortpass. Since it is connecting to a MySQL server on the local machine, there is no need to specify a port number. If you want to connect to a database running on another system, you should specify 3306, which is the default port used by MySQL.

Additionally, you can configure an archive database for ACID using variables that are similar to the ones used to configure the alert database. The following variables will need to be set to use ACID’s archiving features:

$archive_dbname

$archive_host

$archive_port

$archive_user

$archive_password

To tell ACID where to find the graphing library that you want to use, you will need to set the $ChartLib_path variable. If you are using JPGraph 1.13 and have unpacked it from the same directory you unpacked the ACID distribution, you would enter something like this:

$ChartLib_path = "../jpgraph-1.13/src";

Congratulations! You’re finished mucking about in configuration files for the time being. Now open a web browser and go to the URL that corresponds to the directory where you unpacked ACID. You should then be greeted with a database setup page as shown in Figure 7-1.
Figure 7-1. The ACID database setup page

Before you can use ACID, it must create some database tables for its own use. To do this, click the Create ACID AG button. After this, you should see a screen confirming that the tables were created. In addition, you can have ACID create indexes for your events table if this was not done prior to setting up ACID. Indexes will greatly speed up queries as your events table grows, at the expense of using a little more disk space. Once you are done with the setup screen, you can click the Home link to go to the main ACID page, as seen in Figure 7-2.
Figure 7-2. ACID’s main page

ACID has a fairly intuitive user interface. The main table provides plenty of links to see many useful views of the database at a glance, such as the list of source or destination IP addresses associated with the alerts in your database, as well as the source and destination ports.


Detect intrusions with Snort

Use one of the most powerful (and free) network intrusion detection systems available to help you keep an eye on your network.

Monitoring your logs can take you only so far in detecting intrusions. If the logs are being generated by a service that has been compromised, welcome to the security admin’s worst nightmare: you can no longer trust your logs. This is where NIDS come into play. They can alert you to intrusion attempts, or even intrusions in progress.

The undisputed champion of open source NIDS is Snort (http://www.snort.org). Some of the features that make Snort so powerful are its signature-based rule engine and its easy extensibility through plug-ins and preprocessors. These features allow you to extend Snort in whichever direction you need. Consequently, you don’t have to depend on anyone else to provide you with rules when a new exploit comes to your attention: with a basic knowledge of TCP/IP, you can write your own rules quickly and easily. This is probably Snort’s most important feature, since new attacks are invented and reported all the time. Additionally, Snort has a very flexible reporting mechanism that allows you to send alerts to a syslogd, flat files, or even a database.
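For a taste of the rule language, here is a minimal illustrative rule; the message and SID are invented for this example and are not part of the stock ruleset. It alerts on TCP SYN packets arriving from outside the home network on the telnet port:

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 23 (msg:"Telnet connection attempt"; \
    flags:S; classtype:misc-activity; sid:1000001; rev:1;)
```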

To compile and install a plain-vanilla version of Snort, download the latest version and unpack it. Run the configure script and then make:

$ ./configure && make

Then become root and run:

# make install

Note that all the headers and libraries for libpcap (http://www.tcpdump.org) must be installed before you start building Snort, or else compilation will fail. Additionally, you may need to use the --with-libpcap-includes and --with-libpcap-libraries configure options to tell the compiler where it can find the libraries and headers. However, you should only need to do this if you have installed them in a nonstandard location (i.e., somewhere other than the /usr or /usr/local hierarchy).

For example, if you have installed libpcap within the /opt hierarchy, you would use this:

$ ./configure --with-libpcap-includes=/opt/include \

--with-libpcap-libraries=/opt/lib

Snort has the ability to respond to the host that has triggered one of its rules, a capability called flexible response. To enable this functionality, you'll also need to use the --enable-flexresp option, which requires the libnet packet injection library (http://www.packetfactory.net/projects/libnet/). After ensuring that this package is installed on your system, you can use the --with-libnet-includes and --with-libnet-libraries switches to specify its location.

If you want to include support for sending alerts to a database, you will need to use either the --with-mysql, --with-postgresql, or --with-oracle options. To see the full list of configure script options, run ./configure --help.

After you have installed Snort, test it out by using it in sniffer mode. You should immediately see some traffic:

# ./snort -evi eth0
Running in packet dump mode

Log directory = /var/log/snort

Initializing Network Interface eth0

–== Initializing Snort ==–

Initializing Output Plugins!

Decoding Ethernet on interface eth0

–== Initialization Complete ==–

-*> Snort! <*-

Version 2.0.5 (Build 98)

By Martin Roesch (roesch@sourcefire.com, www.snort.org)

12/14-16:25:17.874711 0:A:95:C7:2B:10 -> 0:C:29:E2:2B:C1 type:0x800 len:0x42

192.168.0.60:53179 -> 192.168.0.41:22 TCP TTL:64 TOS:0x10 ID:56177 IpLen:20 DgmLen:52 DF

***A**** Seq: 0x67E53951 Ack: 0x2BA09FF7 Win: 0xFFFF TcpLen: 32

TCP Options (3) => NOP NOP TS: 3426501948 469087

=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+

12/14-16:25:17.874828 0:C:29:E2:2B:C1 -> 0:A:95:C7:2B:10 type:0x800 len:0x252

192.168.0.41:22 -> 192.168.0.60:53179 TCP TTL:64 TOS:0x10 ID:50923 IpLen:20 DgmLen:580 DF

***AP*** Seq: 0x2BA09FF7 Ack: 0x67E53951 Win: 0x2200 TcpLen: 32

TCP Options (3) => NOP NOP TS: 469100 3426501948

=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+

Some configuration files are provided with the Snort source distribution in the etc/ directory, but they are not installed when running make install. You can create a directory to hold these in /etc or /usr/local/etc and copy the pertinent files to it by running something similar to this:

# mkdir /usr/local/etc/snort &&\

cp etc/[^Makefile]* /usr/local/etc/snort

You’ll probably want to copy the rules directory to there as well.

Now you need to edit the snort.conf file. Snort’s sample snort.conf file lists a number of variables. Some are defined with default values, and all are accompanied by comments that make this section mostly self-explanatory. Of particular note, however, are these two variables:

var HOME_NET any

var EXTERNAL_NET any

HOME_NET specifies which IP address spaces should be considered local. The default is set so that any IP address is included as part of the home network. Networks can be specified using CIDR notation (i.e., xxx.xxx.xxx.xxx/yy). You can also specify multiple subnets and IP addresses by enclosing them in brackets and separating them with commas:

var HOME_NET [10.1.1.0/24,192.168.1.0/24]

HOME_NET can also be automatically set to the network address of a particular interface by setting the variable to $eth0_ADDRESS. In this particular case, $eth0_ADDRESS sets it to the network address of eth0.
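In snort.conf, that looks like this:

```
var HOME_NET $eth0_ADDRESS
```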

The EXTERNAL_NET variable allows you to explicitly specify IP addresses and networks that are not a part of HOME_NET. Unless a subset of HOME_NET is considered hostile, you can just keep the default value, which is any.

The rest of the variables that deal with IP addresses or network ranges (DNS_SERVERS, SMTP_SERVERS, HTTP_SERVERS, SQL_SERVERS, and TELNET_SERVERS) are set to $HOME_NET by default. These variables are used within the ruleset that comes with the Snort distribution and can be used to fine-tune a rule’s behavior. For instance, rules that deal with SMTP-related attack signatures use the SMTP_SERVERS variable to filter out traffic that isn’t actually related to the rule. Fine-tuning these variables not only leads to more relevant alerts and fewer false positives, but also to higher performance.
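For instance, if the only mail server on the monitored network were 192.168.1.25 (an address made up for this example), you could narrow SMTP_SERVERS like this:

```
var SMTP_SERVERS 192.168.1.25/32
```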

Another important variable is RULE_PATH, which is used later in the configuration file to include rulesets. The sample configuration file sets it to ../rules but, to be compatible with the previous examples, this should be set to ./rules since snort.conf and the rules directory are both in /usr/local/etc/snort.
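To match the layout used in these examples:

```
var RULE_PATH ./rules
```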

The next section in the configuration file allows you to configure Snort’s built-in preprocessors. These do anything from reassembling fragmented packets to decoding HTTP traffic to detecting portscans. For most situations, the default configuration is sufficient. However, if you need to tweak any of these settings, the configuration file is fully documented with each preprocessor’s options.

If you’ve compiled in database support, you’ll probably want to enable the database output plug-in, which will cause Snort to store any alerts that it generates in your database. Enable this plug-in by putting lines similar to these in your configuration file:

output database: log, mysql, user=snort password=snortpass dbname=SNORT \
host=dbserver

output database: alert, mysql, user=snort password=snortpass dbname=SNORT \
host=dbserver

The first line configures Snort to send any information generated by rules that specify the log action to the database. Likewise, the second line tells Snort to send any information generated by rules that specify the alert action to the database. For more information on the difference between the log and alert actions, see [Hack #86] .

If you’re going to use a database with Snort, you’ll need to create a new database, and possibly a new database user account. The Snort source code’s contrib directory includes scripts to create databases of the supported types: create_mssql, create_mysql, create_oracle.sql, and create_postgresql.

If you are using MySQL, you can create a database and then create the proper tables by running a command like this:

# mysql SNORT -p < ./contrib/create_mysql
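If the SNORT database and user don’t exist yet, you can create them first. This is just a sketch for MySQL; it assumes root access to the database server, and the snort/snortpass credentials only need to match whatever you put in the output plug-in lines earlier:

```shell
# Create the database, add a user for Snort, and load the schema
# (run on the database server; adjust the host as needed).
mysql -u root -p -e "CREATE DATABASE SNORT;"
mysql -u root -p -e "GRANT INSERT,SELECT,UPDATE,DELETE ON SNORT.* \
    TO 'snort'@'localhost' IDENTIFIED BY 'snortpass';"
mysql -u root -p SNORT < ./contrib/create_mysql
```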

The rest of the configuration file deals mostly with the rule signatures Snort will use when monitoring network traffic for intrusions. These rules are categorized and stored in separate files, and are activated by using the include directive. For testing purposes (or on networks with light traffic) the default configuration is sufficient, but you should look over the rules and decide which rule categories you really need and which ones you don’t.

Now that all of the hard configuration and setup work is out of the way, you should test your snort.conf file. You can do this by running something similar to the following command:

# snort -T -c /usr/local/etc/snort/snort.conf

Snort will report any errors that it finds and then exit. If there aren’t any errors, run Snort with a command similar to this:

# snort -Dd -z est -c /usr/local/etc/snort/snort.conf

Two of these flags, -d and -c, were used previously (to tell Snort to decode packet data and to use the specified configuration file, respectively). The other two are new. The -D flag tells Snort to print out some startup messages and then fork into the background. The -z est argument tells Snort’s streams preprocessor plug-in to ignore TCP packets that aren’t part of established sessions, which makes your Snort system much less susceptible to spoofing attacks and certain DoS attacks. Some other useful options are -u and -g, which let Snort drop its privileges and run under the user and group that you specify. These are especially useful with the -t option, which will chroot() Snort to the directory that you specify. Now you should start to see logs appearing in /var/log/snort.
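Putting the privilege-dropping options together, a hardened invocation might look like the following sketch. The snort user and group and the /home/snort chroot directory are assumptions; create them beforehand:

```shell
# Run daemonized, decode packet data, watch established streams only,
# drop privileges to snort:snort, and chroot to /home/snort.
snort -D -d -z est -u snort -g snort -t /home/snort \
    -c /usr/local/etc/snort/snort.conf
```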


Tunnel with PPP and SSH

Use PPP and SSH to create a secure VPN tunnel.

There are so many options to choose from when creating a VPN or tunneled connection that it’s mind-boggling. You may not be aware that all the software you need to create a VPN is probably already installed on your Unix machines—namely PPP and SSH daemons.

You might have used PPP back in the day to connect to the Internet over a dial-up connection, so you may be wondering how the same PPP can operate over SSH. Well, when you used PPP in conjunction with a modem, it was talking to the modem through what the operating system presented as a TTY interface, which is, in short, a regular terminal device. The PPP daemon on your end would send its output to the TTY, which the operating system would send out the modem and across the telephone network until it reached the remote end, where the same thing would happen in reverse.

The terminals that you use to run shell commands on (e.g., the console, an xterm, etc.) use pseudo-TTY interfaces, which are designed to operate similarly to TTYs. Because of this, PPP daemons can also operate over pseudo-TTYs. So, you can replace the serial TTYs with pseudo-TTYs, but you still need a way to connect the local pseudo-TTY to the remote one. Here’s where SSH comes into the picture.

You can create the actual PPP connection in one quick command. For instance, if you wanted to use the IP 10.1.1.20 for your local end of the connection and 10.1.1.1 on the remote end, you could run a command similar to this:

# /usr/sbin/pppd updetach noauth silent nodeflate \

pty "/usr/bin/ssh root@colossus /usr/sbin/pppd nodetach notty noauth" \

10.1.1.20:10.1.1.1

root@colossus's password:

local IP address 10.1.1.20

remote IP address 10.1.1.1

The first line of the command starts the pppd process on the local machine and tells it to fork into the background once the connection has been established (updetach). It also tells pppd to not do any authentication (noauth)—the SSH daemon already provides very strong authentication. The pppd command also turns off deflate compression (nodeflate). The second line of the command tells pppd to run a program and to communicate with it through the program’s standard input and standard output. This is used to log into the remote machine and run a pppd process there. Finally, the last line specifies the local and remote IP addresses that are to be used for the PPP connection.

After the command returns you to the shell, you should be able to see a ppp interface in the output of ifconfig:

$ /sbin/ifconfig ppp0

ppp0 Link encap:Point-to-Point Protocol

inet addr:10.1.1.20 P-t-P:10.1.1.1 Mask:255.255.255.255

UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1

RX packets:58 errors:0 dropped:0 overruns:0 frame:0

TX packets:50 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:3

RX bytes:5372 (5.2 Kb) TX bytes:6131 (5.9 Kb)

Now try pinging the remote end’s IP address:

$ ping 10.1.1.1

PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.

64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=4.56 ms

64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=4.53 ms

64 bytes from 10.1.1.1: icmp_seq=3 ttl=64 time=5.45 ms

64 bytes from 10.1.1.1: icmp_seq=4 ttl=64 time=4.51 ms

--- 10.1.1.1 ping statistics ---

4 packets transmitted, 4 received, 0% packet loss, time 3025ms

rtt min/avg/max/mdev = 4.511/4.765/5.451/0.399 ms

And finally, the ultimate litmus test—actually using the tunnel for something other than ping:

$ ssh 10.1.1.1

The authenticity of host '10.1.1.1 (10.1.1.1)' can't be established.

RSA key fingerprint is 56:36:db:7a:02:8b:05:b2:4d:d4:d1:24:e9:4f:35:49.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '10.1.1.1' (RSA) to the list of known hosts.

andrew@10.1.1.1's password:

[andrew@colossus andrew]$

Before deciding to keep this setup, you may want to generate login keys to use with ssh [Hack #73], so that you don’t need to type in a password each time. In addition, you may want to create a separate user for logging in on the remote machine and starting pppd. However, pppd needs to be started as root, so you’ll have to make use of sudo [Hack #6]. Also, you can enable SSH’s built-in compression by adding a -C to the ssh command. In some circumstances, SSH compression can greatly improve the speed of the link. Finally, to tear down the tunnel, simply kill the ssh process that pppd spawned.
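For example, generating a key pair and copying the public half to the remote machine might look like this (a sketch; ssh-copy-id ships with OpenSSH, and the remote account is whichever user you decided to use for the tunnel):

```shell
ssh-keygen -t rsa            # generate a key pair; set a passphrase if desired
ssh-copy-id root@colossus    # append the public key to the remote authorized_keys
```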

Although it’s ugly and might not be as stable and full of features as actual VPN implementations, the PPP and SSH combination can help you create an instant encrypted network without the need to install additional software.


Cross platform VPN

Use OpenVPN to easily tie your networks together.

Creating a VPN can be quite difficult, especially when dealing with clients using multiple platforms. Quite often, a single VPN implementation isn’t available for all of them. As an administrator, you can be left with trying to get different VPN implementations to operate on all the different platforms that you need to support, which can become a nightmare.

Luckily, someone has stepped in to fill the void in cross-platform VPN packages and has written OpenVPN (http://openvpn.sourceforge.net). It supports Linux, Solaris, OpenBSD, FreeBSD, NetBSD, Mac OS X, and Windows 2000/XP. OpenVPN achieves this by implementing all of the encryption, key-management, and connection-setup functionality in a user-space daemon, leaving the actual tunneling portion of the job to the host operating system.

To accomplish the tunneling, OpenVPN makes use of the host operating system’s virtual TUN or TAP device. These devices export a virtual network interface, which is then managed by the openvpn process to provide a point-to-point interface between the hosts participating in the VPN. Instead of traffic being sent and received on these devices, it’s sent and received from a user-space program. Thus, when data is sent across the virtual device, it is relayed to the openvpn program, which then encrypts it and sends it to the openvpn process running on the remote end of the VPN link. When the data is received on the other end, the openvpn process decrypts it and relays it to the virtual device on that machine. It is then processed just like a packet being received on any other physical interface.

OpenVPN uses SSL and relies on the OpenSSL library (http://www.openssl.org) for encryption, authentication, and certification functionality. Tunnels created with OpenVPN can either use preshared static keys or take advantage of TLS dynamic keying and digital certificates. Since OpenVPN makes use of OpenSSL, it can support any cipher that OpenSSL supports. The main advantage of this is that OpenVPN will be able to transparently support any new ciphers as they are added to the OpenSSL distribution.

If you’re using a Windows-based operating system, all you need to do is download the executable installer and configure OpenVPN. On all other platforms, you’ll need to compile OpenVPN yourself. Before you compile and install OpenVPN, make sure that you have OpenSSL installed. You can also install the LZO compression library (http://www.oberhumer.com/opensource/lzo/), which is generally a good idea. Using LZO compression can make much more efficient use of your bandwidth, and even greatly improve performance in some circumstances. To compile and install OpenVPN, download the tarball and type something similar to this:

$ tar xfz openvpn-1.5.0.tar.gz

$ cd openvpn-1.5.0

$ ./configure && make

If you installed the LZO libraries and header files somewhere other than /usr/lib and /usr/include, you will probably need to use the --with-lzo-headers and --with-lzo-lib configure script options.

For example, if you have installed LZO under the /usr/local hierarchy, you’ll want to run the configure script like this:

$ ./configure --with-lzo-headers=/usr/local/include \

--with-lzo-lib=/usr/local/lib

If the configure script cannot find the LZO libraries and headers, it will print out a warning that looks like this:

LZO library and headers not found.

LZO library available from http://www.oberhumer.com/opensource/lzo/

configure: error: Or try ./configure --disable-lzo

If the script does find the LZO libraries, you should see output on your terminal that is similar to this:

configure: checking for LZO Library and Header files…

checking lzo1x.h usability… yes

checking lzo1x.h presence… yes

checking for lzo1x.h… yes

checking for lzo1x_1_15_compress in -llzo… yes

Now that that’s out of the way, you can install OpenVPN by running the usual make install. If you are running Solaris or Mac OS X, you’ll also need to install a TUN/TAP driver. The other Unix-based operating systems already include one, and the Windows installer installs the driver for you. You can get the source code to the Solaris driver from the SourceForge project page (http://vtun.sourceforge.net/tun/). The Mac OS X driver is available in both source and binary form from http://chrisp.de/en/projects/tunnel.html.

Once you have LZO, OpenSSL, the TUN/TAP driver, and OpenVPN all installed, you can test everything by setting up a rudimentary VPN from the command line.

On machine A (kryten in this example), run a command similar to this one:

# openvpn --remote zul --dev tun0 --ifconfig 10.0.0.19 10.0.0.5

The command that you’ll need to run on machine B (zul) is a lot like the previous command, except the arguments to --ifconfig are swapped:

# openvpn --remote kryten --dev tun0 --ifconfig 10.0.0.5 10.0.0.19

The first IP address is the local end of the tunnel, and the second is for the remote end; this is why you need to swap the IP addresses on the other end. When running these commands, you should see a warning about not using encryption, as well as some status messages. Once OpenVPN starts, run ifconfig to see that the point-to-point tunnel device has been set up:

[andrew@kryten andrew]$ /sbin/ifconfig tun0

tun0: flags=51<UP,POINTOPOINT,RUNNING> mtu 1300

inet 10.0.0.19 --> 10.0.0.5 netmask 0xffffffff

Now try pinging the remote machine, using its tunneled IP address:

[andrew@kryten andrew]$ ping -c 4 10.0.0.5

PING 10.0.0.5 (10.0.0.5): 56 data bytes

64 bytes from 10.0.0.5: icmp_seq=0 ttl=255 time=0.864 ms

64 bytes from 10.0.0.5: icmp_seq=1 ttl=255 time=1.012 ms

64 bytes from 10.0.0.5: icmp_seq=2 ttl=255 time=0.776 ms

64 bytes from 10.0.0.5: icmp_seq=3 ttl=255 time=0.825 ms

--- 10.0.0.5 ping statistics ---

4 packets transmitted, 4 packets received, 0% packet loss

round-trip min/avg/max = 0.776/0.869/1.012 ms

Now that you have verified that OpenVPN is working properly, it is time to create a configuration that’s a little more useful in the real world. First you will need to create SSL certificates [Hack #45] for each end of the connection. After you’ve done this, you’ll need to create configuration files and connection setup and teardown scripts for each end of the connection.

Let’s look at the configuration files first. For these examples, zul will be the gateway into the private network and kryten will be the external client.

The configuration file on zul for connections from kryten is stored in /etc/openvpn/openvpn.conf. Here are the contents:

dev tun0

ifconfig 10.0.0.5 10.0.0.19

up /etc/openvpn/openvpn.up

down /etc/openvpn/openvpn.down

tls-server

dh /etc/openvpn/dh1024.pem

ca /etc/ssl/ca.crt

cert /etc/ssl/zul.crt

key /etc/ssl/private/zul.key

ping 15

verb 0

You can see that the dev and ifconfig options are used in the same way as they are on the command line. The up and down options specify scripts that will be executed when the VPN connection is initiated or terminated. The tls-server option enables TLS mode and specifies that you want to designate this side of the connection as the server during the TLS handshaking process. The dh option specifies the Diffie-Hellman parameters to use during key exchange. These are encoded in a .pem file and can be generated with the following openssl command:

# openssl dhparam -out dh1024.pem 1024

The next few configuration options deal with the SSL certificates. The ca option specifies the Certificate Authority’s public certificate, and the cert option specifies the public certificate to use for this side of the connection. Similarly, the key option specifies the private key that corresponds to the public certificate. To help ensure that the VPN tunnel doesn’t get dropped from any intervening firewalls that are doing stateful filtering, the ping option is used. This causes OpenVPN to ping the remote host every n seconds so that the tunnel’s entry in the firewall’s state table does not time out.

On kryten, the following configuration file is used:

dev tun0

remote zul

ifconfig 10.0.0.19 10.0.0.5

up /etc/openvpn/openvpn.up

down /etc/openvpn/openvpn.down

tls-client

ca /etc/ssl/ca.crt

cert /etc/ssl/kryten.crt

key /etc/ssl/private/kryten.key

ping 15

verb 0

The main differences with this configuration file are that the remote and tls-client options have been used. Other than that, the arguments to the ifconfig option have been swapped, and the file uses kryten’s public and private keys instead of zul’s. To turn on compression, add the comp-lzo option to the configuration files on both ends of the VPN.

Finally, create the openvpn.up and openvpn.down scripts on both hosts participating in the tunnel. These scripts set up and tear down the actual routes and other networking requirements.

The openvpn.up scripts are executed whenever a VPN connection is established. On kryten it looks like this:

#!/bin/sh

/sbin/route add -net 10.0.0.0 gw $5 netmask 255.255.255.0

This sets a route telling the operating system to send all traffic destined for the 10/24 network to the remote end of our VPN connection. From there it will be routed to the interface on zul that has been assigned an address from the 10/24 address range. The $5 in the script is replaced by the IP address used by the remote end of the tunnel. In addition to adding the route, you might want to set up nameservers for the network you are tunneling into in this script. Unless you are doing something fancy, the openvpn.down script on kryten is empty, since the route is automatically dropped by the kernel when the connection ends.
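If you do want the up script to point name resolution at the remote network, one hypothetical approach is to overwrite resolv.conf in the same script (10.0.0.2 is an invented DNS server address; note that this clobbers the existing nameserver settings until you restore them):

```shell
#!/bin/sh
# $5 is the IP address of the remote end of the tunnel, as passed by openvpn
/sbin/route add -net 10.0.0.0 gw $5 netmask 255.255.255.0
echo "nameserver 10.0.0.2" > /etc/resolv.conf
```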

No additional routes are needed on zul, because it already has a route to the network that kryten is tunneling into. In addition, since tun0 on zul is a point-to-point link between itself and kryten, there is no need to add a route to pass traffic to kryten—by virtue of having a point-to-point link, a host route will be created for kryten.

The only thing that needs to be in the openvpn.up script on zul is this:

#!/bin/sh

arp -s $5 00:00:d1:1f:3f:f1 permanent pub

This causes zul to answer ARP queries for kryten, since otherwise the ARP traffic will not be able to reach kryten. This sort of configuration is popularly called proxy arp. In this particular example, zul is running OpenBSD. If you are running Linux, simply remove the permanent keyword from the arp command. Again, the $5 is replaced by the IP address that is used at the remote end of the connection, which in this case is kryten’s.

The openvpn.down script on zul simply deletes the ARP table entry:

#!/bin/sh

arp -d kryten

Unfortunately, since scripts run through the down configuration file option are not passed an argument telling them what IP address they should be dealing with, you have to explicitly specify the IP address or hostname to delete from the ARP table. Now the only thing to worry about is firewalling. You’ll want to allow traffic coming through your tun0 device, as well as UDP port 5000.
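With iptables on a Linux host, rules along these lines would let the tunnel traffic through (the interface and port match the defaults used in this example; OpenVPN 1.x uses UDP port 5000 by default):

```shell
iptables -A INPUT -p udp --dport 5000 -j ACCEPT   # OpenVPN control/data channel
iptables -A INPUT -i tun0 -j ACCEPT               # traffic arriving through the tunnel
iptables -A FORWARD -i tun0 -j ACCEPT             # allow forwarding for tunneled hosts
```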

Finally, you are ready to run openvpn on both sides, using a command like this:

# openvpn --config /etc/openvpn/openvpn.conf --daemon

Setting up OpenVPN under Windows is even easier. Simply run the installer, and everything you need will be installed onto your system. This includes OpenSSL, the TUN/TAP driver, and OpenVPN itself. The installer will also associate the .ovpn file extension with OpenVPN. Simply put your configuration information in a .ovpn file, double-click it, and you’re ready to go.

This should get you started using OpenVPN, but it has far too many configuration options to discuss here. Be sure to look at the OpenVPN web site for more information.


Automatic vtund.conf configurator

Generate a vtund.conf on the fly to match changing network conditions.

If you’ve just come from [Hack #78], then the following script will generate a working vtund.conf for the client side automatically.

If you haven’t read the previous hack (or if you’ve never used VTun), then go back and read it before attempting to grok this bit of Perl. Essentially, it attempts to take the guesswork out of changing the routing table around on the client side by auto-detecting the default gateway and building the vtund.conf accordingly.

To configure the script, take a look at the Configuration section. The first line of $Config contains the addresses, port, and secret that we used in the VTun hack. The second line simply serves as an example of how to add more.

To run the script, either call it as vtundconf home or set $TunnelName to the one you want to default to. Better yet, make symlinks to the script, like this:

# ln -s vtundconf home
# ln -s vtundconf tunnel2

Then you can generate the appropriate vtund.conf by calling the symlink directly:

# vtundconf home > /usr/local/etc/vtund.conf

You might be wondering why anyone would go to all of the trouble to make a vtund.conf-generating script in the first place. Once you get the settings right, you’ll never have to change them, right?

Well, usually that is the case. But consider the case of a Linux laptop that uses many different networks in the course of the day (say, a DSL line at home, Ethernet at work, and maybe a wireless connection at the local coffee shop). By running vtundconf once at each location, you will have a working configuration instantly, even if your IP and gateway are assigned by DHCP. This makes it easy to get up and running quickly with a live, routable IP address, regardless of the local network topology.

Incidentally, VTun currently runs well on Linux, FreeBSD, Mac OS X, Solaris, and others.

Save this file as vtundconf, and run it each time you use a new wireless network to generate an appropriate vtund.conf for you on the fly:

#!/usr/bin/perl -w

#

# vtund wrapper in need of a better name.

#

# (c)2002 Schuyler Erle & Rob Flickenger

#

################ CONFIGURATION

# If TunnelName is blank, the wrapper will look at @ARGV or $0.

#

# Config is TunnelName, LocalIP, RemoteIP, TunnelHost, TunnelPort, Secret

#

my $TunnelName = "";

my $Config = q{

home 208.201.239.33 208.201.239.32 208.201.239.5 5000 sHHH

tunnel2 10.0.1.100 10.0.1.1 192.168.1.4 6001 foobar

};

################ MAIN PROGRAM BEGINS HERE

use POSIX 'tmpnam';

use IO::File;

use File::Basename;

use strict;

# Where to find things...

#

$ENV{PATH} = "/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin";

my $IP_Match = '((?:\d{1,3}\.){3}\d{1,3})'; # match xxx.xxx.xxx.xxx

my $Ifconfig = "ifconfig -a";

my $Netstat = "netstat -rn";

my $Vtund = "/bin/echo";

my $Debug = 1;

# Load the template from the data section.

#

my $template = join( "", <DATA> );

# Open a temp file -- adapted from Perl Cookbook, 1st Ed., sec. 7.5.

#

my ( $file, $name ) = ("", "");

$name = tmpnam()

until $file = IO::File->new( $name, O_RDWR|O_CREAT|O_EXCL );

END { unlink( $name ) or warn "Can't remove temporary file $name!\n"; }

# If no TunnelName is specified, use the first thing on the command line,

# or if there isn’t one, the basename of the script.

# This allows users to symlink different tunnel names to the same script.

#

$TunnelName ||= shift(@ARGV) || basename($0);

die "Can't determine tunnel config to use!\n" unless $TunnelName;

# Parse config.

#

my ($LocalIP, $RemoteIP, $TunnelHost, $TunnelPort, $Secret);

for (split(/\r*\n+/, $Config)) {

my ($conf, @vars) = grep( $_ ne "", split( /\s+/ ));

next if not $conf or $conf =~ /^\s*#/o; # skip blank lines, comments

if ($conf eq $TunnelName) {

($LocalIP, $RemoteIP, $TunnelHost, $TunnelPort, $Secret) = @vars;

last;

}

}

die "Can't determine configuration for TunnelName '$TunnelName'!\n"

unless $RemoteIP and $TunnelHost and $TunnelPort;

# Find the default gateway.

#

my ( $GatewayIP, $ExternalDevice );

for (qx{ $Netstat }) {

# In both Linux and BSD, the gateway is the next thing on the line,

# and the interface is the last.

#

if ( /^(?:0.0.0.0|default)\s+(\S+)\s+.*?(\S+)\s*$/o ) {

$GatewayIP = $1;

$ExternalDevice = $2;

last;

}

}

die "Can't determine default gateway!\n" unless $GatewayIP and $ExternalDevice;

# Figure out the LocalIP and LocalNetwork.

#

my ( $LocalNetwork );

my ( $iface, $addr, $up, $network, $mask ) = "";

sub compute_netmask {

($addr, $mask) = @_;

# We have to mask $addr with $mask because linux /sbin/route

# complains if the network address doesn’t match the netmask.

#

my @ip = split( /\./, $addr );

my @mask = split( /\./, $mask );

$ip[$_] = ($ip[$_] + 0) & ($mask[$_] + 0) for (0..$#ip);

$addr = join(".", @ip);

return $addr;

}

for (qx{ $Ifconfig }) {

last unless defined $_;

# If we got a new device, stash the previous one (if any).

if ( /^([^\s:]+)/o ) {

if ( $iface eq $ExternalDevice and $network and $up ) {

$LocalNetwork = $network;

last;

}

$iface = $1;

$up = 0;

}

# Get the network mask for the current interface.

if ( /addr:$IP_Match.*?mask:$IP_Match/io ) {

# Linux style ifconfig.

compute_netmask($1, $2);

$network = "$addr netmask $mask";

} elsif ( /inet $IP_Match.*?mask 0x([a-f0-9]{8})/io ) {

# BSD style ifconfig.

($addr, $mask) = ($1, $2);

$mask = join(".", map( hex $_, $mask =~ /(..)/gs ));

compute_netmask($addr, $mask);

$network = "$addr/$mask";

}

# Ignore interfaces that are loopback devices or aren’t up.

$iface = "" if /\bLOOPBACK\b/o;

$up++ if /\bUP\b/o;

}

die "Can't determine local IP address!\n" unless $LocalIP and $LocalNetwork;

# Set OS dependent variables.

#

my ( $GW, $NET, $PTP );

if ( $^O eq "linux" ) {

$GW = "gw"; $PTP = "pointopoint"; $NET = "-net";

} else {

$GW = $PTP = $NET = "";

}

# Parse the config template.

#

$template =~ s/(\$\w+)/$1/gee;

# Write the temp file and execute vtund.

#

if ($Debug) {

print $template;

} else {

print $file $template;

close $file;

system("$Vtund $name");

}

__DATA__

options {

port $TunnelPort;

ifconfig /sbin/ifconfig;

route /sbin/route;

}

default {

compress no;

speed 0;

}

# 'mytunnel' should really be `basename $0` or some such

# for automagic config selection

$TunnelName {

type tun;

proto tcp;

keepalive yes;

pass $Secret;

up {

ifconfig "%% $LocalIP $PTP $RemoteIP arp";

route "add $TunnelHost $GW $GatewayIP";

route "delete default";

route "add default $GW $RemoteIP";

route "add $NET $LocalNetwork $GW $GatewayIP";

};

down {

ifconfig "%% down";

route "delete default";

route "delete $TunnelHost $GW $GatewayIP";

route "delete $NET $LocalNetwork";

route "add default $GW $GatewayIP";

};

}


Tunnel with VTUN and SSH

Connect two networks using VTun and a single SSH connection.

VTun is a user-space tunnel server, allowing entire networks to be tunneled to each other using the tun universal tunnel kernel driver. An encrypted tunnel such as VTun allows roaming wireless clients to secure all of their IP traffic using strong encryption. It currently runs under Linux, BSD, and Mac OS X. The examples in this hack assume that you are using Linux.

The procedure described next will allow a host with a private IP address (10.42.4.6) to bring up a new tunnel interface with a real, live, routed IP address (208.201.239.33) that works as expected, as if the private network weren’t even there. Do this by bringing up the tunnel, dropping the default route, and then adding a new default route via the other end of the tunnel.

To begin with, here is the (pretunneled) network configuration:

root@client:~# ifconfig eth2

eth2 Link encap:Ethernet HWaddr 00:02:2D:2A:27:EA

inet addr:10.42.3.2 Bcast:10.42.3.63 Mask:255.255.255.192

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:662 errors:0 dropped:0 overruns:0 frame:0

TX packets:733 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:100

RX bytes:105616 (103.1 Kb) TX bytes:74259 (72.5 Kb)

Interrupt:3 Base address:0x100

root@client:~# route

Kernel IP routing table

Destination Gateway Genmask Flags Metric Ref Use Iface

10.42.3.0 * 255.255.255.192 U 0 0 0 eth2

loopback * 255.0.0.0 U 0 0 0 lo

default 10.42.3.1 0.0.0.0 UG 0 0 0 eth2

As you can see, the local network is 10.42.3.0/26, the IP is 10.42.3.2, and the default gateway is 10.42.3.1. This gateway provides network address translation (NAT) to the Internet. Here’s what the path looks like to yahoo.com:

root@client:~# traceroute -n yahoo.com

traceroute to yahoo.com (64.58.79.230), 30 hops max, 40 byte packets

1 10.42.3.1 2.848 ms 2.304 ms 2.915 ms

2 209.204.179.1 16.654 ms 16.052 ms 19.224 ms

3 208.201.224.194 20.112 ms 20.863 ms 18.238 ms

4 208.201.224.5 213.466 ms 338.259 ms 357.7 ms

5 206.24.221.217 20.743 ms 23.504 ms 24.192 ms

6 206.24.210.62 22.379 ms 30.948 ms 54.475 ms

7 206.24.226.104 94.263 ms 94.192 ms 91.825 ms

8 206.24.238.61 97.107 ms 91.005 ms 91.133 ms

9 206.24.238.26 95.443 ms 98.846 ms 100.055 ms

10 216.109.66.7 92.133 ms 97.419 ms 94.22 ms

11 216.33.98.19 99.491 ms 94.661 ms 100.002 ms

12 216.35.210.126 97.945 ms 93.608 ms 95.347 ms

13 64.58.77.41 98.607 ms 99.588 ms 97.816 ms

In this example, we are connecting to a tunnel server on the Internet at 208.201.239.5. It has two spare live IP addresses (208.201.239.32 and 208.201.239.33) to be used for tunneling. We’ll refer to that machine as the server, and our local machine as the client.

Now let’s get the tunnel running. To begin with, load the tun driver on both machines:

# modprobe tun

It is worth noting that the tun driver will sometimes fail if the server and client kernel versions don’t match. For best results, use a recent kernel (and the same version, e.g., 2.4.20) on both machines.

On the server machine, save this file to /usr/local/etc/vtund.conf:

options {

port 5000;

ifconfig /sbin/ifconfig;

route /sbin/route;

syslog auth;

}

default {

compress no;

speed 0;

}

home {

type tun;

proto tcp;

stat yes;

keepalive yes;

pass sHHH; # Password is REQUIRED.

up {

ifconfig "%% 208.201.239.32 pointopoint 208.201.239.33";

program /sbin/arp "-Ds 208.201.239.33 %% pub";

program /sbin/arp "-Ds 208.201.239.33 eth0 pub";

route "add -net 10.42.0.0/16 gw 208.201.239.33";

};

down {

program /sbin/arp "-d 208.201.239.33 -i %%";

program /sbin/arp "-d 208.201.239.33 -i eth0";

route "del -net 10.42.0.0/16 gw 208.201.239.33";

};

}

Launch the vtund server like so:

root@server:~# vtund -s

Now you’ll need a vtund.conf file for the client side. Try this one, again in /usr/local/etc/vtund.conf:

options {
    port 5000;
    ifconfig /sbin/ifconfig;
    route /sbin/route;
}

default {
    compress no;
    speed 0;
}

home {
    type tun;
    proto tcp;
    keepalive yes;
    pass sHHH; # Password is REQUIRED.

    up {
        ifconfig "%% 208.201.239.33 pointopoint 208.201.239.32 arp";
        route "add 208.201.239.5 gw 10.42.3.1";
        route "del default";
        route "add default gw 208.201.239.32";
    };

    down {
        route "del default";
        route "del 208.201.239.5 gw 10.42.3.1";
        route "add default gw 10.42.3.1";
    };
}

Finally, run this command on the client:

root@client:~# vtund -p home server

Presto! Not only do you have a tunnel up between client and server, but also a new default route via the other end of the tunnel. Take a look at what happens when we traceroute to yahoo.com with the tunnel in place:

root@client:~# traceroute -n yahoo.com
traceroute to yahoo.com (64.58.79.230), 30 hops max, 40 byte packets
1 208.201.239.32 24.368 ms 28.019 ms 19.114 ms
2 208.201.239.1 21.677 ms 22.644 ms 23.489 ms
3 208.201.224.194 20.41 ms 22.997 ms 23.788 ms
4 208.201.224.5 26.496 ms 23.8 ms 25.752 ms
5 206.24.221.217 26.174 ms 28.077 ms 26.344 ms
6 206.24.210.62 26.484 ms 27.851 ms 25.015 ms
7 206.24.226.103 104.22 ms 114.278 ms 108.575 ms
8 206.24.238.57 99.978 ms 99.028 ms 100.976 ms
9 206.24.238.26 103.749 ms 101.416 ms 101.09 ms
10 216.109.66.132 102.426 ms 104.222 ms 98.675 ms
11 216.33.98.19 99.985 ms 99.618 ms 103.827 ms
12 216.35.210.126 104.075 ms 103.247 ms 106.398 ms
13 64.58.77.41 107.219 ms 106.285 ms 101.169 ms

This means that any server processes running on the client are now fully available to the Internet, at IP address 208.201.239.33. This has all happened without making a single change (e.g., port forwarding) on the gateway 10.42.3.1.

Here’s what the new tunnel interface looks like on the client:

root@client:~# ifconfig tun0
tun0      Link encap:Point-to-Point Protocol
          inet addr:208.201.239.33  P-t-P:208.201.239.32  Mask:255.255.255.255
          UP POINTOPOINT RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:39 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:10
          RX bytes:2220 (2.1 Kb)  TX bytes:1560 (1.5 Kb)

And here’s the updated routing table (note that we still need to keep a host route to the tunnel server’s IP address via our old default gateway; otherwise, the tunnel traffic can’t get out):

root@client:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
208.201.239.5   10.42.3.1       255.255.255.255 UGH   0      0        0 eth2
208.201.239.32  *               255.255.255.255 UH    0      0        0 tun0
10.42.3.0       *               255.255.255.192 U     0      0        0 eth2
10.42.4.0       *               255.255.255.192 U     0      0        0 eth0
loopback        *               255.0.0.0       U     0      0        0 lo
default         208.201.239.32  0.0.0.0         UG    0      0        0 tun0
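The reason the host route matters comes down to longest-prefix matching: the kernel always picks the most specific matching route, so packets addressed to the tunnel server itself (a /32 via the old gateway) escape the new default route through tun0, while everything else flows into the tunnel. A toy lookup over the table above, as a sketch (the route entries are transcribed by hand from that output):

```python
import ipaddress

# A simplified version of the client's routing table shown above
routes = [
    ("208.201.239.5/32", "10.42.3.1 via eth2"),   # host route to tunnel server
    ("208.201.239.32/32", "tun0"),                # tunnel endpoint itself
    ("10.42.3.0/26", "eth2"),                     # local wireless network
    ("0.0.0.0/0", "208.201.239.32 via tun0"),     # new default route
]

def lookup(dst):
    """Longest-prefix match: of all routes containing dst, the one
    with the longest netmask wins, just as the kernel does it."""
    best = max(
        (ipaddress.ip_network(net) for net, _ in routes
         if ipaddress.ip_address(dst) in ipaddress.ip_network(net)),
        key=lambda n: n.prefixlen,
    )
    return dict(routes)[str(best)]

print(lookup("208.201.239.5"))  # tunnel's own packets: 10.42.3.1 via eth2
print(lookup("64.58.79.230"))   # everything else: 208.201.239.32 via tun0
```

Delete that /32 host route and the tunnel's own TCP stream would be routed into the tunnel it carries — which is exactly the feedback loop the text warns about.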

To bring down the tunnel, simply kill the vtund process on the client. This restores all network settings to their original state.

This method works fine if you trust VTun to use strong encryption and to be free from remote exploits. Personally, I don’t think you can be too paranoid when it comes to machines connected to the Internet. To use VTun over SSH (and therefore rely on the strong authentication and encryption that SSH provides), simply forward port 5000 on the client to the same port on the server. Give this a try:

root@client:~# ssh -f -N -c blowfish -C -L5000:localhost:5000 server
root@client:~# vtund -p home localhost

root@client:~# traceroute -n yahoo.com
traceroute to yahoo.com (64.58.79.230), 30 hops max, 40 byte packets
1 208.201.239.32 24.715 ms 31.713 ms 29.519 ms
2 208.201.239.1 28.389 ms 36.247 ms 28.879 ms
3 208.201.224.194 48.777 ms 28.602 ms 44.024 ms
4 208.201.224.5 38.788 ms 35.608 ms 35.72 ms
5 206.24.221.217 37.729 ms 38.821 ms 43.489 ms
6 206.24.210.62 39.577 ms 43.784 ms 34.711 ms
7 206.24.226.103 110.761 ms 111.246 ms 117.15 ms
8 206.24.238.57 112.569 ms 113.2 ms 111.773 ms
9 206.24.238.26 111.466 ms 123.051 ms 118.58 ms
10 216.109.66.132 113.79 ms 119.143 ms 109.934 ms
11 216.33.98.19 111.948 ms 117.959 ms 122.269 ms
12 216.35.210.126 113.472 ms 111.129 ms 118.079 ms
13 64.58.77.41 110.923 ms 110.733 ms 115.22 ms

To keep outsiders from connecting directly to vtund on port 5000 of the server, add a netfilter rule that drops connections from the outside world:

root@server:~# iptables -A INPUT -t filter -i eth0 \
    -p tcp --dport 5000 -j DROP

This still allows local connections to get through (since they arrive on the loopback interface, not eth0), so a client must first establish an SSH tunnel to the server before vtund will accept its connection.

As you can see, this can be an extremely handy tool to have around. In addition to giving live IP addresses to machines behind a NAT, you can effectively connect any two networks if you can obtain a single SSH connection between them (originating from either direction).

If your head is swimming from this vtund.conf configuration or you’re feeling lazy and don’t want to figure out what to change when setting up your own client’s vtund.conf file, take a look at the automatic vtund.conf generator [Hack #79] .


Tunnel connections inside HTTP

Break through draconian firewalls by using httptunnel.

If you’ve ever been on the road and found yourself in a place where the only connectivity to the outside world is through an incredibly restrictive firewall, you probably know the pain of trying to do anything other than sending and receiving email or basic web browsing.

Here’s where httptunnel (http://www.nocrew.org/software/httptunnel.html) comes to the rescue. Httptunnel is a program that allows you to tunnel arbitrary connections through the HTTP protocol to a remote host. This is especially useful in situations like the one mentioned earlier, when web access is allowed but all other services are denied. Of course, you could just use any kind of tunneling software and configure it to use port 80, but where would that leave you if the firewall is actually a web proxy? This is roughly the same as an application-layer firewall, and will accept only valid HTTP requests. Fortunately, httptunnel can deal with these as well.

To compile httptunnel, download the tarball and run configure and make:

$ tar xfz httptunnel-3.3.tar.gz
$ cd httptunnel-3.3
$ ./configure && make

Install it by running make install, which will install everything under /usr/local. If you want to install it somewhere else, you can use the standard --prefix= option to the configure script.

The httptunnel client program is called htc, and the server is hts. As with ssh [Hack #76], httptunnel listens on a local TCP port for connections, forwards the traffic it receives on that port to a remote server wrapped inside HTTP requests, and then unwraps the traffic on the far side and forwards it to another port outside of the tunnel. Note that, unlike ssh, httptunnel itself does not encrypt the traffic.

Try tunneling an SSH connection over HTTP. On the server, run a command like this:

# hts -F localhost:22 80

Now, run a command like this on the client:

# htc -F 2222 colossus:80

In this case, colossus is the remote server, and htc is listening on port 2222. You can use the standard port 22 if you aren’t running a local sshd. If you’re curious, you can verify that htc is now listening on port 2222 by using lsof:

# /usr/sbin/lsof -i | grep htc
htc 2323 root 6u IPv4 0x02358a30 0t0 TCP *:2222 (LISTEN)

And now to try out the tunnel:

[andrew@kryten andrew]$ ssh -p 2222 localhost
andrew@localhost's password:
[andrew@colossus andrew]$

You can also forward connections to machines other than the one that you’re running hts on. To do this, just replace the localhost in the hts command with whatever remote host you wish to forward to.

For instance, to forward the connection to oceana.ingsoc.net instead of colossus, you could run this command:

# hts -F oceana.ingsoc.net:22 80

If you’re curious to see what an SSH connection tunneled through the HTTP protocol looks like, you can take a look at it with a packet sniffer. Here’s the initial portion of the TCP stream that is sent to the httptunnel server by the client:

POST /index.html?crap=1071364879 HTTP/1.1
Host: linux-vm:80
Content-Length: 102400
Connection: close

SSH-2.0-OpenSSH_3.6.1p1+CAN-2003-0693
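To make the framing concrete, here is a sketch (not httptunnel's actual code) of wrapping raw tunnel bytes in a POST request shaped like the capture above. The cache-busting ?crap= query string and the fixed 100 KB Content-Length are taken from that capture; httptunnel declares a large length up front and streams tunnel data as the request body, starting a fresh request when the declared length is used up:

```python
def http_encapsulate(payload, host="linux-vm:80", content_length=102400):
    """Wrap raw tunnel bytes as the body of an HTTP POST, mimicking
    the request captured above (illustrative values throughout)."""
    header = (
        "POST /index.html?crap=1071364879 HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {content_length}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    return header.encode() + payload

# The SSH banner rides along as the first chunk of the POST body:
request = http_encapsulate(b"SSH-2.0-OpenSSH_3.6.1p1\r\n")
```

Because the result is a syntactically valid HTTP request, even a strict application-layer proxy that parses what it forwards will pass it along.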

If your tunnel needs to go through a web proxy, no additional configuration is needed as long as the proxy is transparent and does not require authentication. If the proxy is not transparent, you can specify it with the -P switch. Additionally, if you do need to authenticate with the proxy, you'll want to make use of the -A or --proxy-authorization options, which allow you to specify a username and password to authenticate with.

Here’s how to use these options:

htc -P myproxy:8000 -A andrew:mypassword -F 22 colossus:80

If the port that the proxy listens on is the standard web proxy port (8080), then you can just specify the proxy by using its IP address or hostname.


Using SSH as a SOCKS proxy

Protect your web traffic using the basic VPN functionality built into SSH itself.

In the search for the perfect way to secure their wireless networks, many people overlook one of the most useful features of SSH: the -D switch. This simple little switch is buried within the SSH manpage, toward the bottom. Here is a direct quote from the manpage:

-D port

Specifies a local “dynamic” application-level port forwarding. This works by allocating a socket to listen to port on the local side, and whenever a connection is made to this port, the connection is forwarded over the secure channel, and the application protocol is then used to determine where to connect to from the remote machine. Currently the SOCKS 4 protocol is supported, and SSH will act as a SOCKS 4 server. Only root can forward privileged ports. Dynamic port forwardings can also be specified in the configuration file.

This turns out to be an insanely useful feature if you have software that is capable of using a SOCKS 4 proxy. It effectively gives you an instant encrypted proxy server to any machine that you can SSH to. It does this without the need for further software, either on your machine or on the remote server.

Just as with SSH port forwarding [Hack #72], the -D switch binds to the specified local port and encrypts any traffic to that port, sends it down the tunnel, and decrypts it on the other side. For example, to set up a SOCKS 4 proxy from local port 8080 to remote, type the following:

rob@caligula:~$ ssh -D 8080 remote

That’s all there is to it. Now you simply specify localhost:8080 as the SOCKS 4 proxy in your application, and all connections made by that application will be sent down the encrypted tunnel. For example, to set your SOCKS proxy in Mozilla, go to Preferences → Advanced → Proxies, as shown in Figure 6-7.
Figure 6-7. Proxy settings in Mozilla.

Select “Manual proxy configuration”, then type in localhost as the SOCKS host. Enter the port number that you passed to the -D switch, and be sure to check the SOCKSv4 button.

Click OK, and you’re finished. All of the traffic that Mozilla generates is now encrypted and appears to originate from the remote machine that you logged into with SSH. Anyone listening to your wireless traffic now sees a large volume of encrypted SSH traffic, but your actual data is well protected.

One important point to keep in mind is that SOCKS 4 has no native support for DNS traffic. This has two important side effects to keep in mind when using it to secure your wireless transmissions.

First of all, DNS lookups are still sent in the clear. This means that anyone listening in can still see the names of sites that you browse to, although the actual URLs and data are obscured. This is rarely a security risk, but it is worth keeping in mind.

Second, you are still using a local DNS server, but your traffic originates from the remote end of the proxy. This can have interesting (and undesirable) side effects when attempting to access private network resources.
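Both side effects fall out of the SOCKS 4 wire format: a CONNECT request carries a 4-byte IPv4 address, not a hostname, so the client application must resolve the name itself — in the clear, via its local DNS server — before it ever talks to the proxy. A sketch of building that request (illustrative only, not a complete SOCKS client):

```python
import socket
import struct

def socks4_connect_request(host, port, user=""):
    """Build a SOCKS 4 CONNECT request. The client resolves the
    hostname locally (plain-text DNS!) because the protocol only
    has room for a 4-byte IP address."""
    ip = socket.inet_aton(socket.gethostbyname(host))
    # VN=4, CD=1 (CONNECT), destination port, destination IP,
    # then a NUL-terminated userid string
    return struct.pack(">BBH", 4, 1, port) + ip + user.encode() + b"\x00"

req = socks4_connect_request("127.0.0.1", 443)
```

The gethostbyname() call is the leak: only the resulting IP goes down the encrypted tunnel, while the lookup itself travels over your local (possibly wireless) network unprotected.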

To illustrate the subtle problems that this can cause, consider a typical corporate network with a web server called intranet.example.com. This web server uses the private address 192.168.1.10 but is accessible from the Internet through the use of a forwarding firewall. The DNS server for intranet.example.com normally responds with different IP addresses depending on where the request comes from, perhaps using the views functionality in BIND 9. When coming from the Internet, you would normally access intranet.example.com with the IP address 208.201.239.36, which is actually the IP address of the outside of the corporate firewall.

Now suppose that you are using the SOCKS proxy example just shown, and remote is actually a machine behind the corporate firewall. Your local DNS server returns 208.201.239.36 as the IP address for intranet.example.com (since you are looking up the name from outside the firewall). But the HTTP request actually comes from remote and attempts to go to 208.201.239.36. Many times, this is forbidden by the firewall rules, as internal users are supposed to access the intranet by its internal IP address, 192.168.1.10. How can you work around this DNS schizophrenia?

One simple method to avoid this trouble is to make use of a local hosts file on your machine. Add an entry like this to /etc/hosts (or the equivalent on your operating system):

192.168.1.10 intranet.example.com

You can list any number of hosts that are reachable only from the inside of your corporate firewall. When you attempt to browse to one of those sites, the local hosts file is consulted before DNS, so the private IP address is used. Since this request is actually made from remote, it finds its way to the internal server with no trouble. Responses arrive back at the SOCKS proxy on remote, are encrypted and forwarded over your SSH tunnel, and appear in your browser as if they came in from the Internet.
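The hosts-file trick works because of standard resolver ordering: with the usual hosts: files dns line in /etc/nsswitch.conf, the local hosts file is consulted first and DNS only on a miss. As a toy model, using the (hypothetical) addresses from the example above:

```python
def resolve(name, hosts_file, dns):
    """Mimic the 'hosts: files dns' resolver order: the local hosts
    file wins, and DNS is consulted only on a miss."""
    if name in hosts_file:
        return hosts_file[name]
    return dns.get(name)

hosts_file = {"intranet.example.com": "192.168.1.10"}    # our /etc/hosts entry
dns        = {"intranet.example.com": "208.201.239.36"}  # the external view

print(resolve("intranet.example.com", hosts_file, dns))  # 192.168.1.10
```

Any name not pinned in the hosts file still resolves normally, so only the internal servers you list are affected.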

SOCKS 5 support is planned for an upcoming version of SSH, which will also make tunneled DNS resolution possible. This is particularly exciting for Mac OS X users, as the OS includes built-in support for SOCKS 5 proxies. Once SSH supports SOCKS 5, every native OS X application will automatically be able to take advantage of encrypted SSH SOCKS proxies. In the meantime, we’ll just have to settle for encrypted HTTP proxies [Hack #74] .
