Use Nagios to monitor services

Use Nagios to keep tabs on your network.

Since remote exploits can often crash the service that is being broken into or cause its CPU use to skyrocket, you should monitor the services that are running on your network. Just looking for an open port (such as by using Nmap [Hack #42]) isn’t enough. The machine may be able to respond to a TCP connect request, but the service may be unable to respond (or worse, could be replaced by a different program entirely!). One tool that can help you verify your services at a glance is Nagios (http://www.nagios.org).

Nagios is a network-monitoring application that monitors not only the services running on the hosts on your network, but also the resources on each host, such as CPU usage, disk space, memory usage, running processes, log files, and much more. In the event of a problem, it can notify you through email, pager, or any other method that you define, and you can check the status of your network at a glance by using the web GUI. Nagios is also easily extensible through its plug-in API.

To install Nagios, download the source distribution from the Nagios web site. Then, unpack the source distribution and go into the directory it creates:

$ tar xfz nagios-1.1.tar.gz

$ cd nagios-1.1

Before running Nagios’s configure script, you should create a user and group for Nagios to run as (e.g., nagios). Then run the configure script with a command similar to this:

$ ./configure --with-nagios-user=nagios --with-nagios-grp=nagios

This will install Nagios in /usr/local/nagios. As usual, you can modify this behavior by using the --prefix switch. After the configure script finishes, compile Nagios by running make all. Then become root and run make install to install it. In addition, you can optionally install Nagios's initialization scripts by running make install-init.

If you take a look into the /usr/local/nagios directory right now, you will see that there are four directories. The bin directory contains a single file, nagios, that is the core of the package. This application does the actual monitoring. The sbin directory contains the CGI scripts that will be used in the web-based interface. Inside the share directory, you’ll find the HTML files and documentation. Finally, the var directory is where Nagios will store its information once it starts running.

Before you can use Nagios, you will need a couple of configuration files. These files go into the etc directory, which will be created when you run make install-config. This command also creates a sample copy of each required configuration file and puts them into the etc directory.

At this point the Nagios installation is complete. However, it is not very useful in its current state, because it lacks the actual monitoring applications. These applications, which check whether a particular monitored service is functioning properly, are called plug-ins. Nagios comes with a default set of plug-ins, but they must be downloaded and installed separately.

Download the latest Nagios Plugins package and decompress it. You will need to run the provided configure script to prepare the package for compilation on your system. You will find that the plug-ins are installed in a fashion similar to the actual Nagios program.

To compile the plug-ins, run commands similar to these:

$ ./configure --prefix=/usr/local/nagios \

--with-nagios-user=nagios --with-nagios-grp=nagios

$ make all

You might get notifications about missing programs or Perl modules while the script is running. These warnings are mostly harmless, unless you specifically need the mentioned programs to monitor a service.

After compilation is finished, become root and run make install to install the plug-ins. The plug-ins will be installed in the libexec directory of your Nagios base directory (e.g., /usr/local/nagios/libexec).

There are a few rules that all Nagios plug-ins should follow, which makes them suitable for use by Nagios. For example, all plug-ins provide a --help option that displays information about the plug-in and how it works. This feature is very helpful when you're trying to monitor a new service using a plug-in you haven't used before.

For instance, to learn how the check_ssh plug-in works, run the following command:

$ /usr/local/nagios/libexec/check_ssh

check_ssh (nagios-plugins 1.4.0alpha1) 1.13

The nagios plugins come with ABSOLUTELY NO WARRANTY. You may redistribute

copies of the plugins under the terms of the GNU General Public License.

For more information about these matters, see the file named COPYING.

Copyright (c) 1999 Remi Paulmier <remi@sinfomic.fr>

Copyright (c) 2000-2003 Nagios Plugin Development Team

<nagiosplug-devel@lists.sourceforge.net>

Try to connect to SSH server at specified server and port

Usage: check_ssh [-46] [-t <timeout>] [-p <port>] <host>

check_ssh (-h | --help) for detailed help

check_ssh (-V | --version) for version information

Options:

-h, --help

Print detailed help screen

-V, --version

Print version information

-H, --hostname=ADDRESS

Host name or IP Address

-p, --port=INTEGER

Port number (default: 22)

-4, --use-ipv4

Use IPv4 connection

-6, --use-ipv6

Use IPv6 connection

-t, --timeout=INTEGER

Seconds before connection times out (default: 10)

-v, --verbose

Show details for command-line debugging (Nagios may truncate output)

Send email to nagios-users@lists.sourceforge.net if you have questions

regarding use of this software. To submit patches or suggest improvements,

send email to nagiosplug-devel@lists.sourceforge.net
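For example, a quick manual test of a host's SSH service might look like this (the hostname and the version string in the output are purely illustrative):

$ /usr/local/nagios/libexec/check_ssh -H www.freelinuxcd.org
SSH OK - OpenSSH_3.7.1p2 (protocol 2.0)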

Now that both Nagios and the plug-ins are installed, we are almost ready to begin monitoring our servers. However, Nagios will not even start until it has been configured properly.

The sample configuration files provide a good starting point:

$ cd /usr/local/nagios/etc
$ ls -1

cgi.cfg-sample

checkcommands.cfg-sample

contactgroups.cfg-sample

contacts.cfg-sample

dependencies.cfg-sample

escalations.cfg-sample

hostgroups.cfg-sample

hosts.cfg-sample

misccommands.cfg-sample

nagios.cfg-sample

resource.cfg-sample

services.cfg-sample

timeperiods.cfg-sample

Since these are sample files, the Nagios authors added a .cfg-sample suffix to each file. First, we need to copy or rename each one to end in .cfg, so that the software can use them properly. (If you don’t change the configuration filenames, Nagios will not be able to find them.)

You can either rename each file manually or use the following command to take care of them all at once. Type the following script on a single line:

# for i in *cfg-sample; do mv $i `echo $i | \

sed -e s/cfg-sample/cfg/`; done;

First there is the main configuration file, nagios.cfg. You can pretty much leave everything as is—the Nagios installation process will make sure the file paths used in the configuration file are correct. There’s one option, however, that you might want to change: check_external_commands, which is set to 0 by default. If you would like to be able to directly run commands through the web interface, you will want to set this to 1. Depending on your network environment, this may or may not be an acceptable security risk, as enabling this option will permit the execution of scripts from the web interface. Other options you need to set in cgi.cfg configure which usernames are allowed to run external commands.
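For example (a sketch; the directive names below are the standard Nagios options, and the username oktay is just the example contact used later in this hack), you might set the following.

In nagios.cfg:

check_external_commands=1

In cgi.cfg:

authorized_for_system_commands=oktay
authorized_for_all_service_commands=oktay
authorized_for_all_host_commands=oktay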

To get Nagios running, you must modify all but a few of the sample configuration files. Configuring Nagios to monitor your servers is not as difficult as it looks. To help you, you can use the verbose mode of the Nagios binary by running:

# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

This command will go through the configuration files and report any errors. Start fixing the errors one by one, and run the command again to find the next error. For testing purposes, it is easiest to disable all host and service definitions in the sample configuration files and merely use the files as templates for your own hosts and services. You can keep most of the files as is, but remove the following, which will be created from scratch (one way to set them aside is shown after the list):

hosts.cfg

services.cfg

contacts.cfg

contactgroups.cfg

hostgroups.cfg
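If you would rather keep the sample definitions around for reference than delete them outright, one way to set them aside (assuming the default installation prefix) is:

# cd /usr/local/nagios/etc
# mkdir samples
# mv hosts.cfg services.cfg contacts.cfg contactgroups.cfg hostgroups.cfg samples/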

Start by configuring a host to monitor. We first need to add our host definition and configure some options for that host. You can add as many hosts as you like, but we will stick with one for the sake of simplicity.

Here are the contents of hosts.cfg:

# Generic host definition template

define host{

# The name of this host template - referenced in other host definitions,
# used for template recursion/resolution

name generic-host

# Host notifications are enabled

notifications_enabled 1

# Host event handler is enabled

event_handler_enabled 1

# Flap detection is enabled

flap_detection_enabled 1

# Process performance data

process_perf_data 1

# Retain status information across program restarts

retain_status_information 1

# Retain non-status information across program restarts

retain_nonstatus_information 1

# DONT REGISTER THIS DEFINITION - ITS NOT A REAL HOST,

# JUST A TEMPLATE!

register 0

}

# Host Definition

define host{

# Name of host template to use

use generic-host

host_name freelinuxcd.org

alias Free Linux CD Project Server

address www.freelinuxcd.org

check_command check-host-alive

max_check_attempts 10

notification_interval 120

notification_period 24x7

notification_options d,u,r

}

The first host defined is not a real host but a template from which other host definitions are derived. This mechanism can be seen in other configuration files and makes configuration based on a predefined set of defaults a breeze.

With this setup we are monitoring only one host, www.freelinuxcd.org, to see if it is alive. The host_name parameter is important because other configuration files will refer to this server by this name. Now the host needs to be added to a hostgroup, so that the application knows which contact group to send notifications to.

Here’s what hostgroups.cfg looks like:

define hostgroup{

hostgroup_name flcd-servers

alias The Free Linux CD Project Servers

contact_groups flcd-admins

members freelinuxcd.org

}

This defines a new hostgroup and associates the flcd-admins contact_group with it. Now you’ll need to define that contact group in contactgroups.cfg:

define contactgroup{

contactgroup_name flcd-admins

alias FreeLinuxCD.org Admins

members oktay, verty

}

Here the flcd-admins contact_group is defined with two members, oktay and verty. This configuration ensures that both users will be notified when something goes wrong with a server that flcd-admins is responsible for. The next step is to set the contact information and notification preferences for these users.

Here are the definitions for those two members in contacts.cfg:

define contact{

contact_name oktay

alias Oktay Altunergil

service_notification_period 24x7

host_notification_period 24x7

service_notification_options w,u,c,r

host_notification_options d,u,r

service_notification_commands notify-by-email,notify-by-epager

host_notification_commands host-notify-by-email,host-notify-by-epager

email oktay@freelinuxcd.org

pager dummypagenagios-admin@localhost.localdomain

}

define contact{

contact_name verty

alias David 'Verty' Ky

service_notification_period 24x7

host_notification_period 24x7

service_notification_options w,u,c,r

host_notification_options d,u,r

service_notification_commands notify-by-email,notify-by-epager

host_notification_commands host-notify-by-email

email verty@flcd.org

}

In addition to providing contact details for a particular user, the contact_name in the contacts.cfg file is also used by the CGI scripts (i.e., the web interface) to determine whether a particular user is allowed to access a particular resource. Now that your hosts and contacts are configured, you can start to configure monitoring for individual services on your server.

This is done in services.cfg:

# Generic service definition template

define service{

# The 'name' of this service template, referenced in other service definitions

name generic-service

# Active service checks are enabled

active_checks_enabled 1

# Passive service checks are enabled/accepted

passive_checks_enabled 1

# Active service checks should be parallelized

# (disabling this can lead to major performance problems)

parallelize_check 1

# We should obsess over this service (if necessary)

obsess_over_service 1

# Default is to NOT check service 'freshness'

check_freshness 0

# Service notifications are enabled

notifications_enabled 1

# Service event handler is enabled

event_handler_enabled 1

# Flap detection is enabled

flap_detection_enabled 1

# Process performance data

process_perf_data 1

# Retain status information across program restarts

retain_status_information 1

# Retain non-status information across program restarts

retain_nonstatus_information 1

# DONT REGISTER THIS DEFINITION - ITS NOT A REAL SERVICE, JUST A TEMPLATE!

register 0

}

# Service definition

define service{

# Name of service template to use

use generic-service

host_name freelinuxcd.org

service_description HTTP

is_volatile 0

check_period 24x7

max_check_attempts 3

normal_check_interval 5

retry_check_interval 1

contact_groups flcd-admins

notification_interval 120

notification_period 24x7

notification_options w,u,c,r

check_command check_http

}

# Service definition

define service{

# Name of service template to use

use generic-service

host_name freelinuxcd.org

service_description PING

is_volatile 0

check_period 24x7

max_check_attempts 3

normal_check_interval 5

retry_check_interval 1

contact_groups flcd-admins

notification_interval 120

notification_period 24x7

notification_options c,r

check_command check_ping!100.0,20%!500.0,60%

}

This setup configures monitoring for two services. The first service definition, called HTTP, will monitor whether the web server is up and will notify you if there's a problem. The second definition monitors the ping statistics from the server and notifies you if the response time or packet loss becomes too high. The arguments to check_ping (separated by ! characters) set the warning and critical thresholds: warn when the round-trip average exceeds 100 ms or packet loss exceeds 20%, and go critical at 500 ms or 60% loss. The commands used are check_http and check_ping, which were installed into the libexec directory during the plug-in installation. Take some time to familiarize yourself with the other available plug-ins and configure them similarly to the previous example definitions.

Once you’re happy with your configuration, run Nagios with the -v switch one last time to make sure everything checks out. Then run it as a daemon by using the -d switch:

# /usr/local/nagios/bin/nagios -d /usr/local/nagios/etc/nagios.cfg

That’s all there is to it. Give Nagios a couple of minutes to generate some data, and then point your browser to the machine and look at the pretty service warning lights.


Use process accounting on FreeBSD and Linux to watch users in detail

Keep a detailed audit trail of what’s being done on your systems.

Process accounting allows you to keep detailed logs of every command a user runs, including CPU time and memory used. From a security standpoint, this means the system administrator can gather information about what user ran which command and at what time. This is not only very useful in assessing a break-in or local root compromise, but can also be used to spot attempted malicious behavior by normal users of the system. (Remember that intrusions don’t always come from the outside.)

To enable process accounting, run these commands:

# mkdir /var/account

# touch /var/account/pacct && chmod 660 /var/account/pacct

# /sbin/accton /var/account/pacct
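If you later want to turn process accounting back off (for example, before rotating the accounting file), on most systems running accton with no file argument disables it; newer versions of the GNU accounting tools use an explicit off argument instead:

# /sbin/accton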

Alternatively, if you are running Red Hat or SuSE Linux and have the process accounting package installed, you can run a startup script to enable process accounting. On Red Hat, try this:

# chkconfig psacct on

# /sbin/service psacct start

On SuSE, use these commands:

# chkconfig acct on

# /sbin/service acct start

The process accounting package provides several programs to make use of the data that is being logged. The ac program analyzes total connect time for users on the system.

Running it without any arguments prints the total connect time, in hours, recorded in the current wtmp file:

[andrew@colossus andrew]$ ac

total 106.23

If you want to display connect time for all users who have logged onto the system, use the -p switch:

# ac -p

root 0.07

andrew 106.05

total 106.12

The lastcomm command lets you search the accounting logs by username, command name, or terminal:

# lastcomm andrew

ls andrew ?? 0.01 secs Mon Dec 15 05:58

rpmq andrew ?? 0.08 secs Mon Dec 15 05:58

sh andrew ?? 0.03 secs Mon Dec 15 05:44

gunzip andrew ?? 0.00 secs Mon Dec 15 05:44

# lastcomm bash
bash F andrew ?? 0.00 secs Mon Dec 15 06:44

bash F root stdout 0.01 secs Mon Dec 15 05:20

bash F root stdout 0.00 secs Mon Dec 15 05:20

bash F andrew ?? 0.00 secs Mon Dec 15 05:19

To summarize the accounting information, you can use the sa command. By default it will list all the commands found in the accounting logs and print the number of times that each one has been executed. In the output, re is elapsed (real) time, cp is the combined user and system CPU time, avio is the average number of I/O operations per execution, and k is the average memory (core) usage:

# sa

14 0.04re 0.03cp 0avio 1297k troff

7 0.03re 0.03cp 0avio 422k lastcomm

2 63.90re 0.01cp 0avio 983k info

14 34.02re 0.01cp 0avio 959k less

14 0.03re 0.01cp 0avio 1132k grotty

44 0.02re 0.01cp 0avio 432k gunzip

You can also use the -u flag to output per-user statistics:

# sa -u

root 0.01 cpu 344k mem 0 io which

root 0.00 cpu 1094k mem 0 io bash

root 0.07 cpu 1434k mem 0 io rpmq

andrew 0.02 cpu 342k mem 0 io id

andrew 0.00 cpu 526k mem 0 io bash

andrew 0.01 cpu 526k mem 0 io bash

andrew 0.03 cpu 378k mem 0 io grep

andrew 0.01 cpu 354k mem 0 io id

andrew 0.01 cpu 526k mem 0 io bash

andrew 0.00 cpu 340k mem 0 io hostname

You can peruse the output of these commands every so often to look for suspicious activity, such as increases in CPU usage or commands that are known to be used for mischief.


Aggregate logs from remote sites

Integrate collocated and other remote systems or networks into your central syslog infrastructure.

Monitoring the logs of a remote site or just a collocated server can often be overlooked when faced with the task of monitoring activity on your local network. You could use the traditional syslog facilities to send logging information from the remote network or systems, but since the syslog daemon uses UDP for sending to remote systems, this is not the ideal solution. UDP provides no reliability in its communications, so you risk losing logging information. In addition, the traditional syslog daemon has no means to encrypt the traffic that it sends, so your logs might be viewable by anyone with access to the intermediary networks between you and your remote hosts or networks.

To get around these issues, you’ll have to look beyond the syslog daemon that comes with your operating system and find a replacement. One such replacement syslog daemon is syslog-ng (http://www.balabit.com/products/syslog_ng/). syslog-ng is not only a fully functional replacement for the traditional syslog daemon, but also adds flexible message filtering capabilities, as well as support for logging to remote systems over TCP (in addition to support for the traditional UDP protocol). With the addition of TCP support, you can also employ stunnel or ssh to securely send the logs across untrusted networks.

To build syslog-ng, you will need the libol library package (http://www.balabit.com/downloads/syslog-ng/libol/) in addition to the syslog-ng package. After downloading these packages, unpack them and then build libol:

$ tar xfz libol-0.3.9.tar.gz

$ cd libol-0.3.9

$ ./configure && make

When you build syslog-ng you can have it statically link to libol, so there is no need to fully install the library.

And now to build syslog-ng:

$ tar xfz syslog-ng-1.5.26.tar.gz

$ cd syslog-ng-1.5.26

$ ./configure --with-libol=../libol-0.3.9

$ make

If you want to compile in TCP wrappers support, you can add the --enable-tcp-wrapper flag to the configure script. After syslog-ng is finished compiling, become root and run make install. This will install the syslog-ng binary and manpages. To configure the daemon, create the /usr/local/etc/syslog-ng directory and then create a syslog-ng.conf to put in it. To start off with, you can use one of the sample configuration files in the doc directory of the syslog-ng distribution.

There are five types of configuration file entries for syslog-ng, each of which begins with a specific keyword. The options entry allows you to tweak the behavior of the daemon, such as how often the daemon will sync the logs to the disk, whether the daemon will create directories automatically, and hostname expansion behavior. source entries tell syslog-ng where to collect log entries from. A source can include Unix sockets, TCP or UDP sockets, files, or pipes. destination entries allow you to specify possible places for syslog-ng to send logs to. You can specify files, pipes, Unix sockets, TCP or UDP sockets, TTYs, or programs. Sources and destinations are then combined with filters (using the filter keyword), which let you select syslog facilities and log levels. Finally, these are all used together in a log entry to define precisely where the information is logged. Thus you can arbitrarily combine any source, select what syslog facilities and levels you want from it, and then route it to any destination. This is what makes syslog-ng an incredibly powerful and flexible tool.

To set up syslog-ng on the remote end so that it can replace the syslogd on the system and send traffic to a remote syslog-ng, you’ll first need to translate your syslog.conf to equivalent source, destination, and log entries.

Here’s the syslog.conf for a FreeBSD system:

*.err;kern.debug;auth.notice;mail.crit /dev/console

*.notice;kern.debug;lpr.info;mail.crit;news.err /var/log/messages

security.* /var/log/security

auth.info;authpriv.info /var/log/auth.log

mail.info /var/log/maillog

lpr.info /var/log/lpd-errs

cron.* /var/log/cron

*.emerg *

First you’ll need to configure a source. Under FreeBSD, /dev/log is a link to /var/run/log. The following source entry tells syslog-ng to read entries from this file:

source src { unix-dgram("/var/run/log"); internal(); };

If you were using Linux, you would specify unix-stream and /dev/log like this:

source src { unix-stream("/dev/log"); internal(); };

The internal() entry is for messages generated by syslog-ng itself. Notice that you can include multiple sources in a source entry. Next, include destination entries for each of the actual log files:

destination console { file("/dev/console"); };

destination messages { file("/var/log/messages"); };

destination security { file("/var/log/security"); };

destination authlog { file("/var/log/auth.log"); };

destination maillog { file("/var/log/maillog"); };

destination lpd-errs { file("/var/log/lpd-errs"); };

destination cron { file("/var/log/cron"); };

destination slip { file("/var/log/slip.log"); };

destination ppp { file("/var/log/ppp.log"); };

destination allusers { usertty("*"); };

In addition to these destinations, you’ll also want to specify one for remote logging to another syslog-ng process. This can be done with a line similar to this:

destination loghost { tcp("192.168.0.2" port(5140)); };

The port number can be any available TCP port.
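To actually forward the local messages to the log host, combine the local source with this destination in a log entry (add filters here as well if you only want to ship certain facilities or priorities):

log { source(src); destination(loghost); };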

Defining the filters is straightforward. You can simply create one for each syslog facility and log level, or you can create them according to those used in your syslog.conf. If you do the latter, you will only have to specify one filter in each log statement, but it will still take some work to create your filters.

Here are example filters for the syslog facilities:

filter f_auth { facility(auth); };

filter f_authpriv { facility(authpriv); };

filter f_console { facility(console); };

filter f_cron { facility(cron); };

filter f_daemon { facility(daemon); };

filter f_ftp { facility(ftp); };

filter f_kern { facility(kern); };

filter f_lpr { facility(lpr); };

filter f_mail { facility(mail); };

filter f_news { facility(news); };

filter f_security { facility(security); };

filter f_user { facility(user); };

filter f_uucp { facility(uucp); };

and examples for the log levels:

filter f_emerg { level(emerg); };

filter f_alert { level(alert..emerg); };

filter f_crit { level(crit..emerg); };

filter f_err { level(err..emerg); };

filter f_warning { level(warning..emerg); };

filter f_notice { level(notice..emerg); };

filter f_info { level(info..emerg); };

filter f_debug { level(debug..emerg); };

Now you can combine the source with the proper filter and destination within the log entries:

# *.err;kern.debug;auth.notice;mail.crit /dev/console

log { source(src); filter(f_err); destination(console); };

log { source(src); filter(f_kern); filter(f_debug); destination(console); };

log { source(src); filter(f_auth); filter(f_notice); destination(console); };

log { source(src); filter(f_mail); filter(f_crit); destination(console); };

# *.notice;kern.debug;lpr.info;mail.crit;news.err /var/log/messages

log { source(src); filter(f_notice); destination(messages); };

log { source(src); filter(f_kern); filter(f_debug); destination(messages); };

log { source(src); filter(f_lpr); filter(f_info); destination(messages); };

log { source(src); filter(f_mail); filter(f_crit); destination(messages); };

log { source(src); filter(f_news); filter(f_err); destination(messages); };

# security.* /var/log/security

log { source(src); filter(f_security); destination(security); };

# auth.info;authpriv.info /var/log/auth.log

log { source(src); filter(f_auth); filter(f_info); destination(authlog); };

log { source(src); filter(f_authpriv); filter(f_info); destination(authlog); };

# mail.info /var/log/maillog

log { source(src); filter(f_mail); filter(f_info); destination(maillog); };

# lpr.info /var/log/lpd-errs

log { source(src); filter(f_lpr); filter(f_info); destination(lpd-errs); };

# cron.* /var/log/cron

log { source(src); filter(f_cron); destination(cron); };

# *.emerg *

log { source(src); filter(f_emerg); destination(allusers); };

You can set up the machine that will be receiving the logs in much the same way as if you were replacing the currently used syslogd.

To configure syslog-ng to receive messages from a remote host, you must specify a source entry:

source r_src { tcp(ip("192.168.0.2") port(5140)); };

Alternatively, you can dump all the logs from the remote machines into the same destinations that you use for your local log entries. This is not really recommended, because it can be a nightmare to manage, but can be done by including multiple source drivers in the source entry that you use for your local logs:

source src {

unix-dgram("/var/run/log");

tcp(ip("192.168.0.2") port(5140));

internal();

};

Now logs gathered from remote hosts will appear in any of the destinations that were combined with this source.

If you would like all logs from remote hosts to go into a separate file named for each host in /var/log, you could use a destination like this:

destination r_all { file("/var/log/$HOST"); };

syslog-ng will expand the $HOST macro to the hostname of the system sending it logs and create a file named after it in /var/log. An appropriate log entry to use with this would be:

log { source(r_src); destination(r_all); };

However, an even better method is to recreate all of the remote syslog-ng log files on your central log server. For instance, a destination for a remote machine’s messages file would look like this:

destination fbsd_messages { file("/var/log/$HOST/messages"); };

Notice here that the $HOST macro is used in place of a directory name. If you are using a destination entry like this, be sure to create the directory beforehand, or use the create_dirs() option:

options { create_dirs(yes); };

syslog-ng’s macros are a very powerful feature. For instance, if you wanted to separate logs by hostname and day, you could use a destination like this:

destination fbsd_messages {

file("/var/log/$HOST/$YEAR.$MONTH.$DAY/messages");

};

You can combine the remote source with the appropriate destinations for the logs coming in from the network just as you did when configuring syslog-ng for local logging—just specify the remote source with the proper destination and filters.

Another neat thing you can do with syslog-ng is collect logs from a number of remote hosts and then send all of those to yet another syslog-ng daemon. You can do this by combining a remote source and a remote destination with a log entry:

log { source(r_src); destination(loghost); };

Since syslog-ng is now using TCP ports, you can use any encrypting tunnel you like to secure the traffic between your syslog-ng daemons. You can use SSH port forwarding [Hack #72] or stunnel [Hack #76] to create an encrypted channel between each of your servers. By limiting connections on the listening port to include only localhost (using firewall rules, as in [Hack #33] or [Hack #34] ), you can eliminate the possibility of bogus log entries or denial-of-service attacks.
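As a sketch of the SSH approach (the hostnames, the account name, and the choice of local port 5140 are assumptions), you could forward a local port on each client to the log host:

# ssh -f -N -L 5140:127.0.0.1:5140 syslog@loghost.example.com

and then point the client's loghost destination at the local end of the tunnel:

destination loghost { tcp("127.0.0.1" port(5140)); };

On the log host itself, bind the listening source to 127.0.0.1 as well, so that only tunneled connections are accepted.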

Server logs are among the most critical information that a system administrator needs to do her job. Using new tools and strong encryption, you can keep your valuable log data safe from prying eyes.


Automatic log monitor

Automatically generated log file summaries are fine for keeping abreast of what’s happening with your systems and networks, but if you want to know about events as they happen, you’ll need to look elsewhere. One tool that can help keep you informed in real time is swatch (http://swatch.sourceforge.net), the “Simple WATCHer.”

Swatch is a highly configurable log file monitor that can watch a file for user-defined triggers and dispatch alerts in a variety of ways. It consists of a Perl program, a configuration file, and a library of actions to take when it sees a trigger in the file it is monitoring.

To install swatch, download the package, unpack it, and go into the directory that it creates. Then run these commands:

# perl Makefile.PL

# make && make install

Before swatch will build, the Date::Calc, Date::Parse, File::Tail, and Time::HiRes Perl CPAN modules must be installed. If you get an error message like the following when you run perl Makefile.PL, then you will need to install those modules:

Warning: prerequisite Date::Calc 0 not found.

Warning: prerequisite Date::Parse 0 not found.

Warning: prerequisite Time::HiRes 1.12 not found.

Writing Makefile for swatch

If you already have Perl’s CPAN module installed, simply run these commands:

# perl -MCPAN -e "install Date::Calc"

# perl -MCPAN -e "install Date::Parse"

# perl -MCPAN -e "install File::Tail"

# perl -MCPAN -e "install Time::HiRes"

By default, swatch looks for its configuration in a file called .swatchrc in the current user’s home directory. This file contains regular expressions to watch for in the file that you are monitoring with swatch. If you want to use a different configuration file, tell swatch by using the -c command-line switch.

For instance, to use /etc/swatch/messages.conf to monitor /var/log/messages, you could invoke swatch like this:

# swatch -c /etc/swatch/messages.conf -t /var/log/messages

The general format for entries in this file is the following:

watchfor /<regex>/

<action1>

[action2]

[action3]

Alternatively, you can ignore specific log messages that match a regular expression by using the ignore keyword:

ignore /<regex>/

You can also specify multiple regular expressions by separating them with the | character.

Swatch is very configurable in what actions it can take when a string matches a regular expression. Some useful actions that you can specify in your .swatchrc are echo, write, exec, mail, pipe, and throttle.

The echo action simply prints the matching line to the console; additionally, you can specify what text mode it will use. Thus, lines can be printed to the console as bold, underlined, blinking, inverted, or colored text.

For instance, if you wanted to print a matching line in red, blinking text, you could use the following action:

echo blink,red

The write action is similar to the echo action, except it does not support text modes. It can, however, write the matching line to any specified user’s TTY:

write user:user2:…

The exec action allows you to execute any command:

exec <command>

You can use the $0 or $* variables to pass the entire matching line to the command that you execute, $1 to pass the first field in the line, $2 for the second, and so on. So, if you wanted to pass only the second and third fields from the matching line to the command mycommand, you could use an action like this:

exec "mycommand $2 $3"

The mail action is especially useful if you have an email-enabled or text messaging-capable cell phone or pager. When using the mail action, you can list as many recipient addresses as you like, in addition to specifying a subject line. Swatch will send the line that matched the regular expression to these addresses with the subject you set.

Here is the general form of the mail action:

mail addresses=address:address2:…,subject=mysubject

When using the mail action, be sure to escape the @ characters in the email addresses (i.e., @ becomes \@). If you have any spaces in the subject of the email, you should escape those as well.

In addition to the exec action, swatch can execute external commands with the pipe action as well. The only difference is that instead of passing arguments to the command, swatch will execute the command and pipe the matching line to it. To use this action, just put the pipe keyword followed by the command you want to use.

Alternatively, to increase performance, you can use the keep_open option to keep the pipe to the program open until swatch exits or needs to perform a different pipe action:

pipe mycommand,keep_open

One problem with executing commands or sending emails whenever a specific string occurs in a log message is that sometimes the same log message may be generated over and over again very rapidly. Clearly, if this were to happen, you wouldn’t want to get paged or emailed 100 times within a 10-minute period. To alleviate this problem, swatch provides the throttle action. This action lets you suppress a specific message or any message that matches a particular regular expression for a specified amount of time.

The general form of the throttle action is:

throttle h:m:s

The throttle action will throttle based on the contents of the message by default. If you would like to throttle the actions based on the regular expression that caused the match, you can add a ,use=regex to the end of your throttle statement.
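Putting these pieces together, a minimal .swatchrc entry that watches for failed SSH logins might look like this (the regular expression, the address, and the ten-minute throttle window are illustrative; adjust them to your own logs and contacts):

watchfor /sshd.*Failed password/
    echo bold
    mail addresses=admin\@example.com,subject=SSH_login_failure
    throttle 0:10:0

A matching line is highlighted on the console and mailed to the administrator, but repeated matches within ten minutes are suppressed.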

Swatch is an incredibly useful tool, but it can take some work to create a good .swatchrc. The best way to figure out what to look for is to examine your log files for behavior that you want to monitor closely.


Automatically summarize your logs

Wade through that haystack of logs to find the proverbial needle.

If you're logging almost every piece of information you can from all services and hosts on your network, no doubt you're drowning in a sea of information. One way to keep abreast of the real issues affecting your systems is to summarize your logs. This is easy with the logwatch tool (http://www.logwatch.org).

Logwatch analyzes your system logs over a given period of time and automatically generates reports, and it can easily be run from cron so that it can email you the results. Logwatch is available with most Red Hat Linux distributions. You can also download RPM packages from the project’s web site if you are using another RPM-based Linux distribution.

To install logwatch from source, download the source code package. Since logwatch is a collection of scripts, there is nothing to compile; installing it is as simple as copying the scripts and configuration files into place.

You can install it by running commands similar to these:

# tar xfz logwatch-5.0.tar.gz

# cd logwatch-5.0

# mkdir /etc/log.d

# cp -R conf lib scripts /etc/log.d

You can also install the manpage and, for added convenience, create a link from the logwatch.pl script to /usr/sbin/logwatch:

# cp logwatch.8 /usr/share/man/man8

# (cd /usr/sbin && \

ln -s ../../etc/log.d/scripts/logwatch.pl logwatch)

Running the following command will give you a taste of the summaries logwatch creates:

# logwatch --print | less

################### LogWatch 4.3.1 (01/13/03) ####################

Processing Initiated: Sat Dec 27 21:12:26 2003

Date Range Processed: yesterday

Detail Level of Output: 0

Logfiles for Host: colossus

################################################################

——————— SSHD Begin ————————

Users logging in through sshd:

andrew logged in from kryten.nnc (192.168.0.60) using password: 2 Time(s)

———————- SSHD End ————————-

###################### LogWatch End #########################

If you have an /etc/cron.daily directory, you can simply make a symbolic link from the logwatch.pl script to /etc/cron.daily/logwatch.pl, and the script will be run daily. Alternatively, you can create an entry in root's crontab, in which case you can also modify logwatch's behavior by passing it command-line switches. For instance, you can change the email address that logwatch sends reports to by using the --mailto command-line option. They are sent to the local root account by default, which is probably not what you want.
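For example, a root crontab entry like the following (the hour and the address are assumptions) runs the report every morning and mails it somewhere more useful than the local root mailbox:

0 4 * * * /usr/sbin/logwatch --mailto admin@example.com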

Logwatch supports most standard log files without any additional configuration, but you can add support for any type of log file. To do this, you first need to create a logfile group configuration for the new file type in /etc/log.d/conf/logfiles. This file just needs to contain an entry pointing logwatch to the logfile for the service and another entry specifying a globbing pattern for any archived log files for that service.

For example, if you had a service called myservice, you could create /etc/log.d/conf/logfiles/myservice.conf with these contents:

LogFile = /var/log/myservice

Archive = /var/log/myservice.*

Next, you need to create a service definition file. This should be called /etc/log.d/conf/services/myservice.conf and should contain the following line:

LogFile = myservice

Finally, since logwatch is merely a framework for generating log file summaries, you’ll also need to create a script in /etc/log.d/scripts/services called myservice. When logwatch executes, it will strip all time entries from the logs and pass the rest of the log entry through standard input to the myservice script. Therefore, you must write your script to read from standard input, parse out the pertinent information, and then print it to standard out.
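As a rough sketch (hypothetical; logwatch feeds the date-stripped lines to the filter on standard input, and while the stock filters are written in Perl, any executable that reads stdin and writes a summary to stdout should work), a script that simply counts identical messages could look like this:

#!/bin/sh
# /etc/log.d/scripts/services/myservice
# Count duplicate log lines received on stdin and print one
# summary line per distinct message.
sort | uniq -c | sort -rn | while read count line; do
    echo "$line: $count Time(s)"
done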

This just scratches the surface of how to get logwatch running on your system. There is a great deal of information in the HOWTO-Make-Filter, which is included with the logwatch distribution.


Steer syslog

Make syslog work harder, and spend less time looking through huge log files.

The default syslog installation on many distributions doesn’t do a very good job of filtering classes of information into separate files. If you see a jumble of messages from Sendmail, sudo, BIND, and other system services in /var/log/messages, then you should probably review your /etc/syslog.conf.

There are a number of facilities and priorities that syslog can filter on. These facilities include auth, auth-priv, cron, daemon, kern, lpr, mail, news, syslog, user, uucp, and local0 through local7. In addition, each facility can have one of eight priorities: debug, info, notice, warning, err, crit, alert, and emerg.

Note that applications decide for themselves at what facility and priority to log (and the best apps let you choose), so they may not be logged as you expect. Here’s a sample /etc/syslog.conf that attempts to shuffle around what gets logged where:

auth.warning /var/log/auth

mail.err /var/log/maillog

kern.* /var/log/kernel

cron.crit /var/log/cron

*.err;mail.none /var/log/syslog

*.info;auth.none;mail.none /var/log/messages

#*.=debug /var/log/debug

local0.info /var/log/cluster

local1.err /var/log/spamerica

All of the lines in this example will log the specified priority (or higher) to the respective file. The special priority none tells syslog not to bother logging the specified facility at all. The local0 through local7 facilities are supplied for use with your own programs, however you see fit. For example, the /var/log/spamerica file fills with local1.err (or higher) messages that are generated by our spam processing job. It’s nice to have those messages separate from the standard mail delivery log (which is in /var/log/maillog).

The commented *.=debug line is useful when debugging daemonized services. It tells syslog to log only debug-priority messages of any facility, and it generally shouldn't be left enabled (unless you don't mind filling your disks with debug logs). Another approach is to log debug information to a fifo. This way, debug logs take up no space, but they will disappear unless a process is watching the fifo. To log to a fifo, first create it in the filesystem:

# mkfifo -m 0664 /var/log/debug

Then amend the debug line in syslog.conf to include a |, like this:

*.=debug |/var/log/debug

Now debug information is constantly logged to the fifo and can be viewed with a command like less -f /var/log/debug. A fifo is also handy if you want a process to constantly watch all system messages and perhaps notify you via email about a critical system message. Try making a fifo called /var/log/monitor, and add a rule like this to your syslog.conf:

*.* |/var/log/monitor

Now every message (at every priority) is passed to the /var/log/monitor fifo, and any process watching it can react accordingly, all without taking up any disk space.
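For example, a simple watcher (just a sketch; the patterns and the recipient are assumptions) could read the fifo in a loop and mail any line that looks serious:

while read line; do
    case "$line" in
        *panic*|*crit*) echo "$line" | mail -s "syslog alert" root ;;
    esac
done < /var/log/monitor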
Mark Who?

Do you notice a bunch of lines like this in /var/log/messages?

Dec 29 18:33:35 catlin -- MARK --

Dec 29 18:53:35 catlin -- MARK --

Dec 29 19:13:35 catlin -- MARK --

Dec 29 19:33:35 catlin -- MARK --

Dec 29 19:53:35 catlin -- MARK --

Dec 29 20:13:35 catlin -- MARK --

Dec 29 20:33:35 catlin -- MARK --

Dec 29 20:53:35 catlin -- MARK --

Dec 29 21:13:35 catlin -- MARK --

These are generated by the mark functionality of syslog, as a way of “touching base” with the system, so that you can (theoretically) tell if syslog has unexpectedly died. Most times, this only serves to fill your log files, and unless you are having problems with syslog, you probably don’t need it. To turn this off, pass the -m 0 switch to syslogd (after first killing any running syslogd), like this:

# killall syslogd; /usr/sbin/syslogd -m 0
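
To make the change survive a reboot, add the flag wherever your distribution keeps syslogd’s startup options; on Red Hat-style systems, for example, that is the SYSLOGD_OPTIONS line in /etc/sysconfig/syslog:

SYSLOGD_OPTIONS="-m 0"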

If all of this fiddling about with facilities and priorities strikes you as arcane Unix speak, you’re not alone. These examples are provided for systems that include the default (and venerable) syslogd daemon. If you have the opportunity to install a new syslogd, you will likely want to look into syslog-ng. This new implementation of syslogd allows much more flexible filtering and a slew of new features. We take a look at some of what is possible with syslog-ng in [Hack #59] .


Run central syslog

Keep your logs safe from attackers by storing them remotely.

Once an intruder has gained entry into one of your systems, how are you to know when or if this has happened? By checking your logs, of course. What if the intruder modified the logs? In this situation, centralized logging definitely saves the day. After all, if a machine is compromised but the log evidence isn’t kept on that machine, it’s going to be much more difficult for the attacker to cover his tracks. In addition to providing an extra level of protection, it’s also much easier to monitor the logs for a whole network of machines when they’re all in one place.

To quickly set up a central syslog server, just start your syslogd with the switch that causes it to listen for messages from remote machines on a UDP port.

This is done under Linux by specifying the -r command-line option:

# /usr/sbin/syslogd -m 0 -r

Under FreeBSD, run syslogd without the -s command-line option:

# /usr/sbin/syslogd

The -s option causes FreeBSD’s syslogd to not listen for remote connections. FreeBSD’s syslogd also allows you to restrict what hosts it will receive messages from. To set these restrictions, use the -a option, which has the following forms:

ipaddr/mask[:service]

domain[:service]

*domain[:service]

The first form allows you to specify a single IP address or group of IP addresses by using the appropriate netmask. The service option allows you to specify a source UDP port. If nothing is specified, it defaults to port 514, which is the default for the syslog service. The next two forms allow you to restrict access to a specific domain name, as determined by a reverse lookup of the IP address of the connecting host. The difference between the second and third is the use of the * wildcard character, which specifies that all machines ending in domain may connect.
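
For example, to accept messages only from the 192.168.1.0/24 network and from hosts in example.com (both are placeholders for your own network), you might start FreeBSD’s syslogd like this:

# /usr/sbin/syslogd -a 192.168.1.0/24 -a '*.example.com'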

Moving on, OpenBSD uses the -u option to listen for remote connections:

# /usr/sbin/syslogd -a /var/empty/dev/log -u

whereas Solaris’s syslogd uses -T:

# /usr/sbin/syslogd -T

Now let’s set up the clients. If you want to forward all logging traffic from a machine to your central log host, simply put the following in your /etc/syslog.conf:

*.* @loghost

You can either make this the only line in the configuration file, in which case messages will be logged only to the remote host, or add it to what is already there, in which case logs will be stored both locally and remotely for safekeeping.

One drawback to remote logging is that the stock syslogd on most operating systems provides no authentication or access control over who may write to a central log host. A firewall can keep outside hosts from reaching your log server, but someone who has already gained access to your local network can easily spoof his network connection and bypass any firewall rules that you set up. If you’ve determined that this is a concern for your network, take a look at [Hack #59], which discusses one method for setting up remote logging using public-key authentication and SSL-encrypted connections.


SFS: secure filesharing on Unix

Use SFS to help secure your remote filesystems.

If you are using Unix systems and sharing files on your network, you are most likely using NFS. However, there are a lot of security problems, not only with individual implementations, but also with the design of the protocol itself. For instance, if a user can spoof an IP address and mount an NFS share that is meant only for a certain computer, she essentially has root access to all the files on that share. In addition, NFS employs secret file handles that are used with each file request. Since NFS does not encrypt its traffic, it is easy for attackers to sniff these handles, and with a valid handle in hand they essentially get total root access to the remote filesystem.

SFS (http://www.fs.net), the Self-certifying File System, fixes all of these problems by employing a drastically different design philosophy. NFS was created with the notion that you can (and should) trust your network. SFS has been designed from the beginning with the idea that no network should ever be trusted until it definitively proves its identity. To accomplish this, SFS makes use of public keys on both the client and server ends. It uses these keys to verify the identity of servers and clients, and it also provides access control on the server side. One particularly nice side effect of such strong encryption is that SFS provides a much finer-grained level of access control than NFS. With NFS, you are limited to specifying which hosts can or cannot connect to a given exported filesystem. To access an SFS server, a user must create a key pair and then authorize it by logging into the SFS server and registering the key manually.

Building SFS can take up quite a lot of disk space. Before you attempt to build SFS, make sure you have at least 550MB of disk space available on the filesystem on which you’ll be compiling SFS. You will also need to make sure that you have GMP (http://www.swox.com/gmp/), the GNU multiple precision math library, installed. Before you begin to build SFS, you will also need to create a user and group for SFS’s daemons. By default, these are both called sfs. If you want to use a different user or group, you can do this by passing options to the configure script.
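
On a Linux system, for example, you might create the user and group like this (the home directory and shell are arbitrary; pick whatever your site prefers):

# groupadd sfs
# useradd -g sfs -d /var/empty -s /sbin/nologin sfs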

Once your system is ready, you can build SFS by simply typing this command:

$ ./configure && make

Once that process is finished, become root and type make install.

If you want to use a user and group other than sfs, you can specify these with the --with-sfsuser and --with-sfsgroup options:

$ ./configure --with-sfsuser=nobody --with-sfsgroup=nobody

Building SFS can take quite a bit of time, so you may want to take the opportunity to enjoy a cup of coffee, a snack, or maybe even a full meal, depending on the speed of your machine and the amount of memory it has.

After SFS has finished building and you have installed it, you can test it out by connecting to the SFS project’s public server. You can do this by starting the SFS client daemon, sfscd, and then changing to the directory that the SFS server will be mounted under:

# sfscd

# cd /sfs/@sfs.fs.net,uzwadtctbjb3dg596waiyru8cx5kb4an

# ls

CONGRATULATIONS cvs pi0 reddy sfswww

# cat CONGRATULATIONS

You have set up a working SFS client.

#

sfscd automatically creates the /sfs directory and the directory for the SFS server. Note that SFS relies on the operating system’s portmap daemon and NFS mounter; you’ll need to have those running before running the client.

To set up an SFS server, first log into your server and generate a public and private key pair:

# mkdir /etc/sfs

# sfskey gen -P /etc/sfs/sfs_host_key

sfskey will then ask you to bang on the keys for a little while in order to gather entropy for the random number generator.

Now you will need to create a configuration file for sfssd, the SFS server daemon. To do this, create a file in /etc/sfs called sfsrwsd_config, which is where you configure the filesystem namespace that SFS will export to other hosts.

If you wanted to export the /home filesystem, you would create a configuration file like this:

Export /var/sfs/root /

Export /home /home

Then you would need to create the /var/sfs/root and /var/sfs/root/home directories. After that, you would create NFS exports so that the /home filesystem can be mounted under /var/sfs/root/home; these are then reexported by sfssd. The NFS exports only need to allow mounting from localhost.
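
In other words, something along these lines:

# mkdir -p /var/sfs/root/home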

Here’s what /etc/exports looks like for exporting /home:

/var/sfs/root localhost(rw)

/home localhost(rw)

This exports file is for Linux. If you are running the SFS server on another operating system (such as Solaris or OpenBSD), consult your operating system’s mountd manpage for the proper way to add these shares.

Now start your operating system’s NFS server. Once NFS has started, you can then start sfssd. After attempting to connect to the sfssd server, you should see some messages in your logs like these:

Dec 12 12:29:14 colossus : sfssd: version 0.7.2, pid 3503

Dec 12 12:29:14 colossus : rexd: version 0.7.2, pid 3505

Dec 12 12:29:14 colossus : sfsauthd: version 0.7.2, pid 3506

Dec 12 12:29:14 colossus : rexd: serving @colossus.nnc,fd82m36uwxj6m3q8tawp56ztgsvu7g77

Dec 12 12:29:14 colossus : rexd: spawning /usr/local/lib/sfs-0.7.2/ptyd

Dec 12 12:29:15 colossus rpc.mountd: authenticated mount request from localhost.localdomain:715 for /var/sfs/root (/var/sfs/root)

Dec 12 12:29:15 colossus rpc.mountd: authenticated mount request from localhost.localdomain:715 for /home (/home)

Dec 12 12:29:15 colossus : sfsauthd: serving @colossus.nnc,fd82m36uwxj6m3q8tawp56ztgsvu7g77

Dec 12 12:29:16 colossus : sfsrwsd: version 0.7.2, pid 3507

Dec 12 12:29:16 colossus : sfsrwsd: serving /sfs/@colossus.nnc,fd82m36uwxj6m3q8tawp56ztgsvu7g77

The last log entry shows the path that users can use to mount your filesystem. Before mounting any filesystems on your server, users will have to create a key pair and register it with your server. They can do this by logging into your server and running the sfskey command:

$ sfskey register

sfskey: /home/andrew/.sfs/random_seed: No such file or directory

sfskey: creating directory /home/andrew/.sfs

sfskey: creating directory /home/andrew/.sfs/authkeys

/var/sfs/sockets/agent.sock: No such file or directory

sfskey: sfscd not running, limiting sources of entropy

Creating new key: andrew@colossus.nnc#1 (Rabin)

Key Label: andrew@colossus.nnc#1

Enter passphrase:

Again:

sfskey needs secret bits with which to seed the random number generator.

Please type some random or unguessable text until you hear a beep:

DONE

UNIX password:

colossus.nnc: New SRP key: andrew@colossus.nnc/1024

wrote key: /home/andrew/.sfs/authkeys/andrew@colossus.nnc#1

Alternatively, if you already have an existing key pair on another server, you can type sfskey user@otherserver instead. This will retrieve the key from the remote machine and register it with the server you are currently logged into.

Now that you have registered a key with the server, you can log into the SFS server from another machine. This is also done with the sfskey program:

$ sfskey login andrew@colossus.nnc

Passphrase for andrew@colossus.nnc/1024:

SFS Login as andrew@colossus.nnc

Now try to access the remote server:

$ cd /sfs/@colossus.nnc,fd82m36uwxj6m3q8tawp56ztgsvu7g77

$ ls

home

As you can see, SFS is a very powerful tool for sharing files across a network, and even across the Internet. Not only does it provide security, but it also provides a unique and universal method for referencing a remote host and its exported filesystems. You can even put your home directory on an SFS server, simply by symlinking /home to the self-certifying pathname of the exported filesystem.
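
For example, using the self-certifying pathname from the server above and assuming you have first moved the existing local /home out of the way, the symlink might look like this:

# ln -s /sfs/@colossus.nnc,fd82m36uwxj6m3q8tawp56ztgsvu7g77/home /home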


Securing MySQL

Basic steps you can take to harden your MySQL installation.

MySQL (http://www.mysql.com), one of the most popular open source database systems available today, is often used in conjunction with both the Apache web server and the PHP scripting language to drive dynamic content on the Web. However, MySQL is a complex piece of software internally and, given the fact that it often has to interact both locally and remotely with a broad range of other programs, special care should be taken to secure it as much as possible.

Some steps you can take are running MySQL in a chrooted environment [Hack #10], running it as a nonroot user, and disabling MySQL’s ability to load data from local files. Luckily, none of these are as hard to do as they may sound. To start with, let’s look at how to chroot() MySQL.

First create a user and group for MySQL to run as. Next, you’ll need to download the MySQL source distribution. After you’ve done that, unpack it and go into the directory that it created. Run this command to build MySQL and set up its directory structure for chrooting:

$ ./configure --prefix=/mysql --with-mysqld-ldflags=-all-static && make

This configures MySQL to be installed in /mysql and statically links the mysqld binary. This will make setting up the chroot environment much easier, since you won’t need to copy any additional libraries to the environment.

After the compilation finishes, become root and then run these commands to install MySQL:

# make install DESTDIR=/mysql_chroot && ln -s /mysql_chroot/mysql /mysql

# scripts/mysql_install_db

The first command installs MySQL, but instead of placing the files in /mysql, it places them in /mysql_chroot/mysql and then creates a symbolic link from that directory to /mysql, which makes administering MySQL much easier after installation. The second command creates MySQL’s default databases. If you hadn’t created the symbolic link before running it, the mysql_install_db script would fail, because it expects to find MySQL installed beneath /mysql. Many other scripts and programs expect this too, so the symbolic link will make your life easier.

Now you need to set up the correct directory permissions so that MySQL will be able to function properly. To do this, run these commands:

# chown -R root:mysql /mysql

# chown -R mysql /mysql/var

Now try running MySQL:

# /mysql/bin/mysqld_safe&

Starting mysqld daemon with databases from /mysql/var

# ps -aux | grep mysql | grep -v grep

root 10137 0.6 0.5 4156 744 pts/2 S 23:01 0:00 /bin/sh /mysql/bin/mysqld_safe

mysql 10150 7.0 9.3 46224 11756 pts/2 S 23:01 0:00 [mysqld]

mysql 10151 0.0 9.3 46224 11756 pts/2 S 23:01 0:00 [mysqld]

mysql 10152 0.0 9.3 46224 11756 pts/2 S 23:01 0:00 [mysqld]

mysql 10153 0.0 9.3 46224 11756 pts/2 S 23:01 0:00 [mysqld]

mysql 10154 0.0 9.3 46224 11756 pts/2 S 23:01 0:00 [mysqld]

mysql 10155 0.3 9.3 46224 11756 pts/2 S 23:01 0:00 [mysqld]

mysql 10156 0.0 9.3 46224 11756 pts/2 S 23:01 0:00 [mysqld]

mysql 10157 0.0 9.3 46224 11756 pts/2 S 23:01 0:00 [mysqld]

mysql 10158 0.0 9.3 46224 11756 pts/2 S 23:01 0:00 [mysqld]

mysql 10159 0.0 9.3 46224 11756 pts/2 S 23:01 0:00 [mysqld]

# /mysql/bin/mysqladmin shutdown

040103 23:02:45 mysqld ended

[1]+ Done /mysql/bin/mysqld_safe

Now that you know MySQL is working outside of its chroot environment, you can create the additional files and directories it will need to work inside the chroot environment:

# mkdir /mysql_chroot/tmp /mysql_chroot/dev

# chmod 1777 /mysql_chroot/tmp

# ls -l /dev/null

crw-rw-rw- 1 root root 1, 3 Jan 30 2003 /dev/null

# mknod /mysql_chroot/dev/null c 1 3

Now try running mysqld in the chrooted environment:

# /usr/sbin/chroot /mysql_chroot /mysql/libexec/mysqld -u 100

Here the UID of the user you want mysqld to run as is specified with the -u option. This should correspond to the UID of the user created earlier.
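
If you don’t remember the numeric UID offhand, you can look it up (assuming you named the user mysql):

# id -u mysql
100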

To ease management, you may want to modify the mysqld_safe shell script to chroot mysqld for you. You can accomplish this by finding the lines where mysqld is called and modifying them to use the chroot program.

To do this, open up /mysql/bin/mysqld_safe and locate the block of lines that looks like this:

if test -z "$args"

then

$NOHUP_NICENESS $ledir/$MYSQLD $defaults \

--basedir=$MY_BASEDIR_VERSION \

--datadir=$DATADIR $USER_OPTION \

--pid-file=$pid_file --skip-locking >> $err_log 2>&1

else

eval "$NOHUP_NICENESS $ledir/$MYSQLD $defaults \

--basedir=$MY_BASEDIR_VERSION \

--datadir=$DATADIR $USER_OPTION \

--pid-file=$pid_file --skip-locking $args >> $err_log 2>&1"

fi

Change them to look like this:

if test -z "$args"

then

$NOHUP_NICENESS /usr/sbin/chroot /mysql_chroot \

$ledir/$MYSQLD $defaults \

--basedir=$MY_BASEDIR_VERSION \

--datadir=$DATADIR $USER_OPTION \

--pid-file=$pid_file --skip-locking >> $err_log 2>&1

else

eval "$NOHUP_NICENESS /usr/sbin/chroot /mysql_chroot \

$ledir/$MYSQLD $defaults \

--basedir=$MY_BASEDIR_VERSION \

--datadir=$DATADIR $USER_OPTION \

--pid-file=$pid_file --skip-locking $args >> $err_log 2>&1"

fi

Now you can start MySQL by using the mysqld_safe wrapper script, like this:

# /mysql/bin/mysqld_safe --user=100

In addition, you may want to create a separate my.cnf file for the MySQL utilities and server. For instance, in /etc/my.cnf you could specify socket = /mysql_chroot/tmp/mysql.sock in the [client] section, so you do not have to specify the socket manually every time you run a MySQL-related program.

You’ll also probably want to disable MySQL’s ability to load data from local files. To do this, you can add set-variable=local-infile=0 to the [mysqld] section of your /mysql_chroot/etc/my.cnf. This disables MySQL’s LOAD DATA LOCAL INFILE command. Alternatively, you can disable it from the command line by using the --local-infile=0 option.
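
Putting this together, here is a rough sketch of the two files described above, the client-side /etc/my.cnf and the server’s /mysql_chroot/etc/my.cnf (the socket path follows the chroot layout used in this hack):

# /etc/my.cnf (outside the chroot, read by the client utilities)
[client]
socket = /mysql_chroot/tmp/mysql.sock

# /mysql_chroot/etc/my.cnf (read by the chrooted mysqld)
[mysqld]
set-variable = local-infile=0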


Chrooting named

Lock down your BIND setup to help contain potential security problems.

Due to BIND’s not-so-illustrious track record with regard to security, you’ll probably want to spend some time hardening your setup if you want to continue using it. One way to make running BIND a little safer is to run it inside a sandboxed environment. This is easy to do with recent versions of BIND, since they natively support running as a nonprivileged user within a chroot() jail. All you need to do is set up the directory you’re going to have it chroot() to, and then change the command you use to start named to reflect this.

To begin, create a user and group to run named as (e.g., named). To prepare the sandboxed environment, you’ll need to create the appropriate directory structure. You can create the directories for such an environment within /named_chroot by running the following commands:

# mkdir /named_chroot

# cd /named_chroot

# mkdir -p dev etc/namedb/slave var/run

Next, you’ll need to copy your named.conf and namedb directory to the sandboxed environment:

# cp /etc/named.conf /named_chroot/etc

# cp -a /var/namedb/* /named_chroot/etc/namedb

This assumes that you store your zone files in /var/namedb. If you’re setting up BIND as a secondary DNS server, you will need to make the /named_chroot/etc/namedb/slave directory writable so that named can update the records it contains when it performs a domain transfer from the master DNS node. You can do this by running a command similar to the following:

# chown -R named:named /named_chroot/etc/namedb/slave

In addition, named will need to write its process ID (PID) file to /named_chroot/var/run, so you’ll need to make this directory writable by the named user as well:

# chown named:named /named_chroot/var/run

Now you’ll need to create some device files that named will need to access after it has called chroot():

# cd /named_chroot/dev

# ls -la /dev/null /dev/random

crw-rw-rw- 1 root root 1, 3 Jan 30 2003 /dev/null

crw-r--r-- 1 root root 1, 8 Jan 30 2003 /dev/random

# mknod null c 1 3

# mknod random c 1 8

# chmod 666 null random

You’ll also need to copy your time zone file from /etc/localtime to /named_chroot/etc/localtime. Additionally, named usually uses /dev/log to communicate its log messages to syslogd. Since this doesn’t exist inside the sandboxed environment, you will need to tell syslogd to create a socket that the chrooted named process can write to. You can do this by modifying your syslogd startup command and adding -a /named_chroot/dev/log to it. Usually you can do this by modifying an existing file in /etc.

For instance, under Red Hat Linux you would edit /etc/sysconfig/syslog and modify the SYSLOGD_OPTIONS line to read:

SYSLOGD_OPTIONS="-m 0 -a /named_chroot/dev/log"

Or if you’re running FreeBSD, you would modify the syslogd_flags line in /etc/rc.conf:

syslogd_flags="-s -a /named_chroot/dev/log"

After you restart syslogd, you should see a log socket in /named_chroot/dev.

Now to start named all you need to do is run this command:

# named -u named -t /named_chroot

Other tricks for increasing the security of your BIND installation include limiting zone transfers to your slave DNS servers and altering the response to BIND version queries. Restricting zone transfers ensures that random attackers will not be able to request a list of all the hostnames for the zones hosted by your name servers. You can globally restrict zone transfers to certain hosts by putting an allow-transfer section within the options section in your named.conf.

For instance, if you wanted to restrict transfers on all zones hosted by your DNS server to only 192.168.1.20 and 192.168.1.21, you could use an allow-transfer section like this:

allow-transfer {

192.168.1.20;

192.168.1.21;

};

If you don’t want to limit zone transfers globally and instead want to specify the allowed hosts on a zone-by-zone basis, you can put the allow-transfer section inside the zone section.
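
For example, to let only a single secondary transfer one particular zone (the zone name and filename are placeholders), you might use something like this:

zone "example.com" {
    type master;
    file "example.com.zone";
    allow-transfer {
        192.168.1.20;
    };
};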

Before attempting to exploit a BIND vulnerability, an attacker will often scan for vulnerable versions by connecting to name servers and performing a version query. Since you should rarely, if ever, need to perform a version query against your own name server, you can modify the reply BIND sends to the requester. To do this, add a version statement to the options section in your named.conf.

For example:

version "SuperHappy DNS v1.5";

Note that this really doesn’t provide extra security, but if you don’t want to advertise what software and version you’re running to the entire world, you don’t have to.
