Simple firewall with OpenBSD

Use OpenBSD’s firewalling features to protect your network.

Packet Filter, commonly known as PF, is the firewalling system available in OpenBSD. While it is a relatively new addition to the operating system, it has already surpassed IPFilter, the system it replaced, in both features and flexibility. PF shares many features with Linux's Netfilter. Although Netfilter is more easily extended with modules, PF outshines it in its traffic normalization capabilities and enhanced logging features.

To communicate with the kernel portion of PF, we use the pfctl command. Unlike the iptables command used with Linux's Netfilter, pfctl is not used to specify individual rules; PF has its own configuration and rule-specification language, and configuration is done by editing /etc/pf.conf. PF's rule language is powerful, flexible, and easy to use. The pf.conf file is split into seven sections, each of which contains a particular type of rule. Not all sections need to be used; if you don't need a specific type of rule, that section can simply be left out of the file.

The first section is for macros. Here you can define variables to hold either single values or lists of values for use in later sections of the configuration file. Like a programming-language identifier, a macro name must start with a letter and may contain letters, digits, and underscores.

Here are some example macros:

EXT_IF="de0"

INT_IF="de1"

RFC1918="{ 192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8 }"

A macro can be referenced later by prefixing it with the $ character:

block drop quick on $EXT_IF from any to $RFC1918

The second section allows you to specify tables of IP addresses to use in later rules. Using tables for lists of IP addresses is much faster than using a macro, especially for large numbers of IP addresses, because when a macro is used in a rule, it will expand to multiple rules, with each one matching on a single value contained in the macro. Using a table adds just a single rule when it is expanded.

Rather than using the macro from our previous example, we can define a table to hold the nonroutable RFC 1918 IP addresses:

table <rfc1918> const { 192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8 }

The const keyword ensures that this table cannot be modified once it has been created. A table is referenced in a rule by its name in angle brackets, just as in its definition:

block drop quick on $EXT_IF from any to <rfc1918>

You can also load a list of IP addresses into a table by using the file keyword:

table <spammers> file "/etc/spammers.table"

If you elect not to use the const keyword, then you can add addresses to a table by running a command such as this:

pfctl -t spammers -T add 10.1.1.1

Additionally, you can delete an address by running a command like this:

pfctl -t spammers -T delete 10.1.1.1

To list the contents of a table, you can run:

pfctl -t spammers -T show

In addition to IP addresses, hostnames may also be specified. In this case, all valid addresses returned by the resolver will be inserted into the table.
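For instance, assuming the resolver can look it up, a hostname can be placed straight into a table definition (the example.com names here are placeholders):

```
table <mailhosts> { mail.example.com, mx2.example.com }
```

Each address the name resolves to becomes a separate table entry.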

The next section of the configuration file contains options that affect the behavior of PF. By modifying options, we can control session timeouts, defragmentation timeouts, state-table transitions, statistics collection, and other behaviors. Options are specified with the set keyword. There are too many options to discuss all of them in any meaningful detail; however, we will cover the most pertinent and useful ones.

One of the most important options is block-policy. It specifies the default behavior of the block keyword: with drop, matching packets are silently dropped; with return, packets matching a block rule generate a TCP reset or an ICMP unreachable message, depending on whether the triggering packet was TCP or UDP. This is similar to the REJECT target in Linux's Netfilter.

For example, to have PF drop packets silently by default, add a line like this to /etc/pf.conf:

set block-policy drop

In addition to setting the block-policy, additional statistics such as packet and byte counts can be collected for an interface. To enable this for an interface, add a line similar to this to the configuration file:

set loginterface de0

However, these statistics can only be collected on a single interface at a time. If you do not want to collect any statistics, you can replace the interface name with the none keyword.
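Once a log interface has been set, the accumulated counters can be viewed with pfctl. This is a sketch; the exact fields shown vary between OpenBSD releases:

```
# pfctl -s info
```

The output includes packet and byte counts for the log interface, along with state-table statistics.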

To better utilize resources on busy networks, we can also modify PF's timeout values. Setting these to low values can help improve the performance of the firewall on high-traffic networks, but at the expense of dropping valid idle connections.

To set the idle timeout for established TCP connections (in seconds), put a line similar to this in /etc/pf.conf:

set timeout tcp.established 3600

With this setting in place, the state for any established TCP connection that is idle for an hour will be discarded. (The set timeout interval option, by contrast, controls how often PF purges expired states.)

PF can also optimize performance on low-end hardware by tuning its memory use regarding how many states may be stored at any one time or how many fragments may reside in memory for fragment reassembly. For example, to set the number of states to 20,000 and the number of entries used by the fragment reassembler to 15,000, we could put this in our pf.conf:

set limit states 20000

set limit frags 15000

Alternatively, we could combine these entries into a single one, like this:

set limit { states 20000, frags 15000 }

Moving on, the next section is for traffic normalization rules. Rules of this type ensure that packets passing through the firewall meet certain criteria regarding fragmentation, IP IDs, minimum TTLs, and other attributes of the IP and TCP headers. Rules in this section are all prefixed by the scrub keyword. In general, just putting scrub all is fine. However, if necessary, we can get quite detailed about what we want normalized and how we want to normalize it. Since scrub rules use PF's general filtering-rule syntax to select the packets they match, we can normalize packets with a great deal of control.

One of the more interesting possibilities is to randomize the IP IDs in all packets leaving your network for the outside world. This breaks passive operating system fingerprinting methods based on IP IDs: such methods analyze how a host operating system increments the IP IDs in its outgoing packets, and if the firewall makes the IDs in all outbound packets totally random, there is no pattern left to match against a known operating system. Randomizing IP IDs also helps prevent enumeration of machines in a network address translation (NAT) environment. Without random IP IDs, someone outside the network can perform a statistical analysis of the IP IDs emitted by the NAT gateway in order to count the number of machines on the private network. Randomizing the IP IDs defeats this kind of attack.

To enable random ID generation on an interface, put a line such as this in /etc/pf.conf:

scrub out on de0 all random-id

We can also use the scrub directive to reassemble fragmented packets before forwarding them to their destinations. This helps prevent specially fragmented packets (such as packets that overlap) from evading intrusion-detection systems that are sitting behind the firewall.

To enable fragment reassembly on all interfaces, simply put the following line in the configuration file:

scrub fragment reassemble

If we want to limit reassembly to just a single interface, we can change this to:

scrub in on de0 all fragment reassemble

This will enable fragment reassembly for the de0 interface.

The next two sections of the pf.conf file involve packet queuing and address translation, but since this hack focuses on packet filtering, we’ll skip these. This brings us to the last section, which contains the actual packet-filtering rules. In general, the syntax for a filter rule can be defined by the following:

action direction [log] [quick] on int [af] [proto protocol] \
from src_addr [port src_port] to dst_addr [port dst_port] \
[tcp_flags] [state]

In PF, a rule can have only two actions: block and pass. As discussed previously, the block policy affects the behavior of the block action. However, this can be modified for specific rules by specifying it along with an action, such as block drop or block return. Additionally, block return-icmp can be used, which will return an ICMP unreachable message by default. An ICMP type can be specified as well, in which case that type of ICMP message will be returned.

For most purposes, we want to start out with a default deny policy; that way we can later add rules to allow the specific traffic that we want through the firewall.

To set up a default deny policy for all interfaces, put the following line in /etc/pf.conf:

block all

Now we can add rules to allow traffic through our firewall. First we’ll keep the loopback interface unfiltered. To accomplish this, we’ll use this rule:

pass quick on lo0 all

Notice the use of the quick keyword. Normally PF will continue through our rule list even if a rule has already allowed a packet to pass, in order to see whether a more specific rule that appears later on in the configuration file will drop the packet. The use of the quick keyword modifies this behavior and causes PF to stop processing the packet at this rule if it matches the packet and to take the specified action. With careful use, this can greatly improve the performance of a ruleset.

To prevent external hosts from spoofing internal addresses, we can use the antispoof keyword:

antispoof quick for $INT_IF inet

Next we’ll want to block any packets from entering or leaving our external interface that have a nonroutable RFC 1918 IP address. Such packets, unless explicitly allowed later, would be caught by our default deny policy. However, if we use a rule to specifically match these packets and use the quick keyword, we can increase performance by adding a rule like this:

block drop quick on $EXT_IF from any to <rfc1918>

If we wanted to allow traffic into our network destined for a web server at 192.168.1.20, we could use a rule like this:

pass in on $EXT_IF proto tcp from any to 192.168.1.20 port 80 \
modulate state flags S/SA

This will allow packets destined to TCP port 80 at 192.168.1.20 only if they are establishing a new connection (i.e., the SYN flag is set), and will enter the connection into the state table. The modulate keyword ensures that a high-quality initial sequence number is generated for the session, which is important if the operating system in use at either end of the connection uses a poor algorithm for generating its ISNs.

Similarly, if we wanted to pass traffic to and from an email server at the IP address 192.168.1.21, we could use this rule:

pass in on $EXT_IF proto tcp from any to 192.168.1.21 \
port { smtp, pop3, imap2, imaps } modulate state flags S/SA

Notice that multiple ports can be specified for a rule by separating them with commas and enclosing them in curly braces. We can also use service names, as defined in /etc/services, instead of specifying the service’s port number.

To allow traffic to a DNS server at 192.168.1.18, we can add a rule like this:

pass in on $EXT_IF proto tcp from any to 192.168.1.18 port 53 \
modulate state flags S/SA

This still leaves the firewall blocking UDP DNS traffic. To allow this through, add this rule:

pass in on $EXT_IF proto udp from any to 192.168.1.18 port 53 \
keep state

Notice that even though this is a rule for UDP packets, we have still used a state keyword. In this case, PF will track the connection using the source and destination IP address and port pairs. Since UDP datagrams contain no sequence numbers, the modulate keyword is not applicable; we use keep state instead, which is how stateful inspection is specified when not modulating ISNs. In addition, since UDP datagrams contain no flags, we simply omit them.

Now we’ll want to allow connections initiated from the internal network to pass through the firewall. To do this, we’ll need to add the following rules to let the traffic into the internal interface of the firewall:

pass in on $INT_IF from $INT_IF:network to any

pass out on $INT_IF from any to $INT_IF:network

pass out on $EXT_IF proto tcp all modulate state flags S/SA

pass out on $EXT_IF proto { icmp, udp } all keep state
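Putting the pieces from this hack together, a minimal /etc/pf.conf might look like the following sketch. The interface names and the web server address are the examples used above; adjust them for your own network, and add further pass rules (mail, DNS) in the same style:

```
EXT_IF="de0"
INT_IF="de1"
table <rfc1918> const { 192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8 }

set block-policy drop

scrub in all fragment reassemble

block all
pass quick on lo0 all
antispoof quick for $INT_IF inet
block drop quick on $EXT_IF from any to <rfc1918>
pass in on $EXT_IF proto tcp from any to 192.168.1.20 port 80 \
    modulate state flags S/SA
pass in on $INT_IF from $INT_IF:network to any
pass out on $INT_IF from any to $INT_IF:network
pass out on $EXT_IF proto tcp all modulate state flags S/SA
pass out on $EXT_IF proto { icmp, udp } all keep state
```

The ruleset can be syntax-checked without loading it by running pfctl -nf /etc/pf.conf, and loaded with pfctl -f /etc/pf.conf.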

As you can see, OpenBSD has a very powerful and flexible firewalling system. There are too many features and possibilities to discuss here. For more information, you can look at the excellent PF documentation available online or the pf.conf manpage.

Published in Knowledge Base, Networking, Old Base, OpenBSD

Simple iptables firewall

Protect your network with Linux's powerful firewalling features.

Linux has long had the capability for filtering packets, and it has come a long way since the early days in terms of both power and flexibility. The first generation of packet-filtering code, called ipfw (for "IP firewall"), provided basic filtering capability but was somewhat inflexible and inefficient for complex configurations, and is rarely used now. The second generation, ipchains, improved greatly on ipfw and is still in common use. The latest generation of filtering, called Netfilter, is manipulated with the iptables command and is used exclusively with the 2.4.x and later kernel series. Although Netfilter is the kernel component and iptables is the user-space configuration tool, these terms are often used interchangeably.

An important concept in Netfilter is the chain, which consists of a list of rules that are applied to packets as they enter, leave, or traverse the system. The kernel defines three chains by default, but new chains of rules can be specified and linked to the predefined chains. The INPUT chain applies to packets that are received and destined for the local system, and the OUTPUT chain applies to packets that are transmitted by the local system. Finally, the FORWARD chain applies whenever a packet is routed from one network interface to another through the system; it is used whenever the system acts as a packet router or gateway, and applies to packets that are neither originating from nor destined for this system.

The iptables command is used to make changes to the Netfilter chains and rulesets. You can create new chains, delete chains, list the rules in a chain, flush chains (that is, remove all rules from a chain), and set the default action for a chain. iptables also allows you to insert, append, delete, and replace rules in a chain.

Before we get started with some example rules, it's important to set a default behavior for all the chains. To do this we'll use the -P command-line switch, which stands for "policy":

# iptables -P INPUT DROP

# iptables -P FORWARD DROP

This ensures that only packets matched by the rules we specify later will make it past our firewall. After all, with the relatively small number of services the network will provide, it is far easier to explicitly specify all the types of traffic we want to allow than all the traffic we don't. Note that no default policy was specified for the OUTPUT chain; this is because we want traffic originating from the firewall itself to proceed out in the normal manner.

With the default policy set to DROP, we'll specify what is actually allowed. Here's where we'll need to figure out what services will have to be accessible to the outside world. For the rest of these examples, we'll assume that eth0 is the external interface on our firewall and that eth1 is the internal one. Our network will contain a web server (192.168.1.20), a mail server (192.168.1.21), and a DNS server (192.168.1.18)—a fairly minimal setup for a self-managed Internet presence.

However, before we begin specifying rules, we should remove filtering from our loopback interface:

# iptables -A INPUT -i lo -j ACCEPT

# iptables -A OUTPUT -o lo -j ACCEPT

Now let's construct some rules to allow this traffic through. First, we'll make a rule to allow traffic on TCP port 80—the standard port for web servers—to pass to the web server unfettered by our firewall:

# iptables -A FORWARD -m state --state NEW -p tcp \
 -d 192.168.1.20 --dport 80 -j ACCEPT

And now for the mail server, which uses TCP port 25 for SMTP:

# iptables -A FORWARD -m state --state NEW -p tcp \
 -d 192.168.1.21 --dport 25 -j ACCEPT

Additionally, we might want to allow remote POP3, IMAP, and IMAP+SSL access as well:


POP3

# iptables -A FORWARD -m state --state NEW -p tcp \
 -d 192.168.1.21 --dport 110 -j ACCEPT


IMAP

# iptables -A FORWARD -m state --state NEW -p tcp \
 -d 192.168.1.21 --dport 143 -j ACCEPT


IMAP+SSL

# iptables -A FORWARD -m state --state NEW -p tcp \
 -d 192.168.1.21 --dport 993 -j ACCEPT

Unlike the other services, DNS can use both TCP and UDP port 53:

# iptables -A FORWARD -m state --state NEW -p tcp \
 -d 192.168.1.18 --dport 53 -j ACCEPT

Since we're using a default deny policy, using UDP for DNS is slightly more difficult. This is because our policy relies on state-tracking rules, and UDP is a stateless protocol with no handshake for the firewall to follow. In this case, we can configure our DNS server either to use only TCP, or to use a UDP source port of 53 for any response it sends back to clients that queried the nameserver over UDP.

If the DNS server is configured to respond to clients using UDP port 53, we can allow this traffic through with the following two rules:

# iptables -A FORWARD -p udp -d 192.168.1.18 --dport 53 -j ACCEPT

# iptables -A FORWARD -p udp -s 192.168.1.18 --sport 53 -j ACCEPT

The first rule allows traffic into our network destined for the DNS server, and the second rule allows responses from the DNS server to leave the network.

You may be wondering what the -m state and --state arguments are about. These two options allow us to use Netfilter's stateful packet-inspection engine. Using these options tells Netfilter that we want to allow only new connections to the destination IP and port pairs that we have specified. When these rules are in place, the triggering packet is accepted and its information is entered into a state table.
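On the 2.4-series kernels discussed here, you can watch entries being added to this state table by reading /proc/net/ip_conntrack (later kernels expose /proc/net/nf_conntrack instead):

```
# cat /proc/net/ip_conntrack
```

Each line lists the protocol, the remaining timeout, the connection state, and the tracked source/destination address and port pairs.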

Now we can specify that we want to allow any outbound traffic that is associated with these connections by adding a rule like this:

# iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

The only thing left now is to allow traffic from machines behind the firewall to reach the outside world. To do this, we'll use a rule like the following:

# iptables -A FORWARD -m state --state NEW -i eth1 -j ACCEPT

This rule enters any outbound connections from the internal network into the state table. It works by matching packets coming into the internal interface of our firewall that are creating new connections. If we were setting up a firewall with multiple internal interfaces, we could instead have used a Boolean NOT on the external interface (e.g., ! -i eth0). Now any traffic that comes back into the firewall through the external interface as part of an outbound connection will be accepted by the preceding rule, because this rule will have put the corresponding connection into the state table.

In these examples, the order in which the rules were entered does not really matter. Since we're operating with a default DENY policy, all our rules have an ACCEPT target. However, if we had specified targets of DROP or REJECT as arguments to the -j option, then we would have to take a little extra care to ensure that the order of those rules would result in the desired effect. Remember that the first rule that matches a packet is always triggered as the rule chains are traversed, so rule order can sometimes be critically important.

It should also be noted that rule order can have a performance impact in some circumstances. For example, the rule shown earlier that matches ESTABLISHED and RELATED states should be specified before any of the other rules, since that particular rule will be matched far more often than any of the rules that will match only on new connections. By putting that rule first, it will prevent any packets that are already associated with a connection from having to traverse the rest of the rule chain before finding a match.
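If the ESTABLISHED,RELATED rule was appended at the end of the chain, it can be moved to the front without flushing everything: delete it and re-insert it at position 1 with the -I switch:

```
# iptables -D FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -I FORWARD 1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```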

To complete our firewall configuration, we'll want to enable packet forwarding. Run this command:

# echo 1 > /proc/sys/net/ipv4/ip_forward

This tells the kernel to forward packets between interfaces whenever appropriate. To have this done automatically at boot time, add the following line to /etc/sysctl.conf:

net.ipv4.ip_forward=1

If your system doesn't support /etc/sysctl.conf, you can put the preceding echo command in one of your startup rc scripts, such as /etc/rc.local. Another useful kernel parameter is rp_filter, which helps prevent IP spoofing. This enables source address verification by checking that the IP address for any given packet has arrived on the expected network interface. This can be enabled by running the following command:

# echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter

Much like how we enabled IP forwarding, we can also enable source address verification by editing /etc/sysctl.conf on systems that support it, or else put the changes in your rc.local. To enable rp_filter in your sysctl.conf, add the following line:

net.ipv4.conf.all.rp_filter=1

To save all of our rules, we can either write all of our rules to a shell script or use our Linux distribution's particular way of saving them. We can do this in Red Hat by running the following command:

# /sbin/service iptables save

This will save all currently active filter rules to /etc/sysconfig/iptables. To achieve the same effect under Debian, edit /etc/default/iptables and set enable_iptables_initd=true.

After doing this, run the following command:

# /etc/init.d/iptables save_active

When the machine reboots, your iptables configuration will be automatically restored.
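A distribution-neutral alternative is the iptables-save and iptables-restore pair, which dump the running ruleset to a file and reload it later (the path shown here is just an example):

```
# iptables-save > /etc/iptables.rules
# iptables-restore < /etc/iptables.rules
```

The iptables-restore command can be called from an rc script to reload the rules at boot.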

Static ARP tables

Use static ARP table entries to combat spoofing and other nefarious activities.

As discussed in [Hack #31], a lot of bad things can happen if someone successfully poisons the ARP table of a machine on your network. The previous hack discussed how to monitor for this behavior, but how do we prevent the effects of someone attempting to poison an ARP table?

One way to prevent the ill effects of this behavior is to create static ARP table entries for all of the devices on your local network segment. When this is done, the kernel will ignore all ARP responses for the specific IP address used in the entry and use the specified MAC address instead.

To do this, you can use the arp command, which allows you to directly manipulate the kernel's ARP table entries. To add a single static ARP table entry, run this:

arp -s ipaddr macaddr

If you know that the MAC address that corresponds to 192.168.0.65 is 00:50:BA:85:85:CA, you could add a static ARP entry for it like this:

# arp -s 192.168.0.65 00:50:ba:85:85:ca

For more than a few entries, this can be a time-consuming process. To be fully effective, you must add an entry for each device on your network on every host that allows you to create static ARP table entries.

Luckily, most versions of the arp command can take a file as input and use it to create static ARP table entries. Under Linux, this is done with the -f command-line switch. Now all you need to do is generate a file containing the MAC and IP address pairings, which you can then copy to all the hosts on your network.
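On Linux the file uses the same format as /etc/ethers: one pairing per line, hardware address first, separated by whitespace. For example (the first entry is the address used above; the second is purely hypothetical):

```
00:50:ba:85:85:ca 192.168.0.65
00:0c:29:aa:bb:cc 192.168.0.66
```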

To make this easier, you can use this quick-n-dirty Perl script:

#!/usr/bin/perl
#
# gen_ethers.pl <from ip> <to ip>
#

my ($start_1, $start_2, $start_3, $start_4) = split(/\./, $ARGV[0], 4);
my ($end_1, $end_2, $end_3, $end_4) = split(/\./, $ARGV[1], 4);
my $ARP_CMD = "/sbin/arp -n";

for (my $oct_1 = $start_1; $oct_1 <= $end_1 && $oct_1 <= 255; $oct_1++) {
  for (my $oct_2 = $start_2; $oct_2 <= $end_2 && $oct_2 <= 255; $oct_2++) {
    for (my $oct_3 = $start_3; $oct_3 <= $end_3 && $oct_3 <= 255; $oct_3++) {
      for (my $oct_4 = $start_4; $oct_4 <= $end_4 && $oct_4 < 255; $oct_4++) {
        my $ip = "$oct_1.$oct_2.$oct_3.$oct_4";
        # Ping once to force the address into the ARP table
        system("ping -c 1 -W 1 $ip > /dev/null 2>&1");
        # Look up the MAC address, skipping the header line and incomplete entries
        my $ether_addr =
          `$ARP_CMD $ip | egrep -v 'HWaddress|\\(incomplete\\)' | awk '{print \$3}'`;
        chomp($ether_addr);
        # A valid MAC address is exactly 17 characters (xx:xx:xx:xx:xx:xx)
        if (length($ether_addr) == 17) {
          print("$ether_addr\t$ip\n");
        }
      }
    }
  }
}

This script will take a range of IP addresses and attempt to ping each one once. In doing this, each active IP address will appear in the machine's ARP table. After an IP address is pinged, the script will then look for that IP address in the ARP table, and print out the MAC/IP address pair in a format suitable for putting into a file to load with the arp command. This script was written with Linux in mind but should work on other Unix-like operating systems as well.

For example, if you wanted to generate a file for all the IP addresses from 192.168.1.1 to 192.168.1.255 and store the results in /etc/ethers, you would run the script like this:

# ./gen_ethers.pl 192.168.1.1 192.168.1.255 > /etc/ethers

When run with the -f switch and no filename, arp automatically uses the /etc/ethers file to create the static entries. However, you can specify any file you prefer. For example, if you wanted to use /root/arp_entries instead, you would run this:

# arp -f /root/arp_entries

This script isn't perfect, but it can save a lot of time when creating static ARP table entries for the hosts on your network. Once you've generated the file with the MAC/IP pairings, you can copy it to the other hosts and add an arp command to the system startup scripts, to automatically load them at boot time. The main downside to using this method is that all the devices on your network need to be powered on when the script runs; otherwise, they will be missing from the list. In addition, if the machines on your network change frequently, you'll have to regenerate and distribute the file often, which may be more trouble than it's worth. But for servers and devices that never change their IP or MAC address, this method can protect your machines from ARP poisoning attacks.

Using Arpwatch

Find out if there's a "man in the middle" impersonating your server.

One of the biggest threats to a computer network is a rogue system pretending to be a trusted host. Once someone has successfully impersonated another host, they can do a number of nefarious things: intercept and log traffic destined for the real host, or lie in wait for clients to connect and begin sending the rogue host confidential information. Spoofing a host has especially severe consequences in IP networks, as it opens many other avenues of attack. One technique for spoofing a host on an IP network is Address Resolution Protocol (ARP) spoofing, which is limited to local segments and works by exploiting the way IP addresses are translated to hardware Ethernet addresses.

When an IP datagram is sent from one host to another on the same physical segment, the IP address of the destination host must be translated into a MAC address. This is the hardware address of the Ethernet card that is physically connected to the network. To accomplish this, the Address Resolution Protocol is used.

When a host needs to know another host's Ethernet address, it sends out a broadcast frame that looks like this:

01:20:14.833350 arp who-has 192.168.0.66 tell 192.168.0.62

This is called an ARP request. Since it is sent to the broadcast address, all Ethernet devices on the local segment should see the request. The machine that matches the request responds by sending an ARP reply:

01:20:14.833421 arp reply 192.168.0.66 is-at 0:0:d1:1f:3f:f1

Since the ARP request already contained the MAC address of the sender in the Ethernet frame, the receiver can send this response without making yet another ARP request. Unfortunately, ARP's biggest weakness is that it is a stateless protocol: it does not track responses to the requests that are sent out, and it will therefore accept responses without having sent a request. If someone wanted to receive traffic destined for another host, they could send forged ARP responses matching any chosen IP address to their own MAC address. The machines that receive these spoofed ARP responses can't distinguish them from legitimate ones, and will begin sending packets to the attacker's MAC address.

Another side effect of ARP being stateless is that a system's ARP table usually holds only the result of the most recent response. To keep spoofing an IP address, an attacker must flood the host with ARP responses that overwrite the legitimate responses from the original host. This particular kind of attack is commonly known as ARP cache poisoning.

Several tools—such as Ettercap (http://ettercap.sourceforge.net), Dsniff (http://www.monkey.org/~dugsong/dsniff/), and Hunt (http://lin.fsid.cvut.cz/~kra/)—employ techniques like this to both sniff on switched networks and perform man-in-the-middle attacks. This technique can of course be used between any two hosts on a switched segment, including the local default gateway. To intercept traffic bidirectionally between hosts A and B, the attacking host C will poison host A's ARP cache, making it think that host B's IP address matches host C's MAC address. C will then poison B's cache, to make it think A's IP address corresponds to C's MAC address.

Luckily, there are methods to detect just this kind of behavior, whether you're using a shared or switched Ethernet segment. One program that can help accomplish this is Arpwatch (ftp://ftp.ee.lbl.gov/arpwatch.tar.gz). It works by monitoring an interface in promiscuous mode and recording MAC/IP address pairings over a period of time. When it sees anomalous behavior, such as a change to one of the MAC/IP pairs that it has learned, it will send an alert to the syslog. This can be very effective in a shared network using a hub, since a single machine can monitor all ARP traffic. However, due to the unicast nature of ARP responses, this program will not work as well on a switched network.

To achieve the same level of detection coverage in a switched environment, Arpwatch should be installed on as many machines as possible; after all, you can't know with 100% certainty which hosts an attacker will decide to target. Many high-end switches allow you to designate a monitor port that can see the traffic of all other ports. If you are lucky enough to have such a switch, you can install a server on that port for network monitoring and simply run Arpwatch on it.

After downloading Arpwatch, you can compile and install it in the usual manner by running:

# ./configure && make && make install

When running Arpwatch on a machine with multiple interfaces, you'll probably want to specify the interface on the command line. This can be done by using the -i command-line option:

arpwatch -i iface

As Arpwatch begins to learn the MAC/IP pairings on your network, you'll see log entries similar to this:

Nov  1 00:39:08 zul arpwatch: new station 192.168.0.65 0:50:ba:85:85:ca

When a MAC/IP address pair changes, you should see something like this:

Nov  1 01:03:23 zul arpwatch: changed ethernet address 192.168.0.65 0:e0:81:3:d8:8e (0:50:ba:85:85:ca)

Nov  1 01:03:23 zul arpwatch: flip flop 192.168.0.65 0:50:ba:85:85:ca (0:e0:81:3:d8:8e)

Nov  1 01:03:25 zul arpwatch: flip flop 192.168.0.65 0:e0:81:3:d8:8e (0:50:ba:85:85:ca)

In this case, the initial entry is from the first fraudulent ARP response that was received, and the subsequent two are from a race condition between the fraudulent and authentic responses.

To make it easier to deal with multiple Arpwatch installs in a switched environment, you can send the log messages to a central syslogd [Hack #54], aggregating all the output in one place. However, because the hosts running Arpwatch can be manipulated by the very attacks Arpwatch is looking for, it would be wise to use static ARP table entries [Hack #32] on your syslog server, as well as on all the hosts running Arpwatch.
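Forwarding those messages is a one-line change to each Arpwatch host's syslog configuration; a minimal sketch, assuming Arpwatch's alerts arrive with the daemon facility (verify this on your system) and with loghost standing in for your central log server:

```
# /etc/syslog.conf on an Arpwatch host -- "loghost" is a placeholder.
# Forward daemon-facility messages (which include Arpwatch alerts) to
# the central log server, in addition to any local logging.
daemon.notice                                   @loghost
```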

Enforce user and groups resource limits

Make sure resource-hungry users don't bring down your entire system.

Whether it's through malicious intent or an unintentional slip, having a user bring your system down to a slow crawl by using too much memory or CPU time is no fun at all. One popular way of limiting resource usage is to use the ulimit command. This method relies on a shell to limit its child processes, and it is difficult to use when you want to give different levels of usage to different users and groups. Another, more flexible way of limiting resource usage is with the PAM module pam_limits.

pam_limits is preconfigured on most systems that have PAM installed. All you should need to do is edit /etc/security/limits.conf to configure specific limits for users and groups.

The limits.conf configuration file consists of single-line entries describing a single type of limit for a user or group of users. The general format for an entry is:

domain    type    resource    value

The domain portion specifies to whom the limit applies. Single users may be specified here by name, and groups can be specified by prefixing the group name with an @. In addition, the wildcard character * may be used to apply the limit globally to all users except for root. The type portion of the entry specifies whether the limit is a soft or hard resource limit. Soft limits may be increased by the user, whereas hard limits can be changed only by root. There are many types of resources that can be specified for the resource portion of the entry. Some of the more useful ones are cpu, memlock, nproc, and fsize. These allow you to limit CPU time, total locked-in memory, number of processes, and file size, respectively. CPU time is expressed in minutes, and sizes are in kilobytes. Another useful limit is maxlogins, which allows you to specify the maximum number of concurrent logins that are permitted.
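To make the format concrete, here are some illustrative entries (the group name is hypothetical):

```
# /etc/security/limits.conf -- illustrative entries; @students is a
# hypothetical group. Hard limits can only be changed by root.
@students        hard    nproc           20
@students        hard    cpu             10
*                hard    maxlogins       4
```

The first two entries cap each member of students at 20 processes and 10 minutes of CPU time; the last limits every non-root user to 4 concurrent logins.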

One nice feature of pam_limits is that it can work together with ulimit to allow the user to raise her limit from the soft limit to the imposed hard limit.

Let's try a quick test to see how it works. First we'll limit the number of open files for the guest user by adding these entries to limits.conf:

guest            soft    nofile          1000

guest            hard    nofile          2000

Now the guest account has a soft limit of 1,000 concurrently open files and a hard limit of 2,000. Let's test it out:

# su - guest

$ ulimit -a

core file size    (blocks, -c) 0

data seg size     (kbytes, -d) unlimited

file size       (blocks, -f) unlimited

max locked memory   (kbytes, -l) unlimited

max memory size    (kbytes, -m) unlimited

open files          (-n) 1000

pipe size     (512 bytes, -p) 8

stack size      (kbytes, -s) 8192

cpu time       (seconds, -t) unlimited

max user processes      (-u) 1024

virtual memory    (kbytes, -v) unlimited

$ ulimit -n 2000

$ ulimit -n 
2000

$ ulimit -n 2001

-bash: ulimit: open files: cannot modify limit: Operation not permitted

There you have it. In addition to open files, you can create resource limits for any number of other resources and apply them to specific users or entire groups. As you can see, pam_limits is quite powerful and useful in that it doesn't rely upon the shell for enforcement.

Restricted Shell Environments

Keep your users from shooting themselves (and you) in the foot.

Sometimes a sandboxed environment [Hack #10] is overkill for your needs. If you want to set up a restricted environment for a group of users that only allows them to run a few particular commands, you'll have to duplicate all of the libraries and binaries for those commands for each user. This is where restricted shells come in handy. Many shells include such a feature, which is usually invoked by running the shell with the -r switch. While not as secure as a system call-based sandbox environment, it can work well if you trust your users not to be malicious, but worry that some might be curious to an unhealthy degree.

Common features of restricted shells include preventing the user from changing directories, disallowing commands whose names contain a slash (so commands can be run only by bare name, resolved through PATH), and blocking changes to variables such as SHELL and PATH. In addition to these restrictions, all of the command-line redirection operators are disabled. With these features, restricting the commands a user can execute is as simple as picking and choosing which commands should be available and making symbolic links to them inside the user's home directory. If a sequence of commands needs to be executed, you can also create shell scripts owned by another user. These scripts will execute in a nonrestricted environment and can't be edited within the environment by the user.
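The symlink step can be scripted; here is a minimal sketch (the command list is illustrative, and mktemp stands in for the restricted user's real home directory):

```shell
#!/bin/sh
# Sketch: populate a restricted user's directory with symlinks to the
# only commands they should be able to run by bare name.
# mktemp stands in for the user's actual home directory.
RHOME=$(mktemp -d)

for cmd in ls cat date; do
    ln -s "/bin/$cmd" "$RHOME/$cmd"
done

ls -l "$RHOME"
```

With the account's shell set to a restricted one and its PATH limited to that directory, only those commands are reachable by name.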

Let's try running a restricted shell and see what happens:

$ bash -r

bash: SHELL: readonly variable

bash: PATH: readonly variable

bash-2.05b$ ls

bash: ls: No such file or directory

bash-2.05b$ /bin/ls

bash: /bin/ls: restricted: cannot specify `/' in command names

bash-2.05b$ exit

$ ln -s /bin/ls .

$ bash -r 
bash-2.05b$ ls -la

total 24

drwx------    2 andrew    andrew        4096 Oct 20 08:01 .

drwxr-xr-x    4 root      root          4096 Oct 20 14:16 ..

-rw-------    1 andrew    andrew          18 Oct 20 08:00 .bash_history

-rw-r--r--    1 andrew    andrew          24 Oct 20 14:16 .bash_logout

-rw-r--r--    1 andrew    andrew         197 Oct 20 07:59 .bash_profile

-rw-r--r--    1 andrew    andrew         127 Oct 20 07:57 .bashrc

lrwxrwxrwx    1 andrew    andrew           7 Oct 20 08:01 ls -> /bin/ls

Restricted ksh is a little different in that it will allow you to run scripts and binaries that are in your PATH, which can be set before entering the shell:

$ rksh

$ ls -la 
total 24

drwx------    2 andrew    andrew        4096 Oct 20 08:01 .

drwxr-xr-x    4 root      root          4096 Oct 20 14:16 ..

-rw-------    1 andrew    andrew          18 Oct 20 08:00 .bash_history

-rw-r--r--    1 andrew    andrew          24 Oct 20 14:16 .bash_logout

-rw-r--r--    1 andrew    andrew         197 Oct 20 07:59 .bash_profile

-rw-r--r--    1 andrew    andrew         127 Oct 20 07:57 .bashrc

lrwxrwxrwx    1 andrew    andrew           7 Oct 20 08:01 ls -> /bin/ls

$ which ls

/bin/ls

$ exit

This worked because /bin was in the PATH before we invoked ksh. Now let's change the PATH and run rksh again:

$ export PATH=.

$ /bin/rksh

$ /bin/ls 
/bin/rksh: /bin/ls: restricted

$ exit

$ ln -s /bin/ls .

$ ls -la

total 24

drwx------    2 andrew    andrew        4096 Oct 20 08:01 .

drwxr-xr-x    4 root      root          4096 Oct 20 14:16 ..

-rw-------    1 andrew    andrew          18 Oct 20 08:00 .bash_history

-rw-r--r--    1 andrew    andrew          24 Oct 20 14:16 .bash_logout

-rw-r--r--    1 andrew    andrew         197 Oct 20 07:59 .bash_profile

-rw-r--r--    1 andrew    andrew         127 Oct 20 07:57 .bashrc

lrwxrwxrwx    1 andrew    andrew           7 Oct 20 08:01 ls -> /bin/ls

Restricted shells are incredibly easy to set up and can provide minimal restricted access. They may not be able to keep out determined attackers, but they certainly make a hostile user's job much more difficult.


Control login access with pam

Seize fine-grained control of when and where your users can access your system.

In traditional Unix authentication there is not much granularity available in limiting a user's ability to log in. For example, how would you limit the hosts that users can come from when logging into your servers? Your first thought might be to set up TCP wrappers or possibly firewall rules [Hack #33] and [Hack #34]. But what if you wanted to allow some users to log in from a specific host, but disallow others from logging in from it? Or what if you wanted to prevent some users from logging in at certain times of the day because of daily maintenance, but allow others (i.e., administrators) to log in at any time they wish? To get this working with every service that might be running on your system, you would traditionally have to patch each of them to support this new functionality. This is where PAM enters the picture.

PAM, or pluggable authentication modules, allows for just this sort of functionality (and more) without the need to patch all of your services. PAM has been available for quite some time under Linux, FreeBSD, and Solaris, and is now a standard component of the traditional authentication facilities on these platforms. Many services that need to use some sort of authentication now support PAM.

Modules are configured for services in a stack, with the authentication process proceeding from top to bottom as the access checks complete successfully. You can build a custom stack for any service by creating a file in /etc/pam.d with the same name as the service. If you need even more granularity, an entire stack of modules can be included by using the pam_stack module. This allows you to specify another external file containing a stack. If a service does not have its own configuration file in /etc/pam.d, it will default to using the stack specified in /etc/pam.d/other.
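That fallback makes /etc/pam.d/other a good place to enforce a deny-by-default policy; a commonly recommended sketch (verify the module path conventions on your system):

```
# /etc/pam.d/other -- deny-by-default fallback for any PAM-aware
# service that lacks a configuration file of its own.
auth     required    pam_deny.so
account  required    pam_deny.so
password required    pam_deny.so
session  required    pam_deny.so
```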

When configuring a service for use with PAM, there are several types of entries available. These types allow one to specify whether a module provides authentication, access control, password change control, or session setup and teardown. Right now, we are interested in only one of the types: the account type. This entry type allows you to specify modules that will control access to accounts that have been authenticated. In addition to the service-specific configuration files, some modules have extended configuration information that can be specified in files within the /etc/security directory. For this hack, we'll mainly use two of the most useful modules of this type, pam_access and pam_time.

The pam_access module allows one to limit where a user or group of users may log in from. To make use of it, you'll first need to configure the service you wish to use the module with. You can do this by editing the service's PAM config file in /etc/pam.d.

Here's an example of what /etc/pam.d/login might look like under Red Hat 9:

#%PAM-1.0

auth       required     pam_securetty.so

auth       required     pam_stack.so service=system-auth

auth       required     pam_nologin.so

account    required     pam_stack.so service=system-auth

password   required     pam_stack.so service=system-auth

session    required     pam_stack.so service=system-auth

session    optional     pam_console.so

Notice the use of the pam_stack module—it includes the stack contained within the system-auth file. Let's see what's inside /etc/pam.d/system-auth:

#%PAM-1.0

# This file is auto-generated.

# User changes will be destroyed the next time authconfig is run.

auth        required      /lib/security/$ISA/pam_env.so

auth        sufficient    /lib/security/$ISA/pam_unix.so likeauth nullok

auth        required      /lib/security/$ISA/pam_deny.so

account     required      /lib/security/$ISA/pam_unix.so

password    required      /lib/security/$ISA/pam_cracklib.so retry=3 type=

password    sufficient    /lib/security/$ISA/pam_unix.so nullok use_authtok md5 shadow

password    required      /lib/security/$ISA/pam_deny.so

session     required      /lib/security/$ISA/pam_limits.so

session     required      /lib/security/$ISA/pam_unix.so

To add the pam_access module to the login service, you could add another account entry to the login configuration file, which would, of course, just enable the module for the login service. Alternatively, you could add the module to the system-auth file, which would enable it for most of the PAM-aware services on the system.

To add pam_access to the login service (or any other service for that matter), simply add a line like this to the service's configuration file after any preexisting account entries:

account    required     pam_access.so

Now that we've enabled the pam_access module for our services, we can edit /etc/security/access.conf to control how the module behaves. Each entry in the file can specify multiple users, groups, and hostnames to which the entry applies, and specify whether it's allowing or disallowing remote or local access. When pam_access is invoked by an entry in a service configuration file, it will look through the lines of access.conf and stop at the first match it finds. Thus, if you want to create default entries to fall back on, you'll want to put the more specific entries first, with the general entries following them.

The general form of an entry in access.conf is:

permission : users : origins

where permission can be either a + or -. This denotes whether the rule grants or denies access, respectively.

The users portion allows you to specify a list of users or groups, separated by whitespace. In addition to simply listing users in this portion of the entry, you can use the form user@host, where host is the local hostname of the machine being logged into. This allows you to use a single configuration file across multiple machines, but still specify rules pertaining to specific machines. The origins portion is compared against the origin of the access attempt. Hostnames can be used for remote origins, and the special LOCAL keyword can be used for local access. Instead of explicitly specifying users, groups, or origins, you can also use the ALL and EXCEPT keywords to perform set operations on any of the lists.

Here's a simple example of locking out the user andrew (Eep! That's me!) from a host named colossus:

- : andrew : colossus

Note that if a group that shares its name with a user is specified, the module will interpret the rule as applying to both the user and the group.
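Putting the pieces together, a fuller hypothetical access.conf (group and host names invented for illustration) might look like this:

```
# /etc/security/access.conf -- illustrative entries; group and host
# names are hypothetical. The first matching line wins, so the
# specific rules precede the catch-all.
+ : @wheel : ALL
+ : ALL : LOCAL
- : ALL : ALL
```

Members of wheel may log in from anywhere, everyone else only locally, and any remaining access attempt is denied.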

Now that we've covered how to limit where a user may log in from and how to set up a PAM module, let's take a look at how to limit what time a user may log in by using the pam_time module. To configure this module, you need to edit /etc/security/time.conf. The format of the entries in this file is a little more flexible than that of access.conf, thanks to the availability of the NOT (!), AND (&), and OR (|) operators.

The general form for an entry in time.conf is:

services;devices;users;times

The services portion of the entry specifies what PAM-enabled service will be regulated. You can usually get a full list of the available services by looking at the contents of your /etc/pam.d directory.

For instance, here are the contents of /etc/pam.d on a Red Hat Linux system:

$ ls -1 /etc/pam.d

authconfig

chfn

chsh

halt

internet-druid

kbdrate

login

neat

other

passwd

poweroff

ppp

reboot

redhat-config-mouse

redhat-config-network

redhat-config-network-cmd

redhat-config-network-druid

rhn_register

setup

smtp

sshd

su

sudo

system-auth

up2date

up2date-config

up2date-nox

vlock

To set up pam_time for use with any of these services, you'll need to add a line like this to the file in /etc/pam.d that corresponds to the service that you want to regulate:

account     required      /lib/security/$ISA/pam_time.so

The devices portion specifies the terminal device from which the service is being accessed. For console logins, you can use !ttyp*, which specifies all TTY devices except pseudo-TTYs. If you want the entry to affect only remote logins, use ttyp*. To apply it to all logins (console, remote, and X11), use tty*.

For the users portion of the entry, you can specify a single user or a list of users by separating each one with a | character. The times portion is used to specify the times that the rule will apply. Each time range is specified with a combination of two-character abbreviations denoting the days the rule applies to, followed by a range of hours for those days. The abbreviations for the days of the week are Mo, Tu, We, Th, Fr, Sa, and Su. For convenience, you can use Wk to specify weekdays and Wd to specify the weekend. In addition, you can use Al to specify every day of the week. These last three simply expand to the set of days that compose each time period. This is important to remember, since repeated days are subtracted from the set of days that the rule will apply to (e.g., WdSu would effectively be just Sa). The range of hours is simply specified as two 24-hour times, minus the colons, separated by a dash (e.g., 0630-1345 is 6:30 A.M. to 1:45 P.M.).

If you wanted to disallow access to the user andrew from the local console on weekends and during the week after hours, you could use an entry like this:

system-auth;!ttyp*;andrew;Wk1700-0800|Wd0000-2400

Or perhaps you want to limit remote logins through SSH during a system maintenance window lasting from 7 P.M. Friday to 7 A.M. Saturday, but want to allow a sysadmin to log in:

sshd;ttyp*;!andrew;Fr1900-0700

As you can see, there's a lot of flexibility for creating entries, thanks to the logical Boolean operators that are available. Just make sure that you remember to configure the service file in /etc/pam.d for use with pam_time when you create entries in /etc/security/time.conf.

Automated systrace policy creation

Let Systrace's automated mode do your work for you.

In a true paranoid's ideal world, system administrators would read the source code for every application on their system and be able to build system-call access policies by hand, relying only on their intimate understanding of every feature of the application. Most system administrators don't have that sort of time, and would have better things to do with that time if they did.

Luckily, systrace includes a policy-generation tool that will generate a policy listing for every system call that an application makes. You can use this policy as a starting point to narrow down the access you will allow the application. We'll use this method to generate a policy for inetd.

Use the -A flag to systrace, and include the full path to the program you want to run:

# systrace -A /usr/sbin/inetd

To pass flags to inetd, add them at the end of the command line.

Then use the program for which you're developing a policy. This system has ident, daytime, and time services open, so run programs that require those services. Fire up an IRC client to trigger ident requests, and telnet to ports 13 and 37 to get time services. Once you have put inetd through its paces, shut it down. inetd has no control program, so you need to kill it by process ID.

Checking the process list will show two processes:

# ps -ax | grep inet

24421 ??  Ixs     0:00.00 /usr/sbin/inetd 

12929 ??  Is      0:00.01 systrace -A /usr/sbin/inetd

Do not kill the systrace process (PID 12929 in this example)—that process has all the records of the system calls that inetd has made. Just kill the inetd process (PID 24421), and the systrace process will exit normally.

Now check your home directory for a .systrace directory, which will contain systrace's first stab at an inetd policy. Remember, policies are placed in files named after the full path to the program, replacing slashes with underscores.
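The conversion is mechanical; a quick sketch of it in shell:

```shell
#!/bin/sh
# Sketch: derive a systrace policy filename from a program's full path
# by dropping the leading slash and turning the rest into underscores.
prog=/usr/sbin/inetd
policy=$(printf '%s\n' "$prog" | sed 's|^/||; s|/|_|g')
echo "$policy"    # usr_sbin_inetd
```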

Here's the output of ls:

# ls .systrace

usr_libexec_identd   usr_sbin_inetd

systrace created two policies, not one. In addition to the expected policy for /usr/sbin/inetd, there's one for /usr/libexec/identd. This is because inetd implements time services internally, while ident calls a separate program to service requests. When inetd spawned identd, systrace captured the identd system calls as well.

By reading the policy, you can improve your understanding of what the program actually does. Look up each system call the program uses, and see if you can restrict access further. You'll probably want to look for ways to further restrict the policies that are automatically generated. However, these policies make for a good starting point.

Applying a policy to a program is much like creating the systrace policy itself; just run the program as an argument to systrace, using the -a option:

# systrace -a /usr/sbin/inetd

If the program tries to perform system calls not listed in the policy, they will fail. This may cause the program to behave unpredictably. Systrace will log failed entries in /var/log/messages.

To edit a policy, just add the desired statement to the end of the rule list, and it will be picked up. You could do this by hand, of course, but that's the hard way. Systrace includes a tool to let you edit policies in real time, as the system call is made. This is excellent for use in a network operations center environment, where the person responsible for watching the network monitor can also be assigned to watch for system calls and bring them to the attention of the appropriate personnel. You can specify which program you wish to monitor by using systrace's -p flag. This is called attaching to the program.

For example, earlier we saw two processes containing inetd. One was the actual inetd process, and the other was the systrace process managing inetd. Attach to the systrace process, not the actual program (to use the previous example, this would be PID 12929), and give the full path to the managed program as an argument:

# systrace -p 12929 /usr/sbin/inetd

At first nothing will happen. When the program attempts to make an unauthorized system call, however, a GUI will pop up. You will have the options to allow the system call, deny the system call, always permit the call, or always deny it. The program will hang until you make a decision, however, so decide quickly.

Note that these changes will only take effect so long as the current process is running. If you restart the program, you must also restart the attached systrace monitor, and any changes you set in the monitor are gone. You must add those rules to the policy if you want them to be permanent.

The original article that this hack is based on is available online at http://www.onlamp.com/pub/a/bsd/2003/02/27/Big_Scary_Daemons.html.

—Michael Lucas

Restricting system calls with systrace (BSD)

Keep your programs from performing tasks they weren't meant to do.

One of the more exciting new features in NetBSD and OpenBSD is systrace, a system call access manager. With systrace, a system administrator can specify which programs can make which system calls, and how those calls can be made. Proper use of systrace can greatly reduce the risks inherent in running poorly written or exploitable programs. Systrace policies can confine users in a manner completely independent of Unix permissions. You can even define the errors that the system calls return when access is denied, to allow programs to fail in a more proper manner. Proper use of systrace requires a practical understanding of system calls and what functionality programs must have to work properly.

First of all, what exactly are system calls? A system call is a function that lets you talk to the operating-system kernel. If you want to allocate memory, open a TCP/IP port, or perform input/output on the disk, you'll need to use a system call. System calls are documented in section 2 of the manpages.

Unix also supports a wide variety of C library calls. These are often confused with system calls but are actually just standardized routines for things that could be written within a program. For example, you could easily write a function to compute square roots within a program, but you could not write a function to allocate memory without using a system call. If you're in doubt whether a particular function is a system call or a C library function, check the online manual.

You may find an occasional system call that is not documented in the online manual, such as break(). You'll need to dig into other resources to identify these calls (break() in particular is a very old system call used within libc, but not by programmers, so it seems to have escaped being documented in the manpages).

Systrace denies all actions that are not explicitly permitted and logs the rejection using syslog. If a program running under systrace has a problem, you can find out which system call the program wants to use and decide if you want to add it to your policy, reconfigure the program, or live with the error.

Systrace has several important pieces: policies, the policy generation tools, the runtime access management tool, and the sysadmin real-time interface. This hack gives a brief overview of policies; in [Hack #16], we'll learn about the systrace tools.

The systrace(1) manpage includes a full description of the syntax used for policy descriptions, but I generally find it easier to look at some examples of a working policy and then go over the syntax in detail. Since named has been a subject of recent security discussions, let's look at the policy that OpenBSD 3.2 provides for named.

Before reviewing the named policy, let's review some commonly known facts about the name server daemon's system-access requirements. Zone transfers and large queries occur on port 53/TCP, while basic lookup services are provided on port 53/UDP. OpenBSD chroots named into /var/named by default and logs everything to /var/log/messages.

Each systrace policy file is in a file named after the full path of the program, replacing slashes with underscores. The policy file usr_sbin_named contains quite a few entries that allow access beyond binding to port 53 and writing to the system log. The file starts with:

# Policy for named that uses named user and chroots to /var/named

# This policy works for the default configuration of named.

Policy: /usr/sbin/named, Emulation: native

The Policy statement gives the full path to the program this policy is for. You can't fool systrace by giving the same name to a program elsewhere on the system. The Emulation entry shows which ABI this policy is for. Remember, BSD systems expose ABIs for a variety of operating systems. Systrace can theoretically manage system-call access for any ABI, although only native and Linux binaries are supported at the moment.

The remaining lines define a variety of system calls that the program may or may not use. The sample policy for named includes 73 lines of system-call rules. The most basic look like this:

native-accept: permit

When /usr/sbin/named tries to use the accept() system call to accept a connection on a socket, under the native ABI, it is allowed. Other rules are far more restrictive. Here's a rule for bind(), the system call that lets a program request a TCP/IP port to attach to:

native-bind: sockaddr match "inet-*:53" then permit

sockaddr is the name of an argument taken by the bind() system call. The match keyword tells systrace to compare the given variable with the string inet-*:53, according to the standard shell pattern-matching (globbing) rules. So, if the variable sockaddr matches the string inet-*:53, the call is permitted. This program can bind to port 53, over both the TCP and UDP protocols. If an attacker had an exploit to make named attach a command prompt on a high-numbered port, this systrace policy would prevent that exploit from working.
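Since the comparison uses ordinary shell globbing, the check can be approximated with a case statement; a small sketch (the sockaddr strings are illustrative renderings, not exact systrace output):

```shell
#!/bin/sh
# Sketch: shell-glob matching as systrace applies it to the sockaddr
# argument of bind(). The address strings below are illustrative.
match_bind() {
    case $1 in
        inet-*:53) echo permit ;;
        *)         echo deny ;;
    esac
}

match_bind "inet-[AF_INET]:53"      # matches the pattern -> permit
match_bind "inet-[AF_INET]:6667"    # a high port -> deny
```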

At first glance, this seems wrong:

native-chdir: filename eq "/" then permit

native-chdir: filename eq "/namedb" then permit

The eq keyword compares one string to another and requires an exact match. If the program tries to go to the root directory, or to the directory /namedb, systrace will allow it. Why would you possibly want to allow named to access the root directory? The next entry explains why:

native-chroot: filename eq "/var/named" then permit

We can use the native chroot() system call to change our root directory to /var/named, but to no other directory. At this point, the /namedb directory is actually /var/named/namedb. We also know that named logs to syslog. To do this, it will need access to /dev/log:

native-connect: sockaddr eq "/dev/log" then permit

This program can use the native connect() system call to talk to /dev/log and only /dev/log. That device hands the connections off elsewhere.

We'll also see some entries for system calls that do not exist:

native-fsread: filename eq "/" then permit

native-fsread: filename eq "/dev/arandom" then permit

native-fsread: filename eq "/etc/group" then permit

Systrace aliases certain system calls with very similar functions into groups. You can disable this functionality with a command-line switch and only use the exact system calls you specify, but in most cases these aliases are quite useful and shrink your policies considerably. The two aliases are fsread and fswrite. fsread is an alias for stat(), lstat(), readlink(), and access() under the native and Linux ABIs. fswrite is an alias for unlink(), mkdir(), and rmdir(), in both the native and Linux ABIs. As open() can be used to either read or write a file, it is aliased by both fsread and fswrite, depending on how it is called. So named can read certain /etc files, it can list the contents of the root directory, and it can access the groups file.

Systrace supports two optional keywords at the end of a policy statement, errorcode and log. The errorcode is the error that is returned when the program attempts to access this system call. Programs will behave differently depending on the error that they receive. named will react differently to a "permission denied" error than it will to an "out of memory" error. You can get a complete list of error codes from the errno manpage. Use the error name, not the error number. For example, here we return an error for nonexistent files:

filename sub "<non-existent filename>" then deny[enoent]

If you put the word log at the end of your rule, successful system calls will be logged. For example, if we wanted to log each time named attached to port 53, we could edit the policy statement for the bind() call to read:

native-bind: sockaddr match "inet-*:53" then permit log

You can also choose to filter rules based on user ID and group ID, as this example demonstrates:

native-setgid: gid eq "70" then permit

This very brief overview covers the vast majority of the rules you will see. For full details on the systrace grammar, read the systrace manpage. If you want some help with creating your policies, you can also use systrace's automated mode [Hack #16] .

The original article that this hack is based on is available online at http://www.onlamp.com/pub/a/bsd/2003/01/30/Big_Scary_Daemons.html.

—Michael Lucas

Restrict apps with grsecurity

To restrict specific applications, you will need to make use of the gradm utility, which can be downloaded from the main grsecurity site (http://www.grsecurity.net). You can compile and install it in the usual way: unpack the source distribution, change into the directory that it creates, and then run make && make install. This will install gradm in /sbin, create the /etc/grsec directory containing a default ACL, and install the manpage.

After gradm has been installed, the first thing you'll want to do is create a password that gradm will use to authenticate itself to the kernel. You can do this by running gradm with the -P option:

# gradm -P

Setting up grsecurity ACL password

Password: 

Re-enter Password: 

Password written to /etc/grsec/pw.

To enable grsecurity's ACL system, use this command:

# /sbin/gradm -E

Once you're finished setting up your ACLs, you'll probably want to run that command automatically at boot. You can do this by adding it to the end of /etc/rc.local or a similar script designated for customizing your system startup.
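For example, assuming your distribution uses /etc/rc.local, appending the command could be as simple as:

```
# echo '/sbin/gradm -E' >> /etc/rc.local
```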

The default ACL installed in /etc/grsec/acl is quite restrictive, so you'll want to create ACLs for the services and system binaries you want to use. For example, after the ACL system has been enabled, ifconfig will no longer be able to change interface characteristics, even when run as root:

# /sbin/ifconfig eth0:1 192.168.0.59 up

SIOCSIFADDR: Permission denied

SIOCSIFFLAGS: Permission denied

SIOCSIFFLAGS: Permission denied

The easiest way to set up an ACL for a particular command is to use grsecurity's learning mode rather than specifying each rule manually. If you've enabled ACLs, you'll need to temporarily disable them for your shell by running gradm -a. You'll then be able to access files within /etc/grsec; otherwise, the directory will be hidden from you.

Add an entry like this to /etc/grsec/acl:

/sbin/ifconfig lo {

        /               h

        /etc/grsec      h

        -CAP_ALL

}

This is about the most restrictive ACL possible because it hides the root directory from the process and removes any privileges that it may need. The lo next to the binary to which the ACL applies says to use learning mode and to override the default ACL. After you're done editing the ACLs, you'll need to tell grsecurity to reload them by running gradm -R.
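Putting those steps together, a typical learning-mode session looks something like this (using whatever editor you prefer):

```
# gradm -a          (authenticate, so /etc/grsec becomes visible)
Password:
# vi /etc/grsec/acl (add a learning ACL for the target binary)
# gradm -R          (tell grsecurity to reload the ACLs)
```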

Now try to run the ifconfig command again:

# /sbin/ifconfig eth0:1 192.168.0.59 up

# /sbin/ifconfig eth0:1

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:E2:2B:C1  

          inet addr:192.168.0.59  Bcast:192.168.0.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          Interrupt:10 Base address:0x10e0

In addition to the command succeeding, grsecurity will create learning log entries. You can then use gradm to generate an ACL for the program based on these logs:

# gradm -a

Password:

# gradm -L -O stdout

/sbin/ifconfig o {

        /usr/share/locale/locale.alias r

        /usr/lib/locale/locale-archive r

        /usr/lib/gconv/gconv-modules.cache r

        /proc/net/unix r

        /proc/net/dev r

        /proc/net r

        /lib/ld-2.3.2.so x

        /lib/i686/libc-2.3.2.so rx

        /etc/ld.so.cache r

        /sbin/ifconfig x

        /etc/grsec h

        / h

        -CAP_ALL

        +CAP_NET_ADMIN

}

Now you can replace the learning ACL for /sbin/ifconfig in /etc/grsec/acl with this one, and ifconfig should work. You can then follow this process for each program that needs special permissions to function. Just make sure to exercise everything you will want to do with those programs, so that grsecurity's learning mode observes every system call and file access each one needs.

Using grsecurity to lock down applications can seem like tedious work at first, but it will ultimately create a system that gives each process only the permissions it needs to do its job—no more, no less. When you need to build a highly secured platform, grsecurity can provide very finely grained control over just about everything the system can possibly do.