Recover a broken MySQL table

mysql> repair table weblog use_frm;

+---------------+--------+----------+-----------------------------------------+
| Table         | Op     | Msg_type | Msg_text                                |
+---------------+--------+----------+-----------------------------------------+
| system.weblog | repair | warning  | Number of rows changed from 0 to 666601 |
| system.weblog | repair | status   | OK                                      |
+---------------+--------+----------+-----------------------------------------+
2 rows in set (35.34 sec)

http://dev.mysql.com/doc/mysql/en/Repair.html


ISO/OSI Network Model / TCP/IP Network Model

ISO/OSI Network Model
The standard model for networking protocols and distributed applications is the International Organization for Standardization's Open Systems Interconnection (ISO/OSI) model. It defines seven network layers.

Layer 1 – Physical
The physical layer defines the cable or physical medium itself, e.g., thinnet, thicknet, or unshielded twisted pair (UTP). All media are functionally equivalent; the main difference lies in the convenience and cost of installation and maintenance. Converters from one medium to another operate at this level.

Layer 2 – Data Link
The data link layer defines the format of data on the network. A network data frame, a.k.a. packet, includes a checksum, source and destination addresses, and data. The largest packet that can be sent through the data link layer defines the Maximum Transmission Unit (MTU). The data link layer handles the physical and logical connections to the packet's destination, using a network interface. A host connected to an Ethernet has an Ethernet interface to handle connections to the outside world, and a loopback interface to send packets to itself.

Ethernet addresses a host using a unique, 48-bit address called its Ethernet address or Media Access Control (MAC) address. MAC addresses are usually represented as six colon-separated pairs of hex digits, e.g., 8:0:20:11:ac:85. This number is unique and is associated with a particular Ethernet device; a host with multiple network interfaces has a separate MAC address for each. The data link layer's protocol-specific header specifies the MAC address of the packet's source and destination. When a packet is sent to all hosts (broadcast), the special MAC address ff:ff:ff:ff:ff:ff is used.
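
For illustration, a short Python sketch of how a 48-bit value maps onto the colon-separated notation above; the helper names are invented for this example:

```python
def format_mac(addr):
    """Render a 48-bit integer as six colon-separated hex pairs (no zero
    padding, matching the 8:0:20:11:ac:85 style used in the text)."""
    octets = [(addr >> shift) & 0xFF for shift in (40, 32, 24, 16, 8, 0)]
    return ":".join(format(o, "x") for o in octets)

BROADCAST = 0xFFFFFFFFFFFF  # ff:ff:ff:ff:ff:ff, received by every host

print(format_mac(0x08002011AC85))  # 8:0:20:11:ac:85
print(format_mac(BROADCAST))       # ff:ff:ff:ff:ff:ff
```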

Layer 3 – Network
NFS uses the Internet Protocol (IP) as its network layer interface. IP is responsible for routing: directing datagrams from one network to another. The network layer may have to break large datagrams (larger than the MTU) into smaller packets, and the host receiving a fragmented datagram has to reassemble it. IP identifies each host with a 32-bit IP address, written as four dot-separated decimal numbers between 0 and 255, e.g., 129.79.16.40. The leading one to three bytes of the address identify the network, and the remaining bytes identify the host on that network. The network portion of the address is assigned by InterNIC Registration Services, under contract to the National Science Foundation, and the host portion is assigned by the local network administrators (locally by noc@indiana.edu). For large, subnetted sites like ours, the first two bytes represent the network portion, and the third and fourth bytes identify the subnet and host respectively.
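
A quick Python sketch of the network/subnet/host split described above, using the standard ipaddress module and the example address 129.79.16.40 (the variable names are just for illustration):

```python
import ipaddress

# The example host from the text, on a subnetted site where the first two
# bytes are the network, the third the subnet, and the fourth the host.
addr = ipaddress.IPv4Address("129.79.16.40")
octets = addr.packed                  # the four raw bytes, in address order

network = (octets[0], octets[1])      # assigned by the registry
subnet, host = octets[2], octets[3]   # assigned by local administrators

print(network, subnet, host)          # (129, 79) 16 40
```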

Even though IP packets are addressed using IP addresses, hardware addresses must be used to actually transport data from one host to another. The Address Resolution Protocol (ARP) is used to map an IP address to its hardware address.

Layer 4 – Transport
The transport layer subdivides the user buffer into network-buffer-sized datagrams and enforces the desired transmission control. Two transport protocols, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), sit at the transport layer. Reliability and speed are the primary differences between the two. TCP establishes connections between two hosts on the network through 'sockets', which are determined by the IP address and port number. TCP keeps track of the packet delivery order and of which packets must be resent; maintaining this information for each connection makes TCP a stateful protocol. UDP, on the other hand, provides a low-overhead transmission service with less error checking. NFS is built on top of UDP because of its speed and statelessness; statelessness simplifies crash recovery.
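
The difference is easy to see in code. A minimal Python sketch (loopback only, names invented for the example) of UDP's connectionless, fire-and-forget style: no connection setup and no delivery tracking, which is exactly the per-connection state TCP would otherwise maintain:

```python
import socket

# UDP datagram service on the loopback interface: no handshake, no ACKs.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # port 0: let the kernel pick a free port
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", port))   # fire and forget: no resend logic

data, peer = server.recvfrom(1024)
print(data)                            # b'ping'
client.close()
server.close()
```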

Layer 5 – Session
The session protocol defines the format of the data sent over the connections. NFS uses Remote Procedure Call (RPC) as its session protocol. RPC may be built on either TCP or UDP. Login sessions use TCP, whereas NFS and broadcast use UDP.

Layer 6 – Presentation
External Data Representation (XDR) sits at the presentation level. It converts the local representation of data to its canonical form and vice versa. The canonical form uses a standard byte ordering and structure-packing convention, independent of the host.
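
A small Python sketch of the same idea: packing a value in a fixed big-endian layout (as XDR's canonical form does) yields identical bytes on any host, while the native layout may not. The struct module is used here as a stand-in, not a real XDR encoder:

```python
import struct

# XDR's canonical form fixes the wire format as big-endian ("network byte
# order"); struct's '!' prefix applies the same ordering.
value = 0x11AC85
canonical = struct.pack("!I", value)   # identical bytes on every host
native = struct.pack("=I", value)      # host-dependent byte order

print(canonical.hex())                 # 0011ac85, regardless of CPU
```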

Layer 7 – Application
The application layer provides network services to end users. Mail, ftp, telnet, DNS, NIS, and NFS are examples of network applications.

TCP/IP Network Model
Although the OSI model is widely used and often cited as the standard, the TCP/IP protocol suite is what most Unix workstation vendors actually use. TCP/IP is designed around a simple four-layer scheme. It omits some features found in the OSI model, combines the features of some adjacent OSI layers, and splits other layers apart. The four network layers defined by the TCP/IP model are as follows.

Layer 1 – Link
This layer defines the network hardware and device drivers.

Layer 2 – Network
This layer is used for basic communication, addressing and routing. TCP/IP uses IP and ICMP protocols at the network layer.

Layer 3 – Transport
Handles communication among programs on a network. TCP and UDP fall within this layer.

Layer 4 – Application
End-user applications reside at this layer. Commonly used applications include NFS, DNS, arp, rlogin, talk, ftp, ntp and traceroute.


FreeBSD – Installing MySQL

cd /usr/ports/databases/mysql40-server
make DB_DIR=/data install
/usr/local/etc/rc.d/mysql-server.sh start
echo 'mysql_enable="YES"' >> /etc/rc.conf


FreeBSD – Install/Update ports/Cvsup

If no ports at all: pkg_add -r cvsup-without-gui

cd /usr/ports/net/cvsup-without-gui
make
make install
rehash
cd /usr/share/examples/cvsup
vi ports-supfile

--------------------------------------
# listed at http://www.freebsd.org/handbook/mirrors.html.
*default host=CHANGE_THIS.FreeBSD.org
*default base=/usr
--------------------------------------
Change to:
--------------------------------------
# listed at http://www.freebsd.org/handbook/mirrors.html.
*default host=cvsup.dk.FreeBSD.org
*default base=/usr
--------------------------------------

cvsup -g -L 2 ports-supfile

– The system now updates the ports collection


FreeBSD: Install Webserver (ftp, http, php)

# apache wants openssl
cd /usr/ports/security/openssl
make install

# and expat2
cd /usr/ports/textproc/expat2
make deinstall
make install

cd /usr/ports/www/apache2
make install
echo 'apache2_enable="YES"' >> /etc/rc.conf

cd /usr/ports/www/mod_php4
make install

– ncurses menu will come up, select wanted modules

echo AddType application/x-httpd-php .php >> /usr/local/etc/apache2/httpd.conf

The web directory is /usr/local/www/data-dist

/usr/local/etc/rc.d/apache2.sh restart

– FTP daemon so users can upload files

cd /usr/ports/ftp/pure-ftpd
make
– ncurses box will come up, select wanted (none)
make install
cd /usr/local/etc
cp pure-ftpd.conf.sample pure-ftpd.conf
pw useradd ftp
echo 'pureftpd_enable="YES"' >> /etc/rc.conf


Using updatedb on FreeBSD – slocate

On FreeBSD, the updatedb program is named locate.updatedb.

to fix:

cd /sbin
ln -s /usr/libexec/locate.updatedb updatedb
updatedb


Scan for rootkit

Use chkrootkit to determine the extent of a compromise.

If you suspect that you have a compromised system, it is a good idea to check for root kits that the intruder may have installed. In short, a root kit is a collection of programs that intruders often install after they have compromised the root account of a system. These programs will help the intruders clean up their tracks, as well as provide access back into the system. Because of this, root kits will sometimes leave processes running so that the intruder can come back easily and without the system administrator’s knowledge. This means that some of the system’s binaries (like ps, ls, and netstat) will need to be modified by the root kit in order to not give away the backdoor processes that the intruder has put in place. Unfortunately, there are so many different root kits that it would be far too time-consuming to learn the intricacies of each one and look for them manually. Scripts like chkrootkit (http://www.chkrootkit.org) will do the job for you automatically.

In addition to detecting over 50 different root kits, chkrootkit will also detect network interfaces that are in promiscuous mode, altered lastlog files, and altered wtmp files. These files contain times and dates of when users have logged on and off the system, so if they have been altered, this is evidence of an intruder. In addition, chkrootkit will perform tests in order to detect kernel module-based root kits. C programs that are called by the main chkrootkit script perform all of these tests.

It isn't a good idea to install chkrootkit on your system and simply run it periodically, since an attacker may simply find the installation and change it so that it doesn't detect his presence. A better idea is to compile it and put it on removable or read-only media. To compile chkrootkit, download the source package and extract it. Then go into the directory that it created and type make sense.

Running chkrootkit is as simple as just typing ./chkrootkit from the directory it was built in. When you do this, it will print each test that it performs and the result of the test:

# ./chkrootkit
ROOTDIR is `/'
Checking `amd'... not found
Checking `basename'... not infected
Checking `biff'... not found
Checking `chfn'... not infected
Checking `chsh'... not infected
Checking `cron'... not infected
Checking `date'... not infected
Checking `du'... not infected
Checking `dirname'... not infected
Checking `echo'... not infected
Checking `egrep'... not infected
Checking `env'... not infected
Checking `find'... not infected
Checking `fingerd'... not found
Checking `gpm'... not infected
Checking `grep'... not infected
Checking `hdparm'... not infected
Checking `su'... not infected

That's not very interesting, since the machine hasn't been infected (yet). chkrootkit can also be run on disks mounted in another machine; just specify the mount point for the partition with the -r option, like this:

# ./chkrootkit -r /mnt/hda2_image

Also, since chkrootkit depends on several system binaries, you may want to verify them before running the script (using the Tripwire [Hack #97] or RPM [Hack #98] methods). These binaries are awk, cut, egrep, find, head, id, ls, netstat, ps, strings, sed, and uname. If you have known good backup copies of these, you can specify the path to them by using the -p option. For instance, if you copied them to a CD-ROM and then mounted it under /mnt/cdrom, you would use a command like this:

# ./chkrootkit -p /mnt/cdrom

You can also add multiple paths by separating each one with a :. Instead of maintaining a separate copy of each of these binaries, you could simply keep a statically compiled copy of BusyBox handy (http://www.busybox.net). Intended for embedded systems, BusyBox can perform the functions of over 200 common binaries, and does so using a very tiny binary with symlinks. A floppy, CD, or USB keychain (with the read-only switch enabled) with chkrootkit and a static BusyBox installed can be a quick and handy tool for checking the integrity of your system.


Finding compromised packages with RPM

Verify operating system installed files in an RPM-based distribution.

So you’ve had a compromise and need to figure out which files (if any) were modified by the intruder, but you didn’t install Tripwire? Well, all is not lost if your distribution uses RPM for its package management system. While not as powerful as Tripwire, RPM can be useful for finding to what degree a system has been compromised. RPM keeps MD5 signatures for all the files it has ever installed. We can use this functionality to check the packages on a system against its signature database. In addition to MD5 checksums, you can also check a file’s size, user, group, mode, and modification time against that which is stored in the system’s RPM database.

To verify a single package, run this:

rpm -V package
If the intruder modified any binaries, it’s very likely that the ps command was one of them. Let’s check its signature:

# which ps
/bin/ps
# rpm -V `rpm -qf /bin/ps`
S.5....T /bin/ps

Here we see from the S, 5, and T that the file's size, checksum, and modification time have changed since it was installed, which is not good at all. Note that only files that do not match the information contained in the package database will result in output.
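
For reference, the rpm -V flag string can be decoded mechanically. This Python sketch hard-codes the classic eight column meanings (newer rpm versions add a ninth capabilities column, so treat the table as illustrative):

```python
# Classic rpm -V columns: one character per failed test, '.' for a pass.
RPM_VERIFY_FLAGS = {
    "S": "file size differs",
    "M": "mode (permissions/type) differs",
    "5": "MD5 checksum differs",
    "D": "device major/minor number differs",
    "L": "symlink path differs",
    "U": "owner differs",
    "G": "group differs",
    "T": "modification time differs",
}

def decode_verify(flags):
    """Expand a flag string such as 'S.5....T' into human-readable findings."""
    return [RPM_VERIFY_FLAGS[c] for c in flags if c != "."]

print(decode_verify("S.5....T"))
```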

If we want to verify all packages on the system, we can use the usual rpm option that specifies all packages, -a:

# rpm -Va
S.5....T   /bin/ps
S.5....T c /etc/pam.d/system-auth
S.5....T c /etc/security/access.conf
S.5....T c /etc/pam.d/login
S.5....T c /etc/rc.d/rc.local
S.5....T c /etc/sysconfig/pcmcia
.......T c /etc/libuser.conf
S.5....T c /etc/ldap.conf
.......T c /etc/mail/sendmail.cf
S.5....T c /etc/sysconfig/rhn/up2date-uuid
.......T c /etc/yp.conf
S.5....T   /usr/bin/md5sum
.......T c /etc/krb5.conf

There are other options that can be used to limit what gets checked on each file. Some of the more useful ones are --nouser, --nogroup, --nomtime, and --nomode. These can be used to eliminate a lot of the output that results from configuration files that you've modified.

Note that you’ll probably want to redirect the output to a file, unless you narrow down what gets checked by using the command-line options. Running rpm -Va without any options can result in quite a lot of output resulting from modified configuration files and such.

This is all well and good, but ignores the possibility that someone has compromised key system binaries and that they may have compromised the RPM database as well. If this is the case, we can still use RPM, but we’ll need to obtain the file the package was installed from in order to verify the installed files against it.

The worst-case scenario is that the rpm binary itself has been compromised. It can be difficult to be certain of this unless you boot from an alternate media, as mentioned earlier. If this is the case, you should locate a safe rpm binary to use for verifying the packages.

First find the name of the package that owns the file. You can do this by running:

rpm -qf filename

Then you can locate that package from your distribution media, or download it from the Internet. After doing so, you can verify the installed files against what’s in the package using this command:

rpm -Vp package_file

RPM can be used for quite a number of useful things, including verifying the integrity of system binaries. However, it should not be relied on for this purpose. If at all possible, something like Tripwire [Hack #97] or AIDE (http://sourceforge.net/projects/aide) should be used instead.


Verify file integrity and find compromised files

Use Tripwire to alert you to compromised files or verify file integrity in the event of a compromise.

One tool that can help you detect intrusions on a host and also ascertain what happened after the fact is Tripwire (http://sourceforge.net/projects/tripwire). Tripwire is part of a class of tools known as file integrity checkers, which can detect the presence of important changed files on your systems. This is desirable because intruders who have gained access to a system will often install what’s known as a root kit, in an attempt to both cover their tracks and maintain access to the system. A root kit usually accomplishes this by modifying key operating system utilities such as ps, ls, and other programs that could give away the presence of a backdoor program. This usually means that these programs will be patched to not report that a certain process is active or that certain files exist on the system. Attackers could also modify the system’s MD5 checksum program (e.g., md5 or md5sum) to report correct checksums for all the binaries that they have replaced. Since using MD5 checksums is usually one of the primary ways to verify whether a file has been modified, it should be clear that something else is sorely needed.

This is where Tripwire comes in handy. It stores a snapshot of your files in a known state, so you can periodically compare the files against the snapshot to discover discrepancies. With this snapshot, Tripwire can track changes in a file’s size, inode number, permissions, or other attributes, such as the file’s contents. To top all of this off, Tripwire encrypts and signs its own files, to detect if it has been compromised itself.
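
The snapshot-and-compare idea (though none of Tripwire's signing, policy language, or attribute tracking) can be sketched in a few lines of Python; the function names are invented for this example:

```python
import hashlib
import os

def _digest(path):
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def snapshot(paths):
    """Record a digest for each file while the system is in a known-good state."""
    return {p: _digest(p) for p in paths}

def violations(snap):
    """Compare the live filesystem against the stored snapshot."""
    problems = []
    for path, digest in snap.items():
        if not os.path.exists(path):
            problems.append((path, "missing"))
        elif _digest(path) != digest:
            problems.append((path, "modified"))
    return problems
```

A real checker must also protect the snapshot itself, which is exactly what Tripwire's signed, encrypted database is for.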

Tripwire is driven by two main components: a policy and a database. The policy lists all files and directories that Tripwire should snapshot, along with rules for identifying violations (i.e., unexpected changes). For example, a simple policy might treat any changes in /root, /sbin, /bin, and /lib as violations. The Tripwire database contains the snapshot itself, created by evaluating the policy against your filesystems. Once setup is complete, you can compare filesystems against the snapshot at any time, and Tripwire will report any discrepancies.

Along with the policy and database, Tripwire also has configuration settings, stored in a file that controls global aspects of its behavior. For example, the configuration specifies the locations of the database, policy file, and tripwire executable.

Tripwire uses two cryptographic keys to protect its files. The site key protects the policy file and the configuration file, and the local key protects the database and generated reports. Multiple machines with the same policy and configuration may share a site key, but each machine must have its own local key for its database and reports.

One caveat with Tripwire is that its batch-oriented method of integrity checking gives intruders a window of opportunity to modify a file after it has been legitimately modified and before the next integrity check has been run. The modified file will be flagged, but it will be expected (because you know that the file is modified) and probably dismissed as a legitimate change to the file. For this reason, it is best to update your Tripwire snapshot as often as possible. Failing that, you should note the exact time that you modified a file, so you can compare it with the modification time that Tripwire reports.

Tripwire is available with the latest versions of Red Hat and as a port on FreeBSD. However, if you’re not running either of those, you’ll need to compile it from source. To compile Tripwire, download the source package and unpack it. Next, check whether you have a symbolic link from /usr/bin/gmake to /usr/bin/make. (Operating systems outside the world of Linux don’t always come with GNU make, so Tripwire explicitly looks for gmake, but this is simply called make on most Linux systems.) If you don’t have such a link, create one.

Another thing to check for is a full set of subdirectories in /usr/share/man. Tripwire will need to place manpages in man4, man5, and man8. On systems where these are missing, the installer will create files named after those directories, rather than creating directories and placing the files within the appropriate ones. For instance, a file called /usr/share/man/man4 would be created instead of a directory of the same name containing the appropriate manual pages.

Now change your working directory to Tripwire source’s root directory (e.g., ./tripwire-2.3.1-2) and read the README and INSTALL files. Both are brief but important.

Finally, change to the source tree’s src directory (e.g., ./tripwire-2.3.1-2/src) and make any necessary changes to the variable definitions in src/Makefile. Be sure to verify that the appropriate SYSPRE definition is uncommented (SYSPRE = i686-pc-linux or SYSPRE = sparc-linux, etc.).

Now you’re ready to compile. While still in Tripwire’s src directory, enter this command:

$ make release

Then, after compilation has finished, run these commands:

$ cd ..

$ cp ./install/install.cfg .

$ cp ./install/install.sh .

Now open install.cfg with your favorite text editor to fine-tune the configuration variables. While the default paths are probably fine, you should at the very least examine the Mail Options section, which is where we initially tell Tripwire how to route its logs. Note that these settings can be changed later.

If you set TWMAILMETHOD=SENDMAIL and specify a value for TWMAILPROGRAM, Tripwire will use the specified local mailer (sendmail by default) to deliver its reports to a local user or group. If instead you set TWMAILMETHOD=SMTP and specify values for TWSMTPHOST and TWSMTPPORT, Tripwire will mail its reports to an external email address via the specified SMTP server and port.

Once you are done editing install.cfg, it’s time to install Tripwire. While still in the root directory of the Tripwire source distribution, enter the following:

# sh ./install.sh

You will be prompted for site and local passwords: the site password protects Tripwire’s configuration and policy files, whereas the local password protects Tripwire’s databases and reports. This allows the use of a single policy across multiple hosts, to centralize control of Tripwire policies but distribute responsibility for database management and report generation.

If you do not plan to use Tripwire across multiple hosts with shared policies, there’s nothing wrong with setting the site and local Tripwire passwords on a given system to the same string. In either case, choose a strong passphrase that contains some combination of upper- and lowercase letters, punctuation (which can include whitespace), and numerals.

When you install Tripwire (whether via binary package or source build), a default configuration file is created, /etc/tripwire/tw.cfg. You can’t edit this file, because it’s an encrypted binary, but for your convenience, a clear-text version of it, twcfg.txt, should also reside in /etc/tripwire. If it does not, you can create the text version with this command:

# twadmin --print-cfgfile > /etc/tripwire/twcfg.txt

By editing this file, you can make changes to the settings you used when installing Tripwire, and you can change the location where Tripwire will look for its database. This can be done by setting the DBFILE variable. One interesting use of this is to set the variable to a directory within the /mnt directory hierarchy. Then, after the database has been created you can copy it to a CD-ROM and remount it there whenever you need to perform integrity checks.

After you are done editing the configuration file, you can re-encrypt it by running this command:

# twadmin --create-cfgfile --site-keyfile ./site.key twcfg.txt

You should also remove the twcfg.txt file.

You can then initialize Tripwire’s database by running this command:

# tripwire --init

Since this uses the default policy file that Tripwire installed, you will probably see errors related to files and directories not being found. These errors are nonfatal, and the database will finish initializing. If you want to get rid of these errors, you can edit the policy and remove the files that were reported as missing.

First you’ll need to decrypt the policy file into an editable plain text format. You can do this by running the following command:

# twadmin --print-polfile > twpol.txt

Then comment out any files that were reported as missing. You will probably want to look through the file and determine whether any files that you would like to catalog aren’t already in there. For instance, you will probably want to monitor all SUID files on your system [Hack #2]. Tripwire’s policy-file language can allow for far more complex constructs than simply listing one file per line; read the twpolicy(4) manpage for more information if you’d like to use some of these features.

After you’ve updated your policy, you’ll also need to update Tripwire’s database. You can do this by running the following command:

# tripwire --update-policy twpol.txt

To perform checks against your database, run this command:

# tripwire --check

This will print a report to the screen and leave a copy of it in /var/lib/tripwire/report. If you want Tripwire to automatically email the report to the configured recipients, you can add --email-report to the end of the command. You can view the reports by running twprint.

For example:

# twprint --print-report --twrfile \
    /var/lib/tripwire/report/colossus-20040102-205528.twr

Finally, to reconcile changes that Tripwire reports with its database, you can run a command similar to this one:

# tripwire --update --twrfile \
    /var/lib/tripwire/report/colossus-20040102-205528.twr

You can and should schedule Tripwire to run its checks as regularly as possible. In addition to keeping your database in a safe place, such as on a CD-ROM, you'll also want to make backup copies of your configuration, policy, and keys. Otherwise, you will be unable to perform an integrity check if someone (malicious or not) deletes them.


Forensics: Create an image of the entire harddisk

Make a bit-for-bit copy of your system’s disk for forensic analysis.

Before you format and reinstall the operating system on a recently compromised machine, you should take the time to make duplicates of all the data stored on the system. Having an exact copy of the contents of the system is not only invaluable for investigating a break-in, but may be necessary for pursuing any future legal actions. Before you begin, you should make sure that your md5sum, dd, and fdisk binaries are not compromised (you are running Tripwire [Hack #97] or otherwise have installed your packages using RPM [Hack #98], right?).

But hang on a second. Once you start wondering about the integrity of your system, where do you stop? Hidden processes could be running, waiting for the root user to log in on the console and ready to remove all evidence of the break-in. Likewise, there could be scripts installed to run at shutdown to clean up log entries and delete any incriminating files. Once you’ve determined that it is likely that a machine has been compromised, you may want to simply power down the machine (yes, just switch it off!) and boot from an alternate media. Use a boot CD or another hard drive that has a known good copy of the operating system. That way you can know without a doubt that you are starting the system from a known state, eliminating the possibility of hidden processes that could taint your data before you can copy it. The downside to this procedure is that it will obviously destroy any evidence of running programs or data stored on a RAM disk. However, chances are very good that the intruder has installed other backdoors that will survive a reboot, and these changes will most certainly be saved to the disk.

To make a bit-for-bit copy of our disks, we’ll use the dd command. But before we do this we’ll generate a checksum for the disk so that we can check our copy against the disk contents, to ensure that it is indeed an exact copy.

To generate a checksum for the partition we wish to image, run this command:

# md5sum /dev/hda2 > /tmp/hda2.md5

In this case we’re using the second partition of the first IDE disk on a Linux system. Now that that’s out of the way, it’s time to make an image of the disk:

# dd if=/dev/hda of=/tmp/hda.img

Note that you will need enough space in /tmp to hold a copy of the entire /dev/hda hard drive. This means that /tmp shouldn’t be a RAM disk and should not be stored on /dev/hda. Write it to another hard disk altogether.

Why do you want to image the whole disk? If you image just a partition, it is not an exact copy of what is on the disk. An attacker could store information outside of the partition, and this wouldn’t be copied if you just imaged the partition itself. In any case, we can always reconstruct a partition image as long as we have an image of the entire disk.

In order to create separate partition images, we will need some more information. Run fdisk to get the offsets and sizes for each partition in sectors. To get the sectors offsets for the partition, run this:

# fdisk -l -u /dev/hda

Disk /dev/hda: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot    Start      End   Blocks  Id System
/dev/hda1   *        63   208844   104391  83 Linux
/dev/hda2        208845  7341704  3566430  83 Linux
/dev/hda3       7341705  8385929  522112+  82 Linux swap

Be sure to save this information for future reference, just in case you want to create the separate image files at a later date.

Now create an image file for the second partition:

# dd if=hda.img of=hda2.img bs=512 skip=208845 count=$[7341704-208845]
7132859+0 records in
7132859+0 records out

Note that the count parameter does some shell math for us: the size of the partition is the location of the last block (7341704) minus the location of the first block (208845). Be sure that the bs parameter matches the block size reported by fdisk (usually 512, but it’s best to check it when you run fdisk). Finally, we’ll generate a checksum of the image file and then compare it against the original one we created:
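
The sector arithmetic is worth double-checking; a Python sketch with the numbers taken from the fdisk listing above. Note that fdisk's End column is inclusive, so end - start + 1 is the size that matches the Blocks column (in 1K blocks) exactly:

```python
SECTOR = 512  # bytes per sector, matching the fdisk "Units" line

# /dev/hda2's Start and End sector numbers from the fdisk listing
hda2_start, hda2_end = 208845, 7341704

count = hda2_end - hda2_start        # 7132859: the count used by dd above
byte_offset = hda2_start * SECTOR    # where hda2 begins inside hda.img

# End is inclusive, so the complete partition is one sector larger; this
# matches fdisk's Blocks column (3566430 one-kilobyte blocks) exactly.
full = hda2_end - hda2_start + 1

print(count, byte_offset, full)
```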

# md5sum hda2.img > /tmp/hda2.img.md5 && diff /tmp/hda2.md5 /tmp/hda2.img.md5

The checksum for the image matches that of the actual partition exactly, so we know we have a good copy. Now you can rebuild the original machine and look through the contents of the copy at your leisure.
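
If you script this verification, hashing a multi-gigabyte image in one read is impractical. A Python equivalent of the md5sum runs above streams the file in chunks (the function name is invented for this example):

```python
import hashlib

def md5_file(path, chunk_size=1 << 20):
    """Stream a file through MD5 a megabyte at a time, so even a full disk
    image never has to fit in memory; equivalent to running md5sum on it."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()
```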
