
Gentoo Zumastor Guide


1. Before You Begin

Partitioning

Zumastor requires some storage space for storing snapshot data (and for the volume itself if you are creating a new filesystem). If you are starting from scratch, install the operating system using custom partitioning, preferably leaving a large LVM volume group (named sysvg in this example) for use when creating origin and snapshot devices. While you are at it, you might as well create an origin and a snapshot store:

Code Listing 1.1: Sample LVM Setup

pvcreate /dev/hda7
vgcreate sysvg /dev/hda7
lvcreate --size 20g -n test sysvg
lvcreate --size 40g -n test_snap sysvg

2. Setting Up The Overlay

Install Layman

Zumastor is most easily installed by way of an overlay, managed through layman. If you already have layman installed, you can jump ahead to Add the Zumastor Overlay. If you don't know what I'm talking about, you might want to read the Gentoo Overlays Guide. For the impatient, I've lifted the key bits (note that this must all be done as root):

Code Listing 2.1: Installing layman

emerge layman
echo "source /usr/portage/local/layman/make.conf" >> /etc/make.conf

Congratulations, you've installed layman.

Add the Zumastor Overlay

The Zumastor overlay isn't (yet) an officially sanctioned Gentoo overlay, so adding it is a bit more complicated than usual.

Warning: Overlays are perhaps the best known way to mess up a Gentoo setup... even the officially sanctioned overlays. Consider this fair warning.

If you (again as root) open up /etc/layman/layman.cfg in your editor of choice, you will find a line that looks something like this:

Code Listing 2.2: layman.cfg fragment

overlays   : http://www.gentoo.org/proj/en/overlays/layman-global.txt

URLs for additional overlays belong on their own lines. So, add the Zumastor overlay listing so that the file looks like this:

Code Listing 2.3: Edited layman.cfg fragment

overlays   : http://www.gentoo.org/proj/en/overlays/layman-global.txt
             http://zumastor.googlecode.com/svn/trunk/gentoo/overlay/overlays.xml

You've now added Zumastor's overlay listing to layman. Now you need to add Zumastor's overlay (still as root):

Code Listing 2.4: Adding the Zumastor overlay

# layman -f -a zumastor

3. Installing Zumastor

Emerging Zumastor

The next step is to emerge the relevant ebuilds. Because this is still experimental, everything is masked, so our first order of business is to unmask them (still as root ;-):

Code Listing 3.1: Unmasking Zumastor ebuilds

# echo "sys-kernel/zumastor-sources ~x86" >> /etc/portage/package.keywords
# echo "sys-block/zumastor ~x86" >> /etc/portage/package.keywords

Note: If you are building for a 64-bit platform, you should of course use ~amd64 instead of ~x86.

Now we emerge the packages:

Code Listing 3.2: Emerging Zumastor

# emerge zumastor-sources zumastor

Configuring/Building A Zumastor Kernel

If you are running Gentoo, you've built and configured a kernel before. Doing the same with the zumastor-sources package is really no different. All you need to change is to enable the CONFIG_DM_DDSNAP option (as a module or built in). CONFIG_DM_DDSNAP is located under "Device Drivers" -> "Multi-device support" -> "Distributed Data Snapshot target".

The Zumastor project also recommends a few options be enabled to assist with debugging: CONFIG_IKCONFIG_PROC, CONFIG_MAGIC_SYSRQ, and CONFIG_DEBUG_INFO.
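The exact build procedure depends on how you normally manage kernels. As a minimal sketch, assuming /usr/src/linux points at the zumastor-sources tree and that you install the kernel image by hand on an x86 box:

# cd /usr/src/linux
# make menuconfig
  (enable "Distributed Data Snapshot target" plus the debugging options above, then save and exit)
# grep -E 'DM_DDSNAP|IKCONFIG_PROC|MAGIC_SYSRQ|DEBUG_INFO' .config
# make && make modules_install
# cp arch/i386/boot/bzImage /boot/kernel-zumastor

Don't forget to add the new kernel to your bootloader configuration before rebooting.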

4. Running Zumastor

Boot the New Kernel

Now it is time to reboot into the new kernel you've built. Once you have, test out the zumastor init script by starting and stopping it (a sample start/stop cycle is sketched after the next listing). You can check its status while it is running like so:

Code Listing 4.1: Checking that Zumastor is running

# zumastor status --usage
RUN STATUS:
/var/run/zumastor
|-- agents/
|-- cron/
|-- mount/
|-- running
`-- servers/
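
For reference, starting and stopping the init script works like any other Gentoo service; a start/stop cycle looks something like this (standard service commands, output omitted):

# /etc/init.d/zumastor start
# /etc/init.d/zumastor stop
# /etc/init.d/zumastor start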

If all works well, you can add it to your default run level.

Code Listing 4.2: Adding Zumastor to Default Runlevel

# rc-update add zumastor default

Start Snapshotting

First you need to define your volume:

Code Listing 4.3: Define your volume

# zumastor define volume testvol /dev/sysvg/test /dev/sysvg/test_snap --initialize
js_bytes was 512000, bs_bits was 12 and cs_bits was 12
cs_bits 14
chunk_size is 16384 & js_chunks is 32
Initializing 5 bitmap blocks...
pid = 5879
Thu May 10 13:45:54 2007: [5880] snap_server_setup: ddsnapd server bound to socket /var/run/zumastor/servers/testvol
pid = 5881
Successfully created and initialized volume 'testvol'.
You can now create a filesystem on /dev/mapper/testvol

Next you create a filesystem of your choice on the volume:

Code Listing 4.4: Create a filesystem on testvol

# mkfs.ext3 /dev/mapper/testvol

Now let's set up automated hourly and daily snapshots:

Code Listing 4.5: Automate hourly and daily snapshots

# zumastor define master testvol -h 24 -d 7

Now, you can verify the new setup by checking zumastor's status:

Code Listing 4.6: Verifying automated snapshots

# zumastor status --usage

VOLUME testvol:
Data chunk size: 16384
Used data: 0
Free data: 0
Metadata chunk size: 16384
Used metadata: 56
Free metadata: 2621384
Origin size: 21474836480
Write density: 0
Creation time: Tue May 15 18:33:23 2007
  Snap            Creation time   Chunks Unshared   Shared
totals                                 0        0        0
Status: running
Configuration:
/var/lib/zumastor/volumes/testvol
|-- device/
|   |-- origin -> /dev/sysvg/test
|   `-- snapstore -> /dev/sysvg/test_snap
|-- master/
|   |-- next
|   |-- schedule/
|   |   |-- daily
|   |   `-- hourly
|   |-- snapshots/
|   `-- trigger|
`-- targets/

RUN STATUS:
/var/run/zumastor
|-- agents/
|   `-- testvol=
|-- cron/
|   `-- testvol
|-- mount/
|   `-- testvol/
|-- running
`-- servers/
    `-- testvol=

As time progresses, you should see the snapshots appear in the Configuration section under master/snapshots/.
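
If you want to poke at this by hand, the paths shown in the status output above can be listed directly (the snapshot names and numbers will vary):

# ls /var/lib/zumastor/volumes/testvol/master/snapshots/
# ls /var/run/zumastor/mount/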

Trust But Verify

Now let's see if the snapshots are really working by putting a testfile on the volume:

Code Listing 4.7: Add a test file

# date >> /var/run/zumastor/mount/testvol/testfile; sync

Now we'll force a snapshot:

Code Listing 4.8: Force an hourly style snapshot

# zumastor snapshot testvol hourly

Now we'll overwrite the contents of the file and take another snapshot.

Code Listing 4.9: Mutate and snapshot again

# date >> /var/run/zumastor/mount/testvol/testfile
# zumastor snapshot testvol hourly

Since we can see each of the snapshots, let's take a look at what is there:

Code Listing 4.10: Compare snapshots

# diff /var/run/zumastor/mount/testvol\(0\)/testfile /var/run/zumastor/mount/testvol\(2\)/testfile
1a2
> Wed Nov 21 16:29:25 PST 2007

5. Remote Replication

Get SSH setup

On each machine, run ssh-keygen as root. Copy the .ssh/id_dsa.pub file from each account to .ssh/authorized_keys on the other. This authorizes root on each machine to connect without a password to the other. More restricted access may be used in actual deployment.
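
As a rough sketch, using the lentulus and sparticus host names from this guide and DSA keys as described above (adjust key types and paths to taste):

lentulus ~ # ssh-keygen -t dsa
lentulus ~ # cat ~/.ssh/id_dsa.pub | ssh sparticus 'mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys'
sparticus ~ # ssh-keygen -t dsa
sparticus ~ # cat ~/.ssh/id_dsa.pub | ssh lentulus 'mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys'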

Setup Master

Now you need to define a target to replicate to.

Code Listing 5.1: Define target on master

# zumastor define target testvol sparticus:11235 -p 30

This tells our master to replicate to port 11235 on sparticus every 30 seconds. Don't forget the "-p 30"; otherwise, replication will not happen. In your zumastor status you should now see something like:

Code Listing 5.2: Evidence of replication

...
`-- targets/
    `-- sparticus/
        |-- period
        |-- port
        `-- trigger|
...


Setup Slave

Now let's set up sparticus:

Code Listing 5.3: Define volume and configure source on target

# ssh sparticus
Last login: Wed Nov 21 16:20:11 2007 from lentulus
sparticus ~ # zumastor define volume testvol /dev/sysvg/test /dev/sysvg/test_snap --initialize
sparticus ~ # zumastor define source testvol lentulus --period 600

Let's Replicate

Let's get this party started:

Code Listing 5.4: Start replication

sparticus ~ # zumastor start source testvol

Alternatively, you may kick off replication manually each time from the master.

Code Listing 5.5: Start replication from the master

lentulus ~ # zumastor replicate testvol sparticus

Once initial replication has started, ddsnap processes should begin consuming CPU cycles and moving tens of megabits per second between the nodes. The entire origin device has to be transferred the first time, so give it 15 minutes or so before looking for snapshots on the target. When replication is complete, df on the slave should show the same /var/run/zumastor/mount/testvol volume locally mounted.
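
A quick way to check this from the master, assuming the passwordless SSH access set up earlier:

lentulus ~ # ssh sparticus df -h /var/run/zumastor/mount/testvol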

Just To Be Sure

You can't be too sure about these things, so let's make sure it is actually happening:

Code Listing 5.6: Verify replication

lentulus ~ # date >> /var/run/zumastor/mount/testvol/testfile
lentulus ~ # sleep 60
(Waiting for data to replicate)
lentulus ~ # ssh sparticus cat /var/run/zumastor/mount/testvol/testfile
Wed Nov 21 18:44:01 2007



Updated November 20, 2007


Summary: This guide shows you how to get a basic zumastor master node with snapshots running on Gentoo, and then how to replicate that to a second slave server. Actual deployments may be more complex - the goal here is to get up and running as simply as possible.

Christopher Smith
Author


Copyright 2001-2007 Gentoo Foundation, Inc.