This document will explain what Zumastor is, how it differs from other snapshot and replication tools, how to install it, and how to use it.
It has been difficult to convince users of commercial NAS appliances to switch to Linux. In particular, some commercial NAS boxes have had better support for snapshots and remote replication than Linux does. Zumastor is an effort to fix that.
Snapshots can be useful as part of an hourly backup system. Instead of shutting down all applications for the entire duration of the backup, you can shut them down for just the second or two needed to take a snapshot.
If your goal is to protect users from accidental deletion of files, you may want to take snapshots every hour, and leave the last few snapshots around; users who accidentally delete a file can just look in the snapshot.
LVM already lets administrators create snapshots, but its design has the surprising property that every block you change on the original volume consumes one block for each snapshot. The resulting speed and space penalty usually makes the use of more than one or two snapshots at a time impractical.
Zumastor keeps all snapshots for a particular volume in a common snapshot store, and shares blocks the way one would expect: changing one block of a file in the original volume uses only one block in the snapshot store, no matter how many snapshots you have.
Andrew Tridgell's rsync is a wonderful tool for replicating files remotely. However, when doing periodic replication of large numbers of infrequently changing files, the overhead for figuring out what files need to be sent can be extreme.
Zumastor keeps track of which blocks change between one snapshot and the next, and can easily send just the changed blocks. Thus Zumastor can do frequent replication of large filesystems much more efficiently than rsync can.
See Replication Latency Benchmark below.
Zumastor is licensed under GPLv2.
Zumastor development happens on an svn repository, mailing list, irc channel, and issue tracker linked to from our home page at http://zumastor.org.
Most development is done by a small group of contributors (mainly Daniel Phillips, Jiaying Zhang, Shapor Naghibzadeh, and Drake Diedrich), but we welcome patches from the community. (Jonathan Van Eenwyk contributed a cool offline backup feature, but his patch has not yet been merged. Let us know if you need this.)
Version 0.8, released on 9 May 2008, adds several bugfixes (in particular, replication between x86 and x86_64 hosts now works) as well as experimental native Hardy packages. See our issue tracker for the full list of fixed issues.
Version 0.7, released on 27 Mar 2008, adds experimental "revert to snapshot" and "multilayer replication" features, lets downstream replicas retain snapshots on their own schedule, and fixes several bugs. See our issue tracker for the full list of fixed issues.
Version 0.6, released on 4 Feb 2008, added support for offline resizing, and fixed a bug that could cause data loss. (For developers, it also added a way to run our autobuilder tests before committing changes.) See our issue tracker for the full list of fixed issues.
Version 0.5, released on 10 Jan 2008, added support for volumes larger than 2TB and for exporting snapshots via NFS and CIFS. See our issue tracker for the full list of fixed issues.
Version 0.4, released on 1 Dec 2007, was the first public release.
We have a number of kernel patches, and will push them upstream as soon as possible. See http://zumastor.org for a list.
Each volume can have at most 64 snapshots. This limit may be raised or removed in future releases; see bug 6.
Although writes to a Zumastor volume are scalable in that their cost doesn't depend on the number of snapshots, there is still significant constant overhead; see bug 52. Multiple spindles and RAID controller caches can reduce this problem. Reducing the write overhead is an important goal for future releases. Until that happens, Zumastor is best suited for read-mostly applications.
Zumastor's snapshot exception index uses about 1MB of RAM per gigabyte of snapshot change blocks, but the index's size is fixed at ddsnapd startup time. You can tune it with the --cachesize option to zumastor define volume; the default size of the cache is 128MB (or one quarter of physical RAM, whichever is smaller). You may need to tune it to get good performance with very large volumes.
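For instance, a hypothetical invocation raising the cache for a large volume might look like the following; the exact value syntax accepted by --cachesize is an assumption here, so check the zumastor man page before relying on it:

# zumastor define volume zumatest /dev/sysvg/test /dev/sysvg/test_snap --initialize --cachesize 256m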
A future release may use a more compact index; see bug 5.
Zumastor exposes itself to users in several ways:
An admin script, /etc/init.d/zumastor
A commandline tool, /bin/zumastor
Origin mount points, e.g. /var/run/zumastor/mount/myvolumename
Snapshot mount points, e.g. /var/run/zumastor/snapshot/myvolumename/yyyy.mm.dd-hh.mm.ss
The admin script, /etc/init.d/zumastor, simply starts or stops the Zumastor daemon responsible for mounting Zumastor volumes. Normally this is run once at system startup time by init, but you may need to run it manually in unusual circumstances.
For instance, if you tell Zumastor to use origin volumes or snapshot stores on loopback devices that are not set up at boot time, and then reboot, Zumastor can't mount them automatically; you have to set them up and then restart Zumastor with the command /etc/init.d/zumastor restart.
The sysadmin uses /bin/zumastor to configure snapshot stores, take or delete snapshots, and set up remote replication. See the zumastor man page for more information.
The zumastor command is really just a very fancy wrapper around the underlying command, ddsnap. You probably won't ever need to use ddsnap directly, but if you're curious, see the ddsnap man page.
Zumastor deals with two kinds of volumes: origin volumes and snapshot volumes. Simply put, origin volumes are what you take snapshots of. Zumastor mounts these volumes for you in appropriately named subdirectories of /var/run/zumastor/mount. In other words, you do not need to add any of these to /etc/fstab. The zumastor init script will mount the volume and all its snapshots automatically at boot time.
Zumastor mounts snapshot volumes in appropriately named subdirectories of /var/run/zumastor/snapshot. Each snapshot mount point is named for the creation time of that snapshot, in the form YYYY.MM.DD-HH.MM.SS. Zumastor also maintains a series of symbolic links for each specified snapshot rotation (e.g., hourly.0, hourly.1, daily.0, etc).
Zumastor skips one number each snapshot for housekeeping reasons, so home(1000) would be roughly the 500th snapshot of home. The snapshot number never decreases (though perhaps it would wrap if you did billions of snapshots…)
By default, Zumastor mounts all its volumes under /var/run/zumastor/mount, and mounts all its volume snapshots under /var/run/zumastor/snapshot. You can change the mount point of a volume with the --mountpoint option of the command zumastor define volume or with the zumastor remount command, as described in the zumastor man page. Similarly, you can change the mount points of a volume's snapshots with the --snappath option of the command zumastor define volume.
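For example, a sketch of a definition that relocates both mount points (the /exports paths are purely illustrative; see the zumastor man page for the exact option semantics):

# zumastor define volume zumatest /dev/sysvg/test /dev/sysvg/test_snap --initialize \
    --mountpoint /exports/zumatest --snappath /exports/snapshots/zumatest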
Zumastor requires changes to the Linux kernel that are not yet in the vanilla kernel, so a patched kernel is required. You can install Zumastor on any recent distribution as long as you're willing to do it by hand. Automated installation is available for Debian / Ubuntu and Gentoo. (RHEL, Fedora, and Suse are probably next on our list of distros to support; please let us know if you're willing to contribute RPM .spec files.) For developers, we also support automated installation with UML.
If you use a distro not covered below, you can install if you're willing to patch your kernel and compile everything manually. See the writeup at http://zumastor.googlecode.com/svn/trunk/ddsnap/INSTALL.
Prebuilt kernel packages for recent Debian / Ubuntu systems are available from zumastor.org. You should probably play with Zumastor in a virtual environment before installing on your real systems. (You can even do this on Windows!)
These packages should work with any version of Ubuntu from Dapper to Gutsy, but they aren't built quite like standard Ubuntu kernel packages, so they might not fit all needs. If you need more native Ubuntu packages, see the next section.
Stable releases are at http://zumastor.org/downloads/releases, and trunk snapshots are at http://zumastor.org/downloads/snapshots. Directories are named by branch and svn revision; e.g., 0.7-r1491 is the 0.7 branch as of svn revision 1491. Once you've picked a version, download and install its four .deb packages. (Install dmsetup first.)
Installing in Feisty or later requires some work to avoid Ubuntu bug 78552. In particular, you have to run update-grub by hand after installing. Alternately, you could edit /etc/kernel-img.conf first to use an absolute path for update-grub.
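As a sketch, the kernel-img.conf change amounts to giving the hooks absolute paths (this assumes the stock Ubuntu hook lines; adjust to match your file):

postinst_hook = /usr/sbin/update-grub
postrm_hook   = /usr/sbin/update-grub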
For instance, to download and install the 0.7-r1491 packages:
sudo apt-get install dmsetup
wget -m -np zumastor.org/downloads/releases/0.7-r1491/
mv zumastor.org/downloads/releases/0.7-r1491 .
rm -rf zumastor.org
sudo dpkg -i 0.7-r1491/*.deb
sudo update-grub
Don't forget the trailing slash on the wget URL, or it will download all directories, not just the one you want!
The install takes several minutes. Ignore the warnings about deleting symbolic links …/build and …/source.
Try rebooting into your newly installed kernel. If booting hangs, the most likely cause is an hda1 / sda1 confusion. Try editing the new kernel's entries in /boot/grub/menu.lst to say root=/dev/hda1 instead of root=/dev/sda1. If that helps, note that you might need to mentally replace /dev/sda3 with /dev/hda3 when running through the examples later in this document. (Alternately, you could create a symlink so /dev/sda3 points to /dev/hda3.)
Our team page at launchpad.net (https://launchpad.net/~zumastor-team) has a continually-updated repository of standard Ubuntu packages for Zumastor. It's hooked up to our trunk right now, so it's only for those interested in the very latest untested packages. But they should be real Hardy packages, up to Ubuntu's packaging standards.
Here's how to use that repository, if you dare:
Add these lines to /etc/apt/sources.list
deb http://ppa.launchpad.net/zumastor-team/ubuntu gutsy main restricted universe multiverse
deb-src http://ppa.launchpad.net/zumastor-team/ubuntu gutsy main restricted universe multiverse
Then run sudo apt-get update.
Install zumastor, ddsnap, dmsetup, and a zumastor-enabled kernel. For instance:
sudo aptitude install linux-image-2.6.22-14-zumastor zumastor ddsnap
Finally, reboot and choose the Zumastor-enabled kernel, and make sure the system comes up ok.
Install the free VMWare Player from http://vmware.com/download/player.
If you're on Feisty, it's easier: just do apt-get install vmware-player.
For all other versions of Ubuntu, you have to use the vendor's install script. You may need to do sudo apt-get install build-essential first. There's a nice tutorial for the timid at smokinglinux.com. Take the easy way out and choose all defaults when installing.
If you're running a very new kernel, you may get compile errors when installing vmware player; see https://bugs.launchpad.net/ubuntu/+source/linux-meta/+bug/188391. I was able to work around this by using the "any-any" script, first described at http://communities.vmware.com/thread/26693, and downloadable by clicking on http://vmkernelnewbies.googlegroups.com/web/vmware-any-any-update-116.tgz in a browser (no wget).
Download a vmware image with Ubuntu preinstalled. Dan Kegel, one of the Zumastor developers, prepared a standard Ubuntu Gutsy JeOS 7.10 vmware image that's a svelte 78MB download; you can grab it at http://kegel.com/linux/jeos-vmware-player/. The image is several files 7zipped together, so unpack it before trying to use it. Sadly, there are several versions of 7zip with different interfaces. On both Dapper and Gutsy, sudo apt-get install p7zip installs it. The command to decompress is
7z e ~/ubuntu-7.10-jeos-vmware.7z
on Dapper, and
7zr e ~/ubuntu-7.10-jeos-vmware.7z
on Gutsy. Go figure.
Boot the virtual machine with the command vmplayer gutsy.vmx. Inside vmware, log in to the virtual ubuntu box (using the username and password from the site where you downloaded the image, e.g. "notroot", "notpass" if you're using Dan Kegel's image).
Verify that you can ping the outside world. If you can't, you may need to make minor adjustments to the gutsy.vmx file's networking. On one system, I had to change the line
ethernet0.connectionType = "nat"
to
ethernet0.connectionType = "bridged"
Inside vmware, download and install dmsetup and the Zumastor packages as described above.
Congratulations, you now have a virtual system with Zumastor installed!
If you want to test replication virtually, you'll need to create at least one more virtual machine. The only trick here is getting the network addressing straight. Here's what to do (assuming you're using the gutsy jeos image from the previous example).
Set up one virtual machine first as follows:
Create a new directory vm1 and unzip ubuntu-7.10-jeos-vmware.7z into it, e.g.
$ mkdir vm1
$ cd vm1
$ 7zr e ../ubuntu-7.10-jeos-vmware.7z
Set up the uuid and ethernet address in gutsy.vmx to end in 01 (since this is the first vm), i.e. change the lines
ethernet0.address = "00:50:56:3F:12:00" uuid.location = "56 4d 12 67 af 79 52 33-3f 54 3a 1b 7a 4c 4b ac" uuid.bios = "56 4d 12 67 af 79 52 33-3f 54 3a 1b 7a 4c 4b ac"
to
ethernet0.address = "00:50:56:3F:12:01" uuid.location = "56 4d 12 67 af 79 52 33-3f 54 3a 1b 7a 4c 4b 01" uuid.bios = "56 4d 12 67 af 79 52 33-3f 54 3a 1b 7a 4c 4b 01"
Boot the virtual machine by running vmplayer gutsy.vmx. Networking won't work yet because the system is only set up to use the first ethernet device, and the new ethernet address is assigned to the second ethernet device. Once logged in to the new virtual machine, fix the ethernet address of eth0 and remove any extra ethernet devices by editing /etc/udev/rules.d/70-persistent-net.rules. Then restart each new virtual machine's network by running
sudo /etc/init.d/udev restart
or rebooting it, and verify that networking works.
Configure the hostname to be vm1 by editing /etc/hostname.
Edit /etc/hosts to use vm1 rather than ubuntu, and add a line for the virtual machine's IP address, with its real hostname (you can get the IP address from ip addr).
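For example, assuming the VM was assigned 192.168.16.128 (your address will differ; check ip addr), /etc/hosts on the VM might end up looking like:

127.0.0.1       localhost
127.0.1.1       vm1
192.168.16.128  vm1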
Install an ssh server, e.g.
sudo apt-get install openssh-server
Set up ssh authorization as described in the section Try Remote Replication below. Add both your own public key from your real workstation and the public key of the vm's root account to the virtual machine's root account's authorized_keys file, e.g.
(copy ~/.ssh/id_rsa.pub onto the vm using scp, call it mykey)
sudo sh -c 'cat mykey /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys'
Download and install dmsetup and the Zumastor packages as described above.
Shut down the virtual machine with halt.
Cloning the virtual machines is slightly easier than setting them up from scratch:
For each desired new virtual machine, copy the vm1 directory, e.g.
cp -a vm1 vm2
cd vm2
As above, edit the new machine's gutsy.vmx file to use different uuid and ethernet addresses. e.g. change
ethernet0.address = "00:50:56:3F:12:01" uuid.location = "56 4d 12 67 af 79 52 33-3f 54 3a 1b 7a 4c 4b 01" uuid.bios = "56 4d 12 67 af 79 52 33-3f 54 3a 1b 7a 4c 4b 01"
to
ethernet0.address = "00:50:56:3F:12:02" uuid.location = "56 4d 12 67 af 79 52 33-3f 54 3a 1b 7a 4c 4b 02" uuid.bios = "56 4d 12 67 af 79 52 33-3f 54 3a 1b 7a 4c 4b 02"
Boot the new machine, log in to it as root, and edit /etc/udev/rules.d/70-persistent-net.rules as above to use the new address for eth0, and remove eth1. Then restart the network by running
sudo /etc/init.d/udev restart
and verify that networking works.
Configure the hostname to be vm2, by editing /etc/hostname.
Edit /etc/hosts to use vm2 rather than ubuntu, and add a line for the virtual machine's IP address, with its real hostname (you can get the IP address from ip addr).
Reboot the virtual machine so the hostname change takes effect.
Then merge the /etc/hosts entries for all virtual machines together, and add them to both your host's /etc/hosts file and the /etc/hosts files on all the virtual machines.
Finally, from your real host machine, make sure you can do
ssh root@vm1
ssh root@vm2
...
to make sure you can log in to each vm as root without being prompted for a password.
Tip: create a script to start up all your virtual machines, e.g.
for vm in vm1 vm2 vm3
do
  cd $vm
  vmplayer gutsy.vmx &
  sleep 2
  cd ..
done
Tip: after starting virtual machines, just minimize them, and log in via ssh. That way you won't have to jump through hoops to switch windows, and copy-and-paste will work as expected.
Install qemu on your machine. (If you're using Ubuntu, it might be as simple as apt-get install qemu.)
Download an Ubuntu or Debian installation CD image. Use one of the more complete images if you don't want to configure network access for the test image. Here are links to download Ubuntu Dapper via http or torrent.
Create an empty file to become the image's block device using either dd or qemu-img:
dd if=/dev/zero bs=1M seek=10k count=0 of=zqemu.raw
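Alternatively, qemu-img can create an equivalent sparse 10GB image in one step:

qemu-img create -f raw zqemu.raw 10G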
Start qemu, booting from CD the first time to install a base system:
qemu -cdrom ubuntu-6.06.1-server-i386.iso -hda zqemu.raw -boot d
Choose manual partitioning, create a 2G partition for / on /dev/sda, and leave the rest unallocated. You may or may not want a 1G swap. Create any user you wish and remember the password.
Boot again off the newly installed disk image, this time with userspace networking enabled:
qemu -cdrom ubuntu-6.06.1-server-i386.iso -hda zqemu.raw -boot c -net user -net nic
Log in to the virtual machine, and inside it, download and install dmsetup and the Zumastor packages as described above.
Boot the virtual machine again.
Congratulations, you now have a virtual system with Zumastor installed!
Most people won't need to do this, but just in case, here's how to build our .deb packages. Zumastor is currently built as three packages: a patched kernel package, a ddsnap (snapshot device userspace) package, and a zumastor (management cli) package.
1. Make sure you have the necessary build tools installed, e.g.:
$ sudo aptitude install debhelper devscripts fakeroot kernel-package libc6-dev \
    libevent-dev libkrb5-dev libkrb53 libldap2-dev libpopt-dev ncurses-dev \
    slang1-dev subversion zlib1g-dev
2. Get the latest code:
$ svn checkout http://zumastor.googlecode.com/svn/trunk/ zumastor
3. Run the build script. This will build all three packages.
$ cd zumastor
$ ./buildcurrent.sh kernel/config/full
This will download kernel source from kernel.org. The version downloaded is controlled by the file KernelVersion at the top of the Zumastor source tree. The script takes a parameter which is a .config file for the kernel. There is a distribution-like config in the repository at kernel/config/full. If you need to use a different config, make sure you enable CONFIG_DM_DDSNAP.
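If you supply your own .config, a quick sanity check before running the build (the path here is just a placeholder):

$ grep CONFIG_DM_DDSNAP /path/to/your/.config

It should print a line showing the option enabled (=y, or =m if the target is built as a module).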
The Zumastor project has a Gentoo overlay to simplify deployment of Zumastor using Gentoo's Portage package system. To use the overlay in an automated fashion you'll want to make sure you have the Gentoo overlay manager, layman, installed.
# emerge layman
# echo "source /usr/portage/local/layman/make.conf" >> /etc/make.conf
Next you'll want to add the Zumastor overlay directory to layman's /etc/layman/layman.cfg by updating the "overlays" variable. Note that layman's config format wants each new entry in the overlays list to be on a new line. After you have updated it, the section of the file where overlays is set will look something like:
overlays : http://www.gentoo.org/proj/en/overlays/layman-global.txt
           http://zumastor.googlecode.com/svn/trunk/gentoo/overlay/overlays.xml
You will then want to add the Zumastor overlay to layman:
# layman -f -a zumastor
The zumastor ebuilds are currently masked, so the next thing you'll need to do is to unmask them:
# echo "sys-kernel/zumastor-sources ~x86" >> /etc/portage/package.keywords # echo "sys-block/zumastor ~x86" >> /etc/portage/package.keywords
After all that preamble, we are now ready to install Zumastor and the Zumastor kernel sources:
# emerge zumastor-sources zumastor
Congratulations, you have now installed the zumastor software and kernel sources. You'll need to build a kernel as per usual, with one exception: you need to ensure that you enable the CONFIG_DM_DDSNAP kernel option when you configure your kernel. It can be found in "Device Drivers" → "Multi-device support" → "Distributed Data Snapshot target".
The Zumastor project also recommends a few options be enabled to assist with debugging: CONFIG_IKCONFIG_PROC, CONFIG_MAGIC_SYSRQ, and CONFIG_DEBUG_INFO.
We have a prebuilt Gentoo vmware image at http://zumastor.org/downloads/misc/Gentoo_VMImage.tar.7z. You'll need to unpack that image with 7zip (believe it or not, compressing with 7zip saved a lot more space than with rzip, and lrzip crashed on it. Go figure.)
Be sure to check the settings to make sure that the RAM allocation is reasonable for your system. When you look at the settings you'll notice there are two SCSI drives installed on the image. The first one contains the Gentoo image, and the second is a blank, set aside for your Zumastor datastore. You may wish to resize the second one to better suit your needs.
Power on the virtual image and select the Zumastor kernel. The root password on the image is set to "zumastor", so you should login as root and immediately change the password. You should also look at the contents of /etc/conf.d/clock to make sure that the timezone and clock settings are correct for your system. I'd also recommend syncing and updating through portage/layman, just to be sure you have the latest software.
Once you've got the image the way you want it, it is time to set up the Zumastor storage space. You'll need to use lvm to create a physical volume and volume group:
# pvcreate /dev/sdb
# vgcreate sysvg /dev/sdb
# lvcreate --size 500MB -n test sysvg
# lvcreate --size 1500MB -n test_snap sysvg
Your Gentoo image is now ready for Zumastor.
UML is potentially the fastest and easiest way to try out Zumastor, at least if you're a developer who likes working with the commandline.
There is a simple shell script, test/uml/demo.sh, which starts UML, runs a few simple tests to make sure Zumastor is basically working, and shuts UML back down. It's somewhat Ubuntu/Debian specific, but could probably be made compatible with rpm-based distros (patches gratefully accepted).
Developers who wish to contribute changes to Zumastor should make sure that demo.sh passes before and after their changes.
Before you run demo.sh, you should look through it. It starts off by doing six configuration steps, three of which use sudo to gain root privs:
As root, it installs any missing packages with apt-get.
It downloads some large files (Linux sources, UML system images) which can take quite a while on a slow connection. These files are cached in the build subdirectory at the top of the zumastor tree so they don't need to be downloaded again.
It builds UML in the directory working/linux-2.6.xxx if it's not present.
It creates a root filesystem for UML.
As root, it configures the UML root filesystem.
As root, it runs setup_network_root.sh to configure your system to allow ssh to the UML instances. For instance, it modifies /etc/hosts to include the lines
192.168.100.1 uml_1
192.168.100.2 uml_2
Finally, after all that setup, it starts running the UML tests (whew!).
If you want to do a squeaky-clean run, do "rm -rf working". Otherwise demo.sh will reuse the UML and images in that directory to save time.
You may want to do sudo true before running demo.sh just to make sure sudo has your password, so the sudo in the middle of the script doesn't prompt you and hang forever.
Also, you may want to add ", timestamp_timeout=100" to the end of the Defaults line in your /etc/sudoers file so sudo doesn't time out so quickly.
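On a stock Ubuntu install, the resulting line in /etc/sudoers (edit it with visudo) would look something like:

Defaults        env_reset, timestamp_timeout=100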
Now that you've read all of the above :-), go ahead and run "bash demo.sh" now. On my oldish Athlon 64 3000, it takes 23 minutes (plus download time) on the first run, and 10 minutes on the second run.
Once demo.sh passes, or even if it doesn't quite, you might want to start and log in to a UML instance manually. The four simple shell scripts, xssh.sh, xget.sh, xput.sh, and xstop.sh, let you log in to, get a file from, put a file on, and stop the uml instance set up by demo.sh. These are handy for poking at things manually.
Note that these tests are not the ones run by our continuous build and test system; see below for those.
There is experimental support for using UML on Debian / Ubuntu systems to run the same tests that our continuous build and test system runs; see http://zumastor.googlecode.com/svn/trunk/cbtb/uml/README. This modifies your system, for instance, by installing Apache to act as a local debian package repository for use by the UML instances, so it may not be everybody's cup of tea. But it does get the time to test small changes to the cbtb test scripts down below ten minutes, which should help people developing new automated tests — at least if they're willing to run a particular version of Ubuntu.
These tests are very touchy. It may take you hours to get them working, or they may run on the first try. They use sudo and ssh a lot, so make sure sudo and ssh are working well. Ask on IRC for help if you run into trouble.
Steps to follow:
0) Set up ccache if you want fast kernel compiles after the first run (assumes /usr/local/bin is early on your PATH):
$ sudo apt-get install ccache
$ sudo ln -sf /usr/bin/ccache /usr/local/bin/gcc
And if your home directory is on NFS, you have to point to a local drive (else the cache will be too slow), e.g.:
$ mkdir /data/$LOGNAME/ccache.dir
$ ln -s /data/$LOGNAME/ccache.dir ~/.ccache
1) Run the smoke tests
$ cd cbtb/uml
$ bash smoke-test.sh
Sounds simple, doesn't it? Yet there are dozens of problems you might hit. Here are the ones I hit when I tried it, in order:
The tests don't support Ubuntu Gutsy (I switched to Dapper; Hardy may also work)
ssh prompted for password, but I didn't notice
I ran the tests with sh instead of bash (we check for that now)
I mistakenly ran the tests as root (don't do that)
/dev/net/tun had the wrong permissions, so the script couldn't contact the UML instances (the script is now much more aggressive about relaxing permissions on that file)
The script to set up the UML environment behaves very badly if there are any network errors while accessing the Ubuntu repository (we now set up a proxy server, and download an initial tarball, to deal with that — but those add failure modes of their own)
The script to set up the proxy (../host-setup/proxy.sh) had a bad URL for the ubuntu server, so I had to delete /etc/apache2/sites-enabled/proxy and run again.
I accidentally ran svn update in a subdirectory rather than the top of the tree, and ended up with a mismatched tree. The script warns you about a split repository, but didn't say quite what to do about it. (The script is clearer now.)
If the UML instance stops with a "Terminated" message, and the kernel log for the UML instance has an error like "Bus error - the host /dev/shm or /tmp mount likely just ran out of space," then you probably need to increase the size of /dev/shm on the host system. Run "mount | grep shm" to see what the size option is currently set to, then run "mount -o remount,size=NEWSIZE /dev/shm" to set it to NEWSIZE.
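For example, to check the current size and grow /dev/shm to 1GB on the host (1G is only an illustrative value; pick whatever your tests need):

$ mount | grep shm
$ sudo mount -o remount,size=1G /dev/shm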
Other notes:
You may append --all to smoke-test.sh's command line if you wish to run all cbtb tests instead of just the smoke tests. There is also a --no-fail flag to skip tests that are expected to fail.
When creating a new snapshot store on an existing device from a previous test run, be sure to zero the snapshot store by e.g. giving the -i option to "zumastor define volume". Otherwise creating the first snapshot may fail with "snapshot already exists".
Many zumastor commands are not synchronous. For instance, "zumastor snapshot" doesn't take effect right away; it can take several seconds for the snapshot to be taken and mounted. These nonsynchronous commands can't show any error messages directly, so you have to look in the log files /var/log/zumastor/VOLNAME/{master,server} for errors.
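For example, to watch for errors on a volume named zumatest while a snapshot or replication is in flight:

# tail -f /var/log/zumastor/zumatest/master /var/log/zumastor/zumatest/server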
Although Zumastor doesn't require lvm2, many of these examples use it, so install it now if it's not already installed, and run the pvcreate command with no arguments just to check whether it's installed and ready. (This example is for Debian / Ubuntu, but it's similar for Fedora et al. If you don't have /etc/init.d/lvm, don't worry, just skip that command.)
$ sudo apt-get install lvm2
$ sudo /etc/init.d/lvm start
$ pvcreate
If pvcreate complains No program "pvcreate" found for your current version of LVM, you probably forgot to start the lvm service.
$ zumastor
Usage: /bin/zumastor {define|forget|start|stop|snapshot|replicate|status} [<subarguments>...]
$ /etc/init.d/zumastor status
zumastor is running
use 'zumastor status' for more info...
$ /bin/zumastor status
RUN STATUS:
agents
cron
mount
running
servers
snapshot
Note: this doesn't yet complain properly if you forgot to reboot into a kernel that supports zumastor. (It will later, when you try to define a volume.)
The output of /etc/init.d/zumastor status is easy to understand, but the more verbose output of /bin/zumastor status needs some explanation.
First, if you stop zumastor with /etc/init.d/zumastor stop, the line running won't show up in the output of /bin/zumastor status.
Second, each of the other output lines you see there is a category. Once volumes are mounted, there will be more lines underneath some of the categories; see below.
Before you go messing with real devices, it's useful to experiment with plain old files dressed up to look like devices via the loopback device.
Annoyingly, the loopback device naming scheme changed at some point; on some systems it's /dev/loop/0, on others it's /dev/loop0. You may need to adjust the path to suit your version of Linux.
Run the following commands as root (e.g. with sudo sh):
dd if=/dev/zero of=/media/vg.img bs=1024 count=220000
losetup -d /dev/loop0 > /dev/null 2>&1 || true
losetup /dev/loop0 /media/vg.img
pvcreate /dev/loop0
vgcreate sysvg /dev/loop0
lvcreate --name test --size 100MB sysvg
mkfs.ext3 /dev/sysvg/test
lvcreate --name test_snap --size 100MB sysvg
zumastor define volume zumatest /dev/sysvg/test /dev/sysvg/test_snap --initialize
zumastor define master zumatest
If this complains
/bin/zumastor[889]: error: dmsetup returned 1
create failed
/bin/zumastor: define volume 'zumatest' failed
you may have forgotten to reboot into the Zumastor-enabled kernel you installed earlier.
If lvcreate complains
/proc/misc: No entry for device-mapper found
Is device-mapper driver missing from kernel?
you may be hitting Ubuntu bug 178889; try adding dm_mod to /etc/modules or do sudo modprobe dm_mod manually.
Verify that zumastor has mounted the volume by doing
$ df | grep zumatest
Starting with the previous example, reboot the virtual machine and log back in. Notice that df doesn't show any Zumastor volumes mounted. That's because when Zumastor started at boot time, it couldn't see the loopback device, because we didn't arrange for it to be set up automatically at boot time. (This won't happen with real devices!) So do it manually now, then tell LVM about it, then tell Zumastor about it:
$ sudo sh
# losetup /dev/loop0 /media/vg.img
# vgchange -ay
# /etc/init.d/zumastor restart
# df | grep zumatest
You should now see the expected set of volumes and snapshots mounted.
If vgchange -ay doesn't detect your volume group, do rm /etc/lvm/.cache and try again.
$ sudo sh
# zumastor forget volume zumatest
# lvremove sysvg
# losetup -d /dev/loop0
# rm /media/vg.img
First, find or create an big empty partition. (If you're using Dan Kegel's gutsy vmware image, the virtual hard drive has lots of unpartitioned space; use fdisk /dev/sda to create new partition 3 with type 8e (LVM), then reboot and make sure /dev/sda3 now exists.)
Here's how to use LVM to put a volume and its snapshot store onto it, assuming the empty partition is named /dev/sda3 and has size 10GB:
$ sudo sh
# pvcreate /dev/sda3
  Physical volume "/dev/sda3" successfully created
# vgcreate sysvg /dev/sda3
  Volume group "sysvg" successfully created
# lvcreate --name test --size 5G sysvg
  Logical volume "test" created
# lvcreate --name test_snap --size 4G sysvg
  Logical volume "test_snap" created
# zumastor define volume zumatest /dev/sysvg/test /dev/sysvg/test_snap --initialize
Wed Nov 21 19:32:58 2007: [3622] snap_server_setup: ddsnapd bound to socket /var/run/zumastor/servers/zumatest pid = 3623
Successfully created and initialized volume 'zumatest'.
# mkfs.ext3 /dev/mapper/zumatest
# zumastor define master zumatest
The zumastor define master command causes it to be mounted on /var/run/zumastor/mount/zumatest.
You now have a mounted zumastorized filesystem.
Note that you can run mkfs either before or after zumastor define volume, i.e. you can zumastor-ize an existing volume if you like, or you can mkfs a zumastor-ized volume. (You can even do silly things like taking snapshots during the middle of mkfs.)
Note: If you are using XFS, you will need to add --mountopts nouuid to your zumastor define volume line.
To set up hourly snapshots on the volume from the previous example, do:
# zumastor define schedule zumatest --hourly 24
The --hourly 24 specifies that we want to keep the 24 most recent hourly snapshots (a day's worth). A snapshot will be created once an hour via the zumastor hourly cron script. We can accelerate this by manually telling zumastor to take an "hourly" snapshot right now, with the zumastor snapshot command:
# zumastor snapshot zumatest hourly
# df | grep zumatest
/dev/mapper/zumatest     5160576 141440 4756992   3% /var/run/zumastor/mount/zumatest
/dev/mapper/zumatest(0)  5160576 141440 4756992   3% /var/run/zumastor/snapshot/zumatest/2008.02.18-19.10.14
Notice that the snapshot is mounted for you in a subdirectory of /var/run/zumastor/snapshot/zumatest/ named after the date and time of when you took the snapshot.
Now look at the directory where all the snapshots are mounted:
# ls -l /var/run/zumastor/snapshot/zumatest/
drwxr-xr-x 3 root root 4096 2008-02-18 18:51 2008.02.18-19.10.14
lrwxrwxrwx 1 root root   19 2008-02-18 19:20 hourly.0 -> 2008.02.18-19.10.14
The hourly.0 symlink will "always" point to the most recent hourly snapshot. This is handy when writing scripts, and mimics common practice on other systems.
If you do some IO on the filesystem, the number of blocks used by the origin volume (/var/run/zumastor/mount/zumatest) should increase, but the number of blocks used by any existing snapshot should not:
# echo 'This is the first file!' > /var/run/zumastor/mount/zumatest/foo.txt
# df | grep zumatest
/dev/mapper/zumatest     5160576 141444 4756988   3% /var/run/zumastor/mount/zumatest
/dev/mapper/zumatest(0)  5160576 141440 4756992   3% /var/run/zumastor/snapshot/zumatest/2008.02.18-19.10.14
As expected, the origin's block counts change a bit (by four in this example), but the snapshot's do not.
Now take another snapshot, and look at df again:
# zumastor snapshot zumatest hourly
# df | grep zumatest
/dev/mapper/zumatest     5160576 141444 4756988   3% /var/run/zumastor/mount/zumatest
/dev/mapper/zumatest(0)  5160576 141440 4756992   3% /var/run/zumastor/snapshot/zumatest/2008.02.18-19.10.14
/dev/mapper/zumatest(2)  5160576 141444 4756988   3% /var/run/zumastor/snapshot/zumatest/2008.02.18-19.15.01
Notice that the first snapshot's volume was named zumatest(0), but the new one's is named zumatest(2). (Why the gap? Zumastor uses another snapshot internally during replication. Yes, this is a bit of a rough edge.)
Note: If you want to export Zumastor snapshots via NFS or Samba, you need to use the --export or --samba options with the zumastor define master command. See the sections "Exporting a Zumastor volume via NFS" and "Exporting a Zumastor volume via Samba" for such examples.
LVM can of course also do snapshots. For instance, to back up a filesystem at an instant using an LVM snapshot and tar, you might do:
# lvcreate --name backup --size 100M --snapshot sysvg/test
# mkdir /mnt/backup
# mount /dev/sysvg/backup /mnt/backup
# tar -czf /tmp/backup.tgz -C /mnt/backup .
# umount /mnt/backup
# lvremove --force sysvg/backup
Zumastor simplifies this a bit in that it doesn't require creating a new LVM logical volume for each snapshot, and it handles the mounting for you. For instance:
# zumastor snapshot zumatest hourly
# tar -czf /tmp/backup.tgz -C /var/run/zumastor/snapshot/zumatest/hourly.0 .
The snapshot will be garbage-collected later, no need to remove it, no need for manual mounting or unmounting.
See Replication Latency Benchmark below for a script that compares performance of zumastor replication with rsync.
Assuming you've set up a volume to keep 24 hourly snapshots around with the command zumastor define schedule zumatest --hourly 24 as in the last example, what happens when the 25th snapshot is taken?
To find out, you can run a loop that takes lots of snapshots. Run it, then use df to see how many snapshots are around afterwards:
# num=0
# while test $num -lt 50
  do
    zumastor snapshot zumatest hourly
    num=`expr $num + 1`
    echo $num
  done
# df -P | grep zumatest
You should see 24 snapshots (the limit set when we did the zumastor define schedule) plus the origin volume. Each time you run zumastor snapshot … it creates a new snapshot, and deletes any that are too old.
In this section, we will be working with two machines, original (the main machine) and replica (where data is replicated to).
Note: zumastor requires you to refer to each machine by the string returned by uname -n. You can't use numerical IP addresses! This is bug 54.
For example, if in DNS your machines are original.example.com and replica.example.com but uname -n returns only original and replica, you must tell Zumastor the hosts are original and replica, not the FQDNs of the machines.
If your organization uses the .local TLD, you might run into some issues with mDNS. See Avahi and Unicast Domains .local for more information.
As of Zumastor 0.7, remote replication still requires that both the original and replica machines trust each other, and allow root to remotely log in to each other. This is bug 40 in the zumastor issue tracker. Eventually Zumastor's networking will be redone to allow replication without requiring both machines to trust each other so strongly.
If you use Ubuntu or any other distribution which disables root login by default, set a password for the root user to enable this account:
$ sudo passwd root
Make sure the SSH daemon is installed and PermitRootLogin is set to yes in your sshd_config.
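On Debian/Ubuntu that typically means something like the following (file location and restart command can vary by distribution and release):

# apt-get install openssh-server
(edit /etc/ssh/sshd_config so it contains the line: PermitRootLogin yes)
# /etc/init.d/ssh restart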
Please note that if you get a root shell by using sudo -s, this step will fail because sudo -s by default preserves your environment (including your home directory). Either use su (if you have already set a password for root) or sudo -s -H.
On each machine, run ssh-keygen as root and hit enter to all three questions, creating phraseless keys:
root@original:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
7a:81:de:d4:71:18:32:f0:4e:2c:ee:7d:e4:33:c1:5d root@original
root@replica:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
4a:fe:e7:39:ec:eb:ab:39:e8:b3:29:09:b7:77:02:02 root@replica
Append root's .ssh/id_rsa.pub file from each machine to .ssh/authorized_keys on the other. This authorizes root on each machine to connect without a password to the other. For instance:
root@original:~# cat /root/.ssh/id_rsa.pub | ssh root@replica "cat >> /root/.ssh/authorized_keys"
root@replica:~# cat /root/.ssh/id_rsa.pub | ssh root@original "cat >> /root/.ssh/authorized_keys"
Using ssh to do this copy also happens to put each machine in the other's /root/.ssh/known_hosts file. This is important for later; without it, replication will fail.
Finally, verify that root can ssh from replica to original, and vice versa, without any prompting.
(Bidirectional root trust between machines is scary, isn't it? As mentioned above, we'll get rid of this eventually).
This example uses volume zumatest just like the previous examples, with one minor difference: to make the first replication efficient, one should zero the partition before creating the filesystem. You can do this with /bin/dd; zumastor 0.7 also has a handy --zero option that does this. (Yes, this can be slow on a large volume, but it can save time when you don't have an infinitely fast network.)
When following this example on vmware, you probably want to use small volumes, e.g. 500M rather than 5G, unless you have lots of time to kill.
For instance:
# pvcreate /dev/sda3
# vgcreate sysvg /dev/sda3
# lvcreate --name test --size 5G sysvg
# lvcreate --name test_snap --size 4G sysvg
# dd if=/dev/zero of=/dev/sysvg/test bs=1024k count=5120
# mkfs.ext3 /dev/sysvg/test
# zumastor define volume zumatest /dev/sysvg/test /dev/sysvg/test_snap --initialize
Tell original to be ready to replicate to replica:
root@original:~# zumastor define target zumatest replica.example.com
Replications have to be triggered somehow. There are three ways to do this:
automatically, with a --period NUMSECONDS option on `zumastor define target' on original (see the example after this list)
automatically, with a --period NUMSECONDS option on `zumastor define source' on replica
manually, with `zumastor replicate'
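For example, a sketch of the first approach, pushing from original every ten minutes (600 seconds is an arbitrary choice here):

root@original:~# zumastor define target zumatest replica.example.com --period 600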
Now make sure the new relationship is reflected in zumastor status:
root@original:~# zumastor status --usage
...
`-- targets/
    `-- replica.example.com/
        |-- period
        `-- trigger
...
Zumastor doesn't create volumes for you, so you have to do it. The volumes should be exactly the same size on the target as on the master. e.g.
# pvcreate /dev/sda3
# vgcreate sysvg /dev/sda3
# lvcreate --name test --size 5G sysvg
# lvcreate --name test_snap --size 4G sysvg
root@replica:~# zumastor define volume zumatest /dev/sysvg/test /dev/sysvg/test_snap --initialize
root@replica:~# zumastor define source zumatest original.example.com:11235 --period 600
Please note you do not need to create a filesystem (i.e. run mkfs) on the target machine.
Running zumastor status --usage on the target should show a hostname, replication port number, and period entry under source for volume zumatest.
You can use the zumastor define schedule command to keep periodic snapshots on the target. By default, periodic snapshots are not mounted on the target. You can use the --mount-snap option of the command zumastor define source to have snapshots mounted and symlinked, in the same way as those mounted on the master.
root@replica:~# zumastor define source zumatest original.example.com:11235 --period 600 --mount-snap
root@replica:~# zumastor define schedule zumatest --hourly 24
This is important; without it, no automatic replication happens:
root@replica:~# zumastor start source zumatest
Alternatively, you may kick off replication manually each time from the master.
root@original:~# zumastor replicate zumatest replica.example.com
Once initial replication is started, ddsnap processes should begin consuming CPU cycles and moving tens of megabits per second between the nodes. Initially, the entire origin device will need to be moved, which can take several minutes. (5 minutes for a 500MB volume on vmware, less on real hardware.)
When replication is complete, df on the slave should show the same /var/run/zumastor/mount/zumatest volume locally mounted.
root@original:~# date >> /var/run/zumastor/mount/zumatest/testfile
Wait 30-60 seconds for the next replication cycle to complete.
root@replica:~# cat /var/run/zumastor/mount/zumatest/testfile
If your current directory is inside the volume in question, you won't see any new snapshots; you should watch from outside the volume if you want to see incoming snapshots.
If the same file is there on the target (replica), replication is working.
Here's a handy script that illustrates all the commands needed to set up small test volumes and kick off rapid replication between two machines named vm1 and vm2. (It also rather rudely resets zumastor and clears its logs, but that's handy when you're trying to learn how things work.) It assumes you've already set up a volume group named sysvg. On my 1GHz system, it takes about five minutes to do the initial replication, but seems to replicate quickly after that.
#!/bin/sh
cat > restart-test.sh <<_EOF_
set -x
# Clean up after last run, if any.
zumastor forget volume zumatest > /dev/null 2>&1
lvremove -f sysvg/test > /dev/null 2>&1
lvremove -f sysvg/test_snap > /dev/null 2>&1
# Clear logs. Wouldn't do this in production! Just for testing.
/etc/init.d/zumastor stop
rm -rf /var/log/zumastor /var/run/zumastor
/etc/init.d/zumastor start
lvcreate --name test --size 500M sysvg
lvcreate --name test_snap --size 400M sysvg
dd if=/dev/zero of=/dev/sysvg/test bs=500k count=1024
_EOF_
set -x
for host in vm1 vm2
do
  scp restart-test.sh root@${host}:
  ssh root@${host} sh restart-test.sh
done
ssh root@vm1 "mkfs.ext3 /dev/sysvg/test ; zumastor define volume zumatest /dev/sysvg/test /dev/sysvg/test_snap --initialize ; zumastor define master zumatest ; zumastor define target zumatest vm2"
ssh root@vm2 "zumastor define volume zumatest /dev/sysvg/test /dev/sysvg/test_snap --initialize ; zumastor define source zumatest vm1:11235 --period 5 ; zumastor start source zumatest"
This section illustrates how to set up chained replication on multiple machines, e.g., original replicates to replica1 that replicates to replica2.
Set up root login on all of the three machines, see section "Enable root login" above.
Set up SSH authorizations between original and replica1, and replica1 and replica2, as described in section "SSH authorization" above.
Create volume on master, as described in section "Create volume on master".
Define target on master, as described in section "Define target on master".
root@original:~# zumastor define target zumatest replica1.example.com
Create volume on replica1, as described in section "Create volume on target".
Define volume and configure source on replica1, as described in section "Define volume and configure source on target".
root@replica1:~# zumastor define volume zumatest /dev/sysvg/test /dev/sysvg/test_snap --initialize
root@replica1:~# zumastor define source zumatest original.example.com --period 600
Define target on replica1.
root@replica1:~# zumastor define target zumatest replica2.example.com
Running zumastor status on replica1 shows that the machine is both the target and the source of the volume replication.
root@replica1:~# zumastor status
...
source
  hostname: original.example.com
  name: zumatest
  period: 600
targets
  replica2.example.com
    name: zumatest
...
Create volume on replica2, as described in section "Create volume on target".
Define volume and configure source on replica2 as replica1.
root@replica2:~# zumastor define volume zumatest /dev/sysvg/test /dev/sysvg/test_snap --initialize
root@replica2:~# zumastor define source zumatest replica1.example.com --period 600
Start replication. You can set up automatic chained replication
root@replica1:~# zumastor start source zumatest
root@replica2:~# zumastor start source zumatest
or start replication manually
root@original:~# zumastor replicate zumatest replica1.example.com
root@replica1:~# zumastor replicate zumatest replica2.example.com
Verify replication.
root@original:~# date >> /var/run/zumastor/mount/zumatest/testfile
Wait for the next replication cycle to complete on both replicas.
root@replica2:~# cat /var/run/zumastor/mount/zumatest/testfile
Install the NFS server packages:
# aptitude install nfs-common nfs-kernel-server
Add the following lines to the NFS exports file /etc/exports (where zumatest is the Zumastor volume to be exported):
/var/run/zumastor/mount/zumatest    *(rw,fsid=0,nohide,no_subtree_check,crossmnt)
/var/run/zumastor/snapshot/zumatest *(ro,fsid=1,nohide,no_subtree_check,crossmnt)
The “fsid=xxx” option is required to ensure that the NFS server uses the same file handle after a restart or remount of an exported volume. If it is not specified, the file handle will change when the snapshot is remounted and the client will receive a stale file handle error. Any 32-bit number can be used for the fsid value, but it must be unique across all exported filesystems.
The “crossmnt” option is required to allow client access to exported snapshot directories mounted under the Zumastor volume; see below.
If you want to also export zumastor snapshots, you'll need to use the --export option with the command zumastor define master described in section "Mount a new volume and set up periodic snapshots" above.
# zumastor define master zumatest --export
You'll also need to modify your system's NFS startup script to reread /etc/exports on restart. For Ubuntu, we provide a patch that does this for you:
# cd /etc/init.d/
# wget http://zumastor.googlecode.com/svn/trunk/zumastor/nfs-patches/debian/nfs-kernel-server.patch
# patch -p0 < nfs-kernel-server.patch
# rm -f nfs-kernel-server.patch
To export zumastor volumes from a different path, you can either symlink the zumastor volumes to that path or use the --mountpoint and --snappath options with the command zumastor define volume described in section "Origin and Snapshot mount points" above.
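The symlink approach is the simpler of the two; a hypothetical example exporting the volume as /exports/zumatest would be to create the link and then list that path in /etc/exports instead:

# mkdir -p /exports
# ln -s /var/run/zumastor/mount/zumatest /exports/zumatest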
First, make sure you have used the --export option with the zumastor define master command and modified your NFS server's startup script as described in the section "Exporting a Zumastor volume via NFS" above.
Then mount the Zumastor volume as normal on an NFS client, for instance:
# mount -t nfs nfs-server:/var/run/zumastor/snapshot/zumatest /mnt
# ls /mnt
There will be a number of snapshot directories under /mnt. Each of these directories is named for the time at which that snapshot was taken and symlinked to the corresponding snapshot rotation (e.g., hourly.0, hourly.1, daily.0, etc). To recover a file, find it in the appropriate snapshot directory and copy it to the desired location.
Note: A future revision of Zumastor may add options to make the snapshots visible inside each directory, which would let you access them from any place under the volume mount point.
Here is an example of setting up Samba on Zumastor using the USER authentication flavor.
Install the Samba server packages:
# aptitude install samba
Edit file /etc/samba/smb.conf, configuring the “workgroup” option according to your domain setup, commenting out the “obey pam restrictions = yes” option, and adding the following lines (where zumatest is the Zumastor volume to be exported and homevol is the name of the exported Samba volume):
[homevol]
    path = /var/run/zumastor/mount/zumatest
    comment = test volume
    browseable = yes
    case sensitive = no
    read only = no
    vfs objects = shadow_copy
The “vfs objects = shadow_copy” option is required for accessing snapshot directories via CIFS's previous version interface; see below.
Set up smbpasswd for the volume homevol. Please replace testuser with the user name you want to use for accessing the exported Samba volume from a client. You don't need the useradd command if testuser already exists on the Samba server.
# useradd testuser
# smbpasswd -a testuser
New SMB password: your-secure-passwd
Retype new SMB password: your-secure-passwd
If you want to also export zumastor snapshots via CIFS's previous version interface, you'll need to use the --samba option with the command zumastor define master described in section "Mount a new volume and set up periodic snapshots" above.
# zumastor define master zumatest --samba
First, make sure you have used the --samba option with the zumastor define master command as described in the section "Exporting a Zumastor volume via Samba" above. Then you can recover files from a Windows Samba client as follows.
Open Windows Explorer and enter the Samba server name followed by the exported volume name in the explorer bar, e.g., \\samba-server\homevol\. Enter testuser and your-secure-passwd in the pop-up authentication window, where testuser and your-secure-passwd are the username and password you have set up on the Samba server, as described in section "Exporting a Zumastor volume via Samba" above.
Go to the file or directory you want to recover. Right-click it and select Properties. You will see the Previous Versions tab at the top. Select it and you will see a list of snapshots in the window. Select a snapshot. Then you can view, copy, or restore any file from that snapshot.
Suppose you have a Zumastor volume zumatest that uses the /dev/sysvg/test LVM device as the origin and the /dev/sysvg/test_snap LVM device as the snapshot store. Here are examples of resizing the origin and snapshot devices of a Zumastor volume that has an EXT3 file system running on top of it.
First, make sure you have shut down all of the services running on top of the Zumastor volume, such as NFS or Samba. You can use lsof to check if any file in /var/run/zumastor/mount/zumatest is still open:
# sync
# lsof | grep /var/run/zumastor/mount/zumatest
Resize the LVM device of the origin volume to the new size. Here newsize may be suffixed by a unit designator such as K, M, or G.
# lvresize /dev/sysvg/test -L newsize
Resize the zumastor origin volume with the zumastor resize --origin command. Then resize the EXT3 file system with the e2fsck/resize2fs utilities from the e2fsprogs debian package (you need to install the package first if it is not already installed). Here, we need to first stop zumastor master, which will unmount the origin volume as well as all of the existing snapshots. If the volume is being replicated to another server, you also need to stop replication (on both the original and the replica host). After the EXT3 resizing, we restart zumastor master, which will mount the origin volume with the new size.
root@replica:~# zumastor stop source zumatest
root@original:~# zumastor stop target zumatest replica
root@original:~# zumastor stop master zumatest
root@original:~# zumastor resize zumatest --origin newsize
root@original:~# e2fsck -f /dev/mapper/zumatest
root@original:~# resize2fs /dev/mapper/zumatest newsize
root@original:~# zumastor start master zumatest
root@original:~# zumastor start target zumatest replica
root@replica:~# zumastor start source zumatest
Now check the size of the Zumastor volume with df | grep zumatest. You will see that the size of the origin volume has changed but the size of all the existing snapshots is unchanged.
The steps to shrink the origin device of a Zumastor volume are similar to those for enlarging the origin device, but need to be run in a different order.
Shut down all of the services running on top of the Zumastor volume.
Stop zumastor master. Then shrink the EXT3 file system.
# zumastor stop master zumatest
# e2fsck -f /dev/mapper/zumatest
# resize2fs /dev/mapper/zumatest newsize
Shrink the zumastor origin device. Before running the zumastor resize --origin command, we need to first copy all of the data on the space being removed out to the snapshot store. Otherwise, you may get access errors when reading or writing an existing snapshot. The copyout can be done by overwriting the space being removed with the dd command. E.g., when shrinking the origin device from 4G to 1G, you can use dd if=/dev/zero of=/dev/mapper/zumatest bs=64k seek=16k count=48k. Depending on the old size of your origin device, this step may take a long time. After that, restart zumastor master.
# dd if=/dev/zero of=/dev/mapper/zumatest bs=unit seek=newsize/unit count=(oldsize-newsize)/unit
# sync
# zumastor resize zumatest --origin newsize
# zumastor start master zumatest
Now check the size of the Zumastor volume with df | grep zumatest. You should see that the size of the origin volume has changed but the size of all the existing snapshots is unchanged.
Now it is safe to shrink the origin LVM device.
# lvresize /dev/sysvg/test -L newsize
To enlarge the snapshot store of a Zumastor volume, run the zumastor resize --snapshot command after enlarging the snapshot lvm device.
# lvresize /dev/sysvg/test_snap -L newsize
# zumastor resize zumatest --snapshot newsize
After this, check the new size of the Zumastor snapshot store with the zumastor status --usage command.
Shrinking the snapshot store of a Zumastor volume is not difficult, but must be done with care. First, run the zumastor resize --snapshot command to shrink the Zumastor snapshot store. (The command may fail if the existing snapshot store does not have enough contiguous free space. In this case, try a larger value.) If the zumastor resize --snapshot command succeeds, you can then run the lvresize command to shrink the snapshot lvm device.
# zumastor resize zumatest --snapshot newsize
# lvresize /dev/sysvg/test_snap -L newsize
After this, check the new size of the Zumastor snapshot store with the zumastor status --usage command.
Note: You can follow similar procedures when resizing a Zumastor volume with a different file system running on top of it. For example, you can replace the e2fsck/resize2fs commands with resize_reiserfs -f /dev/mapper/zumatest to resize a reiserfs file system. If you want to resize a Zumastor volume that uses raw partitions instead of lvm devices, you can use fdisk instead of lvresize to resize those partitions.
Take a look at /var/run/zumastor/snapshot/zumatest/ and decide which snapshot you want to revert to. Suppose you want to revert to a snapshot that was created two days ago. You can find its corresponding snapshot number as follows:
# zumastor status
...
snapshot
...
daily.2 -> 2008.03.10-09.09.40
...
# mount | grep mapper | grep 2008.03.10-09.09.40
/dev/mapper/zumatest(xxx) on /var/run/zumastor/snapshot/zumatest/2008.03.10-09.09.40 type ext3 (ro)
Here, xxx is the snapshot number you want to revert to.
Stop all of the services running on top of the zumastor volume and then stop zumastor master.
# zumastor stop master zumatest
Revert the zumastor volume to snapshot xxx.
# zumastor revert zumatest xxx
Restart zumastor master.
# zumastor start master zumatest
# ls /var/run/zumastor/mount/zumatest
You will see that the zumastor volume has changed back to the old version.
First off, cd into test/uml.
Second, if you haven't already, run demo.sh to make sure UML is set up and working.
Then you can easily start and log in to a UML instance by running ./xssh.sh.
Rather than using LVM, use the devices created by demo.sh, /dev/ubdb and /dev/ubdc. For instance, "zumastor define volume -i zumatest /dev/ubdb /dev/ubdc".
We provide several test and benchmark tools that you may want to run before using Zumastor to host your critical data. The benchmark results depend a lot on the underlying hardware. E.g., with a single-spindle disk, you would expect about a fivefold slowdown in write throughput, but with a ramdisk, you may observe no performance overhead from Zumastor even when a number of snapshots have been taken.
We recommend that users first try a large volume copy test to make sure that Zumastor works properly for the volume size you intend to use. A simple test script is provided that copies a large volume to an exported zumastor volume via nfs and at the same time performs hourly replication between the zumastor nfs server and a backup server. The script is located at http://zumastor.googlecode.com/svn/trunk/test/large_volume_copy_test.sh. Please take a look at the script and specify the configuration parameters according to your setup.
The nsnaps benchmark provides performance data relating the number of snapshots taken to the time needed to untar a Linux kernel source tree and sync. The test script is located at http://zumastor.googlecode.com/svn/trunk/benchmarks/nsnaps/. Please take a look at the README file included in that directory before running the test.
Bonnie is a program to test hard drive and file system performance. You can obtain the Bonnie Debian package with the apt-get install bonnie command. After that, you can run the Bonnie benchmark with the wrapper we provide at http://zumastor.googlecode.com/svn/trunk/benchmarks/bonnie.
Fstress is a benchmark that is used to test NFS server performance, i.e., average latency for a mixed set of client operations at different load levels. The source code provided from our repository, http://zumastor.googlecode.com/svn/trunk/benchmarks/fstress, was originally obtained from Duke and includes several small changes to port the code to the latest Linux systems. Please take a look at the README file included in that directory for instructions and examples on running the benchmark.
benchmarks/replatbench.sh is a trivial synthetic replication benchmark. It measures how long it takes to replicate filesystems containing 1 to 100 directories, each containing 1MB of small or large files, with zumastor, rsync, and rsync plus lvm snapshots. It seems to show that Zumastor as of 0.7 is faster than rsync at replicating many, many small unchanged files, but slower in all other cases. (In Zumastor's defense, we have not yet spent any time optimizing Zumastor's write performance.)