cat /var/run/zumastor/mount/testvol\(2\)/testfile
cat /var/run/zumastor/mount/testvol\(4\)/testfile
cat /var/run/zumastor/mount/testvol/testfile
Remote replication
Install both an origin and a slave node. These will be
referred to as lab1 and lab2 below.
ssh authorization
On each machine, run ssh-keygen
as root. Copy the
.ssh/id_dsa.pub file from each account to .ssh/authorized_keys on
the other. This authorizes root on each machine to connect
without a password to the other. More restricted access may be
used in actual deployment.
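For example (the DSA key type is chosen only to match the id_dsa.pub
filename above, and the copy assumes root's .ssh directory already
exists on the other machine):
root@lab1:~#
ssh-keygen -t dsa
root@lab1:~#
cat .ssh/id_dsa.pub | ssh root@lab2.example.com 'cat >> .ssh/authorized_keys'
Repeat the equivalent commands in the other direction on lab2.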
Define target on master
root@lab1:~#
zumastor define target testvol lab2.example.com:11235 -p 30
This tells lab1 to replicate to port 11235 on lab2 every 30 seconds.
If the period is omitted, replication will not occur automatically.
zumastor status --usage
...
...
`-- targets/
    `-- lab2.example.com/
        |-- period
        |-- port
        `-- trigger|
...
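To replicate only on demand, omit the period when defining the target;
replication then runs only when triggered manually from the master
(see Start replication below):
root@lab1:~#
zumastor define target testvol lab2.example.com:11235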
Define volume and configure source on target
root@lab2:~#
zumastor define volume testvol /dev/sysvg/test /dev/sysvg/test_snap --initialize
root@lab2:~#
zumastor define source testvol lab1.example.com --period 600
zumastor status --usage
VOLUME testvol:
Data chunk size: 16384
Used data: 0
Free data: 0
Metadata chunk size: 16384
Used metadata: 56
Free metadata: 2621384
Origin size: 21474836480
Write density: 0
Creation time: Wed May 16 10:28:27 2007
Snap            Creation time           Chunks  Unshared  Shared
totals                                       0         0       0
Status: running
Configuration:
/var/lib/zumastor/volumes/testvol
|-- device/
|   |-- origin -> /dev/sysvg/test
|   `-- snapstore -> /dev/sysvg/test_snap
|-- source/
|   |-- hostname
|   `-- period
`-- targets/
RUN STATUS:
/var/run/zumastor
|-- agents/
|   `-- testvol=
|-- cron/
|-- mount/
|-- running
`-- servers/
    `-- testvol=
Start replication
root@lab2:~#
zumastor start source testvol
Alternatively, you may kick off replication manually each time from the master.
root@lab1:~#
zumastor replicate testvol lab2.example.com
Once initial replication is started, ddsnap
processes should begin consuming CPU cycles and moving tens of
megabits per second between the nodes. Initially, the entire
origin device will need to be moved, so wait 15 minutes before
looking on the target for snapshots. When replication is
complete, df on the slave should show the same
/var/run/zumastor/mount/testvol volume locally mounted.
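For example, on the slave (the -h flag is only for readability, and the
exact sizes shown will depend on your volumes):
root@lab2:~#
df -h /var/run/zumastor/mount/testvol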
Verify replication
root@lab1:~#
date >> /var/run/zumastor/mount/testvol/testfile
Wait 30-60 seconds for the next replication cycle to complete.
root@lab2:~#
cat /var/run/zumastor/mount/testvol/testfile
You may need to leave the current directory to see incoming
changes. If the same file is there on the slave (lab2),
replication is working.
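If you prefer to poll for the change as it arrives, something like the
following also works (assuming the standard watch utility is installed;
the cd / simply moves out of the mounted directory first):
root@lab2:~#
cd / && watch -n 5 cat /var/run/zumastor/mount/testvol/testfile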
Stopping zumastor
If a zumastor volume is exported via NFS, the kernel NFS server
must be stopped before zumastor is stopped so that the volume can
be unmounted. Due to what appears to be a bug, nfs-common also
needs to be stopped in some cases.
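For example, on a Debian or Ubuntu system where the packages provide
the usual init scripts (the zumastor script name here is an assumption
based on the package name), stop the services in this order on the
node exporting the volume:
root@lab1:~#
/etc/init.d/nfs-kernel-server stop
root@lab1:~#
/etc/init.d/nfs-common stop
root@lab1:~#
/etc/init.d/zumastor stop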