Zumastor design and implementation notes
Snapshots versus the origin.
When a snapshot is taken, it is empty; all reads
and writes get passed through to the origin. When an origin write
happens, the affected chunk or chunks must be copied to any extant
snapshots. (This chunk is referred to as an "exception" to
the rule that all chunks belong to the origin.) The make_unique()
function checks whether the chunk needs to be copied out, does so if
necessary, and returns an indication of whether that happened.
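As a rough illustration, that decision reduces to the check below. This is only a self-contained C sketch of the logic just described; the function name and the in-memory sharing table are stand-ins for ddsnapd's actual exception tracking, not its code.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NCHUNKS 16

    typedef uint64_t chunk_t;

    /* true once a chunk has been copied out and so belongs only to the origin */
    static bool unique[NCHUNKS];

    static void copy_chunk_to_snapshots(chunk_t chunk)
    {
        /* stand-in for the physical copy into the snapshot store */
        printf("copying out chunk %llu\n", (unsigned long long)chunk);
    }

    /* Copy the chunk out if any snapshot still shares it with the origin,
     * and report whether that happened (i.e. whether an "exception" was
     * created). */
    static bool make_unique_sketch(chunk_t chunk)
    {
        if (unique[chunk])
            return false;               /* already unique; write may proceed */

        copy_chunk_to_snapshots(chunk); /* preserve the old contents */
        unique[chunk] = true;           /* record the new exception */
        return true;
    }

    int main(void)
    {
        printf("first write:  copied=%d\n", make_unique_sketch(3));
        printf("second write: copied=%d\n", make_unique_sketch(3));
        return 0;
    }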
Path of a bio request through dm-ddsnap and ddsnapd.
The path a bio request takes as a result of a read from a snapshot device
that goes to the origin (aka "snapshot origin read").
- Kernel (dm-ddsnap):
- Generic device handling (or in this case the device mapper) calls
ddsnap_map() with the initial bio. ddsnap_map() queues the bio on
the "queries" queue of the dm_target private (devinfo)
structure.
- worker() finds the bio on the "queries" queue. It
dequeues it from "queries" and queues it on the "pending"
queue, then sends a QUERY_SNAPSHOT_READ message to ddsnapd.
- Process space (ddsnapd):
- incoming() gets QUERY_SNAPSHOT_READ, vets it, performs
readlock_chunk() (to assert a snaplock) for each chunk shared with
origin, sends SNAPSHOT_READ_ORIGIN_OK message to kernel.
- Kernel (dm-ddsnap):
- incoming() gets SNAPSHOT_READ_ORIGIN_OK, calls replied_rw().
- replied_rw() finds each chunk id in "pending" queue,
dequeues it. For a read from the origin (per
SNAPSHOT_READ_ORIGIN_OK), sets the bi_end_io callback to
snapshot_read_end_io() and queues the bio in the "locked"
queue. It then passes the request to lower layers via
generic_make_request().
- When the request has completed, the lower level calls
snapshot_read_end_io(), which moves the bio from the "locked"
queue to the "releases" queue (and pings the worker that
there's more work). It also calls any chained end_io routine.
- worker() finds the bio on the "releases" queue. It
dequeues it from "releases" and sends a
FINISH_SNAPSHOT_READ message to ddsnapd.
- Process space (ddsnapd):
- incoming() gets FINISH_SNAPSHOT_READ and calls
release_chunk() for each chunk, which finds the lock for that
chunk and calls release_lock().
- release_lock() finds the client on the hold list for that
lock and frees that hold.
- If there are still holds remaining (on behalf of other
clients), release_lock() terminates, leaving the snaplock and
any waiting writes pending.
- If there are no more holds, release_lock() walks the
waitlist, freeing each wait as well as each "pending"
structure and transmitting the associated
ORIGIN_WRITE_OK message to permit each pending write to
continue.
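The queue transitions above can be summarized by the toy model below. Only the queue and message names come from the text; the types and the single-bio "queues" are simplifications for illustration, not the dm-ddsnap code.

    #include <stdio.h>

    enum bio_queue { QUERIES, PENDING, LOCKED, RELEASES, DONE };

    static const char *qname[] = { "queries", "pending", "locked", "releases", "done" };

    struct toy_bio { enum bio_queue q; };

    static void move(struct toy_bio *bio, enum bio_queue to, const char *why)
    {
        printf("%-8s -> %-8s  (%s)\n", qname[bio->q], qname[to], why);
        bio->q = to;
    }

    int main(void)
    {
        struct toy_bio bio = { .q = QUERIES };   /* ddsnap_map() queued it */

        move(&bio, PENDING,  "worker sent QUERY_SNAPSHOT_READ");
        move(&bio, LOCKED,   "replied_rw got SNAPSHOT_READ_ORIGIN_OK, submitted to origin");
        move(&bio, RELEASES, "snapshot_read_end_io: read completed");
        move(&bio, DONE,     "worker sent FINISH_SNAPSHOT_READ");
        return 0;
    }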
The path a bio request takes as a result of a write to the origin device (aka "origin write").
- Kernel (dm-ddsnap):
- Generic device handling (or in this case the device mapper)
calls ddsnap_map() with the initial bio. ddsnap_map() queues
the bio on the "queries"
queue of the dm_target private (devinfo) structure.
- worker() finds
the bio on the "queries" queue. It dequeues it from
"queries" and queues it on the "pending" queue,
then sends a QUERY_WRITE message to ddsnapd.
- Process space (ddsnapd):
- incoming() gets QUERY_WRITE for the origin (snaptag of -1).
- Calls make_unique() to copy each chunk out to snapshot(s) if
necessary.
- If the chunk was copied out, calls waitfor_chunk(), which finds any
snaplock outstanding for the chunk. If one is found, it adds a
"pending" structure (allocated as needed) to the lock wait
list.
- If no chunk was queued "pending," it sends an
ORIGIN_WRITE_OK message to the kernel.
- If any chunk was queued "pending," it copies the
ORIGIN_WRITE_OK message to the message buffer on the "pending"
structure. That message is transmitted as a side effect of
ddsnapd receiving FINISH_SNAPSHOT_READ and removing all
holds and waiters on the associated lock, as outlined in the
description of the snapshot origin read path above.
- Kernel (dm-ddsnap):
- incoming() gets ORIGIN_WRITE_OK, calls replied_rw().
- replied_rw() finds each chunk id in "pending" queue,
dequeues it. For a write to the origin (per ORIGIN_WRITE_OK), it
passes the request to lower layers via generic_make_request().
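A compressed sketch of the ddsnapd side of this exchange appears below. The message names come from the text; the stubs and the per-chunk loop are illustrative assumptions about how the pieces fit together, not the actual handler.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t chunk_t;

    /* stubs standing in for ddsnapd internals */
    static bool make_unique(chunk_t c)          { (void)c; return true;  }  /* copied out? */
    static bool chunk_is_read_locked(chunk_t c) { (void)c; return false; }  /* snaplock held? */
    static void queue_pending_write_ok(chunk_t c)
    {
        printf("deferring ORIGIN_WRITE_OK behind reads of chunk %llu\n",
               (unsigned long long)c);
    }
    static void send_origin_write_ok(void) { printf("ORIGIN_WRITE_OK\n"); }

    static void handle_query_write_origin(const chunk_t *chunks, int nchunks)
    {
        bool deferred = false;

        for (int i = 0; i < nchunks; i++) {
            /* copy the chunk out to any snapshots that still share it; if a
             * snapshot read currently holds the chunk, park the reply on the
             * snaplock's wait list instead of answering now */
            if (make_unique(chunks[i]) && chunk_is_read_locked(chunks[i])) {
                queue_pending_write_ok(chunks[i]);
                deferred = true;
            }
        }

        /* reply immediately only if nothing was queued "pending" */
        if (!deferred)
            send_origin_write_ok();
    }

    int main(void)
    {
        chunk_t chunks[] = { 7, 8 };
        handle_query_write_origin(chunks, 2);
        return 0;
    }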
The path a bio request takes as a result of a write to a snapshot device (aka "snapshot write").
- Kernel (dm-ddsnap):
- Generic device
handling (or in this case the device mapper) calls ddsnap_map()
with the initial bio. ddsnap_map() queues the bio on the "queries"
queue of the dm_target private (devinfo) structure.
- worker() finds
the bio on the "queries" queue. It dequeues it from
"queries" and queues it on the "pending" queue,
then sends a QUERY_WRITE message to ddsnapd.
- Process space (ddsnapd):
- incoming() gets QUERY_WRITE for a snapshot.
- If the snapshot is not squashed, calls make_unique() to copy each
chunk from the origin out to the snapshot(s) if necessary.
- Sends a SNAPSHOT_WRITE_OK message to the kernel (or
SNAPSHOT_WRITE_ERROR if the snapshot is squashed or make_unique()
fails).
- Kernel (dm-ddsnap):
- incoming() gets SNAPSHOT_WRITE_OK, calls replied_rw().
- replied_rw() finds each chunk id in "pending" queue,
dequeues it. For a write to a snapshot (per SNAPSHOT_WRITE_OK), it
fills in the appropriate device and computes the physical block for
each chunk, then passes the request to lower layers via
generic_make_request().
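The physical-block computation mentioned in the last step is essentially shift arithmetic on the chunk address and chunksize_bits. The sketch below assumes the usual 512-byte sector convention; the function name and parameters are illustrative, not the dm-ddsnap code.

    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SHIFT 9                  /* 512-byte sectors */

    typedef uint64_t chunk_t;
    typedef uint64_t sector_t;

    /* Map a snapshot-store chunk address to the sector handed to the lower
     * device, given the volume's chunksize_bits. */
    static sector_t chunk_to_sector(chunk_t chunk, unsigned chunksize_bits,
                                    sector_t offset_in_chunk)
    {
        /* sectors per chunk = 1 << (chunksize_bits - SECTOR_SHIFT) */
        return (chunk << (chunksize_bits - SECTOR_SHIFT)) + offset_in_chunk;
    }

    int main(void)
    {
        /* e.g. 16 KiB chunks: chunksize_bits = 14, so 32 sectors per chunk */
        printf("chunk 5 starts at sector %llu\n",
               (unsigned long long)chunk_to_sector(5, 14, 0));
        return 0;
    }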
Startup.
The Zumastor system of processes is started as a side effect of creating a
Zumastor volume with the "zumastor define volume" command. Zumastor starts
the user-space daemons (the ddsnap agent and ddsnap server), then commands the
device mapper to create the actual devices; this eventually results in a call
to the kernel function ddsnap_create() through the device mapper's constructor
call. That function creates the control, client and worker threads,
which proceed as outlined below.
Generally, when it starts, the ddsnap agent just waits for connections;
it sends no messages, and all further operations are performed on behalf of
clients. When a new connection arrives, it accepts it, allocates a client
structure and adds it to its client list. It adds the fd for that client
to the poll vector and later uses the offset therein to locate the client
information. After startup the ddsnap server (ddsnapd) operates the same way.
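The accept/poll pattern described above is generic; a minimal self-contained version, not the agent's actual code, might look like the following. The socket path and limits are arbitrary.

    #include <poll.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    #define MAX_CLIENTS 64

    struct client { int fd; /* per-client state would live here */ };

    int main(void)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/tmp/sketch-agent.sock", sizeof(addr.sun_path) - 1);
        unlink(addr.sun_path);

        int lsock = socket(AF_UNIX, SOCK_STREAM, 0);
        if (lsock < 0 || bind(lsock, (struct sockaddr *)&addr, sizeof(addr)) || listen(lsock, 8))
            return 1;

        struct pollfd pollvec[1 + MAX_CLIENTS] = { { .fd = lsock, .events = POLLIN } };
        struct client clients[MAX_CLIENTS];
        int nclients = 0;

        for (;;) {
            if (poll(pollvec, 1 + nclients, -1) < 0)
                break;

            /* new connection: allocate a client slot and mirror it in the
             * poll vector at the same offset */
            if ((pollvec[0].revents & POLLIN) && nclients < MAX_CLIENTS) {
                int cfd = accept(lsock, NULL, NULL);
                clients[nclients].fd = cfd;
                pollvec[1 + nclients] = (struct pollfd){ .fd = cfd, .events = POLLIN };
                nclients++;
            }

            for (int i = 0; i < nclients; i++) {
                if (pollvec[1 + i].revents & POLLIN) {
                    char buf[256];
                    /* the offset i locates the client's state */
                    ssize_t n = read(clients[i].fd, buf, sizeof buf);
                    (void)n;    /* real code would parse messages and drop dead clients */
                }
            }
        }
        return 0;
    }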
- dm-ddsnap client ("incoming" thread) starts and initializes, sends
NEED_SERVER to agent, blocks on server_in_sem.
- Agent gets NEED_SERVER, calls have_server(), which returns
true if the server has connected; if so, it calls
connect_clients():
- Connects to client address.
- Sends CONNECT_SERVER and fd to dm-ddsnap control.
- ddsnapd starts and initializes, sends SERVER_READY to agent, then
waits for connections.
- Agent gets SERVER_READY, calls try_to_instantiate() to send
START_SERVER to ddsnapd (which initializes its superblock).
- Agent calls connect_clients():
- Connects to client address.
- Sends CONNECT_SERVER and fd to dm-ddsnap control.
- After agent sends CONNECT_SERVER (by either path):
- dm-ddsnap control gets CONNECT_SERVER, gets fd, ups
server_in_sem (unblocking client), sends IDENTIFY over
just-received server fd, ups recover_sem (unblocking
worker).
- ddsnapd gets IDENTIFY, sends IDENTIFY_OK to client, passing
"chunksize_bits." (To "identify" a snapshot? What does this
mean?)
- dm-ddsnap client gets IDENTIFY_OK:
- Sets READY_FLAG.
- Sets chunksize_bits from message.
- Sends CONNECT_SERVER_OK to agent.
- ups identify_sem, which allows ddsnap_create() to return.
- Agent receives CONNECT_SERVER_OK, does nothing, is happy.
- dm-ddsnap worker starts, blocks on recover_sem. When unblocked
by the CONNECT_SERVER operation:
- If the target is a snapshot, calls upload_locks() to clear
outstanding releases list entries, send UPLOAD_LOCK and
FINISH_UPLOAD_LOCK messages to ddsnapd (which does nothing
with them) and move any outstanding entries on the "locked"
list to the "releases" list.
- Calls requeue_queries() to move any queries in the "pending"
queue back to the "queries" queue for reprocessing (since
ddsnapd has restarted and has therefore lost any "pending"
queries); see the sketch after this list.
- Clears RECOVER_FLAG, ups recover_sem (again??), starts
worker loop.
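The requeue step is essentially a list splice from "pending" back onto "queries". The toy illustration below uses a minimal singly linked list rather than the kernel's list helpers, and does not preserve ordering; it is not the dm-ddsnap code.

    #include <stdio.h>

    struct query { int id; struct query *next; };

    /* Move everything still on "pending" back onto "queries" so the worker
     * will re-ask the (restarted) ddsnapd about each one. */
    static void requeue_queries(struct query **queries, struct query **pending)
    {
        while (*pending) {
            struct query *q = *pending;
            *pending = q->next;
            q->next = *queries;
            *queries = q;
        }
    }

    int main(void)
    {
        struct query b = { 2, NULL }, a = { 1, &b };    /* two unanswered queries */
        struct query *queries = NULL, *pending = &a;

        requeue_queries(&queries, &pending);
        for (struct query *q = queries; q; q = q->next)
            printf("requeued query %d\n", q->id);
        return 0;
    }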
Locking
When a chunk is shared between the origin and one or more
snapshots, any reads of that chunk must go to the origin device. A write to
that chunk on the origin, however, may change the shared data and must therefore
be properly serialized with any outstanding reads.
Locks are created when a read to a shared chunk takes place; a snaplock is
allocated if necessary and a hold record added to its "hold" list. When a
write to such a chunk takes place, we first copy the chunk out to the snapshots
with which it is shared so that future snapshot reads no longer need to lock the
chunk. We then check whether the chunk has already been locked. If it has,
we create a "pending" structure and append it to the eponymous list on the
snaplock, thereby queueing the write for later processing. When all
outstanding reads of that chunk have completed, the chunk will be unlocked and
the queued writes allowed to complete.
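The relationships between snaplocks, holds and pending writes can be pictured with the toy structures below. The field and function names loosely follow the text but are illustrative; the real ddsnapd structures and list handling differ.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef uint64_t chunk_t;

    struct hold    { int client_id; struct hold *next;    };   /* one per outstanding read */
    struct pending { int write_id;  struct pending *next; };   /* a deferred ORIGIN_WRITE_OK */

    struct snaplock {
        chunk_t chunk;
        struct hold *holds;          /* outstanding snapshot reads of this chunk */
        struct pending *waiting;     /* origin writes queued behind those reads */
    };

    static void readlock_chunk(struct snaplock *lock, int client_id)
    {
        struct hold *h = malloc(sizeof *h);
        h->client_id = client_id;
        h->next = lock->holds;
        lock->holds = h;
    }

    static void release_lock(struct snaplock *lock, int client_id)
    {
        /* drop this client's hold */
        for (struct hold **p = &lock->holds; *p; p = &(*p)->next) {
            if ((*p)->client_id == client_id) {
                struct hold *h = *p;
                *p = h->next;
                free(h);
                break;
            }
        }
        if (lock->holds)
            return;                  /* other readers still hold the chunk */

        /* last hold released: let every queued write proceed */
        while (lock->waiting) {
            struct pending *w = lock->waiting;
            lock->waiting = w->next;
            printf("ORIGIN_WRITE_OK for write %d\n", w->write_id);
            free(w);
        }
    }

    int main(void)
    {
        struct snaplock lock = { .chunk = 42 };
        struct pending *w = malloc(sizeof *w);
        w->write_id = 1;
        w->next = NULL;

        readlock_chunk(&lock, 1);
        readlock_chunk(&lock, 2);
        lock.waiting = w;            /* an origin write arrived while reads are out */

        release_lock(&lock, 1);      /* client 2 still holds it: write stays queued */
        release_lock(&lock, 2);      /* last hold gone: ORIGIN_WRITE_OK goes out */
        return 0;
    }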
Replication and Hostnames
Configuration
Zumastor currently stores replication target information on the source host
by volume name and the name of the target host. For each volume, the
replication targets are keyed by hostname, which is generated from the
output of the "uname -n" command. That is, for a volume "examplevol," the
configuration for replication to a volume on the host "targethost.dom.ain.com"
is stored in the directory
/var/lib/zumastor/volumes/examplevol/targethost.dom.ain.com/.
Files in this directory include files containing the progress of an ongoing
replication as well as "port," which contains the port on which the target
will listen for a snapshot to be transmitted, and "hold," which contains the
name of the snapshot that was last successfully transmitted to the target and
which is therefore being "held" as the basis for future replication.
On the target host, replication source information is stored by volume name
only, in the directory "source," e.g.
/var/lib/zumastor/volumes/examplevol/source/.
Files here include "hostname," which contains the name of the source host
(which currently must be the output of "uname -n" on the source host), "hold,"
which contains the name of the snapshot that was last successfully received
from the source and directly corresponds to the source file of the same name,
and "period," which contains the number of seconds between each automatic
replication cycle.
Sanity checking
The zumastor command checks the host given in a "replicate" command against
the actual output of "uname -n" on the source host. The output of "uname -n"
is also used when triggering replication via the "nag" daemon, by running
a command on the source host to write the string "target/<hostname>" to the
"trigger" named pipe.
Replication
When the "zumastor replicate" command is given (on the source host), it does
an "ssh" to the target host (given on the command line) to get the contents
of the "source/hostname" configuration file. If the contents of that file
don't match the output of "uname -n" on the source host, it logs an error and
aborts. Otherwise, after some setup it does another "ssh" to the target to
run the command "zumastor receive start," giving the volume and the TCP port
to which the data transmission process will connect. The target host prepares
to receive the replication data and starts a "ddsnap delta listen" process in
the background, to wait for the data connection. The source host, still
running the "replicate" command, issues a "ddsnap transmit" command, giving
in addition to other parameters the target hostname and port to which the
data connection will be made. When the "ddsnap transmit" command completes,
the source host does yet another "ssh" to the target to run the command
"zumastor receive done," also giving the volume and TCP port.
Other uses of "uname -n"
During configuration on a target host, when the administrator runs the
"zumastor source" command, the script does an "ssh" to the given source host
to retrieve the size of the volume to be replicated. Also on the target host,
the "nag" daemon (used to periodically force a replication) does an "ssh" to
the source host to write the string "target/<uname -n output>" to the "trigger"
named pipe to trigger replication on the source host to the target.
Glossary
- chunk:
A unit of data in a Zumastor volume, measured in bytes, which may not
(and probably will not) correspond to sector, block or page sizes;
Zumastor manipulates the contents of a volume in "chunk"-sized units.
More specifically, the unit of storage granularity in the snapshot
store.
- exception:
A chunk that no longer exists solely on the origin, but has been
modified after a snapshot was taken, so that a copy of the old
contents has been made in the affected snapshot(s).
- logical address:
An address within a virtual volume mapped through the device mapper.
- logical chunk:
A chunk on a virtual device that is accessed through the device mapper.
- snaplock:
A data structure describing the read locks held on a chunk that belongs
to a snapshot but is still shared with the origin and has one or more
reads outstanding.