Jerry Jelinek's blog


Solaris Volume Manager disksets

April 5, 2005

Solaris Volume Manager has had support for a feature called
“disksets” for a long time, going back to when it was
an unbundled product named SDS. Disksets are a way to group
a collection of disks together for use by SVM. Originally
this was designed for sharing disks, and the metadevices
on top of them, between two or more hosts. For example,
disksets are used by SunCluster. However, having a way to
manage a set of disks for exclusive use by SVM simplifies administration,
and with S10 we have made several enhancements to the diskset
feature that make it more useful, even on a single host.

If you have never used SVM disksets before, I’ll briefly summarize
a couple of the key differences from the more common use
of SVM metadevices outside of a diskset. First, with a diskset,
the whole disk is given over to SVM control. When you do this,
SVM will repartition the disk and create a slice 7 for metadb
placement. SVM automatically manages metadbs in disksets so
you don’t have to worry about creating those. As disks are
added to the set, SVM will create new metadbs and automatically
rebalance them across the disks as needed. Another difference
with disksets is that hosts can also be assigned to the set.
The local host must be in the set when you
create it. Disksets implement the concept of a set owner
so you can release ownership of the set and a different host
can take ownership. Only the host that owns the set can access
the metadevices within the set. An exception to this is the
new multi-owner diskset, which is a large topic on its own, so
I’ll save that for another post.
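
For example, creating a set, adding disks to it, and moving ownership
between hosts looks roughly like this (the set name "blue" and the host
names are made up for illustration; see metaset(1M) for the full syntax):

    # Create the set with the local host as its first member
    metaset -s blue -a -h host1

    # Add a second host that will be allowed to take the set
    metaset -s blue -a -h host2

    # Hand whole disks over to the set; SVM repartitions them and
    # places metadbs on the new slice 7 automatically
    metaset -s blue -a c2t1d0 c3t5d0

    # Release ownership on one host, then take it on the other
    metaset -s blue -r        # on host1
    metaset -s blue -t        # on host2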

With S10 SVM has added several new features based around the
diskset. One of these is the metaimport(1M) command which
can be used to load a complete diskset configuration onto
a separate system. You might use this if your host died and
you physically moved all of your storage to a different machine. SVM
uses the disk device IDs to figure out how to put the configuration
together on the new machine. This is required since the
disk names themselves (e.g. c2t1d0, c3t5d0, …) will probably
be different on the new system. In S10 disk images in a diskset
are self-identifying. What this means is that if you use
remote block replication software like HDS TrueCopy or the SNDR
feature of Sun’s StorEdge Availability Suite to do
remote replication, you can still use metaimport(1M) to import
the remotely replicated disks, even though the device IDs
of the remote disks will be different.
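
On the new machine the import is only a couple of commands (the set and
disk names here are invented for illustration; see metaimport(1M)):

    # Scan the attached disks and report any disksets available for import
    metaimport -r

    # Import the configuration found on the disks as a set named "blue"
    metaimport -s blue c3t5d0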

A second new feature is the metassist(1M) command. This command
automates the creation of metadevices so that you don’t have
to run all of the separate meta* commands individually. Metassist
has quite a lot of features which I won’t delve into
in this post, but you can read more here.
The one idea I wanted to discuss is that metassist uses
disksets to implement the concept of a storage pool. Metassist
relies on the automatic management of metadbs that disksets
provide. Also, since the whole disk is now under control of
SVM, metassist can repartition the disk as necessary to automatically
create the appropriate metadevices within the pool. Metassist
will use the disks in the pool to create new metadevices and only
try to add additional disks to the pool if it can’t configure the new
metadevice using the available space on the existing disks in the pool.
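
As a rough sketch (the pool name, size, and redundancy values here are
invented; check metassist(1M) for the exact syntax), a single command
requests a volume and lets metassist worry about the layout:

    # Ask for a 10GB volume with one level of redundancy (a mirror);
    # metassist creates or grows the "storagepool" diskset as needed
    metassist create -s storagepool -S 10gb -r 1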

When disksets were first implemented back in SDS they were intended
for use by multiple hosts. Since the metadevices in the set were
only accessible by the host that owned the set, the assumption was
that some other piece of software, outside of SDS, would manage the
diskset ownership. Since SVM is now making greater use of disksets
we added a new capability called “auto-take” which allows the local
host to automatically take ownership of the diskset during boot and thus have
access to the metadevices in the set. This means that you can
use vfstab entries to mount filesystems built on metadevices within
the set and those will “just work” during the system boot.
The metassist command relies on this feature and the storage pools (i.e. disksets)
it uses will all have auto-take enabled. Auto-take
can only be enabled for disksets which have the single, local host
in the set. If you have multiple hosts in the set then you are really
using the set in the traditional manner and you’ll need something
outside of SVM to manage which host owns the set (again, this ignores
the new multi-owner sets). You use the new “-A” option on the metaset
command to enable or disable auto-take.
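
Putting it together, with a hypothetical set "blue" containing a
metadevice d10 mounted on a made-up mount point:

    # Enable auto-take so the local host takes the set during boot
    metaset -s blue -A enable

    # /etc/vfstab entry; metadevices in a diskset live under /dev/md/<setname>
    /dev/md/blue/dsk/d10  /dev/md/blue/rdsk/d10  /export/data  ufs  2  yes  -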

Finally, since use of disksets is expected to be more common with these
new features, we added the “-a” option to the metastat command so that
you can see the status of metadevices in all disksets without having to
run a separate command for each set. We also added a “-c” option to
metastat which gives a compact, one-line output for each metadevice.
This is particularly useful as the number of metadevices in the
configuration increases. For example, on one of our local servers we have 93
metadevices. With the original, verbose metastat output this resulted in
842 lines of status, which makes it hard to see exactly what is going
on. The “-c” option reduces this down to 93 lines.
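
For reference, the two options can be combined:

    # Verbose status for metadevices in every diskset
    metastat -a

    # The same coverage, compacted to one line per metadevice
    metastat -a -c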

This is just a brief overview of some of the new features we have
implemented around SVM disksets. There is lots more detail in
the docs here.
The metaimport(1M) command was available starting in S9 9/04
and the metassist command was available starting in S9 6/04. However,
the remote replication feature for metaimport requires S10.

5 Responses

  1. I seem to have found the correct person here who knows a thing or two about disk sets 🙂 Please humor this situation if you could:

    I’m running Solaris 10 x86 on several V20zs, each connected to an FC SAN; each box can see all disks, and the MPxIO device names do match up on all boxes.

    On host1, I create a two-drive disk set. rpc/metad and rpc/metamh are running on all hosts. root is a member of GID 14 (sysadmin) on all hosts.

    I created that disk set with just host1 in it. I want to add host2 to it. So on host1, I run:

    [root@host1]/> metaset -s fooset -a -h host2
    Things chug for a while, but metaset dies with the following:

    metaset: host2: No such file or directory

    I can try this with another host on the SAN and get the same result. I’m perplexed. On host2, I do notice that there is a dangling symlink created but then removed in /dev/md. It’s a symlink with the name of the disk set I’m trying to add that host to, and it points to /dev/md/shared/1, which doesn’t exist and, according to a truss of rpc.metad on host2, is never made before the error is shown on host1.

  2. I am not sure that I could debug this problem
    through my blog. There are a few things you
    could check. Does a local metadb exist on
    host2? You might try
    setting up /.rhosts and making sure that
    you can rsh from each host to the other.
    Beyond that, I would try logging a support
    call so that somebody can work through this
    step by step with you.
    Sorry I can’t be of more direct help,
    Jerry

  3. Dear Jerry,
    I ran into an issue with the auto-take feature, I think you can easily diagnose what is wrong.
    My environment is as follows,
    An E4900 and a V890 SAN-attached to an IBM DS4800 storage server. All RAID 1+0 LUNs created on the DS4800 are managed via SVM. The environment is not a Sun cluster – just two independent servers sharing DS4800 storage. I am going to take a point-in-time snapshot of the LUNs owned by the E4900 and present it to the V890 for off-host backup. I have five disk sets defined on the E4900, and their snapshots have already been imported to the V890. Hence we will recreate the point-in-time copy every night, and the V890 will take these disk sets, mount the file systems, and take the backup. Once the backup is complete it will release the disks and I will disable the point-in-time copy on the DS4800.
    Issue:
    1. On the E4900 I have enabled the auto-take feature on these five disk sets via metaset -s <set name> -A enable. But they are not being taken by Solaris upon reboot, hence the file systems are not mounted. You have to manually take the disk sets and mount the file systems once the system boots up. I have enabled the meta daemons in /etc/inetd.conf and auto-take enabled the sets; what else should I do to make this work?

  4. Jerry, Thanks for the quick and short update on SVM. I have two questions.
    1. My existing server is running S10 with SVM, but no disksets. So the data disk partitioning doesn’t conform to what SVM expects for a diskset disk. How can I convert these disks into a disk set and move them over to another server later on? I have to keep all the data.
    2. If I already have metadbs on some non-diskset disks (and these disks will not be converted into diskset disks), how does SVM manage the metadbs when a diskset is introduced on the server? Will all metadb copies be updated, and will more copies be created on the diskset disks?
    Thanks.
