Sun System Handbook - ISO 4.1 October 2012 Internal/Partner Edition
Solution Type: Technical Instruction Sure

Solution 1009387.1: Sun StorageTek[TM] 99xx: How to Use Shadow Image for Backup with StorageTek 99x0 & Solaris[TM] Volume Manager
Previously Published As: 212980
Applies to:
Sun Storage 9990V System - Version: Not Applicable
Sun Storage 9985V System - Version: Not Applicable and later [Release: N/A and later]
All Platforms
***Checked for relevance on 09-Dec-2011***

Goal
This document describes the entire process of backup and restore using ShadowImage for StorageTek[TM] 99x0 and Solaris[TM] Volume Manager. Although BluePrint documents are available, they provide only a broad guideline on how a 'Split Mirror Backup' is configured. This document is intended for readers already familiar with the RAID Manager CCI and/or Storage Navigator interfaces for ShadowImage. The command line detail is provided for rebuilding the Solaris Volume Manager metaset on the backup server.
Solution

Steps to Follow

Using ShadowImage for Backup with StorageTek 99x0 & Solaris Volume Manager

When taking a snapshot of a logical volume with ShadowImage, the entire content of the physical disks is cloned. This includes the configuration section (called the private region in VERITAS, or the metadb in Solaris[TM] Volume Manager (SVM)) and the data section (also called the public region). Because the private region (or metadb) holds disk identification parameters, the cloned disks and the original disks have the same IDs. This is not a major issue if the cloned data is to be accessed on two different hosts, but it can be a difficult issue to solve if you want to access the cloned data on the same host.

Cloned Data On a Different Host

Accessing the replicated data on a secondary host is equivalent to importing the logical group (or diskset) that contains the logical volumes you want to access. However, because the disks are cloned, the volume manager on the secondary host will believe this diskset is already imported on the primary host. This information is stored in the diskset metadatabases on the disks. It is necessary to clean up this information on every cloned disk, making it possible to take ownership of the diskset and access its metadevices.

Definitions/Setup

Data Server: a Sun[TM] Cluster 3.1 node using Solaris Volume Manager, patched at the latest level. ShadowImage is used to clone the LUNs for the purpose of backup, and the data to access is configured as a metadevice.

In this example, the primary and secondary volumes (P-Vols and S-Vols) are accessed from two different hosts, a Data Server and a Backup Server. The Data Server has access only to the P-Vols, and the Backup Server sees only the S-Vols. This constraint forces you to reconstruct the metaset and metadevices on the secondary site before accessing the data. There is no possibility of importing or exporting the metaset from one host to the other (taking and releasing ownership of a metaset implies that every disk is visible on both hosts).

ShadowImage is a track-for-track asynchronous copy technology at the logical device level. It has no concept of file systems or applications, so the data must be properly prepared on the P-VOL by the host to ensure data integrity.

A typical implementation would be as follows.

1. Create the ShadowImage pair (pair create).

2. Create a metaset on the secondary host (Backup Server). First rescan the devices so the S-Vols are visible:

   root@Back1 # devfsadm

3. Populate the new metaset with the cloned disks.

4. Create the new configuration for the secondary metaset by capturing the primary configuration on the Data Server:

   root@Node1 # metastat -s myds -p

5. On the secondary host (Backup Server), create a metadevice configuration file called /etc/lvm/md.tab containing the previous output with the correct secondary LUNs. The order of appearance must be respected:

   root@Back1 # cat /etc/lvm/md.tab

6. Apply the metadevice configuration file on the replicated host:

   root@Back1 # metainit -s myds -a

7. If you face issues with either of the following commands, follow the recovery steps listed after the sketch below:

   root@back1 # metainit -s myds -a
   root@back1 # metaset -s myds -t
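Steps 2 through 7 can be consolidated into one minimal sketch, assuming the diskset is named myds, the backup host is Back1, the Data Server is Node1, and the cloned S-Vols appear as c2t1d0 and c2t2d0 (hypothetical device names; substitute those reported on your Backup Server).

Rescan devices so the S-Vols are visible:

   root@Back1 # devfsadm

Create the diskset and register the backup host as its owner candidate:

   root@Back1 # metaset -s myds -a -h Back1

Populate the diskset with the cloned disks:

   root@Back1 # metaset -s myds -a c2t1d0 c2t2d0

On the Data Server, print the primary configuration in md.tab format:

   root@Node1 # metastat -s myds -p

Place that output in /etc/lvm/md.tab on Back1 with the S-Vol device names substituted, then rebuild the metadevices and take ownership of the diskset:

   root@Back1 # metainit -s myds -a
   root@Back1 # metaset -s myds -t

Adding a drive with metaset -a reinitializes its replica on the private slice, which is why the metadevices are recreated from the md.tab entries rather than discovered from the cloned replicas.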
If metainit or the metaset take fails because the cloned disks still carry stale diskset replica information, this is the metaset clear problem that you may face. Two forms of the command are used to clear metasets with stale or no DB replicas:

   metaset -s <setname> -P
   metaset -s <setname> -C purge

How to use the command:

In a non-cluster environment:

   metaset -s <setname> -P

In a Sun Cluster 3.x environment:

   metaset -s <setname> -P

The node on which the command is run will be removed from the CCR for that diskset. If the diskset is not in the CCR (i.e. not seen in the scstat -D output), use:

   metaset -s <setname> -C purge

This will cause the command to have no interaction with the Sun Cluster framework.

8. Verify the file system on the S-VOL:

   root@Back1 # fsck /dev/md/myds/rdsk/d101

9. Mount and verify the integrity of the database on the S-VOL if possible:

   root@Back1 # mount /dev/md/myds/dsk/d101 /mnt/LAB

10. Back up the S-VOL (a hedged example follows the note below).

NOTE: The SVM configuration requires a recreation by using the md.tab entries, which may require a metaset purge as well. Configurations using SVM cannot currently be imported directly from the cloned disks; the metaset must be rebuilt on the backup server.
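As a minimal sketch of step 10, assuming a UFS file system on d101 and a local tape drive at /dev/rmt/0 (both hypothetical), the S-VOL could be backed up with ufsdump against the raw metadevice after unmounting it:

   root@Back1 # umount /mnt/LAB
   root@Back1 # ufsdump 0f /dev/rmt/0 /dev/md/myds/rdsk/d101

Once the backup completes, the ShadowImage pair would typically be resynchronized from the CCI side (pairresync) before the next backup cycle.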
Attachments: This solution has no attachment.