Best Practices for Copy Data

ECX 2.0
© Catalogic Software, Inc., 2015. All rights reserved. This publication contains proprietary and confidential material and is only for use by licensees of the Catalogic DPX™, Catalogic BEX™, or Catalogic ECX™ proprietary software systems. This publication may not be reproduced in whole or in part, in any form, except with written permission from Catalogic Software. Catalogic, Catalogic Software, DPX, BEX, ECX, and NSB are trademarks of Catalogic Software, Inc. Backup Express is a registered trademark of Catalogic Software, Inc. All other company and product names used herein may be the trademarks of their respective owners.

Table of Contents

Concepts and Terms
Planning
A Useful Copy Data Pattern

Concepts and Terms

ECX 2.0 introduces two new functions that enable you to amplify the value of your corporate data. With the first of these functions, “copy data”, you create copies of important storage volumes. With the second, “use data”, you employ these data copies for a variety of purposes, including restore, disaster recovery, OS and application maintenance, testing, validation, and data analytics.

The copy data function can be applied at two different levels: storage (for example, NetApp) or application (for example, VMware). Applied at the VMware level, the copy data function allows you to create copies of the storage that supports VMware datacenters, folders, and VMs. Moreover, the resulting data copies can be made application consistent. Applied at the NetApp level, the copy data function allows you to create copies of NetApp storage volumes. From an application perspective, such copies should be considered merely crash consistent.

It is important to understand that when you apply the copy data function at the VMware level, ECX ultimately applies it at the NetApp storage level. That is, ECX resolves the copied VMware objects to their supporting NetApp storage volumes.

Both the source and target volumes for copy data must reside on a NetApp cluster running Clustered Data ONTAP (CDOT) 8.2 or greater. The copy data function is currently not supported for 7-mode Data ONTAP (7DOT). Storage volumes in CDOT belong to a storage virtual machine, or SVM. When NetApp first introduced CDOT, SVMs were called “Vservers”.

Planning

Copy Data Sources

Any NetApp volume may be a copy data source. The rationale for copying data may derive from various needs and purposes, including data analytics, legal requirements, and disaster recovery.

VMware Copy Data Sources

Following are some considerations for VMware copy data sources:

1. Register with ECX all vCenters and CDOT clusters that contain VMware datastores.

2. Run a VMware Catalog policy.

3. Run the ECX Unprotected Virtual Machines report, found under Protection Compliance on the Reports tab. Among the reported VMs, identify those that cannot be protected because they are legitimately ineligible for ECX. The reasons for ineligibility include:
   • They use non-NetApp storage.
   • They use VMware RDM disks.
   • Their storage does not belong to a CDOT SVM.
   When you define VMware copy data policies, you can then exclude these VMs (see the eligibility sketch after this list).

4. Identify the VMs that need application consistency. Then, after selecting the VM Snapshot option for the copy data policy, select only those VMs requiring application consistency. While this creates a VM snapshot of all the VMs in the policy, only the selected VMs are made application consistent. It is important to select application consistency for only the VMs that require it, because creating an application consistent snapshot typically takes considerably more time than a snapshot without application consistency.

5. Ensure that VMware Tools is installed on all VMs for which you desire application consistency.
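To make the exclusion step concrete, here is a minimal Python sketch of filtering a VM inventory by the three ineligibility criteria above. The VirtualMachine class and its attribute names are hypothetical stand-ins; ECX discovers the equivalent information through its catalog policies.

```python
from dataclasses import dataclass

# Hypothetical, simplified view of the attributes relevant to eligibility.
@dataclass
class VirtualMachine:
    name: str
    uses_netapp_storage: bool   # False -> non-NetApp storage
    has_rdm_disks: bool         # True  -> VMware RDM disks
    on_cdot_svm: bool           # False -> storage not on a CDOT SVM

def is_eligible(vm: VirtualMachine) -> bool:
    """True if the VM can be protected by an ECX copy data policy."""
    return vm.uses_netapp_storage and not vm.has_rdm_disks and vm.on_cdot_svm

vms = [
    VirtualMachine("app01", True, False, True),
    VirtualMachine("db01", True, True, True),      # RDM disks -> ineligible
    VirtualMachine("legacy01", False, False, False),
]

# VMs to exclude when defining the copy data policy.
ineligible = [vm.name for vm in vms if not is_eligible(vm)]
print(ineligible)  # ['db01', 'legacy01']
```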
The VMware copy data function also offers the ability to select VMware datastores directly, rather than selecting within the datacenter, folder, and VM hierarchy. When you select at the datastore level, ECX catalogs no information about the VMs that use those datastores; this limitation will be addressed in future releases. For ECX 2.0, these relationships can be discovered by using ECX's search function.

NetApp Copy Data Sources

In planning for NetApp copy data sources, we recommend that you run a NetApp Storage Catalog policy against all CDOT SVMs whose volumes will serve as copy data sources. The data generated by this policy, accessible via the ECX NetApp Storage Volumes report (under Storage Utilization), provides insight into the characteristics of the SVM and its volumes. This recommendation is especially important if you are low on disk storage. The rationale is that the ECX copy data function dynamically determines and provisions a target volume based on the size of the source volume. Even though the target volume is thin provisioned, ECX will ensure that the target aggregate contains free space roughly equal to the size of the source volume. Running the NetApp Storage Catalog policy is one way to learn the size of the source volume and, consequently, that of the automatically provisioned target volume.

Another important consideration for NetApp copy data sources involves the relationships between NetApp volumes, logical units (LUs), VMFS datastores, VMs, and ECX copy data policies. By way of background, a LU is a WAFL file on a NetApp volume that can be exposed to VMware as if it were a SCSI disk. One or more LUs can contribute to a VMFS datastore. VMDK files provisioned on a datastore serve as virtual SCSI devices for VMs.

Additionally, if you have multiple source VMs on the same volume, it is recommended to include them in a single copy data policy, if practical. The reason is the 256 snapshot limit on NetApp volumes. If you have, say, two source VMs and you define two ECX copy data policies, the 256 snapshot limit is reached twice as fast as it would be with a single copy data policy, because twice as many snapshots are produced. As a result, half as many recovery points are available for each VM. More generally, the number of recovery points available for a given source VM is reduced by a factor equal to the number of distinct policies that resolve to the same NetApp volume, as the sketch below illustrates.
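A quick back-of-the-envelope calculation shows how the per-volume snapshot limit divides among policies. The function below is illustrative arithmetic only, not an ECX API:

```python
MAX_SNAPSHOTS_PER_VOLUME = 256  # NetApp per-volume snapshot limit

def recovery_points_per_vm(policies_on_same_volume: int) -> int:
    """Recovery points available to each source VM when several ECX copy
    data policies resolve to the same NetApp volume: every policy's
    snapshots draw from the same per-volume budget."""
    return MAX_SNAPSHOTS_PER_VOLUME // policies_on_same_volume

print(recovery_points_per_vm(1))  # 256 -- one combined policy
print(recovery_points_per_vm(2))  # 128 -- two policies halve the recovery points
```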
Copy Data Targets

Prior to release 2.0, ECX functionality did not meaningfully alter your environment. But with 2.0, the copy data function consumes NetApp storage in the form of data copies. These data copies, the result of a SnapMirror or SnapVault operation, represent NetApp source/target relationships, knowledge of which is kept by ECX. Consequently, it is simplest to define no more than one or two target SVMs. In the event that the ECX appliance is lost, you would then still know which SVMs contained data copies.

We recommend that the SnapMirror and SnapVault targets reside on different NetApp clusters, preferably with each cluster at a different physical site. This recommendation assumes that you are using the copy data function for data protection purposes. If you are using the copy data function chiefly for test/dev or data analytics, this recommendation is less critical; but, to avoid impacting your source production environment, it is still a best practice to keep the remote mirror and the local vault on different storage.

We also recommend that you use dedicated NetApp aggregates as copy data targets. The intent here is to avoid degrading the performance of your production workload. Such degradation could occur if the production I/O workload had to compete with the copy data I/O workload.

The ECX copy data function automatically provisions a target volume. In so doing, ECX ensures that the target volume, although thin provisioned, is on an aggregate that has free space roughly equal to the size of the source volume. The auto-provisioned target volume has NetApp storage efficiency (“de-dupe”) enabled.
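The provisioning constraint can be expressed as a simple check. This is an illustrative sketch of the sizing rule described above, not ECX's actual provisioning logic:

```python
def can_provision_target(source_volume_gb: float, aggregate_free_gb: float) -> bool:
    """Illustrative check: even though the target volume is thin
    provisioned, the target aggregate must hold free space roughly
    equal to the size of the source volume."""
    return aggregate_free_gb >= source_volume_gb

# A 2 TB source volume needs roughly 2 TB free on the target aggregate.
print(can_provision_target(2048, 1500))  # False -- provisioning would fail
print(can_provision_target(2048, 2200))  # True
```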
Cluster Peering

Peering is essentially the process of enabling network connectivity between two different physical clusters. Before running an ECX copy data policy, you must peer the source and target clusters. This can be accomplished by means of NetApp's OnCommand UI or the clusters' CLI. Peering of the source and target SVMs is performed automatically by ECX itself, based on the protection policy definition.

Licensing

The ECX copy data function requires licenses for SnapMirror and SnapVault on both the source and target NetApp clusters.
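If you want to verify peering from a script before running a policy, something like the following sketch could work. It assumes key-based SSH access to the cluster management LIF as admin and uses the standard CDOT `cluster peer show` command; the host and cluster names are placeholders for your environment:

```python
import subprocess

def cluster_is_peered(mgmt_host: str, peer_cluster: str) -> bool:
    """Return True if peer_cluster appears in 'cluster peer show' output."""
    result = subprocess.run(
        ["ssh", f"admin@{mgmt_host}", "cluster peer show"],
        capture_output=True, text=True, check=True,
    )
    return peer_cluster in result.stdout

# Example (hypothetical host and cluster names):
# if not cluster_is_peered("src-cluster-mgmt", "dr-cluster"):
#     raise RuntimeError("Peer the clusters before running the copy data policy")
```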
A Useful Copy Data Pattern

A common and useful copy data pattern for data protection is as follows:

1. Hourly snapshot on the source FAS.
2. Nightly snapshot on the source FAS.
3. Mirror the nightly snapshot(s) to a secondary (DR) FAS.
4. Vault snapshots from the secondary FAS to a tertiary (“backup”) FAS.

[Figure: the copy data pattern. Hourly and nightly snapshots are taken on the source volume (SVM 1, primary cluster, source site). SnapMirror copies the nightly snapshot over a network link to a target volume (SVM 2, secondary cluster, disaster recovery site). SnapVault then copies, nightly, over a network link to a target volume (SVM 3, tertiary cluster, backup site).]

The merits of this pattern are:

1. It satisfies RTO and RPO. Hourly snapshots give immediate (onsite) and high fidelity (no more than one hour's data loss) recovery points.
2. Nightly snapshots accumulate the day's changes and serve as the source for SnapMirror. The resulting DR site copies can be used for true BC/DR, DR testing, data analytics, and so on.
3. SnapVault copies at the backup (tertiary) site can serve as a long-term, e.g., compliance-related, repository.
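As a rough way to reason about this pattern, the sketch below estimates the worst-case data loss (RPO) at each tier and how quickly snapshots accrue against the per-volume limit discussed earlier. The figures assume no pruning and are illustrative only; actual retention depends on your policy settings:

```python
MAX_SNAPSHOTS_PER_VOLUME = 256  # per-volume limit discussed earlier

# Worst-case data loss (RPO) at each tier of the pattern, in hours.
rpo_hours = {
    "source site (hourly snapshots)": 1,
    "DR site (nightly SnapMirror)": 24,
    "backup site (nightly SnapVault)": 24,  # lags the mirror it vaults from
}

# Hourly plus nightly snapshots accrue about 25 per day on the source
# volume, so with no pruning the snapshot budget would last roughly:
days_of_onsite_history = MAX_SNAPSHOTS_PER_VOLUME // 25
print(rpo_hours)
print(f"~{days_of_onsite_history} days of on-source recovery points")
```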
Schedules and Events

By default, snapshots are taken periodically; that is, they happen at scheduled times. By default, mirror and vault operations occur in response to an event, i.e., in response to the completion of another operation. In the pattern above, the mirror policy starts to run when the nightly snapshot operation has finished. In the same way, the vault policy starts to run when the mirror policy has finished. Note that these three sub-policies always run serially with respect to each other.
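The control flow can be pictured as a simple serial chain. The function names below are illustrative stand-ins, not ECX APIs:

```python
def nightly_snapshot():
    print("snapshot sub-policy: snapshot created on the source volume")

def mirror_to_dr():
    print("mirror sub-policy: SnapMirror update to the DR cluster")

def vault_to_backup():
    print("vault sub-policy: SnapVault update to the backup cluster")

def run_copy_data_policy():
    # The three sub-policies always run serially: each one starts in
    # response to the completion event of the one before it.
    for sub_policy in (nightly_snapshot, mirror_to_dr, vault_to_backup):
        sub_policy()  # in the product, the next step waits on completion

run_copy_data_policy()
```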