Thin Space Reclamation with EMC VPLEX

Technical Notes

Thin Space Reclamation with EMC® VPLEX™
VMware ESXi, Microsoft Windows, Generic UNIX / Linux
EMC XtremIO™, EMC VMAX3™, and EMC VNX™

Abstract

This document describes manual procedures that can be used to reclaim consumed storage on thin LUNs using host-based tools along with VPLEX data mobility.

March 2015

Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA.

Published March 2015

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

Part Number h14055
Contents

Chapter 1  Introduction
    Purpose
    Scope
    Audience
    Document Organization
    Process Overview

Chapter 2  Thin Provisioning
    VPLEX Thin Provisioning
    VPLEX Rebuilds for Thin Devices
    VPLEX Mobility to Reclaim Unused Space
        Extent Migrations
        Device Migrations
    VMware API for Array Integration (VAAI) Support
        Compare and Write
        WriteSame (16)
    VNX2 Thin Provisioning
    VMAX3 Thin Provisioning
        VMware vStorage API for VMAX3
    XtremIO Thin Provisioning
        XtremIO's support for the VAAI
    Thin Provisioning Summary

Chapter 3  VMware ESXi
    VMware ESXi Reclaim
    Virtual Machine Disks (VMDKs)
    Raw Data Mappings (RDMs)
    Datastores (VMFS)

Chapter 4  Generic UNIX / Linux
    UNIX / Linux Filesystem Reclaim
    The "dd" Command
    The "mount -o discard" Command
    The "fstrim" Command

Chapter 5  Microsoft Windows
    Thin Provisioning LUN Identification
    Storage Space Reclamation
    Using the UNMAP Command
    UNMAP Requests from Hyper-V
    Using the sdelete.exe Command
    Scripting with PowerShell

Appendix A  VMware ESXi UNMAP Examples
    Space Reclamation with VMware ESXi
        vmkfstools --punchzero

Appendix B  Windows RDM Example
    Space Reclamation with Microsoft Windows
        sdelete.exe

Appendix C  Linux with EMC VPLEX and VNX
    Space Reclamation through VPLEX Mobility Jobs
        How Data Mobility Works
        VPLEX Data Mobility
Figures

Figure 1 - Operating System Process Flow
Figure 2 - VPLEX Virtualized Storage
Figure 3 - VMware Storage Layers
Figure 4 - LUN Utilization Prior to File Deletion
Figure 5 - Deleting Files on the Guest Host
Figure 6 - LUN Utilization After File Deletion
Figure 7 - Using "dd" to Fill the Free Disk Space with Zeroes
Figure 8 - LUN Utilization after Space Reclamation
Figure 9 - Inflated VMDK Size prior to vmkfstools
Figure 10 - Example of Running "vmkfstools --punchzero"
Figure 11 - Deflated VMDK Size after running vmkfstools
Figure 12 - File Size Prior to Running sdelete.exe
Figure 13 - Example of running sdelete.exe
Figure 14 - File size after running sdelete.exe
Figure 15 - SuSE_OS_LUN_0 Consumed Capacity
Figure 16 - Deleting a file and zeroing the filesystem
Figure 17 - SuSE_OS_LUN_0 Consumed Capacity unchanged
Figure 18 - Setting the Thin Rebuild Attribute
Figure 19 - VPLEX Data Mobility
Figure 20 - Create Device Mobility Job
Figure 21 - Select Virtual Volume
Figure 22 - Create Source / Target Mobility mapping
Figure 23 - SuSE_OS_LUN_1 Consumed Capacity
Chapter 1  Introduction

This chapter presents the following topics:

Purpose
Scope
Audience
Document Organization
Process Overview

Purpose

Many applications have the potential to write zeroes to free space as part of their standard initialization, allocation, or migration processes. Depending on the way the zeroes are written, the potential exists to reclaim the storage space allocated as a result of these processes. This technical note discusses some of the most common situations that cause zeroes to be written to storage devices.

Scope

This technical note outlines how to reclaim all-zero space and also how to reclaim previously used non-zero space with host-based applications.

Audience

This technical note is intended for EMC field personnel, partners, and customers who will be configuring, installing, and supporting VPLEX. An understanding of these core technologies is required:

- Server and Application Administration
- Storage Architecture and Network Design
- Fibre Channel Block Storage Concepts
- VPLEX Concepts and Components

Document Organization

This technical note is divided into multiple sections:

Section One: Each host operating system and its specific requirements for reclaiming all-zero marked space.

Section Two: The appendix sections contain real-world examples for each host operating system.

Process Overview

The foundation of thin device space reclamation is that a zero written to disk can be reclaimed for the thin pool that provides the backing storage for the thin device. Depending on the back-end array, zeroes may be deduplicated in the array; in other cases, VPLEX's built-in thin awareness can be leveraged to deduplicate zeroes with a data mobility job. In many, if not all cases, when a file is written to a filesystem and then deleted, the space that was originally written for the file will not be overwritten with zeroes by the standard delete command(s) used in UNIX and Windows systems. A manual process must be used to overwrite the newly freed space with zeroes so the space can then be reclaimed by the back-end storage array.

The first thing to consider is whether the system is a virtual machine or is running directly on hardware. If it is running as a virtual machine, additional clean-up will be required to fully reclaim the space, both at the hypervisor layer and the storage array layer. The procedures for zeroing filesystems for UNIX and Windows are the same, respectively, regardless of whether the system is running virtualized or not.

Deduplication on back-end storage arrays

Storage arrays that support deduplication, such as the EMC XtremIO, will automatically reclaim and free space on thin LUNs as the zeroes are written to the newly claimed space. No further action is required.

On storage arrays that do not support deduplication, VPLEX Data Mobility can be leveraged to move a zeroed thin LUN to a new thin LUN. Although VPLEX does not pass SCSI UNMAP through to the back end, it will preserve the thin-ness of devices and will only transfer the non-zero data to the new LUN, thereby re-thinning the device.

The following flowchart diagrams the basic procedures required to reclaim unused space on thin LUNs.

Figure 1 - Operating System Process Flow
Chapter 2  Thin Provisioning

This chapter presents the following topics:

VPLEX Thin Provisioning
VPLEX Rebuilds for Thin Devices
VPLEX Mobility to Reclaim Unused Space
VMware API for Array Integration (VAAI) Support
VNX2 Thin Provisioning
VMAX3 Thin Provisioning
XtremIO Thin Provisioning
Thin Provisioning Summary

VPLEX Thin Provisioning

Traditional (thick) provisioning anticipates future growth and thus allocates storage capacity beyond the immediate requirement. This implies that during a rebuild process all the data will be copied from the source to the target.

With "thin" provisioning, you allocate only the storage capacity that is needed, as and when the application writes. This means that if a target is claimed as a "thin" device, VPLEX will read the storage volumes but will not write any unallocated blocks to the target, preserving the target's thin provisioning.

Benefits of VPLEX thinly provisioned volumes:

- They expand dynamically depending on the amount of data written to them.
- They do not consume physical space until written to.
- Thin provisioning optimizes the use of the available storage space.

Figure 2 - VPLEX Virtualized Storage

Note: By default, VPLEX treats all storage volumes as if they were thickly provisioned. You can tell VPLEX to claim arrays that are thinly provisioned using the thin-rebuild attribute.
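For reference, the attribute can also be set from the VPLEX CLI using the generic set syntax against a claimed storage volume. This is a hedged sketch: the cluster context and volume name are placeholders, and exact spellings may vary by GeoSynchrony release.

VPlexcli:/> set /clusters/cluster-1/storage-elements/storage-volumes/<storage-volume>::thin-rebuild true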
VPLEX Rebuilds for Thin Devices

When claiming back-end storage, VPLEX requires the user to specify "thin" provisioning for each back-end storage volume. Storage volumes that have been claimed as "thin" devices allow that storage to migrate onto a thinly provisioned storage volume while allocating only the exact amount of consumed thin storage pool capacity.

VPLEX preserves the unallocated thin pool space of the target storage volume by detecting zeroed data content prior to writing, and then skipping those unused blocks to prevent unnecessary allocation. If a storage volume is thinly provisioned, the "thin-rebuild" attribute must be set to "true" prior to the storage volume being used for Data Mobility, RAID 1, or DR1.

If a thinly provisioned storage volume contains non-zero data before being connected to VPLEX, the performance of the migration or initial RAID 1 rebuild is adversely affected because VPLEX must copy all blocks.
VPLEX Mobility to Reclaim Unused Space

Among the many different use cases for VPLEX Mobility, one is to move from "thick" to "thin" or from "thin" to "thin" devices (or extents) in order to reclaim unused space, working around VPLEX's inability to leverage the SCSI UNMAP functions.

Note: In most cases, modern operating systems now offer methods of reclaiming unused (or deleted) space from mounted storage volumes. However, there are many older versions that do not offer SCSI UNMAP support, and VPLEX Mobility offers a great method of resolving this problem.

Extent Migrations

Extent migrations move data between extents in the same cluster.

Use extent migrations to:

- Move extents from a "hot" storage volume shared by other busy extents
- Defragment a storage volume to create more contiguous free space
- Perform migrations where the source and target have the same number of volumes with identical capacities

Device Migrations

Device migrations move data between devices on the same cluster or between devices on different clusters.

Use device migrations to:

- Migrate data between dissimilar arrays
- Relocate a "hot" volume to a faster array
- Relocate devices to new arrays in a different cluster
VMware API for Array Integration (VAAI) Support

On VPLEX, VAAI is implemented using the following two SCSI commands:

- "Compare and Write" (CAW) offloads coordination of powering virtual machines (VMs) on/off and moving them between ESX servers.
- "WriteSame (16)" offloads copying data to and from the array through the hypervisor.

Compare and Write

The CompareAndWrite (CAW) SCSI command is used to coordinate VMware operations such as powering VMs on/off, moving VMs from one ESXi server to another without halting applications (VMotion), and Distributed Resource Scheduler (DRS) operations.

CAW is used by VMware ESXi servers to relieve storage contention, which may be caused by SCSI RESERVATION in distributed VM environments. CAW assists storage hardware acceleration by allowing ESX servers to lock a region of a disk instead of the entire disk.

VPLEX allows CAW to be enabled/disabled at either the system level or the storage view level. When CAW is disabled on VPLEX, VPLEX virtual volumes do not include CAW support information in their responses to inquiries from hosts.

Note: VM operations may experience significant performance degradation if CAW is not enabled.
WriteSame (16)

The WriteSame (16) SCSI command provides a mechanism to offload initializing virtual disks to VPLEX. WriteSame (16) requests that the server write blocks of data transferred by the application client multiple times to consecutive logical blocks.

WriteSame (16) is used to offload VM provisioning and snapshotting in vSphere to VPLEX, which enables the array to perform copy operations independently without using host cycles. The array can schedule and execute the copy function much more efficiently.
VNX2 Thin Provisioning

For native VMware environments, the Virtual Machine File System (VMFS) has many characteristics that are thin-friendly. First, a minimal number of thin extents are allocated from the pool when a VMware file system is created on thin LUNs. Also, a VMFS Datastore reuses previously allocated blocks, which is beneficial to thin LUNs.

When using RDM volumes, the file system or device created on the guest OS dictates whether the RDM volume is thin-friendly.

When creating a VMware virtual disk, LUNs can be provisioned as:

- Thick Provision Lazy Zeroed
- Thick Provision Eager Zeroed
- Thin Provision

Thick Provision Lazy Zeroed is the default and recommended virtual disk type for thin LUNs. When using this method, the storage required for the virtual disk is reserved in the Datastore, but the VMware kernel does not initialize all the blocks at creation.

As of vSphere 5, there is also the ability to perform thin LUN space reclamation at the storage system level. VMFS 5 uses the SCSI UNMAP command to return space to the storage pool when created on thin LUNs. SCSI UNMAP is used any time VMFS 5 deletes a file, such as during a Storage vMotion, a VM deletion, a snapshot deletion, etc. Earlier versions of VMFS would only return the capacity at the file system level. vSphere 5 greatly simplifies the process by conducting space reclaim automatically.

Note: When using Thin Provision, space required for the virtual disk is not allocated at creation. Instead, it is allocated and zeroed out on demand.
VMAX3 Thin Provisioning

All VMAX3 arrays are pre-configured with Virtual Provisioning (VP) to help reduce cost, improve capacity utilization, and simplify storage management. The VMAX3 in fact only supports thin devices and no longer uses any thick devices.

VMware vStorage API for VMAX3

VMware vStorage APIs for Array Integration (VAAI) offload Virtual Machine (VM) operations to the VMAX3 array to optimize server performance. VAAI enables the ESXi servers to free up server resources by offloading certain operations. For VMAX3, these operations are:

- Full Copy - This operation offloads replication to VMAX3 to enable much faster deployments of VMs, snaps, clones, and Storage vMotion operations.
- Block Zero - This operation allows you to rapidly initialize file system blocks and virtual disk space.
- Hardware-Assisted Locking - This operation optimizes metadata updates and assists with virtual desktop deployments.
- UNMAP - This operation allows VMs to reclaim zeroed space within VMDK files and Datastores, making more efficient use of disk space. This unused space is automatically returned to the thin pool where it originated.
- VMware vSphere Storage API for Storage Awareness (VASA)
XtremIO Thin Provisioning

XtremIO arrays are inherently thinly provisioned. When the host allocates a thick eager-zeroed virtual disk with VAAI block zeroing, the XtremIO array still thinly provisions the space, starting with absolutely no consumed SSD space at all! The preparation or initialization of such an EZT disk is super-fast because it is all metadata operations as a result of writing zeroes. With every written unique 4KB block, exactly 4KB of space is incrementally consumed. So you get the best of both worlds: deduplication and thin provisioning benefits with no run-time overhead of lazy-zero or thin-format virtual disks on the ESX hosts.

XtremIO's support for the VAAI

When the ESX host issues an unmap command, the specific LBA-to-fingerprint mapping is erased from the metadata. The reference count of the underlying block corresponding to that fingerprint is decremented. When a subsequent read comes for that erased LBA, the XtremIO array will return a zero block (assuming the reference count was decremented to zero) because the entry no longer exists in the mapping metadata. There is no need to immediately erase the now-de-referenced 4K block on SSD, avoiding any erase overhead.

When a host writes a zero block to an XtremIO array at a certain LBA, the array immediately recognizes that this is a 4KB block filled with zeroes, because all zero blocks have the same unique content fingerprint, which is well known by the array. Upon identification of this fingerprint, the array immediately acknowledges the write to the host without doing anything internally.

XtremIO has global inline deduplication, which means that no matter how many times a specific 4KB data pattern is written to the array, there is only ever one copy of it stored on flash in the array. You can imagine that for all those logical 4KB zero blocks, there would be mappings from their logical addresses (LBAs) to the same unique fingerprint for all zero blocks. And the fingerprint would be mapped to the single zero block stored on SSD.
Thin Provisioning Summary

In summary, it is important to note that even though VPLEX fully supports "thin" provisioning between dozens of heterogeneous back-end arrays, there is still some work to be done to facilitate SCSI UNMAP commands between VPLEX, back-end storage arrays, and host operating systems.

The VNX2, VMAX3, and XtremIO back-end arrays all natively support SCSI UNMAP commands and VAAI feature sets, but each of these back-end arrays handles space reclamation differently while being virtualized behind VPLEX.

This is where VPLEX Mobility can help resolve these issues by enabling the transparent movement of data between extents and/or devices to trim the unclaimed space and reclaim that space for each respective "thin" pool.

Note: VPLEX Mobility jobs are all done online without the requirement to take the host offline for any reason. This ensures complete transparency to the user environments.
Chapter 3  VMware ESXi

This chapter presents the following topics:

VMware ESXi Reclaim
Virtual Machine Disks (VMDKs)
Raw Data Mappings (RDMs)
Datastores (VMFS)

VMware ESXi Reclaim

In the VMware ESXi environment, there are two layers of the storage stack that must be zeroed for storage reclamation to take place. The VM's filesystems are contained in the Virtual Machine Disk file (VMDK) at the Virtual Machine layer, and the Datastore is created as a Virtual Machine File System (VMFS) at the ESXi layer. This section discusses procedures for each of these layers.

Figure 3 - VMware Storage Layers

Virtual Machine Disks (VMDKs)

If the space to be reclaimed is part of a VMDK file and is in use by a guest operating system, the guest operating system's filesystems must first be zero-written before continuing on with ESXi-specific procedures. This is covered in detail for each OS later in this document.

Something to consider is whether the VMDK files were allocated as thin VMDKs. If so, they must first be deflated to ensure the guest operating system does not run out of space while running the procedure to zero out the guest operating system's filesystem. Zeroing is done by creating a temporary file that inflates the filesystem to its maximum capacity and then deleting that temporary file; as a result, the space allocation of the thinly provisioned VMDK will also inflate. VMware therefore provides a CLI tool to trim the zero space and "re-thin" the device after it was temporarily inflated. This tool is called "vmkfstools --punchzero".

For more information see the VMware vSphere 5.5 Documentation Center: Using vmkfstools Help File

Note: The VMDK must be free of any locks prior to running vmkfstools --punchzero on it. As a result, the virtual machine that is using the VMDK must be powered off prior to running the --punchzero command.
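The basic invocation looks like the following; the datastore path and VMDK name below are examples, and Appendix A shows a complete walkthrough:

# vmkfstools --punchzero /vmfs/volumes/<datastore>/<vm>/<vm>.vmdk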
Raw Data Mappings (RDMs)

If the filesystem is located on a Raw Device Mapping (RDM) device, only follow the procedure for the VM's respective operating system. No further action will be required, since RDMs are not under hypervisor control.
Datastores (VMFS)

Similar to deleting and reclaiming space for files located on a VMDK, deleting Virtual Machines and the associated files stored on a datastore will not automatically reclaim the storage on that datastore. This can be done by using the "dd" command.

Using the "dd" command

The procedure to write zeroes to the unused space on a UNIX filesystem is done by using the "dd" command. This will create a temporary file that writes zeroes until it fully consumes all available disk space; the temporary file is then immediately deleted.

# dd if=/dev/zero of=<path to Datastore>/zeroes bs=102400
# rm <path to Datastore>/zeroes

Note: It is critical to understand that this procedure will temporarily completely fill the Datastore filesystem. Any Virtual Machines with VMDKs on the datastore using thin LUNs may experience out-of-space errors during this time. It is recommended that all Virtual Machines associated with that datastore be powered down prior to zeroing the Datastore filesystem.
Chapter 4  Generic UNIX / Linux

This chapter presents the following topics:

UNIX / Linux Filesystem Reclaim
The "dd" Command
The "mount -o discard" Command
The "fstrim" Command

UNIX / Linux Filesystem Reclaim

Deleting files on UNIX or Linux filesystems does not automatically zero out the data. Only the pointer in the filesystem header is removed, leaving the data still intact on the disk. There are a few ways to resolve this issue:

1. Using the "dd" command
2. Using the "mount -o discard" option
3. Using a cron job to run "fstrim" at a scheduled interval
The "dd" Command

The procedure to write zeroes to unused space on UNIX is to use the dd command to create a zero-filled file that fully consumes all available disk space, then immediately delete the file.

# dd if=/dev/zero of=<path to filesystem>/zeroes bs=102400
# rm <path to filesystem>/zeroes

Note: It is critical to understand that this procedure will temporarily completely fill the filesystem. Any applications on the host that try to write to the filesystem may receive out-of-space errors during this time. It is recommended that all applications associated with that filesystem be shut down prior to zeroing the filesystem.
The "mount -o discard" Command

The "mount -o discard" option allows you to automatically TRIM deleted files on an EXT4 file system. There is, however, a noticeable performance penalty in sending TRIM commands after every delete, which can make deletion much slower than usual on some drives.

To enable automatic TRIM on a mount point, it must be mounted with the discard option in fstab. Follow these steps:

1) Back up your fstab, then open it for editing:

# cp /etc/fstab ~/fstab-<date>
# vi /etc/fstab

2) Add discard to the fstab options for each drive or mount point:

/dev/sdb1 /app1 ext4 discard,errors=remount-ro 0 1

3) Save and exit fstab, then reboot. Automatic TRIM is now enabled.
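To confirm that the option took effect after the reboot, check the mount options of the filesystem:

# mount | grep discard
/dev/sdb1 on /app1 type ext4 (rw,discard,errors=remount-ro)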
The "fstrim" Command

The "fstrim" command is used on a mounted filesystem to discard (or "trim") blocks which are not in use by the filesystem. This is extremely useful for thinly-provisioned storage where you need to discard all unused blocks in the filesystem.
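Before setting up a schedule, a one-off pass can be run manually against a single mount point; the mount point below is an example:

# fstrim -v /opt/app1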
Scheduling "fstrim" for most storage volumes should start with a trimming frequency of once a week. Once a baseline for behavior has been established, increase or decrease the frequency to meet your needs. To schedule "fstrim", follow these steps:

1) Create a CRON job to run once a week:

vi /etc/cron.weekly/fstrim

2) Add the following to the fstrim file:

#! /bin/sh
#
# By default we assume only / is on a "Thin" device.
# You can add more "Thin" mount points, separated by spaces.
# Make sure all mount points are within the quotes.
# For example:
# THIN_MOUNT_POINTS='/ /boot /home /opt/app1 /opt/app2'
THIN_MOUNT_POINTS='/'
for mount_point in $THIN_MOUNT_POINTS
do
    fstrim $mount_point
done

3) Make the script executable:

sudo chmod +x /etc/cron.weekly/fstrim

4) And finally, run it:

sudo /etc/cron.weekly/fstrim

Note: Trim has been defined as a non-queued command by the T13 subcommittee, and consequently incurs a massive execution penalty if used after each filesystem delete command. The non-queued nature of the command requires the driver to first finish any operation, issue the trim command, then resume all normal commands. For this reason Trim can take a lot of time to complete and may even trigger some garbage collection depending on your back-end storage array.
Chapter 5  Microsoft Windows

This chapter presents the following topics:

Thin Provisioning LUN Identification
Storage Space Reclamation
Using the UNMAP Command
UNMAP Requests from Hyper-V
Using the sdelete.exe Command
Scripting with PowerShell

Thin Provisioning LUN Identification

With Microsoft Server 2012, thin provisioning is an end-to-end storage provisioning solution. Thin provisioning features included with Microsoft's Server 2012 include logical unit (LUN) identification, threshold notification, handling of resource exhaustion, and space reclamation.

Windows Server 2012 has adopted the T10 SCSI Block Command 3 (SBC3) standard specification for identifying thinly provisioned LUNs. During the initial target device enumeration, the Windows Server will gather the backend storage device properties to determine the provisioning type and the UNMAP and TRIM capabilities.

Note: The storage device reports its provisioning type and UNMAP and TRIM capability according to the SBC3 specification.
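As a quick check on a given server, whether Windows issues delete notifications (TRIM/UNMAP) at all is controlled by the DisableDeleteNotify setting; a value of 0 means delete notifications are enabled:

C:\> fsutil behavior query DisableDeleteNotify
DisableDeleteNotify = 0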
Storage Space Reclamation

Space reclamation can be triggered by file deletion, a file system level trim, or a storage optimization operation. File system level trim is enabled for a storage device designed to perform "read return zero" after a trim or an unmap operation.
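On Windows Server 2012, one way to trigger the storage optimization path by hand is the retrim option of the built-in defrag utility; the drive letter below is an example:

C:\> defrag E: /L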
Using the UNMAP Command

When a large file is deleted from the file system or a file system level trim is triggered, Windows Server converts the file delete or trim notifications into corresponding UNMAP requests. The storage port driver stack translates the UNMAP request into a SCSI UNMAP command or an ATA TRIM command according to the protocol type of the storage device. During the storage device enumeration, the Windows storage stack gathers information about whether the storage device supports UNMAP or TRIM commands. The UNMAP request is only sent to the storage device if the device has SCSI UNMAP or ATA TRIM capability.

Note: Windows Server does not adopt the T10 SCSI WRITE SAME command sets.
UNMAP Requests from Hyper-V

During virtual machine (VM) creation, a Hyper-V host will send an inquiry about whether the storage device where the virtual hard disk (VHD) resides supports UNMAP or TRIM commands. When a large file is deleted from the file system of a VM guest operating system, the guest operating system sends a file delete request to the virtual machine's virtual hard disk (VHD) or VHD file. The VM's VHD or VHD file tunnels the SCSI UNMAP request to the class driver stack of the Windows Hyper-V host, as follows:

- If the VM has a VHD, the VHD converts SCSI UNMAP or ATA TRIM commands into a Data Set Management I/O control code (IOCTL DSM) TRIM request, and then sends the request to the host storage device.
- If the VM has a VHD file, the VHD file system converts SCSI UNMAP or ATA TRIM commands into file system-level trim requests, and then sends the requests to the host operating system.

Note: Windows Hyper-V also supports IOCTL DSM TRIM calls from the guest operating system.
Using the sdelete.exe Command

Deleting files on older versions of Microsoft Server does not automatically zero out the data. Only the pointer in the filesystem header is removed, leaving the data still intact on the disk.

The procedure to write zeroes to unused space on Microsoft Windows is to use the sdelete.exe command. The sdelete.exe command provides a '-z' flag to fill the unused filesystem space with zeroes.

Downloading sdelete.exe

Sdelete.exe is available from Microsoft TechNet as part of the Windows SysInternals tools. It may be downloaded from:

https://technet.microsoft.com/en-us/sysinternals/bb897443.aspx

Using sdelete.exe

Usage: sdelete [-p passes] [-s] [-q] <file or directory> ...
       sdelete [-p passes] [-z|-c] [drive letter] ...

   -a          Remove Read-Only attribute
   -c          Clean free space
   -p passes   Specifies number of overwrite passes (default=1)
   -q          Don't print errors (Quiet)
   -s or -r    Recurse subdirectories
   -z          Zero free space (good for virtual disk optimization)

The sdelete.exe command is used against a Windows filesystem as follows:

C:\> sdelete.exe -z <drive>
Scripting with PowerShell

The following link is for a PowerShell script that will create a file called ThinSAN.tmp on the specified volume, then fill that volume with zeroes, leaving 5% free space. This allows a thin-provisioned storage array to mark that drive space as unused and reclaim the space on the physical disks. Here is the link:

http://blog.whatsupduck.net/2012/03/powershell-alternative-to-sdelete.html
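For illustration only, here is a minimal PowerShell sketch of the same approach. This is not the linked script itself; the volume, temporary file name, and the 5% margin are assumptions carried over from the description above.

# Sketch: fill free space with zeroes, leaving roughly 5% headroom, then delete the file.
$Volume   = "E:"
$TempFile = Join-Path $Volume "ThinSAN.tmp"
$Free     = (Get-PSDrive ($Volume.TrimEnd(':'))).Free
$Target   = [long]($Free * 0.95)

$Buffer = New-Object byte[] (1MB)        # a newly allocated buffer is already all zeroes
$Stream = [System.IO.File]::OpenWrite($TempFile)
try {
    $Written = 0
    while ($Written -lt $Target) {
        $Stream.Write($Buffer, 0, $Buffer.Length)
        $Written += $Buffer.Length
    }
}
finally {
    $Stream.Close()
    Remove-Item $TempFile                # delete the zero file to free the space again
}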
Appendix A  VMware ESXi UNMAP Examples

This appendix presents the following topics:

Space Reclamation with VMware ESXi

Space Reclamation with VMware ESXi

The environment used for this example consists of a SuSE Linux Virtual Machine running on ESXi 5.5. A 30 GB datastore (SuSE_OS_Datastore) has been created and the guest operating system installed on it. The datastore is a VPLEX LUN with XtremIO storage backing.

vmkfstools --punchzero

In this example, a large file was deleted in a guest VM. The operating system immediately makes the space available, but the Datastore and Array layers are not aware that this space is now available.

Note: The host commands demonstrated here are fully explained in the Generic UNIX / Linux chapter. Likewise for Microsoft Windows Servers and Hyper-V VMs.

Prior to deleting any files, the ESXi server reports that there is 11.5 GB used on the SuSE_OS_LUN volume. This volume is presented to the host as an RDM.

Figure 4 - LUN Utilization Prior to File Deletion

At this point we delete the desired file(s) on the virtual machine.

Figure 5 - Deleting Files on the Guest Host
After the deletion, notice that the ESXi server still reports that 11.5 GB is being utilized. This means that no space has been reclaimed by the array.

Figure 6 - LUN Utilization After File Deletion

Since this VM is running a version of the SuSE operating system that does not currently support the TRIM and UNMAP operations, we will use the "dd" command to write zeroes over the unclaimed disk space.

Figure 7 - Using "dd" to Fill the Free Disk Space with Zeroes

Since our back-end array is an XtremIO, which automatically de-dups zeroes, the array will automatically identify and optimize the freed space. In this example, we have cleared the 4.5 GB file and the XtremIO array has de-duplicated the zeroes, immediately freeing up that space without any further actions.

Figure 8 - LUN Utilization after Space Reclamation

At this point, the RDM disk has been zeroed and the space has been automatically de-duped by the XtremIO back-end array. However, the VMDK file that is stored on the ESXi datastore has been inflated by both the original file and the temporary zeroes file that was created during the thinning process. In order to resolve this discrepancy we will also need to run the "vmkfstools --punchzero" command.

Figure 9 - Inflated VMDK Size prior to vmkfstools

This 16 GB includes the deleted file plus the zeroes that were written over the deleted file. Follow these steps:

1. Shut down the guest VM using the VMDK to release any file/device locks.

2. Run vmkfstools --punchzero <VMDK Filename> on the VMDK.

Figure 10 - Example of Running "vmkfstools --punchzero"

3. Verify the VMDK has been resized in vCenter.

Figure 11 - Deflated VMDK Size after running vmkfstools
Appendix B  Windows RDM Example

This appendix presents the following topics:

Space Reclamation with Microsoft Windows
Space Reclamation with Microsoft Windows

The environment used for this example consists of a Windows Server Virtual Machine running on a 40 GB VMDK file located on the WINServ_Datastore. Our test will consist of copying an ISO file to a 10 GB RDM allocated from a VPLEX that is backed by XtremIO.

Note: Windows Server 2012 does space reclamation by default, and sdelete.exe is not needed to deflate the RDM devices.

sdelete.exe

In this example, a large file was copied to a Raw Device Mapped (RDM) volume attached to a Guest VM. The operating system does not automatically make the space available, so it will require the use of the sdelete.exe command to free that unused space. Also needed will be the vmkfstools command on ESXi to free the inflated files on the datastore.

Prior to deleting any files, the Windows Server reports that there is 4.49 GB used on the WIN2012_RDM-1 volume.

Figure 12 - File Size Prior to Running sdelete.exe

As previously discussed, file deletion on older versions of Microsoft Server does not automatically zero out the data. Only the pointer in the filesystem header is removed, leaving the data still intact on the disk.

This is where the sdelete.exe command provides a quick and easy way to fill unused filesystem space with zeroes and ultimately facilitate space reclamation.

Figure 13 - Example of running sdelete.exe

After deleting the file(s), the Windows Server reports that there is 145 MB used on the WIN2012_RDM-1 volume.

Figure 14 - File size after running sdelete.exe

At this point, the RDM disk has been zeroed and the space has been reclaimed by the back-end storage array. However, the VMDK file that is stored on the ESXi datastore has been inflated by both the original file and the temporary zeroes file that was created during the thinning process. In order to resolve this discrepancy we will also need to run the "vmkfstools --punchzero" command as in the previous example in Appendix A.
Appendix C  Linux with EMC VPLEX and VNX

This appendix presents the following topics:

Space Reclamation through VPLEX Mobility Jobs
Space Reclamation through VPLEX Mobility Jobs

How Data Mobility Works

VPLEX Data Mobility, along with VPLEX's thin-awareness, can be leveraged to re-thin a device that is backed by a storage array that does not automatically deduplicate zeroes written to a thin disk. The environment in this example is a Linux virtual machine that is running on a datastore on a VPLEX Virtual Volume backed by an EMC VNX thin LUN.

In this example, a large file was deleted in the guest VM. The operating system immediately makes the space available, but the space is not returned to the thin pool in the storage array.

Note: This example is used to demonstrate a thin-to-thin migration in order to re-thin a device. It can also be used thick-to-thin to convert a thickly provisioned device to a thin device.

Prior to beginning, make note of the consumed space on the VNX LUN that provides storage to the VPLEX Virtual Volume. A 4.5 GB file has been copied to the filesystem and approximately 14.7 GB of capacity has been consumed.

Figure 15 - SuSE_OS_LUN_0 Consumed Capacity

The file is then deleted and the filesystem is zeroed per the Generic UNIX / Linux dd procedure.

Figure 16 - Deleting a file and zeroing the filesystem

After zeroing, no change in Consumed Capacity is observed on the VNX LUN. This is expected because the VNX will not automatically deduplicate zeroes.

Figure 17 - SuSE_OS_LUN_0 Consumed Capacity unchanged
VPLEX Data Mobility to a new thin device will facilitate reclaiming the zeroed space. VPLEX's thin-awareness will not write the zeroes to the target device, thereby freeing the space on the LUN.

VPLEX Data Mobility

In this example, a thin-to-thin VPLEX data mobility job will be completed, made possible by the "Thin Rebuild" attribute having been set during the claiming process.

Figure 18 - Setting the Thin Rebuild Attribute

Under the Data Mobility tab, select Move Data within Cluster to set up the Data Mobility job.

Figure 19 - VPLEX Data Mobility

Then click on the Create Data Mobility Jobs button.

Figure 20 - Create Device Mobility Job

After selecting the local cluster to create the Data Mobility job on, select the Virtual Volume that is being used by the operating system filesystem. In this case, it is the SuSE_OS_vol Virtual Volume.

Figure 21 - Select Virtual Volume

Then select the backing Device and create the Source-Target mapping by identifying an unused Device that is backed by a new thin LUN on the array.

Figure 22 - Create Source / Target Mobility mapping

Start the Data Mobility job and commit it upon completion. The data has been transferred to the new LUN, named SuSE_OS_LUN_1. The space that was previously consumed by the 4.5 GB file has been reclaimed and the new LUN is only consuming approximately 10.2 GB.

Figure 23 - SuSE_OS_LUN_1 Consumed Capacity
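The same device migration can also be driven from the VPLEX CLI with the dm migration commands. This is a hedged sketch only: the job name follows this example, the device names are placeholders, and exact option spellings (and whether commit requires a force flag) may vary by GeoSynchrony release.

VPlexcli:/> dm migration start --name rethin_job --from <source-device> --to <target-device>
VPlexcli:/> dm migration commit --migrations rethin_job
VPlexcli:/> dm migration clean --migrations rethin_job
VPlexcli:/> dm migration remove --migrations rethin_job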
At this point, the original LUN can be removed from VPLEX and deleted from the array. This will return all of its consumed space to the thin pool to be available for future use.

VPLEX Data Mobility in coordination with operating system based utilities offers a seamless method for re-thinning a LUN after freeing space on the host operating system.