
Simba: Tunable End-to-End Data
Consistency for Mobile Apps
Dorian Perkins∗†, Nitin Agrawal†, Akshat Aranya†, Curtis Yu∗, Younghwan Go⋆†, Harsha Madhyastha‡, and Cristian Ungureanu†
NEC Labs†, UC Riverside∗, KAIST⋆, U Michigan‡

Motivation
§  Wide-spread use of data-centric mobile apps
§  Data consistency is a primary requirement
§  App devs tasked with ensuring consistency of user's data
§  Difficult to provide reliability & consistency!
§  Failures, Conflicts, Connectivity, Consistency, …
§  Current solutions are inflexible
§  Rolling your own service is difficult
§  How to transparently/efficiently handle data sync?
§  How to provide a tabular + object data model?
§  How to enable useful consistency semantics?

Mobile App Consistency Study
We studied 23 popular mobile apps and found that half of them exhibit undesirable behavior!

Simba: Cloud Infrastructure for Mobile Apps

Simba Table: Data Sync Abstraction
§  Unified tabular + object model
§  Atomicity: Row-level
§  Consistency: Table-level (Strong, Causal, or Eventual)
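Below is a minimal sketch of what the unified tabular + object model looks like to an app, assuming a hypothetical client interface; the interface name, method names, and signatures are our illustration, not the actual Simba sAPI. A row carries both column values and objects, written atomically, and the consistency scheme is chosen per table.

```java
import java.io.InputStream;
import java.util.List;
import java.util.Map;

// Illustrative sketch of a client-side view of a Simba table.
// Names and signatures here are assumptions, not the real sAPI.
public interface SimbaTableClient {

    // Per-table consistency choice (Strong, Causal, or Eventual).
    enum Consistency { STRONG, CAUSAL, EVENTUAL }

    // Create a sync-enabled table with a schema and a consistency scheme.
    void createTable(String table, String schema, Consistency level);

    // Write column values plus large objects as one row-level atomic update.
    void writeRow(String table, Map<String, Object> columns,
                  Map<String, InputStream> objects);

    // Read back rows matching a simple selection string.
    List<Map<String, Object>> readRows(String table, String selection);
}
```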
Simba Design
§  Logical abstraction
§  Physical layout
Performance & Scalability

[Figure 6: sCloud performance when scaling tables. (a) Table only. (b) Table+object, chunk cache enabled. (c) Table+object, chunk cache disabled.]
[Figure 7: sCloud performance when scaling clients. (a) Per-operation latency. (b) Aggregate node throughput.]
Efficient Syncing via Change-sets
§  Change cache enables sync of only modified row data
§  e.g., an update to a single 64 KiB chunk in a 1 MiB object transfers only that chunk

[Figure: sync latency and data transferred with no cache, key cache, and key+data cache.]
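As a rough sketch of the change-set idea (our illustration, not Simba's actual client code), a writer can track which columns and which fixed-size object chunks a row update touched, so upstream sync ships only the dirty pieces:

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: track dirty columns and dirty 64 KiB object chunks
// for one row, so a sync message carries only what actually changed.
public class ChangeSetBuilder {
    static final int CHUNK_SIZE = 64 * 1024;            // 64 KiB chunks

    private final Map<String, Object> dirtyColumns = new HashMap<>();
    private final BitSet dirtyChunks = new BitSet();     // one bit per chunk

    void columnUpdated(String name, Object value) {
        dirtyColumns.put(name, value);
    }

    void objectWritten(long offset, int length) {
        int first = (int) (offset / CHUNK_SIZE);
        int last  = (int) ((offset + length - 1) / CHUNK_SIZE);
        dirtyChunks.set(first, last + 1);                 // mark touched chunks
    }

    // A 64 KiB write into a 1 MiB object marks 1 of its 16 chunks dirty,
    // so the change-set carries ~64 KiB of object data rather than 1 MiB.
    Map<String, Object> columnsToSync() { return dirtyColumns; }
    BitSet chunksToSync()               { return dirtyChunks; }
}
```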
Simba Client [FAST 2015]
Key Features
§  Simple high-level programming abstractions
§  Transparent handling of data sync & failures
§  End-to-end tunable consistency
§  Scalable architecture
Consistency vs. Performance Trade-off
Setup: 16 Gateways and 16 Stores, 500 ops/sec; 1-second sync period for Causal and Eventual.

[Figure 4: Downstream sync performance.]
[Figure 8: Consistency comparison: end-to-end latency and data transfer for each consistency scheme (Strong, Causal, Eventual).]
§  Stronger consistency increases performance overhead
§  Eventual incurs the lowest data transfer, since last-writer-wins resolution syncs only the latest version after the read period
§  Simba Cloud scales well with increasing tables and clients
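A minimal sketch of how an app might express this trade-off, assuming hypothetical registration calls (the interface, method names, and parameters below are our illustration, not the verbatim sAPI): writes push upstream immediately while reads poll at the 1-second period used in the setup above.

```java
// Hypothetical sketch: register per-table sync preferences.
interface SimbaSyncClient {
    void registerWriteSync(String table, int periodMs);   // upstream push
    void registerReadSync(String table, int periodMs);    // downstream pull
}

public final class SyncSetup {
    public static void configure(SimbaSyncClient simba) {
        // Push local writes upstream as soon as they commit (period 0).
        simba.registerWriteSync("photos", 0);

        // Pull remote changes once per second, matching the 1-second
        // read sync period used in the consistency experiment.
        simba.registerReadSync("photos", 1000);
    }
}
```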
[Figure 5: Upstream sync performance for one Gateway.]
Source Code: https://github.com/SimbaService/Simba
Project Homepage: http://tinyurl.com/SimbaService

Evaluation note: the Store was run in three configurations: (1) no caching, (2) a change cache holding row keys only, and (3) a change cache holding row keys and chunk data, with a 64 KiB chunk size. Read latencies are similar across consistency schemes because reads are local; even with Strong, the local replica is kept up to date and reads do not communicate with the server.
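For illustration only (a sketch under our own assumptions, not the Store's actual implementation), the three configurations above can be thought of as modes of a small change cache keyed by row:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative sketch of a server-side change cache: it remembers which rows
// changed recently and, optionally, the chunk data itself, so downstream sync
// can be answered without reading the backing store.
public class ChangeCache {
    public enum Mode { NONE, KEYS_ONLY, KEYS_AND_DATA }

    private final Mode mode;
    private final Map<String, byte[]> entries = new LinkedHashMap<>();

    public ChangeCache(Mode mode) { this.mode = mode; }

    // Record that a row changed; keep its chunk data only in KEYS_AND_DATA mode.
    public void recordChange(String rowKey, byte[] chunkData) {
        if (mode == Mode.NONE) return;
        entries.put(rowKey, mode == Mode.KEYS_AND_DATA ? chunkData : null);
    }

    // Downstream sync asks whether a row changed and, if cached, for its data.
    public boolean hasChanged(String rowKey) { return entries.containsKey(rowKey); }
    public Optional<byte[]> cachedData(String rowKey) {
        return Optional.ofNullable(entries.get(rowKey));
    }
}
```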
European Conference on Computer Systems (EuroSys), April 21-24, 2015, Bordeaux, France
S
S