Space Bounds for Reliable Storage: Fundamental Limits of Coding

Alexander Spiegelman, Yuval Cassuto, Gregory Chockler, Idit Keidar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



We study the inherent space requirements of reliable storage algorithms in asynchronous distributed systems. A number of recent works have used codes in order to achieve a better storage cost than the well-known replication approach. However, a closer look reveals that they incur extra costs in certain scenarios. Specifically, if multiple clients access the storage concurrently, then existing asynchronous code-based algorithms may store a number of copies of the data that grows linearly with the number of concurrent clients. We prove here that this is inherent. Given three parameters, (1) the data size – D bits, (2) the concurrency level – c, and (3) the number of storage node failures that need to be tolerated – f, we show a lower bound of Ω(min(f, c) · D) bits on the space complexity of asynchronous distributed storage algorithms. Intuitively, this implies that the asymptotic storage cost is either as high as with replication, namely O(fD), or as high under concurrency as with the aforementioned code-based algorithms, i.e., O(cD).
We further present a technique for combining erasure codes with replication so as to obtain the best of both. We present an adaptive f-tolerant storage algorithm whose storage cost is O(min(f, c) · D). Together, our results show that the space complexity of providing reliable storage in asynchronous distributed systems is Θ(min(f, c) · D).
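The three asymptotic costs compared in the abstract can be illustrated with a small numerical sketch. This is not the paper's algorithm: the functions, the fragment count k, and the concrete constants are illustrative assumptions chosen only to make the O(fD), O(cD), and min(f, c) · D growth rates visible.

```python
# Illustrative sketch (not the paper's algorithm): compare the asymptotic
# storage costs, in bits, discussed in the abstract. Constants are assumed.

def replication_cost(D: int, f: int) -> int:
    # Replication tolerates f storage-node failures by keeping f + 1 full
    # copies of the D-bit data: O(f * D) space, independent of concurrency.
    return (f + 1) * D

def code_based_cost(D: int, f: int, c: int, k: int) -> int:
    # An erasure code splits the data into k fragments of D/k bits and stores
    # k + f coded fragments. Existing asynchronous code-based algorithms may
    # retain fragments for each of c concurrent writes: O(c * D) space.
    return c * (k + f) * (D // k)

def adaptive_cost(D: int, f: int, c: int) -> int:
    # The adaptive algorithm combining codes and replication matches the
    # lower bound: Theta(min(f, c) * D) space.
    return min(f, c) * D

if __name__ == "__main__":
    D, f, k = 1_000_000, 10, 5  # assumed example parameters
    for c in (1, 5, 50):
        print(f"c={c:2d}  replication={replication_cost(D, f):>12,}  "
              f"code-based={code_based_cost(D, f, c, k):>12,}  "
              f"adaptive={adaptive_cost(D, f, c):>12,}")
```

With low concurrency (c < f) the adaptive cost tracks the code-based algorithms, and with high concurrency (c > f) it is capped at the replication cost, which is the "best of both" behavior the abstract describes.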
Original language: English
Title of host publication: Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing
Number of pages: 10
ISBN (Print): 978-1-4503-3964-3
Publication status: Published - 25 Jul 2016
