
What is software-defined storage?

Tags: Storage, UAS



What is software-defined storage, and how do NAS and SAN appliances compare to software-defined storage?

Large-scale storage presents an inherent scalability challenge: how do you connect many disk drives to the producers and consumers of data while ensuring performance and durability, without blowing out bandwidth, capacity or budget? The most common way of meeting these requirements is to provide remote filesystems or block storage through NAS (Network Attached Storage) and SAN (Storage Area Network) appliances.

NAS and SAN appliances are typically built with proprietary hardware and powered by proprietary software that serves up the relevant storage protocols, such as iSCSI or NFS; the software internally handles replication, rebalancing and reconstruction. Their design normally allows for some scalability by adding storage elements, but the number of nodes involved is typically low. They are generally made fault-tolerant through high internal redundancy: RAID, standby power supplies, and multiple network and disk bus interfaces.

Software-defined storage (SDS) embodies a different philosophy: the storage service is actually built from a cluster of commodity-hardware-based server nodes, some of which act as proxies, managers or gateways, and some of which act as storage nodes. Each storage node is responsible for storing a subset of the overall data. This allows additional nodes to be added to provide greater storage capacity, higher availability or increased throughput; clusters of dozens and even hundreds of nodes are possible. The software powering the cluster manages data placement and may offer enhanced capabilities to clients, including novel interfaces such as HTTP REST APIs for object storage.
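
To make that last point concrete, here is a minimal sketch of how a client might store and retrieve an object through an HTTP REST API. The endpoint, bucket and object names are hypothetical placeholders; a real SDS gateway (for example a Ceph RADOS Gateway or OpenStack Swift endpoint) would also require authentication and would usually be accessed through an S3- or Swift-compatible client library.

```python
# Minimal sketch of object storage over an HTTP REST API (hypothetical endpoint).
# Real deployments add authentication and use S3/Swift-compatible client libraries.
import requests

ENDPOINT = "http://sds-gateway.example.com:8080"  # hypothetical gateway node
BUCKET = "backups"                                # hypothetical bucket/container

# Store an object: the gateway decides which storage nodes hold the data.
data = b"nightly database dump"
put = requests.put(f"{ENDPOINT}/{BUCKET}/db-dump-001", data=data)
put.raise_for_status()

# Retrieve it later: the client never needs to know where the replicas live.
get = requests.get(f"{ENDPOINT}/{BUCKET}/db-dump-001")
get.raise_for_status()
assert get.content == data
```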

Fault-tolerance in SDS is implemented by assuming that the failure domain is an entire node, and that no single node is essential. This dramatically reduces the hardware complexity (and cost) of the individual nodes, but also forces a horizontally scaling software architecture. Each node hosting a component of the service is designed to share state and data with its peers, allowing the resulting cluster to deliver high throughput while surviving the failure of individual nodes. A toy illustration of node-level replica placement follows below.
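
The sketch below is an illustration of that idea, not any particular product's algorithm: each object key is deterministically hashed to several distinct nodes, so losing any one node still leaves surviving replicas. Real systems such as Ceph use far more sophisticated placement (CRUSH maps, weights, and racks as failure domains); the node names and replica count here are assumptions for the example.

```python
# Illustrative sketch of node-level replica placement (not a real SDS algorithm).
import hashlib

NODES = [f"storage-node-{i}" for i in range(6)]  # hypothetical cluster members
REPLICAS = 3                                     # copies kept per object


def place(key, nodes=NODES, replicas=REPLICAS):
    """Deterministically pick `replicas` distinct nodes for an object key."""
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256(f"{key}:{n}".encode()).hexdigest(),
    )
    return ranked[:replicas]


# Every object lands on three different nodes...
locations = place("db-dump-001")

# ...so the data survives the loss of any single node in its placement set:
survivors = [n for n in NODES if n != locations[0]]
assert set(place("db-dump-001", nodes=survivors)) & set(locations)
```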

This is how giants like Google, Amazon and Baidu have successfully and economically deployed storage and compute for over a decade. Ubuntu Advantage Storage brings customers solutions with these same characteristics, built on solid open source technology and ready to deploy on your hardware today.

Learn more about Ubuntu Advantage Storage

