The Elements You Need To Make Scale-Out NAS For Hybrid Cloud A Reality

A Next-Gen Approach for a Scale-Out Future


Most of the world’s data centers today are still using vertical scaling solutions for storage, but the modern data deluge is overwhelming this legacy architecture. The dilemma has sent organizations in search of alternatives that enable them to scale cheaply and efficiently in order to remain competitive. With the advance of software-defined storage, the use of scale-out storage solutions in data centers is expanding.

Another relatively recent advance, the hybrid cloud, gives organizations maximum business flexibility from cloud architectures, helping them meet budget and performance goals at the same time. In a nutshell, a hybrid cloud is a computing environment that mixes on-premises infrastructure, private cloud and public cloud services, with orchestration among the platforms.

The novelty of the hybrid cloud creates an information gap regarding the benefits and challenges associated with deploying a hybrid cloud architecture. This article offers a number of design elements that you can use to ensure your hybrid cloud delivers the performance, flexibility, and scalability you need.

NAS: The Secret Sauce

In order for the hybrid cloud storage model described here to function, it must have scale-out Network-Attached Storage (NAS) as its foundation. Since hybrid cloud architectures are relatively new to the market (and even newer in full-scale deployment), many organizations are unaware of how important consistency is in a scale-out NAS. Many environments are only eventually consistent, meaning that files written to one node are not immediately accessible from other nodes. This can be caused by an improper implementation of the protocols, or by insufficiently tight integration with the virtual file system. The opposite is strict consistency: files are accessible from all nodes at the same time. Compliant protocol implementations and tight integration with the virtual file system are a good recipe for success.
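As a rough illustration, strict consistency can be checked with a simple write-then-read probe across nodes. The Python sketch below assumes two hypothetical mount points, /mnt/node1/share and /mnt/node2/share, that expose the same share through different nodes; the paths and behavior are illustrative only, not tied to any particular product.

```python
import os

# Hypothetical mount points: the same share exported by two different nodes.
NODE_A = "/mnt/node1/share"
NODE_B = "/mnt/node2/share"

def check_cross_node_visibility(filename="consistency_probe.txt"):
    """Write through node A and immediately read through node B.

    On a strictly consistent scale-out NAS the file is visible at once;
    on an eventually consistent system this check may fail or need retries.
    """
    path_a = os.path.join(NODE_A, filename)
    path_b = os.path.join(NODE_B, filename)

    with open(path_a, "w") as f:
        f.write("probe")
        f.flush()
        os.fsync(f.fileno())  # make sure the write leaves the client-side cache

    visible = os.path.exists(path_b)  # no sleep: strict consistency means "now"
    print("visible on second node immediately:", visible)
    return visible

if __name__ == "__main__":
    check_cross_node_visibility()
```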

With a scale-out NAS approach, the ideal hybrid cloud architecture is based on three layers. Each server in the cluster runs a software stack built on these layers (a minimal sketch follows the list below).

  1. The persistent storage layer. Based on an object store, this layer provides advantages like extreme scalability. However, the layer must be strictly consistent in itself.
  2. The virtual file system layer. This is the heart of any scale-out NAS, and it is where features like caching, locking, tiering, quotas and snapshots are handled.
  3. Protocols and integrations. This third layer contains protocols such as SMB and NFS, as well as integration points for hypervisors, for example.
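
The sketch below is one minimal way to picture this three-layer stack in Python; the class names and methods are illustrative stand-ins under these assumptions, not an actual product API.

```python
# Layer 1: persistent storage -- a strictly consistent object store (illustrative stub).
class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data      # a real store would replicate or erasure-code here

    def get(self, key: str) -> bytes:
        return self._objects[key]

# Layer 2: the virtual file system -- caching, locking, tiering, quotas, snapshots.
class VirtualFileSystem:
    def __init__(self, store: ObjectStore):
        self.store = store
        self.cache = {}

    def write(self, path: str, data: bytes) -> None:
        self.cache[path] = data        # land the write on fast media first
        self.store.put(path, data)     # then persist it in the object store

    def read(self, path: str) -> bytes:
        return self.cache.get(path) or self.store.get(path)

# Layer 3: protocols and integrations -- SMB, NFS, hypervisor hooks (stubbed).
class ProtocolFrontend:
    def __init__(self, vfs: VirtualFileSystem, protocol: str):
        self.vfs = vfs
        self.protocol = protocol       # e.g. "SMB" or "NFS"

    def handle_write(self, path: str, data: bytes) -> None:
        self.vfs.write(path, data)

# Every server in the cluster runs the same symmetric stack.
store = ObjectStore()
vfs = VirtualFileSystem(store)
smb = ProtocolFrontend(vfs, "SMB")
nfs = ProtocolFrontend(vfs, "NFS")
smb.handle_write("/projects/plan.txt", b"hello")
print(vfs.read("/projects/plan.txt"))  # same data, regardless of protocol front end
```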

It is very important to keep the architecture symmetrical and clean. If you manage to do that, many future architectural challenges will be much easier to solve.

The first layer merits closer examination. Because the storage layer is based on an object store, we can now easily scale our storage solution. With a clean and symmetrical architecture, we can reach exabytes of data and trillions of files.

As the storage layer is responsible for ensuring redundancy, a fast and effective self-healing mechanism is needed. To keep the data footprint low in the data center, the storage layer needs to support different file encodings. Some are good for performance and some for reducing the footprint.
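
To make the encoding trade-off concrete, the sketch below compares the raw footprint of two generic encodings, 3-way replication and 8+2 erasure coding. These ratios are common textbook examples used as assumptions here, not the specific encodings of any particular solution.

```python
def footprint(data_tb: float, encoding: str) -> float:
    """Raw capacity needed for a given logical data set under two example encodings."""
    if encoding == "replica-3":
        return data_tb * 3.0             # fast reads and rebuilds, large footprint
    if encoding == "ec-8+2":
        return data_tb * (8 + 2) / 8.0   # 1.25x footprint, more CPU spent on rebuilds
    raise ValueError(f"unknown encoding: {encoding}")

for enc in ("replica-3", "ec-8+2"):
    print(enc, footprint(100.0, enc), "TB raw needed for 100 TB of data")
```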

Scale-out NAS as a VM

Hybrid cloud storage naturally requires support for hypervisors, so the scale-out NAS also needs to be able to run in a hyper-converged configuration. Being software-defined makes sense here.

In the absence of external storage, the scale-out NAS must be able to run as a virtual machine and make use of the hypervisor host’s physical resources. Guest virtual machine (VM) images and data are stored in the virtual file system that the scale-out NAS provides. The guest VMs can also use this file system to share files among themselves, making it a good fit for VDI environments as well.

It is important in a virtual environment to support multiple protocols, because many different applications run there, each with its own protocol needs. By supporting many protocols, we keep the architecture flat and, to some extent, we can share data between applications that speak different protocols.

A software-defined storage solution built this way can run on hardware that is both fast and energy-efficient. It allows us to start small and scale out, supports bare-metal as well as virtual environments, and supports all major protocols.

Managing Metadata

Metadata makes up a significant piece of the virtual file system. Metadata are pieces of information that describe the structure of the file system; for example, one metadata file can contain information about the files and folders inside a single folder. That means there is one metadata file for each folder in the virtual file system, so as the file system grows, the number of metadata files grows with it.

Though centralized storage of metadata has its appeal, it is not suitable for scale-out situations: storing all metadata on a single server leads to poor scalability, poor performance, and poor availability. Since the storage layer is based on an object store, a better place to store the metadata is in the object store itself, particularly when large quantities of metadata are involved. This ensures good scalability, good performance, and good availability.
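
A minimal sketch of that idea, using a plain dictionary as an assumed stand-in for the object store: each folder gets its own metadata object, keyed by the folder path, so metadata is spread across the cluster rather than concentrated on one metadata server.

```python
import json

# Stand-in for the strictly consistent object store from layer one.
object_store: dict[str, bytes] = {}

def write_folder_metadata(folder: str, entries: dict) -> None:
    """Store one metadata object per folder, keyed by the folder path.

    Because these objects live in the distributed object store rather than on
    a single metadata server, they scale with the cluster, not with one box.
    """
    object_store[f"meta:{folder}"] = json.dumps(entries).encode()

def read_folder_metadata(folder: str) -> dict:
    return json.loads(object_store[f"meta:{folder}"])

write_folder_metadata("/projects", {
    "plan.txt": {"type": "file", "size_bytes": 5120},
    "designs":  {"type": "folder"},
})
print(read_folder_metadata("/projects"))
```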

Performance and Access

Software-defined storage solutions need caching devices to increase performance. From a storage solution perspective, both speed and size matter, as does price; finding the sweet spot is important. For an SDS solution, it is also important to protect the data at a higher level by replicating it to another node before destaging it to the storage layer.
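
The sketch below illustrates that write path under stated assumptions: a write lands in one node’s cache, is synchronously copied to a peer node’s cache, and is only later destaged to the storage layer. The class and variable names are hypothetical.

```python
class CacheNode:
    """A node's write cache (think fast SSD), with a peer for synchronous replication."""

    def __init__(self, name: str, backing_store: dict):
        self.name = name
        self.cache = {}
        self.peer = None
        self.backing_store = backing_store   # stands in for the object-store layer

    def write(self, key: str, data: bytes) -> None:
        self.cache[key] = data
        if self.peer is not None:
            self.peer.cache[key] = data      # replicate to the peer before acknowledging
        # Only now is the write acknowledged; it survives the loss of one cache device.

    def destage(self) -> None:
        """Flush cached writes down to the persistent storage layer."""
        for key, data in self.cache.items():
            self.backing_store[key] = data
        self.cache.clear()

store = {}
node_a, node_b = CacheNode("a", store), CacheNode("b", store)
node_a.peer = node_b
node_a.write("/vmdata/disk1.img", b"...")
node_a.destage()
print(list(store))
```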

As the storage solution grows in both capacity and features, it becomes more important, especially in virtual or cloud environments, to support multiple file systems and domains. Different applications and use cases prefer different protocols, and sometimes it is also necessary to access the same data across different protocols.

A File System for the Cloud

In an organization with multiple sites, each site has its own independent file system. A likely scenario is that different offices have a need for both a private area and an area that they share with other branches. So only parts of the file system will be shared with others.

Flexibility is needed to scale the file system outside the four walls of the office, so the approach here is to select a section of one site’s file system and let other sites mount it at any given point in their own file systems, making sure that synchronization happens at the file system level so there is a consistent view of the file system across sites. Being able to specify different file encodings at different sites is useful, for example, if one site is used as a backup target.
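
One way to picture this is the hedged sketch below, with made-up site names, mount points and encodings: each site keeps its own file system, one site exports a chosen subtree, and other sites mount that subtree wherever it fits in their own namespaces.

```python
# Hypothetical multi-site layout: the London office shares its "/shared/projects"
# subtree with the other branches. All names and encodings are illustrative only.
sites = {
    "london": {
        "exports": {"/shared/projects": {"encoding": "replica-3"}},
        "mounts": {},
    },
    "stockholm": {
        "exports": {},
        # Mount London's shared subtree at a local point of Stockholm's choosing.
        "mounts": {"/remote/london-projects": "london:/shared/projects"},
    },
    "backup-site": {
        "exports": {},
        # Same subtree, but stored with a footprint-friendly encoding for backup.
        "mounts": {"/backup/london-projects": "london:/shared/projects"},
        "encoding_override": "ec-8+2",
    },
}

def resolve(site: str, path: str) -> str:
    """Map a local path to the exporting site's subtree if it falls under a mount."""
    for mount_point, target in sites[site]["mounts"].items():
        if path.startswith(mount_point):
            return target + path[len(mount_point):]
    return f"{site}:{path}"

print(resolve("stockholm", "/remote/london-projects/plan.txt"))
# -> london:/shared/projects/plan.txt  (synchronized at the file-system level)
```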

The Scale-out Future

The elements discussed above come together to create a next-generation hybrid cloud system. A system configured in this way offers linear scaling up to exabytes of data while remaining clean and efficient. With a single file system that spans all servers, there are multiple entry points, which helps eliminate performance bottlenecks. Native support for protocols is included, as is the flash needed to increase performance. Nodes can be added for flexible scale-out as well. This approach addresses all the points needed for a scale-out NAS, providing better control and easier, more rapid data center expansion.

About the Author

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions intended to store huge data sets cost-effectively. From 2004 to 2010 he worked in this field at Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture for several projects at Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.
