How Software-Defined Storage Supports IT Infrastructure Modernization in the Digital Era
Eric Burgener, Research Vice President, IDC
Alan Johnson, Director of Emerging Technologies, Supermicro
Frederik De Schrijver, ActiveScale Product Lead, Quantum Corporation
Steven Umbehocker, Founder & CEO, OSNEXUS
Tobias Flitsch, Director of Product Management, Nebulon
David Larsen, Technical Solutions Specialist, Intel
Paul Evans, Solution Advisor, RozoFS
Other sessions in this series
The Open Storage Summit will commence with keynote presentations from Nutanix, Intel and Supermicro senior executives on the latest software-defined storage technology developments for enterprise, data center, and multi-cloud infrastructure.
Discussion of the enabling technologies that customers running software-defined storage (SDS) are most interested in (Performance)
• New storage technologies: PMM, NVMe, dual-actuator HDDs, NVMe-oF, composable storage, computational storage/smart media, and new offload strategies such as DPUs
• New performance milestones
• Challenges delivering value (rapid ROI)
One of the main challenges of a true scale-out system is scaling the metadata effectively without compromising performance. Traditionally this is solved with a metadata server, which then becomes a performance bottleneck, or with token sharing between nodes, which becomes less effective as the cluster grows. In this session we will present how WekaIO, a fully software-defined data platform, solves this problem, enabling effective scale-out to thousands of servers while actually increasing performance as more servers are added to the cluster. This unique approach also allows for multiple configuration options from the Supermicro product portfolio.
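WekaIO's actual metadata algorithm is not described here; as a minimal sketch of the general idea of spreading metadata ownership across all nodes (rather than a central metadata server), the following uses consistent hashing. Names such as `MetadataRing` and the node labels are illustrative, not WekaIO's.

```python
import hashlib
from bisect import bisect_right

class MetadataRing:
    """Illustrative consistent-hash ring: each path's metadata is owned by
    exactly one node, so adding nodes adds aggregate metadata throughput
    instead of funneling every lookup through one metadata server."""

    def __init__(self, nodes, vnodes=64):
        # Virtual nodes smooth out the distribution across physical nodes.
        self._ring = sorted(
            (self._h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes)
        )
        self._keys = [k for k, _ in self._ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def owner(self, path):
        """Node responsible for this path's metadata."""
        i = bisect_right(self._keys, self._h(path)) % len(self._ring)
        return self._ring[i][1]

ring = MetadataRing([f"node{i}" for i in range(8)])
print(ring.owner("/projects/genomics/sample42.bam"))
```

Because ownership is a pure function of the path, any client can locate the right node without a directory lookup, which is what lets throughput grow with node count.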
Intel has been building an entirely open source software ecosystem for data-centric computing, fully optimized for Intel® architecture and non-volatile memory (NVM) technologies, including Intel® Optane™ DC persistent memory and Intel® Optane™ DC SSDs. Distributed Asynchronous Object Storage (DAOS) is the foundation of the Intel Exascale storage stack. DAOS is an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications. It enables next-generation data-centric workflows that combine simulation, data analytics, and AI.
A new networked storage architecture called Software-Defined Storage emerged from the world of cloud computing a few years ago. Called SDS for short, it took the storage world by storm, with many startups emerging, followed by investments and acquisitions from the larger storage vendors. But what is SDS? How does it fit into the networked storage landscape? What are its advantages over long-established, tried-and-true solutions like Fibre Channel SANs? Join this session at Supermicro’s Open Storage Summit to find out not only the answers to these questions, but also how to deploy SDS with Supermicro solutions.
• Along with the growth in unstructured data, the workloads being applied to that data are becoming less structured as well
• Clouds have (re)taught us what HPC sites already knew: compute should be shared for highest utilization
• Shared compute requires shared storage and results in many storage workloads being applied at the same time
• The explosion of AI/ML brings an entirely new workload that needs to be supported without buying a new storage system
• No one wants to buy special-purpose storage; storage should serve all your needs without having to sync data across different systems for different uses
-Overview of a typical scale-out solution using high-density Supermicro nodes
-Overview of a typical scale-up solution using SBB Supermicro nodes
-Overview of pros/cons of using scale-up vs scale-out and what use cases best fit which hardware architecture
Not all object storage systems are alike. Designing systems to store terabytes to exabytes of data – for the long term – not only requires scalable performance, but also serious considerations that ensure the durability, protection and availability of both the data and the underlying storage platform. In this session, we will discuss how object storage and scale-out architectures are evolving to meet the future demands of incessant unstructured data growth. Join us to learn how:
• ActiveScale’s software architecture uniquely solves the challenge of scale and durability
• Rebalancing prevents some systems from meeting your data growth needs
• Object storage is a key resource in your fight against ransomware
• New storage architectures are driving innovation in genomics, HPC, and AI
Join Frederik De Schrijver, Quantum, and Paul Mcleod, Supermicro, to learn how we are working together to drive new levels of capability.
Get the benefits of the public cloud experience for any on-prem application, core to edge, with Supermicro server-embedded infrastructure software delivered as a service.
There is a well-deserved love for the speed and simplicity of RAID, but in a world of impossibly large HDDs and multi-PB buckets, there is little RAID can do for us at scale. Unfortunately, the ‘classic’ erasure coding used to replace RAID has brought us mathematical complexity, high costs, and significant performance impacts. Is there a way to blend both speed and scale and run it on simple, efficient hardware? Indeed there is: Rozo Systems has adapted a mathematical transform (Mojette) to function as an encode/decode engine, mimicking the simplicity of RAID, the integrity of erasure coding, and the scalability and distributed access of object storage. Even better: it works beautifully over SMB3, NFS4, and the Rozo DirectSCALE parallel client. Find out how RozoFS is easy on the hardware but greased lightning for your applications and users.
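The Mojette transform itself is not spelled out in this abstract; as a baseline for the "speed and simplicity of RAID" that the session contrasts it with, here is a minimal sketch of classic RAID-5-style single parity, where one missing block is rebuilt by XOR:

```python
def xor_parity(blocks):
    """RAID-5-style parity: byte-wise XOR of equal-length data blocks."""
    parity = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            parity[i] ^= byte
    return bytes(parity)

def recover(surviving, parity):
    """Rebuild the single missing block: XOR the parity with the
    surviving blocks (XOR is its own inverse)."""
    return xor_parity(surviving + [parity])

data = [b"abcd", b"efgh", b"ijkl"]
p = xor_parity(data)
assert recover([data[0], data[2]], p) == data[1]  # lost middle block
```

Single parity survives only one failure per stripe, which is exactly why larger systems move to multi-failure erasure codes, at the computational cost the abstract mentions.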
Discussion of the impact of containerized workflows on server and storage architectures. We will discuss hardware and software selection for converged, hyperconverged, and disaggregated storage architectures, and the challenges and rewards of SDS for container-based workloads (edge to the data center).
In this presentation we will look into how Red Hat OpenShift® Data Foundation (ODF), a software-defined storage solution, is integrated with and optimized for Red Hat OpenShift Container Platform. We will also look into how OpenShift Data Foundation 4.x is built on Red Hat Ceph® Storage, Rook, and NooBaa to provide container-native storage services that support block, file, and object services. OpenShift Data Foundation deploys anywhere OpenShift does: on-premises or in the public cloud.
We will also look at how Red Hat ODF supports a variety of traditional and cloud-native workloads, including:
• Block storage for databases and messaging
• Shared file storage for continuous integration and data aggregation
• Object storage for archival, backup, and media storage
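In OpenShift, applications consume these storage classes through ordinary PersistentVolumeClaims. As a hedged sketch, the helper below builds a minimal PVC manifest for block storage; the storage class name `ocs-storagecluster-ceph-rbd` is the usual default in an ODF install but may differ in your cluster.

```python
import json

def block_pvc(name, size_gi, storage_class="ocs-storagecluster-ceph-rbd"):
    """Minimal PersistentVolumeClaim manifest requesting RWO block-backed
    storage from an ODF storage class (class name assumes a default
    install; verify with `oc get storageclass`)."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
            "storageClassName": storage_class,
        },
    }

print(json.dumps(block_pvc("db-data", 50), indent=2))
```

A file workload would instead request `ReadWriteMany` against the CephFS-backed class; the manifest shape is otherwise identical.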
With OpenShift 4.x, Red Hat has rearchitected OpenShift to bring the power of Kubernetes Operators to its enterprise-grade Kubernetes distribution, automating complex workflows such as deployment, bootstrapping, configuration, provisioning, scaling, upgrading, monitoring, and resource management. In conjunction, OpenShift Data Foundation on Supermicro servers will transform the storage customer experience by making it easier for Supermicro and Red Hat customers to install, upgrade, and manage storage on OpenShift.
Modernize the data center to simplify workload management with VMware Cloud Foundation, and increase workload performance and operational efficiency with Intel Xeon Scalable processors and Intel Optane technology.
Software-Defined Storage (SDS) as a platform for hosting general workloads has been around for a while. It continues to add value to various on-premises use cases due to its flexibility, scalability, and ease of use. In the last few years, the field of data protection has leveraged the SDS platform to protect primary data against both inadvertent and intentional data loss. Thanks to its scale-out features, an SDS system can meet most retention and Recovery Point Objectives (RPOs) in a more agile manner. Recent developments in flash technology have opened this space to more use cases. This session will provide a glimpse of current and potential applications for an all-flash SDS platform in data protection.
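The link between RPO and snapshot scheduling can be made concrete with a little arithmetic. This is a simplified sketch, not any vendor's policy engine: it picks a snapshot interval that keeps worst-case data loss within the RPO, and counts how many point-in-time copies a retention window then holds.

```python
def snapshot_interval_minutes(rpo_minutes, safety_factor=2):
    """Snapshot at least `safety_factor` times per RPO window, so the
    worst-case loss (time since the last snapshot) stays within the RPO
    even if one snapshot is delayed."""
    return max(1, rpo_minutes // safety_factor)

def snapshots_retained(retention_days, interval_minutes):
    """Point-in-time copies held by a given retention policy."""
    return (retention_days * 24 * 60) // interval_minutes

interval = snapshot_interval_minutes(30)  # 30-minute RPO -> 15-min snapshots
print(interval, snapshots_retained(7, interval))
```

The snapshot count is where an all-flash, scale-out SDS platform pays off: holding hundreds of recovery points is cheap in metadata but demanding in random-I/O performance.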