Agenda

Tuesday, July 27, 10:00 am PT

The Open Storage Summit will commence with keynote presentations from Nutanix, Intel, and Supermicro senior executives on the latest software-defined storage technology developments for enterprise, data center, and multi-cloud infrastructure.

Tuesday, July 27, 11:00 am PT

Discussion of the enabling technologies that customers running software-defined storage (SDS) are most interested in (Performance)
• New storage technologies: PMM, NVMe, dual-actuator HDDs, NVMe-oF, composable storage, computational storage/smart media, and new offload strategies such as DPUs
• New performance milestones
• Challenges in delivering value (rapid ROI)

Tuesday, July 27, 11:30 am PT

One of the main challenges of a true scale-out system is scaling the metadata effectively so the system can scale out without compromising performance. Traditionally this is solved with a metadata server, which then becomes a performance bottleneck, or with token sharing between nodes, which becomes less effective as the cluster grows. In this session we will present how WekaIO, a fully software-defined data platform, solves this issue, allowing effective scale-out to thousands of servers while actually increasing performance as more servers are added to the cluster. This unique approach also allows for multiple configuration options from the Supermicro product portfolio.

Tuesday, July 27, 11:30 am PT

Intel® has been building an entirely open source software ecosystem for data-centric computing, fully optimized for Intel® architecture and non-volatile memory (NVM) technologies, including Intel® Optane™ DC persistent memory and Intel® Optane™ DC SSDs. Distributed Asynchronous Object Storage (DAOS) is the foundation of the Intel Exascale storage stack. DAOS is an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications. It enables next-generation data-centric workflows that combine simulation, data analytics, and AI.

Tuesday, July 27, 11:30 am PT

A new networked storage architecture called Software Defined Storage emerged from the world of cloud computing a few years ago. Called SDS for short, it took the storage world by storm, with many startups emerging, followed by investments and acquisitions from the larger storage vendors. But what is SDS? How does it fit into the networked storage landscape? What are its advantages over long-standing, tried-and-true solutions like Fibre Channel SANs? Join this session at Supermicro’s Open Storage Summit and find out not only the answers to these questions, but also how to deploy SDS with Supermicro solutions.

Tuesday, July 27, 11:30 am PT

• Along with the growth in unstructured data, the workloads being applied to that data are becoming less structured as well
• Clouds have (re)taught us what HPC sites already knew: that compute should be shared for highest utilization
• Shared compute requires shared storage and results in many storage workloads being applied at the same time
• The explosion of AI/ML brings an entirely new workload that needs to be supported without buying a new storage system
• No one wants to buy special-purpose storage; storage should serve all your needs without having to sync data across different systems for different uses

Wednesday, July 28, 10:00 am PT

Discussion of how enterprises are transforming storage silos using cloud-enabled SDS for increased agility and resiliency, the power of SDS to achieve cloud goals, the impact on IT, and the concerns customers may have about making this move.

Wednesday, July 28, 10:30 am PT

-Overview of a typical scale-out solution using high-density Supermicro nodes
-Overview of a typical scale-up solution using SBB Supermicro nodes
-Overview of pros/cons of using scale-up vs scale-out and what use cases best fit which hardware architecture

Wednesday, July 28, 10:30 am PT

Not all object storage systems are alike. Designing systems to store terabytes to exabytes of data – for the long term – requires not only scalable performance but also serious considerations that ensure the durability, protection, and availability of both the data and the underlying storage platform. In this session, we will discuss how object storage and scale-out architectures are evolving to meet the future demands of incessant unstructured data growth. Join us to learn how:

• ActiveScale’s software architecture uniquely solves the challenge of scale and durability
• Rebalancing prevents some systems from meeting your data growth needs
• Object storage is a key resource in your fight against ransomware
• New storage architectures are driving innovation in genomics, HPC, and AI

Join Frederik De Schrijver, Quantum, and Paul Mcleod, Supermicro, to learn how we are working together to drive new levels of capability.

Wednesday, July 28, 10:30 am PT

Get the benefits of the public cloud experience for any on-prem application, from core to edge, with Supermicro server-embedded infrastructure software delivered as a service.

Wednesday, July 28, 10:30 am PT

There is a well-deserved love for the speed and simplicity of RAID, but in a world of impossibly large HDDs and multi-PB buckets, there is little RAID can do for us at scale. Unfortunately, the ‘classic’ erasure coding used to replace RAID has brought us mathematical complexity, high costs, and significant performance impacts. Is there a way to blend both speed and scale and run it on simple, efficient hardware? Indeed there is: Rozo Systems has adapted a mathematical transform (Mojette) to function as an encode/decode engine that mimics the simplicity of RAID, the integrity of erasure coding, and the scalability and distributed access of object storage. Even better: it works beautifully over SMB3, NFS4, and the Rozo DirectSCALE parallel client. Find out how RozoFS is easy on the hardware, but greased lightning for your applications and users.

Thursday, July 29, 10:00 am PT

Discussion of the impact of containerized workflows on server and storage architectures, covering hardware and software selection for converged, hyperconverged, and disaggregated storage architectures, and the challenges and rewards of SDS for container-based workloads (Edge to the Datacenter).

Thursday, July 29, 10:30 am PT

In this presentation we will look into how Red Hat OpenShift® Data Foundation (ODF), a software-defined storage solution, is integrated with and optimized for Red Hat OpenShift Container Platform. We will also look into how OpenShift Data Foundation 4.x is built on Red Hat Ceph® Storage, Rook, and NooBaa to provide container-native storage services that support block, file, and object services. OpenShift Data Foundation will deploy anywhere OpenShift does: on-premises or in the public cloud.

We will also look at how Red Hat ODF supports a variety of traditional and cloud-native workloads, including:

• Block storage for databases and messaging
• Shared file storage for continuous integration and data aggregation
• Object storage for archival, backup, and media storage

With OpenShift 4.x, Red Hat has rearchitected OpenShift to bring the power of Kubernetes Operators to its enterprise-grade Kubernetes distribution, automating complex workflows such as deployment, bootstrapping, configuration, provisioning, scaling, upgrading, monitoring, and resource management. Together, OpenShift Data Foundation and Supermicro servers will transform the storage customer experience by making it easier for Supermicro and Red Hat customers to install, upgrade, and manage storage on OpenShift.

Thursday, July 29, 10:30 am PT

Modernize the data center to simplify workload management with VMware Cloud Foundation, and increase workload performance and operational efficiency with Intel Xeon Scalable processors and Intel Optane technology.

Thursday, July 29, 10:30 am PT

Software Defined Storage (SDS) as a platform for hosting general workloads has been around for a while. It continues to add value to various on-premises use cases due to its flexibility, scalability, and ease of use. In the last few years, the field of data protection has leveraged the SDS platform to protect primary data against both inadvertent and intentional data loss. An SDS system, thanks to its scale-out features, is better able to meet most retention and Recovery Point Objective (RPO) requirements in an agile manner. Recent developments in flash technology have opened this space to more use cases. This session will provide a glimpse of current and potential applications for an all-flash SDS platform in data protection.
