Quincy is here, but when will you see it in HyperDrive?

Part of the value we bring in the way we deliver Ceph to market in HyperDrive is the ability to preserve the “goodness” of open source while making it easier, more consistent, and more predictable to consume. In our experience, rushing to adopt the latest upstream releases rarely ends well, and with many of our customers running large-scale, mission-critical storage infrastructure, it is incumbent on us to apply the right level of due diligence when evaluating new releases before deciding when (or indeed whether) to deploy them.

The upstream Ceph community has just released Ceph 17.2.0, the first “stable” release in the Quincy release series. While we’re excited to “kick the tires” and test out the new release, performance regressions and other stability issues mean we are only now at the point of pushing 16.2.x, the Pacific release series, to customers in a way that delivers the same or better performance than Nautilus and results in a reliable cluster. We’ll apply the same level of due diligence to our evaluation of Quincy, ensuring that by the time we support it, the customer experience will be the same or better than before. You might also have read that we’re expanding our ability to contribute directly to future community releases, with a significant expansion of engineering resources within my team wholly focused on doing just that.

For reference, Ceph has historically alternated between a “well-supported” release and a somewhat “less well-supported” one: Ceph Jewel was a long-term support release, Ceph Kraken was not, and Ceph Luminous was again. With this cadence in mind, the next supported release of Ceph for SoftIron is planned to be Ceph Reef, due out in the first half of 2023.

That said, to ensure that our approach doesn’t prevent our customers from taking advantage of new features available in current releases, we have in the past backported features from newer releases where necessary. I’m particularly interested in osd_memory_target_autotune being enabled by default, RGW per-user rate limiting, and RGW per-bucket rate limiting. We’re already consulting with our partners and customers to make that call for the Quincy release now.
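For a sense of what those features look like in practice, here is a sketch of the upstream Quincy commands involved, run against a cephadm-managed cluster. The user name, bucket name, and limit values are hypothetical placeholders chosen for illustration; rate limits in Quincy’s RGW are applied per minute.

```shell
# Let the orchestrator autotune per-OSD memory targets within the
# host's memory budget (cephadm-managed clusters; ratio of host RAM
# reserved for OSDs defaults to 0.7).
ceph config set osd osd_memory_target_autotune true
ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.7

# Per-user rate limit (hypothetical user "alice"):
# cap read and write ops per minute, then enable enforcement.
radosgw-admin ratelimit set --ratelimit-scope=user --uid=alice \
    --max-read-ops=1024 --max-write-ops=256
radosgw-admin ratelimit enable --ratelimit-scope=user --uid=alice

# Per-bucket rate limit (hypothetical bucket "logs"):
# cap read bytes per minute (100 MiB), then enable enforcement.
radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket=logs \
    --max-read-bytes=104857600
radosgw-admin ratelimit enable --ratelimit-scope=bucket --bucket=logs
```

These commands require a running Quincy cluster, so treat them as a configuration sketch rather than something to paste verbatim; the Quincy release notes and RGW documentation cover the full set of ratelimit options.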

It’s one of the things that makes SoftIron a great choice for customers desiring a reliable, performant Ceph cluster!