When deploying S3, operators typically want to integrate S3 authentication with a centralised identity service so that credentials can be consolidated and managed in one place. There are a number of options for this, but OIDC and LDAP are both popular and common approaches. In this post, I will describe the workflow for authentication and token exchange between RADOS Gateway on our Storage Routers and a third-party authentication service.
Using an external OpenID Connect (OIDC) provider to authenticate with RADOS Gateway
With S3, every user has a pair of keys – an access key and a secret key. Every HTTP request is signed with this key pair: the access key identifies the user, and the secret key is used to compute the signature sent in the Authorization header, which lets the user authenticate with RADOS Gateway as part of the request. If the user has the correct permissions/authorisation, they can create buckets and read/write objects. If not, we get a 403 (access denied) back.
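For example, a minimal boto3 sketch (the endpoint, keys, and bucket name are placeholders) might look like this:

```python
import boto3
from botocore.exceptions import ClientError

# Endpoint, keys, and bucket name are placeholders; boto3 uses the key pair
# to sign every request and sends the signature in the Authorization header.
s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com",
    region_name="us-east-1",            # required by boto3, not meaningful to RGW
    aws_access_key_id="ALICE_ACCESS_KEY",
    aws_secret_access_key="ALICE_SECRET_KEY",
)

try:
    s3.create_bucket(Bucket="alice-demo-bucket")
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])
except ClientError as err:
    # A user without the right permissions gets a 403 (AccessDenied) back.
    print(err.response["Error"]["Code"])
```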

There are a number of ways to manage users in Ceph’s RADOS Gateway: the manager daemon (aka the orchestrator API), the radosgw-admin command-line tool, or an external identity provider.
The radosgw-admin tool has an extensive set of commands that let administrators manage users and their capabilities, manually administer buckets and objects, configure zones and multi-site replication, and tune a whole host of other knobs.
Most organisations already have a centralised place to manage credentials and authentication, usually LDAP, an OIDC provider, or both, and don’t want yet another service in which to manage users and roles. RGW can be configured to use either LDAP or OIDC as the authority for authentication and authorisation, issuing users or applications tokens and temporary S3 credentials to use for the length of their session.
An external OpenID Connect provider can be integrated with RGW using the Security Token Service (STS) protocol. Applications request session tokens from RGW, which checks authentication and authorisation with the external OIDC provider before granting access. Neither clients nor the application itself has to worry about RGW users, as these are created and managed on the fly by RGW.
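In recent Ceph releases, one way to wire this up is to register the provider through RGW’s IAM-compatible API, for example with boto3. The sketch below is illustrative only: the endpoint, credentials, realm, and thumbprint are placeholders, and STS support is assumed to be enabled in RGW.

```python
import boto3

# Placeholders throughout; assumes STS support is enabled in RGW (for example
# via the rgw_sts_key and rgw_s3_auth_use_sts options).
iam = boto3.client(
    "iam",
    endpoint_url="https://rgw.example.com",
    region_name="us-east-1",               # required by boto3, not meaningful to RGW
    aws_access_key_id="ADMIN_ACCESS_KEY",
    aws_secret_access_key="ADMIN_SECRET_KEY",
)

# Tell RGW which OIDC provider to trust and which client IDs to accept.
iam.create_open_id_connect_provider(
    Url="https://idp.example.com/auth/realms/demo",
    ClientIDList=["my-s3-app"],
    ThumbprintList=["<sha1 thumbprint of the provider's TLS certificate>"],
)
```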

OIDC builds on top of OAuth2 and is exactly the same technology we use when we log in to third-party applications with our Google, Apple, or Facebook credentials. Google, in this case, acts as an OIDC provider, telling the application that we are who we say we are; the application then decides what we can and can’t do based on that information.

When this occurs, the OIDC provider issues the user a JSON Web Token, or JWT for short. A JWT consists of a header, a payload, and a signature. The encoded token is small, but it contains all the information we need to provide the user with an application session. At a minimum, it includes the name of the issuing party, the subject, and the intended audience. In some cases it may also specify roles or rights that dictate what can and cannot be done with it, though this may be handled by the application itself.
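To make that structure concrete, here is a small illustrative Python snippet that assembles an unsigned token and decodes its parts; the header and payload values are made up for the example.

```python
import base64
import json


def b64url(data: dict) -> str:
    """Base64url-encode a dict the way JWTs do (padding stripped)."""
    return base64.urlsafe_b64encode(json.dumps(data).encode()).rstrip(b"=").decode()


# An illustrative, unsigned token; a real provider signs the first two parts
# and appends the signature as the third.
header = {"alg": "RS256", "typ": "JWT"}
payload = {"iss": "https://idp.example.com", "sub": "alice", "aud": "my-s3-app"}
token = ".".join([b64url(header), b64url(payload), "SIGNATURE"])

# Decoding is just splitting on '.' and base64url-decoding the first two parts.
for part in token.split(".")[:2]:
    print(json.loads(base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))))
```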
So how does this whole process work? Well, it’s a series of short steps all starting with a user requesting to use an application that makes use of our S3 service.
Client Request
In this case our user is Alice. Alice wishes to use the application and provides her OIDC credentials to do so.

OIDC Authentication

The application then contacts the OIDC provider, asking for an auth token via a REST call that includes the client details:
- The ID and shared-secret of the application (called a ‘client’ in OIDC terminology).
- The username and password of the user we wish to authenticate.
- The request type, the tenancy, and anything else we wish to specify.
If the request succeeds, the application receives a JWT, signed by the provider, which includes at minimum these fields:

- “aud” is the audience field, which is the name of the type of client.
- “azp” is the authorised party, which in this case is the application itself.
- “iss” is the issuer, which is the URL of the OIDC provider.
- “sub” is the subject, which is the unique identifier for the user.
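As a hedged sketch of what this exchange could look like against a Keycloak-style provider (the token endpoint URL, client ID, secret, and user credentials are all placeholders), the application might do something like:

```python
import base64
import json

import requests

# Placeholder endpoint and credentials for an OAuth2 password grant; the exact
# URL and grant type depend on the OIDC provider in use.
resp = requests.post(
    "https://idp.example.com/auth/realms/demo/protocol/openid-connect/token",
    data={
        "grant_type": "password",
        "client_id": "my-s3-app",
        "client_secret": "MY_APP_SECRET",
        "username": "alice",
        "password": "alice-password",
    },
)
resp.raise_for_status()
token = resp.json()["access_token"]

# Peek at the signed claims (verifying the signature is the provider's and RGW's job).
payload_b64 = token.split(".")[1]
claims = json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))
print({k: claims.get(k) for k in ("iss", "sub", "aud", "azp")})
```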
Convert to S3 Access and Secret Key
The application then makes an STS call to convert the OIDC token into ephemeral S3 credentials. This call is AssumeRoleWithWebIdentity, which takes a JWT and returns temporary S3 credentials. The important arguments are the token itself and the ARN of the role to be assumed.
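A minimal boto3 sketch of that call, assuming a role such as the one described below already exists (the endpoint, role ARN, keys, and session name are placeholders):

```python
import boto3

# "token" is the JWT obtained from the OIDC provider; everything else here is
# a placeholder.
token = "<JWT from the OIDC provider>"

sts = boto3.client(
    "sts",
    endpoint_url="https://rgw.example.com",    # RGW also serves the STS API
    region_name="us-east-1",                   # required by boto3, not meaningful to RGW
    aws_access_key_id="APP_ACCESS_KEY",
    aws_secret_access_key="APP_SECRET_KEY",
)

resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam:::role/S3Access",
    RoleSessionName="alice-session",
    WebIdentityToken=token,
    DurationSeconds=3600,
)

# Ephemeral S3 credentials, valid until the expiry time below.
creds = resp["Credentials"]
print(creds["AccessKeyId"], creds["Expiration"])
```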
Token Introspection
The STS service may then contact the OIDC provider for more information, for example to check the user’s authorisation or to decode and validate the token. If successful, the OIDC provider returns an almost identical set of claims, confirming that the token is active.
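Under the hood this is typically a standard OAuth2 token introspection request (RFC 7662). The snippet below is only an illustration of the kind of call RGW can make on our behalf; the endpoint and client credentials are placeholders.

```python
import requests

# Placeholder values; in practice RGW can perform this call itself, we never
# issue it from the application.
token = "<JWT being checked>"
resp = requests.post(
    "https://idp.example.com/auth/realms/demo/protocol/openid-connect/token/introspect",
    data={"token": token, "token_type_hint": "access_token"},
    auth=("my-s3-app", "MY_APP_SECRET"),   # the client authenticates itself
)
info = resp.json()
# The response echoes the token's claims plus an "active" flag.
print(info.get("active"), info.get("sub"), info.get("aud"))
```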
Assume Role (Authorisation)
The STS service then applies the Assume-Role policy: it tests the conditions in the policy associated with the role to decide whether the request is allowed to proceed.
Here’s an example Assume-Role policy:
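Exact details vary by provider and deployment, but a trust policy along these lines, shown here as it might be created and attached to a role through RGW’s IAM-compatible API with boto3, captures the idea (the role name, provider host, realm, and client ID are placeholders):

```python
import json

import boto3

# Trust policy: allow AssumeRoleWithWebIdentity only for identities vouched
# for by our OIDC provider, and only when the token's audience matches our
# client ID. Host, realm, and client ID are placeholders; the exact condition
# key format depends on the provider configuration.
trust_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Federated": ["arn:aws:iam:::oidc-provider/idp.example.com/auth/realms/demo"]
        },
        "Action": ["sts:AssumeRoleWithWebIdentity"],
        "Condition": {
            "StringEquals": {"idp.example.com/auth/realms/demo:aud": "my-s3-app"}
        },
    }],
})

iam = boto3.client(
    "iam",
    endpoint_url="https://rgw.example.com",
    region_name="us-east-1",
    aws_access_key_id="ADMIN_ACCESS_KEY",
    aws_secret_access_key="ADMIN_SECRET_KEY",
)
iam.create_role(RoleName="S3Access", Path="/", AssumeRolePolicyDocument=trust_policy)
```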

This policy states that the request will be allowed as long as certain conditions are met:
- The user was authenticated by the OIDC Provider
- The request is an STS AssumeRoleWithWebIdentity
- The aud field contains our client id
If all these conditions are met, we provide S3 credentials.
S3 Call
Now that the application has credentials, it can connect to RGW and perform standard S3 operations. While this whole process seems quite long-winded, it takes place in milliseconds, and once a client has a session open, that application or user can keep using the token for as long as the session lifetime allows.
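For instance, continuing the sketch above, the temporary credentials returned by AssumeRoleWithWebIdentity can be plugged straight into an S3 client (the endpoint and bucket name are placeholders):

```python
import boto3

# "creds" is the Credentials dict returned by the AssumeRoleWithWebIdentity
# call earlier; endpoint and bucket name are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com",
    region_name="us-east-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

s3.create_bucket(Bucket="alice-demo-bucket")
s3.put_object(Bucket="alice-demo-bucket", Key="hello.txt", Body=b"hello from Alice")
print([o["Key"] for o in s3.list_objects_v2(Bucket="alice-demo-bucket").get("Contents", [])])
```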
Limitations Today
There are a couple of limitations today in Nautilus and Octopus for using STS with radosgw, and these revolve around multi-tenancy support and multiple-role support. Both are still under development, and we don’t expect to see them upstream in Ceph until at least the Pacific release. In the meantime, if we’re looking for multi-tenancy or a richer set of role policies, we can use a standalone OpenStack Keystone instance to bridge the gap and handle the OIDC connection on our behalf.
Summary
In summary, if you’re looking to deploy S3 with a third-party authentication piece you can do this easily with SoftIron Ceph, and it’s only going to get easier and better with this year’s Pacific release.
If you’d like to set up some time with one of our technical team to discuss your implementation contact us here.
Danny Abukalam
Product Engineering Lead