
Object Storage

Overview

firstcolo Cloud provides S3-compatible Object Storage based on the OpenStack Swift project.

It stores and retrieves arbitrary unstructured data objects via an HTTP-based API. It is highly fault-tolerant thanks to its data replication and scale-out architecture. As a distributed, eventually consistent object store, it is not mountable like a file server.

You can use the OpenStack API to generate credentials to access the firstcolo Cloud Object Storage. You can then use the S3 API with various S3 clients and/or SDKs.

Supported S3 operations

Feature                           Supported
bucket HEAD/GET/PUT/DELETE        yes
object HEAD/GET/PUT/POST/DELETE   yes
multipart upload                  yes
bucket listing                    yes
GET/PUT Object ACL                yes
GET/PUT Bucket ACL                yes
Versioning                        yes
Bucket notification               no
Bucket lifecycle                  no
Bucket policy                     no
Public website                    no
Billing                           no
Get Bucket Location               yes
Delete Multiple Objects           yes
Object Tagging                    no
GET Object torrent                no
Bucket inventory                  no
Get Bucket service                no
Bucket accelerate                 no
Encryption                        no

We do not offer the complete feature set of Amazon S3 and thus cannot guarantee the same user experience. Certain API calls may fail with a 501 Not Implemented response. If you are in doubt, feel free to contact our Support. The latest compatibility list can also be checked here.

For setting up bucket/object ACLs we suggest using boto3. Not all predefined (canned) ACLs of AWS are supported; we support the following canned ACLs: private, public-read, public-read-write, authenticated-read. Unsupported canned ACLs are interpreted as the default private ACL. We have prepared a guide in our Tutorials section which shows how to set up custom ACLs.
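
As a minimal sketch (assuming the EC2 credentials described under Credentials below, and an existing bucket named my-bucket), a canned ACL can be applied with boto3 like this:

import boto3

# Minimal sketch: client setup as shown in the Boto3 section below
session = boto3.session.Session()
s3client = session.client(
    service_name='s3',
    aws_access_key_id='<ec2-access-key>',
    aws_secret_access_key='<ec2-secret-key>',
    endpoint_url='https://cloud-fc.de:8080'
)

# Apply a supported canned ACL; unsupported ones fall back to private
s3client.put_bucket_acl(Bucket='my-bucket', ACL='public-read')
s3client.put_object(Bucket='my-bucket', Key='test.jpg', Body=b'example', ACL='authenticated-read')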

Buckets

Buckets (also called containers) are the logical unit firstcolo Cloud Object Storage uses to store objects.

Note

When using the S3-compatible API, buckets must be created with an S3 client only. Creating a bucket via the Horizon Web UI will result in Access Denied errors when trying to access objects that reside in it.

Objects

Objects are self-contained blobs of data with associated metadata, identified by a unique key and stored in buckets. Once uploaded, an object cannot be modified; instead, a new version is uploaded that replaces the old one. If versioning is enabled at bucket level, old (so-called non-current) object versions are kept in the background and can be restored if needed.

Moreover, objects can have an expiration date after which they are scheduled for deletion. It can be set during upload using the HTTP headers X-Delete-After: <seconds> or X-Delete-At: <unix-epoch-timestamp>.
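
Since these are native Swift headers, one way to set them is via the Swift API itself. A minimal sketch with curl, assuming a valid Keystone token and your individual project ID:

# Upload an object that will be deleted after 24 hours (86400 seconds)
curl -X PUT "https://cloud-fc.de:8080/v1/AUTH_<project-id>/my-bucket/test.jpg" \
    -H "X-Auth-Token: <keystone-token>" \
    -H "X-Delete-After: 86400" \
    --data-binary @test.jpg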

Note

We discourage the use of special characters, in particular dots (.) and slashes (/), in bucket or object names, especially at the start and end of names. As names can be used or interpreted as DNS names, pathnames and/or filenames, such characters can confuse both server and client software and may consequently leave buckets or objects inaccessible or unmaintainable. In case you are stuck with such a phenomenon, please contact our Support.

Regions

The firstcolo Cloud Object Storage is available in every region. The storage systems run independently of each other.

Region         API Endpoint               Available Storage Classes
eu-central-1   https://cloud-fc.de:8080   HDD, SSD

Note

When using non-AWS S3 implementations such as firstcolo Cloud Object Storage, the region shown in clients does not match the actual storage region but falls back to us-east-1. This can safely be ignored, as firstcolo Cloud uses an individual API endpoint for each region.

Storage Classes

The firstcolo Cloud Object Storage provides two different storage classes (also called Swift Storage Policies). The "SSD" storage class stores data on Solid State Drives, the "HDD" class on Hard Disk Drives. Different buckets in the same project can use different classes by setting the HTTP header X-Storage-Policy: <HDD|SSD> during bucket creation. If not specified otherwise, the "HDD" storage class is used for new buckets.
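
For illustration, a bucket with the SSD storage class could also be created via the native Swift API (a sketch, assuming a valid Keystone token; the s3cmd and MinIO sections below show the same header with S3 clients):

# Create a container/bucket backed by the SSD storage policy
curl -X PUT "https://cloud-fc.de:8080/v1/AUTH_<project-id>/my-ssd-bucket" \
    -H "X-Auth-Token: <keystone-token>" \
    -H "X-Storage-Policy: SSD"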

Note

Like the region, the storage class is also not shown properly in clients; it falls back to STANDARD.

Credentials

To generate credentials, you need a working OpenStack CLI that is authenticated against your project.

To generate and list S3 credentials run:

openstack ec2 credentials create
openstack ec2 credentials list
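
For scripting, the CLI's generic output options can reduce this to the bare key pair, for example:

openstack ec2 credentials create -f value -c access -c secret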

Note

Since the authentication service is centralised, the created credentials have access to S3 in all regions. Access to a specific region is gained by using the respective region's storage endpoint URL.

EC2 credentials are not restricted to S3 resources: they can also be used to create a token for the user owning the credentials, thus providing full access to all OpenStack resources that this user has access to. This is not a privilege escalation; it is (although perhaps unexpectedly) designed that way.

Handle your EC2 credentials with the same caution as your OpenStack credentials.

Clients

S3cmd

Information about the s3cmd client can be found here. To use it, create a configuration file at $HOME/.s3cfg like this:

[default]
access_key = <ec2-access-key>
secret_key = <ec2-secret-key>
use_https = True
check_ssl_certificate = True
check_ssl_hostname = False

host_base = cloud-fc.de:8080
website_endpoint = https://cloud-fc.de:8080/%(bucket)s/%(location)s/
host_bucket = cloud-fc.de:8080
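
To verify that the configuration works, list your buckets:

s3cmd ls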

Next, create an S3 bucket:

s3cmd --add-header="X-Storage-Policy:SSD" mb s3://my-bucket

The flag -P or --acl-public can be added to create a public bucket, which allows listing and downloading its public objects without authentication. Without this option, or with --acl-private, public retrieval of objects and of the listing is not possible.
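
For example, to create a public bucket, or to make an existing one public afterwards (my-public-bucket is just a placeholder name):

s3cmd --acl-public mb s3://my-public-bucket
s3cmd setacl --acl-public s3://my-bucket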

To enable object versioning for the bucket, run:

s3cmd setversioning s3://my-bucket enable

Then add some object(s):

s3cmd put test.jpg s3://my-bucket

By default, objects, just like buckets, are not publicly available. This can also be changed by adding the option -P or --acl-public to the command. Please note that s3cmd may be configured to return incorrect URLs for public objects, but the correct ones can easily be determined:

https://cloud-fc.de:8080/v1/AUTH_<project-id>/<bucket>/<object-path>
# e.g. for the uploaded file (project id is individual)
https://cloud-fc.de:8080/v1/AUTH_<project-id>/my-bucket/test.jpg
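
A public object can then be downloaded without any authentication, e.g.:

curl -O https://cloud-fc.de:8080/v1/AUTH_<project-id>/my-bucket/test.jpg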

Info

The project ID can be fetched from the firstcolo Cloud Dashboard's API access page. Look for the Object Store service endpoint.

Show object metadata:

s3cmd info s3://my-bucket/test.jpg

Download object:

s3cmd get s3://my-bucket/test.jpg test.jpg

Delete an object:

s3cmd del s3://my-bucket/test.jpg

Boto3

Information about the boto3 Python S3 library can be found here. We suggest using this library to set up and manage more complex ACLs.

The client can be configured using the following Python snippet:

import boto3

# Create a session and an S3 resource pointing at the firstcolo endpoint
session = boto3.session.Session()
s3 = session.resource(
    service_name='s3',
    aws_access_key_id='<ec2-access-key>',
    aws_secret_access_key='<ec2-secret-key>',
    endpoint_url='https://cloud-fc.de:8080'
)
# Get the low-level client from the resource
s3client = s3.meta.client

To (re)create a bucket, we can use the following snippet:

bucket = "my-bucket"
try:
    s3client.create_bucket(Bucket=bucket)
except Exception:
    print(f"There was a problem creating the bucket '{bucket}', it may already exist")
    oldbucket = s3.Bucket(bucket)
    # Clean up all remaining objects before recreating the bucket
    oldbucket.objects.all().delete()
    s3client.delete_bucket(Bucket=bucket)
    s3client.create_bucket(Bucket=bucket)

To upload our first object, we can use the following commands:

bucket = "my-bucket"
s3client.put_object(Body="secret", Bucket=bucket, Key="private-scope-object")
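
To check the upload, the objects can be listed and downloaded again, for instance:

# List all objects in the bucket and fetch one back into a local file
for obj in s3.Bucket(bucket).objects.all():
    print(obj.key, obj.size)
s3client.download_file(bucket, "private-scope-object", "private-scope-object.txt")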

Boto3 also supports defining ACLs for your buckets and objects. To learn more, take a look at our ACL guide in the Tutorials section.
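
As a quick sketch, the grants resulting from a canned ACL can be inspected with the client configured above:

# Show who holds which permission on the bucket
response = s3client.get_bucket_acl(Bucket=bucket)
for grant in response["Grants"]:
    print(grant["Grantee"].get("Type"), grant["Permission"])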

MinIO

Information about the MinIO client can be found here. To use it, run the following command to generate the configuration (it will be saved to $HOME/.mc/config.json):

mc config host add fc-eu-central-1 https://cloud-fc.de:8080 \
    <ec2-access-key> <ec2-secret-key> \
    --api S3v4 --lookup dns

Next, create an S3 bucket:

mc mb fc-eu-central-1/my-bucket --custom-header="X-Storage-Policy:SSD"

To enable object versioning add the flag --with-versioning.
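
Versioning can also be enabled or checked on an existing bucket with mc's version subcommand:

mc version enable fc-eu-central-1/my-bucket
mc version info fc-eu-central-1/my-bucket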

Then, use it to add some file(s):

mc cp test.jpg fc-eu-central-1/my-bucket/test.jpg

List all objects:

mc ls fc-eu-central-1/my-bucket

List all versions of an object:

mc ls fc-eu-central-1/my-bucket/test.jpg --versions
[2025-10-09 11:01:18 CEST]    28B STANDARD 1760000478.54250 v1 PUT test.jpg

Show object metadata:

mc stat fc-eu-central-1/my-bucket/test.jpg

Delete an object:

mc rm fc-eu-central-1/my-bucket/test.jpg

Download a specific object version:

mc cp fc-eu-central-1/my-bucket/test.jpg test.jpg --version-id="1760000478.54250"

Delete a bucket:

mc rb fc-eu-central-1/my-bucket

Warning

Only empty buckets can be deleted; otherwise the --force flag is required (use with caution!). Force deletion removes all remaining objects in the bucket (including non-current versions), which can take some time and is irreversible.
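
If you are sure, a non-empty bucket can be removed like this:

mc rb --force fc-eu-central-1/my-bucket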