Object Storage

Overview

firstcolo Cloud provides S3-compatible Object Storage based on the OpenStack Swift project.

It stores and retrieves arbitrary unstructured data objects via an HTTP-based API. It is highly fault-tolerant thanks to its data replication and scale-out architecture. Because it is implemented as a distributed, eventually consistent object store, it cannot be mounted like a file server.

You can use the OpenStack API to generate credentials to access the firstcolo Cloud Object Storage. You can then use the S3 API with various S3 clients and/or SDKs.

Supported S3 operations

Feature Supported
bucket HEAD/GET/PUT/DELETE yes
object HEAD/GET/PUT/POST/DELETE yes
multipart upload yes
bucket listing yes
GET/PUT Object ACL yes
GET/PUT Bucket ACL yes
Versioning yes
Bucket notification no
Bucket lifecycle no
Bucket policy no
Public website no
Billing no
Get Bucket Location yes
Delete Multiple Object yes
Object Tagging no
GET Object torrent no
Bucket inventory no
Get Bucket service no
Bucket accelerate no
Encryption no

We do not offer the complete Amazon S3 feature set, so we cannot guarantee the same user experience. Calls to unsupported API operations may fail with a 501 Not Implemented response. If you are in doubt, feel free to contact our Support. The latest compatibility list can also be checked here.
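
If you want to detect this programmatically, a minimal sketch with boto3 (using the s3client configured as shown in the Boto3 section below, against a bucket you own; the bucket name here is a placeholder) is to catch the client error raised for an unsupported operation such as the bucket lifecycle configuration:

import botocore.exceptions

# Bucket lifecycle is not supported, so this call is expected to fail
try:
    s3client.get_bucket_lifecycle_configuration(Bucket="example-bucket")
except botocore.exceptions.ClientError as err:
    status = err.response["ResponseMetadata"]["HTTPStatusCode"]
    print("Call failed with HTTP status {}".format(status))  # e.g. 501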

For setting up bucket/object ACLs we suggest using boto3. Not all of AWS's predefined (canned) ACLs are supported. We support the following canned ACLs: private, public-read, public-read-write, authenticated-read. Unsupported canned ACLs are interpreted as the default private ACL. We have prepared a guide in our Tutorials section which shows how to set up custom ACLs.
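
For example, with a boto3 client configured as shown in the Boto3 section below, a supported canned ACL can be applied directly when creating a bucket (a sketch only; the bucket name is a placeholder):

# Create a bucket with the public-read canned ACL applied
s3client.create_bucket(Bucket="example-bucket", ACL="public-read")
# A canned ACL that is not supported would be interpreted as private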

Buckets

Buckets are the logical unit firstcolo Cloud Object Storage uses to store objects.

Note

When using the S3-compatible API, buckets must be created with an S3 client only. Creating a bucket via the Horizon web UI will result in Access Denied errors when trying to access objects residing in that bucket.

Objects

Basically, firstcolo Cloud Object Storage is a big key/value store. Objects can be assigned a file name and made available under this name.

Note

We discourage the use of special characters, especially dots (.) and slashes (/), in bucket or object names, particularly at the start and end of a name. Since names can be used or interpreted as DNS names, path names and/or file names, such characters can confuse both server and client software and may consequently leave buckets or objects inaccessible or unmaintainable. If you are stuck with such a phenomenon, please contact our Support.
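
If you want to check names programmatically before creating buckets or objects, a small helper like the following (hypothetical and deliberately more conservative than strictly required) can be used:

import re

# Hypothetical helper: allow lowercase letters, digits, hyphens, underscores
# and inner dots; reject slashes and leading/trailing dots.
SAFE_NAME = re.compile(r'^[a-z0-9][a-z0-9._-]*[a-z0-9]$')

def is_safe_name(name):
    return bool(SAFE_NAME.match(name)) and '..' not in name

print(is_safe_name("my-bucket"))   # True
print(is_safe_name(".hidden."))    # False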

Regions

The firstcolo-Object-Storage / S3 is available in every region. The storage systems run independently of each other.

Region URL Transfer Encryption
us-east-1 cloud-fc.de:8080 Yes

(us-east-1 is actually eu-central-1; due to a special condition/bug, the region name defaults to us-east-1.)

Storage Classes

The firstcolo-Object-Storage / S3 provides two different storage classes (also called Swift storage policies), named "SSD" and "HDD". The SSD storage class stores data on solid-state drives, the HDD class on hard disk drives. The class is selected per bucket at creation time via the X-Storage-Policy header (see the s3cmd example below).
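
With s3cmd the policy is passed via --add-header as shown below. boto3 has no native parameter for this header; one possible approach, shown here only as a hedged sketch (using the s3client from the Boto3 section below, with a placeholder bucket name), is to inject the header through botocore's event system:

# Sketch: add the Swift storage policy header to the CreateBucket call
def set_storage_policy(params, **kwargs):
    params['headers']['X-Storage-Policy'] = 'SSD'   # or 'HDD'

s3client.meta.events.register('before-call.s3.CreateBucket', set_storage_policy)
s3client.create_bucket(Bucket="example-bucket")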

Credentials

You need to meet the following prerequisites to generate credentials:

  • You need the OpenStack command line tools in version >= 3.13.x.
  • A change to your shell environment (you can add this to your openrc file):
export OS_INTERFACE="public"

When these prerequisites are met, you can generate and display S3 credentials:

openstack ec2 credentials create
openstack ec2 credentials list

Note

Since the authentication service is centralised, the credentials created have access to S3 in all regions. Access to a specific region is gained by defining different storage backend URLs.

EC2 credentials are not restricted to use on S3 resources: they can also be used to create a token for the user owning the credentials, thus providing full access to all OpenStack resources that this user has access to. This is not a privilege escalation; it is (although perhaps unexpectedly) designed that way.

Handle your EC2 credentials with the same caution as your OpenStack credentials.
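
To avoid hardcoding the credentials in scripts or configuration files, they can, for example, be read from environment variables. This is only a sketch; the variable names FC_S3_ACCESS_KEY and FC_S3_SECRET_KEY are placeholders:

import os
import boto3

# Read the EC2 (S3) credentials from the environment instead of hardcoding them
session = boto3.session.Session()
s3 = session.resource(
    service_name='s3',
    aws_access_key_id=os.environ['FC_S3_ACCESS_KEY'],
    aws_secret_access_key=os.environ['FC_S3_SECRET_KEY'],
    endpoint_url='https://cloud-fc.de:8080'
)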

Clients

S3cmd

Information about the s3cmd client can be found here.

Now you can create an s3cmd configuration which could look like this:

firstcolo@kickstart:~$ cat .s3cfg
[default]
access_key = < REPLACE ME >
secret_key = < REPLACE ME >
use_https = True
check_ssl_certificate = True
check_ssl_hostname = False

host_base = cloud-fc.de:8080
website_endpoint = https://cloud-fc.de:8080/%(bucket)s/%(location)s/
host_bucket = cloud-fc.de:8080

Next, create an S3 Bucket.

s3cmd --add-header="X-Storage-Policy:SSD" mb s3://BUCKET_NAME -P

The Storage-Policy can be HDD or SSD. If --add-header is omitted, the default Storage-Policy "HDD" will be used.

The option -P (or --acl-public) makes this a public bucket, which means that public objects in this bucket, as well as a listing of the public objects, can be retrieved anonymously. Without this option, or with --acl-private, public retrieval of objects and of the listing is not possible.

Then add some object(s):

s3cmd put test.jpg s3://BUCKET_NAME -P

Here we upload a file and make it public by specifying the option -P (or --acl-public). Objects uploaded without this option, or with --acl-private, are not publicly available and will not be included in the listing.

Please note that s3cmd may be configured to report an incorrect public URL, e.g. by default:

Public URL of the object is: http://cloud-fc.de:8080/BUCKET_NAME/test.jpg
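
Since the endpoint uses transfer encryption, the public URL should use https rather than the http URL reported above. A quick way to verify public access (bucket and object names are placeholders) is a plain HTTP request:

import requests

# Fetch the public object over https; a private object would be denied
response = requests.get("https://cloud-fc.de:8080/BUCKET_NAME/test.jpg")
print(response.status_code)  # 200 if the object is publicly readable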

Boto3

Information about the boto3 Python S3 library can be found here. We suggest making use of this library to set up and manage more complex ACLs.

Using the following Python snippet we can configure our client:

import boto3
import botocore
# Get our session
session = boto3.session.Session()
s3 = session.resource(
    service_name = 's3',
    aws_access_key_id = "< REPLACE ME >",
    aws_secret_access_key = "< REPLACE ME >",
    endpoint_url = 'https://cloud-fc.de:8080'
)
# Get our client
s3client = s3.meta.client
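
To verify that the client is configured correctly, you can, for example, list the buckets owned by these credentials:

# List all buckets owned by these credentials to verify connectivity
for entry in s3client.list_buckets().get('Buckets', []):
    print(entry['Name'])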

For (re)creating a bucket we can use the following snippet:

bucket = "myBucket"
try:
    s3client.create_bucket(Bucket=bucket)
except Exception:
    print("There was a problem creating the bucket '{}', it may have already existed".format(bucket))
    oldbucket = s3.Bucket(bucket)
    # Cleanup remaining objects in bucket
    oldbucket.objects.all().delete()
    s3client.delete_bucket(Bucket=bucket)
    s3client.create_bucket(Bucket=bucket)

To upload our first objects we may use the following commands:

bucket = "myBucket"
s3client.put_object(Body="secret", Bucket=bucket, Key="private-scope-object")
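
To read the object back, we can fetch it by its key:

# Retrieve the object we just uploaded and print its contents
response = s3client.get_object(Bucket=bucket, Key="private-scope-object")
print(response['Body'].read())  # b'secret'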

Boto3 also supports defining ACLs for your buckets and objects. To learn more, take a look at our ACL guide in the Tutorials section.
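
As a quick sketch of what that looks like (the full details are in the guide), a supported canned ACL can be passed when uploading an object and inspected afterwards:

# Upload an object with the public-read canned ACL ...
s3client.put_object(Body="hello", Bucket=bucket, Key="public-scope-object", ACL="public-read")
# ... and inspect the grants that were applied
acl = s3client.get_object_acl(Bucket=bucket, Key="public-scope-object")
for grant in acl['Grants']:
    print(grant['Grantee'], grant['Permission'])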

Minio

Information about the Minio client can be found here.

The Minio client must be installed in the home directory of the current user for the following example commands to work.

Now you can create a Minio S3 configuration:

~/mc config host add <ALIAS> <YOUR-S3-ENDPOINT> <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY> --api <API-SIGNATURE> --lookup <BUCKET-LOOKUP-TYPE>
~/mc config host add eu-central-1 https://cloud-fc.de:8080 accesskey secretkey --api S3v4 --lookup dns

Next, create an S3 Bucket:

~/mc mb eu-central-1/bucketname
Bucket created successfully ‘eu-central-1/bucketname’.

Then, use it to add some file(s):

~/mc cp /root/test.jpg eu-central-1/bucketname/test.jpg
/root/test.jpg: 380.21 KB / 380.21 KB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 100.00% 65.42 MB/s 0s

List the file(s):

~/mc ls eu-central-1/bucketname
[2018-04-27 14:18:28 UTC] 380KiB test.jpg