Storage

Quickstart - Volumes/Storage

Quickstart: attach volume

Backups

Does bwCloud-OS provide a dedicated interface for backups/data protection?

No, we do not provide a dedicated interface. However, the data of running instances can be backed up using standard tools.

How can I back up my virtual machines?

Snapshots of instances and of attached volumes can be created via the dashboard. Snapshots of instances are images and can be downloaded with the CLI client (openstack-client) (keyword: openstack image save ...). Volumes can also be turned into images (keyword: create image from volume) and downloaded in the same way. For large volumes, it is often easier and more efficient to export the data directly from the instance, e.g., with tools such as rsync or scp.
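
For example, an instance snapshot can also be created and downloaded entirely via the openstack-client. The following is only a minimal sketch; the snapshot name and the server UUID are placeholders:

# openstack server image create \
    --name my_instance_snapshot \
    <SERVER_UUID>
# openstack image save \
    --file my_instance_snapshot.img \
    my_instance_snapshot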

Download Volumes or Images

Connect to bwCloud using the openstack-client. Create an image from your volume.

# openstack volume list
# openstack image create \
    --volume <UUID> \
    my_volume_as_image
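
Creating the image from a volume can take some time. Before downloading, you can check, for example, that the image has reached the status "active":

# openstack image show -c status my_volume_as_image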

Download the image:

# openstack image list
# openstack image save \
    --file my_image_file.img \
    <UUID>
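
Optionally, you can inspect the downloaded file locally, e.g. with qemu-img (from the qemu-utils package), assuming it is installed on your machine:

# qemu-img info my_image_file.img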

Use the following command to upload a local image file to the image catalog of the selected region and to create the corresponding metadata entry. The image is only registered in the catalog; it is not copied to or attached to any existing VM.

# openstack image create \
    --property os_distro=linux \
    --property ssh_user=<USER> \
    --property hw_video_model=cirrus \
    --file my_image_file.img \
    <NAME>
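
The uploaded image can then be used like any other image in the catalog, e.g. to launch a new instance. This is only a sketch; the flavor, network, and key pair names are placeholders that depend on your project and region:

# openstack server create \
    --image <NAME> \
    --flavor <FLAVOR> \
    --network <NETWORK> \
    --key-name <KEYPAIR> \
    my_new_instance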

Upload Image to bwCloud-OS

Log in to the dashboard and navigate through the GUI as follows:

'Compute' -> 'Images' -> 'Create Image'

What about the security (= integrity) of my data in bwCloud-OS?

Both the runtime environment (root disk) of a virtual machine and the attached storage are stored in our Ceph storage systems. These are organized so that each piece of information is stored on three different hard disks (redundancy level 3), which protects the data quite well against hardware failures. However, the virtual machine data (both root disk and attached storage) is no longer backed up! So please make sure that you keep an appropriate backup of your data.

In general, bwCloud-OS is operated as a "best-effort resource". This means: in order to offer an appropriate amount of storage at reasonable performance, no higher level of redundancy is built in. In certain, very rare scenarios (software errors in the Ceph storage system, failure of several disks at once), no recovery is possible. We therefore recommend storing all valuable data, including everything required to recover the machine (e.g., all relevant configuration), on appropriate external storage systems.
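
As a sketch of such an external backup (the host name and paths are hypothetical; adjust them to your environment), you could archive the relevant data inside the instance and copy it to an external storage system:

# tar czf /tmp/vm-backup.tar.gz /etc /home
# scp /tmp/vm-backup.tar.gz backup-user@external-storage.example.org:/backups/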

Performance

Throttling of Data Throughput

Due to the internal architecture, all data (root disks of instances, attached storage volumes, etc.) resides in the Ceph storage of the respective region. Ceph is a distributed storage system that is connected to the compute hosts via the network, so the available storage throughput is shared among all active users: the more parallel write operations occur, the lower the throughput for each individual user, since the overall network capacity and bandwidth are limited.

To provide roughly equal performance to all users, the storage throughput per instance is limited to either 100 MB/s in both directions (full duplex) or 800 IOPS.
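
If you want a rough impression of the effective write throughput inside an instance, a simple (non-authoritative) check with dd could look like this; expect values of at most about 100 MB/s:

# dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct status=progress
# rm /tmp/ddtest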

Requesting Higher Throughput

If justified, users can request higher data throughput. Please submit a ticket to us. In the ticket, include the following information:

  • A statement that you are requesting higher data throughput.
  • A description of the use case or application: why do you need higher throughput?
  • Your OpenStack identifier (ideally the user ID).
  • The region in which you need the higher throughput.

The above points are mandatory.