Storage

From bwCloud-OS
Revision as of 12:38, 22 September 2025 by Admin (talk | contribs)


Quickstart - Volumes/Storage

Quickstart: attach volume

Backups

Does bwCloud-OS provide a dedicated interface for backups/data protection?

No, we do not provide a dedicated interface. However, the data of running instances can be backed up using standard tools.

How can I back up my virtual machines?

Snapshots of instances and attached volumes can be created via the dashboard. Snapshots of instances are images and can be downloaded with the CLI client (openstack-client) (keyword: openstack image save ...). Volumes can likewise be converted into images (keyword: create image from volume) and downloaded the same way. For large volumes it is often easier and more efficient to export the data directly from inside the instance, e.g. with tools such as rsync or scp.
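For the large-volume case, a direct export from a running instance can look like the following sketch. The IP address, user name, and paths are placeholders, not bwCloud-specific values:

# Copy a data directory from the instance to a local backup machine:
rsync -avz --progress \
    ubuntu@<INSTANCE_IP>:/mnt/data/ \
    /local/backup/data/

# Or copy a single file with scp:
scp ubuntu@<INSTANCE_IP>:/var/backups/db.dump /local/backup/

rsync is usually preferable for repeated backups, since it only transfers changed files on subsequent runs.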

Download Volumes or Images

Connect to bwCloud using the openstack-client. Create an image from your volume.

# openstack volume list
# openstack image create \
    --volume <UUID> \
    my_volume_as_image

Download the image:

# openstack image list
# openstack image save \
    --file my_image_file.img \
    <UUID>

Use the following command to upload a local image file to the image catalog of the selected region; this also creates the corresponding metadata entry. Note that the image is not copied into any existing VM.

# openstack image create \
    --property os_distro=linux \
    --property ssh_user=<USER> \
    --property hw_video_model=cirrus \
    --file my_image_file.img \
    <NAME>

Upload Image to bwCloud-OS

Log in to the dashboard and navigate through the GUI as follows:

'Compute' -> 'Images' -> 'Create Image'

Performance

Throttling of Data Throughput

Due to the internal architecture, all data (root disks of instances, attached storage volumes, etc.) resides in the Ceph storage of the respective region. Ceph is a distributed, network-based storage system that is connected to the compute hosts over the network. The available storage throughput is therefore shared among all active users: the more parallel write operations occur, the lower the throughput for each individual user, since the overall network capacity and bandwidth are finite.

To provide roughly equal performance to all users, the storage throughput per instance is limited to either 100 MB/s in both directions (full duplex) or 800 IOPS.
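The effect of the cap can be observed from inside an instance with a simple sequential write test, for example using dd. This is a rough sketch only: the file path and size are arbitrary, and dd measures sequential throughput, not IOPS:

```shell
# Rough sequential-write test; conv=fdatasync flushes data to disk
# before dd reports the rate, so the page cache does not inflate it.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=256 conv=fdatasync

# Remove the test file afterwards:
rm /tmp/ddtest.bin
```

On a throttled instance the rate reported by dd should stay around the 100 MB/s cap, regardless of the underlying Ceph cluster's total capacity.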

Requesting Higher Throughput

If justified, users can request higher data throughput. Please submit a ticket to us. In the ticket, include the following information:

  • A statement that you are requesting higher data throughput.
  • A description of the use case or application: why do you need higher throughput?
  • Your OpenStack identifier (ideally the user ID).
  • The region in which you need the higher throughput.

The above points are mandatory.
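If you are unsure of your user ID or region, both can be determined with the CLI client, assuming it is installed and your OpenStack RC file has been sourced:

# Print only the user_id field of a freshly issued token:
openstack token issue -f value -c user_id

# The region name is part of your CLI environment:
echo $OS_REGION_NAME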