Storage
{{:Quickstart: attach volume}}


= Backups =


=== Does bwCloud-OS provide a dedicated interface for backups/data protection? ===
No, we do not provide a dedicated interface. However, the data of running instances can be backed up using standard tools.


=== How can I back up my virtual machines? ===
Snapshots of instances and attached volumes can be created via the dashboard. Snapshots of instances are images and can be downloaded with the CLI client (openstack-client; see ''openstack image save'' below). Volumes can likewise be turned into images (keyword: ''create image from volume'') and downloaded the same way. For large volumes, it is often easier and more efficient to export the data directly from within the instance, e.g., with tools such as rsync or scp.
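
A minimal sketch of the rsync approach, run from the machine that should receive the backup (the key path, user, address, and directories are placeholders, not bwCloud defaults):

<pre>
# rsync -avz -e "ssh -i ~/.ssh/my_key" ubuntu@<INSTANCE_IP>:/data/ ./backup/data/
</pre>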


=== Download Volumes or Images ===
Connect to bwCloud using the ''openstack-client'' and create an image from your volume:


<pre>
# openstack volume list
# openstack image create \
    --volume <UUID> \
    my_volume_as_image
</pre>
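
Depending on the cloud's configuration, the volume may need to be detached (status ''available'') before it can be turned into an image. Creating the image can take a while for large volumes; one way to check progress (a sketch using the standard client, with the image name from above) is to poll the image status until it reaches ''active'':

<pre>
# openstack image show -c status -c size my_volume_as_image
</pre>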


Download the image:


<pre>
# openstack image list
# openstack image save \
    --file my_image_file.img \
    <UUID>
</pre>
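
To verify the download, you can compare the local file's checksum against the one recorded in the image catalog (a sketch that assumes the legacy md5 ''checksum'' field is populated for the image):

<pre>
# openstack image show -c checksum <UUID>
# md5sum my_image_file.img
</pre>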
 
 
Use the following command to upload a local image file to the image catalog of the selected region and create the metadata entry. The image is not copied to an existing VM.


<pre>
# openstack image create \
    --property os_distro=linux \
    --property ssh_user=<USER> \
    --property hw_video_model=cirrus \
    --file my_image_file.img \
    <NAME>
</pre>
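
Once the upload has finished, the image behaves like any other catalog image, e.g., it can be used to boot a new instance. A sketch (the flavor, network, and key pair names are placeholders):

<pre>
# openstack server create \
    --image <NAME> \
    --flavor m1.small \
    --network <NETWORK> \
    --key-name <KEYPAIR> \
    my_new_instance
</pre>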


=== Upload Image to bwCloud-OS ===
Log in to the dashboard and navigate through the GUI as follows:


<pre>
'Compute' -> 'Images' -> 'Create Image'
</pre>
= Performance =


== Throttling of Data Throughput ==
 
 
Due to the internal architecture, all data (root disks of instances, attached storage volumes, etc.) resides in the Ceph storage of the respective region. Ceph is a network-based distributed storage system, connected to the compute hosts via the network. The available storage throughput is therefore shared among all active users: the more parallel write operations occur, the lower the throughput for each individual, simply because the overall network capacity and bandwidth are limited.

To provide roughly equal performance to all users, the storage throughput per instance is limited to either 100 MB/s in both directions (full duplex) or 800 IOPS.
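
To see the limit in practice, you can measure sequential write throughput from inside an instance. A rough sketch using plain dd (the target path and size are examples; ''oflag=direct'' bypasses the page cache so the result reflects the actual storage path):

<pre>
# dd if=/dev/zero of=/root/testfile bs=1M count=1024 oflag=direct
</pre>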


== Requesting Higher Throughput ==


If justified, users can request higher data throughput. Please submit a [https://bw-support.scc.kit.edu/ ticket] to us. In the ticket, include the following information:


* A statement that you would like higher data throughput.
* A description of the use case or application: why do you need higher throughput?
* Your OpenStack identifier (ideally the user ID).
* The region in which you need higher throughput.


The above points are mandatory.
