Objective of this article
Test the new features of Proxmox VE 6 and create a 3-node cluster with Ceph directly from the graphical interface.

Software used

Proxmox VE version 6.0-5
Ceph version 14.2.1 Nautilus (stable)
Hardware used
3 A3Server units, each equipped with 2 SSD disks (one of 480GB and one of 512GB, intentionally different), 1 HDD disk of 2TB and 16GB of RAM.
RAID type: ZFS RAID 0 (on the HDD)
SSD disks (sda, sdb) for Ceph
We called the nodes PVE1, PVE2 and PVE3 (hardware used: https://www.miniserver.store/appliance-a3-server-aluminum).
Introduction
Before starting, we created a 3-node Proxmox VE cluster from the graphical interface, although it is always possible to do so from the command line as well. In the following paragraphs we show how to create a cluster from the GUI, how to install the Ceph packages and how to perform the initial configuration.
For those who are already familiar with Ceph and want to dig deeper into the configuration and the hyperconverged aspects of Ceph, we invite you to read this guide: https://blog.miniserver.it/en/proxmox-ve-6-cluster-advanced-3-node-configuration-with-ceph/
At the bottom of the page you will also find a video from our YouTube channel where we discuss Proxmox and Ceph.
Download the test environment
To better understand the potential of the Proxmox VE cluster solution and its possible configurations, we created a lab aimed at testing the possible Ceph configurations. The lab consists of 3 Proxmox VE virtual machines already configured in a cluster with Ceph.
Below you will find the link to download the test environment:
Download the 3 configured Proxmox nodes: https://www.firewallhardware.it/opt/proxmox/test-the-functionality-of-the-proxmox-cluster-with-ceph/
3-node cluster
Move to Datacenter -> Cluster and then click on the Create Cluster button.

Give a name to the cluster you are about to create, choose the dedicated network interface, and then click the Create button.

Note

In a production environment it is always advisable, although not mandatory, to separate the cluster interface from the other interfaces (especially Ceph's). It is better to have one interface for the cluster, one for Ceph, and one for GUI administration, separate from those dedicated to VMs and/or containers, to keep everything clean and avoid performance problems.

At the end of the procedure you will see a window similar to the one below. The cluster is now created; all that remains is to add the other nodes.

Now click on the Join Information button and then on the Copy Information button.

Move to any other node you want to add and, from the same path (Datacenter -> Cluster), click on Join Cluster and paste the content, filling in any missing values.

We also updated the packages, again from the graphical interface, and then, as always, installed some basic Debian packages that are useful for troubleshooting:
# apt install htop iotop
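For completeness, the same cluster can also be built from the command line with the pvecm tool. A minimal sketch, assuming the cluster is called pvecluster and that 192.168.1.11 is the address of PVE1 (names and addresses here are purely illustrative):

# pvecm create pvecluster        # on PVE1 only
# pvecm add 192.168.1.11         # on PVE2 and PVE3, pointing at PVE1
# pvecm status                   # on any node: all three members listed, quorum OK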
Ceph: installation and setup
To install Ceph we also used the convenient graphical interface. Select each node of the cluster, move to Ceph and click on the Install Ceph-nautilus button.

Click on the Start Installation button.

Then type Y and press Enter.

Although the purpose of this article is not an in-depth study of Ceph (for that, see the official documentation linked at the bottom of this article for more details on the configuration parameters), we will spend a few minutes quickly introducing the parameters of the page (shown below) that appears during the installation procedure.

Public Network: a dedicated network for Ceph must be configured; this setting is mandatory. It is highly recommended to separate Ceph traffic from the rest because, if you do not, Ceph traffic can disturb other latency-dependent services such as cluster communication, and Ceph's own performance can suffer.

Cluster Network: optionally, you can also separate the OSD replication and heartbeat traffic. This lightens the public network and can lead to significant performance improvements, especially in large clusters.
Number of replicas: defines how many times each object is replicated (i.e. how many copies of the data are kept).
Minimum replicas: defines the minimum number of replicas required for I/O to be marked as complete.
In this lab, and for the purpose of this article, the Ceph network is not separate from the rest!
It is also mandatory to choose the first monitor node.

If all went well you should see a success page, like the one in the figure above, with further instructions on how to proceed. You are now ready to start using Ceph, but you will first need to create additional monitors, some OSDs and at least one pool (as the page itself reminds you!).
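For reference, the same installation and initial setup can also be done with the pveceph helper. A minimal sketch, assuming 10.10.10.0/24 is the network you want to dedicate to Ceph (an illustrative value; in this lab we did not actually separate it):

# pveceph install                          # on every node: installs the Ceph Nautilus packages
# pveceph init --network 10.10.10.0/24     # once: writes the initial ceph.conf with the public network
# pveceph mon create                       # on each node that should run a monitor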
Opening the status page you can see immediately (thanks to the intuitive use of colors and icons) whether everything is fine or not. In the following image the green color shows the state of health at a glance and, if you look a little closer at the OSDs column, you will notice that there are no disks (OSDs) yet. Let's see together in the next step how to create an OSD from a disk.
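If you prefer the shell, the same overview is available with the standard Ceph status command:

# ceph -s     # health, monitors, number of OSDs and pool usage at a glance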
Ceph: OSD creation
Select a cluster node, then Ceph and then OSD. Click on Create: OSD; the window below will appear, where you can add all the disks you want and set some parameters.

NOTE

In this lab we initially chose to add only the first SSD (480GB) of each server to Ceph; this choice simulates a situation where, in a production environment, there is later a need to increase the storage space.
Below is the final result.
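The equivalent CLI steps, as a hedged sketch (the device name /dev/sda and the pool name ceph-vm are only examples taken from our setup; adjust them to yours):

# pveceph osd create /dev/sda                          # on each node, once per disk to add
# pveceph pool create ceph-vm --size 3 --min_size 2    # replicated pool on top of the new OSDs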
Ceph: disk considerations
Ceph works best with a uniform, evenly distributed number of disks per node. For example, 4 disks of 500 GB in each node are better than a mixed configuration with a single 1 TB disk and three 250 GB disks. When planning the size of the Ceph cluster it is important to consider recovery times (especially with small clusters). To optimize these times, Proxmox recommends using SSDs instead of HDDs in small configurations.
In general, as you know, SSDs provide more IOPS than classic spinning disks but, given their higher cost compared to HDDs, it may be interesting to separate pools based on class (or disk type).
A short note for those who love the command line: a quick way to visually verify the concept of class is to run the command # ceph osd tree. You will get an output, similar to the one shown in the following image, with the essential information on the OSDs, including the CLASS column, which identifies the disk type (ours have the value ssd).

There is a configuration, supported by Proxmox VE, to speed up OSDs in a “mixed” HDD + SSD environment: use a faster disk as journal or DB / Write-Ahead-Log (WAL) device. These parameters are visible in the previous image in Ceph: OSD creation.
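For example, if you later want a pool that only uses the ssd class shown by ceph osd tree, you can create a class-based CRUSH rule and bind a pool to it. A sketch with illustrative names (ssd-only as rule name, ceph-vm as the pool from the previous steps):

# ceph osd crush rule create-replicated ssd-only default host ssd    # replicated rule limited to ssd-class OSDs
# ceph osd pool set ceph-vm crush_rule ssd-only                      # bind an existing pool to that rule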
Always keep in mind that, if you use a single faster disk for several OSDs, you must choose a proper ratio between the OSD disks and the WAL / DB (or journal) device, otherwise the fast disk risks becoming the bottleneck for all the connected OSDs.
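As a hedged example of the journal / DB / WAL idea described above: when creating an OSD from the CLI you can point its DB (and, implicitly, its WAL) to a faster device. The device paths below are purely illustrative:

# pveceph osd create /dev/sdc --db_dev /dev/sdb    # /dev/sdc (HDD) as data device, /dev/sdb (SSD) hosting its RocksDB/WAL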
It is also necessary to balance the number of OSDs against their individual capacity. Higher capacity increases storage density, but it also means that a single OSD failure forces Ceph to recover more data at once.
Ceph: advantages of using it with Proxmox VE
Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. Through RADOS Block Devices (RBD) it also provides a fully functional block-level store; using it with Proxmox VE you get the following advantages:
- Easy configuration and management with CLI and GUI support
- Thin provisioning
- Resizable volumes
- Distributed and redundant (striped across multiple OSDs)
- Support for snapshots
- Self healing: in the event of problems, automatic procedures try to resolve them
- No single point of failure
- Scalable to exabyte level
- Configuration of multiple Pools with different redundancy and performance characteristics
- The data is replicated, making it fault tolerant
- Works with inexpensive hardware
- No hardware RAID controller is required
- Open source
With recent technological developments, new hardware (on average) has powerful CPUs and a fair amount of RAM, so it is possible to run the Ceph services directly on the Proxmox VE nodes and host storage and VM services on the same node. This type of configuration is suitable for small and medium-sized clusters and is the subject of this lab and article.
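On the Proxmox VE side, consuming the Ceph pool is just a matter of defining an RBD storage. A minimal sketch, assuming the pool is called ceph-vm as in the earlier examples (with the hyperconverged setup described here no external monitor list is needed); the GUI, or the --add_storages option of pveceph pool create, can also create this entry for you:

# pvesm add rbd ceph-rbd --pool ceph-vm --content images,rootdir    # exposes the pool for VM disks and container volumes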
Ceph: simulating an increase in storage space
During the lab, as previously mentioned, we deliberately added only one SSD per node as a Ceph OSD, in order to verify what happens when we need to scale. We then added 3 more SSDs, one for each node of the cluster, with a slightly different capacity (512GB) compared to those already in use, and turned them into OSDs one by one.
In this image you can see the summary screen of the disks inside the node (in our case it is the same for each node).

The creation of the new OSDs to assign to the existing pool (although you can also create a new pool based on the disk class, as mentioned above) follows exactly the same procedure; in our case we left all the default parameters.

Every time we add a disk, the status page shows the operations Ceph performs in order to use the disk in the pool (or pools).

When the procedure ends, and if it ends correctly, you will see something similar to the figure (with the number of OSDs “In” increased):

NOTE
We observed no downtime or malfunctions of the VMs or containers during the expansion of the pool or during the creation of the new OSDs.
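To follow what Ceph is doing while the new OSDs are being integrated, a couple of read-only commands are enough (both are standard Ceph tools):

# ceph -s            # rebalancing/backfill progress and overall health
# ceph osd df tree   # per-OSD size, usage and weight, grouped by node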
Some considerations
In the first part of the lab, in which we used only 1 SSD per node as a Ceph OSD, we created a Debian VM running a 10-hour stress test, plus 1 Ubuntu container. We did not design this cluster, with only 16GB of RAM and small SSD disks, for production environments, but we feel obliged to report that, despite the modest sizing, it always behaved well.
After the stress test saturated the memory, the nodes no longer saw each other in the graphical interface (see image below), but the cluster and the VMs / containers continued to work perfectly.
The problem was due to a malfunction of the Corosync service, which did not impact the guests. To solve it, we restarted the service on all three nodes:
# systemctl restart corosync.service
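To confirm that the cluster was healthy again we also checked, on each node:

# systemctl status corosync.service    # the service should be active and without errors
# pvecm status                         # all three nodes listed and quorum OK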
Simulation of a disk fault

We disconnected the SSD disk on PVE2 (which hosted the only VM) to simulate a disk fault. The VM starts and keeps working anyway (see image below, VM100) thanks to the Ceph features (see the advantages listed above).
Note
On the Ceph test pool, the available space increased from 486GB to 650GB (put simply, and without going into too much detail, with one disk removed the amount of information that Ceph must replicate decreased, which means more free space on each remaining disk).
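If the disk is really dead, the corresponding OSD also has to be removed from the cluster at some point. A hedged sketch with an illustrative OSD id (here osd.3; check yours with ceph osd tree):

# ceph osd out osd.3                   # tell Ceph to stop placing data on it
# systemctl stop ceph-osd@3.service    # stop the daemon of the failed OSD
# pveceph osd destroy 3 --cleanup      # remove it once the data has been rebalanced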
In the Ceph status page, in the Performance -> Usage box, the value instead decreased from 1.31TB to 894GB (this is the sum of the sizes of the disks we turned into OSDs to create the pool).

By quickly configuring the HA settings, we then simulated the failure of a node. The container used for this test was moved to the node with the configured priority, and then restored to the original node once the failed node came back online.
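For reference, the HA configuration we clicked together in the GUI can also be expressed with ha-manager. A hedged sketch, where the group name, the node priorities and the container ID 101 are illustrative:

# ha-manager groupadd prefer-pve1 --nodes "pve1:2,pve2:1,pve3:1"    # group with PVE1 as preferred node
# ha-manager add ct:101 --group prefer-pve1 --state started         # put the container under HA control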
A small sore point: unfortunately you cannot perform a storage migration (Move Volume) of a container and/or VM while it is powered on. We wanted to move the test container from the local-zfs storage to Ceph; here is the result:
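As a workaround, the volume can be moved with the container stopped. A hedged sketch, assuming the pct move_volume subcommand is available in your version and that the container has ID 101 and the Ceph storage is named ceph-rbd (both illustrative):

# pct shutdown 101
# pct move_volume 101 rootfs ceph-rbd    # move the root filesystem to the Ceph storage
# pct start 101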
Video Resources

On our YouTube channel you can find interesting videos on Proxmox. We invite you to subscribe by clicking here! The video that delves into the Ceph topic can be found below:
Conclusions
Ceph seems a stable and very intuitive product, provided you study the meaning of the values during the configuration (see the link to the official documentation below). It offers the several advantages we have already listed above; however, it is necessary to account for the space that is “lost” (due to replication) and to design the storage carefully.
We invite you to subscribe to our newsletter to stay up to date on the topics by clicking here.