The documentation, articles, and content surrounding Kubernetes are certainly not lacking on the Internet. Even with this wealth of information, we’ve found that some users struggle with the command-line interface when interacting with hosted Kubernetes running on OpenStack Magnum in a public cloud. To get the most out of both Kubernetes and Magnum, we put together this guide to help you prepare your environment, use the CLI, and create your own cluster templates based on the required Magnum parameters.
Among other reasons, creating templates is useful because Magnum’s workflow does not allow upgrading an existing cluster in place. To upgrade, you must create a new cluster using the public cloud templates or your own template.
This guide is for public cloud users who consume Kubernetes as a service, and it applies to both the sjc1 and ca-ymq-1 regions. Fun fact: both regions now support the latest version of Kubernetes, v1.18.x! You can use the public templates we provide for Kubernetes 1.18.x.
Before you get started, it’s important to note that Kubernetes as a service can be tailored to the needs of your application or your entire IT team. In the examples below we use the v2-standard-8 flavor, but you can change this when creating clusters from the templates we make available, as well as when creating your own cluster templates. Enterprise-level teams can run large numbers of pods and size their clusters accordingly, while smaller teams are more likely to maintain fewer pods on correspondingly smaller clusters.
Let’s now look at how you can create your own templates.
Preparing the environment and using the Magnum CLI
To get started with the Magnum CLI, it is essential to prepare your environment for interaction with KaaS endpoints.
- First, you need to create a virtualenv for an isolated Python environment. To do this, install python-virtualenv, which is packaged for most distributions. With the virtualenv activated:
- install the OpenStack command-line client:
pip install python-openstackclient
- install the Magnum client plugin:
pip install python-magnumclient
- Once the clients are installed, download your OpenStack RC file from our public cloud service and source it to load your credentials.
- Test the client by listing the available cluster templates:
openstack coe cluster template list
An empty result is normal; it shows that you do not yet have any cluster templates. If the tool complains about an unknown “coe” command, make sure that python-magnumclient is installed.
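The environment-preparation steps above can be sketched as a short shell session. The directory name .kaas-venv and the check_credentials helper are arbitrary choices for illustration; the OS_* names are the standard OpenStack environment variables set by an RC file.

```shell
# Create and activate an isolated virtualenv for the clients.
python3 -m venv .kaas-venv
. .kaas-venv/bin/activate
# Inside the virtualenv, install both clients (network required):
#   pip install python-openstackclient python-magnumclient

# Fail early if the RC file has not been sourced yet.
check_credentials() {
  for var in OS_AUTH_URL OS_USERNAME; do
    if [ -z "$(eval echo "\$${var}")" ]; then
      echo "missing ${var}; source your openrc file first" >&2
      return 1
    fi
  done
  echo "credentials loaded"
}
```

Running check_credentials before any openstack command gives a clearer error than an authentication failure from the API.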
Instructions for creating clusters and cluster templates
These steps describe how to interact with KaaS by creating clusters and cluster templates.
Because certain --labels are required for a cluster template to work, it is important to use our public cluster template as a reference when creating your own private cluster templates.
To get the information you need, view the details of the public cluster template:
openstack coe cluster template show <id of v2-k8s-8-v1.18.2 template>
As you can see below, there is a --labels field in which you need to specify boot_volume_type. Make sure you select the appropriate volume type for your region from the list of available volume types, which you can get with:
openstack volume type list
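Putting the pieces together, the --labels value is a comma-separated list of key=value pairs. Here is a sketch that builds it from shell variables; the keys and values are the ones used by the public template in this guide, and the variable names are arbitrary.

```shell
# Build the --labels value from shell variables.
BOOT_VOLUME_TYPE=ssd      # rbd for sjc1, ssd for ca-ymq-1
BOOT_VOLUME_SIZE=50
KUBE_TAG=v1.18.2
LABELS="boot_volume_type=${BOOT_VOLUME_TYPE},boot_volume_size=${BOOT_VOLUME_SIZE},kube_tag=${KUBE_TAG},availability_zone=nova"
echo "$LABELS"
# → boot_volume_type=ssd,boot_volume_size=50,kube_tag=v1.18.2,availability_zone=nova
```

You can then pass `--labels "$LABELS"` to the template-create command instead of typing the list inline.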
- To create a cluster template, run:
openstack coe cluster template create <template-name> \
  --image "fedora-coreos-31.20200601.3.0-openstack.x86_64" \
  --external-network public \
  --master-flavor v2-standard-1 \
  --flavor v2-highcpu-8 \
  --docker-volume-size 50 \
  --network-driver calico \
  --docker-storage-driver overlay2 \
  --master-lb-enabled \
  --volume-driver cinder \
  --labels boot_volume_type=<rbd for sjc1, ssd for ca-ymq-1>,boot_volume_size=50,kube_tag=v1.18.2,availability_zone=nova \
  --coe kubernetes -f value -c uuid
- As a result, you will receive the universally unique identifier (UUID) of the template:
Request to create cluster template <template name> accepted <template uuid>
- By listing the available cluster templates, you will now see both the public template and the newly created one:
- Next, create a Kubernetes cluster from either of the above templates:
openstack coe cluster create k8s-cluster --master-count 1 --node-count 2 --cluster-template <chosen template id>
- As a result, the request to create a new cluster is accepted:
Request to create cluster accepted <cluster uuid>
- Now wait for the cluster to finish creating; you can watch the status like this:
watch -n 2 openstack coe cluster show -c status -c master_addresses -c faults <cluster uuid>
- When the status is CREATE_COMPLETE, the master node can be accessed using the provided keypair and the cluster’s master address.
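The wait can also be scripted instead of using watch. Below is a minimal sketch; wait_for_cluster is a hypothetical helper name, and the status command is passed in as a parameter so the loop itself stays generic.

```shell
# Poll until the cluster leaves CREATE_IN_PROGRESS, then print the
# final status. $1 is the status command, $2 the poll interval.
wait_for_cluster() {
  status_cmd=$1
  interval=${2:-10}
  while [ "$($status_cmd)" = "CREATE_IN_PROGRESS" ]; do
    sleep "$interval"
  done
  $status_cmd
}
# Example call against a real cluster:
#   wait_for_cluster "openstack coe cluster show -f value -c status <cluster uuid>" 10
```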
For clarification: --master-count determines the number of Kubernetes master nodes, while --node-count determines the number of worker nodes.
In addition, if the master-count is greater than 1, a load balancer is configured and you can access the Kubernetes API through the VIP placed on the load balancer. You can also put an external load balancer in front of a Kubernetes service, which creates an externally accessible IP address and distributes traffic between the pods.
Now that you have created the cluster end to end, you can start interacting with the Kubernetes API!
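As a first interaction, you can fetch the cluster’s kubeconfig with the Magnum client and point kubectl at it. This is a sketch: it assumes kubectl is installed locally and uses the cluster name from the create command above.

```shell
# Fetch the kubeconfig for the cluster created above; the command
# writes a file named "config" into the given directory.
openstack coe cluster config k8s-cluster --dir "$PWD"
export KUBECONFIG="$PWD/config"
kubectl get nodes
```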
The collaboration between OpenStack and Kubernetes brings together two powerhouses of the open source world. With the OpenStack Magnum project on your side, you get countless benefits such as high efficiency and security while choosing your container orchestration engine! In addition, the seamless integration of Kubernetes and OpenStack is not limited to Magnum; it also includes block storage, Keystone, and load balancers, as briefly mentioned earlier.
You are now familiar with the fact that the VEXXHOST Cloud Console offers both Kubernetes and OpenStack services. VEXXHOST ensures these services remain infrastructure-agnostic, so you can run your application on any infrastructure model, whether public, private, or even hybrid! In addition, the VEXXHOST team is in constant contact with the OpenStack Magnum community via IRC, which helps us keep up to date with any changes. If you want a similar walkthrough on interacting with your Kubernetes API, stay tuned for new content. In the meantime, you can reach out with your Kubernetes questions or contact us for more information on how we can help you make the most of KaaS.