
Using BOSH multi-CPI feature to deploy to different IaaS | Benjamin Guttman - DevOps anynines

Introduction

We have a vSphere installation with two data centers and have been thinking about the possibility of adding a third availability zone without requiring an additional DC for a while. Therefore we thought about the option of moving the third availability zone to AWS. With the multi-CPI feature introduced in BOSH v261+, the initial blocker was removed.

Theoretically, we are now able to deploy to different infrastructures, but while setting up a test deployment I faced some interesting questions that were not clearly answered in the BOSH docs or in the blog posts I found about this topic, so I decided to share my struggles and my possible solutions with you.

CPI

As I already said, BOSH supports the configuration of several CPIs, but the documentation only seems to cover the case where you want to deploy to different regions (AWS, GCP) or different data centers (vSphere) of the same infrastructure. Still, it shouldn't be that hard to deploy to two different infrastructures, right?

The first thing we'll need is the correct CPIs packed onto our BOSH director. As we are using bosh-deployment to deploy the director, this should not be too hard. We just add the correct ops files and run bosh create-env, but for ops files the order matters. If you use the wrong order for the CPI ops files, your cloud_provider block will end up with the wrong configuration.

Question 1: Which CPI ops file to apply first?

As the CPI ops file also includes information needed to create the director, I decided to apply the CPI for the infrastructure the director is deployed to last, so we ensure that all information needed for the CPI to deploy the director is available. An example create-env command could look like this:

    bosh create-env bosh.yml \
      -o uaa.yml \
      -o credhub.yml \
      -o jumpbox-user.yml \
      -o aws/cpi.yml \
      -o vsphere/cpi.yml \
      --vars-store creds.yml \
      --vars-file vars.yml \
      --state state.yml

Cloud Config

After we got our BOSH director up and running and all CPI configs in place, we need to check our cloud-config for possible adjustments. In my case this base cloud-config was used to deploy on a vSphere environment:

    azs:
    - cloud_properties:
        datacenters:
        - clusters:
          - Cluster01:
              resource_pool: Test_Cluster01
          name: nameme
      name: z1
    - cloud_properties:
        datacenters:
        - clusters:
          - Cluster02:
              resource_pool: Test_Cluster02
          name: nameme
      name: z2
    - cloud_properties:
        datacenters:
        - clusters:
          - Cluster03:
              resource_pool: Test_Cluster03
          name: nameme
      name: z3
    compilation:
      az: z1
      network: compilation
      reuse_compilation_vms: true
      vm_type: compilation
      workers: 2
    disk_types:
    - cloud_properties:
        type: thin
      disk_size: 2048
      name: small
    - cloud_properties:
        type: thin
      disk_size: 4096
      name: medium
    - cloud_properties:
        type: sparse
      disk_size: 6144
      name: big
    - cloud_properties:
        type: sparse
      disk_size: 10144
      name: large
    - cloud_properties:
        type: thin
      disk_size: 20124
      name: xlarge
    networks:
    - name: net
      subnets:
      - az: z1
        cloud_properties:
          name: Cluster01_TEST-1
        dns:
        - 8.8.8.8
        - 8.8.4.4
        gateway: 10.0.1.1
        range: 10.0.1.0/24
        reserved:
        - 10.0.1.1 - 10.0.1.10
        - 10.0.1.200 - 10.0.1.255
      - az: z2
        cloud_properties:
          name: Cluster02_TEST-1
        dns:
        - 8.8.8.8
        - 8.8.4.4
        gateway: 10.0.2.1
        range: 10.0.2.0/24
        reserved:
        - 10.0.2.1 - 10.0.2.16
        - 10.0.2.18 - 10.0.2.254
      - az: z3
        cloud_properties:
          name: Cluster03_TEST-1
        dns:
        - 8.8.8.8
        - 8.8.4.4
        gateway: 10.0.3.1
        range: 10.0.3.0/24
        reserved:
        - 10.0.3.1 - 10.0.3.16
        - 10.0.3.18 - 10.0.3.254
      type: manual
    - name: compilation
      subnets:
      - az: z1
        cloud_properties:
          name: Cluster01_TEST-1
        dns:
        - 8.8.8.8
        - 8.8.4.4
        gateway: 10.0.1.1
        range: 10.0.1.0/24
        reserved:
        - 10.0.1.1 - 10.0.1.200
    vm_types:
    - cloud_properties:
        cpu: 1
        disk: 4096
        ram: 1024
      name: nano
    - cloud_properties:
        cpu: 1
        disk: 10000
        ram: 4096
      name: small
    - cloud_properties:
        cpu: 2
        disk: 20000
        ram: 4096
      name: medium
    - cloud_properties:
        cpu: 4
        disk: 20000
        ram: 4096
      name: big
    - cloud_properties:
        cpu: 4
        disk: 60000
        ram: 8192
      name: large
    - cloud_properties:
        cpu: 20
        disk: 60000
        ram: 16384
      name: xlarge
    - cloud_properties:
        cpu: 20
        disk: 20000
        ram: 8192
      name: compilation

The first part we check for adjustments is the 'availability_zone' definition, which looks like this at the moment:

    azs:
    - cloud_properties:
        datacenters:
        - clusters:
          - Cluster01:
              resource_pool: Test_Cluster01
          name: nameme
      name: z1
    - cloud_properties:
        datacenters:
        - clusters:
          - Cluster02:
              resource_pool: Test_Cluster02
          name: nameme
      name: z2
    - cloud_properties:
        datacenters:
        - clusters:
          - Cluster03:
              resource_pool: Test_Cluster03
          name: nameme
      name: z3

What we need to do now is to add availability zones for AWS; in our case, we will add a `z4` for AWS.

    azs:
    - cloud_properties:
        datacenters:
        - clusters:
          - Cluster01:
              resource_pool: Test_Cluster01
          name: nameme
      name: z1
      cpi: a9s-vsphere
    - cloud_properties:
        datacenters:
        - clusters:
          - Cluster02:
              resource_pool: Test_Cluster02
          name: nameme
      name: z2
      cpi: a9s-vsphere
    - cloud_properties:
        datacenters:
        - clusters:
          - Cluster03:
              resource_pool: Test_Cluster03
          name: nameme
      name: z3
      cpi: a9s-vsphere
    - cloud_properties:
        availability_zone: eu-central-1a
      name: z4
      cpi: aws-a9s

Did you notice that we added the CPI name here to tell BOSH which availability zone needs to be targeted with which CPI? The names you can use here for the CPIs are defined via the CPI config, which we will take a look at in a couple of lines.

But before that, we will check the remaining parts of the cloud-config for adjustments, that means the disk_types and the vm_types:

    disk_types: # (vsphere)
    - cloud_properties:
        type: thin
      disk_size: 2048
      name: small

    disk_types: # (aws)
    - cloud_properties:
        type: gp2
      disk_size: 2048
      name: small

When comparing the corresponding parts of an AWS cloud-config and a vSphere cloud-config, we can see that the type property is used for both infrastructures. For vSphere we set the value `thin`, for AWS we used `gp2`.

So how do we tell the CPI which type should be used?
Do we need to create separate disk_types for every infrastructure?
And if yes, how do we use them in the manifests?

This issue can actually be solved via the CPI config. Having a look at the CPI configuration for AWS and vSphere, we can see that for AWS the disk type defaults to 'gp2', so we do not need to explicitly configure it for AWS, and for vSphere there is a global property named 'default_disk_type', which means we can set a default disk type via the CPI config. We will take a closer look at that right after this section. So by removing the unneeded values, we get the following result:

    disk_types: # (vsphere|aws)
    - disk_size: 2048
      name: small

The last step to check is the vm_type definition:

    vm_types: # (vsphere)
    - cloud_properties:
        cpu: 1
        ram: 1024
      name: xsmall

    vm_types: # (aws)
    - cloud_properties:
        instance_type: t2.micro
      name: xsmall

The cloud_properties needed by the AWS and vSphere CPIs differ, so they will not get overwritten by each other and we can just merge them into one. Like this, every CPI will have the data needed to create the VM.

    vm_types: # (vsphere|aws)
    - cloud_properties:
        cpu: 1
        ram: 1024
        instance_type: t2.micro
      name: xsmall

The last thing that is missing in the cloud-config is the network for the fourth availability zone:

    - az: z4
      cloud_properties:
        subnet: subnet-
      dns:
      - 10.0.4.2
      - 8.8.8.8
      - 8.8.4.4
      gateway: 10.0.4.1
      range: 10.0.4.0/24
      reserved:
      - 10.0.4.1 - 10.0.4.16
      - 10.0.4.18 - 10.0.4.254
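With these adjustments made, the cloud-config can be applied to the director. A minimal sketch, assuming the adjusted config is saved as cloud-config.yml:

```shell
# Upload the adjusted cloud-config to the BOSH director
bosh update-cloud-config cloud-config.yml

# Show the cloud-config the director now has, to double-check the result
bosh cloud-config
```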

CPI Config

The centerpiece of the multi-CPI feature is the CPI config. The CPI config includes all the necessary information to configure the used CPIs. For a general overview, you can have a look at the official BOSH documentation.

In our case the CPI config includes not only the needed credentials and configuration information but also 'default_disk_type: thin' for our vSphere VMs, to solve the disk_type issue we discussed earlier:

    cpis:
    - name: a9s-vsphere
      type: vsphere
      properties:
        host: ((vcenter_ip))
        user: ((vcenter_user))
        password: ((vcenter_password))
        default_disk_type: thin
        datacenters:
        - clusters: ((vcenter_clusters))
          datastore_pattern: ((vcenter_ds))
          disk_path: ((vcenter_disks))
          name: ((vcenter_dc))
          persistent_datastore_pattern: ((vcenter_ds))
          template_folder: ((vcenter_templates))
          vm_folder: ((vcenter_vm_folder))
    - name: aws-a9s
      type: aws
      properties:
        access_key_id: ((access_key_id))
        secret_access_key: ((secret_access_key))
        default_key_name: ((default_key_name))
        default_security_groups:
        - ((default_security_groups))
        region: ((region))

One important step to mention here is that after you upload the CPI config, you need to re-upload the stemcells for the different CPIs. After this was done, the output of bosh stemcells looked like the following:
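Sketched as commands, assuming the CPI config from above is saved as cpi-config.yml (the stemcell URLs are illustrative bosh.io download links, pick the ones matching your IaaS and stemcell line):

```shell
# Upload the multi-CPI configuration to the director
bosh update-cpi-config cpi-config.yml

# Re-upload one stemcell per IaaS so each CPI gets its own copy
bosh upload-stemcell https://bosh.io/d/stemcells/bosh-vsphere-esxi-ubuntu-xenial-go_agent
bosh upload-stemcell https://bosh.io/d/stemcells/bosh-aws-xen-hvm-ubuntu-xenial-go_agent

# List the stemcells the director knows about
bosh stemcells
```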

As you can see, the different stemcells are now distinguished by the CPI they are used for (in our case a9s-vsphere and aws-a9s).

With everything in place, I used the following manifest to deploy a Prometheus Alertmanager to both infrastructures, AWS and vSphere.

    ---
    name: prometheus

    instance_groups:
      - name: alertmanager
        azs:
          - z1
          - z4
        instances: 2
        vm_type: small
        persistent_disk: 1_024
        stemcell: default
        networks:
          - name: net
        jobs:
          - name: alertmanager
            release: prometheus
            properties:
              alertmanager:
                route:
                  receiver: default
                receivers:
                  - name: default
                test_alert:
                  daily: true

    update:
      canaries: 1
      max_in_flight: 32
      canary_watch_time: 1000-100000
      update_watch_time: 1000-100000
      serial: false

    stemcells:
      - alias: default
        os: ubuntu-xenial
        version: latest

    releases:
    - name: prometheus
      version: 25.0.0
      url: https://github.com/bosh-prometheus/prometheus-boshrelease/releases/download/v25.0.0/prometheus-25.0.0.tgz
      sha1: 71cf36bf03edfeefd94746d7f559cbf92b62374c
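Assuming the manifest above is saved as prometheus.yml, deploying it and checking the resulting VMs could look like this:

```shell
# Deploy the Alertmanager instance group across both IaaSes
bosh -d prometheus deploy prometheus.yml

# Inspect the VMs; the VM CID column reveals which CPI created each VM
bosh -d prometheus vms
```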

Which will result in:

If you are familiar with the style of VM CIDs, you can see here that z4 shows an AWS-style VM CID and z1 a vSphere-style one.

So let's wrap up what needed to be done to use the BOSH multi-CPI feature to deploy to different infrastructures:

  • Add the AZs for the new infrastructure
  • Add the vm_type data needed for the new infrastructure
  • Remove properties that can only be used by one CPI and move them to the CPI config (e.g. disk_type in the cloud-config to default_disk_type in the CPI config)
  • Upload a CPI config
  • Add new AZs to your manifest
  • Deploy
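Condensed into commands, the whole workflow is roughly the following (file names assumed, stemcell URLs elided):

```shell
# Director with both CPIs baked in (CPI of the director's own IaaS last)
bosh create-env bosh.yml -o aws/cpi.yml -o vsphere/cpi.yml --state state.yml

# Register the CPIs by name, then the AZs/vm_types/networks that reference them
bosh update-cpi-config cpi-config.yml
bosh update-cloud-config cloud-config.yml

# One stemcell per IaaS, then deploy
bosh upload-stemcell <vsphere-stemcell-url>
bosh upload-stemcell <aws-stemcell-url>
bosh -d prometheus deploy prometheus.yml
```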

I hope this small blog post helps you to easily spread your deployments over different infrastructures.


Source: https://blog.anynines.com/using-bosh-multi-cpi-feature-to-deploy-to-different-iaas/
