Set additional Ceph pools for use in OpenStack

  CinderRbdExtraPools:
    default: ''
    description: >
      List of extra Ceph pools for use with RBD backends for Cinder. An
      extra Cinder RBD backend driver is created for each pool in the
      list. This is in addition to the standard RBD backend driver
      associated with the CinderRbdPoolName.
    type: comma_delimited_list


Any pools specified in the (optional) list would automatically generate
additional Cinder backends. For example, deploying an environment file that
contained this:

parameter_defaults:
  CinderRbdExtraPools: fast,slow

would result in a Cinder deployment with three RBD backends:

RBD Pool   Cinder Backend
--------   -----------------
volumes    tripleo_ceph
fast       tripleo_ceph_fast
slow       tripleo_ceph_slow
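
For reference, the rendered cinder.conf would contain backend sections along
these lines (a sketch only; exact option names and values vary by release):

[DEFAULT]
enabled_backends = tripleo_ceph,tripleo_ceph_fast,tripleo_ceph_slow

[tripleo_ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = tripleo_ceph
rbd_pool = volumes

[tripleo_ceph_fast]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = tripleo_ceph_fast
rbd_pool = fast

[tripleo_ceph_slow]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = tripleo_ceph_slow
rbd_pool = slow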

Note 1: For Ceph clusters managed by TripleO, the "CephPools" THT parameter
can be used to create additional pools.

parameter_defaults:
  CephPools:
    fast:
      pg_num: 1024
      pgp_num: 1024
    slow:
      pg_num: 512
      pgp_num: 512
    
Note 2: The user/operator would be responsible for creating the Ceph CRUSH map
necessary to establish appropriate service levels for each RBD pool.
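
For example, on a cluster whose OSDs are tagged with "ssd" and "hdd" device
classes, a CRUSH rule per class could be created and assigned to each pool. A
minimal sketch using the Ceph CLI (the rule names and device classes here are
illustrative, not from the source):

# Create one CRUSH rule per device class
ceph osd crush rule create-replicated fast-rule default host ssd
ceph osd crush rule create-replicated slow-rule default host hdd

# Point each RBD pool at the matching rule
ceph osd pool set fast crush_rule fast-rule
ceph osd pool set slow crush_rule slow-rule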

Note 3: The user/operator would be responsible for creating Cinder volume
types associated with each of the Cinder RBD backends. That is, the Cinder
backends would be automatically created, but the Cinder volume types would
need to be defined post-deployment.
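
A sketch of that post-deployment step using the standard openstack CLI (the
volume type names are illustrative):

# Create one volume type per extra backend and bind it by backend name
openstack volume type create fast
openstack volume type set --property volume_backend_name=tripleo_ceph_fast fast

openstack volume type create slow
openstack volume type set --property volume_backend_name=tripleo_ceph_slow slow

# New volumes of a given type then land on the matching RBD pool
openstack volume create --type fast --size 10 test-volume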

Source: 1309550 – [RFE] Update Cinder heat template to allow multiple Ceph pools

Also see:
https://leeyj7141blog.wordpress.com/2018/07/25/redhat-openstack-13-queens-install-manual/
for further Ceph-related kernel tunables (sysctl settings), such as:

vm.min_free_kbytes: 4 GB (4194304 kB)
kernel.pid_max: 4194303
fs.file-max: 26234859
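
In a TripleO deployment, such tunables could be applied with the
ExtraSysctlSettings parameter from tripleo-heat-templates; a minimal sketch
(the vm.min_free_kbytes value assumes 4 GB expressed in kB):

parameter_defaults:
  ExtraSysctlSettings:
    vm.min_free_kbytes:
      value: 4194304
    kernel.pid_max:
      value: 4194303
    fs.file-max:
      value: 26234859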