Posts

HTTPS Endpoint

For Secure EON Mode, we need the below configuration: a secure MinIO or any other secure object store. I have secured MinIO. Below are the steps for configuring secure MinIO.

1. Download the MinIO binary and make it executable.
2. Open the "/root/.minio/certs" folder and store the public certificate and private key files there. server.key is the private key and server.crt is the public certificate.
3. Start MinIO:

       ./minio server --address ":443" /data/vertica/eondisk

Note: Below are the important points from the startup status:

       API:     https://192.168.0.1         https://127.0.0.1
       Console: https://192.168.0.1:41593   https://127.0.0.1:41593

Below are the steps for generating the public certificate and private key files. I have used certgen to generate them.

1. Open the https://github.com/minio/certgen site.
2. Choose the Binary release for your desired OS. It will automatically download the binary.
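Put end to end, the flow might look like the sketch below. This is a minimal sketch, assuming the certgen binary has already been downloaded as ./certgen, MinIO runs as root, and the addresses are the ones shown in the status above; certgen names its output public.crt and private.key, so adjust the filenames to whatever your MinIO setup expects.

    # Generate a self-signed certificate and key for the server addresses
    # (certgen writes public.crt and private.key in the current directory).
    ./certgen -host "192.168.0.1,127.0.0.1"

    # Place the certificate and key where MinIO looks for TLS material.
    mkdir -p /root/.minio/certs
    cp public.crt private.key /root/.minio/certs/

    # Start MinIO on port 443 with the EON Mode communal storage path.
    ./minio server --address ":443" /data/vertica/eondisk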

CRUSH

CRUSH (Controlled Replication Under Scalable Hashing) is the algorithm Ceph uses, together with the CRUSH map (an allocation table), to find the OSD that holds the requested file.
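To see what the CRUSH map actually contains on a running cluster, one option is the sketch below (standard ceph/crushtool commands; the output filenames are just placeholders):

    # Dump the CRUSH hierarchy (hosts, racks, OSDs) directly.
    ceph osd crush tree

    # Or export the compiled CRUSH map and decompile it into readable text.
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt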

Ceph ZAP

This subcommand is used to zap LVs, partitions, or raw devices that have been used by Ceph OSDs so that they may be reused.
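As a sketch of the typical ceph-volume forms (the device path is a placeholder, not from this cluster):

    # Zap a raw device that previously backed an OSD.
    ceph-volume lvm zap /dev/sdb

    # Add --destroy to also remove any LVM volume groups/partitions on it.
    ceph-volume lvm zap /dev/sdb --destroy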

ceph orchestrator

ceph orch: I used this command while doing the setup of Ceph, e.g.:

    ceph orch host add ceph-cluster-j2kj
    ceph orch apply rgw ceph-rados-gw --port=9000 --placement="2 ceph-cluster-732k"

    [ceph: root@ceph1 /]# ceph orch device ls
    HOST   PATH      TYPE  DEVICE ID  SIZE  AVAILABLE  REFRESHED  REJECT REASONS
    ceph1  /dev/sdb  hdd              214G             13m ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
    ceph1  /dev/sdc  hdd              214G             13m ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
    ceph2  /dev/sdb  hdd              214G             13m ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
    ceph3  /dev/sdb  hdd              214G             13m ago    Insufficient space (<10 extents) on vgs, LVM detected, locked

    [ceph: root@ceph1 /]# ceph orch ls
    NAME          PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
    alertmanager  ?:9093,9094  1/1      5m ago     4
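A few more standard orchestrator commands that commonly come up during this kind of setup, shown as a sketch (nothing here is specific to the cluster above):

    # List the hosts known to the orchestrator.
    ceph orch host ls

    # Let cephadm create OSDs on every unused, available device.
    ceph orch apply osd --all-available-devices

    # Show the daemons (mon, mgr, osd, rgw, ...) running on each host.
    ceph orch ps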

Change Replication Factor in Ceph

To set the number of object replicas on a replicated pool, execute the following:

    ceph osd pool set <poolname> size <num-replicas>

For example, setting the .rgw.root pool to 3 replicas:

    [ceph: root@ceph1 /]# ceph osd pool set .rgw.root size 3
    set pool 1 size to 3
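To double-check the change, a sketch using the same pool (min_size is shown only as a related setting you may also want to review):

    # Confirm the replica count on the pool.
    ceph osd pool get .rgw.root size

    # Optionally check the minimum replicas required to serve I/O.
    ceph osd pool get .rgw.root min_size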

How to see the Replication of Ceph

ceph osd map .rgw.root object -f json-pretty

In the above command, .rgw.root is the pool name, which I have seen from the dashboard. The output of the command is:

    [ceph: root@ceph1 /]# ceph osd map .rgw.root object -f json-pretty
    {
        "epoch": 217,
        "pool": ".rgw.root",
        "pool_id": 1,
        "objname": "object",
        "raw_pgid": "1.570e3222",
        "pgid": "1.2",
        "up": [
            0,
            1,
            3
        ],
        "up_primary": 0,
        "acting": [
            0,
            1,
            3
        ],
        "acting_primary": 0
    }

This shows the data is replicated on OSDs 0, 1, and 3, which are located on different hosts.
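To confirm that those OSDs really sit on different hosts, one way (a sketch with standard commands; the output will vary per cluster) is:

    # Show the CRUSH hierarchy: each OSD is listed under its host.
    ceph osd tree

    # Or look up a specific OSD, including the host it lives on.
    ceph osd find 0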