Ceph Luminous Operations Commands 01

1. Get the CRUSH map


# ceph osd getcrushmap -o {crush-file-bin}

2. Decompile the CRUSH map


# crushtool -d {crush-file-bin} -o {crush-file-txt}

3. Compile the CRUSH map


# crushtool -c {crush-file-txt} -o {crush-file-bin}

4. Import the CRUSH map


# ceph osd setcrushmap -i {crush-file-bin}
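
The four commands above form one edit cycle. A minimal sketch, assuming hypothetical file names crush.bin / crush.txt:

```
# ceph osd getcrushmap -o crush.bin        # 1. dump the compiled CRUSH map
# crushtool -d crush.bin -o crush.txt      # 2. decompile to editable text
# vi crush.txt                             #    edit buckets/rules as needed
# crushtool -c crush.txt -o crush.new.bin  # 3. recompile
# ceph osd setcrushmap -i crush.new.bin    # 4. inject the new map into the cluster
```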

5. Show the default configuration


# ceph --show-config

6. Show a running daemon's current configuration


# ceph daemon client.rgw.gz-open-dw-c43 config show
# ceph daemon osd.1 config show
# ceph daemon mon.gz-open-dw-c43 config show
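
To check a single option, the output can be piped through grep, e.g. for the debug_osd level used in item 7:

```
# ceph daemon osd.1 config show | grep debug_osd
```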

7. Modify configuration on a running server


# ceph tell {daemon} injectargs "--{option}={value}"
e.g.: ceph tell osd.214 injectargs "--debug_osd=0/5"

8. View a bucket's shards


# radosgw-admin metadata list
[
    "bucket",
    "bucket.instance",
    "user"
]
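
`metadata list` only shows the metadata sections; the shard count itself is stored in the bucket instance metadata. A sketch of one way to read it (bucket name api_image_test reused from item 11; the instance id will differ per cluster):

```
# radosgw-admin bucket stats --bucket=api_image_test                       # "id" is the current bucket instance id
# radosgw-admin metadata list bucket.instance                              # all instances, current and historical
# radosgw-admin metadata get bucket.instance:api_image_test:{instance-id}  # bucket_info should contain num_shards
```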

9. View RGW user information
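
A minimal sketch using radosgw-admin user info (uid tupu_aws3 reused from item 11; substitute your own):

```
# radosgw-admin user info --uid=tupu_aws3
```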


10. Find which OSDs store an object
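
A sketch using ceph osd map, which prints the PG and the up/acting OSD set for a pool/object pair (pool and object names are placeholders):

```
# ceph osd map {pool-name} {object-name}
e.g.: ceph osd map default.rgw.buckets.data {rados-object-name}
```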


11. Check whether a bucket index needs resharding (view the bucket's object count and shard count)


```
# sudo radosgw-admin bucket limit check
[
    {
        "user_id": "tupu_aws3",
        "buckets": [
            {
                "bucket": "api_image_test",
                "tenant": "",
                "num_objects": 51243241,
                "num_shards": 512,
                "objects_per_shard": 100084,
                "fill_status": "OK"
            },
            {
                "bucket": "my-new-bucket",
                "tenant": "",
                "num_objects": 9520,
                "num_shards": 0,
                "objects_per_shard": 9520,
                "fill_status": "OK"
            }
        ]
    }
]
```

12. Modify mon, osd, mgr, rgw, etc. parameters online
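
A sketch combining injectargs (item 7) with the daemon admin socket (item 6); the option names and values below are only examples:

```
# ceph tell osd.* injectargs "--osd_max_backfills=2"              # all OSDs at once
# ceph tell mon.* injectargs "--mon_allow_pool_delete=true"       # all MONs
# ceph daemon mgr.gz-open-dw-c43 config set debug_mgr 0/5         # mgr, via admin socket on its host
# ceph daemon client.rgw.gz-open-dw-c43 config set debug_rgw 0/5  # rgw, via admin socket on its host
```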


13. Disable scrub and deep scrub on OSDs


# ceph osd set nodeep-scrub
# ceph osd set noscrub

There is currently no configuration-file option for this; scrubbing can only be disabled with the commands above, and doing so puts the cluster into HEALTH_WARN.
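
After maintenance, remove the flags with unset to clear the warning:

```
# ceph osd unset nodeep-scrub
# ceph osd unset noscrub
```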

14. Set noout before replacing an OSD, to prevent data migration


# ceph osd set noout
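
Once the replacement OSD is back up and in, remove the flag so recovery can proceed:

```
# ceph osd unset noout
```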

15. View the number of shards currently present in default.rgw.buckets.index (including historical shards)

Each time a bucket index is resharded, the previous generation of shard index data is kept. That is why `radosgw-admin metadata list bucket.instance` can show multiple records for the same bucket: only one is the instance currently in use, while the others are historical yet still occupy space.
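
One way to count the shard objects (current plus historical) is to list the index pool directly; a sketch assuming the pool name from the heading, with {bucket-instance-id} as a placeholder:

```
# rados -p default.rgw.buckets.index ls | wc -l                              # all shard objects
# rados -p default.rgw.buckets.index ls | grep {bucket-instance-id} | wc -l  # shards belonging to one bucket instance
```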

16. Delete a bucket with radosgw-admin (purging its data)
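
A minimal sketch using radosgw-admin bucket rm with object purging ({bucket-name} is a placeholder):

```
# radosgw-admin bucket rm --bucket={bucket-name} --purge-objects
```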


17. Get all object keys in a bucket


# radosgw-admin bi list --bucket=api_image_test_2018-01-20 | grep '"name"' | cut -d ":" -f2 | cut -d '"' -f 2 >> /home/ceph/api_image_test_2018-01-20.keys