Ceph - OSD (Object Storage Daemon)
Ceph OSD is the object storage daemon for the Ceph distributed file system.
It is responsible for storing objects on a local file system and providing
access to them over the network.
Overview
Placement Group (PG)
Placement groups (PGs) are an internal implementation detail of how Ceph distributes data.
You can enable pg-autoscaling to let the cluster make recommendations or automatically
adjust the number of PGs (pg_num) for each pool based on expected cluster and pool utilization.
Commands
Globals
List cluster pools:
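ceph osd lspools
# or with ids, sizes and flags: ceph osd pool ls detail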
Show OSD status (host, used, state, ...):
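ceph osd status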
Show OSD detail:
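# Likely the OSD map dump; per-OSD metadata (host, devices, version) is an alternative:
ceph osd dump
# ceph osd metadata {osd-id}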
Show OSD distribution by node:
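ceph osd tree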
Show usage by OSD:
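ceph osd df
# or grouped by CRUSH tree: ceph osd df tree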
Reweight all OSDs:
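# dry run first with: ceph osd test-reweight-by-utilization
ceph osd reweight-by-utilization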
Reweight an OSD:
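# {weight} is between 0 and 1:
ceph osd reweight {osd-id} {weight}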
Get pool details with the dump command:
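# pool entries in the OSD map dump start with "pool":
ceph osd dump | grep "^pool"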
Get number of replicas per pool:
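ceph osd pool get {pool} size
# across all pools: ceph osd dump | grep "replicated size"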
Placement Group (PG)
See PG autoscale status:
ceph osd pool autoscale-status
# off  : Disable autoscaling for this pool. It is up to the administrator to choose an appropriate pg_num for each pool.
# on   : Enable automated adjustments of the PG count for the given pool.
# warn : Raise health alerts when the PG count should be adjusted.
Display the current PG count (pg_num) of a pool:
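ceph osd pool get {pool} pg_num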
Increase the number of PGs:
# Specify the number of placement groups:
ceph osd pool set {pool} pg_num {pg-num}
# After you increase the number of placement groups, you must
# also increase the number of placement groups for placement
# (pgp_num) before your cluster will rebalance:
ceph osd pool set {pool} pgp_num {pgp-num}
# Ex:
# ceph osd pool set pool-1 pg_num 64
# ceph osd pool set pool-1 pgp_num 64
Get the OSDs hosting a PG:
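# PG ids look like {pool-id}.{pg-id-hex}, e.g. 1.6f:
ceph pg map {pg-id}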
Links
- https://docs.ceph.com/en/latest/man/8/ceph-osd/
- https://docs.ceph.com/en/latest/rados/operations/control/#osd-subsystem
- https://docs.ceph.com/en/latest/rados/operations/placement-groups/