openEuler Storage Management

Shared Storage Management Lab

NFS Storage Management

Node    IP            OS               Role    CPU      Memory  Disks
node1   10.80.20.1    openEuler 20.03  nfs     2 cores  4GB     20GB
node2   10.80.20.2    openEuler 20.03  client  2 cores  4GB     20GB
node3   10.80.20.3    openEuler 20.03  client  2 cores  4GB     20GB

node1

Install NFS:

# dnf install -y nfs-utils
# systemctl enable nfs --now
# systemctl status nfs
# showmount --version
showmount for 2.5.1

Configure the shared directory:

# mkdir /data
# vim /etc/exports
/data *(rw,sync,no_root_squash)
# systemctl restart nfs
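
As an alternative to restarting the service, the export table can be reloaded in place with exportfs (part of nfs-utils): -r re-exports everything listed in /etc/exports, and -v prints the currently active exports.

# exportfs -ra
# exportfs -v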

View the exported directories:

# showmount -e
Export list for node1:
/data *

node2, node3

Install NFS:

# dnf install -y nfs-utils
# systemctl enable nfs --now
# systemctl status nfs

View the server's exported directories:

# showmount -e 10.80.20.1
Export list for 10.80.20.1:
/data *

Mount the shared directory:

# mkdir /mnt/data
# mount -t nfs 10.80.20.1:/data /mnt/data/
# df -Th
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 1.7G 0 1.7G 0% /dev
tmpfs tmpfs 1.7G 0 1.7G 0% /dev/shm
tmpfs tmpfs 1.7G 9.1M 1.7G 1% /run
tmpfs tmpfs 1.7G 0 1.7G 0% /sys/fs/cgroup
/dev/mapper/openeuler-root ext4 17G 2.5G 14G 16% /
tmpfs tmpfs 1.7G 0 1.7G 0% /tmp
/dev/nvme0n1p1 ext4 976M 126M 783M 14% /boot
tmpfs tmpfs 341M 0 341M 0% /run/user/0
10.80.20.1:/data nfs4 17G 2.5G 14G 16% /mnt/data

node2

Test the NFS share by writing a file:

# cd /mnt/data/
# echo "node2" > file1
# cat file1
node2

node1

Check the written file:

# cat /data/file1
node2

node3

Modify the test file:

# cd /mnt/data/
# cat file1
node2
# echo "node3" > file1
# cat file1
node3

node1

Refine the shared directory permissions:

# vim /etc/exports
/data 10.80.20.2/32(rw,sync,no_root_squash)
/data 10.80.20.3/32(ro)
# systemctl restart nfs

node2

Read and write access both work:

# cd /mnt/data/
# echo "node new" > file1
# cat file1
node new
# mkdir test1
# ls
file1 test1

node3

Read access works, but writes are denied:

# cd /mnt/data/
# echo "node new 2" > file1
-bash: file1: Read-only file system
# cat file1
node new
# mkdir test2
mkdir: cannot create directory ‘test2’: Read-only file system
# ls
file1 test1

node2

Configure automatic mounting at boot:

# vim /etc/fstab
# append the following at the end of the file
10.80.20.1:/data /mnt/data nfs defaults 0 0
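
Before rebooting, the new entry can be validated in place: mount -a re-reads /etc/fstab and reports any syntax error, and findmnt confirms the mount point.

# mount -a
# findmnt /mnt/data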

Reboot; the share is mounted automatically:

# reboot
# df -Th
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 1.7G 0 1.7G 0% /dev
tmpfs tmpfs 1.7G 0 1.7G 0% /dev/shm
tmpfs tmpfs 1.7G 9.1M 1.7G 1% /run
tmpfs tmpfs 1.7G 0 1.7G 0% /sys/fs/cgroup
/dev/mapper/openeuler-root ext4 17G 2.5G 14G 16% /
tmpfs tmpfs 1.7G 0 1.7G 0% /tmp
/dev/nvme0n1p1 ext4 976M 126M 783M 14% /boot
10.80.20.1:/data nfs4 17G 2.5G 14G 16% /mnt/data
tmpfs tmpfs 341M 0 341M 0% /run/user/0

SAN Storage Lab

Node    IP            OS               Role    CPU      Memory  Disks
node1   10.80.20.1    openEuler 20.03  san     2 cores  4GB     20GB, 20GB
node2   10.80.20.2    openEuler 20.03  client  2 cores  4GB     20GB
node3   10.80.20.3    openEuler 20.03  client  2 cores  4GB     20GB

node1

Install the iSCSI target software:

# dnf install -y scsi-target-utils
# systemctl enable tgtd --now
# systemctl status tgtd

View the files installed by the package:

# rpm -ql scsi-target-utils
/etc/sysconfig/tgtd
/etc/tgt
/etc/tgt/conf.d
/etc/tgt/conf.d/sample.conf
/etc/tgt/targets.conf
/etc/tgt/tgtd.conf
/usr/lib/systemd/system/tgtd.service
/usr/sbin/tgt-admin
/usr/sbin/tgt-setup-lun
/usr/sbin/tgtadm
/usr/sbin/tgtd
/usr/sbin/tgtimg
/usr/share/doc/scsi-target-utils
/usr/share/doc/scsi-target-utils/README
/usr/share/doc/scsi-target-utils/README.iscsi
/usr/share/doc/scsi-target-utils/README.iser
/usr/share/doc/scsi-target-utils/README.lu_configuration
/usr/share/doc/scsi-target-utils/README.mmc
/usr/share/doc/scsi-target-utils/README.ssc

Check the listening port:

# netstat -tlunp | grep 3260
tcp 0 0 0.0.0.0:3260 0.0.0.0:* LISTEN 15711/tgtd
tcp6 0 0 :::3260 :::* LISTEN 15711/tgtd

Create a target and view its information:

# tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.00-00.com.openeuler:target1
# tgtadm -L iscsi -m target -o show
Target 1: iqn.00-00.com.openeuler:target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
Account information:
ACL information:

Locate the new disk, /dev/nvme0n2:

# fdisk -l
Disk /dev/nvme0n1: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk model: VMware Virtual NVMe Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x80ce5a11

Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 * 2048 2099199 2097152 1G 83 Linux
/dev/nvme0n1p2 2099200 41943039 39843840 19G 8e Linux LVM


Disk /dev/nvme0n2: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk model: VMware Virtual NVMe Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/openeuler-root: 17 GiB, 18249416704 bytes, 35643392 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/openeuler-swap: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Add the disk to target1 as a logical unit:

# tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --backing-store /dev/nvme0n2
# tgtadm -L iscsi -m target -o show
Target 1: iqn.00-00.com.openeuler:target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 21475 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/nvme0n2
Backing store flags:
Account information:
ACL information:

Bind target1 so that node2 can access it:

# tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address 10.80.20.2
# tgtadm -L iscsi -m target -o show
Target 1: iqn.00-00.com.openeuler:target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 21475 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/nvme0n2
Backing store flags:
Account information:
ACL information:
10.80.20.2
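
Note that targets and LUNs created interactively with tgtadm are not persistent across a tgtd restart. If the configuration should survive a restart, the running state can be dumped into a configuration file (tgt-admin is installed by scsi-target-utils, as the file list above shows):

# tgt-admin --dump > /etc/tgt/conf.d/target1.conf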

node2

Install the iSCSI initiator:

# dnf install -y iscsi-initiator-utils
# systemctl enable iscsid --now
# systemctl status iscsid

Discover the iSCSI targets:

# iscsiadm -m discovery -t sendtargets -p 10.80.20.1
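
The initiator IQN that the target records for this node is stored in /etc/iscsi/initiatorname.iscsi; it should match the Initiator line shown later in the tgtadm output on node1:

# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2012-01.com.openeuler:2c5fdcdfca9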

Log in to all discovered targets:

# iscsiadm -m node -L all

View the disk list; a new disk has appeared:

# fdisk -l
Disk /dev/nvme0n1: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk model: VMware Virtual NVMe Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfbb83e39

Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 * 2048 2099199 2097152 1G 83 Linux
/dev/nvme0n1p2 2099200 41943039 39843840 19G 8e Linux LVM


Disk /dev/mapper/openeuler-root: 17 GiB, 18249416704 bytes, 35643392 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/openeuler-swap: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk model: VIRTUAL-DISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
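
To confirm that /dev/sda is really the iSCSI LUN rather than a local disk, the session can be printed at detail level 3, which lists the attached SCSI devices:

# iscsiadm -m session -P 3 | grep -E "Target|Attached scsi disk"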

node1

Check the connection status:

# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.00-00.com.openeuler:target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
I_T nexus: 1
Initiator: iqn.2012-01.com.openeuler:2c5fdcdfca9 alias: node2
Connection: 0
IP Address: 10.80.20.2
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 21475 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/nvme0n2
Backing store flags:
Account information:
ACL information:
10.80.20.2

node2

Log out of the target:

# iscsiadm -m node -T iqn.00-00.com.openeuler:target1 -u

node1

Check the connection status:

# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.00-00.com.openeuler:target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 21475 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/nvme0n2
Backing store flags:
Account information:
ACL information:
10.80.20.2

node2

Log in to target1 again:

# iscsiadm -m node -T iqn.00-00.com.openeuler:target1 -p 10.80.20.1 -l

View the disks:

# fdisk -l
Disk /dev/nvme0n1: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk model: VMware Virtual NVMe Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfbb83e39

Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 * 2048 2099199 2097152 1G 83 Linux
/dev/nvme0n1p2 2099200 41943039 39843840 19G 8e Linux LVM


Disk /dev/mapper/openeuler-root: 17 GiB, 18249416704 bytes, 35643392 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/openeuler-swap: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk model: VIRTUAL-DISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Check the session status:

# iscsiadm -m session
tcp: [2] 10.80.20.1:3260,1 iqn.00-00.com.openeuler:target1 (non-flash)

Format the disk (this also formats the backing disk on the server):

# mkfs.ext4 /dev/sda

Check the filesystem type:

# blkid /dev/sda 
/dev/sda: UUID="61446386-25d6-4350-847a-ee966e9b9ae9" BLOCK_SIZE="4096" TYPE="ext4"

Mount the disk:

# mkdir /mnt/san
# mount /dev/sda /mnt/san/
# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.7G 0 1.7G 0% /dev
tmpfs 1.7G 0 1.7G 0% /dev/shm
tmpfs 1.7G 33M 1.7G 2% /run
tmpfs 1.7G 0 1.7G 0% /sys/fs/cgroup
/dev/mapper/openeuler-root 17G 2.5G 14G 16% /
tmpfs 1.7G 0 1.7G 0% /tmp
/dev/nvme0n1p1 976M 126M 783M 14% /boot
tmpfs 341M 0 341M 0% /run/user/0
/dev/sda 20G 45M 19G 1% /mnt/san

Create a test file:

# cd /mnt/san/
# echo "node2" > test.txt
# cat test.txt
node2

node1

Bind target1 so that node3 can also access it:

# tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address 10.80.20.3
# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.00-00.com.openeuler:target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
I_T nexus: 2
Initiator: iqn.2012-01.com.openeuler:2c5fdcdfca9 alias: node2
Connection: 0
IP Address: 10.80.20.2
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 21475 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/nvme0n2
Backing store flags:
Account information:
ACL information:
10.80.20.2
10.80.20.3

node3

Install the iSCSI initiator:

# dnf install -y iscsi-initiator-utils
# systemctl enable iscsid --now
# systemctl status iscsid

Discover the iSCSI targets:

# iscsiadm -m discovery -t sendtargets -p 10.80.20.1

Log in to all discovered targets:

# iscsiadm -m node -L all

Check the session status:

# iscsiadm -m session

View the disk list; a new disk has appeared:

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
sr0 11:0 1 4.1G 0 rom
nvme0n1 259:0 0 20G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot
└─nvme0n1p2 259:2 0 19G 0 part
├─openeuler-root 253:0 0 17G 0 lvm /
└─openeuler-swap 253:1 0 2G 0 lvm [SWAP]

Mount the disk:

# mkdir /mnt/san
# mount /dev/sda /mnt/san/
# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.7G 0 1.7G 0% /dev
tmpfs 1.7G 0 1.7G 0% /dev/shm
tmpfs 1.7G 49M 1.7G 3% /run
tmpfs 1.7G 0 1.7G 0% /sys/fs/cgroup
/dev/mapper/openeuler-root 17G 2.5G 14G 16% /
tmpfs 1.7G 0 1.7G 0% /tmp
/dev/nvme0n1p1 976M 126M 783M 14% /boot
tmpfs 341M 0 341M 0% /run/user/0
/dev/sda 20G 45M 19G 1% /mnt/san

Check the directory; the file written by node2 is there:

# cd /mnt/san/
# cat test.txt
node2

Create a test file:

# echo "node3" > test1.txt
# cat test1.txt
node3

node2

The file created on node3 has not appeared here:

# cd /mnt/san/
# ls
lost+found test.txt

Create a new file:

# echo "node2 new" > test2.txt
# cat test2.txt
node2 new

node3

Remount the disk: test1.txt has been lost. ext4 is not a cluster file system, so two nodes mounting the same LUN at the same time do not see each other's writes, and data can be lost:

# cd
# umount /mnt/san
# mount /dev/sda /mnt/san/
# ls /mnt/san/
lost+found test2.txt test.txt

node2, node3

Unmount /mnt/san:

# cd
# umount /mnt/san/

Log out of the target1 connection:

# iscsiadm -m node -T iqn.00-00.com.openeuler:target1 -u
# iscsiadm -m session

node1

View the target connection information:

# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.00-00.com.openeuler:target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 21475 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/nvme0n2
Backing store flags:
Account information:
ACL information:
10.80.20.2
10.80.20.3

In order, remove the bound initiator addresses, then the logical unit, then target1:

# tgtadm --lld iscsi --mode target --op unbind --tid 1 --initiator-address 10.80.20.2
# tgtadm --lld iscsi --mode target --op unbind --tid 1 --initiator-address 10.80.20.3
# tgtadm --lld iscsi --mode logicalunit --op delete --tid 1 --lun 1
# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.00-00.com.openeuler:target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
Account information:
ACL information:

Delete target1:

# tgtadm --lld iscsi --mode target --op delete --tid 1
# tgtadm --lld iscsi --mode target --op show

Create a target from the targets configuration file instead:

# vim /etc/tgt/targets.conf
# append the following at the end of the file
<target iqn.00-00.com.openeuler.target2>
backing-store /dev/nvme0n2
initiator-address 10.80.20.0/24
</target>
# systemctl restart tgtd
# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.00-00.com.openeuler.target2
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 21475 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/nvme0n2
Backing store flags:
Account information:
ACL information:
10.80.20.0/24

node2

Rediscover and log in to the target:

# iscsiadm -m discovery -t sendtargets -p 10.80.20.1
10.80.20.1:3260,1 iqn.00-00.com.openeuler.target2
# iscsiadm -m node -L all
Logging in to [iface: default, target: iqn.00-00.com.openeuler.target2, portal: 10.80.20.1,3260]
Login to [iface: default, target: iqn.00-00.com.openeuler.target2, portal: 10.80.20.1,3260] successful.
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
sr0 11:0 1 4.1G 0 rom
nvme0n1 259:0 0 20G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot
└─nvme0n1p2 259:2 0 19G 0 part
├─openeuler-root 253:0 0 17G 0 lvm /
└─openeuler-swap 253:1 0 2G 0 lvm [SWAP]

Configure automatic login:

# iscsiadm -m node -T iqn.00-00.com.openeuler.target2 -p 10.80.20.1 --op update -n node.startup -v automatic
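
The change can be verified by printing the node record; it should now contain node.startup = automatic:

# iscsiadm -m node -T iqn.00-00.com.openeuler.target2 -p 10.80.20.1 | grep node.startup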

Configure automatic mounting:

# vim /etc/fstab
# append the following at the end of the file
/dev/sda /mnt/san ext4 defaults,_netdev 0 0
# mount -a
# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.7G 0 1.7G 0% /dev
tmpfs 1.7G 0 1.7G 0% /dev/shm
tmpfs 1.7G 98M 1.6G 6% /run
tmpfs 1.7G 0 1.7G 0% /sys/fs/cgroup
/dev/mapper/openeuler-root 17G 2.5G 14G 16% /
tmpfs 1.7G 0 1.7G 0% /tmp
/dev/nvme0n1p1 976M 126M 783M 14% /boot
tmpfs 341M 0 341M 0% /run/user/0
/dev/sda 20G 45M 19G 1% /mnt/san

GlusterFS Distributed Storage Management Lab

Node    IP            OS               Role       CPU      Memory  Disks
node1   10.80.20.1    openEuler 20.03  glusterfs  2 cores  4GB     7 x 20GB
node2   10.80.20.2    openEuler 20.03  glusterfs  2 cores  4GB     7 x 20GB
node3   10.80.20.3    openEuler 20.03  glusterfs  2 cores  4GB     7 x 20GB
node4   10.80.20.4    openEuler 20.03  glusterfs  2 cores  4GB     7 x 20GB
node5   10.80.20.5    openEuler 20.03  glusterfs  2 cores  4GB     7 x 20GB
node6   10.80.20.6    openEuler 20.03  glusterfs  2 cores  4GB     7 x 20GB
node7   10.80.20.7    openEuler 20.03  client     2 cores  4GB     20GB

node1, node2, node3, node4, node5, node6

View the disks:

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 4.1G 0 rom
nvme0n1 259:0 0 20G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot
└─nvme0n1p2 259:2 0 19G 0 part
├─openeuler-root 253:0 0 17G 0 lvm /
└─openeuler-swap 253:1 0 2G 0 lvm [SWAP]
nvme0n2 259:3 0 20G 0 disk
nvme0n3 259:4 0 20G 0 disk
nvme0n4 259:5 0 20G 0 disk
nvme0n5 259:6 0 20G 0 disk
nvme0n6 259:7 0 20G 0 disk
nvme0n7 259:8 0 20G 0 disk

Format the data disks:

# for i in {2..7}; do mkfs.xfs /dev/nvme0n${i}; done
# for i in {2..7}; do blkid /dev/nvme0n${i}; done
/dev/nvme0n2: UUID="3a84a02e-64db-4d30-811b-5a344e517490" BLOCK_SIZE="512" TYPE="xfs"
/dev/nvme0n3: UUID="1b9836ec-5b3d-4b59-a1f3-392cd57a0d32" BLOCK_SIZE="512" TYPE="xfs"
/dev/nvme0n4: UUID="070851f5-47a6-4b04-84e5-242b7e94c8e5" BLOCK_SIZE="512" TYPE="xfs"
/dev/nvme0n5: UUID="6e641bcd-7a71-409d-a953-31275dffb750" BLOCK_SIZE="512" TYPE="xfs"
/dev/nvme0n6: UUID="dcf45ad1-65d9-4332-8448-7586744c877f" BLOCK_SIZE="512" TYPE="xfs"
/dev/nvme0n7: UUID="ef0ab789-4299-430c-82a5-7bf73d9f0418" BLOCK_SIZE="512" TYPE="xfs"

Mount the disks:

# for i in {2..7}; do mkdir -p /exp/nvme0n${i}; done
# vim /etc/fstab
# append the following at the end of the file
/dev/nvme0n2 /exp/nvme0n2 xfs defaults 0 0
/dev/nvme0n3 /exp/nvme0n3 xfs defaults 0 0
/dev/nvme0n4 /exp/nvme0n4 xfs defaults 0 0
/dev/nvme0n5 /exp/nvme0n5 xfs defaults 0 0
/dev/nvme0n6 /exp/nvme0n6 xfs defaults 0 0
/dev/nvme0n7 /exp/nvme0n7 xfs defaults 0 0
# mount -a
# df -h | grep exp
/dev/nvme0n2 20G 176M 20G 1% /exp/nvme0n2
/dev/nvme0n3 20G 176M 20G 1% /exp/nvme0n3
/dev/nvme0n4 20G 176M 20G 1% /exp/nvme0n4
/dev/nvme0n5 20G 176M 20G 1% /exp/nvme0n5
/dev/nvme0n6 20G 176M 20G 1% /exp/nvme0n6
/dev/nvme0n7 20G 176M 20G 1% /exp/nvme0n7
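
Device names such as /dev/nvme0nX are not guaranteed to be stable when disks are added or removed; a more robust variant of the fstab entries uses the UUIDs reported by blkid above, for example:

UUID=3a84a02e-64db-4d30-811b-5a344e517490 /exp/nvme0n2 xfs defaults 0 0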

Create a subdirectory under each mount point to serve as the GlusterFS brick (using a subdirectory means the brick cannot start if the underlying filesystem is not mounted):

# for i in `ls -d /exp/*`; do mkdir -p ${i}/brick; done

Add hosts entries:

# vim /etc/hosts
# append the following at the end of the file
10.80.20.1 node1
10.80.20.2 node2
10.80.20.3 node3
10.80.20.4 node4
10.80.20.5 node5
10.80.20.6 node6

Install GlusterFS:

# dnf install -y glusterfs-server
# systemctl enable glusterd --now
# systemctl status glusterd

node1

Add the other nodes to the trusted pool:

# gluster peer probe node2
# gluster peer probe node3
# gluster peer probe node4
# gluster peer probe node5
# gluster peer probe node6

View the trusted pool status (this can be checked from any node):

# gluster peer status
Number of Peers: 5

Hostname: node2
Uuid: d956c92f-7ee5-40a3-afb3-87c094c9b2ea
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 82b11a23-0352-4e1a-89da-a013101d94e3
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: 4de3c408-1b51-4339-80ea-1fa2853a8dc2
State: Peer in Cluster (Connected)

Hostname: node5
Uuid: ffa4a642-764d-4997-84b0-16f940d84796
State: Peer in Cluster (Connected)

Hostname: node6
Uuid: 81bfccbc-be61-45c5-afde-77d54b4832b0
State: Peer in Cluster (Connected)

View the trusted pool list (this can be checked from any node):

# gluster pool list
UUID Hostname State
d956c92f-7ee5-40a3-afb3-87c094c9b2ea node2 Connected
82b11a23-0352-4e1a-89da-a013101d94e3 node3 Connected
4de3c408-1b51-4339-80ea-1fa2853a8dc2 node4 Connected
ffa4a642-764d-4997-84b0-16f940d84796 node5 Connected
81bfccbc-be61-45c5-afde-77d54b4832b0 node6 Connected
dc5bc664-e75c-4582-b6c0-b3329181a197 localhost Connected

Create a volume for testing (replica 2 triggers a split-brain warning; answer y to continue):

# gluster volume create test-volume replica 2 node4:/exp/nvme0n2/brick/ node5:/exp/nvme0n2/brick/
y

View the volume information:

# gluster volume info

Volume Name: test-volume
Type: Replicate
Volume ID: c7d0d7a3-c60e-4738-8da4-ee2315556fe9
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node4:/exp/nvme0n2/brick
Brick2: node5:/exp/nvme0n2/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

Start the volume:

# gluster volume start test-volume
# gluster volume info

Volume Name: test-volume
Type: Replicate
Volume ID: c7d0d7a3-c60e-4738-8da4-ee2315556fe9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node4:/exp/nvme0n2/brick
Brick2: node5:/exp/nvme0n2/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

node7

Install the client:

# dnf install -y glusterfs glusterfs-fuse

Add hosts entries:

# vim /etc/hosts
# append the following at the end of the file
10.80.20.1 node1
10.80.20.2 node2
10.80.20.3 node3
10.80.20.4 node4
10.80.20.5 node5
10.80.20.6 node6

Mount the volume:

# mkdir -p /mnt/gfs/test
# mount -t glusterfs node1:test-volume /mnt/gfs/test
# df -h | grep test-volume
node1:test-volume 20G 380M 20G 2% /mnt/gfs/test
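
If the mount should persist across reboots, an /etc/fstab entry in the same style as the NFS and iSCSI examples above can be added; _netdev delays the mount until the network is up:

node1:test-volume /mnt/gfs/test glusterfs defaults,_netdev 0 0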

Create a test file:

# cd /mnt/gfs/test/
# echo "glusterfs" > test.txt
# cat test.txt
glusterfs

node4, node5

Check the brick contents on the nodes hosting the volume:

# cat /exp/nvme0n2/brick/test.txt
glusterfs

node1

Create a distributed volume:

# gluster volume create gv-dis node1:/exp/nvme0n2/brick/ node2:/exp/nvme0n2/brick/
# gluster volume start gv-dis
# gluster volume info

Volume Name: gv-dis
Type: Distribute
Volume ID: 0beaf7a1-d6ef-4278-8b30-651f03be3b68
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node1:/exp/nvme0n2/brick
Brick2: node2:/exp/nvme0n2/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on

Volume Name: test-volume
Type: Replicate
Volume ID: c7d0d7a3-c60e-4738-8da4-ee2315556fe9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node4:/exp/nvme0n2/brick
Brick2: node5:/exp/nvme0n2/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

View the volume list:

# gluster volume list
gv-dis
test-volume

node7

Mount the distributed volume:

# mkdir -p /mnt/gfs/dis
# mount -t glusterfs node1:gv-dis /mnt/gfs/dis/
# df -h | grep gv-dis
node1:gv-dis 40G 760M 40G 2% /mnt/gfs/dis

Test the volume:

# cd /mnt/gfs/dis/
# for i in {1..5}; do dd if=/dev/zero of=/mnt/gfs/dis/test${i}.txt bs=1M count=40; done
# ll /mnt/gfs/dis/test*
-rw-r--r-- 1 root root 40M 12月 3 12:44 /mnt/gfs/dis/test1.txt
-rw-r--r-- 1 root root 40M 12月 3 12:44 /mnt/gfs/dis/test2.txt
-rw-r--r-- 1 root root 40M 12月 3 12:44 /mnt/gfs/dis/test3.txt
-rw-r--r-- 1 root root 40M 12月 3 12:44 /mnt/gfs/dis/test4.txt
-rw-r--r-- 1 root root 40M 12月 3 12:44 /mnt/gfs/dis/test5.txt

node1, node2

Check where the files were placed:

# node1
# ls /exp/nvme0n2/brick/
test1.txt test3.txt test4.txt
# node2
# ls /exp/nvme0n2/brick/
test2.txt test5.txt

node1

Create a replicated volume with 3 replicas:

# gluster volume create gv-rep replica 3 node1:/exp/nvme0n3/brick/ node2:/exp/nvme0n3/brick/ node3:/exp/nvme0n3/brick/
# gluster volume start gv-rep

node7

Mount the replicated volume:

# mkdir -p /mnt/gfs/rep
# mount -t glusterfs node1:gv-rep /mnt/gfs/rep/
# df -h | grep rep
node1:gv-rep 20G 380M 20G 2% /mnt/gfs/rep

Test the replicated volume:

# for i in {1..5}; do dd if=/dev/zero of=/mnt/gfs/rep/test${i}.txt bs=1M count=40; done

node1, node2, node3

Check the data; every node holds a full copy:

# ls /exp/nvme0n3/brick/
test1.txt test2.txt test3.txt test4.txt test5.txt

node1

Create a distributed replicated volume (answer y to the replica 2 split-brain warning):

# gluster volume create gv-disrep replica 2 node1:/exp/nvme0n4/brick/ node2:/exp/nvme0n4/brick/ node3:/exp/nvme0n4/brick/ node4:/exp/nvme0n4/brick/
y
# gluster volume start gv-disrep

View the volume information:

# gluster volume info gv-disrep

Volume Name: gv-disrep
Type: Distributed-Replicate
Volume ID: 8b5c435a-d706-4169-bbb3-6a02047b17a5
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/exp/nvme0n4/brick
Brick2: node2:/exp/nvme0n4/brick
Brick3: node3:/exp/nvme0n4/brick
Brick4: node4:/exp/nvme0n4/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

node7

Mount the distributed replicated volume:

# mkdir -p /mnt/gfs/disrep
# mount -t glusterfs node1:gv-disrep /mnt/gfs/disrep/
# df -h | grep disrep
node1:gv-disrep 40G 760M 40G 2% /mnt/gfs/disrep

Test the distributed replicated volume:

# for i in {1..5}; do dd if=/dev/zero of=/mnt/gfs/disrep/test${i}.txt bs=1M count=40; done
# ls /mnt/gfs/disrep/
test1.txt test2.txt test3.txt test4.txt test5.txt

node1, node2, node3, node4

Check the data distribution:

# node1
# ls /exp/nvme0n4/brick/
test1.txt test3.txt test4.txt
# node2
# ls /exp/nvme0n4/brick/
test1.txt test3.txt test4.txt
# node3
# ls /exp/nvme0n4/brick/
test2.txt test5.txt
# node4
# ls /exp/nvme0n4/brick/
test2.txt test5.txt

node1

Create a dispersed volume:

# gluster volume create gv-disp disperse 3 node1:/exp/nvme0n6/brick/ node2:/exp/nvme0n6/brick/ node3:/exp/nvme0n6/brick/
# gluster volume start gv-disp

View the volume information:

# gluster volume info gv-disp

Volume Name: gv-disp
Type: Disperse
Volume ID: 425de265-ff77-4141-900d-63450d3ec5a8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node1:/exp/nvme0n6/brick
Brick2: node2:/exp/nvme0n6/brick
Brick3: node3:/exp/nvme0n6/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on

node7

Mount the dispersed volume:

# mkdir -p /mnt/gfs/disp
# mount -t glusterfs node1:gv-disp /mnt/gfs/disp/
# df -h | grep disp
node1:gv-disp 40G 760M 40G 2% /mnt/gfs/disp

Test the dispersed volume:

# for i in {1..5}; do dd if=/dev/zero of=/mnt/gfs/disp/test${i}.txt bs=1M count=40; done
# ls /mnt/gfs/disp/
test1.txt test2.txt test3.txt test4.txt test5.txt

node1, node2, node3

Check the data distribution; every brick stores an encoded fragment of each file, so the same file names appear on every node:

# ls /exp/nvme0n6/brick/
test1.txt test2.txt test3.txt test4.txt test5.txt

node1

Create a distributed dispersed volume:

# gluster volume create gv-dd disperse 3 redundancy 1 node1:/exp/nvme0n7/brick/ node2:/exp/nvme0n7/brick/ node3:/exp/nvme0n7/brick/ node4:/exp/nvme0n7/brick/ node5:/exp/nvme0n7/brick/ node6:/exp/nvme0n7/brick/
# gluster volume start gv-dd
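
The layout can be confirmed with gluster volume info gv-dd; with six bricks, disperse 3 and redundancy 1, it should report two dispersed subvolumes, i.e. Number of Bricks: 2 x (2 + 1) = 6.

# gluster volume info gv-dd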

node7

Mount the distributed dispersed volume:

# mkdir -p /mnt/gfs/dd
# mount -t glusterfs node1:gv-dd /mnt/gfs/dd/
# df -h | grep dd
node1:gv-dd 80G 1.5G 79G 2% /mnt/gfs/dd

Test the distributed dispersed volume:

# for i in {1..5}; do dd if=/dev/zero of=/mnt/gfs/dd/test${i}.txt bs=1M count=40; done
# ls /mnt/gfs/dd/
test1.txt test2.txt test3.txt test4.txt test5.txt

node1, node2, node3, node4, node5, node6

Check the data distribution:

# node1, node2, node3
# ls /exp/nvme0n7/brick/
test1.txt test3.txt test4.txt
# node4, node5, node6
# ls /exp/nvme0n7/brick/
test2.txt test5.txt

node1

Add a brick to the distributed volume gv-dis:

# gluster volume add-brick gv-dis node3:/exp/nvme0n2/brick
# gluster volume info gv-dis

Volume Name: gv-dis
Type: Distribute
Volume ID: 0beaf7a1-d6ef-4278-8b30-651f03be3b68
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/exp/nvme0n2/brick
Brick2: node2:/exp/nvme0n2/brick
Brick3: node3:/exp/nvme0n2/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on

node7

View the volume capacities (grep gv-dis also matches gv-disrep and gv-disp):

# df -h | grep gv-dis
node1:gv-dis 60G 1.4G 59G 3% /mnt/gfs/dis
node1:gv-disrep 40G 960M 40G 3% /mnt/gfs/disrep
node1:gv-disp 40G 960M 40G 3% /mnt/gfs/disp

Create some more files:

# for i in {1..3}; do dd if=/dev/zero of=/mnt/gfs/dis/add${i}.txt bs=1M count=40; done

node1, node2, node3

Check the file distribution; until the volume is rebalanced, the new brick is not used:

# node1
# ls /exp/nvme0n2/brick/
add1.txt test1.txt test3.txt test4.txt
# node2
# ls /exp/nvme0n2/brick/
add2.txt add3.txt test2.txt test5.txt
# node3
# ls /exp/nvme0n2/brick/

node1

Rebalance the layout:

# gluster volume rebalance gv-dis start
# gluster volume rebalance gv-dis status
Node Rebalanced-files size scanned failures skipped status run time in h:m:s
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
node2 0 0Bytes 4 0 0 completed 0:00:00
node3 0 0Bytes 1 0 0 completed 0:00:00
localhost 1 40.0MB 4 0 0 completed 0:00:00
volume rebalance: gv-dis: success

node1, node2, node3

Check the file distribution again:

# node1
# ls /exp/nvme0n2/brick/
add1.txt test1.txt test3.txt
# node2
# ls /exp/nvme0n2/brick/
add2.txt add3.txt test2.txt test5.txt
# node3
# ls /exp/nvme0n2/brick/
test4.txt

node1

Remove a brick from gv-dis (start migrates its data away; commit finalizes the removal):

# gluster volume remove-brick gv-dis node1:/exp/nvme0n2/brick start
y
# gluster volume remove-brick gv-dis node1:/exp/nvme0n2/brick status
Node Rebalanced-files size scanned failures skipped status run time in h:m:s
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
localhost 3 120.0MB 3 0 0 completed 0:00:01
# gluster volume remove-brick gv-dis node1:/exp/nvme0n2/brick commit

node1, node2, node3

Check the file distribution again:

# node1
# ls /exp/nvme0n2/brick/
# node2
# ls /exp/nvme0n2/brick/
add1.txt add2.txt add3.txt test1.txt test2.txt test3.txt test5.txt
# node3
# ls /exp/nvme0n2/brick/
test4.txt

node1

View the volume list:

# gluster volume list
gv-dd
gv-dis
gv-disp
gv-disrep
gv-rep
test-volume

Stop and delete the test-volume volume:

# gluster volume stop test-volume
y
# gluster volume delete test-volume
y
# gluster volume list
gv-dd
gv-dis
gv-disp
gv-disrep
gv-rep

Disaster test: stop the glusterd service on node1:

# systemctl stop glusterd

node2

View the trusted pool list:

# gluster pool list
UUID Hostname State
dc5bc664-e75c-4582-b6c0-b3329181a197 node1 Disconnected
82b11a23-0352-4e1a-89da-a013101d94e3 node3 Connected
4de3c408-1b51-4339-80ea-1fa2853a8dc2 node4 Connected
ffa4a642-764d-4997-84b0-16f940d84796 node5 Connected
81bfccbc-be61-45c5-afde-77d54b4832b0 node6 Connected
d956c92f-7ee5-40a3-afb3-87c094c9b2ea localhost Connected

View the status of each volume; node1's bricks no longer appear:

# gluster volume status gv-rep
Status of volume: gv-rep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node2:/exp/nvme0n3/brick 49153 0 Y 440488
Brick node3:/exp/nvme0n3/brick 49152 0 Y 444428
Self-heal Daemon on localhost N/A N/A Y 172271
Self-heal Daemon on node6 N/A N/A Y 172801
Self-heal Daemon on node4 N/A N/A Y 172291
Self-heal Daemon on node3 N/A N/A Y 175742
Self-heal Daemon on node5 N/A N/A Y 172793

Task Status of Volume gv-rep
------------------------------------------------------------------------------
There are no active volume tasks
# gluster volume status gv-disrep
Status of volume: gv-disrep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node2:/exp/nvme0n4/brick 49154 0 Y 473630
Brick node3:/exp/nvme0n4/brick 49153 0 Y 477715
Brick node4:/exp/nvme0n4/brick 49153 0 Y 474384
Self-heal Daemon on localhost N/A N/A Y 172271
Self-heal Daemon on node5 N/A N/A Y 172793
Self-heal Daemon on node3 N/A N/A Y 175742
Self-heal Daemon on node6 N/A N/A Y 172801
Self-heal Daemon on node4 N/A N/A Y 172291

Task Status of Volume gv-disrep
------------------------------------------------------------------------------
There are no active volume tasks
# gluster volume status gv-disp
Status of volume: gv-disp
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node2:/exp/nvme0n6/brick 49155 0 Y 643778
Brick node3:/exp/nvme0n6/brick 49154 0 Y 647777
Self-heal Daemon on localhost N/A N/A Y 172271
Self-heal Daemon on node4 N/A N/A Y 172291
Self-heal Daemon on node3 N/A N/A Y 175742
Self-heal Daemon on node5 N/A N/A Y 172793
Self-heal Daemon on node6 N/A N/A Y 172801

Task Status of Volume gv-disp
------------------------------------------------------------------------------
There are no active volume tasks
# gluster volume status gv-dd
Status of volume: gv-dd
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node2:/exp/nvme0n7/brick 49156 0 Y 697488
Brick node3:/exp/nvme0n7/brick 49155 0 Y 701165
Brick node4:/exp/nvme0n7/brick 49154 0 Y 698320
Brick node5:/exp/nvme0n7/brick 49153 0 Y 698951
Brick node6:/exp/nvme0n7/brick 49152 0 Y 698403
Self-heal Daemon on localhost N/A N/A Y 172271
Self-heal Daemon on node4 N/A N/A Y 172291
Self-heal Daemon on node5 N/A N/A Y 172793
Self-heal Daemon on node3 N/A N/A Y 175742
Self-heal Daemon on node6 N/A N/A Y 172801

Task Status of Volume gv-dd
------------------------------------------------------------------------------
There are no active volume tasks

node7

The volume mounts on the client are unaffected and the data is still accessible:

# df -h | grep gv
node1:gv-dis 40G 1.1G 39G 3% /mnt/gfs/dis
node1:gv-rep 20G 580M 20G 3% /mnt/gfs/rep
node1:gv-disrep 40G 960M 40G 3% /mnt/gfs/disrep
node1:gv-disp 40G 960M 40G 3% /mnt/gfs/disp
node1:gv-dd 80G 1.7G 79G 3% /mnt/gfs/dd
# ls /mnt/gfs/dd/
test1.txt test2.txt test3.txt test4.txt test5.txt
# ls /mnt/gfs/dis
add1.txt add2.txt add3.txt test1.txt test2.txt test3.txt test4.txt test5.txt
# ls /mnt/gfs/disp/
test1.txt test2.txt test3.txt test4.txt test5.txt
# ls /mnt/gfs/disrep/
test1.txt test2.txt test3.txt test4.txt test5.txt
# ls /mnt/gfs/rep/
test1.txt test2.txt test3.txt test4.txt test5.txt
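
node1

After the test, glusterd can be started again on node1, and any pending self-heal entries on the replicated volumes can be checked with the standard heal commands:

# systemctl start glusterd
# gluster volume heal gv-rep info
# gluster volume heal gv-disrep info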