DRBD + Heartbeat + NFS Configuration on CentOS 6.5 (Part 1)

DRBD Configuration

Distributed Replicated Block Device (DRBD) is a software-based, shared-nothing storage replication solution that mirrors the contents of block devices between servers.

You can think of it as a network RAID 1: even if one of the two servers loses power or crashes, the data is unaffected, and genuine hot failover can be handled by Heartbeat without manual intervention.

I. Environment
OS: CentOS 6.6 x64 (kernel 2.6.32-504.16.2.el6.x86_64)
DRBD version: DRBD-8.4.3

node1 (primary node)    IP: 192.168.0.191    hostname: drbd1.corp.com
node2 (secondary node)  IP: 192.168.0.192    hostname: drbd2.corp.com

(node1)        steps performed on the primary node only
(node2)        steps performed on the secondary node only
(node1,node2)  steps performed on both nodes

II. Pre-installation preparation: (node1,node2)
1. Disable iptables and SELinux to avoid errors during installation.

# service iptables stop
# chkconfig iptables off
# setenforce 0
# vi /etc/selinux/config
---------------
SELINUX=disabled
---------------
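As an optional sanity check, you can confirm on both nodes that SELinux is no longer enforcing and that iptables is really stopped (the exact wording of the output may differ slightly):

# getenforce
Permissive
# service iptables status
iptables: Firewall is not running.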

2. Configure the hosts file

# vi /etc/hosts
192.168.0.191  drbd1.corp.com
192.168.0.192  drbd2.corp.com
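It is worth verifying on each node that both hostnames resolve to the right addresses before continuing, for example:

# ping -c 2 drbd1.corp.com
# ping -c 2 drbd2.corp.com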

3. On each of the two virtual machines, add a 10 GB disk and create a single partition on it (sdb1, 10 GB) to serve as the DRBD backing device, then create the /store directory on the local filesystem. Do not mount anything yet.

# fdisk /dev/sdb
----------------
n
p
1
1
"+10G"
w
----------------
# mkdir /store
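If you want to double-check that the kernel sees the new partition before moving on, something like the following should list /dev/sdb1 at roughly 10G (run partprobe first if the partition table was not re-read automatically):

# partprobe /dev/sdb
# fdisk -l /dev/sdb
# grep sdb /proc/partitions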

4. Synchronize the clocks:

# ntpdate -u asia.pool.ntp.org
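A single ntpdate run only corrects the clock once; to keep the two nodes in sync over time, one simple option (instead of running ntpd) is a cron entry such as:

# crontab -e
----------------
*/30 * * * * /usr/sbin/ntpdate -u asia.pool.ntp.org > /dev/null 2>&1
----------------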

III. Installing and configuring DRBD:
1. Install dependency packages: (node1,node2)

# yum install gcc gcc-c++ make glibc flex kernel-devel kernel-headers
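Because the DRBD kernel module is compiled against the running kernel, it is worth confirming that the installed kernel-devel package matches the output of uname -r before building; if the versions differ, install the matching kernel-devel or update the kernel and reboot first:

# uname -r
2.6.32-504.16.2.el6.x86_64
# rpm -q kernel-devel
kernel-devel-2.6.32-504.16.2.el6.x86_64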

2. Install DRBD: (node1,node2)

# wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz
# tar zxvf drbd-8.4.3.tar.gz
# cd drbd-8.4.3
# ./configure --prefix=/usr/local/drbd --with-km
# make KDIR=/usr/src/kernels/2.6.32-504.16.2.el6.x86_64/
# make install
# mkdir -p /usr/local/drbd/var/run/drbd
# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d
# chkconfig --add drbd
# chkconfig drbd on

3. Load the DRBD kernel module: (node1,node2)

# modprobe drbd

Check whether the DRBD module has been loaded into the kernel:

# lsmod | grep drbd
drbd                  310172  4
libcrc32c               1246  1 drbd
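Once the module is loaded, /proc/drbd also appears; reading it is another quick way to confirm the module version (it should report 8.4.3 here):

# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)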

4. Configure the DRBD parameters: (node1,node2)

# vi /usr/local/drbd/etc/drbd.conf

Clear the contents of the file and add the following configuration:

resource r0 {
  protocol C;
  startup {
    wfc-timeout 0;
    degr-wfc-timeout 120;
  }
  disk {
    on-io-error detach;
  }
  net {
    timeout 60;
    connect-int 10;
    ping-int 10;
    max-buffers 2048;
    max-epoch-size 2048;
  }
  syncer {
    rate 200M;
  }
  on drbd1.corp.com {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.0.191:7788;
    meta-disk internal;
  }
  on drbd2.corp.com {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.0.192:7788;
    meta-disk internal;
  }
}

Note: change the hostnames, IP addresses, and disk in the configuration above to match your own environment.
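Before continuing, you can let drbdadm parse the file: drbdadm dump r0 prints the resource back if the syntax is valid and reports an error otherwise. With the prefix used above the tools are installed under /usr/local/drbd, so use the full path or adjust PATH if the command is not found:

# drbdadm dump r0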

5. Create the DRBD device and activate the r0 resource: (node1,node2)

# mknod /dev/drbd0 b 147 0
# drbdadm create-md r0      (wait a moment; "success" means the DRBD metadata block was created successfully)

Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.

  --== Creating metadata ==--
As with nodes, we count the total number of devices mirrored by DRBD
at http://usage.drbd.org.

The counter works anonymously. It creates a random number to identify
the device and sends that random number, along with the kernel and
DRBD version, to usage.drbd.org.

http://usage.drbd.org/cgi-bin/insert_usage.pl?nu=716310175600466686&ru=15741444353112217792&rs=1085704704

 * If you wish to opt out entirely, simply enter 'no'.
 * To continue, just press [RETURN]

success

Run the command again:

# drbdadm create-md r0      (this activates r0 successfully)
[need to type 'yes' to confirm]
 yes

Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.

6. Start the DRBD service: (node1,node2)

# service drbd start

Note: the service must be started on both the primary and the secondary node before it takes effect.
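After the service is up on both nodes, the two sides should connect to each other over TCP port 7788; the connection state can be queried directly with drbdadm and should report Connected:

# drbdadm cstate r0
Connected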

7. Check the status: (node1,node2)

# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.corp.com, 2015-05-12 21:05:41
m:res  cs         ro                   ds                         p  mounted  fstype
0:r0   Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

Here ro: Secondary/Secondary means both hosts are currently in the secondary role, and ds is the disk state, shown as "Inconsistent". This is because DRBD cannot yet tell which side is the primary, i.e. whose disk data should be treated as the authoritative copy.
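The same information can also be read field by field with drbdadm, which is sometimes easier to parse than the full status line:

# drbdadm role r0
Secondary/Secondary
# drbdadm dstate r0
Inconsistent/Inconsistent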

8. Configure drbd1.corp.com as the primary node: (node1)

# drbdsetup /dev/drbd0 primary --force

Check the DRBD status on the primary and secondary nodes respectively:
(node1)

# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.corp.com, 2015-05-12 21:05:41
m:res  cs         ro                 ds                 p  mounted  fstype
0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C

(node2)

# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd2.corp.com, 2015-05-12 21:05:46
m:res  cs         ro                 ds                 p  mounted  fstype
0:r0   Connected  Secondary/Primary  UpToDate/UpToDate  C

ro shows Primary/Secondary on the primary node and Secondary/Primary on the secondary node, and ds shows UpToDate/UpToDate, which means the primary/secondary configuration succeeded.
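Right after the --force promotion, DRBD performs a full initial synchronization from node1 to node2, during which ds shows UpToDate/Inconsistent. If you want to watch the progress until it reaches UpToDate/UpToDate as above, /proc/drbd shows a progress bar:

# watch -n 2 cat /proc/drbd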

9. Mount the DRBD device: (node1)
In the status output above, the mounted and fstype columns are empty, so in this step we create a filesystem on the DRBD device and mount it on the local /store directory.

# mkfs.ext4 /dev/drbd0
# mount /dev/drbd0 /store

Note: the DRBD device must not be touched on the Secondary node at all, not even mounted; all reads and writes may only be performed on the Primary node. Only when the Primary node goes down can the Secondary node be promoted to Primary, automatically mount the DRBD device, and carry on working.

DRBD status after a successful mount: (node1)

# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.corp.com, 2015-05-12 21:05:41
m:res  cs         ro                 ds                 p  mounted  fstype
0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C  /store   ext4
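As a final check you can confirm the mount and write some test data; while node2 stays Secondary the device cannot be mounted there, but the writes are still replicated to it in the background:

# df -h /store
# touch /store/testfile
# ls /store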