Setting Up a MongoDB Replica Set

My MongoDB is still running as a single instance. Even though I back up data with mongoexport, it still feels risky... time to learn how to set up a replica set.

Replica Sets in Brief

A MongoDB replica set exists to provide high availability, data distribution, read/write separation, and off-site disaster recovery. It relies on two mechanisms:

  • when data is written, it is quickly replicated to other independent nodes
  • when the node accepting writes fails, a new node is automatically elected to take over (a replica set consists of 3 or more voting members, usually an odd number)
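The rule of thumb about an odd number of voting members comes from majority math: electing a primary needs floor(n/2) + 1 votes, so a fourth voter adds no fault tolerance over three. A quick sketch:

```shell
# Majority needed to elect a primary is floor(n/2) + 1.
# Note that 4 voters tolerate no more failures than 3, hence odd counts.
for n in 3 4 5; do
  majority=$(( n / 2 + 1 ))
  tolerated=$(( n - majority ))
  echo "voters=$n majority=$majority tolerated_failures=$tolerated"
done
```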

Preparing the Nodes

1. The official manual has a tutorial for installing from the command line:
https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/#overview
Since I've already installed MongoDB on this machine following those steps, and I'm only practicing the replica-set setup on a single host, I'll test with the tarball release in a temporary directory instead:

$ mkdir -p MongoWork/MongoData/db{1,2,3}
$ cd MongoWork
$ wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1804-4.2.1.tgz
$ tar -xvf mongodb-linux-x86_64-ubuntu1804-4.2.1.tgz
$ mv mongodb-linux-x86_64-ubuntu1804-4.2.1 mongodb-4.2.1

Create a mongod.conf file in each of the three db directories and fill it with:

systemLog:
  destination: file
  path: /home/top/MongoWork/MongoData/db1/mongod.log
  logAppend: true
storage:
  dbPath: /home/top/MongoWork/MongoData/db1
net:
  bindIp: 0.0.0.0
  port: 28018
replication:
  replSetName: rs0
processManagement:
  fork: true

Note that the path, dbPath, and port values above must be adjusted for each node (the other two nodes use ports 28019 and 28020).
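Since the three files differ only in those three values, a short script can generate them. This is just a convenience sketch assuming the /home/top/MongoWork layout above; set BASE if your layout differs:

```shell
#!/bin/sh
# Generate mongod.conf for db1..db3, varying only path, dbPath and port.
BASE="${BASE:-$HOME/MongoWork/MongoData}"   # adjust to your layout
port=28018
for n in 1 2 3; do
  dir="$BASE/db$n"
  mkdir -p "$dir"
  # Unquoted EOF so $dir and $port expand inside the heredoc.
  cat > "$dir/mongod.conf" <<EOF
systemLog:
  destination: file
  path: $dir/mongod.log
  logAppend: true
storage:
  dbPath: $dir
net:
  bindIp: 0.0.0.0
  port: $port
replication:
  replSetName: rs0
processManagement:
  fork: true
EOF
  port=$(( port + 1 ))
done
```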

Now start the three mongod instances:

$ cd mongodb-4.2.1
$ bin/mongod -f ../MongoData/db1/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3714
child process started successfully, parent exiting
$ bin/mongod -f ../MongoData/db2/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3752
child process started successfully, parent exiting
$ bin/mongod -f ../MongoData/db3/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3790
child process started successfully, parent exiting

Check them with ps and netstat:

$ ps -ef | grep mongod
top 3714 1 3 22:09 ? 00:00:00 bin/mongod -f ../MongoData/db1/mongod.conf
top 3752 1 6 22:09 ? 00:00:01 bin/mongod -f ../MongoData/db2/mongod.conf
top 3790 1 16 22:09 ? 00:00:01 bin/mongod -f ../MongoData/db3/mongod.conf
top 3827 3111 0 22:09 pts/0 00:00:00 grep --color=auto mongod

$ netstat -antp | grep mongod
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:28018 0.0.0.0:* LISTEN 3714/bin/mongod
tcp 0 0 0.0.0.0:28019 0.0.0.0:* LISTEN 3752/bin/mongod
tcp 0 0 0.0.0.0:28020 0.0.0.0:* LISTEN 3790/bin/mongod

Configuring the Nodes

Connect to the first node:

$ bin/mongo top:28018
> rs.initiate()
{
        "info2" : "no configuration specified. Using a default configuration for the set",
        "me" : "top:28018",
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1575211318, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1575211318, 1)
}
rs0:SECONDARY>
rs0:PRIMARY>

The prompt rs0:SECONDARY shows the node has joined the replica set as a secondary; once the election finishes it changes to rs0:PRIMARY, meaning it has become the primary. Now check the node's status:

rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2019-12-01T14:44:00.477Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "majorityVoteCount" : 1,
        "writeMajorityCount" : 1,
        ......
        "members" : [
                {
                        "_id" : 0,
                        "name" : "top:28018",
                        "ip" : "127.0.1.1",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 163,
                        "optime" : {
                                "ts" : Timestamp(1575211439, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-12-01T14:43:59Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "electionTime" : Timestamp(1575211318, 2),
                        "electionDate" : ISODate("2019-12-01T14:41:58Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                }
        ],
        "ok" : 1,
        ......

The members array lists the members of the replica set; at this point it still holds a single node. Next, add the other two:

rs0:PRIMARY> rs.add("top:28019")
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1575211629, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1575211629, 1)
}
rs0:PRIMARY> rs.add("top:28020")
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1575211634, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1575211634, 1)
}
rs0:PRIMARY>

Running rs.status() again shows all three nodes in members.
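As an aside, the same set could have been initiated in one step by passing a full config document to rs.initiate() instead of calling rs.add() twice. This is an equivalent alternative, not what the session above did; host names and ports match this setup:

```
> rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "top:28018" },
      { _id: 1, host: "top:28019" },
      { _id: 2, host: "top:28020" }
    ]
  })
```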

Verifying the Replica Set

Write a few documents on the primary:

rs0:PRIMARY> db.test.insert({ a: 1})
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> db.test.insert({ b: 2})
WriteResult({ "nInserted" : 1 })

In two other terminal sessions, connect to the remaining two instances:

$ bin/mongo top:28019
rs0:SECONDARY> db.test.find()
Error: error: {
        "operationTime" : Timestamp(1575211979, 1),
        "ok" : 0,
        "errmsg" : "not master and slaveOk=false",
        "code" : 13435,
        "codeName" : "NotMasterNoSlaveOk",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1575211979, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
rs0:SECONDARY>

The error says reads are not allowed on a secondary by default; running rs.slaveOk() fixes that (newer shells call this rs.secondaryOk()):

rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> db.test.find()
{ "_id" : ObjectId("5de3d338198c429c8f8587c0"), "a" : 1 }
{ "_id" : ObjectId("5de3d343198c429c8f8587c1"), "b" : 2 }
rs0:SECONDARY>

The data has indeed been replicated.
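For applications, connect with a replica-set URI rather than a single host, so the driver discovers all members and follows failover automatically. The host names and set name below are from this setup:

```
# Connect to the set as a whole; the shell locates the current primary.
$ bin/mongo "mongodb://top:28018,top:28019,top:28020/test?replicaSet=rs0"
# Drivers can route reads to secondaries via readPreference, e.g. appending
# &readPreference=secondaryPreferred to the URI above.
```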

About NTP

When running a replica set, make sure NTP is configured on every node. If the ntpd service is running, it will generally slew the clock gradually and avoid sudden jumps. If it isn't installed, install it first, then add a line to ntp.conf:

server ntp7.aliyun.com

If a sudden clock jump on your machine would have no serious consequences, you can also do a one-shot sync with ntpdate ntp7.aliyun.com.

References

1. https://docs.mongodb.com/manual/administration/production-notes/#recommended-configuration
2. https://tuna.moe/help/ntp/
3. https://help.aliyun.com/document_detail/92704.html
