The best way to learn a database is to start with the official documentation. MongoDB's official documentation is here; a Chinese translation is also available here (the translation does not appear to be finished yet, and quite a few passages are still in English).
The latest MongoDB release at the time of writing is 3.4. Note: 3.4 no longer supports 32-bit x86 systems.
(1) Installation:
(1.1) Prepare 3 machines: mongodb1, mongodb2, and mongodb3. On each of the 3 machines:
Create a yum repo file so the packages can be installed with yum later:
vi /etc/yum.repos.d/mongodb-org-3.4.repo

[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
(1.2) Install with yum
sudo yum install -y mongodb-org
Note: the installation may fail with an error like this:
[root@mongodb2 mongodb]# sudo yum install -y mongodb-org
Loaded plugins: refresh-packagekit, security
Existing lock /var/run/yum.pid: another copy is running as pid 2389.
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: PackageKit
    Memory :  151 M RSS (461 MB VSZ)
    Started: Sat Feb 18 23:48:39 2017 - 42:48 ago
    State  : Sleeping, pid: 2389
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: PackageKit
    Memory :  151 M RSS (461 MB VSZ)
    Started: Sat Feb 18 23:48:39 2017 - 42:50 ago
    State  : Sleeping, pid: 2389
^C
Exiting on user cancel.
Just kill the process holding the yum lock:
[root@mongodb2 mongodb]# kill -9 2389
On each of the 3 machines:
Create the directory structure below. (If the tree command is missing, install it with yum -y install tree.)
[root@mysqlfb-01-CentOS67-CW-17F u01]# pwd
/u01
[root@mysqlfb-01-CentOS67-CW-17F u01]# tree
.
└── mongodbtest
    ├── config
    │   ├── data
    │   └── log
    ├── mongos
    │   └── log
    ├── shard1
    │   ├── data
    │   └── log
    ├── shard2
    │   ├── data
    │   └── log
    └── shard3
        ├── data
        └── log
Because mongos stores no data, it needs no data directory.
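A quick way to create this whole tree on each machine (a one-liner assuming a bash shell with brace expansion):

mkdir -p /u01/mongodbtest/{config,shard1,shard2,shard3}/{data,log} /u01/mongodbtest/mongos/log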
Port assignments:
mongos: 20000, config server: 21000, shard1: 22001, shard2: 22002, shard3: 22003.
(I) Config server configuration:
1. Start a config server on each of the three machines:
mongod --configsvr --replSet cfgReplSet --dbpath /u01/mongodbtest/config/data --port 21000 --logpath /u01/mongodbtest/config/log/config.log --fork
[root@mysqlfb-01-CentOS67-CW-17F mongodbtest]# mongod --configsvr --replSet cfgReplSet --dbpath /u01/mongodbtest/config/data --port 21000 --logpath /u01/mongodbtest/config/log/config.log --fork
about to fork child process, waiting until server is ready for connections.
forked process: 15190
child process started successfully, parent exiting
[root@mysqlfb-01-CentOS67-CW-17F mongodbtest]#
Note: the --replSet cfgReplSet parameter is required starting with MongoDB 3.4, which requires the config servers themselves to be deployed as a replica set.
2. Configure the config servers as a replica set.
Connect to any one of the config servers:
mongo --host 10.13.0.130 --port 21000
[root@mysqlfb-01-CentOS67-CW-17F mongodbtest]# mongo --host 10.13.0.130 --port 21000
MongoDB shell version v3.4.2
connecting to: mongodb://10.13.0.130:21000/
MongoDB server version: 3.4.2
Server has startup warnings:
2017-02-20T18:01:05.528+0800 I STORAGE  [initandlisten]
2017-02-20T18:01:05.528+0800 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2017-02-20T18:01:05.528+0800 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2017-02-20T18:01:05.603+0800 I CONTROL  [initandlisten]
2017-02-20T18:01:05.603+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2017-02-20T18:01:05.603+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2017-02-20T18:01:05.603+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2017-02-20T18:01:05.603+0800 I CONTROL  [initandlisten]
2017-02-20T18:01:05.603+0800 I CONTROL  [initandlisten]
2017-02-20T18:01:05.603+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2017-02-20T18:01:05.603+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2017-02-20T18:01:05.603+0800 I CONTROL  [initandlisten]
2017-02-20T18:01:05.603+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2017-02-20T18:01:05.603+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2017-02-20T18:01:05.603+0800 I CONTROL  [initandlisten]
>
3. Create the replica set:
On the config server you just connected to:
rs.initiate({_id:"cfgReplSet",configsvr:true,members:[{_id:0,host:"10.13.0.130:21000"},{_id:1,host:"10.13.0.131:21000"},{_id:2,host:"10.13.0.132:21000"}]})
> rs.initiate({_id:"cfgReplSet",configsvr:true,members:[{_id:0,host:"10.13.0.130:21000"},{_id:1,host:"10.13.0.131:21000"},{_id:2,host:"10.13.0.132:21000"}]})
{ "ok" : 1 }
cfgReplSet:SECONDARY>
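To verify that the config server replica set has come up, you can list each member's state from the same shell (a minimal check using the standard rs.status() helper; once the election completes you should see one PRIMARY and two SECONDARY members):

rs.status().members.forEach(function(m) { print(m.name + "  " + m.stateStr) })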
(II) Shard configuration:
1. Start shard1 on each of the three machines, in replica-set mode. (--nojournal and --oplogSize 10 keep the footprint small and are only appropriate for a test setup.)
mongod --shardsvr --replSet shard1ReplSet --port 22001 --dbpath /u01/mongodbtest/shard1/data --logpath /u01/mongodbtest/shard1/log/shard1.log --fork --nojournal --oplogSize 10
[root@mysqlfb-01-CentOS67-CW-17F mongodbtest]# mongod --shardsvr --replSet shard1ReplSet --port 22001 --dbpath /u01/mongodbtest/shard1/data --logpath /u01/mongodbtest/shard1/log/shard1.log --fork --nojournal --oplogSize 10
about to fork child process, waiting until server is ready for connections.
forked process: 15372
child process started successfully, parent exiting
[root@mysqlfb-01-CentOS67-CW-17F mongodbtest]#
2. Connect to any one of the shard servers
mongo --host 10.13.0.130 --port 22001
[root@mysqlfb-01-CentOS67-CW-17F mongodbtest]# mongo --host 10.13.0.130 --port 22001
MongoDB shell version v3.4.2
connecting to: mongodb://10.13.0.130:22001/
MongoDB server version: 3.4.2
Server has startup warnings:
2017-02-20T18:30:11.763+0800 I STORAGE  [initandlisten]
2017-02-20T18:30:11.763+0800 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2017-02-20T18:30:11.763+0800 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2017-02-20T18:30:11.860+0800 I CONTROL  [initandlisten]
2017-02-20T18:30:11.860+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2017-02-20T18:30:11.860+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2017-02-20T18:30:11.860+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2017-02-20T18:30:11.860+0800 I CONTROL  [initandlisten]
2017-02-20T18:30:11.860+0800 I CONTROL  [initandlisten]
2017-02-20T18:30:11.860+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2017-02-20T18:30:11.860+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2017-02-20T18:30:11.860+0800 I CONTROL  [initandlisten]
2017-02-20T18:30:11.860+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2017-02-20T18:30:11.860+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2017-02-20T18:30:11.860+0800 I CONTROL  [initandlisten]
>
3. On the shard server you just logged in to, create and initialize the replica set. The third member is added with arbiterOnly: true, so it votes in elections but holds no data.
use admin
rs.initiate({_id:"shard1ReplSet",members:[{_id:0,host:"10.13.0.130:22001"},{_id:1,host:"10.13.0.131:22001"},{_id:2,host:"10.13.0.132:22001",arbiterOnly:true}]})

> use admin
switched to db admin
> rs.initiate({_id:"shard1ReplSet",members:[{_id:0,host:"10.13.0.130:22001"},{_id:1,host:"10.13.0.131:22001"},{_id:2,host:"10.13.0.132:22001",arbiterOnly:true}]})
{ "ok" : 1 }
shard1ReplSet:SECONDARY>
4. Repeat the same steps for shard2 and shard3:
4.1 Start shard2 on each host in replica-set mode:
mongod --shardsvr --replSet shard2ReplSet --port 22002 --dbpath /u01/mongodbtest/shard2/data --logpath /u01/mongodbtest/shard2/log/shard2.log --fork --nojournal --oplogSize 10
4.2 Start shard3 on each host in replica-set mode:
mongod --shardsvr --replSet shard3ReplSet --port 22003 --dbpath /u01/mongodbtest/shard3/data --logpath /u01/mongodbtest/shard3/log/shard3.log --fork --nojournal --oplogSize 10
4.3 Log in to one of the shard servers and initialize shard2:
mongo --host 10.13.0.131 --port 22002
rs.initiate({_id:"shard2ReplSet",members:[{_id:0,host:"10.13.0.131:22002"},{_id:1,host:"10.13.0.132:22002"},{_id:2,host:"10.13.0.130:22002",arbiterOnly:true}]})
4.4 Log in to one of the shard servers and initialize shard3:
mongo --host 10.13.0.132 --port 22003
rs.initiate({_id:"shard3ReplSet",members:[{_id:0,host:"10.13.0.132:22003"},{_id:1,host:"10.13.0.130:22003"},{_id:2,host:"10.13.0.131:22003",arbiterOnly:true}]})
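With all three shard replica sets initiated, a quick non-interactive spot check of each set's member states can be run from any host (a sketch using mongo --eval; expect one PRIMARY, one SECONDARY, and one ARBITER per set):

mongo --host 10.13.0.130 --port 22001 --eval 'rs.status().members.forEach(function(m){ print(m.name + "  " + m.stateStr) })'
mongo --host 10.13.0.131 --port 22002 --eval 'rs.status().members.forEach(function(m){ print(m.name + "  " + m.stateStr) })'
mongo --host 10.13.0.132 --port 22003 --eval 'rs.status().members.forEach(function(m){ print(m.name + "  " + m.stateStr) })'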
(III) Start a mongos on each of the three servers. Note that --configdb now takes a replica-set connection string of the form cfgReplSet/host:port,host:port,...
mongos --configdb cfgReplSet/10.13.0.130:21000,10.13.0.131:21000,10.13.0.132:21000 --port 20000 --logpath /u01/mongodbtest/mongos/log/mongos.log --fork
[root@mysqlfb-01-CentOS67-CW-17F mongodbtest]# mongos --configdb cfgReplSet/10.13.0.130:21000,10.13.0.131:21000,10.13.0.132:21000 --port 20000 --logpath /u01/mongodbtest/mongos/log/mongos.log --fork
about to fork child process, waiting until server is ready for connections.
forked process: 18094
child process started successfully, parent exiting
[root@mysqlfb-01-CentOS67-CW-17F mongodbtest]#
To stress the point again: if the config servers are not set up as a replica set and you use the MongoDB 3.2-style mirrored mode instead, mongos refuses to start:
[root@mysqlfb-01-CentOS67-CW-17F mongodbtest]# mongos --configdb 10.13.0.130:21000,10.13.0.131:21000,10.13.0.132:21000 --port 20000 --logpath /u01/mongodbtest/mongos/log/mongos.log --fork
FailedToParse: mirrored config server connections are not supported; for config server replica sets be sure to use the replica set connection string
try 'mongos --help' for more information
[root@mysqlfb-01-CentOS67-CW-17F mongodbtest]#
At this point the MongoDB data nodes (shards + replica sets), the config servers, and the routers (mongos) are all configured.
The directory tree after installation:
[root@mysqlfb-01-CentOS67-CW-17F mongodbtest]# tree
.
├── config
│   ├── data
│   │   ├── collection-0--2075864821009270561.wt
│   │   ├── collection-12--2075864821009270561.wt
│   │   ├── collection-14--2075864821009270561.wt
│   │   ├── collection-19--2075864821009270561.wt
│   │   ├── collection-2--2075864821009270561.wt
│   │   ├── collection-22--2075864821009270561.wt
│   │   ├── collection-25--2075864821009270561.wt
│   │   ├── collection-29--2075864821009270561.wt
│   │   ├── collection-32--2075864821009270561.wt
│   │   ├── collection-36--2075864821009270561.wt
│   │   ├── collection-38--2075864821009270561.wt
│   │   ├── collection-4--2075864821009270561.wt
│   │   ├── collection-5--2075864821009270561.wt
│   │   ├── collection-7--2075864821009270561.wt
│   │   ├── collection-9--2075864821009270561.wt
│   │   ├── diagnostic.data
│   │   │   ├── metrics.2017-02-20T10-01-06Z-00000
│   │   │   └── metrics.interim
│   │   ├── index-10--2075864821009270561.wt
│   │   ├── index-11--2075864821009270561.wt
│   │   ├── index-1--2075864821009270561.wt
│   │   ├── index-13--2075864821009270561.wt
│   │   ├── index-15--2075864821009270561.wt
│   │   ├── index-16--2075864821009270561.wt
│   │   ├── index-17--2075864821009270561.wt
│   │   ├── index-18--2075864821009270561.wt
│   │   ├── index-20--2075864821009270561.wt
│   │   ├── index-21--2075864821009270561.wt
│   │   ├── index-23--2075864821009270561.wt
│   │   ├── index-24--2075864821009270561.wt
│   │   ├── index-26--2075864821009270561.wt
│   │   ├── index-27--2075864821009270561.wt
│   │   ├── index-28--2075864821009270561.wt
│   │   ├── index-30--2075864821009270561.wt
│   │   ├── index-31--2075864821009270561.wt
│   │   ├── index-3--2075864821009270561.wt
│   │   ├── index-33--2075864821009270561.wt
│   │   ├── index-34--2075864821009270561.wt
│   │   ├── index-35--2075864821009270561.wt
│   │   ├── index-37--2075864821009270561.wt
│   │   ├── index-39--2075864821009270561.wt
│   │   ├── index-6--2075864821009270561.wt
│   │   ├── index-8--2075864821009270561.wt
│   │   ├── journal
│   │   │   ├── WiredTigerLog.0000000001
│   │   │   ├── WiredTigerPreplog.0000000001
│   │   │   └── WiredTigerPreplog.0000000002
│   │   ├── _mdb_catalog.wt
│   │   ├── mongod.lock
│   │   ├── sizeStorer.wt
│   │   ├── storage.bson
│   │   ├── WiredTiger
│   │   ├── WiredTigerLAS.wt
│   │   ├── WiredTiger.lock
│   │   ├── WiredTiger.turtle
│   │   └── WiredTiger.wt
│   └── log
│       └── config.log
├── mongos
│   └── log
│       └── mongos.log
├── shard1
│   ├── data
│   │   ├── collection-0-3233402335532130874.wt
│   │   ├── collection-2-3233402335532130874.wt
│   │   ├── collection-4-3233402335532130874.wt
│   │   ├── collection-5-3233402335532130874.wt
│   │   ├── collection-7-3233402335532130874.wt
│   │   ├── collection-9-3233402335532130874.wt
│   │   ├── diagnostic.data
│   │   │   ├── metrics.2017-02-20T10-30-12Z-00000
│   │   │   └── metrics.interim
│   │   ├── index-10-3233402335532130874.wt
│   │   ├── index-1-3233402335532130874.wt
│   │   ├── index-3-3233402335532130874.wt
│   │   ├── index-6-3233402335532130874.wt
│   │   ├── index-8-3233402335532130874.wt
│   │   ├── _mdb_catalog.wt
│   │   ├── mongod.lock
│   │   ├── sizeStorer.wt
│   │   ├── storage.bson
│   │   ├── WiredTiger
│   │   ├── WiredTigerLAS.wt
│   │   ├── WiredTiger.lock
│   │   ├── WiredTiger.turtle
│   │   └── WiredTiger.wt
│   └── log
│       └── shard1.log
├── shard2
│   ├── data
│   │   ├── collection-0-8872345764405008471.wt
│   │   ├── collection-2-8872345764405008471.wt
│   │   ├── collection-4-8872345764405008471.wt
│   │   ├── collection-6-8872345764405008471.wt
│   │   ├── diagnostic.data
│   │   │   ├── metrics.2017-02-21T08-32-29Z-00000
│   │   │   └── metrics.interim
│   │   ├── index-1-8872345764405008471.wt
│   │   ├── index-3-8872345764405008471.wt
│   │   ├── index-5-8872345764405008471.wt
│   │   ├── index-7-8872345764405008471.wt
│   │   ├── _mdb_catalog.wt
│   │   ├── mongod.lock
│   │   ├── sizeStorer.wt
│   │   ├── storage.bson
│   │   ├── WiredTiger
│   │   ├── WiredTigerLAS.wt
│   │   ├── WiredTiger.lock
│   │   ├── WiredTiger.turtle
│   │   └── WiredTiger.wt
│   └── log
│       └── shard2.log
└── shard3
    ├── data
    │   ├── collection-0-4649094397759884044.wt
    │   ├── collection-12-4649094397759884044.wt
    │   ├── collection-13-4649094397759884044.wt
    │   ├── collection-2-4649094397759884044.wt
    │   ├── collection-4-4649094397759884044.wt
    │   ├── collection-6-4649094397759884044.wt
    │   ├── diagnostic.data
    │   │   ├── metrics.2017-02-21T08-51-36Z-00000
    │   │   └── metrics.interim
    │   ├── index-14-4649094397759884044.wt
    │   ├── index-1-4649094397759884044.wt
    │   ├── index-3-4649094397759884044.wt
    │   ├── index-5-4649094397759884044.wt
    │   ├── index-7-4649094397759884044.wt
    │   ├── _mdb_catalog.wt
    │   ├── mongod.lock
    │   ├── sizeStorer.wt
    │   ├── storage.bson
    │   ├── WiredTiger
    │   ├── WiredTigerLAS.wt
    │   ├── WiredTiger.lock
    │   ├── WiredTiger.turtle
    │   └── WiredTiger.wt
    └── log
        └── shard3.log

19 directories, 122 files
[root@mysqlfb-01-CentOS67-CW-17F mongodbtest]#
Add the shards:
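The commands below are issued from a mongo shell connected to any one of the mongos routers, e.g.:

mongo --host 10.13.0.130 --port 20000

Naming a single replica-set member in sh.addShard() is enough; mongos discovers the remaining members on its own (note that the arbiters never appear in the shard host lists).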
mongos> sh.addShard("shard1ReplSet/10.13.0.130:22001");
{ "shardAdded" : "shard1ReplSet", "ok" : 1 }
mongos> sh.addShard("shard2ReplSet/10.13.0.131:22002");
{ "shardAdded" : "shard2ReplSet", "ok" : 1 }
mongos> sh.addShard("shard3ReplSet/10.13.0.132:22003");
{ "shardAdded" : "shard3ReplSet", "ok" : 1 }
mongos>
Check: display the shard status:
mongos> sh.status();
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("58aac2586715e0acb331e106")
}
  shards:
    {  "_id" : "shard1ReplSet",  "host" : "shard1ReplSet/10.13.0.130:22001,10.13.0.131:22001",  "state" : 1 }
    {  "_id" : "shard2ReplSet",  "host" : "shard2ReplSet/10.13.0.131:22002,10.13.0.132:22002",  "state" : 1 }
    {  "_id" : "shard3ReplSet",  "host" : "shard3ReplSet/10.13.0.130:22003,10.13.0.132:22003",  "state" : 1 }
  active mongoses:
    "3.4.2" : 3
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Balancer lock taken at Mon Feb 20 2017 18:18:01 GMT+0800 (CST) by ConfigServer:Balancer
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
        No recent migrations
  databases:

mongos>
Check: display the shard configuration:
db.runCommand({listShards:1});
mongos> db.runCommand({listShards:1});
{
    "shards" : [
        {
            "_id" : "shard1ReplSet",
            "host" : "shard1ReplSet/10.13.0.130:22001,10.13.0.131:22001",
            "state" : 1
        },
        {
            "_id" : "shard2ReplSet",
            "host" : "shard2ReplSet/10.13.0.131:22002,10.13.0.132:22002",
            "state" : 1
        },
        {
            "_id" : "shard3ReplSet",
            "host" : "shard3ReplSet/10.13.0.130:22003,10.13.0.132:22003",
            "state" : 1
        }
    ],
    "ok" : 1
}
mongos>
MongoDB's basic CRUD (create, read, update, delete) operations are listed below:
==> Data operations:
Insert:  db.collection.insert()
Query:   db.collection.find()
Update:  db.collection.update()
Delete:  db.collection.remove()
==> Collection operations:
Create a sharded collection:  sh.shardCollection("xxx.yyy", {col1: 1, col2: 1})
Drop a collection:            db.yyy.drop()
==> Database operations:
Enable sharding on a database:  sh.enableSharding("xxx")
Drop a database:                use xxx; db.dropDatabase()
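A quick end-to-end illustration of the four data operations (a sketch against a hypothetical crudtest collection, not part of the cluster setup above):

db.crudtest.insert({name: "alice", age: 30})             // create
db.crudtest.find({age: {$gte: 30}})                      // read
db.crudtest.update({name: "alice"}, {$set: {age: 31}})   // update the first matching document
db.crudtest.remove({name: "alice"})                      // delete all matching documents
db.crudtest.drop()                                       // drop the whole collection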
Let's create a new database:
sh.enableSharding("oracleblog");
mongos> sh.enableSharding("oracleblog");
{ "ok" : 1 }
mongos>
Create the collection and define its shard key:
sh.shardCollection("oracleblog.testtab",{age: 1, name: 1})
mongos> use oracleblog
switched to db oracleblog
mongos> sh.shardCollection("oracleblog.testtab",{age: 1, name: 1})
{ "collectionsharded" : "oracleblog.testtab", "ok" : 1 }
mongos>
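For an empty collection, sh.shardCollection() also builds the index backing the shard key; you can confirm it with the standard helper below (output omitted):

db.testtab.getIndexes()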
Check the sharding information (before inserting data):
mongos> sh.status();
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("58aac2586715e0acb331e106")
}
  shards:
    {  "_id" : "shard1ReplSet",  "host" : "shard1ReplSet/10.13.0.130:22001,10.13.0.131:22001",  "state" : 1 }
    {  "_id" : "shard2ReplSet",  "host" : "shard2ReplSet/10.13.0.131:22002,10.13.0.132:22002",  "state" : 1 }
    {  "_id" : "shard3ReplSet",  "host" : "shard3ReplSet/10.13.0.130:22003,10.13.0.132:22003",  "state" : 1 }
  active mongoses:
    "3.4.2" : 3
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Balancer lock taken at Mon Feb 20 2017 18:18:01 GMT+0800 (CST) by ConfigServer:Balancer
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
        8 : Success
  databases:
    {  "_id" : "oracleblog",  "primary" : "shard1ReplSet",  "partitioned" : true }
        oracleblog.testtab
            shard key: { "age" : 1, "name" : 1 }
            unique: false
            balancing: true
            chunks:
                shard1ReplSet	1
            { "age" : { "$minKey" : 1 }, "name" : { "$minKey" : 1 } } -->> { "age" : { "$maxKey" : 1 }, "name" : { "$maxKey" : 1 } } on : shard1ReplSet Timestamp(1, 0)

mongos>
Insert some data:
for (i=1;i<=10000;i++) db.testtab.insert({name: "user"+i, age: (i%150)})
mongos> for (i=1;i<=10000;i++) db.testtab.insert({name: "user"+i, age: (i%150)})
WriteResult({ "nInserted" : 1 })
mongos>
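The loop above sends 10,000 individual inserts, one round trip each. A single bulk call is much faster over the network; a sketch using insertMany(), available in the 3.2+ shell:

var docs = [];
for (var i = 1; i <= 10000; i++) {
    docs.push({name: "user" + i, age: (i % 150)});  // same documents as the loop above
}
db.testtab.insertMany(docs);  // one bulk round trip instead of 10,000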
Check the sharding information (after inserting data):
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("58aac2586715e0acb331e106")
}
  shards:
    {  "_id" : "shard1ReplSet",  "host" : "shard1ReplSet/10.13.0.130:22001,10.13.0.131:22001",  "state" : 1 }
    {  "_id" : "shard2ReplSet",  "host" : "shard2ReplSet/10.13.0.131:22002,10.13.0.132:22002",  "state" : 1 }
    {  "_id" : "shard3ReplSet",  "host" : "shard3ReplSet/10.13.0.130:22003,10.13.0.132:22003",  "state" : 1 }
  active mongoses:
    "3.4.2" : 3
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Balancer lock taken at Mon Feb 20 2017 18:18:01 GMT+0800 (CST) by ConfigServer:Balancer
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
        10 : Success
  databases:
    {  "_id" : "oracleblog",  "primary" : "shard1ReplSet",  "partitioned" : true }
        oracleblog.testtab
            shard key: { "age" : 1, "name" : 1 }
            unique: false
            balancing: true
            chunks:
                shard1ReplSet	1
                shard2ReplSet	1
                shard3ReplSet	1
            { "age" : { "$minKey" : 1 }, "name" : { "$minKey" : 1 } } -->> { "age" : 2, "name" : "user2" } on : shard2ReplSet Timestamp(2, 0)
            { "age" : 2, "name" : "user2" } -->> { "age" : 22, "name" : "user22" } on : shard3ReplSet Timestamp(3, 0)
            { "age" : 22, "name" : "user22" } -->> { "age" : { "$maxKey" : 1 }, "name" : { "$maxKey" : 1 } } on : shard1ReplSet Timestamp(3, 1)

mongos>
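The chunk list above shows the data split across all three shards. Another view of the distribution is the getShardDistribution() helper, which prints per-shard document counts and data sizes (output omitted here):

db.testtab.getShardDistribution()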
Query the records whose age is greater than 130:
db.testtab.find({age: {$gt: 130}})
mongos> db.testtab.find({age: {$gt: 130}})
{ "_id" : ObjectId("58ae5d5546c608a4e50b7f3c"), "name" : "user1031", "age" : 131 }
{ "_id" : ObjectId("58ae5d5546c608a4e50b7fd2"), "name" : "user1181", "age" : 131 }
{ "_id" : ObjectId("58ae5d5446c608a4e50b7bb8"), "name" : "user131", "age" : 131 }
{ "_id" : ObjectId("58ae5d5546c608a4e50b8068"), "name" : "user1331", "age" : 131 }
{ "_id" : ObjectId("58ae5d5546c608a4e50b80fe"), "name" : "user1481", "age" : 131 }
{ "_id" : ObjectId("58ae5d5546c608a4e50b8194"), "name" : "user1631", "age" : 131 }
{ "_id" : ObjectId("58ae5d5646c608a4e50b822a"), "name" : "user1781", "age" : 131 }
{ "_id" : ObjectId("58ae5d5646c608a4e50b82c0"), "name" : "user1931", "age" : 131 }
{ "_id" : ObjectId("58ae5d5646c608a4e50b8356"), "name" : "user2081", "age" : 131 }
{ "_id" : ObjectId("58ae5d5646c608a4e50b83ec"), "name" : "user2231", "age" : 131 }
{ "_id" : ObjectId("58ae5d5646c608a4e50b8482"), "name" : "user2381", "age" : 131 }
{ "_id" : ObjectId("58ae5d5646c608a4e50b8518"), "name" : "user2531", "age" : 131 }
{ "_id" : ObjectId("58ae5d5646c608a4e50b85ae"), "name" : "user2681", "age" : 131 }
{ "_id" : ObjectId("58ae5d5546c608a4e50b7c4e"), "name" : "user281", "age" : 131 }
{ "_id" : ObjectId("58ae5d5646c608a4e50b8644"), "name" : "user2831", "age" : 131 }
{ "_id" : ObjectId("58ae5d5646c608a4e50b86da"), "name" : "user2981", "age" : 131 }
{ "_id" : ObjectId("58ae5d5746c608a4e50b8770"), "name" : "user3131", "age" : 131 }
{ "_id" : ObjectId("58ae5d5746c608a4e50b8806"), "name" : "user3281", "age" : 131 }
{ "_id" : ObjectId("58ae5d5746c608a4e50b889c"), "name" : "user3431", "age" : 131 }
{ "_id" : ObjectId("58ae5d5746c608a4e50b8932"), "name" : "user3581", "age" : 131 }
Type "it" for more
mongos>
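Because age is the leading field of the shard key, mongos can target this query at just the chunks whose range overlaps age > 130, instead of broadcasting it to every shard. This can be checked with explain (a quick sketch; the shards actually consulted appear in the winning plan of the output):

db.testtab.find({age: {$gt: 130}}).explain("queryPlanner")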