
Adding a Node to a MongoDB Production Server

2012-12-15

The project's database is MongoDB, sharded, with each shard a replica set of three DB servers.

We now need to add one more DB server to one of the replica sets. For now I am pasting the reference material used here, and will organize it properly when I have time:



Add Members to a Replica Set -- Production Notes


If you have a backup or snapshot of an existing member, you can move the data files (i.e. /data/db or dbpath) to a new system and use them to quickly initiate a new member. These files must be:

    1. clean: the existing dataset must be from a consistent copy of the database from a member of the same replica set. See the Backup and Restoration Strategies document for more information:
    http://docs.mongodb.org/manual/administration/backups/

    2. recent: the copy must be more recent than the oldest operation in the primary member's oplog. The new secondary must be able to become current using operations from the primary's oplog.
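The seeding procedure these two conditions describe can be sketched roughly as follows. All hostnames, paths, and the replica set name (rs0) are hypothetical placeholders, not values from the original article:

```shell
# 1. Copy a clean, recent snapshot of an existing member's data files
#    to the new machine (taken while that mongod was down or fsync-locked).
rsync -av /backup/db-snapshot/ db4.example.com:/data/db/

# 2. On the new machine, start mongod as a member of the same replica set.
mongod --dbpath /data/db --replSet rs0 --port 27017 --fork \
       --logpath /var/log/mongodb/mongod.log

# 3. From a mongo shell connected to the current primary, add the member.
mongo --host db1.example.com --eval 'rs.add("db4.example.com:27017")'
```

Because the copied files are recent, the new member only needs to replay the tail of the primary's oplog rather than perform a full initial sync.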


-------------------------------------------------------


Creating a slave from an existing master's disk image

If you can stop write operations to the master for an indefinite period, you can copy the data files from the master to the new slave, and then start the slave with --fastsync.

Be careful with --fastsync. If the data is not perfectly in sync, a discrepancy will exist forever.

--fastsync is a way to start a slave from an existing master disk image/backup. This option declares that the administrator guarantees the image is correct and completely up to date with that of the master. If you have a full and complete copy of data from a master, you can use this option to avoid a full synchronization upon starting the slave.
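As a rough sketch of the workflow just described (legacy master/slave mode; the hostnames and paths are hypothetical):

```shell
# Assumes writes to the master are stopped and the copy below therefore
# yields an exact, fully up-to-date image of the master's data files.

# 1. Copy the master's data files to the new slave machine.
rsync -av /data/db/ slave.example.com:/data/db/

# 2. Start the slave with --fastsync to skip the full initial sync.
mongod --slave --source master.example.com:27017 \
       --dbpath /data/db --fastsync
```

If the copied files are not perfectly in sync with the master, the resulting discrepancy will persist, as the warning above notes.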


/////////////////////////////

Would like to get documentation for the --fastsync feature. My hope is the ability to make raw file system copies to seed slaves, then tell the slave where to "pick up" reads from the oplog. This would make deploying slaves much faster than performing an initial sync, especially when there is a slow connection between master/slave (i.e. across data centers).

//////////////////////////////


Yes. --fastsync is a way to speed up the sync when you have a recent
copy of all the data and oplog

On Feb 24, 3:06 pm, tetlika <tetl...@xxxxxxxxx> wrote:
> ah ok
>
> I think i understood: fast resync is used just when we have a copy of
> data - including oplogs - it just tells not to do a full resync
>
> when we dont use fastresync all data will be synced, not depending on
> oplog
>
> On Feb 25, 00:55, sridhar <srid...@xxxxxxxxx> wrote:
>
> > fastsync does not replay all the oplog. It only replays the necessary
> > entries post where your database is at. If your oplog is not big
> > enough and has rolled over, fastsync falls back to a full resync.
>
> > On Feb 24, 2:49 pm, tetlika <tetl...@xxxxxxxxx> wrote:
>
> > > Hi!
>
> > > According to
> > > the http://www.mongodb.org/display/DOCS/Adding+a+New+Set+Member
> > > fastsync is just "replaying" ALL oplog on the new slave, so if we
> > > dont have the oplog big enough - we need copy data to the new slave
> > > and run it with the fastsync option?


Creating a slave from an existing slave's disk image

You can just copy the other slave's data file snapshot without any special options. Note data snapshots should only be taken when a mongod process is down or in fsync-and-lock state.
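One way to take such a consistent snapshot is the fsync-and-lock approach mentioned above, sketched here with hypothetical hostnames and paths:

```shell
# 1. Flush pending writes on the source slave and block further writes.
mongo slave1.example.com/admin --eval 'db.fsyncLock()'

# 2. Copy the data files while the instance is locked.
rsync -av slave1.example.com:/data/db/ /data/db/

# 3. Unlock the source slave so it can resume replicating.
mongo slave1.example.com/admin --eval 'db.fsyncUnlock()'

# 4. Start the new slave on the copied files; no special options needed.
mongod --slave --source master.example.com:27017 --dbpath /data/db
```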


Sharded Cluster and Replica Set Considerations

The underlying architecture of sharded clusters and replica sets presents several challenges for creating backups. This section describes how to make quality backups in environments with these configurations and how to perform restorations.

Back Up Sharded Clusters

Sharding complicates backup operations, because it is impossible to create a backup of a single moment in time from a distributed cluster of systems and processes.

Depending on the size of your data, you can back up the cluster as a whole or back up each mongod instance. The following section describes both procedures.

Back Up the Cluster as a Whole Using mongodump

If your sharded cluster comprises a small collection of data, you can connect to a mongos and issue the mongodump command. You can use this approach if the following is true:

    It's possible to store the entire backup on one system or on a single storage device. Consider both backups of entire instances and incremental dumps of data.

    The state of the database at the beginning of the operation is not significantly different than the state of the database at the end of the backup. If the backup operation cannot capture a backup, this is not a viable option.

    The backup can run and complete without affecting the performance of the cluster.

    Note

    If you use mongodump without specifying a database or collection, the output will contain both the collection data and the sharding config metadata from the config servers.

    You cannot use the --oplog option for mongodump when dumping from a mongos. This option is only available when running directly against a replica set member.
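Under those conditions, the whole-cluster dump amounts to something like the following (the mongos hostname and output directory are hypothetical):

```shell
# Dump the entire (small) sharded cluster through a mongos.
# The output includes collection data plus sharding config metadata.
mongodump --host mongos.example.com --port 27017 --out ./cluster-backup

# Note: --oplog is not valid here; it only works when dumping
# directly from a replica set member.
```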

    Back Up from All Database Instances

    If your sharded cluster is too large for the mongodump command, then you must back up your data either by creating a snapshot of the cluster or by creating a binary dump of each database. This section describes both.

    In both cases:

      The backups must capture the database in a consistent state.

      The sharded cluster must be consistent in itself.

      This procedure describes both approaches:

        Disable the balancer process that equalizes the distribution of data among the shards. To disable the balancer, use the sh.stopBalancer() method in the mongo shell, and see the Disable the Balancer procedure.

        Warning

        It is essential that you stop the balancer before creating backups. If the balancer remains active, your resulting backups could have duplicate data or miss some data, as chunks migrate while recording backups.

        Lock one member of each shard's replica set so that your backups reflect your entire database system at a single point in time. Lock all shards in as short an interval as possible.

        To lock or freeze a sharded cluster, you must:

          Use the db.fsyncLock() method in the mongo shell connected to each shard mongod instance to block write operations.

          Shut down one of the config servers, to prevent all metadata changes during the backup process.
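The freeze steps above can be sketched as follows, for a hypothetical two-shard cluster (all hostnames and ports are placeholders):

```shell
# Stop the balancer, from a mongo shell connected to a mongos.
mongo mongos.example.com/admin --eval 'sh.stopBalancer()'

# Flush and lock one member of each shard's replica set.
mongo shard1-a.example.com/admin --eval 'db.fsyncLock()'
mongo shard2-a.example.com/admin --eval 'db.fsyncLock()'

# Shut down one of the config servers to freeze cluster metadata.
mongo cfg1.example.com:27019/admin --eval 'db.shutdownServer()'
```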

          Use mongodump to back up one of the config servers. This backs up the cluster's metadata. You only need to back up one config server, as they all have replicas of the same information.

          Issue this command against one of the config servers or the mongos:

          mongodump --db config

          Back up the replica set members of the shards that you locked. You may back up shards one at a time or in parallel. For each shard, do one of the following:

            If your system has disk-level snapshot capabilities, create a snapshot of each shard. Use the procedures in Using Block Level Backup Methods.

            Create a binary dump of each shard using the operations described in Using Binary Database Dumps for Backups.

            Unlock all locked replica set members of each shard using the db.fsyncUnlock() method in the mongo shell.

            Restore the balancer with the sh.startBalancer() method according to the Disable the Balancer procedure.

            Use the following command sequence when connected to the mongos with the mongo shell:

            use config
            sh.startBalancer()
      Schedule Automated Backups

      If you have an automated backup schedule, you can disable all balancing operations for a period of time. For instance, consider the following command:

      use config
      db.settings.update( { _id : "balancer" }, { $set : { activeWindow : { start : "6:00", stop : "23:00" } } }, true )

      This operation configures the balancer to run between 6:00 am and 11:00 pm, server time. Schedule your backup operation to run and complete outside of this window, while the balancer is inactive. Ensure both that the backup can finish outside the window when the balancer is running and that the balancer can effectively balance the collection among the shards in the window allotted to it.

      Restore Sharded Clusters

        Stop all mongod and mongos processes.

        If shard hostnames have changed, you must manually update the shards collection in the Config Database Contents to use the new hostnames. Do the following:

          Start the three config servers by issuing commands similar to the following, using values appropriate to your configuration:

          mongod --configsvr --dbpath /data/configdb --port 27018

          Restore the Config Database Contents on each config server.

          Start one mongos instance.

          Update the Config Database Contents collection named shards to reflect the new hostnames.
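For example, renaming one shard's host could look like this (the shard id and hostnames are hypothetical):

```shell
# Point the "shard0000" entry in the config database at the new hostname.
mongo mongos.example.com/config --eval '
  db.shards.update(
    { _id: "shard0000" },
    { $set: { host: "newdb1.example.com:27017" } }
  )'
```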

        Restore the following:

          Data files for each server in each shard. Because replica sets provide each production shard, restore all the members of the replica set or use the other standard approaches for restoring a replica set from backup.

          Data files for each config server, if you have not already done so in the previous step.

          Restart all the mongos instances.

          Restart all the mongod instances.

          Connect to a mongos instance from a mongo shell and use the db.printShardingStatus() method to ensure that the cluster is operational, as follows:

          db.printShardingStatus()
          show collections
      Restore a Single Shard

      Always restore sharded clusters as a whole. When you restore a single shard, keep in mind that the balancer process might have moved chunks onto or off of this shard since the last backup. If that's the case, you must manually recover those chunks, as described in this procedure.

        Restore the shard.

        For all chunks that migrated away from this shard, you need not do anything. You do not need to delete these documents from the shard because the chunks are automatically filtered out from queries by mongos.

        For chunks that migrated to this shard since the last backup, you must manually recover the chunks. To determine what chunks have moved, view the changelog collection in the Config Database Contents.
      Replica Sets

      In most cases, backing up data stored in a replica set is similar to backing up data stored in a single instance. It's possible to lock a single secondary or slave database and then create a backup from that instance. When you unlock the database, the secondary or slave will catch up with the primary or master. You may also choose to deploy a dedicated hidden member for backup purposes.

      If you have a sharded cluster where each shard is itself a replica set, you can use this method to create a backup of the entire cluster without disrupting the operation of the node. In these situations you should still turn off the balancer when you create backups.

      For any cluster, using a non-primary/non-master node to create backups is particularly advantageous in that the backup operation does not affect the performance of the primary or master. Replication itself provides some measure of redundancy. Nevertheless, keeping point-in-time backups of your cluster to provide for disaster recovery and as an additional layer of protection is crucial.
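Adding such a dedicated hidden backup member could look like this (the replica set, member _id, and hostnames are hypothetical):

```shell
# From a mongo shell connected to the primary of the replica set:
# hidden:true keeps the member invisible to client reads, and
# priority:0 prevents it from ever being elected primary.
mongo primary.example.com/admin --eval '
  rs.add({ _id: 3, host: "backup.example.com:27017",
           hidden: true, priority: 0 })'
```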
