Hello, this is hoge太郎.
It has gotten quite cold lately, but thanks to my current project I'm staying plenty warm these days.

This time I'd like to write about redis, which we've been introducing on a lot of recent projects.

https://redis.io/
Are you using redis?

A while back memcached was the usual choice, but redis offers a much richer feature set,
and it seems more and more systems are switching over to it.

We use it for all sorts of purposes at our company, and to keep a crashed redis process
from turning into a service outage, we use redis sentinel for monitoring and automatic failover.

Heavy use, however, concentrates every connection on a single redis server,
and it was time to start thinking about load balancing.
We could have the application pick the target redis based on the key,
but I wanted to handle it in middleware if possible, and the tool I found was "redis-mgr".

So this post is an introduction to "redis-mgr".
https://github.com/idning/redis-mgr

What is redis-mgr?

Simply running deploy.py builds a redis cluster from a config file, giving you redundancy, load balancing, bulk command execution, and more. An excellent tool.

There isn't much information about it in Japanese, though.
See the GitHub repository for details.

Trying it out

Nothing beats trying it, so first let's get it running.
To make testing easy, I used vagrant to prepare five servers.

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "debian7.6"

  config.vm.define :mgr do |mgr|
    mgr.vm.network :private_network, ip:"192.168.60.1", virtualbox__intnet: "intnet"
    mgr.vm.hostname = "mgr"
  end

  config.vm.define :redis1 do |redis|
    redis.vm.network :private_network, ip:"192.168.60.11", virtualbox__intnet: "intnet"
    redis.vm.hostname = "redis1"
  end

  config.vm.define :redis2 do |redis|
    redis.vm.network :private_network, ip:"192.168.60.12", virtualbox__intnet: "intnet"
    redis.vm.hostname = "redis2"
  end

  config.vm.define :redis3 do |redis|
    redis.vm.network :private_network, ip:"192.168.60.13", virtualbox__intnet: "intnet"
    redis.vm.hostname = "redis3"
  end

  config.vm.define :redis4 do |redis|
    redis.vm.network :private_network, ip:"192.168.60.14", virtualbox__intnet: "intnet"
    redis.vm.hostname = "redis4"
  end
end

Running "vagrant up" brings up five debian servers.

Current machine states:

mgr                       running (virtualbox)
redis1                    running (virtualbox)
redis2                    running (virtualbox)
redis3                    running (virtualbox)
redis4                    running (virtualbox)

This time I'll use the following layout:

mgr     192.168.60.1   deploy / proxy host
redis1  192.168.60.11  master, sentinel
redis2  192.168.60.12  master, sentinel
redis3  192.168.60.13  slave (of redis1), sentinel
redis4  192.168.60.14  slave (of redis2), sentinel

1. Installing redis-mgr

The steps as written on GitHub didn't work for me, so I adjusted them slightly.
I did all of this as the root user.

apt-get install git python python-pip
pip install redis
pip install -e "git://github.com/idning/pcl.git#egg=pcl"
pip install -e "git://github.com/kislyuk/argcomplete.git#egg=argcomplete"
git clone https://github.com/idning/redis-mgr.git

Next, copy the required binaries under the "_binaries" directory.
Binaries placed here are apparently shipped to each deploy target.

2. Installing twemproxy (nutcracker)

What is twemproxy (nutcracker)?
https://github.com/twitter/twemproxy

It is middleware that provides horizontal partitioning (sharding) for redis.
If sharding is all you need, twemproxy alone will do it.
There are already a fair number of articles about it, so I'll leave the installation steps to them.
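redis-mgr generates the nutcracker configuration for you, but for reference, a minimal hand-written twemproxy config for the layout in this article might look like the following (the pool name "alpha" is my own choice, and the values are illustrative, not what redis-mgr actually emits):

```yaml
alpha:
  listen: 127.0.0.1:6379
  hash: fnv1a_64
  distribution: ketama
  redis: true
  auto_eject_hosts: false
  servers:
   - 192.168.60.11:6379:1
   - 192.168.60.12:6379:1
```

The `servers` list names the two masters; `redis: true` switches the proxy from the memcached protocol to the redis protocol.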

Copy the nutcracker binary produced by make into redis-mgr's _binaries directory.

3. Installing redis-server

Install it normally with apt-get.

Then copy /usr/bin/redis-* into the _binaries directory.

Deploy

To deploy to redis1 through redis4, I wrote the redis-mgr conf file as follows:

cluster = {
    'cluster_name': 'cluster',
    'user': 'vagrant',
    'REDIS_MONITOR_EXTRA': {
        'used_cpu_user':              (0, 1),
        '_slowlog_per_sec':           (0, 10),
    },
    'sentinel':[
        ('192.168.60.11:26379', '/tmp/r/sentinel-26379'),
        ('192.168.60.12:26379', '/tmp/r/sentinel-26379'),
        ('192.168.60.13:26379', '/tmp/r/sentinel-26379'),
        ('192.168.60.14:26379', '/tmp/r/sentinel-26379'),
    ],
    'redis': [
        # master(host:port, install path)       ,  slave(host:port, install path)
        'cluster-redis1:192.168.60.11:6379:/tmp/r/redis-6379', 'cluster-redis3:192.168.60.13:6379:/tmp/r/redis-6379',
        'cluster-redis2:192.168.60.12:6379:/tmp/r/redis-6379', 'cluster-redis4:192.168.60.14:6379:/tmp/r/redis-6379',
    ],
    'nutcracker': [
        ('127.0.0.1:6379', '/tmp/r/nutcracker-6379'),
    ],
}
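Each element of the 'redis' list packs name, host, port, and install path into one colon-separated string. A quick sketch of pulling one apart (the helper name is my own, not part of redis-mgr):

```python
def parse_redis_entry(entry: str) -> dict:
    """Split a redis-mgr 'redis' entry: <name>:<host>:<port>:<install path>."""
    # maxsplit=3 keeps any colons inside the install path intact
    name, host, port, path = entry.split(":", 3)
    return {"name": name, "host": host, "port": int(port), "path": path}

print(parse_redis_entry("cluster-redis1:192.168.60.11:6379:/tmp/r/redis-6379"))
```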

Also, set up key-based ssh authentication ahead of time so that mgr can connect to redis1 through redis4 without a password.

Run

vagrant@mgr:~/redis-mgr$ export REDIS_DEPLOY_CONFIG=conf && . bin/active
vagrant@mgr:~/redis-mgr$ ./bin/deploy.py cluster deploy
2014-11-26 01:06:27,245 [MainThread] [NOTICE] start running: ./bin/deploy.py -v cluster deploy
2014-11-26 01:06:27,246 [MainThread] [INFO] Namespace(cmd=[], filter='', logfile='/home/vagrant/redis-mgr/bin/../log/deploy.log', op='deploy', sleep=0, target='cluster', verbose=1, web_port=8080)
2014-11-26 01:06:27,246 [MainThread] [NOTICE] deploy redis
2014-11-26 01:06:27,247 [MainThread] [INFO] deploy [redis:192.168.60.11:6379]
2014-11-26 01:06:27,968 [MainThread] [INFO] deploy [redis:192.168.60.13:6379]
2014-11-26 01:06:28,587 [MainThread] [INFO] deploy [redis:192.168.60.12:6379]
2014-11-26 01:06:29,145 [MainThread] [INFO] deploy [redis:192.168.60.14:6379]
2014-11-26 01:06:29,814 [MainThread] [NOTICE] deploy sentinel
2014-11-26 01:06:29,814 [MainThread] [INFO] deploy [sentinel:192.168.60.11:26379]
2014-11-26 01:06:30,482 [MainThread] [INFO] deploy [sentinel:192.168.60.12:26379]
2014-11-26 01:06:31,152 [MainThread] [INFO] deploy [sentinel:192.168.60.13:26379]
2014-11-26 01:06:31,626 [MainThread] [INFO] deploy [sentinel:192.168.60.14:26379]
2014-11-26 01:06:32,309 [MainThread] [NOTICE] deploy nutcracker
2014-11-26 01:06:32,310 [MainThread] [INFO] deploy [nutcracker:127.0.0.1:6379]

Deployed.

vagrant@mgr:~/redis-mgr$ ./bin/deploy.py cluster start
2014-11-26 01:08:57,019 [MainThread] [NOTICE] start running: ./bin/deploy.py -v cluster start
2014-11-26 01:08:57,019 [MainThread] [INFO] Namespace(cmd=[], filter='', logfile='/home/vagrant/redis-mgr/bin/../log/deploy.log', op='start', sleep=0, target='cluster', verbose=1, web_port=8080)
2014-11-26 01:08:57,020 [MainThread] [NOTICE] start redis
2014-11-26 01:08:57,245 [MainThread] [INFO] [redis:192.168.60.11:6379] start ok in 0.22 seconds
2014-11-26 01:08:57,449 [MainThread] [INFO] [redis:192.168.60.13:6379] start ok in 0.20 seconds
2014-11-26 01:08:57,677 [MainThread] [INFO] [redis:192.168.60.12:6379] start ok in 0.22 seconds
2014-11-26 01:08:57,903 [MainThread] [INFO] [redis:192.168.60.14:6379] start ok in 0.22 seconds
2014-11-26 01:08:57,903 [MainThread] [NOTICE] start sentinel
2014-11-26 01:08:58,120 [MainThread] [INFO] [sentinel:192.168.60.11:26379] start ok in 0.21 seconds
2014-11-26 01:08:58,329 [MainThread] [INFO] [sentinel:192.168.60.12:26379] start ok in 0.20 seconds
2014-11-26 01:08:58,536 [MainThread] [INFO] [sentinel:192.168.60.13:26379] start ok in 0.20 seconds
2014-11-26 01:08:58,740 [MainThread] [INFO] [sentinel:192.168.60.14:26379] start ok in 0.20 seconds
2014-11-26 01:08:58,740 [MainThread] [NOTICE] start nutcracker
2014-11-26 01:08:59,411 [MainThread] [INFO] [nutcracker:127.0.0.1:6379] start ok in 0.67 seconds
2014-11-26 01:08:59,412 [MainThread] [NOTICE] setup master <- slave
2014-11-26 01:08:59,425 [MainThread] [INFO] setup [redis:192.168.60.11:6379] <- [redis:192.168.60.13:6379]
2014-11-26 01:08:59,425 [MainThread] [INFO] [redis:192.168.60.13:6379] _binaries/redis-cli -h 192.168.60.13 -p 6379 SLAVEOF 192.168.60.11 6379
OK

2014-11-26 01:08:59,439 [MainThread] [INFO] setup [redis:192.168.60.12:6379] <- [redis:192.168.60.14:6379]
2014-11-26 01:08:59,440 [MainThread] [INFO] [redis:192.168.60.14:6379] _binaries/redis-cli -h 192.168.60.14 -p 6379 SLAVEOF 192.168.60.12 6379
OK

Started.

vagrant@mgr:~/redis-mgr$ ./bin/deploy.py cluster status
2014-11-26 01:09:29,388 [MainThread] [NOTICE] start running: ./bin/deploy.py -v cluster status
2014-11-26 01:09:29,389 [MainThread] [INFO] Namespace(cmd=[], filter='', logfile='/home/vagrant/redis-mgr/bin/../log/deploy.log', op='status', sleep=0, target='cluster', verbose=1, web_port=8080)
{'REDIS_MONITOR_EXTRA': {'_slowlog_per_sec': (0, 10), 'used_cpu_user': (0, 1)},
 'cluster_name': 'cluster',
 'nutcracker': [('127.0.0.1:6379', '/tmp/r/nutcracker-6379')],
 'redis': ['cluster-redis1:192.168.60.11:6379:/tmp/r/redis-6379',
           'cluster-redis3:192.168.60.13:6379:/tmp/r/redis-6379',
           'cluster-redis2:192.168.60.12:6379:/tmp/r/redis-6379',
           'cluster-redis4:192.168.60.14:6379:/tmp/r/redis-6379'],
 'sentinel': [('192.168.60.11:26379', '/tmp/r/sentinel-26379'),
              ('192.168.60.12:26379', '/tmp/r/sentinel-26379'),
              ('192.168.60.13:26379', '/tmp/r/sentinel-26379'),
              ('192.168.60.14:26379', '/tmp/r/sentinel-26379')],
 'user': 'vagrant'}
2014-11-26 01:09:29,392 [MainThread] [NOTICE] status redis
2014-11-26 01:09:29,399 [MainThread] [INFO] [redis:192.168.60.11:6379] uptime 32 seconds
2014-11-26 01:09:29,403 [MainThread] [INFO] [redis:192.168.60.13:6379] uptime 32 seconds
2014-11-26 01:09:29,408 [MainThread] [INFO] [redis:192.168.60.12:6379] uptime 32 seconds
2014-11-26 01:09:29,415 [MainThread] [INFO] [redis:192.168.60.14:6379] uptime 32 seconds
2014-11-26 01:09:29,415 [MainThread] [NOTICE] status sentinel
2014-11-26 01:09:29,421 [MainThread] [INFO] [sentinel:192.168.60.11:26379] uptime 31 seconds
2014-11-26 01:09:29,431 [MainThread] [INFO] [sentinel:192.168.60.12:26379] uptime 31 seconds
2014-11-26 01:09:29,437 [MainThread] [INFO] [sentinel:192.168.60.13:26379] uptime 31 seconds
2014-11-26 01:09:29,443 [MainThread] [INFO] [sentinel:192.168.60.14:26379] uptime 31 seconds
2014-11-26 01:09:29,443 [MainThread] [NOTICE] status nutcracker
2014-11-26 01:09:29,445 [MainThread] [INFO] [nutcracker:127.0.0.1:6379] uptime 31 seconds
2014-11-26 01:09:29,449 [MainThread] [NOTICE] status master-slave <all from sentinel>
cluster-redis1 192.168.60.11:6379 <- 192.168.60.13:6379
cluster-redis2 192.168.60.12:6379 <- 192.168.60.14:6379

And that's the status.

That was extremely easy, and redis is now both redundant and load balanced.
It even includes management features for viewing logs and checking status across all nodes at once.
Excellent!

Checking load balancing

Let's add some data and check that it is being distributed.

vagrant@mgr:~/redis-mgr$ _binaries/redis-cli set a 1
OK
vagrant@mgr:~/redis-mgr$ _binaries/redis-cli set b 2
OK
vagrant@mgr:~/redis-mgr$ _binaries/redis-cli set c 3
OK
vagrant@mgr:~/redis-mgr$ _binaries/redis-cli set d 4
OK
vagrant@mgr:~/redis-mgr$ _binaries/redis-cli set e 5
OK
vagrant@mgr:~/redis-mgr$ ./bin/deploy.py cluster rediscmd 'keys "*"'
2014-11-26 01:16:14,042 [MainThread] [NOTICE] start running: ./bin/deploy.py -v cluster rediscmd keys "*"
2014-11-26 01:16:14,042 [MainThread] [INFO] Namespace(cmd=['keys "*"'], filter='', logfile='/home/vagrant/redis-mgr/bin/../log/deploy.log', op='rediscmd', sleep=0, target='cluster', verbose=1, web_port=8080)
2014-11-26 01:16:14,143 [MainThread] [INFO] [redis:192.168.60.11:6379] _binaries/redis-cli -h 192.168.60.11 -p 6379 keys "*"
c
e
a

2014-11-26 01:16:14,257 [MainThread] [INFO] [redis:192.168.60.13:6379] _binaries/redis-cli -h 192.168.60.13 -p 6379 keys "*"
e
a
c

2014-11-26 01:16:14,368 [MainThread] [INFO] [redis:192.168.60.12:6379] _binaries/redis-cli -h 192.168.60.12 -p 6379 keys "*"
d
b

2014-11-26 01:16:14,481 [MainThread] [INFO] [redis:192.168.60.14:6379] _binaries/redis-cli -h 192.168.60.14 -p 6379 keys "*"
b
d

The keys are split between the redis1/redis3 pair and the redis2/redis4 pair.
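Why do "a", "c", "e" land on one pair and "b", "d" on the other? twemproxy hashes each key (fnv1a_64 with ketama distribution by default) and maps it onto one of the masters. A rough sketch of the idea, using md5 instead of twemproxy's actual hash, so the assignments may not match the ones above:

```python
import hashlib

masters = ["192.168.60.11:6379", "192.168.60.12:6379"]  # redis1 and redis2

def shard_for(key: str) -> str:
    # Hash the key to a stable integer, then map it onto one of the masters.
    # Illustrative only: twemproxy's real default hashing differs.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return masters[h % len(masters)]

for k in "abcde":
    print(k, "->", shard_for(k))
```

The point is that the mapping is deterministic: the same key always goes to the same master, so reads find what writes stored.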

Checking failover

Let's see what happens when we stop redis1.

vagrant@mgr:~/redis-mgr$ ./bin/deploy.py cluster rediscmd 'info replication'
2014-11-26 01:17:50,520 [MainThread] [NOTICE] start running: ./bin/deploy.py -v cluster rediscmd info replication
2014-11-26 01:17:50,521 [MainThread] [INFO] Namespace(cmd=['info replication'], filter='', logfile='/home/vagrant/redis-mgr/bin/../log/deploy.log', op='rediscmd', sleep=0, target='cluster', verbose=1, web_port=8080)
2014-11-26 01:17:50,622 [MainThread] [INFO] [redis:192.168.60.11:6379] _binaries/redis-cli -h 192.168.60.11 -p 6379 info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.60.13,port=6379,state=online,offset=152136,lag=0
master_repl_offset:152283
repl_backlog_active:1
repl_backlog_size:67108864
repl_backlog_first_byte_offset:2
repl_backlog_histlen:152282

2014-11-26 01:17:50,736 [MainThread] [INFO] [redis:192.168.60.13:6379] _binaries/redis-cli -h 192.168.60.13 -p 6379 info replication
# Replication
role:slave
master_host:192.168.60.11
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:152283
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:67108864
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

2014-11-26 01:17:50,851 [MainThread] [INFO] [redis:192.168.60.12:6379] _binaries/redis-cli -h 192.168.60.12 -p 6379 info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.60.14,port=6379,state=online,offset=152202,lag=0
master_repl_offset:152202
repl_backlog_active:1
repl_backlog_size:67108864
repl_backlog_first_byte_offset:2
repl_backlog_histlen:152201

2014-11-26 01:17:50,956 [MainThread] [INFO] [redis:192.168.60.14:6379] _binaries/redis-cli -h 192.168.60.14 -p 6379 info replication
# Replication
role:slave
master_host:192.168.60.12
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:152202
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:67108864
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

redis1 and redis2 are masters,
and redis3 and redis4 are their slaves.
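The `rediscmd 'info replication'` output above is raw INFO text. If you want to check roles programmatically, the `key:value` lines are easy to parse; a small sketch (my own helper, not part of redis-mgr):

```python
def parse_info(text: str) -> dict:
    """Parse redis INFO output ('key:value' lines) into a dict of strings."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and section headers like "# Replication"
        key, _, value = line.partition(":")  # split on the first colon only
        info[key] = value
    return info

sample = "# Replication\nrole:slave\nmaster_host:192.168.60.11\nmaster_port:6379\n"
print(parse_info(sample)["role"])  # prints "slave"
```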

Now stop redis1.

vagrant@mgr:~/redis-mgr$ ssh redis1
vagrant@redis1:~$ /tmp/r/redis-6379/bin/redis-cli shutdown
vagrant@redis1:~$ exit
vagrant@mgr:~/redis-mgr$ ./bin/deploy.py cluster status
2014-11-26 01:20:11,075 [MainThread] [NOTICE] start running: ./bin/deploy.py -v cluster status
2014-11-26 01:20:11,076 [MainThread] [INFO] Namespace(cmd=[], filter='', logfile='/home/vagrant/redis-mgr/bin/../log/deploy.log', op='status', sleep=0, target='cluster', verbose=1, web_port=8080)
{'REDIS_MONITOR_EXTRA': {'_slowlog_per_sec': (0, 10), 'used_cpu_user': (0, 1)},
 'cluster_name': 'cluster',
 'nutcracker': [('127.0.0.1:6379', '/tmp/r/nutcracker-6379')],
 'redis': ['cluster-redis1:192.168.60.11:6379:/tmp/r/redis-6379',
           'cluster-redis3:192.168.60.13:6379:/tmp/r/redis-6379',
           'cluster-redis2:192.168.60.12:6379:/tmp/r/redis-6379',
           'cluster-redis4:192.168.60.14:6379:/tmp/r/redis-6379'],
 'sentinel': [('192.168.60.11:26379', '/tmp/r/sentinel-26379'),
              ('192.168.60.12:26379', '/tmp/r/sentinel-26379'),
              ('192.168.60.13:26379', '/tmp/r/sentinel-26379'),
              ('192.168.60.14:26379', '/tmp/r/sentinel-26379')],
 'user': 'vagrant'}
2014-11-26 01:20:11,079 [MainThread] [NOTICE] status redis
2014-11-26 01:20:11,084 [MainThread] [ERROR] [redis:192.168.60.11:6379] is down
2014-11-26 01:20:11,089 [MainThread] [INFO] [redis:192.168.60.13:6379] uptime 673 seconds
2014-11-26 01:20:11,094 [MainThread] [INFO] [redis:192.168.60.12:6379] uptime 673 seconds
2014-11-26 01:20:11,105 [MainThread] [INFO] [redis:192.168.60.14:6379] uptime 673 seconds
2014-11-26 01:20:11,105 [MainThread] [NOTICE] status sentinel
2014-11-26 01:20:11,119 [MainThread] [INFO] [sentinel:192.168.60.11:26379] uptime 672 seconds
2014-11-26 01:20:11,126 [MainThread] [INFO] [sentinel:192.168.60.12:26379] uptime 672 seconds
2014-11-26 01:20:11,132 [MainThread] [INFO] [sentinel:192.168.60.13:26379] uptime 672 seconds
2014-11-26 01:20:11,139 [MainThread] [INFO] [sentinel:192.168.60.14:26379] uptime 672 seconds
2014-11-26 01:20:11,140 [MainThread] [NOTICE] status nutcracker
2014-11-26 01:20:11,141 [MainThread] [INFO] [nutcracker:127.0.0.1:6379] uptime 673 seconds
2014-11-26 01:20:11,146 [MainThread] [NOTICE] status master-slave <all from sentinel>
cluster-redis1 192.168.60.11:6379 <- 192.168.60.13:6379
cluster-redis2 192.168.60.12:6379 <- 192.168.60.14:6379

redis1 is down.
After a short while, redis3 is promoted to master.

vagrant@mgr:~/redis-mgr$ ./bin/deploy.py cluster rediscmd 'info replication'
2014-11-26 01:21:25,080 [MainThread] [NOTICE] start running: ./bin/deploy.py -v cluster rediscmd info replication
2014-11-26 01:21:25,080 [MainThread] [INFO] Namespace(cmd=['info replication'], filter='', logfile='/home/vagrant/redis-mgr/bin/../log/deploy.log', op='rediscmd', sleep=0, target='cluster', verbose=1, web_port=8080)
2014-11-26 01:21:25,181 [MainThread] [INFO] [redis:192.168.60.11:6379] _binaries/redis-cli -h 192.168.60.11 -p 6379 info replication

2014-11-26 01:21:25,288 [MainThread] [INFO] [redis:192.168.60.13:6379] _binaries/redis-cli -h 192.168.60.13 -p 6379 info replication
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:67108864
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

2014-11-26 01:21:25,396 [MainThread] [INFO] [redis:192.168.60.12:6379] _binaries/redis-cli -h 192.168.60.12 -p 6379 info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.60.14,port=6379,state=online,offset=213193,lag=1
master_repl_offset:213501
repl_backlog_active:1
repl_backlog_size:67108864
repl_backlog_first_byte_offset:2
repl_backlog_histlen:213500

2014-11-26 01:21:25,512 [MainThread] [INFO] [redis:192.168.60.14:6379] _binaries/redis-cli -h 192.168.60.14 -p 6379 info replication
# Replication
role:slave
master_host:192.168.60.12
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:213648
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:67108864
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

If we then bring the former master redis1 back up, it starts as a slave.

vagrant@mgr:~/redis-mgr$ ssh redis1
vagrant@redis1:~$ /tmp/r/redis-6379/bin/redis-server /tmp/r/redis-6379/conf/redis.conf
vagrant@redis1:~$ exit
vagrant@mgr:~/redis-mgr$ ./bin/deploy.py cluster rediscmd 'info replication'
2014-11-26 01:25:02,593 [MainThread] [NOTICE] start running: ./bin/deploy.py -v cluster rediscmd info replication
2014-11-26 01:25:02,594 [MainThread] [INFO] Namespace(cmd=['info replication'], filter='', logfile='/home/vagrant/redis-mgr/bin/../log/deploy.log', op='rediscmd', sleep=0, target='cluster', verbose=1, web_port=8080)
2014-11-26 01:25:02,695 [MainThread] [INFO] [redis:192.168.60.11:6379] _binaries/redis-cli -h 192.168.60.11 -p 6379 info replication
# Replication
role:slave
master_host:192.168.60.13
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:1361
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:67108864
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

2014-11-26 01:25:02,809 [MainThread] [INFO] [redis:192.168.60.13:6379] _binaries/redis-cli -h 192.168.60.13 -p 6379 info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.60.11,port=6379,state=online,offset=1214,lag=1
master_repl_offset:1361
repl_backlog_active:1
repl_backlog_size:67108864
repl_backlog_first_byte_offset:2
repl_backlog_histlen:1360

2014-11-26 01:25:02,916 [MainThread] [INFO] [redis:192.168.60.12:6379] _binaries/redis-cli -h 192.168.60.12 -p 6379 info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.60.14,port=6379,state=online,offset=275374,lag=1
master_repl_offset:275829
repl_backlog_active:1
repl_backlog_size:67108864
repl_backlog_first_byte_offset:2
repl_backlog_histlen:275828

2014-11-26 01:25:03,031 [MainThread] [INFO] [redis:192.168.60.14:6379] _binaries/redis-cli -h 192.168.60.14 -p 6379 info replication
# Replication
role:slave
master_host:192.168.60.12
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:275976
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:67108864
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

It has come up as a slave.
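This promote-then-demote behavior is standard sentinel: a quorum of sentinels agrees that the master is down, one of its slaves is elected as the new master, and the old master is reconfigured as a slave when it returns. redis-mgr writes the sentinel configuration for you, but for reference, a hand-written fragment for the cluster-redis1 group could look like this (the quorum and timeout values here are illustrative, not copied from the tool):

```
sentinel monitor cluster-redis1 192.168.60.11 6379 2
sentinel down-after-milliseconds cluster-redis1 30000
sentinel failover-timeout cluster-redis1 180000
sentinel parallel-syncs cluster-redis1 1
```

The trailing `2` on the `monitor` line is the quorum: with four sentinels, two of them must agree before a failover starts.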

Summary

Sentinel gives you redundancy and twemproxy gives you load balancing,
but in real operations you will want both at once.
The catch is that when sentinel promotes a slave to master,
the twemproxy configuration has to be updated to match, which is tedious to wire up yourself;
redis-mgr lets you set all of this up easily.

Topology changes are just edits to the config file,
so why not give it a try on your next project?