Redis
NoSQL: "not only SQL", an umbrella term for non-relational databases.
Redis (Remote Dictionary Server) is an open-source key-value database written in ANSI C. It works over the network, runs in memory, and can optionally persist data in a log-structured way.
Purpose
Features
docker pull redis:6.2.3
docker run -p 6379:6379 --name redis -v /Users/chengxiang92/DockerVolumes/redis/redis.conf:/etc/redis/redis.conf -v /Users/chengxiang92/DockerVolumes/redis/data:/data -d redis:6.2.3 redis-server /etc/redis/redis.conf --appendonly yes
docker exec -it redis redis-cli
Redis-benchmark
| Option | Description | Default |
|---|---|---|
| -h | Server hostname | 127.0.0.1 |
| -p | Server port | 6379 |
| -s | Server socket (overrides -h/-p) | |
| -c | Number of parallel connections | 50 |
| -n | Total number of requests | 10000 |
| -d | Data size of SET/GET values, in bytes | 2 |
| -k | 1 = keep alive, 0 = reconnect | 1 |
| -r | Use random keys for SET/GET/INCR, random values for SADD | |
| -P | Pipeline the requests | 1 (no pipeline) |
| -q | Quiet mode: only show query/sec values | |
| --csv | Output in CSV format | |
| -l | Loop: run the tests forever | |
| -t | Only run the comma-separated list of tests | |
| -I | Idle mode: just open N idle connections and wait | |
redis-benchmark -h localhost -p 6379 -c 100 -n 10000
docker exec -it redis redis-benchmark -h localhost -p 6379 -c 100 -n 10000
Redis is single-threaded: commands are executed on a single thread.
Redis has 16 databases by default and starts on database 0; select switches databases, dbsize returns the number of keys in the current database.
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> dbsize
(integer) 0
127.0.0.1:6379[1]>
keys *: list all keys
flushdb: clear the current database
flushall: clear all databases
127.0.0.1:6379> keys *
1) "name"
2) "key:__rand_int__"
3) "counter:__rand_int__"
4) "mylist"
5) "myhash"
127.0.0.1:6379> flushdb
OK
127.0.0.1:6379> flushall
OK
127.0.0.1:6379>
Redis-key
127.0.0.1:6379> set name test #set a key
OK
127.0.0.1:6379> keys * #list all keys
1) "name"
127.0.0.1:6379> exists name #does the key exist
(integer) 1
127.0.0.1:6379> exists name1
(integer) 0
127.0.0.1:6379> move name 1 #move the key to database 1
(integer) 1
127.0.0.1:6379> set name artisan
OK
127.0.0.1:6379> get name
"artisan"
127.0.0.1:6379> expire name 10 #set an expiry (seconds)
(integer) 1
127.0.0.1:6379> ttl name #remaining time to live
(integer) 8
127.0.0.1:6379> del name #delete the key
(integer) 0
127.0.0.1:6379> type name #type of the key's value
string
127.0.0.1:6379>
127.0.0.1:6379> set key1 v1
OK
127.0.0.1:6379> get key1
"v1"
127.0.0.1:6379> append key1 v2 #append to the string; creates the key if it does not exist
(integer) 4
127.0.0.1:6379> get key1
"v1v2"
127.0.0.1:6379> strlen key1 #string length
(integer) 4
127.0.0.1:6379> set nums 0
OK
127.0.0.1:6379> get nums
"0"
127.0.0.1:6379> incr nums #increment by 1
(integer) 1
127.0.0.1:6379> incr nums
(integer) 2
127.0.0.1:6379> decr nums #decrement by 1
(integer) 1
127.0.0.1:6379> incrby nums 10 #increment by a custom step
(integer) 11
127.0.0.1:6379> decrby nums 5 #decrement by a custom step
(integer) 6
127.0.0.1:6379>
127.0.0.1:6379> set key1 artisan
OK
127.0.0.1:6379> get key1
"artisan"
127.0.0.1:6379> GETRANGE key1 0 3 #substring over the given range
"arti"
127.0.0.1:6379> GETRANGE key1 0 -1 #the whole string (0 to -1)
"artisan"
127.0.0.1:6379> SETRANGE key1 1 xx #overwrite the string starting at the given offset
(integer) 7
127.0.0.1:6379> get key1
"axxisan"
SETEX (set with expire): set a key together with an expiry time
SETNX (set if not exist): set the key only if it does not already exist
127.0.0.1:6379> SETEX key2 30 abc #key2 expires in 30 seconds
OK
127.0.0.1:6379> ttl key2
(integer) 26
127.0.0.1:6379> SETNX key1 test #set key1 only if it does not exist
(integer) 0
127.0.0.1:6379> get key1
"axxisan"
MSET: set multiple keys at once
MGET: get multiple keys at once
MSETNX: set multiple keys only if none of them already exist
127.0.0.1:6379> MSET k1 v1 k2 v2 k3 v3
OK
127.0.0.1:6379> KEYS *
1) "k3"
2) "k2"
3) "k1"
127.0.0.1:6379>
127.0.0.1:6379> MGET k1 k2 k3
1) "v1"
2) "v2"
3) "v3"
127.0.0.1:6379> MSETNX k1 v1 k4 v4 #atomic: fails as a whole because k1 already exists
(integer) 0
127.0.0.1:6379> MSET user:1:name artisan user:1:age 30
OK
127.0.0.1:6379> MGET user:1:name user:1:age
1) "artisan"
2) "30"
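The user:1:name / user:1:age keys above show the common key:{id}:{field} pattern for flattening objects into plain strings. A minimal pure-Python sketch of MSET/MGET over such keys, with a dict standing in for Redis (illustrative only):

```python
# Simulate MSET/MGET over "user:{id}:{field}" keys; a plain dict
# stands in for the Redis keyspace (illustrative only).
store = {}

def mset(*pairs):
    """MSET key value [key value ...]"""
    it = iter(pairs)
    for key, value in zip(it, it):
        store[key] = value

def mget(*keys):
    """MGET key [key ...] -> list of values (None where missing)"""
    return [store.get(k) for k in keys]

mset("user:1:name", "artisan", "user:1:age", "30")
print(mget("user:1:name", "user:1:age"))  # ['artisan', '30']
```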
GETSET: return the old value and set a new one (nil if the key did not exist)
127.0.0.1:6379> GETSET key1 artisan
(nil)
127.0.0.1:6379> GETSET key1 artisan_v2
"artisan"
List: can be used as a stack, a queue, or a blocking queue.
Internally a linked list, so inserting at either end is the cheapest operation.
Commands start with L (left/head) or R (right/tail).
#LPUSH
127.0.0.1:6379> LPUSH list v1 v2 #push to the head of the list
(integer) 2
127.0.0.1:6379> LPUSH list v3
(integer) 3
127.0.0.1:6379> LRANGE list 0 -1
1) "v3"
2) "v2"
3) "v1"
127.0.0.1:6379> LRANGE list 0 1
1) "v3"
2) "v2"
#RPUSH
127.0.0.1:6379> RPUSH list v4 #push to the tail
(integer) 4
127.0.0.1:6379> LRANGE list 0 -1
1) "v3"
2) "v2"
3) "v1"
4) "v4"
127.0.0.1:6379> LPOP list
"v3"
127.0.0.1:6379> RPOP list
"v4"
127.0.0.1:6379> LINDEX list 0
"v2"
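Combining the pushes and pops above gives the stack and queue behaviours: LPUSH + LPOP acts as a stack, LPUSH + RPOP as a queue. A sketch with collections.deque standing in for the Redis list (left = head; illustrative only):

```python
from collections import deque

lst = deque()

def lpush(*values):            # push each value to the head, like LPUSH
    for v in values:
        lst.appendleft(v)

lpush("v1", "v2", "v3")
print(list(lst))               # ['v3', 'v2', 'v1'], same order as LRANGE 0 -1

stack_top = lst.popleft()      # LPOP: last pushed comes out first (stack)
queue_head = lst.pop()         # RPOP: first pushed comes out first (queue)
print(stack_top, queue_head)   # v3 v1
```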
#LLEN
127.0.0.1:6379> LPUSH list v1 v2 v3 v4 v4
(integer) 5
127.0.0.1:6379> LLEN list
(integer) 5
127.0.0.1:6379> LREM list 2 v4 #remove 2 occurrences of v4
(integer) 2
#LTRIM
127.0.0.1:6379> LPUSH list v1 v2 v3 v4
(integer) 4
127.0.0.1:6379> LTRIM list 1 2
OK
127.0.0.1:6379> LRANGE list 0 -1
1) "v3"
2) "v2"
127.0.0.1:6379>
#RPOPLPUSH
127.0.0.1:6379> LPUSH list v1 v2 v3 v4
(integer) 4
127.0.0.1:6379> RPOPLPUSH list list2
"v1"
127.0.0.1:6379> LRANGE list 0 -1
1) "v4"
2) "v3"
3) "v2"
127.0.0.1:6379> LRANGE list2 0 -1
1) "v1"
127.0.0.1:6379>
#EXISTS、LSET
127.0.0.1:6379> EXISTS list
(integer) 0
127.0.0.1:6379> LSET list 0 v5
(error) ERR no such key
127.0.0.1:6379> LPUSH list v1 v2 v3 v4
(integer) 4
127.0.0.1:6379> LSET list 0 v5
OK
127.0.0.1:6379> LRANGE list 0 -1
1) "v5"
2) "v3"
3) "v2"
4) "v1"
#LINSERT
127.0.0.1:6379> LPUSH list v1 v2 v3 v4
(integer) 4
127.0.0.1:6379> LINSERT list BEFORE v2 v5
(integer) 5
127.0.0.1:6379> LRANGE list 0 -1
1) "v4"
2) "v3"
3) "v5"
4) "v2"
5) "v1"
127.0.0.1:6379> LINSERT list AFTER v2 v6
(integer) 6
127.0.0.1:6379> LRANGE list 0 -1
1) "v4"
2) "v3"
3) "v5"
4) "v2"
5) "v6"
6) "v1"
127.0.0.1:6379>
Set: members are unique; duplicates are ignored.
127.0.0.1:6379> SADD set v1 v2 v3 v4
(integer) 4
127.0.0.1:6379> SMEMBERS set
1) "v2"
2) "v3"
3) "v4"
4) "v1"
127.0.0.1:6379> SISMEMBER set v1
(integer) 1
127.0.0.1:6379> SISMEMBER set v5
(integer) 0
127.0.0.1:6379> SCARD set
(integer) 4
127.0.0.1:6379> SREM set v1
(integer) 1
#SRANDMEMBER、SPOP
127.0.0.1:6379> SRANDMEMBER set
"v3"
127.0.0.1:6379> SRANDMEMBER set 2
1) "v2"
2) "v3"
127.0.0.1:6379> SPOP set
"v2"
#SMOVE
127.0.0.1:6379> sadd set v1 v2 v3 v4
(integer) 4
127.0.0.1:6379> SMOVE set set3 v1
(integer) 1
#SDIFF、SINTER、SUNION
127.0.0.1:6379> sadd set v1 v2 v3 v4
(integer) 4
127.0.0.1:6379> sadd set2 v3 v4 v5 v6
(integer) 4
127.0.0.1:6379> SDIFF set set2
1) "v2"
2) "v1"
127.0.0.1:6379> SDIFF set2 set
1) "v5"
2) "v6"
127.0.0.1:6379> SINTER set set2
1) "v3"
2) "v4"
127.0.0.1:6379> SUNION set set2
1) "v2"
2) "v5"
3) "v6"
4) "v1"
5) "v3"
6) "v4"
Hash: a key that holds a map of field-value pairs (key -> map).
127.0.0.1:6379> HSET hash name artisan
(integer) 1
127.0.0.1:6379> HGET hash name
"artisan"
127.0.0.1:6379> HMSET hash2 name h2 age 30
OK
127.0.0.1:6379> HMGET hash2 name age
1) "h2"
2) "30"
127.0.0.1:6379> HGETALL hash2
1) "name"
2) "h2"
3) "age"
4) "30"
#HDEL
127.0.0.1:6379> HDEL hash2 age
(integer) 1
127.0.0.1:6379> HGETALL hash2
1) "name"
2) "h2"
#HLEN
127.0.0.1:6379> HLEN hash
(integer) 1
127.0.0.1:6379> HEXISTS hash name
(integer) 1
127.0.0.1:6379> HKEYS hash
1) "name"
127.0.0.1:6379> HVALS hash
1) "artisan"
#HINCRBY
127.0.0.1:6379> HSET hash age 30
(integer) 1
127.0.0.1:6379> HINCRBY hash age 1
(integer) 31
127.0.0.1:6379> HGET hash age
"31"
127.0.0.1:6379> HSETNX hash age 50
(integer) 0
127.0.0.1:6379> HSET user:1 name artisan age 30
(integer) 2
127.0.0.1:6379> HGET user:1 name
"artisan"
Sorted set (zset): like a set, but every member carries a score used for ordering.
127.0.0.1:6379> ZADD zset 1 v1 2 v2 3 v3 4 v4
(integer) 4
127.0.0.1:6379> ZRANGE zset 0 -1
1) "v1"
2) "v2"
3) "v3"
4) "v4"
127.0.0.1:6379> ZRANGEBYSCORE zset -inf +inf
1) "v1"
2) "v2"
3) "v3"
4) "v4"
127.0.0.1:6379> ZRANGEBYSCORE zset 2 3
1) "v2"
2) "v3"
127.0.0.1:6379> ZRANGEBYSCORE zset -inf +inf withscores
1) "v1"
2) "1"
3) "v2"
4) "2"
5) "v3"
6) "3"
7) "v4"
8) "4"
127.0.0.1:6379> ZREVRANGEBYSCORE zset +inf -inf
1) "v4"
2) "v3"
3) "v2"
4) "v1"
127.0.0.1:6379> ZREM zset v3
(integer) 1
127.0.0.1:6379> ZCARD zset
(integer) 3
127.0.0.1:6379> ZCOUNT zset 1 2
(integer) 2
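What makes ZRANGEBYSCORE cheap is that members are kept ordered by score. A pure-Python simulation of ZADD/ZRANGEBYSCORE (a dict plus sorting; real Redis uses a skip list, so this is only an illustration of the semantics):

```python
zset = {}  # member -> score

def zadd(mapping):
    """ZADD: set scores for members."""
    zset.update(mapping)

def zrangebyscore(lo, hi):
    """Members with lo <= score <= hi, ordered by ascending score."""
    return [m for m, s in sorted(zset.items(), key=lambda kv: kv[1])
            if lo <= s <= hi]

zadd({"v1": 1, "v2": 2, "v3": 3, "v4": 4})
print(zrangebyscore(2, 3))                         # ['v2', 'v3']
print(zrangebyscore(float("-inf"), float("inf")))  # ['v1', 'v2', 'v3', 'v4']
```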
Geo commands are implemented on top of a zset under the hood, so plain zset commands also work, e.g. ZREM to remove a member.
127.0.0.1:6379> geoadd china:city 116.40 39.90 beijing
(integer) 1
127.0.0.1:6379> geoadd china:city 121.47 31.23 shanghai 106.50 29.53 chongqing 114.05 22.52 shenzhen 120.16 30.24 hangzhou 108.96 34.26 xian
(integer) 5
127.0.0.1:6379> GEOPOS china:city beijing
1) 1) "116.39999896287918091"
2) "39.90000009167092543"
#GEODIST
127.0.0.1:6379> GEODIST china:city beijing shanghai km
"1067.3788"
#GEORADIUS
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km
1) "chongqing"
2) "xian"
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km WITHCOORD WITHDIST WITHHASH COUNT 1
1) 1) "chongqing"
2) "341.9374"
3) (integer) 4026042091628984
4) 1) "106.49999767541885376"
2) "29.52999957900659211"
#GEORADIUSBYMEMBER
127.0.0.1:6379> GEORADIUSBYMEMBER china:city beijing 1000 km
1) "beijing"
2) "xian"
#GEOHASH
127.0.0.1:6379> GEOHASH china:city beijing shanghai
1) "wx4fbxxfke0"
2) "wtw3sj5zbj0"
#ZSET operations
127.0.0.1:6379> ZRANGE china:city 0 -1
1) "chongqing"
2) "xian"
3) "shenzhen"
4) "hangzhou"
5) "shanghai"
6) "beijing"
127.0.0.1:6379> ZREM china:city xian
(integer) 1
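GEODIST reported about 1067 km between beijing and shanghai. As a sanity check, the haversine formula over the stored coordinates gives nearly the same figure (6371 km is a common mean Earth radius; Redis uses a slightly different constant, hence small deviations):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance between two (lon, lat) points, in km."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

d = haversine_km(116.40, 39.90, 121.47, 31.23)  # beijing -> shanghai
print(round(d, 1))  # roughly 1067 km, close to GEODIST's 1067.3788
```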
HyperLogLog estimates cardinality: the number of distinct elements.
Standard error of 0.81%, with a very small memory footprint (at most 12 KB per key).
127.0.0.1:6379> PFADD key a b c d e f g h i j
(integer) 1
127.0.0.1:6379> PFCOUNT key
(integer) 10
127.0.0.1:6379> PFADD key2 i j k l m n o p
(integer) 1
127.0.0.1:6379> PFCOUNT key2
(integer) 8
127.0.0.1:6379> PFMERGE key3 key key2
OK
127.0.0.1:6379> PFCOUNT key3
(integer) 16
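The PFCOUNT replies above can be cross-checked with exact distinct counting using Python sets; a HyperLogLog returns approximately the same numbers (within the ~0.81% standard error) while using at most 12 KB per key instead of storing every element:

```python
# Exact distinct counts with sets, mirroring the PFADD/PFCOUNT transcript.
key = set("abcdefghij")   # PFADD key a b c d e f g h i j
key2 = set("ijklmnop")    # PFADD key2 i j k l m n o p

print(len(key))           # 10, matches PFCOUNT key
print(len(key2))          # 8, matches PFCOUNT key2
print(len(key | key2))    # 16, matches PFCOUNT key3 after PFMERGE
```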
Bitmap: a bit-level data structure; each offset records one of exactly two states, 0 or 1.
127.0.0.1:6379> SETBIT sign 0 1
(integer) 0
127.0.0.1:6379> SETBIT sign 1 0
(integer) 0
127.0.0.1:6379> SETBIT sign 2 0
(integer) 0
127.0.0.1:6379> SETBIT sign 3 0
(integer) 0
127.0.0.1:6379> SETBIT sign 4 1
(integer) 0
127.0.0.1:6379> SETBIT sign 5 1
(integer) 0
127.0.0.1:6379> SETBIT sign 6 0
(integer) 0
127.0.0.1:6379> GETBIT sign 3
(integer) 0
127.0.0.1:6379> GETBIT sign 4
(integer) 1
127.0.0.1:6379> BITCOUNT sign
(integer) 3
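The SETBIT/GETBIT/BITCOUNT sign-in example maps naturally onto bit operations. A pure-Python sketch using an int as the bitmap (bit 0 = day 0; illustrative only, Redis keeps the bits inside a string value):

```python
# Simulate SETBIT/GETBIT/BITCOUNT with bit operations on a Python int.
sign = 0

def setbit(offset, value):
    global sign
    if value:
        sign |= 1 << offset
    else:
        sign &= ~(1 << offset)

def getbit(offset):
    return (sign >> offset) & 1

for day, present in enumerate([1, 0, 0, 0, 1, 1, 0]):  # the week above
    setbit(day, present)

print(getbit(3), getbit(4))    # 0 1, matching the GETBIT replies
print(bin(sign).count("1"))    # 3 days signed in, like BITCOUNT sign
```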
A Redis transaction is essentially a batch of commands: all commands in a transaction are serialized and executed in order.
Redis transactions have no notion of isolation levels.
A single command is atomic, but a transaction is not (there is no rollback).
Redis transactions: MULTI to open, commands are queued, EXEC to run them.
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> set k1 v1
QUEUED
127.0.0.1:6379(TX)> set k2 v2
QUEUED
127.0.0.1:6379(TX)> get k1
QUEUED
127.0.0.1:6379(TX)> exec
1) OK
2) OK
3) "v1"
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> set k1 v1
QUEUED
127.0.0.1:6379(TX)> set k3 v3
QUEUED
127.0.0.1:6379(TX)> DISCARD
OK
127.0.0.1:6379>
If a command is rejected when queued (a syntax error), none of the commands in the transaction run:
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> set k1 v1
QUEUED
127.0.0.1:6379(TX)> set k2 v2
QUEUED
127.0.0.1:6379(TX)> set k3 v3
QUEUED
127.0.0.1:6379(TX)> getset k3
(error) ERR wrong number of arguments for 'getset' command
127.0.0.1:6379(TX)> set k4 v4
QUEUED
127.0.0.1:6379(TX)> exec
(error) EXECABORT Transaction discarded because of previous errors.
If a command fails only at runtime, the other commands still execute normally:
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> set k1 v1
QUEUED
127.0.0.1:6379(TX)> INCR k1
QUEUED
127.0.0.1:6379(TX)> set k2 v2
QUEUED
127.0.0.1:6379(TX)> get k2
QUEUED
127.0.0.1:6379(TX)> exec
1) OK
2) (error) ERR value is not an integer or out of range
3) OK
4) "v2"
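The two failure modes (a queue-time error aborts the whole transaction; a runtime error skips only the failing command) can be sketched as a queue-then-execute loop. This is an illustration of the semantics, not how Redis is implemented:

```python
def exec_transaction(commands):
    """EXEC over queued commands: run each in order; a runtime error is
    recorded for that command only, the rest still run (no rollback)."""
    results = []
    for cmd in commands:
        try:
            results.append(cmd())
        except Exception as e:
            results.append(f"(error) {e}")
    return results

db = {}
queued = [                               # MULTI ... commands get queued
    lambda: db.update(k1="v1") or "OK",  # set k1 v1
    lambda: int(db["k1"]) + 1,           # INCR k1: fails at runtime
    lambda: db.update(k2="v2") or "OK",  # set k2 v2
    lambda: db["k2"],                    # get k2
]
results = exec_transaction(queued)       # EXEC
print(results)
```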
Pessimistic locking: assume conflicts will happen and lock on every access.
Optimistic locking: assume no conflict; use WATCH to detect changes and abort the transaction if the watched key was modified.
127.0.0.1:6379> clear
127.0.0.1:6379> set money 100
OK
127.0.0.1:6379> set out 0
OK
127.0.0.1:6379> watch money
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> DECRBY money 20
QUEUED
127.0.0.1:6379(TX)> INCRBY out 20
QUEUED
127.0.0.1:6379(TX)> exec
1) (integer) 80
2) (integer) 20
While money is watched, another client modifies it, so the transaction fails:
127.0.0.1:6379> watch money
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> DECRBY money 10
QUEUED
127.0.0.1:6379(TX)> INCRBY out 10
QUEUED
127.0.0.1:6379(TX)> exec
(nil)
Meanwhile, in another client, money is modified:
127.0.0.1:6379> get money
"80"
127.0.0.1:6379> INCRBY money 1000
(integer) 1080
127.0.0.1:6379>
UNWATCH removes the watches (EXEC also clears them automatically):
127.0.0.1:6379> unwatch
OK
127.0.0.1:6379> watch money
OK
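In application code, this WATCH pattern is usually wrapped in a retry loop: watch, read, queue, EXEC; if EXEC returns (nil), the watched key changed, so start over. A pure-Python sketch of that check-and-set idea, with a version counter standing in for the watched key (no Redis involved; all names are illustrative):

```python
# Optimistic check-and-set: a version counter stands in for WATCH.
# The commit only happens if nothing changed the key since we read it;
# otherwise we retry, mirroring "EXEC returned nil -> watch again".
state = {"money": 100, "out": 0, "version": 0}

def transfer(amount, max_retries=3):
    for _ in range(max_retries):
        seen = state["version"]           # WATCH money
        money = state["money"]            # read while watched
        if state["version"] != seen:      # key changed: EXEC -> (nil)
            continue                      # retry from the top
        state["money"] = money - amount   # DECRBY money <amount>
        state["out"] += amount            # INCRBY out <amount>
        state["version"] += 1             # commit bumps the version
        return True
    return False

print(transfer(20), state["money"], state["out"])  # True 80 20
```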
Jedis: the Java client recommended by Redis.
Since Spring Boot 2.x, the default client changed from Jedis to Lettuce.
@Configuration(proxyBeanMethods = false)
@ConditionalOnClass(RedisOperations.class)
@EnableConfigurationProperties(RedisProperties.class)
@Import({ LettuceConnectionConfiguration.class, JedisConnectionConfiguration.class })
public class RedisAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean(name = "redisTemplate")
    @ConditionalOnSingleCandidate(RedisConnectionFactory.class)
    public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate<Object, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(redisConnectionFactory);
        return template;
    }

    @Bean
    @ConditionalOnMissingBean
    @ConditionalOnSingleCandidate(RedisConnectionFactory.class)
    public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory redisConnectionFactory) {
        StringRedisTemplate template = new StringRedisTemplate();
        template.setConnectionFactory(redisConnectionFactory);
        return template;
    }

}
Storing objects
include /path/to/local.conf
bind 127.0.0.1 -::1
protected-mode yes
port 6379
daemonize yes #run as a daemon (default: no)
pidfile /var/run/redis_6379.pid #when running in the background, a pid file is required
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
logfile "" #log file name (empty: log to stdout)
databases 16 #number of databases (default)
always-show-logo no #whether to show the ASCII logo
Snapshotting: if the configured number of write operations happens within the given time window, the dataset is persisted to a .rdb / .aof file. Without persistence, data is lost as soon as the process stops.
save 3600 1 #snapshot if at least 1 key changed within 3600 seconds
save 300 100 #snapshot if at least 100 changes within 300 seconds
save 60 10000 #snapshot if at least 10000 changes within 60 seconds
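The save rules combine with OR: a background snapshot fires as soon as any rule's change threshold is met within its time window. A small illustrative check (a sketch of the rule logic, not Redis source):

```python
# (seconds, min changes) pairs, matching the three "save" lines above.
SAVE_RULES = [(3600, 1), (300, 100), (60, 10000)]

def should_snapshot(elapsed_seconds, changes):
    """True if any save rule is satisfied."""
    return any(elapsed_seconds >= secs and changes >= n
               for secs, n in SAVE_RULES)

print(should_snapshot(60, 10000))  # True: 10000 changes within 60 s
print(should_snapshot(59, 5000))   # False: no rule satisfied yet
```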
stop-writes-on-bgsave-error yes #stop accepting writes if a background save fails
rdbcompression yes #compress the rdb file
rdbchecksum yes #verify a checksum when saving the rdb file
dbfilename dump.rdb #rdb file name (the directory is set with "dir")
requirepass root_redis #set a password
maxclients 10000 #maximum number of client connections
maxmemory <bytes> #memory limit
#volatile-lru: evict by LRU among keys with an expiry set
#allkeys-lru: evict any key by LRU
#volatile-random: evict a random key among keys with an expiry set
#allkeys-random: evict a random key
#volatile-ttl: evict the key with the nearest expiry
#noeviction: never evict; return an error on writes
maxmemory-policy noeviction #what to do when the memory limit is reached
appendonly yes #enable AOF (it is disabled by default)
appendfilename "appendonly.aof" #AOF file name
# appendfsync always: fsync after every write
# appendfsync no: never fsync, let the OS flush
appendfsync everysec #fsync once per second
Redis is an in-memory database.
RDB: at configured intervals, a point-in-time snapshot of the dataset is written to disk; on restart the snapshot file is read straight back into memory.
For persistence Redis forks a child process, writes the data to a temporary file, and swaps it in for the previous snapshot file once the save completes. Drawback: writes since the last snapshot may be lost.
Snapshot triggers:
127.0.0.1:6379> save #the save command takes a snapshot immediately
OK
To restore, simply place the rdb file in the configured directory.
Summary: well suited to large-scale recovery when it is acceptable to lose the most recent writes.
AOF logs every write operation (reads are not logged) to an append-only file that is never rewritten in place; on startup Redis replays the file to rebuild the dataset.
appendonly yes
appendfilename "appendonly.aof"
no-appendfsync-on-rewrite no #whether to skip fsync while a rewrite is in progress (no: keep fsyncing)
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb #rewrite trigger: file exceeds 64 MB
aof-load-truncated yes
aof-use-rdb-preamble yes
If the aof file is corrupted, Redis will not start; repair it with redis-check-aof --fix:
docker exec -it redis redis-check-aof --fix appendonly.aof
AOF analyzed: size=6433122, ok_up_to=6433122, ok_up_to_line=950644, diff=0
AOF is valid
Summary: better durability than RDB, but the aof file is much larger than an rdb file and recovery is slower.
Redis pub/sub is a messaging pattern: publishers (pub) send messages and subscribers (sub) receive them.
A Redis client can subscribe to any number of channels.
#subscribe
127.0.0.1:6379> SUBSCRIBE test
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "test"
3) (integer) 1
#publish
127.0.0.1:6379> PUBLISH test hello
(integer) 1
127.0.0.1:6379>
#the subscriber receives the message in real time
127.0.0.1:6379> SUBSCRIBE test
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "test"
3) (integer) 1
1) "message" #message type
2) "test" #channel
3) "hello" #payload
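The exchange above can be modeled as a channel-to-subscribers map, where PUBLISH returns how many subscribers received the message, matching the (integer) 1 reply. An in-process sketch with no networking (names are illustrative):

```python
channels = {}  # channel name -> list of subscriber callbacks

def subscribe(channel, callback):
    """SUBSCRIBE: register a callback for a channel."""
    channels.setdefault(channel, []).append(callback)

def publish(channel, message):
    """PUBLISH: deliver to all subscribers, return the receiver count."""
    subs = channels.get(channel, [])
    for cb in subs:
        cb(("message", channel, message))  # same triple the subscriber sees
    return len(subs)

received = []
subscribe("test", received.append)
print(publish("test", "hello"))  # 1
print(received)                  # [('message', 'test', 'hello')]
```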
Replication is one-way: data only flows from the master to its replicas.
Environment setup
Check replication info
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:0
master_failover_state:no-failover
master_replid:355de4a7593ac7a38682b5d274b98bc8e3dd3875
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6379>
Prepare one master and two replicas, all attached to the same Docker network:
docker run -p 6379:6379 --name redis -v /Users/chengxiang92/DockerVolumes/redis/conf:/etc/redis -v /Users/chengxiang92/DockerVolumes/redis/data:/data --net default-net -d redis:6.2.3 redis-server /etc/redis/redis.conf --appendonly yes
docker run -p 6380:6379 --name redis_2 -v /Users/chengxiang92/DockerVolumes/redis_2/conf:/etc/redis -v /Users/chengxiang92/DockerVolumes/redis_2/data:/data --net default-net -d redis:6.2.3 redis-server /etc/redis/redis.conf --appendonly yes
docker run -p 6381:6379 --name redis_3 -v /Users/chengxiang92/DockerVolumes/redis_3/conf:/etc/redis -v /Users/chengxiang92/DockerVolumes/redis_3/data:/data --net default-net -d redis:6.2.3 redis-server /etc/redis/redis.conf --appendonly yes
Config changes:
#master config
bind 0.0.0.0
#replica config
replicaof redis 6379
masterauth root_redis #the master's password
#on redis_2 and redis_3
127.0.0.1:6379> SLAVEOF redis 6379 #can also attach manually without editing the config
OK
127.0.0.1:6379> INFO replication
# Replication
role:slave
master_host:redis
master_port:6379
master_link_status:up
master_last_io_seconds_ago:4
master_sync_in_progress:0
slave_repl_offset:0
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:51344c81ec20d22022c30d4917b4b38b2634eeff
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:0
#on the master (redis)
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=199.2.0.3,port=6379,state=online,offset=476,lag=1
slave1:ip=199.2.0.4,port=6379,state=online,offset=476,lag=1
master_failover_state:no-failover
master_replid:51344c81ec20d22022c30d4917b4b38b2634eeff
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:476
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:476
The master can both read and write; replicas are read-only.
Everything on the master is automatically replicated to the replicas.
Sentinel: a mode that elects a new master automatically when the current one fails.
sentinel.conf
# sentinel monitor <master-name> <ip> <redis-port> <quorum>
sentinel monitor redis 199.2.0.2 6379 1
sentinel auth-pass redis root_redis
#Tells Sentinel to monitor a master at the given ip:port. master-name is arbitrary but may only contain letters, digits and the characters ".-_"; quorum is the number of sentinels that must consider the master failed before it is treated as really down. Note that master-ip must be a real IP.
Start the sentinel:
docker exec -it redis redis-sentinel /etc/redis/sentinel.conf
Summary: with Sentinel, the master role can fail over to a replica automatically, giving high availability.