Redis Tutorial

Thinking through a Redis question: why is it so fast?

The reasons it is fast boil down to the following:
1. KV data model: lookups in the main dictionary are O(1)
2. Single-threaded command execution, which avoids thread context switches (one core is enough; it is rare for the CPU to become Redis's bottleneck, since Redis is usually memory or network bound)
3. Asynchronous, non-blocking I/O multiplexing (see the sketch after this list)
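Points 2 and 3 together describe a single-threaded event loop driven by I/O multiplexing. Redis implements this in C with its own "ae" event loop on top of epoll/kqueue/select; the following is only a minimal Python sketch of the same idea, a hypothetical echo server (port 7000 and the handler names are made up for illustration) in which one thread serves many non-blocking connections.

```python
# Minimal sketch of a single-threaded server using I/O multiplexing,
# the same idea behind Redis's event loop. Hypothetical echo server
# for illustration only; Redis itself is written in C.
import selectors
import socket

sel = selectors.DefaultSelector()     # picks epoll/kqueue/select for the OS

def accept(server_sock):
    conn, _addr = server_sock.accept()
    conn.setblocking(False)           # non-blocking: the loop never waits on one client
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)            # echo back; Redis would parse and execute a command here
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 7000))      # hypothetical port for the demo
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:                           # the single event-loop thread
    for key, _mask in sel.select():   # block until some socket is ready
        key.data(key.fileobj)         # dispatch to accept() or handle()
```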

Redis is said to handle around 100,000 requests per second (on my local 1-core / 2 GB box I measured a bit over 50,000). At first I guessed that the Redis server must be running a large number of threads, but it turns out that is not the case; the three points above are the real reasons.
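Numbers like these are normally produced with the bundled redis-benchmark tool; as a rough, hedged sketch of how to sanity-check them from a client, the snippet below times plain SET commands with the redis-py library (it assumes `pip install redis` and a Redis server listening on localhost:6379).

```python
# Rough throughput check: one full round trip per command, no pipelining.
import time
import redis

r = redis.Redis(host="localhost", port=6379)

N = 10_000
start = time.perf_counter()
for i in range(N):
    r.set(f"k:{i}", "v")              # each call waits for the server's reply
elapsed = time.perf_counter() - start
print(f"{N / elapsed:.0f} SET ops/sec without pipelining")
```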

 

It's not very frequent that CPU becomes your bottleneck with Redis, as usually Redis is either memory or network bound.  For instance, using pipelining Redis running on an average Linux system can deliver even 1 million requests per second, so if your application mainly uses O(N) or O(log(N)) commands, it is hardly going to use too much CPU. 
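To illustrate the pipelining mentioned above, here is a sketch (same assumptions: redis-py and a local Redis on localhost:6379) that batches the same SET commands into a single round trip; compared with the unpipelined loop shown earlier, this removes most of the per-command network latency.

```python
# Same workload as before, but batched through a pipeline.
import time
import redis

r = redis.Redis(host="localhost", port=6379)

N = 10_000
start = time.perf_counter()
pipe = r.pipeline(transaction=False)  # plain pipelining, no MULTI/EXEC wrapper
for i in range(N):
    pipe.set(f"k:{i}", "v")           # queued locally, nothing sent yet
pipe.execute()                        # all commands sent and answered in one batch
elapsed = time.perf_counter() - start
print(f"{N / elapsed:.0f} SET ops/sec with pipelining")
```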
 
However, to maximize CPU usage you can start multiple instances of Redis in the same box and treat them as different servers.  At some point a single box may not be enough anyway, so if you want to use multiple CPUs you can start thinking of some way to shard earlier. 
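A minimal sketch of the "multiple instances in the same box" idea is client-side sharding: hash each key to choose an instance, so every instance (and CPU core) serves a share of the keys. The ports below are hypothetical, and a real deployment would more likely rely on Redis Cluster or a consistent-hashing client.

```python
# Naive client-side sharding across several local Redis instances.
import zlib
import redis

instances = [
    redis.Redis(host="localhost", port=6379),
    redis.Redis(host="localhost", port=6380),  # hypothetical extra instances
    redis.Redis(host="localhost", port=6381),
]

def node_for(key: str) -> redis.Redis:
    # Stable hash of the key picks one instance; each instance gets its own core.
    return instances[zlib.crc32(key.encode()) % len(instances)]

node_for("user:42").set("user:42", "alice")
print(node_for("user:42").get("user:42"))
```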
 
You can find more information about using multiple Redis instances in the Partitioning page. 
 
However with Redis 4.0 we started to make Redis more threaded. For now this is limited to deleting objects in the background, and to blocking commands implemented via Redis modules. For future releases, the plan is to make Redis more and more threaded.
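The background deletion mentioned here is exposed to clients as the UNLINK command introduced in Redis 4.0: the key disappears from the keyspace immediately, while its memory is reclaimed by a background thread, unlike DEL, which frees it on the main thread. A small sketch, assuming redis-py 3.0+ and a local Redis 4.0+:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Build a key that would be expensive to free synchronously.
for start in range(0, 100_000, 10_000):
    r.rpush("big:list", *range(start, start + 10_000))

r.unlink("big:list")   # returns quickly; the actual freeing happens off the main thread
# r.delete("big:list") would free the same memory synchronously on the event-loop thread
```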



 
