While reading the VictoriaMetrics source code, I came across what looked like a perfectly unremarkable passage:
// AddRows adds the given mrs to s.
func (s *Storage) AddRows(mrs []MetricRow, precisionBits uint8) error {
    if len(mrs) == 0 {
        return nil
    }

    // Limit the number of concurrent goroutines that may add rows to the storage.
    // This should prevent from out of memory errors and CPU trashing when too many
    // goroutines call AddRows.
    select {
    case addRowsConcurrencyCh <- struct{}{}:
    default:
        // Sleep for a while until giving up
        atomic.AddUint64(&s.addRowsConcurrencyLimitReached, 1)
        t := timerpool.Get(addRowsTimeout)

        // Prioritize data ingestion over concurrent searches.
        storagepacelimiter.Search.Inc()

        select {
        case addRowsConcurrencyCh <- struct{}{}:
            timerpool.Put(t)
            storagepacelimiter.Search.Dec()
        case <-t.C:
            timerpool.Put(t)
            storagepacelimiter.Search.Dec()
            atomic.AddUint64(&s.addRowsConcurrencyLimitTimeout, 1)
            atomic.AddUint64(&s.addRowsConcurrencyDroppedRows, uint64(len(mrs)))
            return fmt.Errorf("cannot add %d rows to storage in %s, since it is overloaded with %d concurrent writers; add more CPUs or reduce load",
                len(mrs), addRowsTimeout, cap(addRowsConcurrencyCh))
        }
    }
After a closer look, though, this turned out to be anything but ordinary.
In the vm-storage component, a storage node has to handle both data ingestion and queries. Ingestion is clearly the more important job, and queries get a lower priority than writes.
My first reaction to a problem like this: just give writes more goroutines than queries, right? However much priority you want, set the ratio accordingly.
Far too naive!
So let me first summarize how vm-storage handles goroutine control (a minimal sketch of the whole flow follows the list), and then walk through the source code piece by piece:
After the IO goroutines receive data, they hand it off to the compute goroutines through a channel.
Before executing its business logic, an insert goroutine sends a struct{} into a queueing channel whose capacity equals the number of CPU cores. A successful send proves that the number of concurrent writes is still below the core count, so the write is allowed to proceed.
A failed send means some insert goroutine is not being scheduled in time, so the select (query) goroutines need to be told to voluntarily give up CPU.
Every time an insert operation is blocked, a counter is incremented atomically. This counter records how many insert operations are currently waiting.
Once a blocked insert manages to enqueue, the counter is decremented. When it reaches 0, a Broadcast() is issued on a condition variable to wake up the waiting select operations.
In the select goroutines, after every 4096 blocks scanned (whenever loops & 4095 == 0) the code checks whether any insert operation is waiting. If so, it calls cond.Wait() on the condition variable and yields its scheduling slot.
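Before digging into the real code, here is a minimal, self-contained sketch of that whole flow. It mirrors the mechanism described in the list, but every name in it (pacer, writeSlots, insert, scan) is mine, and the 30-second give-up timeout is omitted for brevity; treat it as an illustration of the pattern, not as VictoriaMetrics code.

package main

import (
    "fmt"
    "runtime"
    "sync"
    "sync/atomic"
    "time"
)

// pacer is a stripped-down stand-in for the pace-limiter idea:
// high-priority work brackets its waiting with inc/dec, and
// low-priority loops periodically call waitIfNeeded.
type pacer struct {
    mu   sync.Mutex
    cond *sync.Cond
    n    int32 // number of high-priority callers currently waiting
}

func newPacer() *pacer {
    p := &pacer{}
    p.cond = sync.NewCond(&p.mu)
    return p
}

func (p *pacer) inc() { atomic.AddInt32(&p.n, 1) }

func (p *pacer) dec() {
    if atomic.AddInt32(&p.n, -1) == 0 {
        // No writer is starved anymore; wake up all blocked scanners.
        p.mu.Lock()
        p.cond.Broadcast()
        p.mu.Unlock()
    }
}

func (p *pacer) waitIfNeeded() {
    if atomic.LoadInt32(&p.n) <= 0 {
        return // fast path: no writer is starved, cost is a single atomic load
    }
    p.mu.Lock()
    for atomic.LoadInt32(&p.n) > 0 {
        p.cond.Wait()
    }
    p.mu.Unlock()
}

var (
    // Channel-as-semaphore: capacity equals the CPU count,
    // so at most that many inserts run at once.
    writeSlots = make(chan struct{}, runtime.GOMAXPROCS(0))
    scanPacer  = newPacer()
)

func insert() {
    select {
    case writeSlots <- struct{}{}: // got a slot immediately
    default:
        // All slots are busy: announce that a writer is starved so that
        // scanners back off, then wait for a slot (no timeout in this sketch).
        scanPacer.inc()
        writeSlots <- struct{}{}
        scanPacer.dec()
    }
    defer func() { <-writeSlots }() // release the slot when done
    time.Sleep(time.Millisecond)    // pretend to write a batch of rows
}

func scan() {
    for i := 0; i < 1<<20; i++ {
        if i&4095 == 0 {
            scanPacer.waitIfNeeded() // every 4096 iterations, yield to waiting writers
        }
        // pretend to scan one block
    }
}

func main() {
    var wg sync.WaitGroup
    wg.Add(1)
    go func() { defer wg.Done(); scan() }()
    for i := 0; i < 64; i++ {
        wg.Add(1)
        go func() { defer wg.Done(); insert() }()
    }
    wg.Wait()
    fmt.Println("all inserts and the scan finished")
}

The key design choice is that the low-priority side only polls the counter once every few thousand iterations, so when nothing is starved the overhead is a single atomic load on the fast path.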
(The source code lives at: https://github.com/VictoriaMetrics/VictoriaMetrics)
lib/protoparser/common/unmarshal_work.go:24
// StartUnmarshalWorkers starts unmarshal workers.
func StartUnmarshalWorkers() {
    if unmarshalWorkCh != nil {
        logger.Panicf("BUG: it looks like startUnmarshalWorkers() has been alread called without stopUnmarshalWorkers()")
    }
    gomaxprocs := cgroup.AvailableCPUs()                    // get the number of available CPU cores
    unmarshalWorkCh = make(chan UnmarshalWork, gomaxprocs)  // create a channel whose capacity equals the core count
    unmarshalWorkersWG.Add(gomaxprocs)
    for i := 0; i < gomaxprocs; i++ {
        go func() { // start N worker goroutines, one per core
            defer unmarshalWorkersWG.Done()
            for uw := range unmarshalWorkCh {
                uw.Unmarshal() // this calls the actual business-specific parsing
            }
        }()
    }
}
After an IO goroutine reads the data, it drops the request into unmarshalWorkCh:
// ScheduleUnmarshalWork schedules uw to run in the worker pool.
//
// It is expected that StartUnmarshalWorkers is already called.
func ScheduleUnmarshalWork(uw UnmarshalWork) {
    unmarshalWorkCh <- uw
}
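For context on how a caller uses this pool: UnmarshalWork is just a one-method interface, so an IO goroutine wraps whatever it has read in a type that implements Unmarshal() and schedules it. Below is a hypothetical, simplified caller; the lineUnmarshalWork type and its fields are mine, and only StartUnmarshalWorkers, ScheduleUnmarshalWork, StopUnmarshalWorkers and the UnmarshalWork interface come from the package (assuming its exported API is as I remember it).

package main

import (
    "fmt"
    "strings"

    "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
)

// lineUnmarshalWork carries raw bytes from the IO goroutine to a worker.
// Its Unmarshal method satisfies common.UnmarshalWork.
type lineUnmarshalWork struct {
    reqBuf   string
    callback func(fields []string)
}

// Unmarshal runs on one of the worker goroutines and does the CPU-bound parsing.
func (uw *lineUnmarshalWork) Unmarshal() {
    uw.callback(strings.Fields(uw.reqBuf))
}

func main() {
    common.StartUnmarshalWorkers()

    done := make(chan struct{})
    // In a real server this call sits on the IO path, right after reading a request body.
    common.ScheduleUnmarshalWork(&lineUnmarshalWork{
        reqBuf: "cpu_usage 42 1700000000",
        callback: func(fields []string) {
            fmt.Println("parsed fields:", fields)
            close(done)
        },
    })

    <-done
    common.StopUnmarshalWorkers()
}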
lib/storage/storage.go:1617
First, a channel is created to manage write concurrency:
var (
    // Limit the concurrency for data ingestion to GOMAXPROCS, since this operation
    // is CPU bound, so there is no sense in running more than GOMAXPROCS concurrent
    // goroutines on data ingestion path.
    addRowsConcurrencyCh = make(chan struct{}, cgroup.AvailableCPUs())

    addRowsTimeout = 30 * time.Second
)
The channel's capacity is the number of CPU cores: with 10 cores, at most 10 writes can be in flight at once.
Here is how write concurrency is handled: lib/storage/storage.go:1529
// AddRows adds the given mrs to s.
func (s *Storage) AddRows(mrs []MetricRow, precisionBits uint8) error {
    if len(mrs) == 0 {
        return nil
    }

    // Limit the number of concurrent goroutines that may add rows to the storage.
    // This should prevent from out of memory errors and CPU trashing when too many
    // goroutines call AddRows.
    select {
    case addRowsConcurrencyCh <- struct{}{}:
        // The send succeeded, so the number of concurrent writers is below the
        // core count; fall through to the insert logic.
    default:
        // The send failed, so some insert goroutine is being starved.
        // The select (query) goroutines need to be told to yield.
        // Sleep for a while until giving up
        atomic.AddUint64(&s.addRowsConcurrencyLimitReached, 1)
        t := timerpool.Get(addRowsTimeout)

        // Prioritize data ingestion over concurrent searches.
        // The pace limiter keeps an atomically updated counter of how many
        // insert operations are currently waiting.
        storagepacelimiter.Search.Inc()

        select {
        case addRowsConcurrencyCh <- struct{}{}:
            // The send succeeded before the timeout fired.
            timerpool.Put(t) // return the timer to the pool to reduce GC pressure
            // The insert can now be scheduled, so atomically decrement the number
            // of waiters. When it reaches 0, cond.Broadcast() is called so the
            // select goroutines can resume.
            storagepacelimiter.Search.Dec()
        case <-t.C:
            // Waited for 30 seconds without getting a slot.
            timerpool.Put(t)
            storagepacelimiter.Search.Dec()
            atomic.AddUint64(&s.addRowsConcurrencyLimitTimeout, 1)
            atomic.AddUint64(&s.addRowsConcurrencyDroppedRows, uint64(len(mrs)))
            // Still no CPU resources after 30 seconds; give up with an error.
            return fmt.Errorf("cannot add %d rows to storage in %s, since it is overloaded with %d concurrent writers; add more CPUs or reduce load",
                len(mrs), addRowsTimeout, cap(addRowsConcurrencyCh))
        }
    }

    // ... the actual insert logic goes here ...

    <-addRowsConcurrencyCh // release the slot once the insert logic is done

    return firstErr
}
The select path does not separate IO goroutines from compute goroutines, because query requests are usually few in number and their payloads are small.
lib/storage/storage.go:1097
var (
    // Limit the concurrency for TSID searches to GOMAXPROCS*2, since this operation
    // is CPU bound and sometimes disk IO bound, so there is no sense in running more
    // than GOMAXPROCS*2 concurrent goroutines for TSID searches.
    searchTSIDsConcurrencyCh = make(chan struct{}, cgroup.AvailableCPUs()*2)
)
Query concurrency is capped at twice the number of CPU cores.
The query-limiting code looks like this: lib/storage/storage.go:1056
// searchTSIDs returns sorted TSIDs for the given tfss and the given tr.
func (s *Storage) searchTSIDs(tfss []*TagFilters, tr TimeRange, maxMetrics int, deadline uint64) ([]TSID, error) {
    // Do not cache tfss -> tsids here, since the caching is performed
    // on idb level.

    // Limit the number of concurrent goroutines that may search TSIDS in the storage.
    // This should prevent from out of memory errors and CPU trashing when too many
    // goroutines call searchTSIDs.
    select {
    case searchTSIDsConcurrencyCh <- struct{}{}:
        // Same idea as the insert limiter: only a successful send lets the
        // query proceed into the search logic.
    default:
        // Sleep for a while until giving up
        atomic.AddUint64(&s.searchTSIDsConcurrencyLimitReached, 1)
        currentTime := fasttime.UnixTimestamp()
        timeoutSecs := uint64(0)
        if currentTime < deadline {
            // Unlike the insert path's fixed timeout, each query may carry its
            // own deadline, so the wait is bounded by whatever remains of it.
            timeoutSecs = deadline - currentTime
        }
        timeout := time.Second * time.Duration(timeoutSecs)
        t := timerpool.Get(timeout)
        select {
        case searchTSIDsConcurrencyCh <- struct{}{}:
            timerpool.Put(t)
        case <-t.C:
            timerpool.Put(t)
            atomic.AddUint64(&s.searchTSIDsConcurrencyLimitTimeout, 1)
            return nil, fmt.Errorf("cannot search for tsids, since more than %d concurrent searches are performed during %.3f secs; add more CPUs or reduce query load",
                cap(searchTSIDsConcurrencyCh), timeout.Seconds())
        }
    }
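One difference from the insert path worth spelling out: the wait here is bounded not by a fixed 30 seconds but by whatever remains of the query's own deadline. Here is a tiny standalone illustration of that conversion and of the give-up branch, using plain time.Now and time.NewTimer instead of VictoriaMetrics' fasttime and timerpool.

package main

import (
    "fmt"
    "time"
)

// remainingTimeout converts an absolute unix-seconds deadline into a timer
// duration, clamped to zero when the deadline has already passed.
func remainingTimeout(deadline uint64) time.Duration {
    now := uint64(time.Now().Unix())
    if now >= deadline {
        return 0
    }
    return time.Duration(deadline-now) * time.Second
}

func main() {
    deadline := uint64(time.Now().Unix()) + 2 // this query has 2 seconds left
    t := time.NewTimer(remainingTimeout(deadline))
    defer t.Stop()

    searchSlots := make(chan struct{}, 2)
    searchSlots <- struct{}{}
    searchSlots <- struct{}{} // both slots taken: force the timeout branch

    select {
    case searchSlots <- struct{}{}:
        fmt.Println("got a search slot")
    case <-t.C:
        fmt.Println("gave up: deadline exhausted while waiting for a slot")
    }
}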
lib/storage/search.go:188
// NextMetricBlock proceeds to the next MetricBlockRef.
func (s *Search) NextMetricBlock() bool {
    if s.err != nil {
        return false
    }
    for s.ts.NextBlock() {
        if s.loops&paceLimiterSlowIterationsMask == 0 {
            // Every 4096 iterations, check the deadline and the pace limiter.
            // If insert goroutines are waiting, WaitIfNeeded() inside this call
            // blocks on the condition variable via cond.Wait(), yielding to them.
            if err := checkSearchDeadlineAndPace(s.deadline); err != nil {
                s.err = err
                return false
            }
        }
        s.loops++
        // ...
    }
    // ...
}
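The post does not quote checkSearchDeadlineAndPace itself; as far as I can tell it is a thin wrapper that first checks the per-query deadline and then defers to the pace limiter, roughly like the following (paraphrased from memory, so it may differ slightly from the current source):

// checkSearchDeadlineAndPace fails the search once its deadline has passed,
// and otherwise blocks while insert goroutines are waiting for write slots.
func checkSearchDeadlineAndPace(deadline uint64) error {
    if fasttime.UnixTimestamp() > deadline {
        return ErrDeadlineExceeded
    }
    storagepacelimiter.Search.WaitIfNeeded()
    return nil
}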
The implementation details of WaitIfNeeded(): lib/pacelimiter/pacelimiter.go:43
// WaitIfNeeded blocks while the number of Inc calls is bigger than the number of Dec calls.
func (pl *PaceLimiter) WaitIfNeeded() {
    if atomic.LoadInt32(&pl.n) <= 0 {
        // Fast path - there is no need in lock.
        return
    }

    // Slow path - wait until Dec is called.
    pl.mu.Lock()
    for atomic.LoadInt32(&pl.n) > 0 {
        // n is the number of high-priority (insert) goroutines currently waiting.
        pl.delaysTotal++
        // When n drops back to 0, pl.cond.Broadcast() is triggered so that
        // the low-priority goroutines get scheduled again.
        pl.cond.Wait()
    }
    pl.mu.Unlock()
}
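For completeness, the Inc/Dec counterparts in the same file are essentially the following (again paraphrased; see lib/pacelimiter/pacelimiter.go for the authoritative version). Dec is where the Broadcast mentioned above actually happens:

// Inc marks that one more high-priority caller (an insert) is waiting.
func (pl *PaceLimiter) Inc() {
    atomic.AddInt32(&pl.n, 1)
}

// Dec marks that a waiting high-priority caller got through. When the number
// of waiters drops to zero, all goroutines blocked in WaitIfNeeded are woken up.
func (pl *PaceLimiter) Dec() {
    if atomic.AddInt32(&pl.n, -1) == 0 {
        pl.mu.Lock()
        pl.cond.Broadcast()
        pl.mu.Unlock()
    }
}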
In any case, thanks to valyala: from now on we can simply import this code and copy the homework.