
Kubernetes EventBroadcaster Event Management Mechanism: Source Code Analysis


1. Overview

A Kubernetes Event is a resource object that surfaces what is happening inside the cluster: each component in the Kubernetes system reports the events it encounters at runtime to the Kubernetes API Server, for example which decision the scheduler made, or why a Pod was evicted from a node. Because Events are resource objects, they are stored in the etcd cluster behind the Kubernetes API Server; to keep them from filling up disk space, a retention policy is enforced: events are deleted one hour after their last occurrence. For the concept, usage, and persistence options of the Event resource object, see 《Kubernetes Event详述及持久化方案》.

2. How the EventBroadcaster Event Management Mechanism Works

2.1 Who Sends Events?

Kubernetes is built around the Pod resource: Deployments, StatefulSets, ReplicaSets, DaemonSets, CronJobs, and so on all ultimately create Pods. The event mechanism therefore also revolves around Pods, and every key step in a Pod's lifecycle produces event messages. For example, the Controller Manager records node registration and deletion events and Deployment scale-up and rollout events; the kubelet records image garbage-collection events and volume mount failures; the scheduler records scheduling events. All of these core Kubernetes components process Events through the EventBroadcaster event management mechanism. The goal of this article is to explain that mechanism: once you understand it, you can use it in your own components to manage the events produced by custom resource types, and inspecting the events associated with a custom resource makes debugging and troubleshooting much easier.

2.2 Components of the EventBroadcaster Mechanism and How It Runs

The Event management mechanism is made up of three main parts:

  • EventRecorder: the event producer, also called the event recorder. Kubernetes components generate events by calling the EventRecorder's methods;
  • EventBroadcaster: the event consumer, also called the event broadcaster. It consumes the events produced by the EventRecorder and distributes them to broadcasterWatchers; distribution happens in one of two modes, non-blocking and blocking;
  • broadcasterWatcher: the watcher handle, which defines how an event is processed, for example by uploading it to the apiserver;

The EventBroadcaster mechanism runs as shown in the figure below:

[Figure: EventBroadcaster event management mechanism]

As the figure shows, the Actor can be any component in the Kubernetes system (including a custom component); when a noteworthy event occurs inside the component, it records that event through the EventRecorder.

Note: the figure is taken from the book 《Kubernetes源码剖析》, which targets Kubernetes 1.14.0. Depending on the Kubernetes version you use, the methods invoked by the three EventBroadcaster components may differ slightly from the figure; this article analyzes the mechanism as of Kubernetes 1.21.7.
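
Concretely, the three roles are wired together like this (a minimal sketch; clientset, pod, and nodeName are placeholders for illustration, and a fuller example appears in section 5):

// Consumer side: create the broadcaster and attach two broadcasterWatchers,
// one that logs events and one that uploads them to the apiserver.
eventBroadcaster := record.NewBroadcaster()
eventBroadcaster.StartLogging(klog.Infof)
eventBroadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: clientset.CoreV1().Events("")})

// Producer side: create a recorder bound to this broadcaster and emit an event.
recorder := eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "my-controller"})
recorder.Eventf(pod, corev1.EventTypeNormal, "Scheduled", "Successfully assigned %s to %s", pod.Name, nodeName)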

3. The Event Resource Data Structure

All of the data structures below come from the k8s.io/api/core/v1/types.go file:

  • The Event struct:

// Event is a report of an event somewhere in the cluster.
type Event struct {
	metav1.TypeMeta `json:",inline"`
	// Standard object metadata.
	metav1.ObjectMeta `json:"metadata" protobuf:"bytes,1,opt,name=metadata"`
	// The object that this event is about (the resource object that triggered the event).
	InvolvedObject ObjectReference `json:"involvedObject" protobuf:"bytes,2,opt,name=involvedObject"`
	// This should be a short, machine-understandable string that gives the reason
	// for the transition into the object's current status.
	// +optional
	Reason string `json:"reason,omitempty" protobuf:"bytes,3,opt,name=reason"`
	// A human-readable description of the status of this operation.
	// +optional
	Message string `json:"message,omitempty" protobuf:"bytes,4,opt,name=message"`
	// The component reporting this event. Should be a short machine-understandable string.
	// +optional
	Source EventSource `json:"source,omitempty" protobuf:"bytes,5,opt,name=source"`
	// The time at which the event was first recorded. (Time of server receipt is in TypeMeta.)
	// +optional
	FirstTimestamp metav1.Time `json:"firstTimestamp,omitempty" protobuf:"bytes,6,opt,name=firstTimestamp"`
	// The time at which the most recent occurrence of this event was recorded.
	// +optional
	LastTimestamp metav1.Time `json:"lastTimestamp,omitempty" protobuf:"bytes,7,opt,name=lastTimestamp"`
	// The number of times this event has occurred.
	// +optional
	Count int32 `json:"count,omitempty" protobuf:"varint,8,opt,name=count"`
	// Type of this event (Normal, Warning); new types may be added in the future.
	// +optional
	Type string `json:"type,omitempty" protobuf:"bytes,9,opt,name=type"`
	// Time when this event was first observed.
	// +optional
	EventTime metav1.MicroTime `json:"eventTime,omitempty" protobuf:"bytes,10,opt,name=eventTime"`
	// Data about the event series this event represents, or nil if it is a singleton event.
	// +optional
	Series *EventSeries `json:"series,omitempty" protobuf:"bytes,11,opt,name=series"`
	// What action was taken/failed regarding the involved object.
	// +optional
	Action string `json:"action,omitempty" protobuf:"bytes,12,opt,name=action"`
	// Optional secondary object for more complex actions.
	// +optional
	Related *ObjectReference `json:"related,omitempty" protobuf:"bytes,13,opt,name=related"`
	// Name of the controller that emitted this event, e.g. `kubernetes.io/kubelet`.
	// +optional
	ReportingController string `json:"reportingComponent" protobuf:"bytes,14,opt,name=reportingComponent"`
	// ID of the controller instance, e.g. `kubelet-xyzf`.
	// +optional
	ReportingInstance string `json:"reportingInstance" protobuf:"bytes,15,opt,name=reportingInstance"`
}
  • The ObjectReference struct (the type of InvolvedObject), which identifies the resource object directly associated with the Event:

type ObjectReference struct {
	Kind string `json:"kind,omitempty" protobuf:"bytes,1,opt,name=kind"`
	Namespace string `json:"namespace,omitempty" protobuf:"bytes,2,opt,name=namespace"`
	Name string `json:"name,omitempty" protobuf:"bytes,3,opt,name=name"`
	UID types.UID `json:"uid,omitempty" protobuf:"bytes,4,opt,name=uid,casttype=k8s.io/apimachinery/pkg/types.UID"`
	APIVersion string `json:"apiVersion,omitempty" protobuf:"bytes,5,opt,name=apiVersion"`
	ResourceVersion string `json:"resourceVersion,omitempty" protobuf:"bytes,6,opt,name=resourceVersion"`
	FieldPath string `json:"fieldPath,omitempty" protobuf:"bytes,7,opt,name=fieldPath"`
}
  • The EventSource struct, which identifies the component directly associated with the Event (the component that reported it):
type EventSource struct {
	// Component from which the event is generated.
	// +optional
	Component string `json:"component,omitempty" protobuf:"bytes,1,opt,name=component"`
	// Node name on which the event is generated.
	// +optional
	Host string `json:"host,omitempty" protobuf:"bytes,2,opt,name=host"`
}
  • Constants in types.go defining the two Event types:

const (
	// Information only and will not cause any problems
	EventTypeNormal string = "Normal"
	// These events are to warn that something might go wrong
	EventTypeWarning string = "Warning"
)

4. EventBroadcaster Source Code Analysis

4.1 EventRecorder: Recording Events (Event Producer/Event Recorder)

  • EventRecorder

The EventRecorder interface is defined in client-go's tools/record/event.go:

// EventRecorder knows how to record events on behalf of an EventSource.
type EventRecorder interface {
	// Event records an event that just happened.
	Event(object runtime.Object, eventtype, reason, message string)

	// Eventf is just like Event, but the message is formatted with fmt.Sprintf.
	Eventf(object runtime.Object, eventtype, reason, messageFmt string, args ...interface{})

	// AnnotatedEventf is just like Eventf, but with an added annotations field.
	AnnotatedEventf(object runtime.Object, annotations map[string]string, eventtype, reason, messageFmt string, args ...interface{})
}

EventRecorder defines three methods that help Kubernetes components record Events. Event records an event that just happened; Eventf formats the event message with fmt.Sprintf; AnnotatedEventf behaves like Eventf but additionally attaches an annotations field.
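
For illustration, the three methods might be called like this from a controller (recorder, pod, and image are placeholders; the event types and reasons are arbitrary):

// Event: a fixed message string.
recorder.Event(pod, corev1.EventTypeWarning, "FailedMount", "Unable to mount volume")

// Eventf: the message is built with fmt.Sprintf.
recorder.Eventf(pod, corev1.EventTypeNormal, "Pulled", "Successfully pulled image %q", image)

// AnnotatedEventf: like Eventf, but the resulting Event object also carries annotations.
recorder.AnnotatedEventf(pod, map[string]string{"controller": "demo"}, corev1.EventTypeNormal, "Synced", "Pod %s synced", pod.Name)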

  • recorderImpl

The recorderImpl struct is the implementation of the EventRecorder interface:

// client-go/tools/record/event.go
type recorderImpl struct {
	// the registry of Kubernetes resource types
	scheme *runtime.Scheme
	// the component reporting events, e.g. kubelet or kube-controller-manager
	source v1.EventSource
	// embedded consumer-side broadcaster (anonymous field)
	*watch.Broadcaster
	clock clock.Clock
}

// NewRecorder instantiates recorderImpl on top of the consumer implementation
// eventBroadcasterImpl, establishing the link between producer and consumer.
func (e *eventBroadcasterImpl) NewRecorder(scheme *runtime.Scheme, source v1.EventSource) EventRecorder {
	return &recorderImpl{scheme, source, e.Broadcaster, clock.RealClock{}}
}

func (recorder *recorderImpl) Event(object runtime.Object, eventtype, reason, message string) {
	recorder.generateEvent(object, nil, eventtype, reason, message)
}

func (recorder *recorderImpl) Eventf(object runtime.Object, eventtype, reason, messageFmt string, args ...interface{}) {
	recorder.Event(object, eventtype, reason, fmt.Sprintf(messageFmt, args...))
}

func (recorder *recorderImpl) AnnotatedEventf(object runtime.Object, annotations map[string]string, eventtype, reason, messageFmt string, args ...interface{}) {
	recorder.generateEvent(object, annotations, eventtype, reason, fmt.Sprintf(messageFmt, args...))
}

Because recorderImpl embeds a pointer to the Broadcaster struct from apimachinery/pkg/watch/mux.go, it can call the methods Broadcaster implements.
recorderImpl implements the three methods defined by the EventRecorder interface. Taking Event as the example, the call chain is:
recorderImpl.Event → recorderImpl.generateEvent → Broadcaster.ActionOrDrop:

// client-go/tools/record/event.go
func (recorder *recorderImpl) generateEvent(object runtime.Object, annotations map[string]string, eventtype, reason, message string) {
	// build the reference to the resource object the event is about
	ref, err := ref.GetReference(recorder.scheme, object)
	if err != nil {
		klog.Errorf("Could not construct reference to: '%#v' due to: '%v'. Will not report event: '%v' '%v' '%v'", object, err, eventtype, reason, message)
		return
	}

	// validate the event type; only Normal and Warning are currently supported
	if !util.ValidateEventType(eventtype) {
		klog.Errorf("Unsupported event type: '%v'", eventtype)
		return
	}

	// instantiate the Event
	event := recorder.makeEvent(ref, annotations, eventtype, reason, message)
	// set the component reporting the event
	event.Source = recorder.source

	// write the Event into m.incoming, completing the production step
	if sent := recorder.ActionOrDrop(watch.Added, event); !sent {
		klog.Errorf("unable to record event: too many queued events, dropped event %#v", event)
	}
}

The makeEvent method creates the Event resource instance:

// client-go/tools/record/event.go
func (recorder *recorderImpl) makeEvent(ref *v1.ObjectReference, annotations map[string]string, eventtype, reason, message string) *v1.Event {
	t := metav1.Time{Time: recorder.clock.Now()}
	namespace := ref.Namespace
	// if the associated resource object is cluster-scoped, the event is
	// created in the default namespace
	if namespace == "" {
		namespace = metav1.NamespaceDefault
	}
	return &v1.Event{
		ObjectMeta: metav1.ObjectMeta{
			Name:        fmt.Sprintf("%v.%x", ref.Name, t.UnixNano()),
			Namespace:   namespace,
			Annotations: annotations,
		},
		// the resource object this event is about
		InvolvedObject: *ref,
		Reason:         reason,
		Message:        message,
		FirstTimestamp: t,
		LastTimestamp:  t,
		Count:          1,
		Type:           eventtype,
	}
}

generateEvent then calls the ActionOrDrop method, which performs a non-blocking send of the event into the incoming channel:

// apimachinery/pkg/watch/mux.go
func (m *Broadcaster) ActionOrDrop(action EventType, obj runtime.Object) bool {
	select {
	case m.incoming <- Event{action, obj}: // buffer has room: event enqueued
		return true
	default: // buffer full: drop the event rather than block the producer
		return false
	}
}

4.2 EventBroadcaster: Broadcasting Events (Event Consumer/Event Broadcaster)

  • EventBroadcaster

The EventBroadcaster interface is also defined in client-go's tools/record/event.go:

// EventBroadcaster knows how to receive events and send them to any EventSink, watcher, or log.
type EventBroadcaster interface {
  // StartEventWatcher starts sending events received from this EventBroadcaster to the given
  // event handler function. The return value can be ignored or used to stop recording, if
  // desired.
  StartEventWatcher(eventHandler func(*v1.Event)) watch.Interface

  // StartRecordingToSink starts sending events received from this EventBroadcaster to the given
  // sink. The return value can be ignored or used to stop recording, if desired.
  StartRecordingToSink(sink EventSink) watch.Interface

  // StartLogging starts sending events received from this EventBroadcaster to the given logging
  // function. The return value can be ignored or used to stop recording, if desired.
  StartLogging(logf func(format string, args ...interface{})) watch.Interface

  // StartStructuredLogging starts sending events received from this EventBroadcaster to the structured
  // logging function. The return value can be ignored or used to stop recording, if desired.
  StartStructuredLogging(verbosity klog.Level) watch.Interface

  // NewRecorder returns an EventRecorder that can be used to send events to this EventBroadcaster
  // with the event source set to the given event source.
  NewRecorder(scheme *runtime.Scheme, source v1.EventSource) EventRecorder

  // Shutdown shuts down the broadcaster
  Shutdown()
}

As the Event consumer and event broadcaster, EventBroadcaster consumes the events recorded by the EventRecorder and distributes them to all currently connected broadcasterWatchers.

The eventBroadcasterImpl struct is its implementation:

type eventBroadcasterImpl struct {
	*watch.Broadcaster
	sleepDuration time.Duration
	options       CorrelatorOptions
}

The eventBroadcasterImpl struct likewise embeds a pointer to the Broadcaster struct, so it too can call the methods Broadcaster implements.

The Broadcaster struct is defined in apimachinery's pkg/watch/mux.go:

type Broadcaster struct {
	watchers     map[int64]*broadcasterWatcher // all connected watchers, keyed by id
	nextWatcher  int64
	distributing sync.WaitGroup

	incoming chan Event // events produced by the EventRecorder land here
	stopped  chan struct{}

	// How large to make watcher's channel.
	watchQueueLength int
	// If one of the watch channels is full, don't wait for it to become empty.
	// Instead just deliver it to the watchers that do have space in their
	// channels and move on to the next event.
	// It's more fair to do this on a per-watcher basis than to do it on the
	// "incoming" channel, which would allow one slow watcher to prevent all
	// other watchers from getting new events.
	fullChannelBehavior FullChannelBehavior
}

client-go's tools/record/event.go provides the function that instantiates eventBroadcasterImpl:

// Creates a new event broadcaster.
func NewBroadcaster() EventBroadcaster {
	return &eventBroadcasterImpl{
		Broadcaster:   watch.NewLongQueueBroadcaster(maxQueuedEvents, watch.DropIfChannelFull),
		sleepDuration: defaultSleepDuration,
	}
}
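
For reference, the queue length and sleep duration used here, along with the retry limit used later by recordToSink, are package-level values in the same file (values as of client-go v0.21.x):

// client-go/tools/record/event.go
const maxTriesPerEvent = 12 // how many times recordToSink retries a failed write

var defaultSleepDuration = 10 * time.Second // base interval between retries

const maxQueuedEvents = 1000 // capacity of the Broadcaster's incoming channel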

The embedded Broadcaster is actually created by the NewLongQueueBroadcaster function in apimachinery/pkg/watch/mux.go:

func NewLongQueueBroadcaster(queueLength int, fullChannelBehavior FullChannelBehavior) *Broadcaster {
  m := &Broadcaster{
    watchers:            map[int64]*broadcasterWatcher{},
    incoming:            make(chan Event, queueLength),
    stopped:             make(chan struct{}),
    watchQueueLength:    queueLength,
    fullChannelBehavior: fullChannelBehavior,
  }
  m.distributing.Add(1)
  go m.loop()
  return m
}

When it is created, it starts a goroutine internally that watches m.incoming via the m.loop method:

// k8s.io/apimachinery/pkg/watch/mux.go
func (m *Broadcaster) loop() {
	// drain the m.incoming channel
	for event := range m.incoming {
		// internal marker events carry a function to run inside this goroutine
		// (used to serialize things like watcher registration with event delivery)
		if event.Type == internalRunFunctionMarker {
			event.Object.(functionFakeRuntimeObject)()
			continue
		}
		// distribute the event to all watchers
		m.distribute(event)
	}
	m.closeAll()
	m.distributing.Done()
}

Each event read from the channel is then fanned out to all connected broadcasterWatchers by the m.distribute method:

func (m *Broadcaster) distribute(event Event) {
	if m.fullChannelBehavior == DropIfChannelFull {
		for _, w := range m.watchers {
			select {
			case w.result <- event:
			case <-w.stopped:
			default: // the watcher's queue is full: drop the event, don't block
			}
		}
	} else {
		for _, w := range m.watchers {
			select {
			case w.result <- event:
			case <-w.stopped:
			}
		}
	}
}

Distribution happens in one of two modes: non-blocking and blocking.
The non-blocking mode (the default) is selected by the DropIfChannelFull flag. In that branch the select statement has a default case, making the send non-blocking: when a watcher's w.result buffer is full, the event is lost.
The blocking mode is selected by the WaitIfChannelFull flag. In that branch the select statement has no default case, so when w.result is full, distribution blocks and waits.
Dropping events is acceptable here: as a cluster grows, more and more events are reported, and every upload means reads and writes against etcd, which puts pressure on the etcd cluster. Losing an event does not affect the cluster's normal operation, so the non-blocking mode tolerates event loss.
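
Note that client-go's NewBroadcaster shown above hard-codes watch.DropIfChannelFull; if you need the blocking behavior, you can build on the apimachinery Broadcaster directly, e.g.:

// a Broadcaster whose distribute() blocks (instead of dropping) when a
// watcher's result channel is full; one slow watcher can then stall delivery
b := watch.NewBroadcaster(100, watch.WaitIfChannelFull)
defer b.Shutdown()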

4.3 broadcasterWatcher: Processing Events
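
Before looking at the handlers, here is the broadcasterWatcher struct itself (apimachinery/pkg/watch/mux.go); each watcher is essentially a buffered result channel plus stop plumbing:

// broadcasterWatcher handles a single watcher of a broadcaster
type broadcasterWatcher struct {
	result  chan Event    // distribute() delivers events into this channel
	stopped chan struct{} // closed once the watcher is stopped
	stop    sync.Once
	id      int64 // key of this watcher in Broadcaster.watchers
	m       *Broadcaster
}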

eventBroadcasterImpl implements three ways of processing Events:

(1) StartLogging: write events to the log.

func (e *eventBroadcasterImpl) StartLogging(logf func(format string, args ...interface{})) watch.Interface {
    return e.StartEventWatcher(
        func(e *v1.Event) {
            logf("Event(%#v): type: '%v' reason: '%v' %v", e.InvolvedObject, e.Type, e.Reason, e.Message)
        })
}

(2) StartStructuredLogging: write events to the structured log.

func (e *eventBroadcasterImpl) StartStructuredLogging(verbosity klog.Level) watch.Interface {
	return e.StartEventWatcher(
		func(e *v1.Event) {
			klog.V(verbosity).InfoS("Event occurred", "object", klog.KRef(e.InvolvedObject.Namespace, e.InvolvedObject.Name), "kind", e.InvolvedObject.Kind, "apiVersion", e.InvolvedObject.APIVersion, "type", e.Type, "reason", e.Reason, "message", e.Message)
		})
}

(3) StartRecordingToSink: store events in the given sink.

func (e *eventBroadcasterImpl) StartRecordingToSink(sink EventSink) watch.Interface {
    eventCorrelator := NewEventCorrelatorWithOptions(e.options)
    return e.StartEventWatcher(
        func(event *v1.Event) {
            recordToSink(sink, event, eventCorrelator, e.sleepDuration)
        })
}

StartLogging and StartStructuredLogging both simply print the event as a log line. The method to focus on is StartRecordingToSink, which uploads the event data to the apiserver.
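
The sink parameter is the EventSink interface, also defined in client-go/tools/record/event.go; the EventSinkImpl used in practice (see section 5) simply delegates these calls to a clientset's events client:

// EventSink knows how to store events (client-go's EventSinkImpl implements it).
type EventSink interface {
	Create(event *v1.Event) (*v1.Event, error)
	Update(event *v1.Event) (*v1.Event, error)
	Patch(oldEvent *v1.Event, data []byte) (*v1.Event, error)
}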

StartRecordingToSink relies on the StartEventWatcher method:

func (e *eventBroadcasterImpl) StartEventWatcher(eventHandler func(*v1.Event)) watch.Interface {
	watcher := e.Watch()
	go func() {
		defer utilruntime.HandleCrash()
		for watchEvent := range watcher.ResultChan() {
			event, ok := watchEvent.Object.(*v1.Event)
			if !ok {
				continue
			}
			// invoke the handler that was passed in
			eventHandler(event)
		}
	}()
	return watcher
}

StartRecordingToSink calls StartEventWatcher, which registers a watcher via e.Watch() and then, in a goroutine, reads from watcher.ResultChan(), the broadcasterWatcher's result channel; the data in that channel is exactly what the Broadcaster's distribute method delivers.
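
For completeness, e.Watch() registers a new broadcasterWatcher with the Broadcaster. Slightly simplified from apimachinery/pkg/watch/mux.go (details vary a little between versions), the registration is funneled through the incoming channel as one of the internalRunFunctionMarker events we saw loop handle, so adding a watcher is serialized with event delivery:

// apimachinery/pkg/watch/mux.go (simplified)
func (m *Broadcaster) Watch() Interface {
	var w *broadcasterWatcher
	m.blockQueue(func() { // executed inside the loop goroutine
		id := m.nextWatcher
		m.nextWatcher++
		w = &broadcasterWatcher{
			result:  make(chan Event, m.watchQueueLength),
			stopped: make(chan struct{}),
			id:      id,
			m:       m,
		}
		m.watchers[id] = w
	})
	return w
}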

Each event finally reaches the handler that was passed in, recordToSink:

func recordToSink(sink EventSink, event *v1.Event, eventCorrelator *EventCorrelator, sleepDuration time.Duration) {
	// work on a copy so the original event is not modified
	eventCopy := *event
	event = &eventCopy
	// preprocess the event: aggregate it with similar events
	result, err := eventCorrelator.EventCorrelate(event)
	if err != nil {
		utilruntime.HandleError(err)
	}
	if result.Skip {
		return
	}
	tries := 0
	for {
		// send the event to the apiserver
		if recordEvent(sink, result.Event, result.Patch, result.Event.Count > 1, eventCorrelator) {
			break
		}
		tries++
		if tries >= maxTriesPerEvent {
			klog.Errorf("Unable to write event '%#v' (retry limit exceeded!)", event)
			break
		}
		// randomize the first retry to avoid thundering herds of retrying events
		if tries == 1 {
			time.Sleep(time.Duration(float64(sleepDuration) * rand.Float64()))
		} else {
			time.Sleep(sleepDuration)
		}
	}
}

recordToSink first calls the EventCorrelate method to preprocess the event, aggregating identical events to keep a flood of events from putting extra pressure on etcd and the apiserver; if too many similar events come in, result.Skip comes back true and the event is silently dropped;

Next it calls the recordEvent method to send the event to the apiserver, retrying up to maxTriesPerEvent times (12 by default) with a sleep between attempts (sleepDuration, 10 seconds by default; the first retry interval is randomized).

Let's look at the EventCorrelate and recordEvent methods in turn.

  • EventCorrelate
Defined in client-go/tools/record/events_cache.go:
// client-go/tools/record/events_cache.go
func (c *EventCorrelator) EventCorrelate(newEvent *v1.Event) (*EventCorrelateResult, error) {
  if newEvent == nil {
    return nil, fmt.Errorf("event is nil")
  }
  aggregateEvent, ckey := c.aggregator.EventAggregate(newEvent)
  observedEvent, patch, err := c.logger.eventObserve(aggregateEvent, ckey)
  if c.filterFunc(observedEvent) {
    return &EventCorrelateResult{Skip: true}, nil
  }
  return &EventCorrelateResult{Event: observedEvent, Patch: patch}, err
}

EventCorrelate calls EventAggregate and eventObserve to aggregate the event, and filterFunc, which points at the spamFilter.Filter method, to rate-limit it.
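
The EventCorrelateResult returned here is a small struct (client-go/tools/record/events_cache.go):

// EventCorrelateResult is the result of the EventCorrelator
type EventCorrelateResult struct {
	Event *v1.Event // the event to record
	Patch []byte    // the patch to apply when updating an existing event
	Skip  bool      // whether the event should be dropped (e.g. rate limited)
}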

func (e *EventAggregator) EventAggregate(newEvent *v1.Event) (*v1.Event, string) {
	now := metav1.NewTime(e.clock.Now())
	var record aggregateRecord
	eventKey := getEventKey(newEvent)
	aggregateKey, localKey := e.keyFunc(newEvent)

	e.Lock()
	defer e.Unlock()
	// look up an existing record for this aggregate key in the cache
	value, found := e.cache.Get(aggregateKey)
	if found {
		record = value.(aggregateRecord)
	}
	// maxIntervalInSeconds defaults to 600s; if the cached record is older than
	// that, start a fresh one. If no record was found, lastTimestamp is the zero
	// value, so a fresh record is created as well.
	maxInterval := time.Duration(e.maxIntervalInSeconds) * time.Second
	interval := now.Time.Sub(record.lastTimestamp.Time)
	if interval > maxInterval {
		record = aggregateRecord{localKeys: sets.NewString()}
	}
	record.localKeys.Insert(localKey)
	record.lastTimestamp = now
	// re-add the record so it moves to the front of the LRU cache
	e.cache.Add(aggregateKey, record)

	// below the aggregation threshold: return the event unchanged
	if uint(record.localKeys.Len()) < e.maxEvents {
		return newEvent, eventKey
	}

	// don't let the localKeys set grow without bound
	record.localKeys.PopAny()

	eventCopy := &v1.Event{
		ObjectMeta: metav1.ObjectMeta{
			Name:      fmt.Sprintf("%v.%x", newEvent.InvolvedObject.Name, now.UnixNano()),
			Namespace: newEvent.Namespace,
		},
		Count:          1,
		FirstTimestamp: now,
		InvolvedObject: newEvent.InvolvedObject,
		LastTimestamp:  now,
		// aggregate the message via messageFunc
		Message:        e.messageFunc(newEvent),
		Type:           newEvent.Type,
		Reason:         newEvent.Reason,
		Source:         newEvent.Source,
	}
	return eventCopy, aggregateKey
}

EventAggregate handles several cases. It first looks in the cache for an existing aggregateRecord; if none is found, or the cached one is older than the maximum interval, a fresh aggregateRecord is created as a side effect of the interval check.

Because the cache is an LRU cache, the record is then re-added so it moves to the front.

Next it checks whether the number of distinct local keys (messages) has reached the threshold; if not, the event is returned unchanged, without aggregation.

If the threshold has been reached, the incoming Event is copied and its Message is aggregated through the messageFunc.
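
The default keyFunc and messageFunc make "similar events" concrete (paraphrased from client-go/tools/record/events_cache.go; the exact field list differs slightly across versions):

// EventAggregatorByReasonFunc (simplified): events share an aggregateKey when
// everything except the message matches; the message itself is the localKey.
func EventAggregatorByReasonFunc(event *v1.Event) (string, string) {
	return strings.Join([]string{
		event.Source.Component,
		event.Source.Host,
		event.InvolvedObject.Kind,
		event.InvolvedObject.Namespace,
		event.InvolvedObject.Name,
		string(event.InvolvedObject.UID),
		event.InvolvedObject.APIVersion,
		event.Type,
		event.Reason,
	}, ""), event.Message
}

// EventAggregatorByReasonMessageFunc returns the message of the aggregate event.
func EventAggregatorByReasonMessageFunc(event *v1.Event) string {
	return "(combined from similar events): " + event.Message
}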

  • eventObserve
func (e *eventLogger) eventObserve(newEvent *v1.Event, key string) (*v1.Event, []byte, error) {
	var (
		patch []byte
		err   error
	)
	eventCopy := *newEvent
	event := &eventCopy

	e.Lock()
	defer e.Unlock()
	// check whether this event already has a cached observation
	lastObservation := e.lastEventObservationFromCache(key)
	// a count > 0 means it exists: reuse its identity and bump the Count
	if lastObservation.count > 0 {
		event.Name = lastObservation.name
		event.ResourceVersion = lastObservation.resourceVersion
		event.FirstTimestamp = lastObservation.firstTimestamp
		event.Count = int32(lastObservation.count) + 1

		eventCopy2 := *event
		eventCopy2.Count = 0
		eventCopy2.LastTimestamp = metav1.NewTime(time.Unix(0, 0))
		eventCopy2.Message = ""

		// build a patch that updates Count, LastTimestamp, and Message
		newData, _ := json.Marshal(event)
		oldData, _ := json.Marshal(eventCopy2)
		patch, err = strategicpatch.CreateTwoWayMergePatch(oldData, newData, event)
	}

	// finally, refresh the cached record
	e.cache.Add(
		key,
		eventLog{
			count:           uint(event.Count),
			firstTimestamp:  event.FirstTimestamp,
			name:            event.Name,
			resourceVersion: event.ResourceVersion,
		},
	)
	return event, patch, err
}

eventObserve looks up the cached record for the event, increments its count, and writes the updated record back to the cache.

  • Filter
func (f *EventSourceObjectSpamFilter) Filter(event *v1.Event) bool {
	var record spamRecord
	eventKey := getSpamKey(event)

	f.Lock()
	defer f.Unlock()
	value, found := f.cache.Get(eventKey)
	if found {
		record = value.(spamRecord)
	}

	// lazily initialize a token bucket rate limiter for this key
	if record.rateLimiter == nil {
		record.rateLimiter = flowcontrol.NewTokenBucketRateLimiterWithClock(f.qps, f.burst, f.clock)
	}
	// filter the event if no token is available
	filter := !record.rateLimiter.TryAccept()

	// update the cache
	f.cache.Add(eventKey, record)

	return filter
}

Filter mainly acts as a rate limiter, using a token bucket to decide which events to filter out.
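
For reference, the default token bucket parameters (client-go/tools/record/events_cache.go, as of v0.21.x):

const (
	// by default, allow a source to send 25 events about an object
	// but control the refill rate to 1 new event every 5 minutes
	defaultSpamBurst = 25
	defaultSpamQPS   = 1. / 300.
)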

  • recordEvent

func recordEvent(sink EventSink, event *v1.Event, patch []byte, updateExistingEvent bool, eventCorrelator *EventCorrelator) bool {
	var newEvent *v1.Event
	var err error
	// update the existing event
	if updateExistingEvent {
		newEvent, err = sink.Patch(event, patch)
	}
	// create a new event (also the fallback when the event to patch no longer exists)
	if !updateExistingEvent || (updateExistingEvent && util.IsKeyNotFoundError(err)) {
		event.ResourceVersion = ""
		newEvent, err = sink.Create(event)
	}
	if err == nil {
		eventCorrelator.UpdateState(newEvent)
		return true
	}
	// on known errors, don't retry; otherwise return false so the caller retries
	switch err.(type) {
	case *restclient.RequestConstructionError:
		klog.Errorf("Unable to construct event '%#v': '%v' (will not retry!)", event, err)
		return true
	case *errors.StatusError:
		if errors.IsAlreadyExists(err) {
			klog.V(5).Infof("Server rejected event '%#v': '%v' (will not retry!)", event, err)
		} else {
			klog.Errorf("Server rejected event '%#v': '%v' (will not retry!)", event, err)
		}
		return true
	case *errors.UnexpectedObjectError:
	default:
	}
	klog.Errorf("Unable to write event: '%v' (may retry after sleeping)", err)
	return false
}

recordEvent decides, based on the eventCorrelator's result, whether to create a new event or update an existing one, and uses the outcome of the request to decide whether the caller needs to retry.

5. Usage Example

const (
	// successSynced is used as part of the Event 'reason' when a User resource is synced
	successSynced = "Synced"
	// messageResourceSynced is the Event message used when a User resource is synced successfully
	messageResourceSynced = "User synced successfully"
)

eventBroadcaster := record.NewBroadcaster()
eventBroadcaster.StartLogging(klog.Infof)
eventBroadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: k8sClient.CoreV1().Events("")})
recorder := eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: controllerName})

// later, e.g. in the controller's sync handler, record an event against the User object:
recorder.Event(user, corev1.EventTypeNormal, successSynced, messageResourceSynced)

6. Summary

Having walked through the whole events handling flow, let's recap the complete pipeline:

  1. First, an EventBroadcaster object is initialized, which in turn initializes a Broadcaster object and starts a loop goroutine that receives all events and broadcasts them;

  2. Then an EventRecorder object is created via the EventBroadcaster's NewRecorder() method; the EventRecorder generates events and sends them into the Broadcaster's channel queue with the ActionOrDrop() method;

  3. The EventBroadcaster's StartLogging, StartStructuredLogging, and StartRecordingToSink methods each register their own handler through the shared StartEventWatcher method and apply their own logic;

  4. The Broadcaster delivers every event to each registered watcher; the handler that StartRecordingToSink installs, recordToSink, caches, filters, and aggregates the events it receives and then sends them to the apiserver, which persists them to etcd.

