In day-to-day ops and ops-development work, the systems we build generally fall into two camps: agent-based and agentless. This article focuses on some things to watch out for on the agent-based side.
**Agent-based**: a resident agent process runs on every host.

- Pros: rich local collection and execution capability, with no central push bottleneck.
- Cons: the agent must be deployed, upgraded, and watched on every machine, and it consumes host resources.

**Agentless (non-intrusive)**: the typical example is Ansible over SSH.

- Pros: nothing to install or maintain on the target hosts.
- Cons: throughput is limited by the performance of the central controller.

## Classic client-side cases
The agent's code should stay lean and avoid excessive resource consumption on the host.
For monitoring the agent's own resource usage we can use Prometheus's client_golang, which by default exports the process's CPU time, open file descriptors, memory, and similar metrics, helping us pinpoint resource consumption:
```
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 38913.32
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 6.815744e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 15
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.4659584e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.59350253732e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.201352704e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes -1
```
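Because these metrics are exposed in Prometheus's plain-text exposition format, any tool can consume them. As a tiny illustration (stdlib only; the helper name is my own, and a real consumer should use an existing client library), a scrape response like the one above can be reduced to a `{name: value}` dict:

```python
def parse_prometheus_text(payload: str) -> dict:
    """Parse the Prometheus text format into {metric: value}.

    Only handles the simple, un-labeled samples shown above.
    """
    metrics = {}
    for line in payload.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 15
process_resident_memory_bytes 1.4659584e+07
"""
print(parse_prometheus_text(sample)["process_open_fds"])  # 15.0
```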
Example: we now need to upgrade the agent from v1.0 to v1.1.
Option 1: ansible-playbook. For background you can refer to my earlier article, "Implementing a fast dnsdist hijacking tool with ansible-playbook".
With the Python code below we can wrap running a playbook into a reusable class; to use it, just pass in an IP list, the playbook YAML path, and a dict of extra variables:
Ahem: the classic problem with this approach is that you are limited by the performance of a single Ansible controller (many of us have been tormented by this). Of course, you can shard a large IP list across multiple ansible servers and merge the results afterwards.
```python
t = PlaybookApi([ip], yaml_path, {"conf_dir": conf_dir, "bk_file_name": bk_file_name})
t.run()
```
```python
from collections import namedtuple

from ansible.parsing.dataloader import DataLoader
from ansible.vars import VariableManager
from ansible.inventory import Inventory
from ansible.utils.vars import load_extra_vars
from ansible.utils.vars import load_options_vars
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.plugins.callback import CallbackBase


class ResultsCollector(CallbackBase):
    """Callback plugin that collects per-host results by outcome."""

    def __init__(self, *args, **kwargs):
        super(ResultsCollector, self).__init__(*args, **kwargs)
        self.host_ok = {}
        self.host_unreachable = {}
        self.host_failed = {}

    def v2_runner_on_unreachable(self, result):
        self.host_unreachable[result._host.get_name()] = result

    def v2_runner_on_ok(self, result, *args, **kwargs):
        self.host_ok[result._host.get_name()] = result

    def v2_runner_on_failed(self, result, *args, **kwargs):
        self.host_failed[result._host.get_name()] = result


class PlaybookApi(PlaybookExecutor):
    def __init__(self, host_list, yaml_path, extra_vars):
        self.host_list = host_list
        self.yaml_path = yaml_path
        self.callback = ResultsCollector()
        self.extra_vars = extra_vars
        self.IpmiPlay()
        super(PlaybookApi, self).__init__(playbooks=[self.yaml_path],
                                          inventory=self.inventory,
                                          variable_manager=self.variable_manager,
                                          loader=self.loader,
                                          options=self.options,
                                          passwords={})
        self._tqm._stdout_callback = self.callback

    def IpmiPlay(self):
        Options = namedtuple('Options', ['listtags', 'listtasks', 'listhosts', 'syntax',
                                         'connection', 'module_path', 'forks', 'remote_user',
                                         'private_key_file', 'ssh_common_args', 'ssh_extra_args',
                                         'sftp_extra_args', 'scp_extra_args', 'become',
                                         'become_method', 'become_user', 'verbosity', 'check',
                                         'extra_vars'])
        self.options = Options(listtags=False, listtasks=False, listhosts=False, syntax=False,
                               connection='ssh', module_path=None, forks=10, remote_user='',
                               private_key_file=None, ssh_common_args='', ssh_extra_args='',
                               sftp_extra_args='', scp_extra_args='', become=True,
                               become_method='sudo', become_user='root', verbosity=3,
                               check=False, extra_vars={})
        self.loader = DataLoader()
        # create the variable manager, which will be shared throughout
        # the code, ensuring a consistent view of global variables
        variable_manager = VariableManager()
        variable_manager.extra_vars = load_extra_vars(loader=self.loader, options=self.options)
        variable_manager.options_vars = load_options_vars(self.options)
        self.variable_manager = variable_manager
        # create the inventory, and filter it based on the subset specified (if any)
        self.inventory = Inventory(loader=self.loader,
                                   variable_manager=self.variable_manager,
                                   host_list=self.host_list)
        self.variable_manager.set_inventory(self.inventory)
        self.variable_manager.extra_vars = self.extra_vars

    def get_result(self):
        self.results_raw = {'success': {}, 'failed': {}, 'unreachable': {}}
        for host, result in self.callback.host_ok.items():
            self.results_raw['success'][host] = result
        for host, result in self.callback.host_failed.items():
            self.results_raw['failed'][host] = result
        for host, result in self.callback.host_unreachable.items():
            self.results_raw['unreachable'][host] = result._result['msg']
        return self.results_raw


if __name__ == '__main__':
    h = ["127.0.0.1"]
    yaml = "systemd_stop.yaml"
    api = PlaybookApi(h, yaml, {"app": "falcon-judge"})
    api.run()
    res = api.get_result()
    for k, v in res.items():
        for kk, vv in v.items():
            print(kk, vv._result)
```
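For the shard-and-merge idea mentioned above, here is a minimal stdlib-only sketch (the helper names and the per-shard result shape are my own, modeled on `get_result()` above; dispatching each shard to a separate ansible server is left out):

```python
from itertools import islice

def shard(ips, n):
    """Split an IP list into n roughly equal chunks, one per ansible server."""
    k, r = divmod(len(ips), n)
    it = iter(ips)
    # the first r shards get one extra element
    return [list(islice(it, k + (i < r))) for i in range(n)]

def merge(results):
    """Merge per-shard result dicts like {'success': {...}, 'failed': {...}}."""
    merged = {"success": {}, "failed": {}, "unreachable": {}}
    for res in results:
        for state, hosts in res.items():
            merged[state].update(hosts)
    return merged

ips = ["10.0.0.%d" % i for i in range(1, 8)]
print(shard(ips, 3))  # chunks of sizes 3, 2, 2
```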
Option 2: agent self-upgrade. Let's use the falcon-agent code as the example (source: https://github.com/ning1875/falcon-plus/tree/master/modules/agent). The overall implementation flow:
PS: please forgive my chicken-scratch handwriting in the diagram.
## Implementation analysis
How do we restart the service once the new binary is in place? With systemd it is enough to send a TERM signal to our own process, i.e. kill our own pid, and systemd will restart the unit:
```go
pid := os.Getpid()
thisPro, _ := os.FindProcess(pid)
// send SIGTERM to ourselves (note: os.Kill would be SIGKILL and skip any cleanup)
thisPro.Signal(syscall.SIGTERM)
```
How the agent manages its version: it is declared in a const block.
```go
// changelog:
// 3.1.3: code refactor
// 3.1.4: bugfix ignore configuration
// 5.0.0: add a config switch for the /run endpoint; collect udp traffic; du a directory's size
// 5.1.0: stop using the checksum mechanism when syncing plugins
// 5.1.1: fix a crash when sending data to multiple transfers
// 5.1.2: ignore mount point when blocks=0
// 6.0.0: agent self-upgrade, plus some new monitoring metrics
// 6.0.1: agent collect level
// 6.0.2: add a per-core monitoring switch (off by default), per-core tag changed to core=core0x, add mem.available.percent
// 6.0.3: add sys.uptime
// 6.0.4: fix the cpu.iowait>100 bug
// 6.0.5: add process collection monitoring, 30s interval
// 6.0.6: adjust built-in collection intervals (disk io and tcp from 10s to 30s); report agent_version as an integer for the current version; drop dynamic monitoring methods
// 6.0.7: ntp supports chronyc; service-monitor rpc call interval changed to one minute
// 6.0.8: adjust metric scrape intervals, keep only cpu at 10s; fix the breakpoint issue
// 6.0.9: fix dfa/dfb block-device collection; fix an ss -s bug across versions
// 6.1.0: handle the hostname being changed on the machine by converting the ip to the nxx-xx-xx form
const (
	VERSION          = "6.1.0"
	COLLECT_INTERVAL = time.Second
	URL_CHECK_HEALTH = "url.check.health"
	NET_PORT_LISTEN  = "net.port.listen"
	DU_BS            = "du.bs"
	PROC_NUM         = "proc.num"
	UPTIME           = "sys.uptime"
)
```
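The upgrade controller has to decide whether the version in an agent's heartbeat is behind the target. A minimal sketch of such a comparison (my own helper for illustration, not the hbs implementation, which is in Go):

```python
def needs_upgrade(current: str, target: str) -> bool:
    """Compare dotted version strings numerically, e.g. '6.0.9' < '6.1.0'.

    Plain string comparison would get '6.0.10' vs '6.0.9' wrong,
    so each component is compared as an integer.
    """
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(current) < to_tuple(target)

print(needs_upgrade("6.0.9", "6.1.0"))  # True
```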
How does an administrator kick off an upgrade? Simply send an HTTP request to hbs to turn on the upgrade switch:
```shell
curl -X POST http://127.0.0.1:6031/agent/upgrade -d '{"wgeturl":"http://${your_cdn_addr}/file/open-falcon","version":"6.0.1","binfile_md5":"35ac8534c0b31237e844ef8ee2bb9b9e"}'
```
Drawbacks
The complete upgrade flow:

1. An HTTP request reaches hbs and turns on the upgrade switch.
2. hbs checks the version number in each agent's heartbeat, as well as its own upgrade queue.
3. hbs sends the upgrade command to qualifying agents.
4. The agent downloads the new binary using the URL and target version from the command (with backup and rollback logic).
5. After the agent verifies the file, it finds its own pid and sends itself a kill signal.
6. The agent exits and is pulled back up by systemd, completing the upgrade.
7. The version in the new heartbeat now checks out, so no further upgrade is triggered.
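The "download, verify, back up, swap" logic in step 4 can be sketched as follows (illustrative Python only; the real agent is written in Go, and the paths and function names here are made up):

```python
import hashlib
import os
import shutil

def md5sum(path: str) -> str:
    """md5 of a file, read in chunks so large binaries don't fill memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def swap_binary(new_bin: str, current_bin: str, expected_md5: str) -> bool:
    """Verify the downloaded binary, back up the old one, then swap.

    Returns False (leaving the current binary untouched) if the
    checksum does not match; the caller can retry or give up.
    """
    if md5sum(new_bin) != expected_md5:
        return False
    backup = current_bin + ".bak"
    shutil.copy2(current_bin, backup)  # keep a rollback copy
    os.replace(new_bin, current_bin)   # atomic on the same filesystem
    return True
```

After the swap, the agent would send itself the termination signal as shown earlier and let systemd restart it on the new binary.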
Operating procedure:

1. falcon-agent adds new collection metrics; after testing OK, tag the code with a new version number, e.g. 6.0.0 (the current one being 6.0.1).
2. Put the new binary under the download server's path: `wget http://${your_cdn_addr}/file/open-falcon/bin_6.0.1`
3. Send the upgrade HTTP request to hbs (there is a safety mechanism here: it can only be initiated from the hbs host itself).
4. Track upgrade progress through hbs's HTTP interface, which shows the versions currently reported in agent heartbeats: `curl -s http://localhost:6031/agentversions | python -m "json.tool"`
5. At the same time, watch the `agent_upgrade_set` set on the connected redis cluster: `redis-cli -h ip -p port -c smembers agent_upgrade_set` and `scard agent_upgrade_set`
6. So far, 2000 concurrent downloads can saturate the 10 GbE NIC of a single download nginx, at 1.24 GB/s.

## falcon-agent self-upgrade commands

```shell
# trigger an upgrade
curl -X POST http://127.0.0.1:6031/agent/upgrade -d '{"wgeturl":"http://${your_cdn_addr}/file/open-falcon","version":"6.0.1","binfile_md5":"35ac8534c0b31237e844ef8ee2bb9b9e"}'

# query the current upgrade arguments
curl -X GET http://127.0.0.1:6031/agent/upgrade/nowargs
{"msg":"success","data":{"type":0,"wgeturl":"http://${your_cdn_addr}/file/open-falcon","version":"6.0.1","binfile_md5":"35ac8534c0b31237e844ef8ee2bb9b9e","cfgfile_md5":""}}

# query agent versions from heartbeats
curl http://127.0.0.1:6031/agentversions
{"msg":"success","data":{"n3-021-225":"6.0.1"}}

# cancel the upgrade
curl -X DELETE http://127.0.0.1:6031/agent/upgrade
{"msg":"success","data":"upgrade cancelled successfully"}
```

The same POST can also be issued from an Ansible playbook via the uri module:

```yaml
uri:
  url: http://127.0.0.1:6031/agent/upgrade
  method: POST
  body: {"wgeturl":"http://${your_cdn_addr}/file/open-falcon","version":"6.0.2","binfile_md5":"f5c597f15e379a77d1e2ceeec7bd99a8"}
  status_code: 200
  body_format: json
```