Today, let's scrape product data from a local government procurement website. These product listings are already public, shown to any visitor; collecting them with a crawler just makes the data easier to browse and to analyze later.
Let's take a look!
Target site: http://hljcg.hlj.gov.cn/
The raw request data (URL, headers, and JSON payload):
url = 'http://hljcg.hlj.gov.cn/proxy/trade-service/mall/search/searchByParamFromEs'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36'
}
json_data = {
    "queryPage": {"platformId": 20, "pageSize": 28, "pageNum": 1},
    "orderType": "desc",
    "homeType": "10",
    "isAggregation": "true",
    "publishType": "1",
    "orderColumn": "saleCount",
    "cid": 1000033,
    "businessType": "1",
    "cids": []
}
import requests  # HTTP request module (third-party)
import pprint    # formatted output (built-in)

# json= serializes the payload and sets the Content-Type header automatically
response = requests.post(url=url, json=json_data, headers=headers)
json_data = response.json()  # decode the JSON response body into a dict
pprint.pprint(json_data)
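One robustness note: if the request is blocked or the endpoint changes, response.json() fails with an opaque decode error. A minimal defensive sketch, reusing the url, headers, and json_data defined above (the timeout value is my own addition, not part of the original code):

import requests

response = requests.post(url=url, json=json_data, headers=headers, timeout=10)
response.raise_for_status()  # raise an HTTPError for any 4xx/5xx status
data = response.json()       # only parse the body once we know the request succeeded
print(list(data['data'].keys()))  # peek at the top-level keys before digging deeper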
Extract each product record:
result_list = json_data['data']['itemList']['resultList']
# print(result_list)
for result in result_list:
    print(result)
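Each request returns a single page of 28 items (the pageSize in the payload). To collect more, a common pattern is to loop over pageNum. The sketch below assumes the server honors increasing pageNum values and returns an empty resultList past the last page; both assumptions should be checked against the live responses:

import requests

url = 'http://hljcg.hlj.gov.cn/proxy/trade-service/mall/search/searchByParamFromEs'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36'}

all_results = []
for page in range(1, 6):  # first five pages, as an example
    payload = {
        "queryPage": {"platformId": 20, "pageSize": 28, "pageNum": page},
        "orderType": "desc", "homeType": "10", "isAggregation": "true",
        "publishType": "1", "orderColumn": "saleCount",
        "cid": 1000033, "businessType": "1", "cids": [],
    }
    response = requests.post(url=url, json=payload, headers=headers, timeout=10)
    page_results = response.json()['data']['itemList']['resultList']
    if not page_results:  # assumed stop condition: an empty page means no more data
        break
    all_results.extend(page_results)

print(len(all_results), 'records collected')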
Inside the loop, pick the values out of each record by key:
skuName = result['skuName']    # product name
maxPrice = result['maxPrice']  # product price
pictureUrl = result['pictureUrl'].replace('\n', ' | ')  # product image URL(s); raw newlines are replaced with ' | '
print(skuName, maxPrice, pictureUrl)
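Direct indexing raises a KeyError if a listing happens to be missing a field. Whether any listings actually omit these fields is an assumption on my part, but a defensive variant using dict.get() costs little:

skuName = result.get('skuName', '')
maxPrice = result.get('maxPrice', '')
pictureUrl = (result.get('pictureUrl') or '').replace('\n', ' | ')  # tolerate a missing or None value
print(skuName, maxPrice, pictureUrl)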
Then append each row to a CSV file:

import csv  # data-saving module (built-in)

with open('商品数据.csv', mode='a', encoding='utf-8', newline='') as f:
    csv_write = csv.writer(f)
    csv_write.writerow([skuName, maxPrice, pictureUrl])
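Opening the file in append mode inside the loop works, but it reopens the file on every iteration and produces a CSV with no header. A tidier sketch, assuming result_list has been populated as above, opens the file once in write mode and emits a header row first:

import csv

with open('商品数据.csv', mode='w', encoding='utf-8', newline='') as f:
    csv_write = csv.writer(f)
    csv_write.writerow(['skuName', 'maxPrice', 'pictureUrl'])  # header row
    for result in result_list:
        csv_write.writerow([
            result['skuName'],
            result['maxPrice'],
            result['pictureUrl'].replace('\n', ' | '),
        ])

If the CSV will be opened in Excel, encoding='utf-8-sig' avoids garbled Chinese text.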
Finally, run the code to get the results.