Without further ado, straight to the code:
"""实战:起点中文网--万古最强宗师(限免)1802章爬取url: /info/1012369914#Catalog时间:/3/29版本:0.0.1作者:小川Class"""# 爬虫灵魂之一import urllib.request as ur# 自己加的py文件,提供多个User-Agent(呃,就是游览器标识)import user_agent# json相关的一系列操作,这里用到json.loadsimport jsonimport re# 模拟登录def getRequest(url):return ur.Request(url=url,headers={'User-Agent':user_agent.get_user_agent_pc(),'Cookie':'_qda_uuid=47e782b9-1ffd-0a5b-c3f3-f8cbbd765b01; e1=%7B%22pid%22%3A%22qd_P_mulu%22%2C%22eid%22%3A%22%22%7D; e2=%7B%22pid%22%3A%22qd_P_mulu%22%2C%22eid%22%3A%22%22%7D; _csrfToken=YuerC8fhugSeZ3tw0ca69PrsK58tXAKyCjbGHnG2; newstatisticUUID=1585397817_774925897; qdrs=0%7C3%7C0%7C0%7C1; showSectionCommentGuide=1; qdgd=1; rcr=1012369914; e1=%7B%22pid%22%3A%22qd_P_limitfree%22%2C%22eid%22%3A%22qd_E05%22%2C%22l1%22%3A5%7D; e2=%7B%22pid%22%3A%22qd_P_limitfree%22%2C%22eid%22%3A%22qd_E02%22%2C%22l1%22%3A5%7D; lrbc=1012369914%7C419439161%7C0; bc=1012369914'})# 代理IP# 我这没有加,大家随意,要钱。。。。。# url替换成自己在网上买的API;调用方法:response = getProxyOpener().open(request).read()def getProxyOpener():proxy_address = ur.urlopen('http://api./dynamic/get.html?order=d314e5e5e19b0dfd19762f98308114ba&sep=4').read().decode('utf-8').strip()proxy_handler = ur.ProxyHandler({'http':proxy_address})return ur.build_opener(proxy_handler)# 2个固定的套路,请求(request)响应(response),爬虫其实就这2个请求和响应。爬虫2步曲# url:响应回json数据,提取里面的章节名及其对应的id。# 提取目录各章节名及其跳转页面(id)directorys_request = getRequest('/ajax/book/category?_csrfToken=YuerC8fhugSeZ3tw0ca69PrsK58tXAKyCjbGHnG2&bookId=1012369914')directorys_response = ur.urlopen(directorys_request).read()# directorys_lists目录们的Big空列表 ,列表里每一个元素是一个字典{章节名,id}directorys_lists = []# 丰富Big列表# 第1[0]到50[-1]章for directory in range(0,50):directorys_lists.append({json.loads(directorys_response)['data']['vs'][0]['cs'][directory]['cN']:json.loads(directorys_response)['data']['vs'][0]['cs'][directory]['id']})# 第51[0]到161[-2]章for directory in 
range(0,111):directorys_lists.append({json.loads(directorys_response)['data']['vs'][1]['cs'][directory]['cN']:json.loads(directorys_response)['data']['vs'][1]['cs'][directory]['id']})# 第162[0]到1802[-1]章for directory in range(0,1649):directorys_lists.append({json.loads(directorys_response)['data']['vs'][2]['cs'][directory]['cN']:json.loads(directorys_response)['data']['vs'][2]['cs'][directory]['id']})# 各章节内容# 给各个章节内容取个空列表,收留它们吧。chapters = []for chapter in directorys_lists:title = list(chapter.keys())[0]id = list(chapter.values())[0]# 爬虫2步曲chapters_request = getRequest('/chapter/1012369914/' +str(id))chapters_response = ur.urlopen(chapters_request).read()# 正则表达式太长了,为它取一个变量吧。pattern = '<div class="read-content j_readContent">\n*\s*(.*?)\n*\s*</div>'# 提取,re.findall(正则表达式,被提取的字符串)content = re.findall(pattern, chapters_response.decode('utf-8'))[0]# 替换,re.sub(正则表达式,替换成的字符串,被匹配的字符串)content = re.sub('<p>', '\n', content)# 分布写入章节名(title)和章节名所对应的内容(content)with open('万古最强宗师.txt', 'a', encoding='utf8') as f:f.write(title)f.write(content)
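The script imports a custom `user_agent` module the author wrote, which hands back a random browser identifier per request. That file isn't shown in the post, so here is a minimal sketch of what it might look like; the module name and `get_user_agent_pc` match the post, but the UA strings are just illustrative examples:

```python
import random

# Hypothetical sketch of the author's user_agent.py helper.
# The UA strings below are example desktop identifiers, not the author's list.
USER_AGENTS_PC = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
    '(KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 '
    '(KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) '
    'Gecko/20100101 Firefox/74.0',
]

def get_user_agent_pc():
    """Return a random desktop User-Agent string."""
    return random.choice(USER_AGENTS_PC)
```

Rotating the User-Agent per request makes the crawler look less like a single script hammering the site.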
Now let's dissect it in detail.
Read carefully!!!
-------------------------------------------divider-------------------------------------------------
directorys_request = getRequest('/ajax/book/category?_csrfToken=YuerC8fhugSeZ3tw0ca69PrsK58tXAKyCjbGHnG2&bookId=1012369914')
directorys_response = ur.urlopen(directorys_request).read()
????? Where does this URL come from ???
Open the page /info/1012369914#Catalog and press F12. Since we're looking for data exchange, click the XHR filter under the Network tab to capture XHR requests precisely. We find that a request to /ajax/book/category?_csrfToken=YuerC8fhugSeZ3tw0ca69PrsK58tXAKyCjbGHnG2&bookId=1012369914 returns a response that is a JSON object containing every chapter name and its corresponding id. That is exactly the exchanged data we're after. Combining the id with the second URL below (it's right there) gets us the novel content we need.
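The post later walks that JSON with three hard-coded `for` loops, one per volume in `data['vs']`. A sketch of a more flexible way to flatten it, using a made-up sample that mimics the real response shape (`vs` → volumes, `cs` → chapters, `cN` → chapter name):

```python
import json

# Sample JSON imitating the catalog response's structure;
# the chapter names and ids here are invented for illustration.
sample_json = json.dumps({
    "data": {"vs": [
        {"cs": [{"cN": "Chapter 1", "id": 419000001},
                {"cN": "Chapter 2", "id": 419000002}]},
        {"cs": [{"cN": "Chapter 3", "id": 419000003}]},
    ]}
})

data = json.loads(sample_json)
# Flatten every volume's chapter list into [{chapter name: id}, ...],
# regardless of how many volumes there are or how long each one is.
directorys_lists = [
    {c['cN']: c['id']}
    for volume in data['data']['vs']
    for c in volume['cs']
]
```

This parses the response once and adapts automatically when the site adds chapters, instead of baking in the counts 50, 111 and 1649.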
-------------------------------------------divider-----------------------------------------
chapters_request = getRequest('/chapter/1012369914/' + str(id))
chapters_response = ur.urlopen(chapters_request).read()
????????? And where does this URL come from ??
One word: convenience (⊙o⊙)?
This way, every chapter's content is crawled through this one page (URL).
But bro, where does the URL come from ???
Try visiting: /chapter/1012369914/435915936
Dissection: '435915936' is the id corresponding to chapter 438 (where does it live? how do we get it? from the previous URL).
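Once a chapter page is fetched, the script pulls the text out of the `read-content` div with a regex and turns each `<p>` into a line break. A sketch of just that extraction step, run against a made-up HTML snippet (the div class matches the one the post targets; the paragraph text is invented):

```python
import re

# Made-up snippet imitating a chapter page's content div.
sample_html = """<div class="read-content j_readContent">
    <p>First paragraph.<p>Second paragraph.
</div>"""

# The regex is too long, so give it a variable (same pattern as the post).
pattern = r'<div class="read-content j_readContent">\n*\s*(.*?)\n*\s*</div>'
# Extract: re.findall(regex, string to search) -- take the first match.
content = re.findall(pattern, sample_html)[0]
# Replace: turn each opening <p> tag into a newline.
content = re.sub('<p>', '\n', content)
```

Note the non-greedy `(.*?)` stops at the first `</div>`; without `re.S`, `.` won't cross newlines, so this relies on the chapter text sitting on one line inside the div, as it does on the real pages.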
-------------------------------------------divider-------------------------------------------------
Summary
A crawler is a living thing; don't make it too rigid!
Crawler = request + response
Bye-bye!!!!
Think through the 3 for loops yourself.
Analyze the returned JSON data: {chapter name: the id corresponding to that chapter name}