Scraping target
Scraping tools
Windows 10, Python 3, Scrapy, BeautifulSoup
Data to collect
1. Site thumbnail
2. Site name
3. Site URL
4. Alexa rank
5. Baidu weight
6. Site description
7. Site score
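If these fields are later stored rather than just printed, it can help to group them into one record type. A minimal sketch using a stdlib dataclass — the field names here are my own, hypothetical choices, not part of the original project:

```python
from dataclasses import dataclass, asdict


@dataclass
class SiteRecord:
    """One row of the ranking list (hypothetical field names)."""
    thumbnail_url: str  # 1. site thumbnail
    name: str           # 2. site name
    url: str            # 3. site URL
    alexa_rank: str     # 4. Alexa rank
    baidu_weight: str   # 5. Baidu weight
    description: str    # 6. site description
    score: str          # 7. site score


# Example record built from made-up values
record = SiteRecord('http://example.com/t.png', 'Example', 'example.com',
                    '1234', '5', 'A sample site', '4.8')
print(asdict(record)['name'])
```

A dataclass (or a `scrapy.Item` with the same fields) makes it straightforward to hand each row to a pipeline later instead of printing it.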
Why scrape it
The idea was to use the top-site list to pick and register some .app domains (a new top-level domain), or to do a bit of data analysis and see which kinds of sites might have more of a future (wishful-thinking.gif).
Scraping code
Since I'm familiar with Scrapy, only the spider code is posted here; if you want the rest of the project code, just leave a comment.
- Spider code
```python
# -*- coding: utf-8 -*-
# @Time   : 2018/5/17 19:21
# @Author : 蛇崽
# @Email  : 643435675@QQ.com
# @File   : chinaztopSpider.py (overall site rankings)
import scrapy
from bs4 import BeautifulSoup


class ChinaztopSpider(scrapy.Spider):
    name = 'chinaztop'
    allowed_domains = ['top.chinaz.com']
    start_urls = ['http://top.chinaz.com/all/']
    count = 0

    def parse(self, response):
        soup = BeautifulSoup(response.body, 'lxml')
        li_list = soup.find('ul', class_='listCentent').find_all('li', class_='clearfix')
        for li in li_list:
            # Thumbnail and detail-page link sit inside the leftImg div
            left_img = li.find('div', class_='leftImg').find('img', class_='')['src']
            detail_site_info = li.find('div', class_='leftImg').find('a', class_='')['href']
            site_com = li.find('h3', class_='rightTxtHead').find('span', class_='col-gray').get_text()
            rtc_data_alexa = li.find('p', class_='RtCData').find('a', class_='').get_text()
            site_info = li.find('p', class_='RtCInfo').get_text()
            print(response.urljoin(left_img))
            print(response.urljoin(detail_site_info))
            print(site_com)
            print(rtc_data_alexa)
            print(site_info)
        # Look for the next page
        next_links = soup.find('div', class_='ListPageWrap').find_all('a')
        for next_link in next_links:
            text = next_link.get_text()
            print('next-link ===== ', text)
            if '>' in text:
                nt_link = response.urljoin(next_link['href'])
                print('nt_link ===== ', nt_link)
                self.count = self.count + 1
                print('============== pages crawled so far: ', self.count)
                yield scrapy.Request(nt_link, callback=self.parse)
```
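The spider relies on `response.urljoin` to turn the relative `href`/`src` values in the page into absolute URLs. Scrapy's `Response.urljoin` follows the same RFC 3986 resolution rules as the stdlib `urllib.parse.urljoin`, so the behavior can be checked offline (the relative paths below are illustrative):

```python
from urllib.parse import urljoin

base = 'http://top.chinaz.com/all/'

# A relative path resolves against the current page's directory
print(urljoin(base, 'index_2.html'))
# -> http://top.chinaz.com/all/index_2.html

# A root-relative path resolves against the host
print(urljoin(base, '/t.png'))
# -> http://top.chinaz.com/t.png
```

This is why the spider can pass the raw `next_link['href']` straight to `scrapy.Request` after joining, regardless of whether the site emits relative or root-relative links.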
For now the data is only printed; it will be revisited and stored properly later.
A screenshot of part of the crawl: