Miscellaneous Nutch notes
1. How to bypass a target site's robots.txt restrictions
Many sites allow only major search engines such as Baidu and Google to crawl them, and block other crawlers via robots.txt.
Nutch, of course, honors the robots protocol, but we can bypass the restriction by modifying the Nutch source code.
The relevant code (Nutch 1.5.1; other versions untested) is in
org.apache.nutch.fetcher.Fetcher, in the run method of its FetcherThread inner class.
Find the following lines and comment them out; that's all it takes:
if (!rules.isAllowed(fit.u)) {
  // unblock
  fetchQueues.finishFetchItem(fit, true);
  if (LOG.isDebugEnabled()) {
    LOG.debug("Denied by robots.txt: " + fit.url);
  }
  output(fit.url, fit.datum, null, ProtocolStatus.STATUS_ROBOTS_DENIED,
      CrawlDatum.STATUS_FETCH_GONE);
  reporter.incrCounter("FetcherStatus", "robots_denied", 1);
  continue;
}

2. Fetch-related configuration properties

The following properties (their defaults and descriptions come from conf/nutch-default.xml) control redirect handling and content truncation:

<property>
  <name>http.redirect.max</name>
  <value>2</value>
  <description>The maximum number of redirects the fetcher will follow when
  trying to fetch a page. If set to negative or 0, fetcher won't immediately
  follow redirected URLs, instead it will record them for later fetching.
  </description>
</property>
<property>
  <name>http.content.limit</name>
  <value>65536</value>
  <description>The length limit for downloaded content using the http://
  protocol, in bytes. If this value is nonnegative (>=0), content longer than
  it will be truncated; otherwise, no truncation at all. Do not confuse this
  setting with the file.content.limit setting.
  </description>
</property>

<property>
  <name>file.content.limit</name>
  <value>65536</value>
  <description>The length limit for downloaded content using the file://
  protocol, in bytes. If this value is nonnegative (>=0), content longer than
  it will be truncated; otherwise, no truncation at all. Do not confuse this
  setting with the http.content.limit setting.
  </description>
</property>
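Rather than editing nutch-default.xml directly, these values are normally overridden in conf/nutch-site.xml. Below is a minimal sketch of such an override, assuming a stock Nutch 1.x conf/ layout; the 2-redirect and 65536-byte values are just the examples above, not recommendations:

<?xml version="1.0"?>
<configuration>
  <!-- Follow up to 2 redirects during the fetch itself
       instead of recording them for a later fetch round. -->
  <property>
    <name>http.redirect.max</name>
    <value>2</value>
  </property>
  <!-- Truncate anything larger than 64 KB, for both http:// and file:// content. -->
  <property>
    <name>http.content.limit</name>
    <value>65536</value>
  </property>
  <property>
    <name>file.content.limit</name>
    <value>65536</value>
  </property>
</configuration>

Settings in nutch-site.xml take precedence over the defaults in nutch-default.xml, so the defaults file can stay untouched.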