The fastest way to scrape many pages within one website

Date: 2011-06-12 21:14:04

Tags: c# httpwebrequest screen-scraping

I have a C# application that needs to scrape many pages within a certain domain as fast as possible. I have a Parallel.ForEach loop that iterates over all the URLs (multi-threaded) and scrapes each of them with the code below:

private string ScrapeWebpage(string url, DateTime? updateDate)
{
    //create request (which supports http compression)
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Pipelined = true;
    request.KeepAlive = true;
    request.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip,deflate");
    if (updateDate != null)
        request.IfModifiedSince = updateDate.Value;

    //get response; wrap the stream in a decompressor if the server compressed it.
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        Stream responseStream = response.GetResponseStream();
        if (response.ContentEncoding.ToLower().Contains("gzip"))
            responseStream = new GZipStream(responseStream, CompressionMode.Decompress);
        else if (response.ContentEncoding.ToLower().Contains("deflate"))
            responseStream = new DeflateStream(responseStream, CompressionMode.Decompress);

        //read html; the using blocks dispose of the response, stream, and reader.
        using (StreamReader reader = new StreamReader(responseStream, Encoding.Default))
        {
            return reader.ReadToEnd();
        }
    }
}

As you can see, I have HTTP compression support and I set request.KeepAlive and request.Pipelined to true. I'd like to know whether the code I'm using is the fastest way to scrape many pages within the same site, or whether there is a better way to keep the session open across multiple requests. My code creates a new request instance for every page I hit; should I try to use a single request instance for all the pages? Is it ideal to have pipelining and keep-alive enabled?
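As an aside, a simpler sketch of the compression handling: `HttpWebRequest` can send the `Accept-Encoding` header and decompress the response itself via the `AutomaticDecompression` property, which would replace the manual `GZipStream`/`DeflateStream` branches above (a sketch, not part of the original question):

```csharp
using System;
using System.IO;
using System.Net;

class Example
{
    static string Fetch(string url)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.KeepAlive = true;
        // The framework adds "Accept-Encoding: gzip, deflate" and transparently
        // decompresses the response stream, so no manual GZipStream is needed.
        request.AutomaticDecompression =
            DecompressionMethods.GZip | DecompressionMethods.Deflate;

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}
```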

1 Answer:

Answer 0 (score: 1)

It turns out what I was missing was:

ServicePointManager.DefaultConnectionLimit = 1000000;
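For context, a minimal sketch of where this setting fits (the URL list and the body of the loop are placeholders; `ScrapeWebpage` is the method from the question): by default `ServicePointManager` caps concurrent connections per host at a small number, which serializes most of the requests issued by a `Parallel.ForEach` loop. Raising the limit before making any requests lets them actually run in parallel.

```csharp
using System;
using System.Net;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Must be set before the first request; the default per-host
        // connection limit (2 for client apps) throttles parallel scraping.
        ServicePointManager.DefaultConnectionLimit = 1000000;

        string[] urls = { /* ...the pages to scrape... */ };

        Parallel.ForEach(urls, url =>
        {
            // ScrapeWebpage is the method defined in the question.
            // string html = ScrapeWebpage(url, null);
            // ...process html...
        });
    }
}
```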