I'm very new to Python, and I'm trying to scrape some basic data from a client's website. I've used the exact same approach on other websites and got the expected results. This is what I have so far:
from urllib.request import urlopen
from bs4 import BeautifulSoup
main_url = 'https://www.grainger.com/category/pipe-hose-tube-fittings/hose-products/hose-fittings-couplings/cam-groove-fittings-gaskets/metal-cam-groove-fittings/stainless-steel-cam-groove-fittings'
uClient = urlopen(main_url)
main_html = uClient.read()
uClient.close()
Even this simple call to read the site results in a timeout error. As I said, I've used exactly the same code successfully on other websites. The error is:
Traceback (most recent call last):
File "Pricing_Tool.py", line 6, in <module>
uClient = uReq(main_url)
File "C:\Users\Brian Knoll\anaconda3\lib\urllib\request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "C:\Users\Brian Knoll\anaconda3\lib\urllib\request.py", line 525, in open
response = self._open(req, data)
File "C:\Users\Brian Knoll\anaconda3\lib\urllib\request.py", line 543, in _open
'_open', req)
File "C:\Users\Brian Knoll\anaconda3\lib\urllib\request.py", line 503, in _call_chain
result = func(*args)
File "C:\Users\Brian Knoll\anaconda3\lib\urllib\request.py", line 1362, in https_open
context=self._context, check_hostname=self._check_hostname)
File "C:\Users\Brian Knoll\anaconda3\lib\urllib\request.py", line 1322, in do_open
r = h.getresponse()
File "C:\Users\Brian Knoll\anaconda3\lib\http\client.py", line 1344, in getresponse
response.begin()
File "C:\Users\Brian Knoll\anaconda3\lib\http\client.py", line 306, in begin
version, status, reason = self._read_status()
File "C:\Users\Brian Knoll\anaconda3\lib\http\client.py", line 267, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "C:\Users\Brian Knoll\anaconda3\lib\socket.py", line 589, in readinto
return self._sock.recv_into(b)
File "C:\Users\Brian Knoll\anaconda3\lib\ssl.py", line 1071, in recv_into
return self.read(nbytes, buffer)
File "C:\Users\Brian Knoll\anaconda3\lib\ssl.py", line 929, in read
return self._sslobj.read(len, buffer)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
Is it possible the website is just too large to handle? Any help would be greatly appreciated. Thanks!
Usually a website returns a response when you simply send a request with requests. But some sites require specific headers, such as User-Agent, Cookie, etc. This is one such site. You have to send a User-Agent header so the site sees the request as coming from a browser. The code below should return response code 200.
import requests
headers = {"User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"}
res = requests.get("https://www.grainger.com/category/pipe-hose-tube-fittings/hose-products/hose-fittings-couplings/cam-groove-fittings-gaskets/metal-cam-groove-fittings/stainless-steel-cam-groove-fittings", headers=headers)
print(res.status_code)
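If you would rather stay with urllib as in your original script, the same idea applies: attach the User-Agent header to a Request object before opening the URL. This is only a sketch of the equivalent call (the timeout value is arbitrary), not something verified against this particular site.
from urllib.request import Request, urlopen
main_url = "https://www.grainger.com/category/pipe-hose-tube-fittings/hose-products/hose-fittings-couplings/cam-groove-fittings-gaskets/metal-cam-groove-fittings/stainless-steel-cam-groove-fittings"
headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"}
# Build a Request carrying the browser-like header, then open it with an explicit timeout
req = Request(main_url, headers=headers)
with urlopen(req, timeout=30) as uClient:
    main_html = uClient.read()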
Update:
from bs4 import BeautifulSoup
soup = BeautifulSoup(res.text, "lxml")
print(soup.find_all("a"))
This will give you all the anchor tags.
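From there you can pull whatever you need out of each tag. As a minimal sketch (just an illustration, not selectors specific to the Grainger page), you could list the link text and targets like this:
for a in soup.find_all("a"):
    # a.get_text() is the visible link text, a.get("href") is the target URL (None if absent)
    print(a.get_text(strip=True), a.get("href"))
Once that works, you can narrow things down with soup.find() or soup.select() to the elements that actually hold the product data you're after.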