How to scrape a link inside a td tag in Python


This is the HTML code I found on the website. I want to get the vk link that is inside the td tag.

I have tried many approaches in Python to scrape that link, but it always raises some kind of error, or sometimes it prints different links.

<thead>
  <tr class="footable-header">
    <th scope="col" class="ninja_column_0 ninja_clmn_nm_date">Date</th>
    <th scope="col" class="ninja_column_1 ninja_clmn_nm_download">download</th>
  </tr>
</thead>
<tbody>
  <tr data-row_id="0" class="ninja_table_row_0 nt_row_id_0">
    <td>01-05-2022</td>
    <td>https://vk.com/doc722551386_632783806?hash=gjIfCA0ILqZ1LQlzftCyxZ4zOATANYnUqZXiZ1vsAJH&dl=5wFKrFiIzvVfYJ6M4m1z9ALqKzGdXJdsGAXv1NaBtSg</td>
  </tr>

Here is the Python code I tried:

import requests
from bs4 import BeautifulSoup

url="https://www.careerswave.in/dainik-jagran-newspaper-download/"
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text,'html.parser')
f = open("vkdain.txt", "w")
for link in soup.find_all("a"):
data = link.get('href')
print(data)
Liam

If you only want the links inside the td elements, this works for me:

import requests
from bs4 import BeautifulSoup

url = "https://www.careerswave.in/dainik-jagran-newspaper-download/"
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, 'html.parser')
f = open("vkdain.txt", "w")
for link in soup.find_all("td"): # find all the td's
    if link.text.startswith('https://vk'): # check if the pattern is the one you want
        print(link.text)

This will give you the following results:

https://vk.com/doc722551386_632783806?hash=gjIfCA0ILqZ1LQlzftCyxZ4zOATANYnUqZXiZ1vsAJH&dl=5wFKrFiIzvVfYJ6M4m1z9ALqKzGdXJdsGAXv1NaBtSg
https://vk.com/doc722551386_632705478?hash=mXInLmfkZNSLz5UVqRoRW60bRlzynUFUpRZoiBeW4ko&dl=zFzHm0Edhycg4ulJp33jdeFbypSaynNcjpZ41cUnID0
...
https://vk.com/doc623586997_607921843?hash=c6f706ee5f09f4d4e5&dl=f780520e509b9f671b
https://vk.com/doc623586997_607809766?hash=ef486a0fb1e873640e&dl=eeb60781cef9e58541
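
As a follow-up: the snippet above opens vkdain.txt but never writes to it. A sketch of how the matched links could actually be saved (same URL and filename as above, assuming you want one link per line), using a with block so the file is closed properly:

import requests
from bs4 import BeautifulSoup

url = "https://www.careerswave.in/dainik-jagran-newspaper-download/"
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, 'html.parser')

# Collect the text of every <td> that looks like a vk link.
links = [td.get_text(strip=True) for td in soup.find_all("td")
         if td.get_text(strip=True).startswith("https://vk")]

# Write one link per line to vkdain.txt.
with open("vkdain.txt", "w") as f:
    f.write("\n".join(links))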
