How do I download all links to .zip files on a given web page using wget / curl?

Date: 2012-11-23 17:00:47

Tags: curl download wget

A page contains links to a set of .zip files, all of which I want to download. I know this can be done with wget and curl. How is it done?

3 Answers:

Answer 0 (score: 111)

The command is:

wget -r -np -l 1 -A zip http://example.com/download/

What the options mean:

-r,  --recursive          specify recursive download.
-np, --no-parent          don't ascend to the parent directory.
-l,  --level=NUMBER       maximum recursion depth (inf or 0 for infinite).
-A,  --accept=LIST        comma-separated list of accepted extensions.
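
Before running the recursive download, you can preview which links the `-A zip` filter would keep. A minimal sketch, using a made-up sample page saved to a local file (the file names and `/tmp/index.html` path are assumptions, standing in for the real index page):

```shell
# Hypothetical sample standing in for a saved copy of the download page.
cat > /tmp/index.html <<'EOF'
<a href="a.zip">archive A</a>
<a href="notes.txt">notes</a>
<a href="b.zip">archive B</a>
EOF

# List only the hrefs ending in .zip -- the same set `-A zip` would accept.
grep -o 'href="[^"]*\.zip"' /tmp/index.html | sed 's/href="\(.*\)"/\1/'
```

This prints `a.zip` and `b.zip`, one per line, while `notes.txt` is filtered out.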

Answer 1 (score: 60)

The solution above did not work for me. Only this worked for me:

wget -r -l1 -H -t1 -nd -N -np -A.mp3 -erobots=off [url of website]

What the options mean:

-r            recursive
-l1           maximum recursion depth (1=use only this directory)
-H            span hosts (visit other hosts in the recursion)
-t1           Number of retries
-nd           Don't make new directories, put downloaded files in this one
-N            turn on timestamping
-A.mp3        download only mp3s
-erobots=off  execute "robots=off" as if it were part of .wgetrc (ignore robots.txt)
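
For readability, the same flags can be spelled out as long options. A sketch only: the URL is a placeholder, and the command is echoed rather than run so the assembled invocation is visible:

```shell
# Long-option equivalents of -r -l1 -H -t1 -nd -N -np -A.mp3 -erobots=off.
args=(--recursive --level=1 --span-hosts --tries=1
      --no-directories --timestamping --no-parent
      --accept='.mp3' --execute robots=off)

# Placeholder URL; echo instead of executing, to show the full command line.
echo wget "${args[@]}" 'http://example.com/music/'
```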

Answer 2 (score: 3)

For other scenarios where I need some parallel magic, I use:

curl [url] | grep -i [filending] | sed -n 's/.*href="\([^"]*\).*/\1/p' | parallel -N5 wget
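
The extraction stages of that pipeline can be tried without network access by feeding them inline sample HTML in place of the `curl [url]` output (the file names below are made up):

```shell
# Same grep/sed stages as above, fed sample HTML instead of curl output.
printf '%s\n' \
  '<a href="files/a.zip">A</a>' \
  '<a href="readme.html">readme</a>' \
  '<a href="files/b.zip">B</a>' |
  grep -i 'zip' |
  sed -n 's/.*href="\([^"]*\).*/\1/p'
# In the real pipeline, the resulting URL list is handed to GNU parallel + wget.
```

This prints `files/a.zip` and `files/b.zip`; the `readme.html` line is dropped by the grep.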