Handy for scraping a website, with a choice of how many levels deep to dig
https://calomel.org/file_retrival.html
connect to ftp://server_name.com/
continue failed downloads (-c)
recurse only one level deep, i.e. the current directory only (-l1)
Recursive mode; copy whole directory trees if needed (-r)
Do not make the directory structure, just download the files (-nd)
get the files that symlinks point to (--retr-symlinks)
retry up to three (3) times when a connection or download fails (-t3)
time out an inactive connection after 30 seconds (-T30)
limit the download rate to 50 kilobytes per second (--limit-rate=50k)
wait between 1 and 3 seconds before downloading the next file (--random-wait)
do not download any file that ends with ".iso" (--reject "*.iso"); alternatively, use --accept "*.iso" to download only .iso files (see the example after the full command below)
download "file_or_directory" (the URL can be ftp or http)
wget -c -l1 -r -nd --retr-symlinks -t3 -T30 --limit-rate=50k --random-wait --reject "*.iso" ftp://server_name.com/f
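A minimal variation of the command above, using the same placeholder server, with the --accept filter in place of --reject so that only .iso images are downloaded:

wget -c -l1 -r -nd --retr-symlinks -t3 -T30 --limit-rate=50k --random-wait --accept "*.iso" ftp://server_name.com/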
Another guide, on using wget or curl to download web content:
http://psung.blogspot.hk/2008/06/using-wget-or-curl-to-download-web.html
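For HTTP sites, a minimal sketch in the same spirit (the hostname and paths are placeholders, not taken from the sources above): recurse one level deep without climbing to the parent directory (-np), convert links for offline viewing (-k), and pull in page requisites such as images and CSS (-p):

wget -r -l1 -np -k -p http://example.com/docs/

With curl, which does not recurse, the rough equivalent for a single file is to resume an interrupted transfer (-C -) and keep the remote file name (-O):

curl -C - -O http://example.com/docs/file.tar.gz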