# How to Download a List of URLs via Terminal: wget, curl & More
Downloading a list of URLs from the terminal is fast and scriptable. Here's how to do it with wget, curl, aria2, and Python — with examples for each.
## Terminal URL Downloading: When to Use It
Terminal-based downloading is ideal when:
- You have a large list (50+ URLs) to download
- You need to automate downloads in a script or cron job
- You want fine-grained control over headers, rate limits, and retries
- You're working on a remote server without a GUI
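As a concrete example of the cron case, a single crontab entry can fetch an entire list every night. This is a sketch only: the schedule and the paths are hypothetical placeholders.

```text
# Run at 02:00 daily: quietly fetch every URL in the list into a downloads dir
0 2 * * * wget -q -i /home/user/urls.txt -P /home/user/downloads/
```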
## Method 1: wget (Most Common)

wget comes pre-installed on most Linux distros. On macOS, install it with Homebrew (`brew install wget`); on Windows, use Chocolatey or WSL.
Download a single URL:

```shell
wget https://example.com/file.pdf
```

Download from a URL list file:

```shell
wget -i urls.txt
```

where `urls.txt` has one URL per line.
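For reference, `urls.txt` is just a plain-text file of direct file links, such as (placeholder URLs):

```text
https://example.com/report.pdf
https://example.com/photos/image01.jpg
https://example.com/archive.zip?version=2
```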
Download to a custom output directory:

```shell
wget -P ./downloads/ -i urls.txt
```

Download only specific file types recursively:

```shell
# -r: recurse, -np: never ascend to the parent directory,
# -nd: save files flat (no mirrored directory tree),
# -A: accept only files matching these patterns
wget -r -np -nd -A "*.pdf,*.docx" https://example.com/docs/
```
## Method 2: curl

curl is more flexible than wget for scripted use and API integrations, and it ships by default on macOS and recent versions of Windows 10/11.
Download a single file, keeping the remote filename:

```shell
curl -O https://example.com/file.pdf
```

Download multiple URLs from a list:

```shell
# One curl invocation per line of urls.txt
xargs -n 1 curl -O < urls.txt

# Or with GNU parallel, 4 downloads at a time:
cat urls.txt | parallel -j 4 curl -O {}

# curl 7.66+ can also parallelize natively:
curl --parallel --parallel-max 4 --remote-name-all $(cat urls.txt)
```

Follow redirects and save with the remote filename:

```shell
curl -L -O https://example.com/download/file
```
## Method 3: aria2c (Fastest for Parallel Downloads)
aria2 is a multi-protocol download utility that supports parallelism out of the box.
```shell
# Download up to 8 files from the list concurrently
aria2c -i urls.txt -j 8

# Accelerate one large file: up to 8 connections to the server,
# with the file split into 8 segments (-j only limits concurrent
# downloads, so it does nothing useful for a single URL)
aria2c -x 8 --split=8 https://example.com/large-file.zip
```
Best for: Large batches where speed matters.
## Method 4: Python with requests
```python
import os
import requests

urls = open("urls.txt").read().splitlines()
os.makedirs("downloads", exist_ok=True)

for url in urls:
    # Derive a filename from the URL path, dropping any query string
    filename = url.split("/")[-1].split("?")[0] or "file"
    r = requests.get(url, timeout=30)
    r.raise_for_status()  # fail loudly on HTTP errors
    with open(f"downloads/{filename}", "wb") as f:
        f.write(r.content)
    print(f"✓ {filename}")
```
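For larger lists, the same loop can be fanned out across threads with the standard library. A minimal sketch: `fetch` here is an offline stub standing in for a real HTTP request (swap in `requests.get(url).content` for actual use), and the URLs are placeholders.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stub standing in for a real HTTP request, so the sketch runs offline.
    return f"contents of {url}".encode()

def download(url, outdir="downloads"):
    # Derive a filename from the URL path, dropping any query string
    os.makedirs(outdir, exist_ok=True)
    filename = url.split("/")[-1].split("?")[0] or "file"
    path = os.path.join(outdir, filename)
    with open(path, "wb") as f:
        f.write(fetch(url))
    return path

urls = ["https://example.com/a.pdf", "https://example.com/b.pdf"]
# Up to 4 downloads in flight at once; map preserves input order
with ThreadPoolExecutor(max_workers=4) as pool:
    paths = list(pool.map(download, urls))
print(paths)
```

Threads suit this workload because downloads are I/O-bound: workers spend most of their time waiting on the network, not the CPU.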
Best for: Situations needing custom headers, cookies, or authentication.
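For the headers/cookies/auth case, a `requests.Session` carries them across every request. A hedged sketch: the header values, token, and cookie name below are hypothetical placeholders, not a real API's requirements.

```python
import requests

session = requests.Session()
session.headers.update({
    "User-Agent": "my-downloader/1.0",    # identify your client (placeholder)
    "Authorization": "Bearer YOUR_TOKEN", # placeholder bearer token
})
session.cookies.set("sessionid", "YOUR_SESSION_COOKIE")  # placeholder cookie

# Every request made through the session reuses these headers and cookies:
# r = session.get(url, timeout=30)
```

A session also reuses the underlying TCP connection between requests to the same host, which speeds up large batches.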
## When Terminal Isn't the Right Tool
Terminal downloading requires knowing the exact file URLs in advance. If you have page URLs (not direct file links) and need to find all the files on each page, use FileGrab first to discover the file URLs, then download via terminal if preferred.
## Quick Comparison
| Tool | Parallel downloads | Recursive | Scripting | Platform |
|---|---|---|---|---|
| wget | Limited | ✓ | Good | Linux/macOS/Windows |
| curl | Native (7.66+) or GNU parallel | — | Excellent | Linux/macOS/Windows |
| aria2 | ✓ (native) | — | Good | Linux/macOS/Windows |
| Python | Via threading | — | Excellent | Any with Python |