Retrieving Web Content with cURL
One of the most common use cases for cURL is retrieving web content. cURL provides a simple and efficient way to fetch data from web servers using various protocols.
Retrieving a Web Page
To retrieve the content of a web page using cURL, you can use the following command:
curl https://www.example.com
This will fetch the HTML content of https://www.example.com and output it to the console. You can also save the output to a file using the -o or -O options:
## Save the output to a file named "example.html"
curl -o example.html https://www.example.com
## Save the output using the file name from the URL
curl -O https://www.example.com/index.html
Note that -O takes the file name from the last segment of the URL, so the URL must end in an actual file name (index.html here); a bare domain will cause cURL to report that it has no file name to use.
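After a download you may want to confirm what you actually received. cURL's -w (--write-out) option can print transfer details once the request completes; the sketch below simply reuses the example.html file name from above:
## Save the page, then print the HTTP status code and number of bytes downloaded
curl -s -o example.html -w "%{http_code} %{size_download}\n" https://www.example.com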
Handling HTTP Headers
cURL allows you to view the HTTP headers of a web request by using the -I or --head option:
curl -I https://www.example.com
This sends a HEAD request and displays only the response headers, such as the status code, content type, and other metadata.
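Some servers answer HEAD requests differently from regular GET requests. If you want the headers that accompany a normal page fetch, the -D (--dump-header) option writes them out while the body is downloaded as usual; a minimal sketch, again reusing the example.html file name:
## Fetch the page normally and print its response headers (-D - writes them to stdout)
curl -s -D - -o example.html https://www.example.com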
Sending HTTP Requests
cURL can also be used to send HTTP requests with custom methods, headers, and data. For example, to send a POST request with a JSON payload:
curl -X POST \
-H "Content-Type: application/json" \
-d '{"key":"value"}' \
https://api.example.com/endpoint
This command sends a POST request to https://api.example.com/endpoint with a JSON payload and sets the Content-Type header to application/json.
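For larger payloads it is often cleaner to keep the JSON in a file and let cURL read it with -d @filename. The endpoint below is the same hypothetical one as above, and payload.json is an assumed file name:
## Send the contents of payload.json as the request body
## (use --data-binary @payload.json if the file's line breaks must be preserved exactly)
curl -X POST \
-H "Content-Type: application/json" \
-d @payload.json \
https://api.example.com/endpoint
Because -d already implies a POST request, the -X POST flag is technically redundant; it is kept here only to mirror the earlier example.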
Handling Redirects
cURL can automatically follow redirects by using the -L or --location option:
curl -L https://bit.ly/example-url
This will follow any redirects and fetch the content from the final destination URL.
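If you only need to know where a shortened link ends up, -L can be combined with --write-out to report the final URL without keeping the body. The bit.ly address is the same placeholder used above, and --max-redirs simply caps how many hops cURL will follow:
## Follow redirects (up to 10) and print only the final URL
curl -sIL --max-redirs 10 -o /dev/null -w "%{url_effective}\n" https://bit.ly/example-url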
By mastering these basic cURL commands, you'll be able to retrieve web content efficiently and automate various web-related tasks in your Linux environment.