I have an api to export some information to a csv file. The API is correct and it is downloading my file when I access it from the browser. I need to access this API from the terminal and download the file without having to go to the browser.
My route for the API looks like this:
Route::get('/api/file/export', [
    'uses' => 'File\FileController@export',
    'middleware' => 'auth.basic'
]);
I tried using curl like this:
curl --user email:password http://example.com/api/file/export
I have tried different curl commands, but each of them just prints the HTML of the redirect to the login page. When I use -O, the option for downloading a file, it downloads a file that contains the redirect-to-login link.
curl --user email:password -O http://example.com/api/file/export
Am I calling the API correctly? How else can I access the API from the terminal?
You first need to be logged in to your website. You can try this:
curl --user email:password --cookie-jar cookies.txt http://domain.tld/login_page
And then use the cookies for your second request:
curl --cookie cookies.txt http://domain.tld/file/to/export
If that does not work, you need to perform the whole form submission with cURL, i.e. do a POST request with the email, password, etc.
Someone gave a good solution here
PS: Also check whether you need a token (e.g. a CSRF or API token) to call your API.
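If you do need the full login flow, a minimal sketch could look like this (the /login URL, the _token field, and the email/password field names are assumptions about a standard Laravel login form, not taken from the question):
# fetch the login page, keep the session cookie, and pull out the CSRF token
curl -s --cookie-jar cookies.txt http://example.com/login > login.html
TOKEN=$(grep -o 'name="_token" value="[^"]*"' login.html | cut -d'"' -f4)
# submit the login form with the same cookie jar
curl --cookie cookies.txt --cookie-jar cookies.txt -d "email=email&password=password&_token=$TOKEN" http://example.com/login
# finally download the export with the authenticated session
curl --cookie cookies.txt -O http://example.com/api/file/export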
Related
So, I am building an API in Laravel and I created custom middleware to authenticate the user based on their username and password, either as part of the Authorization header or by passing the credentials as part of the query string.
Example:
curl http://myapi.com/api/v1/getsomething -u username:password
-- or --
curl http://myapi.com/api/v1/getsomething?username=username&password=password
Both methods work in the browser as expected, but when I try to run these via curl in the terminal, my middleware is not being hit during the request. Is there any reason why this may be?
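One thing worth checking when testing the second form from a shell: an unquoted & ends the command, so everything after it (the password parameter) never reaches curl and the request goes out without the full credentials. A quick sketch of both safe variants:
# quote the URL so the shell does not treat & as a control operator
curl "http://myapi.com/api/v1/getsomething?username=username&password=password"
# or let curl build the Authorization header itself
curl -u username:password http://myapi.com/api/v1/getsomething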
I need to download several zip files from this web page ....
http://www.geoportale.regione.lombardia.it/download-pacchetti?p_p_id=dwnpackageportlet_WAR_geoportaledownloadportlet&p_p_lifecycle=0&metadataid=%7B16C07895-B75B-466A-B980-940ECA207F64%7D
using curl or wget, i.e. not interactively.
A sample URL is the following:
http://www.geoportale.regione.lombardia.it/rlregis_download/service/package?dbId=323&cod=12
If I use this link in a new browser tab or window, everything works fine, but with curl or wget it is not possible to download the zip file.
Watching what happens in the browser with Firebug, or the browser console in general, I can see that there is first a POST request and then a GET request, but I'm not able to reproduce these requests using curl or wget.
Could it also be that some cookies are set in the browser session and the links do not work without them?
Any suggestion will be appreciated.
Cesare
NOTE: when I try to use wget, this is my result:
NOTE 2: 404 Not Found
NOTE 3 (the solution): the right command is
wget "http://www.geoportale.regione.lombardia.it/rlregis_download/service/package?dbId=323&cod=12"
then I have to rename the downloaded file to something like "pippo.zip", or, better, use the -O option like this:
wget "http://www.geoportale.regione.lombardia.it/rlregis_download/service/package?dbId=323&cod=12" -O pippo.zip
Looking at your command, you're missing the double quotes. Your command should be:
wget "http://www.geoportale.regione.lombardia.it/rlregis_download/service/package?dbId=323&cod=12"
That should download it properly. Without the quotes, the shell treats the & as the end of the command, so only the dbId=323 part of the URL is passed to wget and the cod parameter is lost, which would explain the 404 Not Found you saw.
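For completeness, a roughly equivalent curl call (just a sketch; the pippo.zip name is taken from the question):
curl "http://www.geoportale.regione.lombardia.it/rlregis_download/service/package?dbId=323&cod=12" -o pippo.zip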
I am trying to access the following URL using cURL:
http://bizsearch.penrithcity.nsw.gov.au/eplanning/Pages/XC.Track/SearchApplication.aspx
However, when I attempt to access the web page I am redirected to:
http://bizsearch.penrithcity.nsw.gov.au/eplanning/Common/Common/terms.aspx
I tried using the following cURL commands to get past this:
curl --cookie-jar "CookieTest.txt" url(common terms) -d "ctl00$ctMain1$chkAgree$chk1=on&ctl00$ctMain1$BtnAgree=I Agree"
curl --cookie "CookieTest.txt" url(search application)
Any help would be greatly appreciated, as I am new to cURL and am having difficulty troubleshooting. I want to pull the XML from the search application page.
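For reference, a sketch of the two-step flow (the single quotes keep the shell from expanding the $ signs in the field names; the idea that the page also needs __VIEWSTATE and __EVENTVALIDATION fields scraped from the terms page is an assumption about typical ASP.NET WebForms sites, not something confirmed in the question):
# accept the terms and keep the session cookie
curl --cookie-jar CookieTest.txt -d 'ctl00$ctMain1$chkAgree$chk1=on&ctl00$ctMain1$BtnAgree=I+Agree' "http://bizsearch.penrithcity.nsw.gov.au/eplanning/Common/Common/terms.aspx"
# then request the search page with the same cookies
curl --cookie CookieTest.txt "http://bizsearch.penrithcity.nsw.gov.au/eplanning/Pages/XC.Track/SearchApplication.aspx"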
I have a page (built with a PHP framework) that adds records to a MySQL DB in this way:
www.mysite.ext/controller/addRecord.php?id=number
It adds a row to a table with the id number passed in the query string, along with other information such as a timestamp, etc.
So, I moved my entire web application to another domain, and all HTTP requests are correctly redirected from the old domain to the new one.
The only remaining issue is curl: I wrote a bash script (under Linux) that runs curl on this link. Now, obviously, it does not work, because curl returns a message saying the page has moved.
OK, I edited the curl syntax this way:
#! /bin/sh
link="www.myoldsite.ext/controlloer/addRecord.php?id=number"
curl --request -L GET $link
I added -L to follow the URL to its new location, but curl returns the error mentioned in this topic's title.
It would be easier if I could directly modify the link to use the new domain, but I do not have physical access to all the devices.
GET is the default request method for curl, and that's not the way to set it anyway.
curl -X GET ...
That is the way to set GET as the method keyword that curl uses.
It should be noted that curl selects which methods to use on its own depending on what action to ask for. -d will do POST, -I will do HEAD and so on. If you use the --request / -X option you can change the method keyword curl selects, but you will not modify curl's behavior. This means that if you for example use -d "data" to do a POST, you can modify the method to a PROPFIND with -X and curl will still think it sends a POST. You can change the normal GET to a POST method by simply adding -X POST in a command line like:
curl -X POST http://example.org/
... but curl will still think and act as if it sent a GET so it won't send any request body etc.
More here: http://curl.haxx.se/docs/httpscripting.html#More_on_changed_methods
Again, that's not necessary. Are you sure the link is correct?
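A minimal sketch of the corrected script, assuming all you need is a GET that follows the redirect (the link is the same placeholder from the question):
#! /bin/sh
link="www.myoldsite.ext/controller/addRecord.php?id=number"
curl -L "$link"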
As my client requires, I developed some code to log in via cURL:
login to www.web1.com and store cookies in cookie.txt
go to www.web2.com and browse a page using that cookie.txt
no problem with www.web2.com
But when I want to do this with www.web3.com, the problem appears.
www.web3.com uses sessions and cookies itself, and I have to gather and use them.
This means I should have two sets of cookies, first those from www.web1.com and second those from www.web3.com, and then request www.web3.com/somepage.
How can I do that?
You can execute a command-line call to curl from PHP to save cookies to a file like so:
curl -c '/tmp/mycookies.txt' 'http://www.site.com/login.php'
Then use those cookies when submitting to the page, like so:
curl -b '/tmp/mycookies.txt' -d 'uname=MyLoginName&pass=MyPassword&action=login&x=67&y=11' 'http://www.site.com/login.php'
For more info about these command line flags:
http://curl.haxx.se/docs/manpage.html
You can use the following line to get the cookie information:
curl -k -s -d 'user=foo&pass=bar' -D - https://server1.com/login/ -o /dev/null -f
Use shell_exec or exec to run this command. After getting the header information you can parse the cookie information. Use a helper class or write your own parser -> http://framework.zend.com/manual/en/zend.http.cookies.html (Zend_Http_Cookie::fromString)
You can store this information in a session instead of a text file. For web3.com, also grab the cookie information and save it in the session or in the cookie.txt file.
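If you stay on the command line, one way to handle the two sets of cookies is to keep a single cookie jar for all requests and let curl send only the cookies whose domain matches each request. A rough sketch (the login URL and form field names are placeholders, not the real sites'):
# log in to web1 and store its cookies in the shared jar
curl -b cookies.txt -c cookies.txt -d 'uname=MyLoginName&pass=MyPassword' 'http://www.web1.com/login.php'
# visit web3 so its own session cookies land in the same jar
curl -b cookies.txt -c cookies.txt 'http://www.web3.com/'
# request the target page; curl sends only the cookies that match web3.com
curl -b cookies.txt -c cookies.txt 'http://www.web3.com/somepage'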