My PHP cURL request (and the curl command in the terminal) unfortunately shows different content than opening the URL manually in the browser.
Here is my problem: I want to display the currently available films from https://reservierung.kinolambach.at/filmlist/?location_id=1 (German cinema) on my website.
When you check the network tab, the site sends an AJAX call with the specific date to the server and gets a response. You can try it out by calling https://reservierung.kinolambach.at/filmlist/?location_id=1&date=4.1.2021&film_table=true in your browser (due to Covid there is only a test film available on 4 Jan 2021).
My website does not run on the same domain.
Using an AJAX call did not work because of the CORS policy, so my second approach is to load the data via cURL from PHP.
Here is my PHP code, which does not work. For every request I make, the result is "No film available on this day" (German: 'Für diesen Tag ist kein Programm verfügbar.'), which seems to be the "default" response.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'https://reservierung.kinolambach.at/filmlist/?location_id=1&date=4.1.2021&film_table=true');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 0);
curl_setopt($ch, CURLOPT_HEADER, 1);
curl_setopt($ch, CURLOPT_HTTPGET, 1);
curl_setopt($ch, CURLOPT_COOKIESESSION, 0);
$result = curl_exec($ch);
echo($result);
curl_close($ch);
I have tried several other curl_setopt options, but none of them worked.
Here is my question:
How is it possible that curl shows a different result than calling the URL in the browser? You can even try it in your terminal with curl "https://reservierung.kinolambach.at/filmlist/?location_id=1&date=1.1.2021&film_table=true"
How can I change this? I guess it has to do with the headers.
Thanks in advance and all the best!
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'https://reservierung.kinolambach.at/filmlist/?location_id=1&date=4.1.2021&film_table=true');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_COOKIEFILE, 'C:/tmp/cookies.txt');
curl_setopt($ch, CURLOPT_COOKIEJAR, 'C:/tmp/cookies.txt');
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
$result = curl_exec($ch);
curl_close($ch);
Works for me with cookies enabled; the film table shows up on the second request, as I said in my comment above.
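For completeness, here is a minimal sketch of that two-request approach, assuming the server only returns the film table once the session cookie from a first visit is sent back (the cookie file location is an assumption, adjust it for your environment):
// Minimal sketch: the first request only establishes the session cookie,
// the second request (same handle, same cookie jar) should return the film table.
$url = 'https://reservierung.kinolambach.at/filmlist/?location_id=1&date=4.1.2021&film_table=true';
$cookieFile = sys_get_temp_dir() . '/kinolambach_cookies.txt'; // assumed path
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieFile);  // write received cookies here
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieFile); // and send them back
curl_exec($ch);           // first call: picks up the session cookie
$result = curl_exec($ch); // second call: should contain the film table
curl_close($ch);
echo $result;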
Guys, I am currently working on a file-hosting premium link generator. Basically it will be a website where you can get a premium link for uptobox, rapidgator, uploaded.net and other file hosts without purchasing a premium account. We purchase the accounts on behalf of the users and offer this service at a low price. While setting up the Rapidgator direct download link API I was able to get the link, but then got a "session is over" error. I was calling the API via a debugging tool, not via manual coding, and that is where I am facing this problem.
I have been using this Rapidgator API reference: https://gist.github.com/Chak10/f097b77c32a9ce83d05ef3574a30367d
I am doing the following with my debugging software and I get a success response, but when I open the returned URL in my browser it says the session ID failed.
Here are the steps I am taking:
I send a POST request to https://rapidgator.net/api/user/login with my username and password and get this output:
{"response":{"session_id":"g8a13f32hr4cbbbo54qdigrcb3","expire_date":1542688501,"traffic_left":"13178268723435"},"response_status":200,"response_details":null}
Then I send a GET request (I tried a POST request too, but the same thing happened) to this URL with the session ID and file URL embedded in the query string: https://rapidgator.net/api/file/download?sid=&url=
and I get this output:
{"response":{"url":"http:\/\/pr56.rapidgator.net\/\/?r=download\/index&session_id=uB9st0rVfhX2bNgPrFUri01a9i5xmxan"},"response_status":200,"response_details":null}
When I try to download the file from that URL in my browser, it says "Invalid Session" and sometimes gives a "too many open connections" error.
Link to the error: https://i.imgur.com/wcZ2Rh7.png
Success response: https://i.imgur.com/MqTsB8Q.png
Rapidgator needs its API to be hit three times with different URLs.
// Per-request cookie file (random name) inside the working directory
$cookie = $working_dir.rand();
// Send a Referer header with every API request
$headers = array("header"=>"Referer: https://rapidgator.net");
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://rapidgator.net/api/user/login");
curl_setopt($ch, CURLOPT_ENCODING, 'gzip, deflate');
curl_setopt($ch, CURLOPT_POSTFIELDS, "username=email#domain.ext&password=myplaintextpassword");
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_VERBOSE, 0);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie);
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
$result = curl_exec($ch);
curl_close ($ch);
$rapidgator_json = json_decode($result,true);
return array($rapidgator_json['response']['session_id'],$cookie);
http://rapidgator.net/api/user/login (this is the initial login)
The link above gives you the session ID you need. The response is JSON.
Now we need to request a download link that lets us download without going through the human-facing download form, so we use the API to request a download link with the initial session ID we got from the first URL.
$headers = array("header"=>"Referer: http://rapidgator.net/api/user/login");
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://rapidgator.net/api/file/download?sid=$rapidgator_session&url=$rapidgator_file");
curl_setopt($ch, CURLOPT_ENCODING, 'gzip, deflate');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'GET');
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_VERBOSE, 0);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_COOKIEJAR, $working_dir.$rapidgator_cookie);
curl_setopt($ch, CURLOPT_COOKIEFILE, $working_dir.$rapidgator_cookie);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
$result = curl_exec($ch);
curl_close ($ch);
$rapidgator_json = json_decode($result,true);
return array($rapidgator_json['response']['url']);
Basically, we pass the session ID Rapidgator gave us, assuming you logged in with a valid account, and include the source URL of the file you want: http://rapidgator.net/api/file/download?sid=$rapidgator_session&url=$rapidgator_file
After that, Rapidgator returns a JSON response with a URL you can use to fetch the file in question. This lets you use whatever download method you want; note that the link is a session URL and is only valid for a short period of time.
$rapidgator_json['response']['url']
All the code above is somewhat incomplete. Some extra checks on the JSON responses for possible errors/limits are recommended, as sketched below. I used functions on my end, but this is enough to show what you should be doing. The Rapidgator API returns other data that can be useful for determining whether you have gone over your daily quota, how long the session URL will last, and so on.
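As a rough illustration of those extra checks, something like the helper below could wrap each json_decode(); only the response_status and response_details keys that appear in the sample responses above are assumed:
// Sketch of a defensive check around a decoded Rapidgator API response.
// Only keys visible in the sample responses above are assumed to exist.
function rapidgator_check($raw)
{
    $json = json_decode($raw, true);
    if (!is_array($json)) {
        throw new RuntimeException('Rapidgator returned invalid JSON: ' . $raw);
    }
    if (($json['response_status'] ?? 0) != 200) {
        // response_details usually explains session/quota problems
        throw new RuntimeException(
            'Rapidgator error ' . ($json['response_status'] ?? '?') . ': '
            . var_export($json['response_details'] ?? null, true)
        );
    }
    return $json['response'];
}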
So, I need to parse some content from http://israelbar.org.il using cURL, but when I run the script the browser tab keeps reloading and nothing is shown.
$browser = curl_init();
curl_setopt($browser, CURLOPT_URL, $url);
curl_setopt($browser, CURLOPT_REFERER, $referer);
curl_setopt($browser, CURLOPT_USERAGENT, $agent);
curl_setopt($browser, CURLOPT_SSL_VERIFYPEER, FALSE);
curl_setopt($browser, CURLOPT_SSL_VERIFYHOST, FALSE);
curl_setopt($browser, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($browser, CURLOPT_CONNECTTIMEOUT, 10); // give up connecting after 10s
curl_setopt($browser, CURLOPT_TIMEOUT, 50); // give up on the whole request after 50s
curl_setopt($browser, CURLOPT_COOKIEJAR, $cookie_file_path);
curl_setopt($browser, CURLOPT_COOKIEFILE, $cookie_file_path);
$retVal = curl_exec ($browser);
curl_close ($browser);
unset($browser);
return $retVal;
I also tried Node.js and got a listing of JavaScript code in the console that I don't understand.
I think the main problem is that the headers are different, and that I must send the same headers via cURL as a browser does.
Have you tried capturing the working browser request with a tool like Fiddler (http://www.telerik.com/fiddler)? If you replay the headers it captures via cURL (see the sketch below), the response should match the browser's. Also, can you clarify the language and setup you are using (it looks like PHP?) and which 'tab' is reloading (is it your own site's?).
Other things to try:
- Call a site you know will work first, to ensure your code is correct
- Adjust the timeout values to much larger values while testing
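Here is a rough sketch of replaying captured browser headers with cURL; the header values below are illustrative placeholders, not taken from the site itself:
// Sketch: replay the headers a real browser sends.
// The header values below are placeholders; copy the real ones
// from Fiddler or the browser's network tab.
$ch = curl_init('http://israelbar.org.il');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)');
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Accept: text/html,application/xhtml+xml',
    'Accept-Language: en-US,en;q=0.9',
    'Referer: http://israelbar.org.il/',
));
$html = curl_exec($ch);
curl_close($ch);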
EDIT: The proper thing to do is simply to send a response from Node-RED, as hardillb pointed out below.
My cURL request works fine and instantly, but I simply need the page to hit the URL and not wait around for a response. I have tried every combination I can think of, and my browser still sits waiting for a server response until it times out.
$url = 'http://example.com:1880/get?temperature='.$temperature;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT_MS, 1);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, 0);
// 3. execute and fetch the resulting HTML output
$output = curl_exec($ch);
// 4. free up the curl handle
curl_close($ch);
As mentioned in the comments, the correct solution is to ensure your http-in node is paired with an http-response node in your Node-RED flow.
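If, on top of the Node-RED fix, you still want the PHP side not to block on the response, one common workaround is a very short overall timeout combined with CURLOPT_NOSIGNAL. This is only a sketch of that idea; it does not guarantee the request completes if the server is slow to accept it:
// Sketch: fire the request and stop waiting almost immediately.
// $temperature comes from the question's surrounding code.
$url = 'http://example.com:1880/get?temperature=' . $temperature;
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_NOSIGNAL, 1);     // required for sub-second timeouts
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 200); // wait at most 200 ms overall
curl_exec($ch);                            // usually "times out" - the response is discarded
curl_close($ch);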
I use the following command in some old scripts:
curl -Lk "https://www.example.com/stuff/api.php?"
I then record the header into a variable and make comparisons and so forth. What I would really like to do is convert the process to PHP. I have enabled curl and openssl, and believe I have everything ready.
What I cannot seem to find is a handy translation of that command-line syntax into the equivalent calls in PHP.
I suspect something in the order of:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
// What goes here so that I just get the Location and nothing else?
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
// Get the response and close the channel.
$response = curl_exec($ch);
curl_close($ch);
The goal is for $response to contain the data returned by the API ("OK=1&ect").
Thank you
I'm a little confused by your comment:
// What goes here so that I just get the Location and nothing else?
Anyway, if you want to obtain the response body from the remote server, use:
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($ch);
If you want to get the headers in the response (i.e.: what your comment might be referring to):
curl_setopt($ch, CURLOPT_HEADER, 1);
If your problem is that there is a redirection between the initial call and the response, use:
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
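Putting those options together, a sketch of the full curl -Lk equivalent might look like this (the URL is the placeholder from your question, with the scheme separator added):
// Rough PHP equivalent of: curl -Lk "https://www.example.com/stuff/api.php?"
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'https://www.example.com/stuff/api.php?');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // -L: follow redirects
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // -k: skip certificate verification
curl_setopt($ch, CURLOPT_HEADER, false);         // body only, e.g. "OK=1&..."
$response = curl_exec($ch);
curl_close($ch);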
I'm trying to learn cURL with PHP in order to spoof the referrer sent to a website.
With the following script I expected to accomplish this, but it does not seem to work.
Any ideas/suggestions on where I am going wrong?
Or do you know of any tutorials that could help me figure this out?
Thanks!
Jessica
<?php
$host = "http://mysite.com";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $host);
curl_setopt($ch, CURLOPT_VERBOSE, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_AUTOREFERER, false);
curl_setopt($ch, CURLOPT_REFERER, "http://google.com");
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
curl_setopt($ch, CURLOPT_HEADER, 0);
$result = curl_exec($ch);
curl_close($ch);
?>
You won't be able to see the result in the web server's analytics, because the analytics are most likely collected by JavaScript and cURL won't run/execute JavaScript. All cURL does is fetch the content of the page as if it were a text file; it won't run any of the scripts or load any referenced resources.
To be more clear, if the page contains an HTML tag like
<img src="path/to/image/image.jpg" />
cURL treats it as a line of text; it won't load image.jpg from the server. The same goes for JavaScript: if there is a
<script type="text/javascript" src="analytics.js"></script>
a browser would normally load analytics.js and run it, but cURL won't.
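One quick way to confirm the Referer header is actually being sent by your script is to point the same options at a request-echo service and look at what arrives; this sketch assumes httpbin.org is reachable from your server:
// Sketch: send the spoofed Referer to an echo service and inspect what it received.
$ch = curl_init('https://httpbin.org/headers');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_REFERER, "http://google.com");
echo curl_exec($ch); // the returned JSON should list "Referer": "http://google.com"
curl_close($ch);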