The scenario: There's a voting form (vote.php) with three fields, one of which is a hidden field containing a hash. When you submit the form, it sends a GET request to a separate script (process.php) via XHR. I am trying to simulate this with cURL, but so far I have only managed to fetch the hash and preserve the PHP session ID using a cookie jar.
The problem: The processing script (process.php) seems to be able to detect whether a request was made via XHR, and it returns an error if I just submit its required parameters with a regular cURL GET.
So how do I simulate XHR in cURL? I may even be wrong to say there is such a thing as XHR in cURL, so please advise on any method to achieve this.
There is a good chance that process.php checks for an XHR request by looking at the X-Requested-With header (exposed to PHP as $_SERVER['HTTP_X_REQUESTED_WITH']), for example:
if (isset($_SERVER['HTTP_X_REQUESTED_WITH']) && $_SERVER['HTTP_X_REQUESTED_WITH']=="XMLHttpRequest") { /* Do stuff here */ }
So you could try setting that header in your cURL request:
curl -H "X-Requested-With: XMLHttpRequest"
That has a good chance of working. Good luck!
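If you end up driving the request from PHP rather than the command line, a minimal sketch of the same idea might look like this (the URL, the hash parameter name, and the cookie-jar path are placeholders based on the question, not confirmed details):
<?php
// Sketch only: reuse the cookie jar that already holds the PHP session from
// vote.php, and add the X-Requested-With header so process.php sees an "XHR" call.
// $hash is assumed to have been scraped from the hidden form field beforehand.
$ch = curl_init('http://example.com/process.php?hash=' . urlencode($hash));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookies.txt'); // send the saved session cookie
curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookies.txt');  // keep the jar up to date
curl_setopt($ch, CURLOPT_HTTPHEADER, array('X-Requested-With: XMLHttpRequest'));
$response = curl_exec($ch);
curl_close($ch);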
Add the following header: X-Requested-With: XMLHttpRequest. It is added by the major JS libraries to make such requests easy to identify on the server.
I have a simple PHP script which shows some information to a user. I want to shorten this information as much as possible if the same page is requested with cURL or saved with Wget.
I saw several similar questions on Stack Overflow, but they have some extras like “I want to block cURL” or “redirect a form request if…”. The answers usually say that it is not possible to detect a cURL request reliably, since cURL lets the user change all request parameters and pretend to be a browser. That's okay for me; I don't want to block cURL, I want to offer an extra service for a generic cURL (and Wget) request.
Unless configured otherwise, cURL and Wget use their own default User-Agent string for their requests.
For example curl/7.47.0 or Wget/1.17.1 (linux-gnu). You can test this easily on https://requestb.in.
Server-side applications can read the User-Agent string from the request headers; in PHP it's available in the $_SERVER['HTTP_USER_AGENT'] variable.
So to detect a cURL or Wget request and offer different content, you may use:
<?php
// Catch cURL/Wget requests
if (isset($_SERVER['HTTP_USER_AGENT']) && preg_match('/^(curl|wget)/i', $_SERVER['HTTP_USER_AGENT'])) {
    echo 'Hi curl user!';
} else {
    echo 'Hello browser user!';
}
?>
In my app I detect the cURL request and then die() inside the if branch. So if it's just a browser, the condition doesn't match and all the following PHP code executes.
As said before, both cURL and Wget allow the user to set an arbitrary User-Agent, but for the service described here this solution is sufficient.
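For completeness, a rough sketch of how easily that check can be bypassed with PHP's cURL extension (the URL is a placeholder):
<?php
// Sketch: pretend to be a browser so the User-Agent check above no longer matches.
$ch = curl_init('http://example.com/info.php'); // placeholder for the page in question
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; Linux x86_64)');
echo curl_exec($ch);
curl_close($ch);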
I'm using Guzzle, installed via Composer, and failing to do something relatively straightforward.
I might be misunderstanding the documentation, but essentially what I want to do is send a POST request to a server and continue executing code without waiting for a response. Here's what I have:
$client = new \GuzzleHttp\Client(/* baseUrl and auth credentials here */);
$client->post('runtime/process-instances', [
    'future' => true,
    'json'   => $data // is an array
]);
die("I'm done with the call");
Now let's say runtime/process-instances runs for about 5 minutes; I will not get the die message before those 5 minutes are up, when instead I want it right after the request is sent to the server.
I don't have access to the server, so I can't make it respond before the processing runs. I just need to ignore the response.
Any help is appreciated.
Things I've tried:
$client->post(/*blabla*/)->then(function ($response) {});
It is not possible in Guzzle to send a request and immediately exit. Asynchronous requests require that you wait for them to complete. If you do not, the request will not get sent.
Also note that you are using post instead of postAsync; the former is a synchronous (blocking) request. To send a POST request asynchronously, use the latter. In your code example, changing post to postAsync means the process will exit before the request is complete, but the target will then never receive that request.
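A minimal sketch of that point, assuming Guzzle 6+ and a placeholder base URI: the promise returned by postAsync() is only actually transferred while it is being resolved, so you still have to wait() on it before the script ends.
<?php
require 'vendor/autoload.php';

// base_uri is a placeholder; $data is the array from the question.
$client = new \GuzzleHttp\Client(['base_uri' => 'http://example.com/']);

$promise = $client->postAsync('runtime/process-instances', [
    'json' => $data,
]);

// Other work can run here, but the request has not necessarily been sent yet.

// Without this the process may exit before the request ever goes out.
$promise->wait();

die("I'm done with the call");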
Have you tried setting a low timeout?
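For what it's worth, a rough sketch of that idea (assuming Guzzle 6+; whether the server keeps processing after the client stops waiting depends entirely on the server, so treat this as a workaround, not a guarantee):
<?php
require 'vendor/autoload.php';

$client = new \GuzzleHttp\Client(['base_uri' => 'http://example.com/']); // placeholder base URI

try {
    $client->post('runtime/process-instances', [
        'json'    => $data, // the array from the question
        'timeout' => 1,     // stop waiting for the response after about a second
    ]);
} catch (\GuzzleHttp\Exception\TransferException $e) {
    // The timeout surfaces as an exception; ignore it and move on.
}

die("I'm done with the call");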
In JavaScript I could make an HTTP request with AJAX and have a callback fire when the server sent back a response to the data I submitted. I'm now experimenting with the OAuth2 gem for Ruby, and I'm finding that callbacks don't work the same way.
I have a web server and a Facebook app set up, and I have a small PHP script that writes the current URL (including the auth code, for example) to a file, no problem. All the settings in the Facebook app are configured, and if I put this URL in the browser:
http://graph.facebook.com/oauth/authorize?client_id=[my_client_id]&redirect_uri=http://localhost/oauth/callback/index.php
It redirects successfully to that script, which then writes the authorization code to a file that I can use to get the access token. The problem is that I can only do this manually; calling Net::HTTP.get(URI(address)) in Ruby doesn't seem to trigger the PHP script.
Anyone have any ideas?
I have no idea why you posted your history with JavaScript AJAX requests, as it has no bearing on your Ruby script, which by the way doesn't even use a callback method/function. Using a callback function just means you are calling some function and passing it another function as an argument. When I started programming, the term callback function was very confusing to me, and in my opinion the term should be dropped from the lingo.
As for your Ruby script, use something like Firebug to look at the request headers your browser sends to the server when you enter the URL manually. If you set those same headers in your Ruby script, it should work, e.g.:
req['header1'] = 'hello'
req['header2'] = '10'
or:
require 'net/http'

uri = URI(address)  # the same URL you request manually in the browser
headers = {
  'header1' => 'hello',
  'header2' => '10',
  ...
}
req = Net::HTTP::Get.new(uri.request_uri, headers)
http = Net::HTTP.new(uri.host, uri.port)
resp = http.request(req)
It's possible that you have a cookie set in your browser, which the browser automatically adds to the request headers when it sends the request to the server. Your browser also adds a number of other headers to the request, many of which will have no bearing on your problem. If you have the patience, you can try to figure out which header your Ruby script's request is missing.
Another option is to use the mechanize gem, which will automatically handle cookies and redirects for requests sent by Ruby scripts:
http://docs.seattlerb.org/mechanize/GUIDE_rdoc.html
(Read the section "Let's Fetch a Page"; don't use the line require 'rubygems' if you are using Ruby 1.9+.)
I'm writing a very basic Facebook app, but I'm encountering an issue with cross-domain AJAX requests (using jQuery).
I've written a proxy page that makes requests to the Graph API via cURL, and I call that proxy via AJAX. I can visit the page in the browser and see that it has the correct output, but requesting it via AJAX always causes jQuery to fire the error handler callback.
So I have two files:
Proxy, which does the cURL request
<?php
//Do some cURL requests, manipulate some data
//return it as JSON
print json_encode($data);
?>
The Facebook canvas page, which contains this AJAX call:
$.getJSON("http://myDomain.com/proxy.php?get=stuff",
function(JSON)
{
alert("success");
})
.error(function(err)
{
alert("err");
});
Inspecting the call with Firebug shows it returns with HTTP code 200 OK, but the error handler is always fired, and no content is returned. This happens whether I set Content-Type: application/json or not.
I have written JSON-returning APIs in PHP before using AJAX and never had this trouble.
What could be causing the request to always trigger the error handler?
Recently I experienced the same issue, and in my case the problem was a domain difference between the web page and the API caused by SSL.
The web page was served over HTTP (http://myDomain.com) while the content I was requesting with jQuery was on the same domain but over HTTPS (https://myDomain.com). The browser (Chrome in this case) considered the two origins different just because of the protocol, and since the response type was "application/json", it blocked the response.
Basically, the request itself worked fine, but your browser would not let the page read the response content.
I had to add an "Access-Control-Allow-Origin" header to make it work. If you're in the same situation, have a look here: https://developer.mozilla.org/en/http_access_control.
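On the PHP side that can be as simple as emitting the header from proxy.php before the JSON body; a minimal sketch (the wildcard origin is just for illustration, you may want to restrict it to your canvas page's origin):
<?php
// Sketch: let the calling page's origin read the JSON response.
header('Access-Control-Allow-Origin: *'); // or a specific origin such as https://myDomain.com
header('Content-Type: application/json');

print json_encode($data); // $data built by the cURL calls, as in the question
?>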
I hope that helps; I got a headache from this one myself.
I want to use the Google Images API. In the past when I worked with JSON I simply used an AJAX call to get the JSON from my own server, but now I will be getting it from an external domain:
https://ajax.googleapis.com/ajax/services/search/images?q=fuzzy monkey&v=1.0
Obviously I can't load this using JS since it's not from an internal URL. So in these cases, how does one work with JSON data? Are you supposed to load it via cURL using a server-side script, or is there another way?
You can make use of JSONP by adding a callback GET param.
https://ajax.googleapis.com/ajax/services/search/images?q=fuzzy%20monkey&v=1.0&callback=hello
Then you can request it with jQuery's $.getJSON().
$.getJSON('https://ajax.googleapis.com/ajax/services/search/images?q=fuzzy%20monkey&v=1.0&callback=?', function(response) {
    console.log(response.responseData);
});
You must use Cross-Origin Resource Sharing (CORS, http://en.wikipedia.org/wiki/Cross-Origin_Resource_Sharing).
It's not as complicated as it sounds; the server simply has to set its response headers appropriately. In Python it would look like this:
self.response.headers.add_header('Access-Control-Allow-Origin', '*')
self.response.headers.add_header('Access-Control-Allow-Methods', 'GET, POST, OPTIONS')
self.response.headers.add_header('Access-Control-Allow-Headers', 'X-Requested-With')
self.response.headers.add_header('Access-Control-Max-Age', '86400')