So, I've got this PHP script that calls a REST API with curl. The URL basically looks like this:
https://firewall1/api/?type=config&action=set&xpath=/config/devices/entry[@name='localhost.localdomain']/vsys/entry[@name='vsys1']/rulebase/security/rules/entry[@name='RULENAME']&element=<disabled>no</disabled>&key=APIKEY
The response comes back as a success, but the change is not actually made on the firewall, which seems odd. If I run the same URL with command-line curl, it works as expected.
curl -v -k -g "https://firewall1/api/?type=config&action=set&xpath=/config/devices/entry[@name='localhost.localdomain']/vsys/entry[@name='vsys1']/rulebase/security/rules/entry[@name='RuleName']&element=<disabled>no</disabled>&key=APIKEY"
My curl settings look like this:
$failover1 = curl_init($enableFailover1);
$failback1 = curl_init($disableFailover1);
$commit1 = curl_init($commitFW1);
//set curl options
curl_setopt_array($failover1, array(
    CURLOPT_SSL_VERIFYHOST => 0,
    CURLOPT_SSL_VERIFYPEER => 0,
    CURLOPT_POST => TRUE,
    CURLOPT_RETURNTRANSFER => TRUE
));
$responseFail1 = curl_exec($failover1);
$responseBack1 = curl_exec($failback1);
$responseCommit1 = curl_exec($commit1);
//failover and take appropriate action for errors
if ($responseFail1 === FALSE) {
    die(curl_error($failover1));
} else {
    //do some stuff
}
Running the PHP script returns the same response as the curl command line, but the result is not the same. Is there some header I'm not passing, or something else I should do to get this working properly? I should also add that it works if I paste the URL into a browser, and also if I pass the command to shell_exec(). Thanks for the help!
Response from curl command line:
* Connection #0 to host firewall1 left intact
<response status="success" code="20"><msg>command succeeded</msg></response>
Response from curl in PHP script:
<response status="success" code="20"><msg>command succeeded</msg></response>
It looks like you are omitting the -g option in the PHP call. See the description below from the manual:
"When this style is used, the -g option must be given to stop curl from interpreting the square brackets as special globbing characters. Link local and site local addresses including a scope identifier, such as fe80::1234%1, may also be used, but the scope portion must be numeric or match an existing network interface on Linux and the percent character must be URL escaped. The previous example in an SFTP URL might look like :sftp://[fe80::1234%251]/"
https://curl.haxx.se/docs/manual.html
A better option in your case would be to call the overall URL string using PHP's shell script execution function shell_exec(). PHP cURL is a wrapper library for using curl from PHP, and there may be a few options, such as -g, that the PHP cURL library does not support even though they are available on the command line.
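A hedged sketch of the shell_exec() approach described above, assuming $enableFailover1 holds the full API URL from the question; escapeshellarg() keeps the brackets and ampersands from being interpreted by the shell:
$cmd = 'curl -s -k -g ' . escapeshellarg($enableFailover1);
$responseFail1 = shell_exec($cmd); // raw XML response, or NULL if the command produced no output
if ($responseFail1 === NULL) {
    die('curl command produced no output');
}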
I have a PHP script with which I'm trying to get the contents of a page. The code I'm using is below:
$url = "http://test.tumblr.com";
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$txt = curl_exec($ch);
curl_close($ch);
echo "$txt";
It works fine for me as it is now. The problem I'm having is, if I change the string URL to
$url = "http://-test.tumblr.com"; or $url = "http://test-.tumblr.com";
It will not work. I understand that -test.example.com or test-.example.com are not valid hostnames, but with Tumblr they do exist. Is there a workaround for this?
I even tried creating a header redirect in another PHP file so cURL would first be requesting a valid hostname, but it behaves the same way.
Thank you
Domain Names with hyphens
As you can see in a previous question about the allowed characters in a subdomain, - is not a valid character to start or end a subdomain with. So this is actually correct behavior.
The same problem was reported on the curl mailing list some time ago, but since curl follows the standard, there is actually nothing to change on their side.
Most likely Tumblr knows about this and therefore offers some alternative address leading to the same site.
Possible workaround
However, you could try using nslookup to manually look up the IP and then send your request directly to that IP (manually setting the hostname to the correct value). I didn't try this out, but it seems as if nslookup is capable of resolving malformed domain names that start or end with a hyphen.
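A rough sketch of that workaround with PHP cURL, assuming nslookup returned 198.51.100.7 for the malformed subdomain (the IP here is only a placeholder):
$ch = curl_init("http://198.51.100.7/"); // request the manually resolved IP directly
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Host: -test.tumblr.com')); // tell the server which site we want
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$txt = curl_exec($ch);
curl_close($ch);
echo $txt;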
curl
Additionally, you should know that the PHP curl functions are a direct interface to libcurl (the same library behind the curl command-line tool), so if you encounter special behavior it is most likely due to the logic in libcurl rather than the PHP functions.
I have a PHP file, let's say A.php, that gets some variables via the $_POST method and updates a local database.
Another PHP file with the name dataGather.php gathers the data in the correct form and then tries to send it to the local database by using the A.php file. Note that both files are in the same directory.
The code where I use the curl functions to do the POST request is the following:
$url = "A.php";
$ch = curl_init();
$curlConfig = array(
CURLOPT_URL => $url,
CURLOPT_POST => true,
CURLOPT_RETURNTRANSFER => true,
CURLOPT_POSTFIELDS => $datatopost
);
curl_setopt_array($ch, $curlConfig);
$result = curl_exec($ch);
curl_close($ch);
echo $result;
$datatopost is an array like the following:
$datatopost = array (
    "value1" => $val1,
    "value2" => $val2,
    // etc.
);
The problem is that when I run my program I get the following result:
Fatal error: Maximum execution time of 30 seconds exceeded in
C:\xampp\htdocs\dataGather.php on line 97
Does anyone know why this is happening? Thanks in advance.
PS: The file A.php is 100% correct because I have tested it by gathering the needed information with JavaScript. It updates the database the way I want. Also, the array $datatopost has all the information in the correct form.
I suspect you run your PHP script directly, without a web server, by simply starting the script as an executable. This is suggested by the absolute path in your error message. While it is absolutely fine to run a PHP script like that, you have to ask yourself: what does that cURL call actually do? It does not open and run the PHP file A.php you tried to reference. Why not? Because cURL opens URLs, not files. And without a server that can react to URL requests (like an HTTP server), what do you expect to happen?
The error you get is a timeout, since cURL tries to contact an HTTP server. Since you did not specify a valid URL, it most likely falls back to 'localhost', but there is no server listening there...
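A minimal sketch of the fix under that assumption: serve A.php through your local web server (XAMPP's default http://localhost/ document root is assumed here) and point cURL at that URL instead of a bare file name:
$url = "http://localhost/A.php"; // a URL the web server actually serves, not a file name
$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL => $url,
    CURLOPT_POST => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS => $datatopost, // same array as in the question
    CURLOPT_TIMEOUT => 10              // fail fast instead of running into the 30-second limit
));
$result = curl_exec($ch);
curl_close($ch);
echo $result;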
An API I'm trying to program to requires multipart/form-data content with the HTTP GET verb. From the command line I can make this work like this:
curl -X GET -H "Accept: application/json" -F grant_type=consumer_credentials -F consumer_key=$key -F consumer_secret=$secret https://example.com/api/AccessToken
which seems like a contradiction in terms to me, but it actually works, and from what I see tracing it actually uses GET. I've tried a bunch of things to get this working using PHP's cURL library, but I just can't seem to get it to not use POST, which their servers kick out with an error.
Update to clarify the question: how can I get php's cURL library to do the same thing as that command line?
which seems like a contradiction in terms to me, but it actually works, and from what I see tracing it actually uses GET
Not exactly. curl uses a feature of HTTP/1.1. It inserts an additional header field, Expect: 100-continue, to which, if supported, the server should respond with HTTP/1.1 100 Continue, telling the client to continue with its request. This interim response informs the client that the initial part of the request has been received and has not yet been rejected by the server. The client SHOULD continue by sending the remainder of the request or, if the request has already been completed, ignore this response. The server MUST send a final response after the request has been completed.
Since they are insisting on HTTP GET, just encode the form elements as query parameters on the URL you are GETting and use cURL's standard GET options instead of posting multipart/form-data.
-X only changes the method keyword; everything else keeps acting the same, which in this case (with the -F options) means a multipart formpost.
-F is a multipart formpost, and you really cannot convert that to a query part in the URL suitable for a typical GET, so this was probably not a good idea to start with.
I would guess that you actually want to use -d to specify the data to post, and then use -G to convert that data into a string that gets appended to the URL, so that the operation turns out to be a nice and clean GET.
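A hedged PHP equivalent of that clean GET; the parameter names are taken from the command line above, and everything else ($key, $secret, the example.com URL) is assumed from the question:
$query = http_build_query(array(
    'grant_type'      => 'consumer_credentials',
    'consumer_key'    => $key,
    'consumer_secret' => $secret
));
$ch = curl_init('https://example.com/api/AccessToken?' . $query);
curl_setopt($ch, CURLOPT_HTTPGET, true); // make sure the request stays a GET
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: application/json'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);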
I'm trying to use a web service REST API for which I need to add a parameter for authorization (with the appropriate key, of course) to get a XML result. I'm developing in PHP. How can I add a parameter to the request header in such a situation?
Edit: The way I'm doing the request right now is $xml = simplexml_load_file($query_string);
Are you using curl? (recommended)
I assume that you are using curl to make these requests to the REST API; if you aren't, use it.
When using curl you can add a custom header by calling curl_setopt with the appropriate parameters, as shown below.
curl_setopt(
    $curl_handle, CURLOPT_HTTPHEADER,
    array('Authentication-Key: foobar')
); // make curl send an HTTP header named 'Authentication-Key'
   // with the value 'foobar'
Documentation:
PHP: cURL - Manual
PHP: curl_setopt - Manual
Are you using file_get_contents or similar?
This method is not recommended, though it is functional.
Note: allow_url_fopen needs to be enabled for file_get_contents to be able to access resources over HTTP.
If you'd like to add a custom header to such request you'll need to create yourself a valid stream context, as in the below snippet:
$context_options = array(
    'http' => array(
        'method' => 'GET',
        'header' => 'Authentication-Key: foobar'
    )
);
$context = stream_context_create($context_options);
$response = file_get_contents(
    'http://www.stackoverflow.com', false, $context
);
Documentation:
PHP: file_get_contents - Manual
PHP: stream_context_create - Manual
PHP: Runtime Configuration, allow_url_fopen
I'm using neither of the above solutions, what should I do?
[Post OP EDIT]
My recommendation is to fetch the data using curl and then pass it off to the parser in question when all the data is received. Separate data fetching from the processing of the returned data.
[/Post OP EDIT]
When you use $xml = simplexml_load_file($query_string);, the PHP interpreter invokes its wrapper over fopen to open the contents of a file located at $query_string. If $query_string is a remote file, the PHP interpreter opens a stream to that remote URL and retrieves the contents of the file there (if the HTTP response code is 200 OK). It uses the default stream context to do that.
There is a way to alter the headers sent by altering that stream context; however, in most cases this is a bad idea. You're relying on PHP to always open all files, local or remote, using a function that was meant to take a local file name only. Not only is it a security problem, it could also be the source of a bug that is very hard to track down.
Instead, consider splitting this up: load the remote content using cURL (checking the returned HTTP status code and doing other sanity checks), then parse that content into a SimpleXMLElement object to use. When you use cURL, you can set any headers you want to send with the request by invoking something similar to curl_setopt($ch, CURLOPT_HTTPHEADER, array('HeaderName: value'));
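As a rough sketch of that split, assuming the hypothetical 'Authentication-Key' header from earlier and that $query_string holds the API URL:
$ch = curl_init($query_string);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authentication-Key: foobar')); // custom request header
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$body = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($body !== false && $status == 200) {
    $xml = simplexml_load_string($body); // parse the already-fetched content
}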
Hope this helps.
Given a list of urls, I would like to check that each url:
Returns a 200 OK status code
Returns a response within X amount of time
The end goal is a system that is capable of flagging urls as potentially broken so that an administrator can review them.
The script will be written in PHP and will most likely run on a daily basis via cron.
The script will be processing approximately 1000 urls at a go.
Question has two parts:
Are there any big gotchas with an operation like this? What issues have you run into?
What is the best method for checking the status of a url in PHP considering both accuracy and performance?
Use the PHP cURL extension. Unlike fopen() it can also make HTTP HEAD requests, which are sufficient to check the availability of a URL and save you a ton of bandwidth, since you don't have to download the entire body of the page to check.
As a starting point you could use some function like this:
function is_available($url, $timeout = 30) {
    $ch = curl_init(); // get cURL handle
    // set cURL options
    $opts = array(
        CURLOPT_RETURNTRANSFER => true, // do not output to browser
        CURLOPT_URL => $url,            // set URL
        CURLOPT_NOBODY => true,         // do a HEAD request only
        CURLOPT_TIMEOUT => $timeout     // set timeout
    );
    curl_setopt_array($ch, $opts);
    curl_exec($ch); // do it!
    $retval = curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200; // check if HTTP OK
    curl_close($ch); // close handle
    return $retval;
}
However, there's a ton of possible optimizations: You might want to re-use the cURL instance and, if checking more than one URL per host, even re-use the connection.
Oh, and this code checks strictly for HTTP response code 200. It does not follow redirects (302) -- but there is also a cURL option for that.
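For example, following redirects could be handled by adding two options to the $opts array inside is_available() before the curl_setopt_array() call (a small, hedged sketch, not a complete redirect policy):
$opts[CURLOPT_FOLLOWLOCATION] = true; // follow 301/302 redirects to the final target
$opts[CURLOPT_MAXREDIRS] = 5;         // but give up after a few hops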
Look into cURL. There's a library for PHP.
There's also an executable version of cURL so you could even write the script in bash.
I actually wrote something in PHP that does this over a database of 5k+ URLs. I used the PEAR class HTTP_Request, which has a method called getResponseCode(). I just iterate over the URLs, passing them to getResponseCode and evaluate the response.
However, it doesn't work for FTP addresses, URLs that don't begin with http or https (unconfirmed, but I believe that's the case), or sites with invalid security certificates (a 0 is returned). Also, a 0 is returned for server-not-found (there's no status code for that).
And it's probably easier than cURL, as you only include a few files and use a single function to get an integer code back.
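Roughly, and assuming the classic PEAR HTTP_Request API described above (this is a sketch, not a drop-in script), the loop might look like this:
require_once 'HTTP/Request.php';
foreach ($urls as $url) {
    $req = new HTTP_Request($url);
    $req->sendRequest();
    $code = $req->getResponseCode(); // 0 for server-not-found or certificate problems
    if ($code != 200) {
        // flag $url for review
    }
}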
fopen() supports HTTP URIs.
If you need more flexibility (such as timeout), look into the cURL extension.
Seems like it might be a job for curl.
If you're not stuck on PHP, Perl's LWP might be an answer too.
You should also be aware of URLs returning 301 or 302 HTTP responses which redirect to another page. Generally this doesn't mean the link is invalid. For example, http://amazon.com returns 301 and redirects to http://www.amazon.com/.
Just returning a 200 response is not enough; many valid links will continue to return "200" after they change into porn / gambling portals when the former owner fails to renew.
Domain squatters typically ensure that every URL in their domains returns 200.
One potential problem you will undoubtedly run into is when the box this script runs on loses access to the Internet... you'll get 1000 false positives.
It would probably be better for your script to keep some type of history and only report a failure after 5 days of failure.
Also, the script should be self-checking in some way (like checking a known good web site [google?]) before continuing with the standard checks.
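One way to do that self-check, reusing the is_available() function from the earlier answer (the choice of google.com as the known-good site is just an example):
if (!is_available('http://www.google.com', 10)) {
    die("No Internet connectivity from this box - skipping this run\n");
}
// ...continue with the normal per-URL checks...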
You only need a bash script to do this. Please check my answer on a similar post here. It is a one-liner that reuses HTTP connections to dramatically improve speed, retries n times for temporary errors and follows redirects.