Here is my current code:
$SQL = mysql_query("SELECT url FROM urls") or die(mysql_error()); // Query the urls table
while ($resultSet = mysql_fetch_array($SQL)) { // Loop over each row of the urls table
    // Now for some cURL to run it.
    $ch = curl_init($resultSet['url']); // load the url
    curl_setopt($ch, CURLOPT_TIMEOUT, 2); // No need to wait for it to load. Execute it and go.
    curl_exec($ch); // Execute
    curl_close($ch); // Close it off
} // while loop
I'm relatively new to cURL. By relatively new, I mean this is my first time using cURL. Currently it loads one URL for two seconds, then loads the next one for two seconds, then the next. However, I want it to load ALL of them at the same time. I'm sure it's possible; I'm just unsure how. If someone could point me in the right direction, I'd appreciate it.
You set up each cURL handle in the same way, then add them to a curl_multi_ handle. The functions to look at are the curl_multi_* functions documented here. In my experience, though, there were issues with trying to load too many URLs at once (though I can't find my notes on it at the moment), so the last time I used curl_multi_, I set it up to do batches of 5 URLs at a time.
edit: Here is a reduced version of the code I have using curl_multi_:
edit: Slightly rewritten and lots of added comments, which hopefully will help.
// how many URLs to run at once (I used batches of 5)
define('BLOCK_SIZE', 5);

// -- create all the individual cURL handles and set their options
$curl_handles = array();
foreach ($urls as $url) {
    $curl_handles[$url] = curl_init();
    curl_setopt($curl_handles[$url], CURLOPT_URL, $url);
    // set other curl options here
}

// -- start going through the cURL handles and running them
$curl_multi_handle = curl_multi_init();
$i = 0;           // count where we are in the list so we can break up the runs into smaller blocks
$block = array(); // to accumulate the curl_handles for each group we'll run simultaneously

foreach ($curl_handles as $a_curl_handle) {
    $i++; // increment the position-counter

    // add the handle to the curl_multi_handle and to our tracking "block"
    curl_multi_add_handle($curl_multi_handle, $a_curl_handle);
    $block[] = $a_curl_handle;

    // -- check to see if we've got a "full block" to run or if we're at the end of our list of handles
    if (($i % BLOCK_SIZE == 0) or ($i == count($curl_handles))) {
        // -- run the block
        $running = NULL;
        do {
            // track the previous loop's number of handles still running so we can tell if it changes
            $running_before = $running;

            // run the block or check on the running block and get the number of sites still running in $running
            curl_multi_exec($curl_multi_handle, $running);

            // if the number of sites still running changed, print out a message with the number of sites that are still running.
            if ($running != $running_before) {
                echo("Waiting for $running sites to finish...\n");
            }
        } while ($running > 0);

        // -- once the number still running is 0, curl_multi_ is done, so check the results
        foreach ($block as $handle) {
            // HTTP response code
            $code = curl_getinfo($handle, CURLINFO_HTTP_CODE);

            // cURL error number
            $curl_errno = curl_errno($handle);

            // cURL error message
            $curl_error = curl_error($handle);

            // output if there was an error
            if ($curl_error) {
                echo(" *** cURL error: ($curl_errno) $curl_error\n");
            }

            // remove the (used) handle from the curl_multi_handle
            curl_multi_remove_handle($curl_multi_handle, $handle);
        }

        // reset the block to empty, since we've run its curl_handles
        $block = array();
    }
}

// close the curl_multi_handle once we're done
curl_multi_close($curl_multi_handle);
Given that you don't need anything back from the URLs, you probably don't need a lot of what's there, but this is how I chunked the requests into blocks of BLOCK_SIZE, waited for each block to run before moving on, and caught errors from cURL.
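If you really don't need anything back and just want every URL hit, a stripped-down version could look something like the sketch below. It's only a sketch: it assumes $urls holds the URL strings pulled from your table, and the 10-second timeout is arbitrary.
$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // keep the responses out of the output
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // arbitrary; give slow sites a chance to answer
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}
$running = null;
do {
    curl_multi_exec($mh, $running); // start/continue all transfers
    curl_multi_select($mh, 1.0);    // wait for activity instead of busy-looping
} while ($running > 0);
foreach ($handles as $ch) {
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);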
Related
Alright, so I have a script that uses a rolling cURL for multiple requests. For the past two days it has worked a couple of times, but then eventually it starts returning a 500 error. I try a regular curl_init and that works, so I know I'm getting a connection with the site I'm connecting to. And if I wait until tomorrow it will work again. So I imagine there is some leak or something going on. I can't figure out how to check 500 errors on GoDaddy. But is there a way I can check cURL to see what the issue is, or stop it from the infinite loop? Because it's not connecting. And just to clarify: the very same script works at first, but after a certain number of tries, it stops.
while (($execrun = curl_multi_exec($this->multi_handle, $running)) == CURLM_CALL_MULTI_PERFORM) {
    ;
}
if ($execrun != CURLM_OK) {
    break;
}
// not entering this loop
while ($info = curl_multi_info_read($this->multi_handle)) {
}
Edit: I'm doing about 30 requests at a time. It works very fast and works for the first 4-5 times I load the script. But then eventually it will just stop, and I'll have to wait until the next day. Here's the while loop where the only occurrences of closing the handles are. I've never run into this problem before and have no clue where to look. It just won't connect to the website and enter this loop.
while ($info = curl_multi_info_read($this->multi_handle)) {
    $ch = $info['handle'];
    $ch_array_key = (int)$ch;
    if (!isset($this->outstanding_requests[$ch_array_key])) {
        die("Error - handle wasn't found in requests: '$ch' in ".
            print_r($this->outstanding_requests, true));
    }
    $request   = $this->outstanding_requests[$ch_array_key];
    $url       = $request['url'];
    $content   = curl_multi_getcontent($ch);
    $callback  = $this->curl['CALLBACK'];
    $user_data = $request['user_data'];

    if (curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200) {
        call_user_func($callback, $content, $url, $ch, $user_data, $user_data);
        unset($this->outstanding_requests[$ch_array_key]);
        curl_multi_remove_handle($this->multi_handle, $ch);
        curl_close($ch);
    } else {
        // unset the outstanding request so it doesn't get stuck in a loop
        unset($this->outstanding_requests[$ch_array_key]);
        curl_multi_remove_handle($this->multi_handle, $ch);
        curl_close($ch);
        // these come back as 0's, so not found. Restart the request
        self::startRequest($url);
    }
    //self::msg('USER END');
}
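As an aside, curl_multi_info_read() also reports a per-transfer result code in $info['result'], which can help pin down why the transfers stop connecting. A minimal, illustrative sketch (the logging destination is up to you):
while ($info = curl_multi_info_read($this->multi_handle)) {
    $ch = $info['handle'];
    if ($info['result'] !== CURLE_OK) {
        // curl_strerror() (PHP 5.5+) turns the numeric cURL code into a readable message
        error_log('cURL error: ' . curl_strerror($info['result']) . ' for ' . curl_getinfo($ch, CURLINFO_EFFECTIVE_URL));
    }
    // ... existing handling of $ch goes here ...
}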
I have a website that pulls prices from an API. The problem is that if you send more than ~10 requests to this API in a short amount of time, your IP gets blocked temporarily (I'm not sure if this is just a problem on localhost or if it would also be a problem from the webserver; I assume the latter).
The request to the API returns a JSON object, which I then parse and store certain parts of into my database. There are about 300 or so entries in the database, so ~300 requests I need to make to this API.
I will end up having a cron job so that every x hours all of the prices are updated from the API. The job calls a PHP script I have that does all of the request and DB handling.
Is there a way to have the script send the requests over a longer period of time, rather than immediately? The problem I'm running into is that after ~20 or so requests the IP gets blocked, and the next 50 or so requests after that get no data returned.
I looked into sleep(), but read that it will just store the results in a buffer and wait, rather than wait after each request.
Here is the script that the cron job will be calling:
define('HTTP_NOT_FOUND', false);
define('HTTP_TIMEOUT', null);

function http_query($url, $timeout = 5000) {
    $curl = curl_init($url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_TIMEOUT_MS, $timeout);
    $text = curl_exec($curl);
    if ($text) {
        $code = curl_getinfo($curl, CURLINFO_HTTP_CODE);
        switch ($code) {
            case 200:
                return $text;
            case 404:
                return -1;
            default:
                return -1;
        }
    }
    return HTTP_TIMEOUT;
}

function getItemPrices($ID) {
    $t = time();
    $url = url_to_API; // placeholder for the real API URL
    $result = http_query($url, 5000);
    if ($result == -1) {
        return -1;
    } else {
        return json_decode($result)->price;
    }
}
connectToDB();
$result = mysql_query("SELECT * FROM prices") or die(mysql_error());
while ($row = mysql_fetch_array($result)) {
    $id = $row['id'];
    $updatedPrice = getItemPrices($id);
    .
    .
    echo $updatedPrice;
    . // here I am just trying to make sure I can get all ~300 prices without getting any php errors or the request failing (since the ip might be blocked)
    .
}
sleep() should not affect or buffer queries to the database. You can use ob_flush() if you need to print something immediately. Also make sure to set the max execution time with set_time_limit() so your script doesn't time out.
set_time_limit(600);

while ($row = mysql_fetch_array($result)) {
    $id = $row['id'];
    $updatedPrice = getItemPrices($id);
    .
    .
    echo $updatedPrice;
    // Sleep 1 second; use ob_flush() if necessary
    sleep(1);
    // You can also use usleep(...) to delay the script in microseconds
}
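If you are watching the output in a browser and want each price to appear as soon as it is fetched, flushing the output buffers after each echo is one option; whether it shows up immediately also depends on the web server's own buffering. A minimal sketch:
echo $updatedPrice . "<br/>\n";
if (ob_get_level() > 0) {
    ob_flush(); // flush PHP's output buffer, if one is active
}
flush();        // ask the web server to send the output now
sleep(1);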
I'm here again, learning more and more about PHP, but I still have some problems with my scenario. Most of it has been programmed and solved without problems, but I've found an issue, and to understand it, I need to explain it first:
I have a PHP script which can be invoked by any client. Its job is to receive a request and ping a proxy from a list I define manually, to find out whether the proxy is available. If it is available, I proceed to retrieve a response using cURL with a POST method. The logic is like this:
$proxyList = array('192.168.3.41:8013' => 0, '192.168.3.41:8023' => 0, '192.168.3.41:8033' => 0);
$errorCounter = 0;

foreach ($proxyList as $key => $value) {
    if (!isUrlAvailable($key)) { // It means it is NOT available so I count errors
        $errorCounter++;
    } else { // It means it is AVAILABLE
        $result = callThisProxy($key);
    }
}
The function "isUrlAvailable" uses a $fsockopen to know if the proxy is available. If not, I make a POST with CURL as mentioned before, the function has callThisProxy() something like:
$ch = curl_init($proxyUrl);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, 'xmlQuery=' . $rawXml);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$info = curl_exec($ch);
if ($isDebug) { echo 'Info in the moment: ' . $info . '<br/>'; }
curl_close($ch);
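For reference, an isUrlAvailable() along the lines described above might look roughly like the sketch below; it assumes the list entries are host:port strings, and the 5-second connect timeout is arbitrary.
function isUrlAvailable($hostAndPort) {
    list($host, $port) = explode(':', $hostAndPort);
    $errno = 0;
    $errstr = '';
    // Try to open a plain TCP connection to the proxy; give up after 5 seconds.
    $fp = @fsockopen($host, (int)$port, $errno, $errstr, 5);
    if ($fp === false) {
        return false; // could not connect, so treat the proxy as unavailable
    }
    fclose($fp);
    return true;
}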
But we're testing some scenarios: what happens if I turn off the proxy between the verification of its availability and the call? I mean:
foreach ($proxyList as $key => $value) {
    if (!isUrlAvailable($key)) { // It means it is NOT available so I count errors
        $errorCounter++;
    } else { // It means it is AVAILABLE
        $result = callThisProxy($key); // What happens if I kill the proxy while the result is being processed?
    }
}
I tested it, and when I do that, $result comes back as an empty string ''. The problem is that I lose that request, and my goal is to retry it with the next $key, which is another proxy. So I've been thinking of a do/while around the call, but I'm not sure whether that's OK or whether there's a better way to do it, so please help me with this issue. Thanks in advance for your time; any answer is welcome.
Maybe something like:
$result = "";
while ($result == "")
{
foreach ($proxyList as $key => $value)
{
if (!isUrlAvailable($key))
{
$errorCounter++;
}
else
{
$result = callThisProxy($key);
}
}
}
// Now check $result, which should contain the first successful callThisProxy()
// result, or nothing if none of the keys worked.
You could just keep a list of proxies that you still need to try. When you hit an error or get a valid response, remove the proxy from the list of proxies to try; if you do not get a good response, keep it in the list and try it again later.
$proxiesToTry = array_keys($proxyList); // work from the proxy addresses themselves
$i = 0;
while (count($proxiesToTry) != 0) {
    // reset to beginning of array
    if ($i >= count($proxiesToTry)) {
        $i = 0;
    }
    $proxy = $proxiesToTry[$i];
    if (!isUrlAvailable($proxy)) { // It means it is NOT available so I count errors
        $errorCounter++;
        unset($proxiesToTry[$i]);
        $proxiesToTry = array_values($proxiesToTry); // re-index so $i and count() stay in step
    } else { // It means it is AVAILABLE
        $result = callThisProxy($proxy);
        if ($result != "") { // If we got a response, remove it from the array of proxies to try.
            unset($proxiesToTry[$i]);
            $proxiesToTry = array_values($proxiesToTry); // re-index so $i and count() stay in step
        } else {
            $i++; // no response this time; leave it in the list and move on to the next proxy
        }
    }
}
NOTE: You will never break out of this loop if you don't ever get a valid response from some proxy.
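One way to guard against that, if it matters for your case, is to cap the total number of attempts. Here is a sketch of the same idea with such a cap; $maxAttempts is an arbitrary, hypothetical limit, and this version stops as soon as one proxy answers:
$proxiesToTry = array_keys($proxyList);
$maxAttempts  = 3 * count($proxiesToTry); // arbitrary: at most three tries per proxy on average
$attempts     = 0;
$result       = "";
while (count($proxiesToTry) != 0 && $attempts < $maxAttempts && $result == "") {
    $attempts++;
    $proxy = array_shift($proxiesToTry); // take the next proxy off the front of the list
    if (!isUrlAvailable($proxy)) {
        continue; // dead proxy: it simply stays removed from the list
    }
    $result = callThisProxy($proxy);
    if ($result == "") {
        $proxiesToTry[] = $proxy; // no response yet: push it back to retry later
    }
}
if ($result == "") {
    // every proxy failed or the attempt limit was hit; handle this case as appropriate
}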
How can I make a foreach or a for loop move on to the next iteration only when the cURL response has been received? For example:
for ($i = 1; $i <= 10; $i++) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "http://www.example.com");
    if (curl_exec($ch)) { // ?? - if request and data are completely received
        // ?? - go to the next loop
    }
    // DONT go to the next loop until the above data is complete or returns true
}
I don't want it to move to the next iteration without having received the current cURL request's data. One by one: basically it opens up the URL the first time, waits for the request data, and if something matched or came back true, then it goes to the next iteration.
You don't have to be bothered about the cURL part; I just want the loop to move one by one (given a specific condition or something) and not all at once.
The loop ought to already work that way, since you're using the blocking cURL interface and not the cURL multi interface.
$ch = curl_init();

for ($i = 1; $i <= 10; $i++)
{
    curl_setopt($ch, CURLOPT_URL, "http://www.example.com");
    $res = curl_exec($ch);

    // Code checking $res is not false, or, if you returned the page
    // into $res, code to check $res is as expected

    // If you're here, the cURL call completed. To know if it did so successfully
    // or not, check $res or the cURL error status.

    // Removing the examples below, this code will always hit the same site
    // ten times, one after the other.

    // Example
    if ($res === false) // something is wrong
    {
        continue;       // Continue with the next iteration
    }
    // Here, extra code to be executed if the call was *successful*

    // A different example
    // if (something is wrong)
    //     break;       // exit the loop immediately, aborting the next iterations

    sleep(1); // Wait 1 second before retrying
}
curl_close($ch);
Your code (as is) will not move to the next iteration until the curl call is completed.
A few issues to consider:
You could set higher timeouts for cURL to accommodate communication delays. CURLOPT_CONNECTTIMEOUT, CURLOPT_CONNECTTIMEOUT_MS (milliseconds), CURLOPT_DNS_CACHE_TIMEOUT, CURLOPT_TIMEOUT and CURLOPT_TIMEOUT_MS (milliseconds) can be used to increase the timeouts. A value of 0 makes cURL wait indefinitely for any of these timeouts.
If your cURL request fails for whatever reason, you can just put an exit there to stop execution. This way it will not move on to the next URL.
If you want the script to continue even after the first failure, you can just log the result (after the failed request) and let it continue in the loop. Examining the log file will give you information as to what happened.
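For example, here is a sketch combining both points, more generous timeouts plus logging a failure and moving on; the timeout values and the log file name are illustrative, and $url stands for whatever address you are fetching:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30); // allow up to 30 seconds to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 120);       // allow up to 2 minutes for the whole transfer
$res = curl_exec($ch);
if ($res === false) {
    // log the failure and carry on with the next iteration instead of aborting
    error_log(date('c') . " cURL failed for $url: " . curl_error($ch) . "\n", 3, 'curl-errors.log');
}
curl_close($ch);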
The continue control structure should be what you are looking for:
continue is used within looping structures to skip the rest of the current loop iteration and continue execution at the condition evaluation and then the beginning of the next iteration.
http://php.net/manual/en/control-structures.continue.php
for ($i = 1; $i <= 10; $i++) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "http://www.example.com");
    if (curl_exec($ch)) { // ?? - if request and data are completely received
        continue; // ?? - go to the next loop
    }
    // DONT go to the next loop until the above data is complete or returns true
}
You can break out of a loop with the break keyword:
foreach ($list as $thing) {
    if ($success) {
        // ...
    } else {
        break;
    }
}
for ($i = 1; $i <= 10; $i++) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "http://www.example.com");
    if (curl_exec($ch)) { // ?? - if request and data are completely received
        continue;
    } else {
        break;
    }
    // DONT go to the next loop until the above data is complete or returns true
}
or
for ($i = 1; $i <= 10; $i++) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "http://www.example.com");
    if (curl_exec($ch) === false) { // the request failed, so stop
        break;
    }
}
I have around 295 domains to check to see if they contain files in their public_html directories. Currently I am using the PHP FTP functions, but the script takes around 10 minutes to complete. I am trying to shorten this time; what methods could I use to achieve this?
Here is my PHP code
<?php
foreach ($ftpdata as $val) {
    if (empty($val['ftp_url'])) {
        echo "<p>There is no URL provided</p>";
    }
    if (empty($val['ftp_username'])) {
        echo "<p>The site ".$val['ftp_url']." doesn't have a username</p>";
    }
    if (empty($val['ftp_password'])) {
        echo "<p>The site ".$val['ftp_url']." doesn't have a password</p>";
    }
    if ($val['ftp_url'] != NULL && $val['ftp_password'] != NULL && $val['ftp_username'] != NULL) {
        $conn_id = @ftp_connect("ftp.".$val['ftp_url']);
        if ($conn_id == false) {
            echo "<p><br/><br/><span>".$val['ftp_url']." isn't live</span></p>";
        }
        else {
            $login_result = ftp_login($conn_id, $val['ftp_username'], $val['ftp_password']);
            ftp_chdir($conn_id, "public_html");
            $contents = ftp_nlist($conn_id, ".");
            if (count($contents) > 3) {
                echo "<p><span class='green'>".$val['ftp_url']." is live</span></p>";
            }
            else {
                echo "<p><br/><br/><span>".$val['ftp_url']." isn't live</span></p>";
            }
        }
    }
}
?>
If it is a publicly available file, you can use file_get_contents() to try to grab it. If it is successful, you know it is there; if it fails, then it is not. You don't need to download the entire file, just limit it to a small number of characters so it's fast and doesn't waste bandwidth.
$page = file_get_contents($url, false, null, 0, 100); // read at most the first 100 bytes
if ($page !== false)
{
    // it exists
}
Use cURL. With the CURLOPT_NOBODY option set to true, the request method is set to HEAD and no body is transferred.
<?php
// create a new cURL resource
$ch = curl_init();
// set URL and other appropriate options
curl_setopt($ch, CURLOPT_URL, "http://google.com/images/srpr/logo3w.png"); //for example google logo
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
//get content
$content = curl_exec($ch);
// close
curl_close($ch);
//work with result
var_dump($content);
?>
If the output contains "HTTP/1.1 200 OK", then the file/resource exists.
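Alternatively, you can read the status code directly with curl_getinfo() instead of searching the header text (it has to be called before curl_close($ch)):
$code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
if ($code == 200) {
    // the file/resource exists
}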
PS. Try to use curl_multi_*. It's very fast.
M, this is really just an explanation of AlexeyKa's answer. The reason for your scan taking 10 minutes is that you are serialising some 300 network transactions, each of which takes roughly 2 seconds on average, and 300 x 2s gives you your total 10 min elapsed time.
The various approaches such as requesting a header and no body can trim the per-transaction cost, but the killer is that you are still running your queries one at a time. What the curl_multi_* routines allow you to do is run batches in parallel, say 30 batches of 10, taking closer to a minute. Scanning through the PHP documentation's user-contributed notes gives this post, which explains how to set this up:
Executing multiple curl requests in parallel with PHP and curl_multi_exec.
The other option (if you are using php-cli) is simply to kick off, say, ten batch threads, each one much like your current code but with its own sublist of one tenth of the sites to check.
Since either approach is largely latency-bound rather than link-capacity-bound, the time should fall by roughly the same factor.
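For illustration, here is a rough sketch of the batching idea using curl_multi_* together with the HEAD-style check from AlexeyKa's answer; $sites stands for your list of ~300 URLs, and the batch size of 10 is arbitrary:
$batchSize = 10; // illustrative batch size
foreach (array_chunk($sites, $batchSize) as $batch) {
    $mh = curl_multi_init();
    $handles = array();
    foreach ($batch as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_NOBODY, true);         // HEAD-style request, no body transferred
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch);
        $handles[$url] = $ch;
    }
    $running = null;
    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh, 1.0); // wait for activity instead of spinning
    } while ($running > 0);
    foreach ($handles as $url => $ch) {
        $ok = (curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200);
        echo $url . ($ok ? " is live\n" : " isn't live\n");
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
}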