What are the best options for CURLOPT_ENCODING in PHP?

I am facing an issue when fetching data from a big file containing many web links.
In my code I use:
curl_setopt($ch, CURLOPT_ENCODING, 'gzip');
I added it to speed up the cURL requests. I tested my code on a few links and it works fine, but with the big file I get bad results.
As I understand it, this option asks the server to compress the data during retrieval, which is why I used it to make the requests faster.
Could it cause errors in the retrieved data, for example when a site does not use gzip?
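(For reference, a minimal sketch of the commonly recommended usage of this option: passing an empty string lets libcurl advertise every encoding it supports and decode the response automatically. The URL below is only a placeholder.)
$ch = curl_init('http://example.com/'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// '' = accept any encoding libcurl supports and decompress automatically
curl_setopt($ch, CURLOPT_ENCODING, '');
$body = curl_exec($ch);
if ($body === false) {
    echo 'cURL error: ' . curl_error($ch) . "\n";
}
curl_close($ch);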
My code also uses pcntl_fork() with no sleep time; maybe the problem comes from the fork?
$pid = @pcntl_fork();
$execute++;
if ($execute >= $maxproc)
{
    while (pcntl_waitpid(0, $status) != -1)
    {
        $status = pcntl_wexitstatus($status);
        $execute = 0;
        //usleep(250000);
        //sleep(1);
        //echo " [$ipkey] Child $status completed\n";
    }
}
if (!$pid)
{
    // child process: do the cURL work here, then terminate
    exit;
}
Does anyone have an idea where the problem is?

Related

PHP: Wait before sending next API request

I have a website that pulls prices from an API. The problem is that if you send more than ~10 requests to this API in a short amount of time, your IP gets blocked temporarily (I'm not sure if this is just a problem on localhost or if it would also be a problem from the webserver; I assume the latter).
The request to the API returns a JSON object, which I then parse and store parts of into my database. There are about 300 entries in the database, so I need to make ~300 requests to this API.
I will end up having a cron job that updates all of the prices from the API every x hours. The job calls a PHP script I have that does all of the request and DB handling.
Is there a way to have the script send the requests over a longer period of time, rather than all at once? The problem I'm running into is that after ~20 or so requests the IP gets blocked, and the next 50 or so requests after that return no data.
I looked into sleep(), but read that it will just store the results in a buffer and wait, rather than waiting after each request.
Here is the script that the cron job will be calling:
define('HTTP_NOT_FOUND', false);
define('HTTP_TIMEOUT', null);

function http_query($url, $timeout = 5000) {
    $curl = curl_init($url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_TIMEOUT_MS, $timeout);
    $text = curl_exec($curl);
    if ($text) {
        $code = curl_getinfo($curl, CURLINFO_HTTP_CODE);
        switch ($code) {
            case 200:
                return $text;
            case 404:
                return -1;
            default:
                return -1;
        }
    }
    return HTTP_TIMEOUT;
}

function getPrices($ID) {
    $t = time();
    $url = url_to_API;
    $result = http_query($url, 5000);
    if ($result == -1) {
        return -1;
    } else {
        return json_decode($result)->price;
    }
}

connectToDB();
$result = mysql_query("SELECT * FROM prices") or die(mysql_error());
while ($row = mysql_fetch_array($result)) {
    $id = $row['id'];
    $updatedPrice = getItemPrices($id);
    .
    .
    echo $updatedPrice;
    . // here I am just trying to make sure I can get all ~300 prices without getting any php errors or the request failing (since the ip might be blocked)
    .
}
sleep() should not affect or buffer queries to the database. You can use ob_flush() if you need to print something immediately. Also make sure to set the max execution time with set_time_limit() so your script doesn't time out.
set_time_limit(600);
while ($row = mysql_fetch_array($result)) {
    $id = $row['id'];
    $updatedPrice = getItemPrices($id);
    .
    .
    echo $updatedPrice;
    // Sleep 1 second; use ob_flush() if necessary
    sleep(1);
    // You can also use usleep(...) to delay the script in microseconds
}

Retry a statement inside an array loop in PHP but with the next value of the array

I'm here again, learning more and more about PHP, but I still have some problems with my scenario. Most of it has been programmed and solved without problems, but I ran into an issue; to understand it, I need to explain it first:
I have a PHP script that can be invoked by any client. Its job is to receive a request and ping a proxy from a list which I define manually, to know whether the proxy is available; if it is available, I proceed to retrieve a response using cURL with a POST method. The logic is like this:
$proxyList = array('192.168.3.41:8013' => 0, '192.168.3.41:8023' => 0, '192.168.3.41:8033' => 0);
$errorCounter = 0;
foreach ($proxyList as $key => $value) {
    if (!isUrlAvailable($key)) { // it is NOT available, so I count errors
        $errorCounter++;
    } else { // it is AVAILABLE
        $result = callThisProxy($key);
    }
}
The function "isUrlAvailable" uses a $fsockopen to know if the proxy is available. If not, I make a POST with CURL as mentioned before, the function has callThisProxy() something like:
$ch = curl_init($proxyUrl);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, 'xmlQuery=' . $rawXml);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$info = curl_exec($ch);
if ($isDebug) { echo 'Info in the moment: ' . $info . '<br/>'; }
curl_close($ch);
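For completeness, here is a minimal sketch of what an fsockopen()-based isUrlAvailable() might look like; the host/port parsing, the 2-second timeout, and the function body are assumptions for illustration, not the asker's actual code:
function isUrlAvailable($proxy) {
    // $proxy is assumed to be in "host:port" form, e.g. '192.168.3.41:8013'
    list($host, $port) = explode(':', $proxy);
    $fp = @fsockopen($host, (int)$port, $errno, $errstr, 2); // 2-second connect timeout
    if ($fp === false) {
        return false; // could not connect, treat as unavailable
    }
    fclose($fp);
    return true;
}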
But we're testing some scenarios: what happens if I turn off the proxy between the verification of its availability and the call? I mean:
foreach ($proxyList as $key => $value) {
    if (!isUrlAvailable($key)) { // it is NOT available, so I count errors
        $errorCounter++;
    } else { // it is AVAILABLE
        $result = callThisProxy($key); // what happens if I kill the proxy while the result is being processed?
    }
}
I tested it, and when I do that, $result comes back as an empty string ''. The problem is that I lose that request, and my goal is to retry it with the next $key, which is another proxy. I've been thinking of a do/while around the call, but I'm not sure whether that is OK or whether there's a better way to do it, so please help me with this issue. Thanks in advance for your time; any answer is welcome.
Maybe something like:
$result = "";
while ($result == "")
{
foreach ($proxyList as $key => $value)
{
if (!isUrlAvailable($key))
{
$errorCounter++;
}
else
{
$result = callThisProxy($key);
}
}
}
// Now check $result, which should contain the first successful callThisProxy()
// result, or nothing if none of the keys worked.
You could just keep a list of proxies that you still need to try. When you hit an error or get a valid response, remove the proxy from that list. If you do not get a good response, keep it in the list and try it again later.
// Work from a plain list of proxy addresses that we still need to try.
$proxiesToTry = array_keys($proxyList);
$i = 0;
while (count($proxiesToTry) != 0) {
    // reset to the beginning of the list
    if ($i >= count($proxiesToTry)) {
        $i = 0;
    }
    $proxy = $proxiesToTry[$i];
    if (!isUrlAvailable($proxy)) { // it is NOT available, so count the error and drop it
        $errorCounter++;
        unset($proxiesToTry[$i]);
        $proxiesToTry = array_values($proxiesToTry); // re-index after removal
    } else { // it is AVAILABLE
        $result = callThisProxy($proxy);
        if ($result != "") { // we got a response, so remove it from the list of proxies to try
            unset($proxiesToTry[$i]);
            $proxiesToTry = array_values($proxiesToTry); // re-index after removal
        } else {
            $i++;
        }
    }
}
NOTE: You will never break out of this loop if you don't ever get a valid response from some proxy.
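If you want a hard stop instead of looping forever, here is a small sketch of an alternative shape for the loop that caps the number of passes; $maxPasses and $passes are names introduced here for illustration and are not part of the original answer:
$maxPasses = 5; // give up after this many full passes over the remaining proxies
$passes = 0;
$result = "";
while (count($proxiesToTry) != 0 && $passes < $maxPasses && $result == "") {
    foreach ($proxiesToTry as $idx => $proxy) {
        if (!isUrlAvailable($proxy)) {
            $errorCounter++;
            unset($proxiesToTry[$idx]); // drop dead proxies for good
            continue;
        }
        $result = callThisProxy($proxy);
        if ($result != "") {
            break; // first good response wins
        }
    }
    $passes++;
}
// $result is "" here only if every proxy failed within $maxPasses passes.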

Faster methods than PHP FTP functions for checking if files are in public html directory

I have around 295 domains to check to see whether they contain files in their public_html directories. Currently I am using the PHP FTP functions, but the script takes around 10 minutes to complete. I am trying to shorten this time; what methods could I use to achieve this?
Here is my PHP code:
<?php
foreach ($ftpdata as $val) {
    if (empty($val['ftp_url'])) {
        echo "<p>There is no URL provided</p>";
    }
    if (empty($val['ftp_username'])) {
        echo "<p>The site ".$val['ftp_url']." doesn't have a username</p>";
    }
    if (empty($val['ftp_password'])) {
        echo "<p>The site ".$val['ftp_url']." doesn't have a password</p>";
    }
    if ($val['ftp_url'] != NULL && $val['ftp_password'] != NULL && $val['ftp_username'] != NULL) {
        $conn_id = @ftp_connect("ftp.".$val['ftp_url']);
        if ($conn_id == false) {
            echo "<p><br/><br/><span>".$val['ftp_url']." isn't live</span></p>";
        }
        else {
            $login_result = ftp_login($conn_id, $val['ftp_username'], $val['ftp_password']);
            ftp_chdir($conn_id, "public_html");
            $contents = ftp_nlist($conn_id, ".");
            if (count($contents) > 3) {
                echo "<p><span class='green'>".$val['ftp_url']." is live</span></p>";
            }
            else {
                echo "<p><br/><br/><span>".$val['ftp_url']." isn't live</span></p>";
            }
        }
    }
}
?>
If it is a publicly available file, you can use file_get_contents() to try to grab it. If it succeeds, you know the file is there; if it fails, it is not. You don't need to download the entire file; just limit it to a small number of characters so it's fast and doesn't waste bandwidth.
$page = file_get_contents($url, NULL, NULL, 0, 100);
if ($page !== false)
{
// it exists
}
Use cURL. With the option CURLOPT_NOBODY set to true, the request method is set to HEAD and the body is not transferred.
<?php
// create a new cURL resource
$ch = curl_init();
// set URL and other appropriate options
curl_setopt($ch, CURLOPT_URL, "http://google.com/images/srpr/logo3w.png"); //for example google logo
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
//get content
$content = curl_exec($ch);
// close
curl_close($ch);
//work with result
var_dump($content);
?>
If the output contains "HTTP/1.1 200 OK", then the file/resource exists.
PS. Try to use curl_multi_*. It's very fast.
M, this is really just an explanation of AlexeyKa's answer. The reason your scan takes 10 minutes is that you are serialising some 300 network transactions, each of which takes roughly 2 seconds on average, and 300 x 2 s gives you your total 10 min elapsed time.
The various approaches such as requesting a header and no body can trim the per-transaction cost, but the killer is that you are still running your queries one at a time. What the curl_multi_* routines allow you to do is run batches in parallel, say 30 batches of 10, taking closer to 30 s. Scanning through the PHP documentation's user-contributed notes gives this post, which explains how to set this up:
Executing multiple curl requests in parallel with PHP and curl_multi_exec.
The other option (if you are using php-cli) is simply to kick off, say, ten batch processes, each much like your current code but with its own sublist of one tenth of the sites to check.
Since either approach is largely latency-bound rather than link-capacity-bound, the time should fall by roughly the same factor.
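As a rough illustration of the batching idea (a sketch only; the batch size, the $domains array, and the live/not-live test by HTTP status code are assumptions, not the answerer's code):
$domains = array(/* ...the ~295 URLs to check... */);
$batchSize = 10;

foreach (array_chunk($domains, $batchSize) as $batch) {
    $mh = curl_multi_init();
    $handles = array();
    foreach ($batch as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_NOBODY, true); // HEAD request, no body
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 5);
        curl_multi_add_handle($mh, $ch);
        $handles[$url] = $ch;
    }
    // run the whole batch in parallel
    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh);
    } while ($running > 0);
    foreach ($handles as $url => $ch) {
        $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        echo $url . ($code == 200 ? " is live\n" : " isn't live\n");
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
}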

Cache using PHP cURL

I'm using PHP cURL to fetch information from another website and insert it into my page. I was wondering if it was possible to have the fetched information cached on my server? For example, when a visitor requests a page, the information is fetched and cached on my server for 24 hours. The page is then entirely served locally for 24 hours. When the 24 hours expire, the information is again fetched and cached when another visitor requests it, in the same way.
The code I am currently using to fetch the information is as follows:
$url = $fullURL;
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_URL, $url);
$result = curl_exec($ch);
curl_close($ch);
echo $result;
Is this possible? Thanks.
You need to write or download a PHP caching library (like the Extensible PHP Caching Library or similar) and adjust your current code to check the cache first.
Let's say your cache library has 2 functions called:
save_cache($result, $cache_key, $timestamp)
and
get_cache($cache_key, $timestamp)
With save_cache() you will save the $result into the cache and with get_cache() you will retrieve the data.
$cache_key would be md5($fullURL), a unique identifier for the caching library to know what you want to retrieve.
$timestamp is the number of minutes/hours you want the cache to be valid for, depending on what your caching library accepts.
Now on your code you can have a logic like:
$cache_key = md5($fullURL);
$timestamp = 24; // assuming your caching library accepts hours as the timestamp
$result = get_cache($cache_key, $timestamp);
if (!$result) {
    echo "This url is NOT cached, let's get it and cache it";
    // do the curl and get $result
    // save the cache:
    save_cache($result, $cache_key, $timestamp);
} else {
    echo "This url is cached";
}
echo $result;
You can cache it using memcache (or a session), you can cache it using files on your server, and you can cache it using a database such as MySQL; a Memcached sketch follows the file-based example below.
file_put_contents("cache/cachedata.txt",$data);
You will need to set the permissions of the folder you want to write the files to, otherwise you might get some errors.
Then if you want to read from the cache:
if( file_exists("cache/cachedata.txt") )
{ $data = file_get_contents("cache/cachedate.txt"); }
else
{ // curl here, we have no cache
}
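A minimal Memcached sketch along the same lines (this assumes the memcached extension and a server on localhost:11211; the key name and 24-hour TTL are illustrative):
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$data = $mc->get('remote_page'); // returns false on a miss
if ($data === false) {
    // curl here, we have no cache; put the fetched body into $data, then:
    $mc->set('remote_page', $data, 24 * 3600); // keep it for 24 hours
}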
Honza's suggestion to use Nette cache worked great for me, and here's the code I wrote to use it. My function returns the HTTP result if it worked, false if not. You'll have to change some path strings.
use Nette\Caching\Cache;
use Nette\Caching\Storages\FileStorage;

require("/Nette/loader.php");

function cached_httpGet($url) {
    $storage = new FileStorage("/nette-cache");
    $cache = new Cache($storage);
    $result = $cache->load($url);
    if ($result) {
        echo "Cached: $url";
    }
    else {
        echo "Fetching: $url";
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_URL, $url);
        $result = curl_exec($ch);
        if (curl_errno($ch)) {
            echo "ERROR " . curl_error($ch) . " loading: $url";
            return false;
        } else {
            $cache->save($url, $result, array(Cache::EXPIRE => '1 day'));
        }
        curl_close($ch);
    }
    return $result;
}
Use Nette Cache. It is all the solution you need: simple to use and, of course, thread-safe.
If you've got nothing against file system access, you could just store it in a file. Then maybe use a script on the server that checks the file's timestamp against the current time and deletes it if it's too old.
If you don't have access to all aspects of the server you could just use the above idea and store a timestamp with the info. Every time the page is requested check against the timestamp.
And if you're having problems with the fs bottlenecking, you could use a MySQL database stored entirely in RAM.
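A minimal sketch of that timestamp idea (the cache path, the 24-hour TTL, and the reuse of $fullURL from the question are illustrative assumptions):
$cacheFile = __DIR__ . '/cache/remote.html';
$ttl = 24 * 3600; // 24 hours

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
    $result = file_get_contents($cacheFile); // still fresh, serve the cached copy
} else {
    $ch = curl_init($fullURL);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $result = curl_exec($ch);
    curl_close($ch);
    if ($result !== false) {
        file_put_contents($cacheFile, $result); // refresh the cache on disk
    }
}
echo $result;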
I made a pretty cool, simple function to store the data fetched by your cURL call for 1 hour or 1 day, based on Antwan van Houdt's comment (shout out to him). First create a folder named "zcache" in public_html and make sure its permissions are set to 755.
1 hour:
if (file_exists('./zcache/zcache-'.date("Y-m-d-H").'.html')) {
    $result = file_get_contents('./zcache/zcache-'.date("Y-m-d-H").'.html');
} else {
    // put your curl here
    file_put_contents('./zcache/zcache-'.date("Y-m-d-H").'.html', $result);
}
1 day:
if (file_exists('./zcache/zcache-'.date("Y-m-d").'.html')) {
    $result = file_get_contents('./zcache/zcache-'.date("Y-m-d").'.html');
} else {
    // put your curl here
    file_put_contents('./zcache/zcache-'.date("Y-m-d").'.html', $result);
}
You are welcome.
The best way to avoid caching is to append the time or any other random element to the URL, like this:
$url .= '?ts=' . time();
so for example instead of having
http://example.com/content.php
you would have
http://example.com/content.php?ts=1212434353

How can I use cURL to open multiple URLs simultaneously with PHP?

Here is my current code:
$SQL = mysql_query("SELECT url FROM urls") or die(mysql_error()); // query the urls table
while ($resultSet = mysql_fetch_array($SQL)) { // loop over each url row
    // Now for some cURL to run it.
    $ch = curl_init($resultSet['url']); // load the url
    curl_setopt($ch, CURLOPT_TIMEOUT, 2); // no need to wait for it to load; execute it and go
    curl_exec($ch); // execute
    curl_close($ch); // close it off
} // while loop
I'm relatively new to cURL. By relatively new, I mean this is my first time using cURL. Currently it loads one URL for two seconds, then the next one for two seconds, then the next. However, I want to make it load ALL of them at the same time. I'm sure it's possible, I'm just unsure how. If someone could point me in the right direction, I'd appreciate it.
You set up each cURL handle in the same way, then add them to a curl_multi_ handle. The functions to look at are the curl_multi_* functions documented here. In my experience, though, there were issues with trying to load too many URLs at once (though I can't find my notes on it at the moment), so the last time I used curl_multi_, I set it up to do batches of 5 URLs at a time.
edit: Here is a reduced version of the code I have using curl_multi_:
edit: Slightly rewritten and lots of added comments, which hopefully will help.
// how many URLs to run simultaneously in each batch (the text above used batches of 5)
define('BLOCK_SIZE', 5);

// -- create all the individual cURL handles and set their options
$curl_handles = array();
foreach ($urls as $url) {
    $curl_handles[$url] = curl_init();
    curl_setopt($curl_handles[$url], CURLOPT_URL, $url);
    // set other curl options here
}

// -- start going through the cURL handles and running them
$curl_multi_handle = curl_multi_init();
$i = 0; // count where we are in the list so we can break up the runs into smaller blocks
$block = array(); // to accumulate the curl_handles for each group we'll run simultaneously

foreach ($curl_handles as $a_curl_handle) {
    $i++; // increment the position-counter

    // add the handle to the curl_multi_handle and to our tracking "block"
    curl_multi_add_handle($curl_multi_handle, $a_curl_handle);
    $block[] = $a_curl_handle;

    // -- check to see if we've got a "full block" to run or if we're at the end of our list of handles
    if (($i % BLOCK_SIZE == 0) or ($i == count($curl_handles))) {
        // -- run the block
        $running = NULL;
        do {
            // track the previous loop's number of handles still running so we can tell if it changes
            $running_before = $running;

            // run the block or check on the running block and get the number of sites still running in $running
            curl_multi_exec($curl_multi_handle, $running);

            // if the number of sites still running changed, print out a message with the number of sites that are still running.
            if ($running != $running_before) {
                echo("Waiting for $running sites to finish...\n");
            }
        } while ($running > 0);

        // -- once the number still running is 0, curl_multi_ is done, so check the results
        foreach ($block as $handle) {
            // HTTP response code
            $code = curl_getinfo($handle, CURLINFO_HTTP_CODE);

            // cURL error number
            $curl_errno = curl_errno($handle);

            // cURL error message
            $curl_error = curl_error($handle);

            // output if there was an error
            if ($curl_error) {
                echo(" *** cURL error: ($curl_errno) $curl_error\n");
            }

            // remove the (used) handle from the curl_multi_handle
            curl_multi_remove_handle($curl_multi_handle, $handle);
        }

        // reset the block to empty, since we've run its curl_handles
        $block = array();
    }
}

// close the curl_multi_handle once we're done
curl_multi_close($curl_multi_handle);
Given that you don't need anything back from the URLs, you probably don't need a lot of what's there, but this is how I chunked the requests into blocks of BLOCK_SIZE, waited for each block to run before moving on, and caught errors from cURL.
