I have an article on my website with the (markdown) content:
# PHP Proper Class Name
Class names in PHP are case insensitive. If you have a class declaration like:
```php
class MyWeirdClass {}
```
you can instantiate it with `new myWEIRDclaSS()` or any other variation on the case. In some instances, you may want to know the correct, case-sensitive class name.
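To make that concrete, here's a quick sketch (assuming the `MyWeirdClass` declaration above is loaded):
```php
$a = new myweirdclass();   // works
$b = new MYWEIRDCLASS();   // also works

var_dump(get_class($a));   // string(12) "MyWeirdClass"
var_dump(get_class($b));   // string(12) "MyWeirdClass"
```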
### Example Use case
For example, in one of my libraries under construction, [API Doccer](https://github.com/ReedOverflow/PHP-API-Doccer), I can view documentation for a class at the url `/doc/class/My-Namespace-Clazzy/`, and if you enter the wrong case, like `/doc/class/my-NAMESPACE-CLAzzy`, it should automatically redirect to the properly-cased class. To do this, I use the reflection method below, as it is FAR more performant than the `get_declared_classes` method.
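A rough sketch of that redirect logic (the `$urlSegment` variable and the dash-to-backslash mapping here are simplified placeholders, not the actual Doccer code):
```php
// e.g. 'my-NAMESPACE-CLAzzy' -> 'my\NAMESPACE\CLAzzy'
$requested = str_replace('-', '\\', trim($urlSegment, '/'));

try {
    $proper = (new ReflectionClass($requested))->getName(); // properly-cased name
} catch (ReflectionException $e) {
    http_response_code(404); // no such class under any casing
    exit;
}

if ($proper !== $requested) {
    // redirect to the canonical, properly-cased URL
    header('Location: /doc/class/' . str_replace('\\', '-', $proper) . '/', true, 301);
    exit;
}
```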
## Reflection - get proper case
Credit goes to [l00k on StackOverflow](https://stackoverflow.com/a/35222911/802469)
```php
$className = 'My\caseINAccuRATE\CLassNamE';
$reflection = new ReflectionClass($className);
echo $reflection->getName();
```
results in `My\CaseInaccurate\ClassName`;
Running the benchmark (see below) on localhost on my laptop, getting the proper case class name of 500 classes took about 0.015 seconds, as opposed to ~0.050 seconds using the `get_declared_classes` method below.
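If you want this as a reusable helper, a minimal sketch could look like this (the function name is just for illustration, and it assumes the class can be autoloaded):
```php
function properClassName(string $className): ?string
{
    try {
        return (new ReflectionClass($className))->getName();
    } catch (ReflectionException $e) {
        return null; // no such class under any casing
    }
}

var_dump(properClassName('My\caseINAccuRATE\CLassNamE')); // "My\CaseInaccurate\ClassName"
```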
## get_declared_classes - get proper case
This was my idea, as I hadn't even considered using reflection, until I saw [l00k's answer on StackOverflow](https://stackoverflow.com/a/35222911/802469). Guessing it would be less efficient, I wrote the code and figured it out anyway, because it's fun!
```php
$wrongCaseName = 'Some\classy\THIng';
class_exists($wrongCaseName); //so it gets autoloaded if not already done
$classes = get_declared_classes();
$map = array_combine(array_map('strtolower',$classes),$classes);
$proper = $map[strtolower($wrongCaseName)];
```
results in `$proper = 'Some\Classy\Thing'`;
Running the benchmark (see below) on localhost on my laptop, getting the proper case class name of 500 classes took about 0.050 seconds, as opposed to ~0.015 seconds with reflection (above).
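Wrapped up the same way, a sketch of this approach (again, the function name is just for illustration, and autoloading is assumed):
```php
function properClassNameViaDeclared(string $wrongCaseName): ?string
{
    class_exists($wrongCaseName); // so it gets autoloaded if not already done
    $classes = get_declared_classes();
    $map = array_combine(array_map('strtolower', $classes), $classes);

    return $map[strtolower($wrongCaseName)] ?? null;
}

var_dump(properClassNameViaDeclared('Some\classy\THIng')); // "Some\Classy\Thing"
```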
## Benchmark:
I used the following code to do the benchmark, removing the `classes` directory between each run of the benchmark. It's not perfect. At all. But it gets the job done well enough, I think:
```php
<?php
$times = [];
$times['begin'] = microtime(TRUE);
spl_autoload_register(function($className){
    if (file_exists($name = __DIR__.'/classes/'.strtolower($className).'.php')){
        include($name);
    }
});
if (is_dir(__DIR__.'/classes'))return;
mkdir(__DIR__.'/classes');
function generateRandomString($length = 10) {
    $characters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
    $charactersLength = strlen($characters);
    $randomString = '';
    for ($i = 0; $i < $length; $i++) {
        $randomString .= $characters[rand(0, $charactersLength - 1)];
    }
    return $randomString;
}
$times['start_file_write'] = microtime(TRUE);
$names = [];
for ($i = 0; $i < 500; $i++){
    $className = generateRandomString(10);
    $file = __DIR__.'/classes/'.strtolower($className).'.php';
    if (file_exists($file)){
        $i = $i - 1;
        continue;
    }
    $code = "<?php \n\n".'class '.$className.' {}'."\n\n ?>";
    file_put_contents($file, $code);
    $names[] = strtoupper($className);
}
$times['begin_get_declared_classes_benchmark'] = microtime(TRUE);
$propers = [];
// foreach($names as $index => $name){
//     $wrongCaseName = strtoupper($name);
//     class_exists($wrongCaseName); //so it gets autoloaded if not already done
//     $classes = get_declared_classes();
//     $map = array_combine(array_map('strtolower',$classes),$classes);
//     $proper = $map[strtolower($wrongCaseName)];
//     if ($index%20===0){
//         $times['intermediate_bench_'.$index] = microtime(TRUE);
//     }
//     $propers[] = $proper;
// }
// the above commented lines are the get_declared_classes() method.
// the foreach below is for reflection.
foreach ($names as $index => $name){
    $className = strtoupper($name);
    $reflection = new ReflectionClass($className);
    if ($index % 20 === 0){
        $times['intermediate_bench_'.$index] = microtime(TRUE);
    }
    $propers[] = $reflection->getName();
}
$times['end_get_declared_classes_benchmark'] = microtime(TRUE);
$start = $times['begin'];
$bench = $times['begin_get_declared_classes_benchmark'];
$lastTime = 0;
foreach($times as $key => $time){
    echo "\nTime since begin:".($time-$start);
    echo "\nTime since last: ".($time-$lastTime)." key was {$key}";
    echo "\nTime since bench start: ".($time - $bench);
    $lastTime = $time;
}
print_r($times);
print_r($propers);
exit;
```
### Results
```
// get_declared_classes method
//Time since bench start: 0.052499055862427 is total time for processing get_declared_classes w/ $i=500
//Time since bench start: 0.047168016433716
// last bench time Time since begin:0.062150955200195
// 100 intermediate bench: Time since bench start: 0.0063230991363525
// 200 : Time since bench start: 0.015070915222168
// 300 intermediate bench: Time since bench start: 0.02455997467041
// 400 intermediate bench: Time since bench start: 0.033944129943848
// 480 : Time since bench start: 0.044310092926025
//reflection method:
//Time since bench start: 0.01493501663208
//Time since bench start: 0.017416954040527
// 100 intermediate: Time since bench start: 0.0035450458526611
// 200 intermediate: Time since bench start: 0.0066778659820557
// 300 intermediate: Time since bench start: 0.010055065155029
// 400 intermediate: Time since bench start: 0.014182090759277
// 480 intermediate: Time since bench start: 0.01679801940918
```
#### Results' notes
- "Time since bench start" is the entire time it took to run all the iterations. I share this twice above.
- "100 Intermediate" (200, 300, etc) are actually the results at 120, 220, etc... I messed up in copy+pasting results & didn't want to do it again. Yes. I'm lazy :)
- The results would of course vary between runs of the code, but it's pretty clear that the reflection option is significantly faster.
- All was run on a localhost server on an Acer laptop.
- PHP Version 7.2.19-0ubuntu0.19.04.1 (from `php info()`)
As shown above, I'm able to submit the article & everything works as expected - it saves to the DB & everything. On the very last line, if I change `php info()` to `phpinfo()` (removing the space), I get this error:
> Not Acceptable!
>
> An appropriate representation of the requested resource could not be found on this server. This error was generated by Mod_Security.
When I try to submit with `phpinfo()` (no space), my PHP does not execute at all and I only get this error. The network tab in Firefox shows "406 Not Acceptable" for the status code. Nothing is being written to my error log at `$_SERVER['DOCUMENT_ROOT'].'/error_log'`, which is where all the PHP errors log to anyway. In my home folder, there is a logs folder, but it remains empty. No logs in /etc/ or /etc/my_website_name.com either.
What could be causing this problem? Is there something in the PHP.ini I could change? Could .htaccess affect this at all?
At the very least, how do I troubleshoot this problem?
### Troubleshooting
- I can submit an article whose body contains only "- PHP Version 7.2.19-0ubuntu0.19.04.1 (from `phpinfo()`)".
- If I remove `phpinfo()` and add more content to the body of the post (more total data being submitted), it works.
- Putting in a space, like `php info()`, makes it work, and is how the post currently exists.
- I don't know what else to do.
- I am using SimpleMDE now, but it happened on multiple other occasions before I started using SimpleMDE. It has only been with relatively large posts that also contain code.
- I am on shared hosting with HostGator, using HTTPS and PHP 7.2.19.
- I contacted HostGator. They added something to a whitelist, but didn't give me intimate details. It fixed the problem.
  - The first agent took a while, failed to resolve the issue, and disconnected prematurely.
  - The second agent was reasonably prompt and resolved the problem, saying I shouldn't have this issue with similar types of POST requests which contain code.
## Related
I am using PHP, Laravel, Redis, and SQL on an Ubuntu localhost server. I have made a bunch of methods that return results from API searches after some processing. I am calling 5 of these methods, which would be very slow if done synchronously, so I've been experimenting with async approaches (which I know PHP isn't optimised for). After a few approaches I have found some success with pcntl_fork(), but I'm running into some nasty problems.
Edit: After some messing around I have found that if I remove the while loop, the code afterward executes properly. I have removed the while loop and placed it in the second 'search' method; however, it still causes the system to freeze. This makes no sense, as there shouldn't be an infinite loop: if I manually query the Redis db, all 5 results are there.
This is my code (I have a few custom classes for making and processing the API calls; FYI, these methods work flawlessly):
```php
// this caches the individual api results to a Redis list
public static function cacheAsyncApiSearch(string $searchQuery, int $maxResults = 20)
{
    $key = "search:".$searchQuery; //for Redis
    if(!Redis::client()->exists($key)) {
        for ($i = 0; $i < 5; $i++) {
            // Create a child process
            $pid = pcntl_fork();
            if ($pid == -1) {
                // Fork failed
                exit(1);
            } elseif ($pid) {
                // This is the parent process
                // I have tried many versions of pcntl_wait, none work! They all still don't allow code to be ran afterwards (even within this elseif block), and the best it does is cache the 1st api case (YouTube)
                // while (!pcntl_wait($status, WNOHANG)) {
                //     $exitStatus = pcntl_wexitstatus($status);
                //     // Do something with the exit status of the child process
                // }
                // dd($pid);
                // pcntl_waitpid($pid, $status, WUNTRACED);
            } else {
                // child processes
                switch ($i) {
                    case 0:
                        $results = YouTube::search($searchQuery, $maxResults)['results'];
                        Redis::client()->rPush($key, SearchResultDTO::jsonEncodeArray($results));
                        SearchResultDTO::convertResultDTOToModels($results);
                        break;
                    case 1:
                        $results = Dailymotion::search($searchQuery, $maxResults)['results'];
                        Redis::client()->rPush($key, SearchResultDTO::jsonEncodeArray($results));
                        SearchResultDTO::convertResultDTOToModels($results);
                        break;
                    case 2:
                        $results = Vimeo::search($searchQuery, $maxResults)['results'];
                        Redis::client()->rPush($key, SearchResultDTO::jsonEncodeArray($results));
                        SearchResultDTO::convertResultDTOToModels($results);
                        break;
                    case 3:
                        $results = Twitch::search($searchQuery, 2)['results'];
                        Redis::client()->rPush($key, SearchResultDTO::jsonEncodeArray($results));
                        SearchResultDTO::convertResultDTOToModels($results);
                        break;
                    case 4:
                        $results = Podcasts::getPodcastsFromItunesResults(Podcasts::search($searchQuery, 2)["response"]->results);
                        Redis::client()->rPush($key, SearchResultDTO::jsonEncodeArray($results));
                        SearchResultDTO::convertResultDTOToModels($results);
                        break;
                }
                $i = 10000;
                exit(0);
            }
        }

        // for noting the process id of the given process that gets to this point
        Redis::client()->lPush("search_pid:".$searchQuery, $pid);
        // sets a time out for the redis cache
        Redis::client()->expire($key, 60*60*4);

        while (is_numeric(Redis::client()->lLen($key)) && Redis::client()->lLen($key) < 5) {
            usleep(500000); // 0.5 seconds
            // pcntl_waitpid(-1, $status); //does this even do anything? not for me
        }
        return false; // not already cached
    }
    return true; // already cached
}
```
This code somewhat works: it performs the API calls and caches the results in Redis perfectly. However, when the method is run, no code will be run after it (unless Redis has found a cached version and the process is not forked).
This made me think that all processes are being exited (possibly true? if so, I don't know why), so I tried writing a version without the exit(0) line. This works, and I can then perform code after the method call; however, I noticed (when getting SQL race conditions) that all 6 processes (5 children, 1 parent) continued to run their own version of the code after this method (e.g. some database writes).
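As an aside, the usual parent-side pattern for this kind of fork-and-wait (just a sketch of the general pattern, not my actual code) is to remember each child's PID and reap them all after the fork loop, so only the parent continues, and it knows exactly when all five searches have finished:
```php
// Sketch only: fork the workers, remember their PIDs, then block until each exits.
$children = [];
for ($i = 0; $i < 5; $i++) {
    $pid = pcntl_fork();
    if ($pid == -1) {
        exit(1); // fork failed
    } elseif ($pid) {
        $children[] = $pid; // parent: remember this child
    } else {
        // child: run exactly one of the API searches, push to Redis, then leave
        exit(0);
    }
}

// parent only: wait for every child before touching the cached results
foreach ($children as $childPid) {
    pcntl_waitpid($childPid, $status);
}
```
The calling `search()` method looks like this: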
```php
public static function search(string $searchQuery, int $maxResults = 20): array
{
    $key = "search:".$searchQuery;
    $results = [];

    // the quoted method above
    self::cacheAsyncApiSearch($searchQuery, $maxResults);

    foreach (Redis::client()->lRange($key, 0, -1) as $result){
        $results = array_merge($results, SearchResultDTO::jsonDecodeArray($result));
    }

    $creatorDTOs = [];
    $videoDTOs = [];
    $streamDTOs = [];
    $playlistDTOs = [];
    $podcastDTOs = [];

    /** @var SearchResultDTO $result */
    foreach ($results as $result) {
        match ($result->kind) {
            Kind::Creator => $creatorDTOs[] = $result,
            Kind::Video => $videoDTOs[] = $result,
            Kind::Stream => $streamDTOs[] = $result,
            Kind::Playlist => $playlistDTOs[] = $result,
            Kind::Podcast => $podcastDTOs[] = $result,
        };
    }

    // did this to test how many times the code was being ran (the list has 6 1's in it)
    Redis::client()->lPush("here", '1');

    // I know this code isn't completely efficient since I already called these conversion methods before, however I am just trying to get the forking stuff to work right now.
    return [
        "creators" => SearchResultDTO::convertResultDTOToModels($creatorDTOs),
        "videos" => SearchResultDTO::convertResultDTOToModels($videoDTOs),
        "streams" => SearchResultDTO::convertResultDTOToModels($streamDTOs),
        "playlists" => SearchResultDTO::convertResultDTOToModels($playlistDTOs),
        "podcasts" => SearchResultDTO::convertResultDTOToModels($podcastDTOs)
    ];
}
```
These DTOs (Data Transfer Objects) are being used to populate a UI. So, for example, when I make a search (that isn't cached), the page is blank forever. But if I refresh the page (after the search is cached), then the results show just fine.
This is the most bizarre problem I have ever run into and I really appreciate any help.
Edit please read:
After some messing around I have found that if I remove the while loop, the code afterward executes properly. I have removed the while loop and placed it in the second 'search' method; however, it still causes the system to freeze. This makes no sense, as there shouldn't be an infinite loop: if I manually query the Redis db, all 5 results are there. And the dd("two") can never be executed unless the usleep() is removed. Hopefully this narrows the problem down.
Edit 2 please read:
I have figured out that I can get the dd("two") to work when usleep() is reduced to 0.05s from 0.5 seconds, but it still doesn't seem to run long enough for it to work.
```php
if(!self::cacheAsyncApiSearch($searchQuery, $maxResults))
{
    // make sure Redis is properly returning a number not object
    $len = Redis::client()->lLen($key);
    while(!is_numeric($len)){
        usleep(500000); // 0.5 seconds
        $len = Redis::client()->lLen($key);
    }
    //dd($len); //this dd() works

    while ($len < 5) {
        dd("one"); // this dd() works
        usleep(500000); // 0.5 seconds
        dd("two"); // this does not work, why?
        $len = Redis::client()->lLen($key);
    }
}
```
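For what it's worth, a more defensive variant of that wait loop (again just a sketch, not my actual code) would bound the total wait and reap any finished children while polling, so it can never spin forever:
```php
// Sketch only: cap the total wait and reap exited children while polling Redis.
$deadline = microtime(true) + 30; // give the forked searches up to 30 seconds
while ((int) Redis::client()->lLen($key) < 5 && microtime(true) < $deadline) {
    pcntl_waitpid(-1, $status, WNOHANG); // non-blocking reap of any finished child
    usleep(50000); // 0.05 seconds
}
```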
```php
function cronProcess() {
    # > 100,000 users
    $users = $this->UserModel->getUsers();

    foreach ($users as $user) {
        # Do lots of database Insert/Update/Delete, HTTP request stuff
    }
}
```
The problem happens when the number of users reaches ~ 100,000.
I call the function via cURL from crontab.
So what is the best solution for this?
I do a lot of bulk tasks in CakePHP, some processing millions of records. It's certainly possible to do, the key as others suggested is small batches in a loop.
If this is something you're calling from Cron, it's probably easier to use a Shell (< v3.5) or the newer Command class (v3.6+) than cURL.
Here's generally how I paginate large batches, including some helpful optional things like a progress bar, turning off hydration to speed things up slightly, and showing how many users/second the script was able to process:
```php
<?php
namespace App\Command;

use Cake\Console\Arguments;
use Cake\Console\Command;
use Cake\Console\ConsoleIo;

class UsersCommand extends Command
{
    public function execute(Arguments $args, ConsoleIo $io)
    {
        // I'd guess a Finder would be a more Cake-y way of getting users than a custom "getUsers" function:
        // See https://book.cakephp.org/3.0/en/orm/retrieving-data-and-resultsets.html#custom-finder-methods
        $usersQuery = $this->UserModel->find('users');

        // Get a total so we know how many we're gonna have to process (optional)
        $total = $usersQuery->count();
        if ($total === 0) {
            $this->abort("No users found, stopping..");
        }

        // Hydration takes extra processing time & memory, which can add up in bulk.
        // Optionally if able, skip it & work with $user as an array not an object:
        $usersQuery->enableHydration(false);

        $this->info("Grabbing $total users for processing");

        // Optionally show the progress so we can visually see how far we are in the process
        // (use the real total so the bar reflects the user count)
        $progress = $io->helper('Progress')->init([
            'total' => $total
        ]);

        // Tune this page value to a size that solves your problem:
        $limit = 1000;
        $offset = 0;

        // Simply drawing the progress bar every loop can slow things down, optionally draw it only every n-loops,
        // this sets it to 1/5th the page size:
        $progressInterval = $limit / 5;

        // Optionally track the rate so we can evaluate the speed of the process, helpful tuning limit and evaluating enableHydration effects
        $startTime = microtime(true);

        do {
            // Grab one page of users at a time
            $users = $usersQuery->limit($limit)->offset($offset)->toArray();
            $count = count($users);
            $index = 0;
            foreach ($users as $user) {
                $progress->increment(1);
                // Only draw occasionally, for speed
                if ($index % $progressInterval === 0) {
                    $progress->draw();
                }
                $index++;

                ### WORK TIME
                # Do your lots of database Insert/Update/Delete, HTTP request stuff etc. here
                ###
            }
            $progress->draw();
            $offset += $limit; // Increment your offset to the next page
        } while ($count > 0);

        $totalTime = microtime(true) - $startTime;
        $this->out("\nProcessed an average " . ($total / $totalTime) . " Users/sec\n");
    }
}
```
Check out these sections in the CakePHP Docs:

- Console Commands
- Command Helpers
- Using Finders & Disabling Hydration
Hope this helps!
I'm trying to extract data from thousands of premade SQL files. I have a script that does what I need using the mysqli driver in PHP, but it's really slow since it's one SQL file at a time. I modified the script to create unique temp database names, which each SQL file is loaded into. Data is extracted to an archive database table, then the temp database is dumped.

In an effort to speed things up, I created 4 scripts structured like the one below, where each for loop is stored in its own unique PHP file (the code below is only a quick demo of what's going on in the 4 separate files); they are set up to grab only 1/4 of the files from the source file folder. All of this works perfectly: the scripts run, and there is zero interference with file handling.

The issue is that I seem to get almost zero performance boost. Maybe 10 seconds faster :( I quickly refreshed my phpMyAdmin database listing page and could see the 4 different databases loaded at any time, but I also noticed that it looked like it was still running more or less sequentially, as the DB names were changing on the fly. I went the extra step of creating a unique user for each script with its own connection. No improvement.

Can I get this to work with mysqli / PHP, or do I need to look into some other options? I'd prefer to do this all in PHP if I can (version 7.0). I tested by running the PHP scripts in my browser. Is that the issue? I haven't written any code to execute them on the command line and set them to the background yet. One last note: all the users in my MySQL database have no limits on connections, etc.
```php
$numbers = array('0','1','2','3','4','5','6','7','8','9','10','11','12','13','14','15','16','17','18','19','20');
$numCount = count($numbers);
$a = '0';
$b = '1';
$c = '2';
$d = '3';
$rebuild = array();

echo "<br>";
for($a; $a <= $numCount; $a+=4){
    if(array_key_exists($a, $numbers)){
        echo $numbers[$a]."<br>";
    }
}

echo "<br>";
for($b; $b <= $numCount; $b+=4){
    if(array_key_exists($b, $numbers)){
        echo $numbers[$b]."<br>";
    }
}

echo "<br>";
for($c; $c <= $numCount; $c+=4){
    if(array_key_exists($c, $numbers)){
        echo $numbers[$c]."<br>";
    }
}

echo "<br>";
for($d; $d <= $numCount; $d+=4){
    if(array_key_exists($d, $numbers)){
        echo $numbers[$d]."<br>";
    }
}
```
Try this:
```php
<?php
class BackgroundTask extends Thread {
    public $output;
    protected $input;

    public function run() {
        /* Processing here, use $this->output for... well... outputting data */
        // Here you would implement your for() loops, for example, using $this->input as their data

        // Some dumb value to demonstrate (assign to the member, not a local variable,
        // so the parent can read it after join())
        $this->output = "SOME DATA!";
    }

    function __construct($input_data) {
        $this->input = $input_data;
    }
}

// Create instances with different input data
// Each "quarter" will be a quarter of your data, as you're trying to do right now
$job1 = new BackgroundTask($first_quarter);
$job1->start();

$job2 = new BackgroundTask($second_quarter);
$job2->start();

$job3 = new BackgroundTask($third_quarter);
$job3->start();

$job4 = new BackgroundTask($fourth_quarter);
$job4->start();

// ==================

// "join" the first job, i.e. "wait until it's finished"
$job1->join();
echo "First output: " . $job1->output;

$job2->join();
echo "Second output: " . $job2->output;

$job3->join();
echo "Third output: " . $job3->output;

$job4->join();
echo "Fourth output: " . $job4->output;
?>
```
When using four calls to your own script through HTTP, you're tying up connections for no useful reason, taking away slots from other users who may be trying to access your website.
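If you'd rather not use pthreads (which requires a thread-safe PHP build), a rough sketch of launching your four worker scripts from the CLI instead of over HTTP could look like this (the file names are placeholders):
```php
// Sketch only: kick off the four (hypothetical) worker scripts as background
// CLI processes so they run in parallel without tying up web connections.
$workers = ['extract_part0.php', 'extract_part1.php', 'extract_part2.php', 'extract_part3.php'];

foreach ($workers as $script) {
    $cmd = 'php ' . escapeshellarg(__DIR__ . '/' . $script) . ' > /dev/null 2>&1 &';
    exec($cmd); // "&" detaches the process, so this loop returns immediately
}
```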
I have run into a rather strange issue with a particular part of a large PHP application. The portion of the application in question loads data from MySQL (mostly integer data) and builds a JSON string which gets output to the browser. These requests were taking several seconds (8-10 seconds each) in Chrome's developer tools as well as via curl. However, the PHP shutdown handler I had reported that the requests were executing in less than 1 second.
In order to debug I added a call to fastcgi_finish_request(), and suddenly my shutdown handler reported the same time as Chrome / curl.
With some debugging, I narrowed it down to a particular function. I created the following simple test case:
```php
<?php
$start_time = DFStdLib::exec_time();

$product = new ApparelQuoteProduct(19);
$pmatrix = $product->productMatrix();

// This function call is the problem:
$ranges = $pmatrix->ranges();

$end_time = DFStdLib::exec_time();
$duration = $end_time - $start_time;

echo "Output generation duration was: $duration sec";

fastcgi_finish_request();

$fastcgi_finish_request_duration = DFStdLib::exec_time() - $end_time;
DFSkel::log(DFSkel::LOG_INFO,"Output generation duration was: $duration sec; fastcgi_finish_request Duration was: $fastcgi_finish_request_duration sec");
```
If I call $pmatrix->ranges() (which is a function that executes a number of calls to mysql_query to fetch data and build an in-memory PHP object structure from that data) then I get the output:
```
Output generation duration was: 0.2563910484314 sec; fastcgi_finish_request Duration was: 7.3854329586029 sec
```
in my log file. Note that the call to $pmatrix->ranges() does not take long at all, yet somehow it causes the PHP FastCGI handler to take seven seconds to finish the request. (This is true even if I don't call fastcgi_finish_request -- the browser takes 7-8 seconds to display the data either way.)
If I comment out the call to $pmatrix->ranges() I get:
```
Output generation duration was: 0.0016419887542725 sec; fastcgi_finish_request Duration was: 0.00035214424133301 sec
```
I can post the entire source for the $pmatrix->ranges() function, but it's very long. I'd like some advice on where to even start looking.
What is it about the PHP FastCGI request process which would even cause such behavior? Does it call destructor functions / garbage collection? Does it close open resources? How can I troubleshoot this further?
EDIT: Here's a larger source sample:
```php
<?php
class ApparelQuote_ProductPricingMatrix_TestCase
{
    protected $myProductId;
    protected $myQuantityRanges;

    private $myProduct;

    protected $myColors;
    protected $mySizes;
    protected $myQuantityPricing;

    public function __construct($product)
    {
        $this->myProductId = intval($product);
    }

    /**
     * Return an array of all ranges for this matrix.
     *
     * @return array
     */
    public function ranges()
    {
        $this->myLoadPricing();
        return $this->myQuantityRanges;
    }

    protected function myLoadPricing($force=false)
    {
        if($force || !$this->myQuantityPricing)
        {
            $this->myColors = array();
            $this->mySizes = array();

            $priceRec_finder = new ApparelQuote_ProductPricingRecord();
            $priceRec_finder->_link = Module_ApparelQuote::dbLink();
            $found_recs = $priceRec_finder->find(_ALL,"`product_id`={$this->myProductId}","`qtyrange_id`,`color_id`");

            $qtyFinder = new ApparelQuote_ProductPricingQtyRange();
            $qtyFinder->_link = Module_ApparelQuote::dbLink();
            $this->myQuantityRanges = $qtyFinder->find(_ALL,"`product_id`=$this->myProductId");

            $this->myQuantityPricing = array();

            foreach ($found_recs as &$r)
            {
                if(false) $r = new ApparelQuote_ProductPricingRecord();

                if(!isset($this->myColors[$r->color_id]))
                    $this->myColors[$r->color_id] = true;

                if(!isset($this->mySizes[$r->size_id]))
                    $this->mySizes[$r->size_id] = true;

                if(!is_array($this->myQuantityPricing[$r->qtyrange_id]))
                    $this->myQuantityPricing[$r->qtyrange_id] = array();

                if(!is_array($this->myQuantityPricing[$r->qtyrange_id][$r->color_id]))
                    $this->myQuantityPricing[$r->qtyrange_id][$r->color_id] = array();

                $this->myQuantityPricing[$r->qtyrange_id][$r->color_id][$r->size_id] = &$r;
            }

            $this->myColors = array_keys($this->myColors);
            $this->mySizes = array_keys($this->mySizes);
        }
    }
}

$start_time = DFStdLib::exec_time();

$pmatrix = new ApparelQuote_ProductPricingMatrix_TestCase(19);
$ranges = $pmatrix->ranges();

$end_time = DFStdLib::exec_time();
$duration = $end_time - $start_time;

echo "Output generation duration was: $duration sec";

fastcgi_finish_request();

$fastcgi_finish_request_duration = DFStdLib::exec_time() - $end_time;
DFSkel::log(DFSkel::LOG_INFO,"Output generation duration was: $duration sec; fastcgi_finish_request Duration was: $fastcgi_finish_request_duration sec");
```
Upon continued debugging I have narrowed it down to the following lines from the above:
```php
if(!is_array($this->myQuantityPricing[$r->qtyrange_id][$r->color_id]))
    $this->myQuantityPricing[$r->qtyrange_id][$r->color_id] = array();
```
These statements are building an in-memory array structure of all the data loaded from MySQL. If I comment these out, then fastcgi_finish_request takes roughly 0.0001 seconds to run. If I do not comment them out, then fastcgi_finish_request takes 7+ seconds to run.
It's actually the function call to is_array that's the issue here. Changing it to:
```php
if(!isset($this->myQuantityPricing[$r->qtyrange_id][$r->color_id]))
```
resolves the problem. Why is this?
I'm still relatively new to PHP and trying to use pthreads to solve an issue. I have 20 threads running processes that end at varying times. Most finish in under 10 seconds or so. I don't need all 20, just 10 detected. Once I get to 10, I would like to kill the remaining threads, or continue on to the next step.
I have tried using set_time_limit of about 20 seconds for each of the threads, but they ignore it and keep running. I am looping through the jobs looking for the join because I didn't want the rest of the program to run, but I'm stuck until the slowest one has finished. While pthreads has reduced the time from around a minute to about 30 seconds, I can shave off even more time, since the first 10 finish in about 3 seconds.
Thanks for any help and here is my code:
```php
$count = 0;
foreach ( $array as $i ) {
    $imgName = $this->smsId."_$count.jpg";
    $name = "LocalCDN/".$imgName;
    $stack[] = new AsyncImageModify($i['largePic'], $name);
    $count++;
}

// Run the threads
foreach ( $stack as $t ) {
    $t->start();
}

// Check if the threads have finished; push the coordinates into an array
foreach ( $stack as $t ) {
    if($t->join()){
        array_push($this->imgArray, $t->data);
    }
}
```
```php
class AsyncImageModify extends \Thread{
    public $data;

    public function __construct($arg, $name, $container) {
        $this->arg = $arg;
        $this->name = $name;
    }

    public function run() {
        // tried putting the set_time_limit() here, didn't work
        if ($this->arg) {
            // Get the image
            $didWeGetTheImage = Image::getImage($this->arg, $this->name);
            if($didWeGetTheImage){
                $timestamp1 = microtime(true);
                print_r("Starting face detection $this->arg" . "\n");
                print_r(" ");
                $j = Image::process1($this->name);
                if($j){
                    // lets go ahead and do our image manipulation at this point
                    $userPic = Image::process2($this->name, $this->name, 200, 200, false, $this->name, $j);
                    if($userPic){
                        $this->data = $userPic;
                        print_r("Back from process2; the image returned is $userPic");
                    }
                }
                $endTime = microtime(true);
                $td = $endTime - $timestamp1;
                print_r("Finished face detection $this->arg in $td seconds" . "\n");
                print_r($j);
            }
        }
    }
}
```
It is difficult to guess the functionality of Image::* methods, so I can't really answer in any detail.
What I can say is that there are very few machines I can think of that are suitable to run 20 concurrent threads in any case. A more suitable setup would be the worker/stackable model. A Worker thread is a reusable context, and can execute task after task, implemented as Stackables; execution in a multi-threaded environment should always use the least amount of threads to get the most work done possible.
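As a rough illustration only (not tested against your Image::* helpers, and assuming a pthreads version that ships the Pool and Worker classes), a pooled version could look something like this:
```php
// Sketch: submit each image job to a small pool of reusable workers
// instead of starting 20 raw Thread objects at once.
class ImageTask extends Threaded
{
    public $data;
    private $url;
    private $name;

    public function __construct($url, $name)
    {
        $this->url  = $url;
        $this->name = $name;
    }

    public function run()
    {
        // Placeholder for the real Image::getImage()/process1()/process2() chain
        $this->data = $this->name;
    }
}

$pool = new Pool(4, Worker::class); // 4 reusable worker threads
$tasks = [];

foreach ($array as $count => $i) {
    $task = new ImageTask($i['largePic'], "LocalCDN/{$count}.jpg");
    $tasks[] = $task;
    $pool->submit($task);
}

$pool->shutdown(); // waits for the submitted work to finish
```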
Please see the pooling example and the other examples distributed with pthreads, available on GitHub. Additionally, much information regarding usage is contained in past bug reports, if you are still struggling after that ...