Can php-fpm pools have NICE values?
Each of my pools runs under its own user and group, like:
[pool1]
user = pool1
group = pool1
...
I've tried to create /etc/security/limits.d/prio.conf with contents:
#pool1 hard priority 39
But in htop that pool still has the same PRI and NI values as the other pools after a reboot.
Just use the process.priority pool directive: https://www.php.net/manual/install.fpm.configuration.php#worker-process-priority
[pool]
process.priority = 10
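For context, a minimal pool sketch (values are illustrative): per the manual, process.priority takes a nice value from -19 (highest priority) to 20 (lowest), and the FPM master process must run as root for it to be applied.

```ini
[pool1]
user = pool1
group = pool1
listen = /run/php-fpm/pool1.sock

; nice(2) value for this pool's worker processes.
; Range: -19 (highest priority) to 20 (lowest).
; Requires the FPM master to be started as root.
process.priority = 10
```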
I am trying to communicate with the Stockfish chess engine from a PHP script. For that I am using the UCI protocol, so I will need to send commands line by line. For example, this is what I would normally enter on the Windows Command Prompt:
Stockfish.exe
position startpos
go depth 16
Stockfish.exe is the 64-bit version of the Stockfish chess engine.
I failed to do it using exec(). Here is how I attempted it:
exec("stockfish.exe\nposition startpos\ngo depth 16");
The engine runs, but the commands are not executed, so I get:
Stockfish 10 64 by T. Romstad, M. Costalba, J. Kiiski, G. Linscott
where I should get something like:
info depth 1 seldepth 1 multipv 1 score cp 116 nodes 20 nps 10000 tbhits 0 time 2 pv e2e4
info depth 2 seldepth 2 multipv 1 score cp 112 nodes 54 nps 27000 tbhits 0 time 2 pv e2e4 b7b6
info depth 3 seldepth 3 multipv 1 score cp 148 nodes 136 nps 45333 tbhits 0 time 3 pv d2d4 d7d6 e2e4
info depth 4 seldepth 4 multipv 1 score cp 137 nodes 247 nps 61750 tbhits 0 time 4 pv d2d4 e7e6 e2e4 c7c6
bestmove d2d4 ponder e7e6
I've read many threads on related issues, but couldn't find a workaround. Is there any way to accomplish this?
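One approach (a sketch, not a definitive solution): exec() can only launch a command, it cannot write to the process's stdin afterwards, so the newline-separated string is never seen by the engine. proc_open() gives you pipes to the child's stdin/stdout. The function name and the engine path below are assumptions; adjust for your setup.

```php
<?php
// Sketch: drive a UCI engine over stdin/stdout using proc_open().
function runEngine(string $cmd, array $commands, int $timeoutSec = 10): string
{
    $spec = [
        0 => ['pipe', 'r'],  // child's stdin: we write UCI commands here
        1 => ['pipe', 'w'],  // child's stdout: we read replies here
        2 => ['pipe', 'w'],  // child's stderr
    ];
    $proc = proc_open($cmd, $spec, $pipes);
    if (!is_resource($proc)) {
        throw new RuntimeException("Could not start: $cmd");
    }
    foreach ($commands as $line) {
        fwrite($pipes[0], $line . "\n");  // one command per line, as in UCI
    }
    fclose($pipes[0]);                    // EOF signals we are done sending
    stream_set_timeout($pipes[1], $timeoutSec);
    $output = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($proc);
    return $output;
}

// Hypothetical usage (assumes stockfish is on PATH; use a full path otherwise):
// echo runEngine('stockfish', ['position startpos', 'go depth 16', 'quit']);
```

Two caveats for a real engine: a robust client reads stdout line by line until the "bestmove ..." line appears before sending quit (closing stdin mid-search may abort it), and for large outputs you should interleave reading and writing so the pipe buffers don't fill up and deadlock.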
for ($i = 0; $i <= $var; $i++) {
    $argv1 = $i * $countind;
    $argv2 = ($i * $countind) + $countind;
    exec("php /home/indiamart/public_html/prod-clickstream/cron/EmailMarketing.php $argv1 $argv2 $mon $week_hour > /dev/null 2>&1 & echo $!", $output);
    sleep(2);
    $mon++;
}
The above code runs on a Linux server (Apache, with PHP 5 installed).
In it, I run the command [php /home/indiamart/public_html/prod-clickstream/cron/EmailMarketing.php $argv1 $argv2 $mon $week_hour > /dev/null 2>&1 & echo $!] to execute the script /home/indiamart/public_html/prod-clickstream/cron/EmailMarketing.php.
The code spawns several background processes (one per iteration of the for loop). Below is sample output:
php /home/indiamart/public_html/prod-clickstream/cron/EmailMarketing.php 0 10000 1 12
php /home/indiamart/public_html/prod-clickstream/cron/EmailMarketing.php 10000 20000 1 12
php /home/indiamart/public_html/prod-clickstream/cron/EmailMarketing.php 20000 30000 1 12
My concern: suppose x processes are started, of which only y complete successfully.
How can I track the PHP error, or otherwise find the reason why the remaining (x - y) processes did not finish successfully?
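One common approach (a sketch; the function name and log paths are illustrative): instead of discarding all output with "> /dev/null 2>&1", give each worker its own log file and keep its PID, then inspect the logs afterwards.

```php
<?php
// Sketch: launch a background worker with a per-worker log file so that
// failures (PHP warnings/fatals go to stderr) can be traced afterwards.
function launchWorker(string $cmd, string $logFile): int
{
    // Redirect stdout AND stderr into the worker's own log, background the
    // command, and echo the shell's $! so exec() captures the PID.
    $full = sprintf('%s >> %s 2>&1 & echo $!', $cmd, escapeshellarg($logFile));
    exec($full, $out);
    return (int) $out[0];
}

// Hypothetical usage, one log per chunk:
// $pid = launchWorker(
//     "php /home/indiamart/public_html/prod-clickstream/cron/EmailMarketing.php $argv1 $argv2 $mon $week_hour",
//     "/tmp/emailmarketing_$i.log"
// );
```

Afterwards you can grep the logs for "Fatal error" or "Warning" to identify the (x - y) failed workers, or poll posix_kill($pid, 0) to check whether a worker is still alive.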
I have a PHP script that processes large XML files and saves data from them into a database. I use several classes to process and store the data before saving it to the database, and I read the XML node by node to conserve memory. Basically, a loop in my file looks like this:
while ($Reader->read()) {
    $parsed++;
    if (time() >= $nexttick) {
        $current = microtime(true) - $ses_start;
        $eta = (($this->NumberOfAds - $parsed) * $current) / $parsed;
        $nexttick = time() + 3;
        $mem_usage = memory_get_usage();
        echo "Parsed $parsed # $current secs \t | ";
        echo " (mem_usage: " . $mem_usage . " \t | ";
        echo "ETA: $eta secs\n";
    }
    $node = $Reader->getNode();
    // OMITTED PART: $node is an array; I do some processing and check that everything I need exists in it
    $Ad = new Ad($node); // creating an Ad object from the node
    // OMITTED PART: some additional SQL queries to check the integrity of the data before uploading it to the database
    if (!$Ad->update()) {
        // ad wasn't inserted successfully; save a row in a second database table to log this
    } else {
        // ad successfully inserted; save a row in a second database table to log this
    }
}
As you can see, the first part of the loop is a little progress reporter that prints how far through the file we are every 3 seconds, along with the script's memory usage. I need that because I ran into a memory problem the last time I tried to upload a file, and wanted to figure out what was eating away the memory.
The output of this script looked something like this when I ran it:
Parsed 15 # 2.0869598388672 secs | (mem_usage: 1569552 | ETA: 1389.2195994059 secs
Parsed 30 # 5.2812438011169 secs | (mem_usage: 1903632 | ETA: 1755.1333565712 secs
Parsed 38 # 8.4330480098724 secs | (mem_usage: 2077744 | ETA: 2210.7901124829 secs
Parsed 49 # 11.377414941788 secs | (mem_usage: 2428624 | ETA: 2310.5440017496 secs
Parsed 59 # 14.204828023911 secs | (mem_usage: 2649136 | ETA: 2393.3931421304 secs
Parsed 69 # 17.032008886337 secs | (mem_usage: 2831408 | ETA: 2451.3750760901 secs
Parsed 79 # 20.359696865082 secs | (mem_usage: 2968656 | ETA: 2556.8171214997 secs
Parsed 87 # 23.053930997849 secs | (mem_usage: 3102360 | ETA: 2626.8231951916 secs
Parsed 98 # 26.148546934128 secs | (mem_usage: 3285096 | ETA: 2642.0705279769 secs
Parsed 107 # 29.092607021332 secs | (mem_usage: 3431944 | ETA: 2689.8426286172 secs
Now, I know for certain that my MySQL object has a runtime cache, which stores the results of some basic SELECT queries in an array for quick access later. This is the only variable in the script (that I know of) which grows throughout the whole run, so I tried turning this option off. The memory usage dropped, but only by a tiny bit, and it still kept rising throughout the whole script.
My questions are the following:
Is a slow rise in memory usage over a long-running script normal behaviour in PHP, or should I search through the whole code and try to find out what is eating up my memory?
I know that by using unset() on a variable I can free the memory it occupies, but do I need to call unset() even if I am overwriting the same variable throughout the file?
A slight rephrasing of my second question with an example:
Do the following two code blocks produce the same result regarding memory usage, and if not, which one is preferable?
BLOCK1
$var = str_repeat("Hello", 4242);
$var = str_repeat("Good bye", 4242);
BLOCK2
$var = str_repeat("Hello", 4242);
unset($var);
$var = str_repeat("Good bye", 4242);
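A quick hedged check you can run yourself (exact byte counts vary by PHP version): reassignment releases the old value just as unset() would, so the two blocks should end up with near-identical memory usage. The only difference is a brief peak during reassignment, when old and new values coexist for a moment.

```php
<?php
// Compare final memory usage of "overwrite" vs "unset then assign".
$before = memory_get_usage();

// BLOCK1 style: plain overwrite.
$var = str_repeat("Hello", 4242);
$var = str_repeat("Good bye", 4242);   // old string is freed on reassignment
$overwrite = memory_get_usage() - $before;

unset($var);

// BLOCK2 style: explicit unset() between assignments.
$var = str_repeat("Hello", 4242);
unset($var);                            // explicit free
$var = str_repeat("Good bye", 4242);
$withUnset = memory_get_usage() - $before;

// The two figures should differ by at most a few bytes.
printf("overwrite: %d, with unset: %d\n", $overwrite, $withUnset);
```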
If you install the Xdebug extension on your development machine, you can have it produce a function trace, which shows the memory usage at each line: https://xdebug.org/docs/execution_trace
That will probably help you identify where the space is going up; you can then try unset($var) etc. and see if it makes any difference.
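For example, with the Xdebug 2 series (the setting names changed in Xdebug 3, where xdebug.mode=trace is used instead), something like the following in php.ini turns on automatic traces with per-call memory deltas; the output path is illustrative:

```ini
; Xdebug 2-era settings: trace every request, include memory deltas.
xdebug.auto_trace = 1
xdebug.show_mem_delta = 1
xdebug.trace_output_dir = /tmp
```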
I have a PHP function that gets data from an SQL database. Sometimes I get this fatal error while calling odbc_result():
Step 30 from 60 | Memory 191908
Step 31 from 60 | Memory 191908
Step 32 from 60 | Memory 191908
Step 33 from 60 | Memory 191908
Step 34 from 60 | Memory 191892
Step 35 from 60 | Memory 191908
Step 36 from 60 | Memory 191908
Step 37 from 60 | Memory 191892
Step 38 from 60 | Memory 191908
Step 39 from 60 | Memory 191908
PHP Fatal error: Out of memory (allocated 262144) (tried to allocate 4294967293 bytes)
This is a piece of my output from the function.
Can someone please help me with this problem?
while (odbc_fetch_row($exec_query_tblKlantFactuurOpdrachten)) {
    $dataADDRow_1 = $doc->createElement("tblKlantFactuurOpdrachten");
    $dataADDRow->appendChild($dataADDRow_1);
    $i_1 = 1;
    while ($i_1 < $numFields_1) {
        echo "Step " . $i_1 . " from " . $numFields_1 . " | Memory " . memory_get_usage() . "\n";
        $dataFromDB = odbc_result($exec_query_tblKlantFactuurOpdrachten, $i_1);
        $dataADD = $doc->createElement($tmpArr_FieldName_1[$i_1], protectInfo($dataFromDB));
        $dataADDRow_1->appendChild($dataADD);
        flush();
        ++$i_1;
    }
    flush();
}
I've retried fetching the data from the DB with a simple query, and when I try to fetch the same exact field I get that error as well. So it seems to be something in the interaction between the DB and PHP. I am using the FreeTDS driver on a Linux server.
When I fetch the data with SQL Server Management Studio there is no problem.
It's probably related to an infinite loop, or a recursively called function that keeps allocating memory.
You could try with:
ini_set('memory_limit', '-1');
but if I'm right you'll get that error again. If so, then look for loops and recursive calls.
It looks like a bug in PHP's ODBC handling, something similar to this one:
https://bugs.php.net/bug.php?id=51606
Try the patch given there.
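The bug theory is supported by the allocation size itself: 4294967293 bytes is exactly 2^32 - 3, i.e. a signed 32-bit -3 reinterpreted as an unsigned length, which suggests the ODBC layer got a bogus negative column length from the driver rather than the script genuinely running out of memory. A quick arithmetic check:

```php
<?php
// 4294967293 is what a 32-bit signed -3 looks like as an unsigned value:
// a strong hint that a negative length leaked through the ODBC layer.
$reported = 4294967293;
$asSigned = $reported - 2 ** 32;   // undo the unsigned wraparound
echo $asSigned . "\n";             // prints -3
```

If long text columns are involved, capping the read size with odbc_longreadlen($result, 65536) (and odbc_binmode() for binary data) before fetching is a commonly suggested workaround with FreeTDS; treat that as a thing to try, not a guaranteed fix.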