I have a script using com_dotnet that makes PHP crash (I'm not sure whether "segfault" is the technically correct term) under certain circumstances.
I'm using PHP 7.4.2 NTS 32-bit on Windows Server 2019 64-bit. The same error happens with both FCGI and CLI. The same script worked well with PHP 7.0.30.
With FCGI, the only error in the Apache log is:
[...] [fcgid:warn] [pid ...] (OS 109)The pipe has been ended. : [client ...] mod_fcgid: get overlap result error
[...] [fcgid:warn] [pid ...] (OS 109)The pipe has been ended. : [client ...] mod_fcgid: ap_pass_brigade failed in handle_request_ipc function
In the event log, Application log, there is an Application Error event with id 1000:
Faulting application name: php-cgi.exe, version: 7.4.2.0, time stamp: 0x5e273bd1
Faulting module name: php7.dll, version: 7.4.2.0, time stamp: 0x5e274c52
Exception code: 0xc0000005
Fault offset: 0x004a1554
Faulting process id: 0x1d3c
Faulting application start time: 0x01d5e7035b9ba103
Faulting application path: F:\wamp\bin\php\php7.4.2nts86\php-cgi.exe
Faulting module path: F:\wamp\bin\php\php7.4.2nts86\php7.dll
Report Id: 83f813df-1bda-48f9-89f6-2374f9b04af5
Faulting package full name:
Faulting package-relative application ID:
And an event 1001 from Windows Error Reporting:
Fault bucket 1785407568804712926, type 1
Event Name: APPCRASH
Response: Not available
Cab Id: 0
Problem signature:
P1: php-cgi.exe
P2: 7.4.2.0
P3: 5e273bd1
P4: php7.dll
P5: 7.4.2.0
P6: 5e274c52
P7: c0000005
P8: 004a1554
P9:
P10:
Attached files:
[Temp files no longer on disk]
These files may be available here:
\\?\C:\ProgramData\Microsoft\Windows\WER\ReportArchive\AppCrash_php-cgi.exe_db93fdd1eb9fce3ac32a6e96e418914fb9fca_32a55e52_0a30ae4b
Analysis symbol:
Rechecking for solution: 0
Report Id: 83f813df-1bda-48f9-89f6-2374f9b04af5
Report Status: 268435456
Hashed bucket: 85fc32c6322e070cd8c70ab96de611de
Cab Guid: 0
The file in WER\ReportArchive does not seem to contain any interesting data.
I was not able to locate the exact place in the code where the fault is happening. It happens while reading data using the com_dotnet module.
if ($this->statement->State == 1) {
    $result = [];
    while (!$this->statement->EOF) {
        $row = [];
        for ($x = 0; $x < $this->statement->Fields->Count; $x++) {
            $name = $this->statement->Fields[$x]->Name;
            $value = $this->statement->Fields[$x]->Value;
            $row[$name] = $value;
        }
        $this->statement->MoveNext();
        $result[] = $row;
    }
    return $result;
}
The fault appears with all tested data sources as soon as I read more than roughly 9000 to 9500 rows. It does not seem to be related to the data itself.
The minimum code that leads to a fault is:
if ($this->statement->State == 1) {
    $result = [];
    while (!$this->statement->EOF) {
        $row = [];
        $row[] = ''; // <==
        $this->statement->MoveNext();
        $result[] = $row; // <==
    }
    return $result;
}
There is no fault when I remove either of the two lines marked with an arrow. Here the content of $result and the COM object are completely unrelated, and even so, PHP faults once there are more than about 9000 loop iterations. The number of rows leading to the fault does not seem to change with the data source, nor between the two code snippets above.
Interestingly, this does not fault, even though $row = ['']; is functionally equivalent to $row = []; $row[] = ''; (above).
if ($this->statement->State == 1) {
    $result = [];
    while (!$this->statement->EOF) {
        $row = [''];
        $this->statement->MoveNext();
        $result[] = $row;
    }
    return $result;
}
Here I'm at the end of my knowledge; things no longer seem to behave logically. How could I narrow down the problem? Any advice?
Edit: PHP 7.3.15 is not affected, only PHP 7.4.2.
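One way to narrow this down further: strip the COM object out entirely and log the loop counter to a file, so the last logged value survives a crash. This is a hypothetical harness, not code from the question; if the pure-PHP version below runs to completion, the COM read inside the loop is a necessary ingredient of the crash.

```php
<?php
// Hypothetical harness: mimic the allocation pattern of the minimal
// repro without any COM object, logging progress so the last line of
// progress.log pinpoints the failing iteration after a crash.
$log = fopen('progress.log', 'w');
$result = [];
for ($i = 0; $i < 20000; $i++) {
    $row = [];
    $row[] = '';          // same two lines as the minimal repro
    $result[] = $row;
    fwrite($log, "$i\n");
    fflush($log);         // make sure the line is written out before a crash
}
fclose($log);
```

If this survives past 9500 iterations, the next step would be to reintroduce the MoveNext() call alone, then the Fields reads, to see which COM access triggers the fault.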
Related
I want to ask about a problem I'm having. I'm using two desktops, one running Ubuntu and one running Mint. When I run my code on Ubuntu it runs smoothly, but when I run it on the Mint desktop I get an error that says "Symfony\Component\ErrorHandler\Error\FatalError
Maximum execution time of 60 seconds exceeded"
and I get this log in my terminal:
Starting Laravel development server: http://127.0.0.1:8000
[Tue Nov 9 16:18:53 2021] PHP 8.0.12 Development Server (http://127.0.0.1:8000) started
[Tue Nov 9 16:18:55 2021] 127.0.0.1:38908 Accepted
[Tue Nov 9 16:18:55 2021] 127.0.0.1:38910 Accepted
[Tue Nov 9 16:20:22 2021] PHP Fatal error: Maximum execution time of 60 seconds exceeded in /home/aditya/Documents/Laravel/eyrin/vendor/symfony/polyfill-mbstring/Mbstring.php on line 632
[Tue Nov 9 16:20:23 2021] 127.0.0.1:38908 Closing
[Tue Nov 9 16:20:23 2021] 127.0.0.1:38910 Closed without sending a request; it was probably just an unused speculative preconnection
[Tue Nov 9 16:20:23 2021] 127.0.0.1:38910 Closing
and this is the code in my controller:
$store = Store::where('user_id', Helper::getSession('user_id'))->first();
$match_report = [];
$top_weekly_product = [];
$compressed_date = [];
$uncompressed_date = Report::where('store_id', $store->id)
    ->whereBetween('created_at', [Carbon::now()->startOfWeek(), Carbon::now()->endOfWeek()])
    ->select('created_at')->distinct()->get();
foreach ($uncompressed_date as $item) {
    if (!in_array(Carbon::parse($item['created_at'])->format('d/m/Y'), $match_report)) {
        $match_report[] = Carbon::parse($item['created_at'])->format('d/m/Y');
        $compressed_date[] = $item;
    }
}
$match_report = [];
$compressed_weekly_product = [];
$uncompressed_weekly_product = Report::where('store_id', $store->id)
    ->whereBetween('created_at', [Carbon::now()->startOfWeek(), Carbon::now()->endOfWeek()])
    ->get()->map(function ($report) {
        return [
            'product_name' => $report->product_name,
            'product_variant' => $report->product_variant,
            'product_sku' => $report->product_sku,
            'weekly_amount' => sizeof(Report::where(['store_id' => $report->store_id, 'product_sku' => $report->product_sku])
                ->whereBetween('created_at', [Carbon::now()->startOfWeek(), Carbon::now()->endOfWeek()])->get())
        ];
    });
foreach ($uncompressed_weekly_product as $item) {
    if (!in_array($item['product_sku'], $match_report)) {
        $match_report[] = $item['product_sku'];
        $compressed_weekly_product[] = $item;
    }
}
foreach ($compressed_weekly_product as $key => $item) {
    $rows = [];
    foreach ($compressed_date as $obj) {
        $rows[] = sizeof(Report::where(['store_id' => $store->id, 'product_sku' => $item['product_sku']])
            ->whereDate('created_at', Carbon::parse($obj['created_at']))->get());
    }
    $compressed_weekly_product[$key]['daily_amount'] = $rows;
}
foreach ($compressed_date as $key => $item) {
    $compressed_date[$key]['formated'] = Carbon::parse($item->created_at)->format('m/d/Y');
}
$match_report = [];
usort($compressed_weekly_product, function ($a, $b) {
    return $a['weekly_amount'] > $b['weekly_amount'] ? -1 : 1;
});
foreach ($compressed_weekly_product as $item) {
    if (sizeof($top_weekly_product) < 3) {
        $top_weekly_product[] = $item;
    }
}
// testing
$growth_percentage = 1.8;
return view('panel.outlet.dashboard.index', [
    'is_dashboard' => true,
    'total_customer' => sizeof(Customer::where('store_id', $store->id)->get()),
    'total_revenue' => Order::where('store_id', $store->id)->whereIn('status', ['2', '3', '-'])->sum('total_amount'),
    'total_order' => sizeof(Order::where('store_id', $store->id)->get()),
    'total_sales' => sizeof(Order::where('store_id', $store->id)->whereIn('status', ['2', '3', '-'])->get()),
    'total_product' => sizeof(Product::where('store_id', $store->id)->get()),
    'total_sales_income' => Order::where('store_id', $store->id)->whereIn('status', ['2', '3', '-'])->sum('total_amount'),
    'growth_percentage' => round($growth_percentage, 2),
    'lastest_order' => Order::where(['store_id' => $store->id, 'type' => 'app'])->orderBy('id', 'DESC')->limit(10)->get(),
    'report_date' => $compressed_date,
    'top_weekly_product' => $top_weekly_product,
    'weekly_product' => $compressed_weekly_product,
    'weekly_report' => DailyReport::where('store_id', $store->id)
        ->whereBetween('created_at', [Carbon::now()->startOfWeek(), Carbon::now()->endOfWeek()])->get(),
]);
}
Can anyone help me with this problem? I had a similar experience when I tried to truncate a string in my Blade view. Does it have something to do with the configuration in my php.ini?
Thanks, I hope to get a solution for this problem.
This error happens when the max_execution_time of your PHP is reached. From the look of your error, it is probably set to 60 seconds.
You can increase this limit directly in your php.ini file (use the command php --ini to see where it is located on your machine) or try to optimize your code.
If you don't want to edit the max_execution_time permanently, you can also add the instruction:
set_time_limit($seconds);
at the beginning of your script. I would not recommend this solution.
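If you do use it, a minimal sketch (300 seconds is an arbitrary illustrative value):

```php
<?php
// Raise the execution limit for this script/request only; the
// php.ini default is left untouched. 0 would disable the limit.
set_time_limit(300);

// ...long-running work here...
```

This resets the timeout counter at the point it is called, which is why it belongs at the top of the script.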
You can set it in the php.ini file with the max_execution_time variable. The default is 60 seconds; you can change it according to your needs.
Symfony\Component\ErrorHandler\Error\FatalError
Maximum execution time of 60 seconds exceeded
There was a problem with the route. Check your web.php:
Route::get('feedback', 'App\Http\Controllers\FeedBackController#index')->name('feedback.index');
changed to
Route::get('cfeedback', 'App\Http\Controllers\FeedBackController#index')->name('feedback.index');
I only added a 'c' before 'feedback'.
I was having the same issue.
Running PHP via a software collection, the mbstring package was not installed.
# dnf install -y php73-php-mbstring
# systemctl restart php73-php-fpm
After installing the package and restarting the service, it worked well.
In your php.ini file, uncomment extension=mbstring and you will see the error go away.
In PHP, it's possible to register shutdown functions, which sometimes get ignored, but which are definitely called in my scenario; see below.
PHP/libxml, which backs the DOMDocument class in PHP, does not play along well with my registered shutdown functions if I want to call ->save() (->saveXML() works fine) after a user abort (e.g. from a registered shutdown function or a class instance destructor). Related is also PHP's connection handling.
Let the examples speak:
PHP version:
php --version
PHP 7.1.4 (cli) (built: Apr 25 2017 09:48:36) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.1.0, Copyright (c) 1998-2017 Zend Technologies
To reproduce the user abort, I'm running the PHP script through python2.7 run.py:
import subprocess
cmd = ["/usr/bin/php", "./user_aborted.php"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
# As this process exits here and the user_aborted.php
# has sleeps/blocks in a for-cycle, this simulates a user abort
# for the php subprocess.
The PHP script user_aborted.php that tries to save the XML in a shutdown function:
<?php
ignore_user_abort(false);
libxml_use_internal_errors(true);
$xml = new DOMDocument();
$xml_track = $xml->createElement( "Track", "Highway Blues" );
$xml->appendChild($xml_track);
function shutdown() {
    global $xml;
    $out_as_str = $xml->saveXML();
    fwrite(STDERR, "\nout_as_str: " . var_export($out_as_str, true) . "\n");
    $out_as_file = $xml->save('out.xml');
    fwrite(STDERR, "\nout_as_file: >" . var_export($out_as_file, true) . "<\n");
    fwrite(STDERR, "\nerrors: \n" . var_export(libxml_get_errors(), true) . "\n");
}
register_shutdown_function('shutdown');
$i = 2;
while ($i > 0) {
    fwrite(STDERR, "\n PID: " . getmypid() . " aborted: " . connection_aborted());
    echo("\nmaking some output on stdout"); // so user_abort will be checked
    sleep(1);
    $i--;
}
Now, if I run this script without a user abort, simply calling php user_aborted.php, the XML gets saved properly.
However, when calling it through python2.7 (which simulates the user abort by exiting the parent process) with python2.7 run.py, the weirdest things happen:
the out_as_str value is fine and looks like the XML I wanted
BUT the file out.xml is empty
ALSO libxml_get_errors reports flush problems
The output with python looks like this:
python2.7 run.py
PID: 16863 aborted: 0
out_as_str: '<?xml version="1.0"?>
<Track>Highway Blues</Track>
'
out_as_file: >false<
errors:
array (
0 =>
LibXMLError::__set_state(array(
'level' => 2,
'code' => 1545,
'column' => 0,
'message' => 'flush error',
'file' => '',
'line' => 0,
))
)
Sorry for the long post, but I was looking through the PHP/libxml2 code the whole day without any success. :/
Reason:
It turns out this is due to a fix for a previous bug.
Links:
the previous PHP bug ticket whose fix introduced the defect
the commit that introduces the defect (GitHub)
The linked php_libxml_streams_IO_write function is the write callback (set in ext/libxml/libxml.c) for the buffer of the docp object, which is handed over for the libxml call in ext/dom/document.c. This ends up in libxml's xmlIO.c, where the buffer is NULL, so the file given to ->save(*) does not get written.
Workaround:
Use ->saveXML() to get the XML representation as a string and write it "by hand" using file_put_contents(*):
$xml_as_str = $xml->saveXML();
file_put_contents('/tmp/my.xml', $xml_as_str);
I am trying to write a piece of PHP code with the Zend Framework. I'm using Zend_Http_Client. The code works randomly! I mean, it works fine sometimes, and sometimes I get an empty page and this error in the Apache error log:
[Mon May 27 16:46:37 2013] [error] [client 4.4.4.4] PHP Warning: require_once(/var/www/my.somesite.com/library/Zend/Http/Client/Adapter/Exception.php): failed to open stream: Too many open files in /var/www/my.somesite.com/library/Zend/Http/Client/Adapter/Socket.php on line 222
[Mon May 27 16:46:37 2013] [error] [client 4.4.4.4] PHP Fatal error: require_once(): Failed opening required 'Zend/Http/Client/Adapter/Exception.php' (include_path='/var/www/my.somesite.com/application/../library:../application/models:.:/usr/share/php:/usr/share/pear') in /var/www/my.somesite.com/library/Zend/Http/Client/Adapter/Socket.php on line 222
[Mon May 27 16:46:37 2013] [error] [client 4.4.4.4] PHP Fatal error: Undefined class constant 'PRIMARY_TYPE_NUM' in /var/www/my.somesite.com/library/Zend/Session/SaveHandler/DbTable.php on line 522
The PHP code is something like this:
public function Request($server_method, $params_arr) {
    $httpClient = new Zend_Http_Client;
    $httpClient->setConfig(array('timeout' => '900'));
    $client = new Zend_XmlRpc_Client(Zend_Registry::getInstance()->config->ibs->xmlrpc_url, $httpClient);
    $request = new Zend_XmlRpc_Request();
    $response = new Zend_XmlRpc_Response();
    $request->setMethod($server_method);
    $request->setParams(array($params_arr));
    $client->doRequest($request, $response);
    if ($response->isFault()) {
        $fault = $response->getFault();
        //echo '<pre>' . $fault->getCode() . '' . $fault->getMessage() . '</pre>';
        $this->response = array(FALSE, $fault->getMessage());
        return array(FALSE, $fault->getMessage());
    }
    //return $response;
    $this->response = array(TRUE, $response->getReturnValue());
    return array(TRUE, $response->getReturnValue());
    //var_dump($response->getReturnValue());
}
Where is the problem ?
The problem may not be related to your method itself.
You are opening many files and not closing them (a socket counts as an open file too). The socket adapter itself has a configuration option called persistent; set it to false to prevent TCP connection reuse.
Try to check whether your HTTP client is properly destroyed at the end of its use and is not referenced in another place in your code (which would prevent the garbage collector from cleaning it up).
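A sketch of both suggestions follows; the adapter name and the persistent option are from Zend Framework 1's Zend_Http_Client_Adapter_Socket, so verify the exact configuration keys against your ZF version:

```php
<?php
// Sketch: configure the socket adapter not to keep TCP connections
// open between requests, then drop the reference when done so the
// garbage collector can close the underlying socket.
$httpClient = new Zend_Http_Client();
$httpClient->setConfig(array(
    'adapter'    => 'Zend_Http_Client_Adapter_Socket',
    'persistent' => false,  // do not reuse the TCP connection
    'timeout'    => 900,
));

// ... run the XML-RPC request as in the question ...

$httpClient = null; // release the client so its socket can be closed
```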
More info:
Check the limits with ulimit -aH (the maximum number of open files)
There are some numbers in /etc/security/limits.conf too:
soft nofile 1024 <- Soft limit
hard nofile 65535 <- Hard limit
You could increase the limit with ulimit -n 65535 and echo 65535 > /proc/sys/fs/file-max to set a higher value, but this is strongly discouraged.
To set this permanently, set fs.file-max=65535 in /etc/sysctl.conf.
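Before raising anything, it may help to check how close you actually are to the limit. A quick sketch, using the current shell's own pid as a stand-in for the Apache/PHP process id:

```shell
# Per-process soft limit on open files
ulimit -n

# Count the file descriptors a process currently holds
# (replace $$ with the pid of the Apache/PHP worker)
ls /proc/$$/fd | wc -l
```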
I am trying to run my CakePHP shell script, but the output looks like the following:
-bash-3.2$ ../cake/console/cake audit
../cake/console/cake: line 30:/root/site/app: is a directory
Array
(
[0] => /root/site/cake/console/cake.php
[1] => -working
[2] =>
[3] => audit
)
Notice: Uninitialized string offset: 0 in /root/site/cake/console/cake.php on line 550
What am I doing wrong? Here are the contents of this file:
cake.php
function __parseParams($params) {
    $count = count($params);
    for ($i = 0; $i < $count; $i++) {
        if (isset($params[$i])) {
            if ($params[$i]{0} === '-') {
                $key = substr($params[$i], 1);
                $this->params[$key] = true;
                unset($params[$i]);
                if (isset($params[++$i])) {
                    if ($params[$i]{0} !== '-') { // This is line 550
                        $this->params[$key] = str_replace('"', '', $params[$i]);
                        unset($params[$i]);
                    } else {
                        $i--;
                        $this->__parseParams($params);
                    }
                }
            } else {
                $this->args[] = $params[$i];
                unset($params[$i]);
            }
        }
    }
}
Focus on the first error
Whenever debugging something that's broken, it's a good idea to focus on the first error and not the fallout from it. The first error message is this line:
line 30:/root/site/app: is a directory
It comes from the cake bash script, before PHP is called. That line in the most recent 1.3 version is blank, so it's not obvious which specific version of Cake you are using, but it isn't the latest 1.3 release.
The consequence of the above error is that the following command is called:
exec php -q "/root/site/cake/console/"cake.php -working "" "audit"
^^
The parameters passed to cake.php specify that the working directory is an empty string, something which is abnormal and later causes an undefined index error.
Upgrading cures all ails
Most likely, this specific error can be solved by copying cake.php from the latest version of the same release cycle you are using.
Also consider simply upgrading CakePHP itself to the latest release (from the same major version in use), which will likely fix this specific problem and others; this is especially relevant if there have been security releases, which recently there have been.
I have a script to limit the execution time of commands.
limit.php
<?php
declare(ticks = 1);

if ($argc < 2) die("Wrong parameter\n");

$cmd = $argv[1];
$tl = isset($argv[2]) ? intval($argv[2]) : 3;
$pid = pcntl_fork();
if (-1 == $pid) {
    die('FORK_FAILED');
} elseif ($pid == 0) {
    exec($cmd);
    posix_kill(posix_getppid(), SIGALRM);
} else {
    pcntl_signal(SIGALRM, create_function('$signo', "die('EXECUTE_ENDED');"));
    sleep($tl);
    posix_kill($pid, SIGKILL);
    die("TIMEOUT_KILLED : $pid");
}
Then I test this script with some commands.
TEST A
php limit.php "php -r 'while(1){sleep(1);echo PHP_OS;}'" 3
After 3 seconds, we can see that the processes were killed as expected.
TEST B
Remove the output code and run again.
php limit.php "php -r 'while(1){sleep(1);}'" 3
The result does not look good: the process created by exec() was not killed as in TEST A.
[alix#s4 tmp]$ ps aux | grep whil[e]
alix 4433 0.0 0.1 139644 6860 pts/0 S 10:32 0:00 php -r while(1){sleep(1);}
System info
[alix#s4 tmp]$ uname -a
Linux s4 2.6.18-308.1.1.el5 #1 SMP Wed Mar 7 04:16:51 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[alix#s4 tmp]$ php -v
PHP 5.3.9 (cli) (built: Feb 15 2012 11:54:46)
Copyright (c) 1997-2012 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
Why were the processes killed in TEST A but not in TEST B? Does the output affect the SIGKILL?
Any suggestions?
There is a PIPE between php -r 'while(1){sleep(1);echo PHP_OS;}' (process C) and its parent (process B). posix_kill($pid, SIGKILL) sends the KILL signal to process B, so process B is terminated, but process C doesn't know anything about the signal and continues to run, writing output to the now-broken pipe. When process C receives the resulting SIGPIPE signal, it has no idea how to handle it, so it exits.
You can verify this with strace (run php limit.php "strace php -r 'while(1){sleep(1); echo PHP_OS;};'" 1), and you will see something like this:
14:43:49.254809 write(1, "Linux", 5) = -1 EPIPE (Broken pipe)
14:43:49.254952 --- SIGPIPE (Broken pipe) # 0 (0) ---
14:43:49.255110 close(2) = 0
14:43:49.255212 close(1) = 0
14:43:49.255307 close(0) = 0
14:43:49.255402 munmap(0x7fb0762f2000, 4096) = 0
14:43:49.257781 munmap(0x7fb076342000, 1052672) = 0
14:43:49.258100 munmap(0x7fb076443000, 266240) = 0
14:43:49.258268 munmap(0x7fb0762f3000, 323584) = 0
14:43:49.258555 exit_group(0) = ?
As for php -r 'while(1){sleep(1);}', no broken pipe occurs after its parent dies, so it continues to run as observed.
Generally speaking, you should kill the whole process group, not only the process itself, if you want to kill its children too. With PHP you can add process B to its own process group and then kill the whole group. Here is the diff against your code:
--- limit.php 2012-08-11 20:50:22.000000000 +0800
+++ limit-new.php 2012-08-11 20:50:39.000000000 +0800
@@ -9,11 +9,13 @@
if (-1 == $pid) {
die('FORK_FAILED');
} elseif ($pid == 0) {
+ $_pid = posix_getpid();
+ posix_setpgid($_pid, $_pid);
exec($cmd);
posix_kill(posix_getppid(), SIGALRM);
} else {
pcntl_signal(SIGALRM, create_function('$signo',"die('EXECUTE_ENDED');"));
sleep($tl);
- posix_kill($pid, SIGKILL);
+ posix_kill(-$pid, SIGKILL);
die("TIMEOUT_KILLED : $pid");
}
You send the kill signal to your forked process, but it does not propagate to its children or grandchildren. As such, they are orphaned and continue running until something stops them. (In this case, any attempt to write to stdout causes an error that then forces them to exit. Redirecting the output would probably result in indefinitely running orphans.)
You want to send a kill signal to the process and all its children. Unfortunately, I lack the knowledge to tell you a good way to do that; I'm not very familiar with the process-control functionality of PHP. You could parse the output of ps.
One simple way I found that works, though, is to send a kill signal to the whole process group with the kill command. It's messy, and it adds an extra "Killed" message to the output on my machine, but it seems to work.
<?php
declare(ticks = 1);

if ($argc < 2) die("Wrong parameter\n");

$cmd = $argv[1];
$tl = isset($argv[2]) ? intval($argv[2]) : 3;
$pid = pcntl_fork();
if (-1 == $pid) {
    die('FORK_FAILED');
} elseif ($pid == 0) {
    exec($cmd);
    posix_kill(posix_getppid(), SIGALRM);
} else {
    pcntl_signal(SIGALRM, create_function('$signo', "die('EXECUTE_ENDED');"));
    sleep($tl);
    $gpid = posix_getpgid($pid);
    echo("TIMEOUT_KILLED : $pid");
    exec("kill -KILL -{$gpid}"); // This will also cause the script to kill itself.
}
For more information see: Best way to kill all child processes