Intermittent PHP Abstract Class Error

I've been fighting this for a while and can't figure it out; maybe someone else has, or maybe there's a deeper issue here with Slim, PHP, Apache, etc. After working just fine for hours, my Slim install will start giving this on all routes:
Fatal error: Class Slim\Collection contains 1 abstract method and must therefore be declared abstract or implement the remaining methods (IteratorAggregate::getIterator) in F:\Projects\example\server\vendor\slim\slim\Slim\Collection.php on line 21
Maddeningly this issue goes away if I restart Apache. (For a few hours anyway.)
I found this where someone had a similar problem two years ago, and the person helping badgered them without actually assisting at all: https://community.apachefriends.org/viewtopic.php?p=250966&sid=96ef58aaeb7fe142a7dcdfd506a8683f
I've tried doing a clean wipe and install of my composer vendor directory. This doesn't fix it. I can clearly see that getIterator is implemented as expected in the file in the error message.
PHP Version 7.0.12, Windows 7, x86 PHP Build
It happened again after a few hours, with a different but similar error message:
Fatal error: Class Pimple\Container contains 1 abstract method and must therefore be declared abstract or implement the remaining methods (ArrayAccess::sqlserver) in F:\Projects\example\server\vendor\pimple\pimple\src\Pimple\Container.php on line 34
This question has a similar problem and "solves" it by restarting PHP, but that clearly isn't an actual solution, and I don't have opcache enabled:
PHP 7, Symfony 3: Fatal error 1 abstract method and must therefore be declared abstract or implement the remaining methods
Any guesses? Remember: This message is in files I didn't write, and goes away on Apache restart. Is there some caching with PHP 7 that would cause this?
Edit 3/10/17:
Yes, I've opened a ticket with Slim. I also saw it in a non-Slim file (Pimple), so I don't think it is a Slim issue.
https://github.com/slimphp/Slim/issues/2160
As I said, my opcache is off. I've confirmed this is true both in the php.ini file and looking at phpinfo().

I think you've run into this opcache bug. It isn't exactly the same situation, but it's probably related.
After calling the opcache_reset() function we encounter some weird errors.
It happens randomly on servers (10 of 400 production servers).
Some letters are replaced by others, classes seem to be already declared,
etc.
Example of errors triggered after opcache_reset():
PHP Fatal error: Class XXX contains 1 abstract method and must therefore be declared abstract or implement the remaining methods
(YYY::funczzz) in /dir/dir/x.php on line 20
The ticket is closed because the developers didn't have enough information to reproduce it. If you can come up with a minimal reproducible case, I recommend reporting it: create a very small Slim app, then use JMeter or another load-testing tool to fire many requests at it, and post your findings.
Meanwhile, the only workaround may be to turn opcache off in php.ini:
opcache.enable=0
Of course this will hurt performance considerably. Until the bug is fixed, you'll have to choose between performance and periodically restarting Apache.
If turning the cache off doesn't help, then the only cause I can think of is an intermittent problem with the opcode compiler: cached or not, the compiled version must contain an error. Opening a reproducible ticket with the PHP devs, or debugging the PHP source yourself, would be the only way forward if this is the cause.
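One more sanity check before blaming the compiler: phpinfo() output is easy to misread, and the CLI and Apache SAPIs often load different ini files. A small sketch like this, dropped into a route and requested through Apache, confirms whether OPcache is actually active for the SAPI that shows the error (the helper name is mine, not a standard API):

```php
<?php
// Report whether OPcache is loaded AND enabled for the currently running SAPI.
// Run this through Apache, not the CLI, since their ini files may differ.
function opcacheActive()
{
    return function_exists('opcache_get_status')
        && (bool) ini_get('opcache.enable');
}

var_dump(opcacheActive());   // false means OPcache cannot be the culprit here
```

If this prints false under Apache, the intermittent-corruption theory needs a different suspect.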

I had the same problem using CodeIgniter and PHP 7.1.x.
I upgraded to PHP 7.2 and the problem no longer occurred.

If you develop on Windows, I would recommend that you DON'T use XAMPP or WAMP, and instead try a real development server using Linux on a VM.
Try installing Vagrant and VirtualBox, then head to puphpet.com, which can generate a virtual-machine configuration for you. Unzip the download, cd into the folder, and type vagrant up. Then just point your host at the VM. I'll bet that once you have a real development environment, this error will go away. Your other option is Docker, but that has a bit of a learning curve.
The problem isn't your code (or your vendor code), but your platform.

I have encountered this exact behaviour, and it was not exactly an opcache bug, even though it was caused by opcache.
The problem was that we had several classes with the same base name, e.g.:
Request\GenericProtocol\Dispatcher (abstract)
Request\Protocol1\Dispatcher
Request\Protocol2\Dispatcher
Now, by default on our installation, opcache used an "optimization" that took only the basename as the cache key. As a result, whenever a script happened to instantiate a Protocol2 Dispatcher on a clean cache, it subtly sabotaged all subsequent calls that needed Protocol1. Thanks to our usage patterns, this masqueraded as any number of other bugs.
In the end we just activated the appropriate option:
opcache.use_cwd boolean
If enabled, OPcache appends the current working directory to the script key, thereby eliminating possible collisions between files with the same base name. Disabling this directive improves performance, but may break existing applications.
The breaking condition is this: you have at least two classes with the same basename.
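For reference, in php.ini this is a single directive (note the documented default is already 1; our installation had turned it off as an optimization):

```ini
; php.ini: include the working directory in the opcode cache key so that
; two files both named Dispatcher.php in different trees cannot collide
opcache.use_cwd=1
```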
Our next iteration is indeed scheduled to rename a lot of classes:
Request\Protocol1\Dispatcher ==> Request\Protocol1\Protocol1Dispatcher
so that we can disable use_cwd again and squeeze out a few percent of performance (the PTBs and PHBs believe it is worth it), but I know that this may not be possible with every framework out there.

Related

Laravel 5 losing sessions and .env configuration values in AJAX-intensive applications

I am using Laravel 5 (to be specific, "laravel/framework" version is "v5.0.27"), with session driver = 'file'.
I'm developing on Windows 7 64 bit machine.
I noticed that sometimes (once a week or so) I get unexpectedly and randomly logged out. Sometimes this happens even immediately after I log in. I added log messages to my auth logic, but the log code was never triggered. Laravel behaved as if it had completely lost the session file.
Another, more serious issue was that sometimes, after debugging sessions (using Xdebug and NetBeans), Laravel started losing other files as well: .env settings, some debugbar JS files, etc. The error log had messages like:
[2015-07-08 13:05:31] local.ERROR: exception 'ErrorException' with message 'mcrypt_encrypt(): Key of size 7 not supported by this algorithm. Only keys of sizes 16, 24 or 32 supported' in D:\myproject\vendor\laravel\framework\src\Illuminate\Encryption\Encrypter.php:81
[2015-07-08 13:05:31] local.ERROR: exception 'PDOException' with message 'SQLSTATE[HY000] [1044] Access denied for user ''#'localhost' to database 'forge'' in D:\myproject\vendor\laravel\framework\src\Illuminate\Database\Connectors\Connector.php:47
This clearly signals that the .env file was not read by Laravel, so it fell back to default settings:
'database' => env('DB_DATABASE', 'forge'),
'key' => env('APP_KEY', 'somekey'),
Losing files happened rarely, maybe once a month or so, and always after debugging sessions. I always had to restart Apache to make things work again.
To stress-test the system and reproduce the issues reliably, I used a quick hack in my Angular controller:
setInterval(function () {
    $scope.getGridPagedDataAsync();
}, 500);
It is just a basic data request from Angular to Laravel.
And that was it: now I could reproduce the session loss and the .env loss in 3 minutes or less.
I have developed AJAX-intensive web applications earlier on the same PC with the same Apache+PHP, but without Laravel, without .env, and I hadn't noticed such issues before.
While debugging through the code, I found out that Laravel does not use PHP's built-in sessions at all, but implements its own file-based session handling. Obviously, it does not provide the same reliability as default PHP sessions, and I'm not sure why.
Of course, in real-life scenarios my app won't be that AJAX-intensive, but in my experience just two simultaneous AJAX requests can sometimes be enough to lose the session.
I have seen some related Laravel bug reports about various session issues. I haven't seen anything about dotenv yet, but it seems to suffer from the same problem.
My guess is that Laravel does not use file locking and waiting; thus if a file cannot be read for some reason (perhaps locked by some parallel process or by Apache), Laravel just gives up and returns whatever it can.
Is there any good solution to this? Maybe it is specific to Windows and the problems will go away on a Linux machine?
I'm curious why the Laravel (or Symfony) developers haven't fixed their file session driver yet. I know that locking and waiting would slow it down, but it would be great to at least have an option to turn on "reliable sessions".
Meanwhile I'll try to step through the Laravel code and see if I can invent some quick-and-dirty fix, but it would be much better to have a reliable, best-practices solution.
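To make the locking idea concrete, here is a minimal sketch of the kind of read/write discipline a file-backed session store could use: a shared lock for readers and an exclusive lock for writers, so a concurrent AJAX request can never observe a half-written file. This is illustrative only, not Laravel's actual implementation:

```php
<?php
// Readers take a shared lock: many can read at once, but all wait for writers.
function readWithLock($path)
{
    $fp = fopen($path, 'rb');
    if ($fp === false) {
        return '';
    }
    flock($fp, LOCK_SH);               // block until no writer holds the lock
    $data = stream_get_contents($fp);
    flock($fp, LOCK_UN);
    fclose($fp);
    return $data === false ? '' : $data;
}

// Writers take an exclusive lock: everyone else waits until the write is done.
function writeWithLock($path, $data)
{
    $fp = fopen($path, 'cb');          // create if missing, don't truncate yet
    flock($fp, LOCK_EX);
    ftruncate($fp, 0);                 // truncate only once we own the lock
    fwrite($fp, $data);
    fflush($fp);
    flock($fp, LOCK_UN);
    fclose($fp);
}
```

flock() works on both Windows and *nix, which is exactly why the lack of it in the framework's driver is surprising.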
Update about .env
The issue turned out not to be related to file locking. I found the Laravel bug report for the .env issue, which led me to a linked report in the Dotenv project, which in turn says it is a core PHP issue. What disturbs me is that the Dotenv devs say Dotenv was never meant to be used in production, yet Laravel seems to rely upon it.
In https://github.com/laravel/framework/pull/8187 there seems to be a solution that should work in one direction, but a commenter says that in their case the issue was the opposite. Someone called crynobone gave a clever code snippet to try:
$value = array_get($_ENV, $key, getenv($key));
There was another suggestion, on both the Dotenv and Laravel GitHubs, to use makeMutable(), but commenters report that this might break tests.
So I tried crynobone's code, but it did not work for me. While debugging, I found that when things break down under concurrent requests, the $key cannot be found in getenv(), nor in $_ENV, nor even in $_SERVER.
The only thing that worked (a quick and dirty experiment) was to add:
static::$cached[$name] = $value;
to the Dotenv class, and then in the helpers.php env() method I see that:
Dotenv::$cached[$key]
is always good, even when $_ENV and getenv() both return nothing.
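The rough shape of that quick-and-dirty caching workaround is sketched below: keep a static copy of every value at load time, so that even when $_ENV and getenv() come back empty under concurrent requests, the lookup helper still has the value. The class and method names here are my own assumptions, not the real Dotenv patch:

```php
<?php
// Static in-process cache of env values, filled once at load time.
final class EnvCache
{
    public static $cached = [];

    public static function load(array $pairs)
    {
        foreach ($pairs as $key => $value) {
            self::$cached[$key] = $value;      // survives even if putenv "loses" it
            putenv($key . '=' . $value);
            $_ENV[$key] = $value;
        }
    }

    // Lookup order mirrors the env() helper: getenv(), then $_ENV, then cache.
    public static function get($key, $default = null)
    {
        $value = getenv($key);
        if ($value !== false) {
            return $value;
        }
        if (array_key_exists($key, $_ENV)) {
            return $_ENV[$key];
        }
        return array_key_exists($key, self::$cached) ? self::$cached[$key] : $default;
    }
}
```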
Although Dotenv was not meant for production, I don't want to change our deployment and configuration workflow.
Next I'll have to investigate the session issues...
Addendum
Related Laravel bug reports (some even from version 4 and, it seems, still not fixed):
https://github.com/laravel/framework/issues/4576
https://github.com/laravel/framework/issues/5416
https://github.com/laravel/framework/issues/8172
and an old article which sheds some light on what's going on (at least with session issues):
http://thwartedefforts.org/2006/11/11/race-conditions-with-ajax-and-php-sessions/
After two days of intensive debugging I have some workarounds which might be useful to others:
Here is my patch for Dotenv 1.1.0 and Laravel 5.0.27 to fix .env issues:
https://gist.github.com/progmars/db5b8e2331e8723dd637
And here is my workaround patch that makes the session issues much less frequent (or fixes them completely, if you don't write to the session yourself on every request):
https://gist.github.com/progmars/960b848170ff4ecd580a
I tested them with Laravel 5.0.27 and Dotenv 1.1.0.
I also recently recreated the patches for Laravel 5.1.1 and Dotenv 1.1.1:
https://gist.github.com/progmars/e750f46c8c12e21a10ea
https://gist.github.com/progmars/76598c982179bc335ebb
Make sure you add
'metadata_update_threshold' => 1,
to your config/session.php for this patch to become effective.
All the patches must be reapplied to the vendor folder each time it gets recreated before deployment. You might want to keep the session patch separate, since config/session.php only needs updating once.
Be warned: "It works on my machine". I really hope Laravel and Dotenv developers will come up with something better, but meanwhile I can live with these fixes.
My personal opinion is that using .env to configure Laravel was a bad decision; having .php files with key/value-style configuration was much better.
However, the problem you are experiencing is not PHP's fault, nor Apache's: it's most likely a Windows issue.
A few other things: Apache has a module, mod_php, that integrates the PHP binary into Apache's own process or thread. The issue is that this is not only slow; getting one binary integrated into another is super tricky, and things can be missed. PHP must also be built thread-safe in this case. If it's not, weird bugs can (and will) occur.
To circumvent this tricky integration completely, we can serve .php over the FastCGI protocol instead. This means the web server (Apache or Nginx) takes the HTTP request and passes it on to another "web" server: in our case, the PHP FastCGI Process Manager, or PHP-FPM.
PHP-FPM is the preferred way of serving .php pages: not only is it much, much faster than integrating via mod_php, it also lets you scale horizontally, with multiple machines serving .php pages behind your HTTP frontend.
However, PHP-FPM is what's called a supervisor process, and it relies on process control. As far as I'm aware, Windows does not support process control the way *nix does, therefore php-fpm is unavailable for Windows (in case I am wrong here, please correct me).
What does all of this mean for you? It means that you should use software that's designed to play nicely with what you want to do.
This is the logic that should be followed:
A web server accepts HTTP requests (Apache or Nginx)
The web server validates the request, parses the raw HTTP request, determines whether the request is too big, and if everything checks out, proxies the request to php-fpm.
php-fpm processes the request (in your case it boots up Laravel) and returns the HTML, which the web server sends back to the user.
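The hand-off in step two is a few lines of web-server configuration. A minimal Nginx version looks like this (the socket path varies by distribution; some setups use 127.0.0.1:9000 instead):

```nginx
# Pass every .php request to the FPM pool over FastCGI
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # or 127.0.0.1:9000
}
```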
Now this process, while great, comes with a few issues, and one huge problem here is how PHP deals with sessions. A default PHP session is a file stored somewhere on the server. This means that if you have two physical machines serving your php-fpm, you're going to have problems with sessions.
This is where Laravel does something great: it lets you use encrypted cookie-based sessions. They come with limitations (you can't store resources in them, and there's a size limit), but a correctly built app wouldn't store too much info in a session in the first place. There are, of course, multiple ways of dealing with sessions, but in my opinion the encrypted cookie is super trivial to use and powerful.
When such a cookie is used, it's the client who carries the session information, and any machine that has the decryption key (the APP_KEY in .env) can read the session. That means you can easily scale your setup to multiple servers: all they have to do is share the same key. Basically, you copy the same Laravel installation to every machine that serves your project.
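Assuming a stock Laravel 5 install (where config/session.php reads the SESSION_DRIVER environment variable), switching to the encrypted cookie driver should be a one-line .env change:

```ini
SESSION_DRIVER=cookie
```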
The way I would deal with the issue that you have while developing is the following:
Use a virtual machine (let's say Oracle Virtualbox)
Install Ubuntu 14.04
Map a network drive to your Windows host machine (using Samba)
Use your preferred way of editing PHP files, but they would be stored on the mapped drive
Boot nginx or Apache, along with php-fpm, on the VM to serve your project
Here is what you gain from this setup: you don't pollute your Windows machine with a program listening on ports 80/443; when you're done working you can just shut the VM down without losing work; you can easily simulate how your website will behave on an actual production machine; and you avoid surprises like "it works on my dev machine but not on my production machine", because both run the same software.
These are my opinions, they are not all cold-facts and what I wrote here should be taken with a grain of salt. If you think what I wrote might help you, then please try to approach the problem that way. If not, well, no hard feelings and I wish you good luck with your projects.

PHP make test failures

I'm rebuilding and upgrading PHP (5.3.2 -> 5.5.14) to match the current installation, except with the addition of the pthreads extension.
My main question is about the seriousness of make test failures. Currently I'm sitting at 29 failures out of roughly 12,000 tests. (They are mostly DBA-related, and I probably just need to recompile that extension with different options.) A few of the failures reference a PHP bug number. I've visited the pages for those bugs; they were all closed a year or two ago, concern PHP 4.3 or thereabouts, and say the issues have been fixed.
Everything compiles and installs just fine (I haven't started Apache yet, so I don't know if it works 100%, but I've been able to run PHP scripts via the php command), so do I need to worry about the failures from make test? Or are they actually resolved, as the bug pages say, and the tests just haven't been updated? (I can link the actual cases if need be.)
Bug Codes:
Bug #36436 (DBA problem with Berkeley DB4) [ext/dba/tests/bug36436.phpt]
Bug #48240 (DBA Segmentation fault dba_nextkey) [ext/dba/tests/bug48240.phpt]
Bug #49125 (Error in dba_exists C code) [ext/dba/tests/bug49125.phpt]
Bug #42298 (pcre gives bogus results with /u) [ext/pcre/tests/bug42298.phpt]
Bug #42737 (preg_split('//u') triggers an E_NOTICE with newlines) [ext/pcre/tests/bug42737.phpt]
Bug #52971 (PCRE-Meta-Characters not working with utf-8) [ext/pcre/tests/bug52971.phpt]
The point of a test is that it should pass. If a test passes after a change is made, the tester can be confident that the behavior under test has not changed as a result. If a test fails after you make a change, it means the change altered or broke the underlying behavior.
Typically, leaving broken tests in the suite unfixed is bad practice, because you lose the value of having implemented the tests in the first place. If you are not trying to preserve the functionality, what is the point of the test?
Go back to your upgrade and see whether any of those bugs are serious. If they aren't, consider modifying or deleting the tests. It is possible that they failed because of differences between your PHP versions, or because of the way you performed the upgrade.

PHP auto-prepend buggy after out of memory error

This may be better suited to server fault but I thought I'd ask here first.
We have a file that is prepended to every PHP file on our servers using auto_prepend_file; it contains a class called Bootstrap that we use for autoloading, environment detection, etc. It's all working fine.
However, when there is an "out of memory" error directly preceding (i.e., less than a second before, or even at the same time as) a request to another file on the same server, one of three things happens:
Our check if (class_exists('Bootstrap')), which we used to wrap the class definition when we first got this error, returns true, meaning the class has already been declared despite this being the auto-prepend file.
We get a "cannot redeclare class Bootstrap" error from our auto-prepended file, meaning class_exists('Bootstrap') returned false but the class somehow still got declared.
The file is not prepended at all, leading to a one-time fatal error for files that depend on it.
We could, of course, try to fix the out-of-memory issues, since those seem to be causing the other errors, but for various reasons they are unfixable in our setup, or at least very difficult to fix. But that's beside the point: this looks to me like a PHP bug, with some sort of memory leak causing trouble with the auto-prepend directive.
This is more curiosity than anything since this rarely happens (maybe once a week on our high-traffic servers). But I'd like to know - why is this happening, and what can we do to fix it?
We're running FreeBSD 9.2 with PHP 5.4.19.
EDIT: A few things we've noticed while trying to fix this over the past few months:
It seems to happen only on our secure servers. The out-of-memory issues are predominantly on the secure servers (they're usually caused by our own employees trying to download too much data), so it could just be a coincidence, but it is worth pointing out.
The dump of get_declared_classes when we have this issue contains classes that are not used on the page that is triggering the error. For example, the output of $_SERVER says the person is on xyz.com, but one of the declared classes is only used in abc.com, which is where the out of memory issues usually originate from.
All of this leads me to believe that PHP is not doing proper end-of-cycle garbage collection after an out-of-memory error, which leaves the Bootstrap class entirely or partly in memory for the next page request if it comes soon enough after the error. I'm not familiar enough with PHP garbage collection to actually act on this, but I think this is the most likely explanation.
You might not be able to "fix" the problem without fixing the out of memory issue. Without knowing the framework you're using, I'll just go down the list of areas that come to mind.
You stated "they're usually from our own employees trying to download too much data". I would start there, as it could be the biggest and loudest opportunity for optimization; a few ideas come to mind.
If the data being downloaded is files, you could use streams to read them in fixed-size chunks, so memory is not gobbled up by big downloads.
Can you queue or throttle the downloads?
If the data comes from a database, then besides optimizing your queries you could rate-limit them, reduce the result-set sizes, and ideally move such workloads to a dedicated environment with mirrored data.
Ensure your code releases file pointers and database connections responsibly; leaving that to PHP's teardown can delay garbage collection and cause a sort of cascading effect in high-traffic situations.
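The chunked-streaming idea above can be sketched in a few lines: read and emit a fixed-size chunk at a time, so peak memory stays constant regardless of file size (the function name and chunk size are mine, chosen for illustration):

```php
<?php
// Stream a file to the client in fixed-size chunks instead of loading it
// whole with file_get_contents(); returns the number of bytes emitted.
function streamFile($path, $chunkBytes = 8192)
{
    $sent = 0;
    $fp = fopen($path, 'rb');
    if ($fp === false) {
        return 0;
    }
    while (!feof($fp)) {
        $chunk = fread($fp, $chunkBytes);
        if ($chunk === false || $chunk === '') {
            break;
        }
        echo $chunk;                 // hand the chunk to the output layer
        $sent += strlen($chunk);
        flush();                     // push it toward the client immediately
    }
    fclose($fp);
    return $sent;
}
```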
Other low hanging fruit when it comes to memory limits
You are running PHP 5.4.19; if your software permits it, consider updating to a more recent version (PHP 5.4 has not been patched since 2015); besides, PHP 7 comes with a whole slew of performance improvements.
If a client-side application is involved, monitor its XHR and overall network activity; look for excessive polling and hanging connections.
As for your autoloader: based on your comment ("The dump of get_declared_classes when we have this issue contains classes that are not used on the page that is triggering the error"), you may want to check the implementation to make sure it's not loading some sort of bundled class cache. If you are using Composer, composer dump-autoload might be helpful.
Sessions: I've seen applications load files based on cookies and sessions; if you have such a setup, I would audit that logic and ensure there are no sticky sessions loading unneeded resources.
It's clear from your question that you are running a multi-tenancy server. Without proper stats it's hard to be more specific, but the issue seems somewhat isolated based on your description, so I don't think it is a PHP issue.
Proper Debugging and Profiling
I would suggest installing a PHP profiler, even for a short time; New Relic is pretty good. You will be able to see exactly what is going on and have the data to fix the right problem. I think they have a free trial, which should get you pointed in the right direction. There are others too, but their names escape me at the moment.
Note that class_exists('Bootstrap') returns false if only an interface named Bootstrap exists, and yet you cannot declare a class and an interface with the same name.
Try guarding the declaration with if (!class_exists('Bootstrap') && !interface_exists('Bootstrap')) to make sure you do not redeclare.
Did you have a look at the __autoload function?
I believe you could work around this issue by defining a function like this in your code:
function __autoload($className)
{
    if (\file_exists($className . '.php')) {
        include_once $className . '.php';
    } else {
        // Declare a "ghost" class whose __call absorbs any method call.
        eval('class ' . $className . ' { function __call($method, $args) { return false; } }');
    }
}
If you have a file called Bootstrap.php with class Bootstrap declared inside it, PHP will automatically load the file; otherwise it declares a ghost class that can handle any method call, avoiding further error messages. Note that the ghost class relies on the __call magic method.

Corrupt heap in PHP script

zend_mm_heap corrupted is coming up as an error message in a PHP program I wrote to pre-render a large environment.
I suspect it's caused by too many variable assignments in the script, although I'm uncertain of this, since I wrote the script to hold only about 20 variables at any given time, one of which is an array that may contain up to 500 elements. That said, the total number of iterations is on the order of a few billion.
Am I correct in my suspicion, and if so, is there anything that can be done about it? Would it be better, for instance, to run the script for a while, then dump the important variables to a file and restart the script, making it pick up those variables and continue?
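The restart strategy proposed in the question can be sketched with plain serialize()/unserialize() checkpoints: every N iterations, dump the working set to disk, so a fresh process can resume where the previous one left off instead of re-running billions of iterations (function names are illustrative):

```php
<?php
// Persist the working state atomically enough for a single-writer script.
function saveCheckpoint($path, array $state)
{
    file_put_contents($path, serialize($state), LOCK_EX);
}

// Returns the saved state, or null if there is no usable checkpoint yet.
function loadCheckpoint($path)
{
    if (!is_file($path)) {
        return null;                   // no checkpoint: start from scratch
    }
    $state = @unserialize(file_get_contents($path));
    return is_array($state) ? $state : null;
}
```

The outer loop then becomes: load the checkpoint if one exists, run a bounded batch of iterations, save, and exit; a cron job or shell loop relaunches the script until the work is done.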
I've seen this problem and can reproduce it using Phalcon, but it seems to originate from the APC cache. I fixed it by switching from APC to Zend OPcache. You can try disabling APC to see if the error goes away.
The best I can reason from my investigation is that APC is doing something to memory that the Zend Engine is using. PS: it doesn't have anything to do with Zend Framework; the error relates to the parts of Zend that were merged into PHP itself.
The solution to your problem is to download the latest version of APC compatible with your PHP version.
You'll have to force-install it so that it overwrites the old version of APC. In many cases this will fix the issue you're having.

How to maintain older versions when PHP and its libraries keep upgrading

I had been using SQLite 2, which was included in the XAMPP bundle.
After a while I installed the latest XAMPP bundle, which ships with SQLite 3.
Now when I run my code I get an error, and I found that SQLite 2 is no longer available in the bundle.
Things like this happen with PHP and all its related libraries, for example the split() function, among others.
If it is the local system, no problem: we are going to update the code anyway. But on shared hosting, when the host upgrades to a new PHP version, the existing web pages throw errors.
How do we know that PHP is going to remove some function and replace it with a new one, instead of keeping the same name with upgraded functionality?
What happens when the host upgrades or changes the versions of existing functions on the server is that the website breaks. You see errors all over the page, many pages stop working, the SEO rating goes down if nobody notices, and users lose trust in the site. This happened with WordPress and also with MediaWiki, which I had been using for a while: when the host upgraded PHP recently, the modules did not load and I got fatal errors instead. This is nasty.
In this case it is hectic to keep upgrading your code at whatever interval PHP changes its functionality, and as far as I can tell this goes on endlessly.
What is the solution for this, both on the server side and on localhost?
This is an issue indeed. And the only solution to this is:
Consider your dependencies very well before writing code against them.
Stay on the ball.
Before you decide to use any one particular technology to depend on, research whether it is slated for deprecation or is otherwise not recommended to be used in the future. The PHP guys are pretty good about pointing these things out in the manual, so reading the related pages on php.net is often good enough.
The PHP developers are also pretty good about their deprecation process, moving very slowly for most of their APIs.
You will nevertheless need to stay on the ball. Follow the infrequent official announcements to get a feeling for what's changing and where you need to pay attention. The change log for each major PHP release is usually worth studying.
If you have code running on a system which frequently changes without your doing, you need to pay attention to your host's announcements as well. If they don't announce major changes in advance, look for another host.
Build your error handlers so you'll be notified via email about serious errors. You may want to include a script which checks for the availability of major dependencies and notifies you if any are missing.
If you have critical code, you should not run it on shared hosts which do not offer you enough control over the platform. Run your own servers and be careful about upgrading PHP. There's a reason why old versions receive maintenance updates for a while.
Typically a system administrator manages the deployed code and the servers; you should communicate to this person what you require from the server, and they should talk to you when major changes to the server are happening. If you are that person, as many sys-dev-ops are these days, you need to make this part of your job.
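The dependency-check script suggested above can be a short sketch like this: list the extensions and functions the site relies on, and report anything the (possibly upgraded) platform no longer provides, then wire the result into your error or notification handler:

```php
<?php
// Return a list of missing extensions/functions, empty if everything is fine.
function missingDependencies(array $extensions, array $functions)
{
    $missing = [];
    foreach ($extensions as $ext) {
        if (!extension_loaded($ext)) {
            $missing[] = 'extension: ' . $ext;
        }
    }
    foreach ($functions as $fn) {
        if (!function_exists($fn)) {
            $missing[] = 'function: ' . $fn;
        }
    }
    return $missing;    // e-mail this list if it is non-empty
}
```

Run it from a cron job or at the top of a health-check endpoint, with the lists tailored to whatever your site actually uses (pdo_mysql, mbstring, and so on).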
Good question.
I'd say you worry too much. The PHP team really cares about backward compatibility. I have been using PHP for over 10 years now; the web changed a lot in that time, and so did PHP. Changes go through a very long deprecation process and are announced long before they actually happen. Even then, in the cases I remember, it was still possible to do things the deprecated way in newer versions.
This all is valid for the PHP core and extensions which are delivered with PHP itself.
In the case of SQLite I can't recall what the deal was there; I never really used it.
In one of the newer PHP 5 versions they introduced the E_DEPRECATED bit in the error-reporting level.
If you set your error reporting to E_ALL or -1, you'll get a line logged whenever you use a deprecated function, so you are able to react early.
http://php.net/manual/en/function.error-reporting.php
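In code, turning this on for development is two lines (display_errors should stay off in production, where logging is the right channel):

```php
<?php
// Surface deprecation notices as soon as they appear.
// E_ALL has included E_DEPRECATED since PHP 5.4; -1 enables every level.
error_reporting(E_ALL);
ini_set('display_errors', '1');   // development only; log instead on production
```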
In addition, they release a list of deprecations and backward-incompatible changes for each version, e.g.:
http://php.net/manual/en/migration54.incompatible.php
All the changes announced there were already considered bad practice many years earlier, so hardly anyone should have to change code now.
I hope this is no longer killing you :) good luck
