Handle large .plist files with CFPropertyList - php

I'm using CFPropertyList from https://github.com/rodneyrehm/CFPropertyList for handling content I add with PHP.
It all worked fine, but now that all the content has been added, my file is about 700KB, which is not big, but apparently big enough to make Apache crash when it tries to save the file:
child pid 1278 exit signal Segmentation fault
I can see in Cachegrind that a lot of time in my application is spent in calls to CFPropertyList->import() and CFDictionary->toXML(), so where could the bottleneck be?
Am I making too many changes at once? Should I load() and save() in between changes more often, to avoid saving too many changes at once?
Any clue?

I do not think that it's the size that causes problems, but rather a bug in PHP. Segfaults occur only when there is a serious bug in PHP itself.
The next steps:
First, upgrade to the latest PHP version (5.3.6)
If it does not happen anymore, feel happy
If it still happens:
Reproduce the issue with a PHP script no longer than 20 lines.
Report the issue to bugs.php.net
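A minimal script along these lines is usually enough to attach to such a report (a sketch only; the method names follow the CFPropertyList README, and the file names are placeholders):

<?php
// Minimal repro sketch: load a large plist, make one change, save it again.
// 'big.plist' / 'big.out.plist' are placeholder file names.
require_once('CFPropertyList/CFPropertyList.php');

$plist = new CFPropertyList('big.plist');      // parse the existing file
$root  = $plist->getValue();                   // root node, usually a CFDictionary

if ($root instanceof CFDictionary) {
    $root->add('lastTouched', new CFDate(time()));
}

$plist->saveXML('big.out.plist');              // if it segfaults, it segfaults here
echo "survived\n";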

When you implement a searchNode() function on a document of unknown size, you should always use a "depth" parameter to avoid descending endlessly into the document and calling your function an enormous number of times recursively.
Otherwise you can create effectively infinite recursion, which also causes a segfault in PHP and does not end in a fatal error or warning.
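For example, a depth guard can look like this (a sketch; searchNode() and the traversal via getValue() are illustrative, modeled on CFDictionary/CFArray, not part of the library):

<?php
// Illustrative depth-limited recursive search. getValue() is assumed to
// return an array of child nodes, as CFDictionary/CFArray do.
define('MAX_SEARCH_DEPTH', 100);   // guard against runaway recursion

function searchNode($node, $needle, $depth = 0) {
    if ($depth > MAX_SEARCH_DEPTH) {
        return null;               // bail out instead of recursing until PHP segfaults
    }
    $children = $node->getValue();
    if (!is_array($children)) {
        return null;               // leaf node (string, number, ...)
    }
    foreach ($children as $key => $child) {
        if ($key === $needle) {
            return $child;
        }
        if (is_object($child)) {
            $found = searchNode($child, $needle, $depth + 1);
            if ($found !== null) {
                return $found;
            }
        }
    }
    return null;
}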

Related

Phing throws "Segmentation Fault" on a Copy Task with stripphpcomments in the filter chain

I'm using Phing to set up a build process for a large PHP project. I thought the stripphpcomments directive would be useful when copying files, so I added it. When I run Phing with this directive included, however, the copy process errors out with a "segmentation fault" message. After a lot of testing with exclude/include statements, I narrowed the culprits down to two files in particular -- jquery-1.4.2.min.js and a rather large HTML file.
I solved my problem by splitting my one fileset into two filesets: PHP class files and everything else, and applying the filterchain with stripphpcomments only to the first set, but I'm curious to know if anyone has run into this problem before, and what the condition is that causes the segmentation fault to be thrown. The only thing I can possibly imagine the two files above have in common is that they're both really long.
A "segmentation fault" is a crash of PHP itself. Upgrade to the latest php version (5.3.6 at the moment) and try it with that. If you still get the fault, look at #php.pecl on the EFnet IRC servers for instructions how to file a good PHP bug report.

How to isolate server disaster script in PHP?

Oh my goodness. I never thought I would need to ask this. But unfortunately, yes, I do!
I have a PHP script of my own that uses ffmpeg-php. And ffmpeg-php is a bastard. For some input it works OK, but for some it crashes PHP entirely and the server throws Internal Server Error 500. I've tried several times to update ffmpeg-php, ffmpeg itself and so on, but input that works in version 0.5 won't work in 0.6. What I need is to be sure that the rest of the script gets processed correctly. Right now it doesn't, because when it comes to running toGDImage() on a movie frame I get Internal Server Error 500 and no feedback as to why from any source.
So for the peace of mind of my users I decided that I need to isolate the part of the script that messes with ffmpeg-php. I need a way to ensure that if something goes terribly wrong in this part, the rest will go on.
Try/catch does not work because this is not a warning or a fatal error; it is a horrible server disaster. So what are your suggestions?
I'm thinking about putting this part into another file called ffmpeg-php-process.php, calling it via HTTP and reading the result; if it is Internal Server Error 500, I will know that it was not OK.
Or are there any other, neater ways to isolate disaster scripts in PHP?
PS: Don't write that I need to diagnose or debug or find the source of the error. I'm not a damn beginner, and I'm not an ffmpeg dev who can mess around in its code. I need to make my users safe now, and that's all I care about right now.
If you're getting a 500 error, it's because an exception of some sort is being thrown at a level lower than that of PHP itself. Unless your code is spinning into some kind of infinite loop or hitting a recursion limit (and especially since it worked with version 0.5), there's a good chance that ffmpeg or ffmpeg-php is crashing and taking the instance of PHP that launched it down with it.
Frankly, there's nothing you can do from PHP.
Your best bet would be, since you've already got access to the server, to write the script in question in a language like Python. There are a ton of ffmpeg Python plugins, so you shouldn't have a difficult time setting that up at all. Call your Python script from PHP and pull in the output from a file. What this will do is isolate PHP from your script failing. It'll also get you away from ffmpeg-php (which, at least to me, seems like an unholy combination).
If you're dead-set on using PHP (which I don't recommend), you can launch another PHP script using php-cli from your outward-facing PHP script and do the work from there (as you would with Python). Again, I highly recommend that you avoid this.
Hope this helps!
You could spawn a new process to contain your ffmpeg-php script. There are functions to do that, proc_open() for instance.
The documentation has a decent example of it:
http://php.net/proc_open
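Here is a sketch of that approach (the worker script name is the one proposed in the question; $movieFile is a hypothetical input):

<?php
// Run the fragile ffmpeg-php work in a separate PHP process, so that a
// crash there cannot take this script down with it.
$descriptors = array(
    0 => array('pipe', 'r'),   // child's stdin
    1 => array('pipe', 'w'),   // child's stdout
    2 => array('pipe', 'w'),   // child's stderr
);

$cmd  = 'php ffmpeg-php-process.php ' . escapeshellarg($movieFile);
$proc = proc_open($cmd, $descriptors, $pipes);

if (is_resource($proc)) {
    fclose($pipes[0]);
    $output = stream_get_contents($pipes[1]);
    $errors = stream_get_contents($pipes[2]);
    fclose($pipes[1]);
    fclose($pipes[2]);

    $exitCode = proc_close($proc);
    if ($exitCode !== 0) {
        // The worker segfaulted or died; this script survives and can react.
        error_log("ffmpeg worker failed ($exitCode): $errors");
    }
}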
I have something similar going on with a convoluted, large, bulky legacy PHP email system I support. When it became apparent that the email system was becoming its own beast, we split it off onto its own virtual server entirely. There's no separation like PHYSICAL separation. And hey, virtual servers are cheap...
On the plus side, you can start, restart, and generally destroy the separate server with little effect on the rest of your code. It may also have backup benefits (isolating media and logic). Since going this route, we've never taken the main application server down.
However, it does create a connection challenge: rather than working locally, your server is now talking to another one, separated by at the very least a bit of wire in the same cabinet (hopefully).

php 5.2.12 Maximum execution time when using include()

Has anyone had a problem with PHP 5.2.12 throwing a lot of "Maximum execution time" errors when trying to include() files?
I can't seem to find the bug on php.net, but it consistently gives us that error in numerous scripts.
Can anyone recommend solutions?
The same script runs on a few other servers with PHP 5.2 without any problems, so just to let you guys know, it isn't a script problem.
This is much, much more likely to be a problem with your code rather than with a specific version of PHP. PHP by default has a maximum execution time of 30 seconds, which you can modify by calling set_time_limit() or adjusting your php.ini settings.
If you're not doing something that you expect to take a long time, then usually the cause of this error is an infinite loop somewhere in your code. I'd throw a debug_print_backtrace() and a couple of exit() calls into some key locations and try to figure out which file is giving you grief, and then take a closer look in there. Perhaps you're stuck in an infinite include() hierarchy, in which case you should be using include_once() for all your class and function library files.
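A contrived illustration of such an include cycle (file names are hypothetical):

<?php
// --- a.php ---
include 'b.php';        // b.php includes a.php again, so plain include()
                        // re-enters endlessly until max_execution_time fires

// --- b.php ---
include 'a.php';

// The fix: include_once() loads each file a single time, breaking the cycle.
// --- a.php ---
include_once 'b.php';
// --- b.php ---
include_once 'a.php';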
I would check to make sure the same include isn't getting requested time and time again somehow. You might try include_once() just to see if it changes things for you. That isn't a solution so much as it's a potential temporary fix. You should find out what is causing this if indeed it is getting called over and over again.
If you have Xdebug set up and an IDE that supports debugging, this would be a great way to dig into the code.
Otherwise, try putting some output statements on the first line of the included file and on the line PRIOR to calling the include, and see what's going on...

PDFLib in PHP hogging resources and not flushing to file

I just inherited a PHP project that generates large PDF files and usually chokes after a few thousand pages and several gigs of server memory. The project was using PDFLib to generate these files 'in memory'.
I was tasked with fixing this, so the first thing I did was send the PDFLib output to a file instead of building it in memory. The problem is, it still seems to be building the PDFs in memory, and much of that memory never seems to be returned to the OS. Eventually, the whole thing chokes and dies.
When I task the program with building only snippets of the large PDFs, it seems that the data is not fully flushed to the file on end_document(). I get no errors, yet the PDF is not readable and opening it in a hex editor makes it obvious that the stream is incomplete.
I'm hoping that someone has experienced similar difficulties.
Solved! I needed to call PDF_delete_textflow() on each textflow, as they are given document scope and don't go away until the document is closed, which never happened, since all available memory was exhausted before that point.
You have to make sure that you are closing each page as well as closing the document. This is done by calling "end_page_ext" at the end of every written page.
Additionally, if you are importing pages from another PDF, you have to call "close_pdi_page" after each imported page and "close_pdi_document" when you're done with each imported document.
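Putting the two answers together, the cleanup pattern looks roughly like this (a sketch using PDFlib's procedural PHP API; the page size, coordinates, and $pages are illustrative):

<?php
$p = PDF_new();
PDF_begin_document($p, '/tmp/big.pdf', '');    // stream to a file, not memory

foreach ($pages as $text) {                    // $pages: hypothetical page texts
    PDF_begin_page_ext($p, 595, 842, '');      // A4 in points
    $tf = PDF_create_textflow($p, $text, '');
    PDF_fit_textflow($p, $tf, 50, 50, 545, 792, '');
    PDF_end_page_ext($p, '');                  // close every page...
    PDF_delete_textflow($p, $tf);              // ...and free every textflow, since
                                               // textflows have document scope and
                                               // otherwise live until end_document()
}

PDF_end_document($p, '');
PDF_delete($p);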

Connection Interrupted. The connection to the server was reset while the page was loading

I am calling a PHP script belonging to a MySQL/PHP web application using Firefox 3. I run XAMPP on localhost. All I get is this:
Connection Interrupted
The connection to the server was reset while the page was loading.
The network link was interrupted while negotiating a connection. Please try again.
There are a number of possible solutions ... depends on the "why" ... so it ends up being a bit of trial and error. On a fresh install, that's tricky to determine. But, if you made a recent "major" change that's a place to start looking - like modifying virtual hosts or adding/enabling XDebug.
Here's a list of things I've used/done/tried in the past
check for infinite loops ... in particular looping through a SQL fetch result, which works 99% of the time except for the 1% it doesn't. In one case, I was using the results of two previous queries as the upper and lower bounds of a for loop ... and occasionally got an upper bound of UINT max ... har har har (vomit). See the sketch after this list.
copying the ./php/libmysql.dll to the windows/system32 directory (Particularly if you see Parent: child process exited with status 3221225477 -- Restarting in your log files ... check out: http://www.java-samples.com/showtutorial.php?tutorialid=1050)
if you modify PHP's error_reporting at runtime ... in certain circumstances this can cause PHP to degenerate into an unstable state if, say, in your PHP code you modify the superglobals or fiddle around with other deep and personal background system variables (Nah, who would ever do such evil hackery? ahem)
if you convert your MySQL tables to something other than MyISAM (or switch to mysqli)
There is a known bug with MySQL related to MyISAM, the UTF8 character set and indexes (http://bugs.mysql.com/bug.php?id=4541)
The solution is to use the InnoDB engine (e.g. SET GLOBAL storage_engine='InnoDB';)
Doing that changes how new tables are created ... which might slightly alter the way results are returned to a fetch statement ... leading to an infinite loop, a malformed dataset, etc. (although this change should not hang the database itself)
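As a sketch of the fetch-loop pitfall from the first item above ($link, the query, and process() are hypothetical; the old mysql extension matches the era of the question):

<?php
// Safe: loop on the fetch result itself, so the loop ends with the data.
$result = mysql_query('SELECT id, name FROM items', $link);
while ($row = mysql_fetch_assoc($result)) {
    process($row);
}

// Risky: bounds taken from two earlier queries. If either query unexpectedly
// returns e.g. UINT max, this loop runs effectively forever.
for ($i = $lowerBound; $i <= $upperBound; $i++) {
    // ...
}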
Other helpful items are to ramp up the debug reporting for PHP and apache in their config files and restart the servers. The log files sometimes give a clue as to at least where the problem might reside. If it happens after your page content was finished it's more likely in the php settings. If it's during page construction, check your PHP code. Etc. etc.
Hope the above laundry list helps somebody someday ... probably myself when I run into it again and come back here looking for "how the heck did I fix it last time?" ... :)
It's possible that your script could be caught in an infinite loop. If that doesn't apply, then I'd check the error logs like TimB suggested.
It sounds like the PHP script you're calling is failing without returning a valid response. Depending on the level of logging that you have set up, this should generate an error in the Apache logfile, which will give you some idea of the problem. I'm not familiar with XAMPP, but you should be able to find out where the logs are, and look for an error that occurred at the time you made your request to the PHP script.
Copying libmysql.dll to the apache\bin folder may help you overcome this strange error.
I solved this problem by upgrading xampp\php\ext\xdebug\php_xdebug.dll (changed to PHP Xdebug v2.0.5-5.3-vc9).
I had the same problem, and this is what I did: I issued the HTTP GET through a PHP CLI script, and as it turns out, I had declared one class twice somewhere.
By the way, I use AMPPS on a Mac.
Hope this helps someone!
Try doing the request with Firebug enabled and see what info you can get out of that; I always find that using wget is helpful for seeing the raw HTTP interaction without worrying about Firefox's UI elements interfering.
If you are using SSL certificates in Windows Server 2008 (IIS 7) that were generated by the old SelfSSL tool (IIS 6), that is the problem. Sometimes Microsoft releases patches which can break all these old certificates. The solution is to generate them again.
Copying libmysql.dll to the apache\bin folder may help you overcome this strange error.
Indeed, this helped me to solve this problem:
The connection to the server was reset while the page was loading.
In case the above doesn't fix the issue, this did the trick for me:
1. I got a fresh PHP zip distribution and connected it with Apache.
2. I found libmysql.dll in the new PHP directory and copied it into apache/bin.
It's this libmysql.dll that is needed there, not the one from MySQL/bin. OK, at least that's the one that worked for me.
I experienced a very similar issue - which doesn't apply to the person who asked this question - but may be of help to others who are reading this page...
I had an issue where in certain cases PHP 5.4 + eAccelerator = connection reset. There was no error output in any log files, and it only happened on certain URLs, which made it difficult to diagnose. It turned out it only happened for certain PHP code / certain PHP files, and was due to some incompatibilities between specific PHP code and eAccelerator. The easiest solution was to disable eAccelerator for that specific site, by adding the following to the .htaccess file:
php_flag eaccelerator.enable 0
php_flag eaccelerator.optimizer 0
(or equivalent lines in php.ini):
eaccelerator.enable="0"
eaccelerator.optimizer="0"
