I was thinking of using HHVM in my development environment. However, I have encountered two significant issues.
First, no matter what I try, I cannot get HHVM to display a PHP stack trace in the browser window when an exception or error occurs. I've tried using set_error_handler and various other approaches. Has anyone figured out a way to display errors and exception stack traces in the browser window?
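For reference, this is roughly the kind of handler I have been experimenting with; it is only a sketch, and under stock PHP it does print a trace in the browser, but under HHVM I still get nothing:

    <?php
    // Development only: report everything and try to push it to the browser.
    error_reporting(E_ALL);
    ini_set('display_errors', '1');

    // Print uncaught exceptions together with their stack trace.
    set_exception_handler(function ($e) {
        echo '<pre>', get_class($e), ': ', $e->getMessage(), "\n",
             $e->getTraceAsString(), '</pre>';
    });

    // Print ordinary errors (warnings, notices, ...) as they happen.
    set_error_handler(function ($errno, $errstr, $errfile, $errline) {
        echo "<pre>Error [$errno] $errstr in $errfile on line $errline</pre>";
        return true; // suppress the built-in handler
    });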
Second, the error log file is full of literal "\n" sequences. I know "\n" is supposed to produce a line break, but here it doesn't; it just makes the logs hard to read. Has anyone figured out a way to get rid of all of the "\n" in the log files?
Related
Beginner question: I have spent around 30 hours trying to sort out an error handler, which is essential because I am not a great programmer. I am 95% sure I can't do anything about truly fatal errors, but I am still 5% hopeful.
My error handler was working well, sending out emails and text messages when it encountered problems, but then I got an empty page with just:
Fatal error: Cannot use try without catch or finally
in /directory/ etc ...filename.php on line 999
(I had accidentally deleted the catch block.) The question: someone somewhere mentioned .htaccess 500 pages.
I did not understand what was described when I read it; I have done almost nothing with .htaccess up to now.
Is there a way to trip some sort of static page? (I am 95% sure I can do nothing, but I am stuck, still have 5% hope, and this is really important to me.) I am still running PHP 5.6 and do not want to upgrade to 7 yet. Catching these errors is far more important to me than the warnings, notices, deprecation messages, etc., that I can already catch.
Update
I saw that question and used some of the techniques there, BUT it is 11 years old, huge, partly outdated, and does NOT primarily address the problem I now want to solve.
I have no problem dealing with "fatal errors" such as calling a non-existent function. My problem is with errors found when the script is parsed, which are "unrecoverable": in my case, a missing catch where a try is present.
The other answer answers this in part, but not in ways I seem able to use. I think there may be a way of forcing a 500 error, rather perversely, by turning off error display, which I will investigate soon/tomorrow; I would be grateful for 24 hours to check. I am quite happy for someone more knowledgeable to put up a better question/answer, and useful info could be culled from that thread, but frankly it is a mess, unsurprisingly after 11 years.
Answer - almost
Switch display_errors to off and you have a 500 error. Sadly I cannot get an .htaccess redirect to work (404 works fine). If you are good with .htaccess hopefully you will have some joy.
In some discussions there is talk of some 500 errors being "CORE" errors and REALLY unrecoverable even by .htaccess. My logs are very sparse, and I cannot see any useful indication of whether this is the case for the "catch when a try is present" error.
(With a big thank you to @Dharman, if it works. PS: I will tidy this up when/if I get to the end of this.)
I don't think PHP can do anything with parse errors (or other errors during the compilation phase), but you should be able to configure your web server to display an error page of your choosing.
You don't say which web server you are using, but with Apache, for example, this is done with the Custom Error Response (ErrorDocument) settings. Your errors will be HTTP 500 errors.
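For what it's worth, a minimal sketch of what that can look like in an .htaccess file; the /errors/500.html path is just a placeholder, the php_flag line only works when PHP runs as mod_php, and, as noted above, whether Apache actually applies ErrorDocument to a 500 produced inside PHP can vary with the setup:

    # Hide PHP's own error output (mod_php only; with FastCGI/FPM use php.ini)
    php_flag display_errors off

    # Serve a static page for internal server errors
    ErrorDocument 500 /errors/500.html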
I'm pretty sure I'm not the only one who has noticed that simple parse errors in PHP, when they occur in deeply nested scenarios (e.g. an object instance that references another object instance, which references yet another instance containing a tiny parse error, all of them autoloaded), can make PHP hang forever instead of reporting the parse error and halting execution as it normally would. I've seen this many times, in very different code bases, always with the error_reporting setting properly set.
Is there any way around it? That is, can PHP somehow be forced to display the parse error report as it should?
For the record, I'm 100% sure these hangs were caused by PHP not handling the parse error correctly, as I have debugged this behaviour many times. The reason I ask is that when these hangs happen you are basically left in the dark, unable even to tell whether PHP is acting up or there really is a runaway loop somewhere in the code. That takes time to debug, time that could be saved if, you know, PHP reported the parse error like it should.
As partially mentioned in the comments, error_reporting(E_ALL) can help display all errors. You may also need to use ini_set() to set display_errors to On.
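A minimal sketch of the settings meant here; note that these lines only affect run-time errors in code that actually executes, so a parse error in the very same file will still fire before they run:

    <?php
    // Development settings: report everything and show it in the browser.
    error_reporting(E_ALL);
    ini_set('display_errors', '1');
    ini_set('display_startup_errors', '1');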
Personally, I think your question is not very clear, and you should improve formatting and make it more understandable.
UPDATE: The server/computer you're running the code on may simply be very slow; no real 'hang' should otherwise occur. Could you describe it in more detail?
Also, you might be stuck in an infinite or near-infinite loop. Check your code closely, because unless you post all of it, this is about as far as we can help you.
UPDATE 2: It seems you may have mistyped the name of an object when trying to call it. Otherwise, it may be that you have not declared your object correctly.
Most likely one or the other.
Turns out the culprit was xdebug.collect_params, which the documentation quite rightly suggests keeping disabled. Certain errors were generating a very large amount of data in the arguments of the call stack trace; with collect_params set to 4 this exhausted Xdebug and made it, and by extension PHP, hang. This happened even though I have a custom exception handler in place that never actually retrieves the stack trace from Xdebug, but apparently Xdebug collects this data anyway.
This was hard to debug because: a) it was not straightforward to replicate, b) profiling with Xdebug did not help, c) stepping through the code with Xdebug + DBGp was not helping either, d) almost no trace (no pun intended) was left, other than the errors very occasionally making it into the PHP error_log file, and e) with a custom exception handler in place it was not obvious to suspect Xdebug, since I didn't involve it in the process of handling the exception, or so I thought.
So there is no such thing as the parse error of death, and I learned to never assume it's not my fault :) Hopefully this answer will help others in the future at least.
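For anyone hitting the same thing, a quick check you can drop into a page to see whether the setting is active (this is the Xdebug 2.x setting referred to above; whether it can be changed at run time may vary, so the real fix belongs in php.ini):

    <?php
    // Anything other than 0 or an empty string means Xdebug is collecting
    // argument data for every stack frame.
    var_dump(ini_get('xdebug.collect_params'));

    // The actual fix in my case went into php.ini:
    //   xdebug.collect_params = 0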
UPDATE 1:
After running strace on the server, I discovered that mmap calls are taking 90% of the processing time, and that one of the pages is taking a minute to load.
So I found this link:
PHP script keeps doing mmap/munmap
It possibly shows the same problem. However, I don't understand what the answer means by "correctly disabling the PHP error handlers".
ORIGINAL QUESTION:
How do I check for bottlenecks when loading a specific web page served by my server?
For some reason, a couple of pages on my site have become very slow, and I am not sure where the slowness is happening.
Screenshot from Chrome Dev Tools:
So basically, I need to find out why this section is taking so long to load; client-side web tools can't seem to break this down.
Xdebug: Profiling PHP Scripts - pay attention to the KCacheGrind tool. Alternatively, you can use the Advanced PHP Debugger (APD) apd_set_pprof_trace() function and pprofp to process the generated data file.
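If you go the APD route, the call is simply dropped at the top of the page you want to profile. A minimal sketch, assuming the APD PECL extension is installed (the dump directory below is just a placeholder):

    <?php
    // Start an APD pprof trace; the data file ends up in the given directory
    // (or apd.dumpdir) and can then be analysed with pprofp.
    if (function_exists('apd_set_pprof_trace')) {
        apd_set_pprof_trace('/tmp/apd-traces'); // placeholder path
    }

    // ... rest of the slow page ...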
Derick Rethans (author of Xdebug) released quite a nice article today called What is PHP doing?
It covers the strace that you've already done, but also shows you how you can use a custom .gdbinit to get the actual php function name that is causing the problem.
Of course you have to run your script from the command line with gdb, so I hope that your problem is reproducible in this way.
mmap is for creating a memory-mapped view of a file (and, with anonymous mappings, for allocating large blocks of memory).
If it really is the error handler causing it, I'd guess that your script is generating a lot of errors which you are trying to log, maybe a NOTICE for an undefined index inside a loop or something like that.
Check the log file (is anything being logged at all?), check the permissions on the log file if nothing is logged, and also double-check what your error reporting level is set to.
var_dump(ini_get('error_reporting') & E_NOTICE); - Non-zero if you are reporting notices.
error_reporting(E_ALL & ~E_NOTICE); - Turn off reporting notices.
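Put together, a small sketch of those checks (development use only):

    <?php
    // Where are errors going, and are they being logged at all?
    var_dump(ini_get('log_errors'), ini_get('error_log'));

    // Non-zero means notices are currently being reported.
    var_dump(ini_get('error_reporting') & E_NOTICE);

    // If a noisy loop is flooding the handler/log, silence notices for now:
    error_reporting(E_ALL & ~E_NOTICE);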
I would suggest looking into Xdebug profiling. The other two answers deal with client-side loading issues, but if your bottleneck is server-side it won't become apparent from those tools.
You may also want to look into the database queries that are being run to serve the pages in question. You could be missing an index somewhere which would explain a recent slowness with specific pages as your database tables grow in size.
I would extract those queries and run them using MySQL EXPLAIN (assuming you are using MySQL) to see if there is slowness there.
Using an application such as Fiddler or the YSlow Firefox add-on will help identify slow-loading elements on your website. This should make any issues apparent.
http://fiddler2.com/fiddler2/
https://addons.mozilla.org/en-US/firefox/addon/yslow/
Hope this helps
Page Speed for Chrome is also an option:
https://developers.google.com/speed/docs/insights/using_chrome
I inherited some PHP source code, and I have to maintain it. It has been built from scratch using no framework, just the former developer's own creation.
Now, I ask this:
Is there a way to ignore fatal errors via php.ini / ini settings only, without modifying the code?
Scenario:
SomeClass.php:
<?php
class SomeClass {
    // ...
}
?>
index.php:
include("SomeClass.php");
...
include("SomeClass.php");
In my development box, this triggers a fatal error ("Cannot redeclare class SomeClass") because SomeClass has been declared twice, which is the obvious and expected behavior.
Here is the kicker: this source is hosted somewhere, and it works there. I just don't have ANY access to that server.
So I see here two scenarios:
1.) There is a way to silence this fatal error caused by the two includes via an ini setting. This I have to know about.
2.) The former developer did NOT give me the exact, updated source code that is currently up and running. I then have to insist that he give me the latest source code, but I can only do this if I am 100% sure that there is no way #1 can happen.
What do you guys think?
I tried registering a set_error_handler() callback that doesn't die on fatal errors, but Apache crashed instead. In other words, PHP needs to die so that the system doesn't.
So, sorry, I really don't think there is a solution for that.
Fatal errors don't come from the include function when a file is missing - only warnings. You'd get fatals from require, though. Use @include and it won't even generate the warning.
Displaying errors is mostly discouraged on production servers; why let the user see that your script didn't find a file? Have a look at http://www.php.net/manual/en/errorfunc.configuration.php#ini.display-errors and http://www.php.net/manual/en/function.set-error-handler.php. The latter could be helpful; go through the page for examples. I suggest logging errors instead of displaying them, but that will involve some sort of workaround in the code.
Inform your developer that he should use __autoload() or spl_autoload_register() to avoid such errors…
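A minimal sketch of what the autoloading suggestion looks like; the class-to-file mapping below is just an assumption about the project layout:

    <?php
    // Registered once (e.g. at the top of index.php), this loads each class
    // file exactly once, on demand, so the manual include() calls and the
    // "cannot redeclare class" fatal both go away.
    spl_autoload_register(function ($class) {
        $file = __DIR__ . '/' . $class . '.php'; // assumes SomeClass lives in SomeClass.php
        if (is_file($file)) {
            require_once $file;
        }
    });

    $obj = new SomeClass(); // triggers the autoloader the first time only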
I have a php script that is rather hairy and I'm trying to troubleshoot it. No errors are happening, but I'm having trouble seeing what execution path it took to create the output I've gotten. Is there a way I can see at what line the script stopped execution?
Folks, sorry I didn't make this clearer. No errors are happening. No exceptions are being raised. From the computer's point of view, nothing 'bad' is happening, but the output is not what I expected. I'm trying to track down exactly where the script is exiting normally, but it's a challenge. My life would be much easier if it said something like "Script finished parsing at line 422".
Using Xdebug and running a function trace should give you the info you're looking for.
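A sketch of starting a function trace by hand around the suspect section (this uses the Xdebug 2-style function names; under Xdebug 3 the trace mode has to be enabled first and the settings differ, so treat the details as assumptions):

    <?php
    // The trace file lists every function call with its file and line, in
    // execution order, which shows exactly which path the script took.
    if (function_exists('xdebug_start_trace')) {
        xdebug_start_trace('/tmp/hairy-script'); // placeholder path; '.xt' is appended
    }

    run_the_hairy_script(); // placeholder for the code being investigated

    if (function_exists('xdebug_stop_trace')) {
        xdebug_stop_trace();
    }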
I'm not sure I'm completely understanding the question. An exception is being thrown and the program is terminating but there is no output?
Try turning error reporting on to the most sensitive level:
error_reporting(E_ALL);
If an error occurs, there is an error message (except for a small number of parse errors). As Brad already suggests, turn up the volume on your error reporting first.
Otherwise, the script will run until its end, or until an exit or die() call. I don't think it's possible to find out where a script exited, but then, it should never really be necessary to.
When debugging hairy errors, use a debugger and/or debug_backtrace(). debug_backtrace() can give you the exact call stack, and it's most powerful in combination with a custom error handler (set_error_handler()).
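A minimal sketch of that combination, for development use only:

    <?php
    // Print the message plus the full call stack for every warning/notice,
    // which usually reveals the execution path the script actually took.
    set_error_handler(function ($errno, $errstr, $errfile, $errline) {
        echo "<pre>[$errno] $errstr in $errfile:$errline\n";
        debug_print_backtrace();
        echo '</pre>';
        return true; // skip PHP's internal handler
    });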