We have a page that is mostly static with a few PHP includes, each of which pulls data from our MSSQL database.
There is a very strange issue where pages will randomly stop rendering. The problem is sporadic and not always visible. Sometimes the pages load correctly, sometimes they stop before reaching the end of the file.
The page in question where you can see the problem is at
Dev: http://author.www.purdue.edu/discoverypark/climate/
Prod: http://www.purdue.edu/discoverypark/climate/index.php
If you refresh the page repeatedly you will hopefully be able to see the issue. The problem only exists on pages that include calls to our database, but again, the pages load completely normally most of the time; only sometimes does the output stop partway through. It has broken inside normal HTML as well as before and inside PHP blocks.
The issue seems to be somewhat worse in the production environment; the only difference between the two is the datasource connection to the DB.
Are there any known issues of this with PHP, ODBC, and MSSQL? It is obviously tied to the calls to the database, which are all stored procedures. Could it be an issue with the datasource?
Any input would be appreciated.
I consistently see this in "View Source" when it dies:
<div class="wrap">
OVPR
<img alt=">" src=".
I would guess that your image caching, image URL generation, or image handling is broken somewhere, and the script is aborting for lack of an image.
The > INSIDE the alt value is also not kosher. That needs to be escaped with http://php.net/htmlentities
It might "work" but it won't validate, and a page that doesn't validate is just plain broken.
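For example, assuming the alt text comes out of a PHP variable somewhere (the variable name and image path here are made up):
<?php $altText = 'OVPR > Climate'; // placeholder for wherever the alt text really comes from ?>
<img alt="<?php echo htmlentities($altText, ENT_QUOTES); ?>" src="ovpr.png" />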
The DB connection differences between, say, localhost in DEV and separate boxes in PROD are probably changing the timing / frequency of the issue, but are almost for sure a red herring...
Though if a DB call to look up the OVPR image is doing a die()...
For sure, though, if you don't have 10 lines of error handling around every call to odbc_* or mssql_* in your database code, then you've done it wrong, and need to add that.
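As a hedged sketch of what that checking might look like with the stock odbc_* functions (the DSN, credentials and procedure name below are placeholders):
<?php
$conn = odbc_connect('my_dsn', 'user', 'pass');
if ($conn === false) {
    // Log it and say so on the page instead of silently truncating the output.
    error_log('ODBC connect failed: ' . odbc_errormsg());
    echo 'Database unavailable';
} else {
    $result = odbc_exec($conn, '{CALL usp_some_stored_procedure}');
    if ($result === false) {
        error_log('ODBC exec failed: ' . odbc_errormsg($conn));
        echo 'Query failed';
    } else {
        while (($row = odbc_fetch_array($result)) !== false) {
            // ... render the row ...
        }
    }
}
?>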
PS
It should be trivial to switch from ODBC to mssql_* or sybase_* driver, or PDO::* and eliminate at least one possible contender, if none of the above work out. I say again, though, that the DB is 99% for sure a red herring, and you've done something that will be obvious, dare I say silly, once you trace it through to the real cause...
Make sure there isn't a die or exit in the code anywhere.
Edit -- If there is, remove it and view the error.
Have you checked normal debugging methods? What does the code look like - specifically, the error handling around your ODBC calls? You don't have a top level return or a misplaced die(), do you?
When I see the page in its not-rendering state, it seems to be because the page is clearly incomplete and it's XHTML.
I see it normally die here -
OVPR
Try bumping up your error reporting level so that you can see any warnings, errors, or notices that might be suppressed at the server level.
http://php.net/manual/en/function.error-reporting.php
<?php
// Report all PHP errors
error_reporting(-1);
?>
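If errors still don't show on the page, display_errors may be off at the server level; turning it on during development should surface them:
<?php
ini_set('display_errors', '1'); // development only; don't leave this on in production
?>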
Related
There's an issue I'm experiencing that seems related to caching, but I'm not 100% positive.
I'm working with MAMP to render PHP inside of HTML. When I write some lines of code and go to the appropriate page, the output of my code shows up. However, if I make small changes to the code, I'm not able to get the page to change with a refresh.
More than that, I'm finding that hard refreshes, with and without cache clearing, aren't solving the issue either.
Here's a concrete example. Within PHP I declare a couple of numerical variables. I then write an if statement so that if the second variable is higher than the first, text is displayed on the page. When I change the values of the variables so the opposite is true, the text continues to show on the page, even though the condition is now false.
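Roughly the kind of test being described (variable names and values here are made up):
<?php
$first  = 10;
$second = 20;

// Text should only appear while $second is higher than $first; flipping the
// values should make it disappear on the next request.
if ($second > $first) {
    echo 'second is larger';
}
?>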
I refresh, hard refresh, clear the cache, and even close the browser window. It often takes several combinations of these to finally register my change.
I thought the issue might have been with MAMP and so I tried it with XAMPP, but the results were the same. This makes sense, as they both use an Apache server (provided the issue is server related).
This is becoming very bothersome and I'm not sure where the issue lies. Any idea what might be the problem here?
I have built a robot which basically crawls websites starting at the root, parses the page, saves all the internal links, and then moves to the first page in the link list, parses it, and so on.
I know it's not really optimal to run this sort of bot in PHP, but as it's the only language I know well enough, that's the one I chose.
I came across all sorts of issues: pages returning 404, pages being redirected, pages that are not parsable (a weird case of a few pages that return only a few words when parsed but return the entire expected body when you send a GET HTTP request), etc.
Anyway, I reckon I have made the robot so it can get through 99.5% of the pages it parses, but there are still some pages that are not parsable, and at that point my bot crashes (about 1 page out of 400 makes the bot crash; by crashing I mean I just get a fatal error and the code stops).
Now my question is: how can I prevent this from happening? I am not asking how to fix a bug I can't even debug (they're mostly timeouts, so not very easy to debug); I'd like to know how to handle those errors. Is there a way to retry fetching the page in case a certain type of error occurs? Is there a way to get around those timeout fatal errors?
I can't see the point of showing you any code, although I will if you feel the need to check a certain part of it.
Thank you
Simplest way I can think of is to use a try{} catch(){} block.
http://www.php.net/manual/en/language.exceptions.php
You put the part of the parser into the try block, and if an error is thrown, feed some default values and go to the next link.
If you are getting fatal errors (which I think you can't catch with try), then you could also try breaking each download/parsing step into a separate PHP file that the main script calls via curl with the URL it needs to look up. This kind of poor man's parallelization will cause you to incur a lot of overhead, and is probably not how PHP "should" be used, but it should work. You'll also need to store the results in a database / text file.
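A rough sketch of both ideas, assuming the crawler has some parse_page() function of its own (the name is made up) and fetches pages with curl using explicit timeouts so a slow page fails fast:
<?php
// $links is the crawler's queue of URLs to visit.
$links = array('http://example.com/');

foreach ($links as $url) {
    try {
        $data = parse_page($url);   // parse_page() is a placeholder for your own parser
    } catch (Exception $e) {
        error_log("Parse failed for $url: " . $e->getMessage());
        continue;                   // feed default values / skip and move to the next link
    }
    // ... save $data, queue any new internal links ...
}

// Fetching with explicit timeouts so a slow page errors out quickly instead of
// hitting the script's own execution time limit.
function fetch_url($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    $body = curl_exec($ch);
    curl_close($ch);
    return $body === false ? null : $body;
}
?>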
I'm building a site using PHP which allows users to add a lot of data to various tables in a MySQL database. This requires a lot of inserting, updating and dropping of tables in my database, sometimes running several commands in one script.
I'm concerned about catching potential errors occurring when live (I've tested, tested and tested but still want a back up plan).
I've searched everywhere for a solution but cannot find one that would satisfy my needs so wondered if anyone here had any practices they use or advice they can give me.
What I want to happen:
If an error occurs connecting to my database (for instance) I want to display a page or alert window with a "sorry we've had a problem" message with a button to log the error. When a user clicks the button I want to be able to log a mysql_error() to the database with a description of the failed command/function and page name along with time/date stamp which I can track.
Is this something anyone has done before or can offer an alternative? Or is there a built in function that does exactly this which I have missed?
Any advice would be much appreciated.
If you fail connecting to the DB, you won't be able to log the error to the DB. The bad-connection scenario aside, you should use a PHP MySQL library that supports exceptions (like PDO) and use try-catch blocks to catch the error states you want to log.
You'll probably want to just write to the Apache error log on DB connection failure (which can also be done in a try-catch block).
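A minimal sketch of that, assuming PDO is available (the DSN, table names and messages are made up):
<?php
try {
    $db = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
} catch (PDOException $e) {
    // The DB itself is unreachable, so fall back to the web server's error log.
    error_log('DB connection failed: ' . $e->getMessage());
    exit('Sorry, we have had a problem.');
}

try {
    $stmt = $db->prepare('INSERT INTO items (name) VALUES (?)');
    $stmt->execute(array($_POST['name']));
} catch (PDOException $e) {
    // The DB is up, so the error can be written to an error table with page and timestamp.
    $log = $db->prepare('INSERT INTO error_log (message, page, created_at) VALUES (?, ?, NOW())');
    $log->execute(array($e->getMessage(), $_SERVER['PHP_SELF']));
    exit('Sorry, we have had a problem.');
}
?>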
I've worked on a CMS which would use Smarty to build the content pages as PHP files, then save them to disc so all subsequent views of the same page could bypass the generation phase, keeping DB load and page loading times down. These pages would be completely standalone and not have to run in the context of another script.
The problem was the case where a user first visited a page that wasn't cached: they'd still have to be shown the generated content. I was hoping I could save my generated file, then include() it, but filesystem latency meant that this wasn't an option.
The only solution I could find was using eval() to run the generated string after it was generated and saved to disc. While this works, it's not nice to have to debug in, so I'd be very interested in finding an alternative.
Is there some method I could use other than eval in the above case?
Given your scenario, I do not think there is an alternative.
As for the debugging part, you could always write it to disc and include it during development to test and fix it up that way, and then, when you have the bugs worked out, switch it over to eval.
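One way to keep that debuggable path around, as a sketch (the DEV_MODE constant and the variable names are made up):
<?php
// $generatedPhp is the string Smarty produced; $cachePath is where it gets saved.
file_put_contents($cachePath, $generatedPhp);

if (defined('DEV_MODE') && DEV_MODE) {
    // During development, include the freshly written file so errors report
    // real file names and line numbers.
    include $cachePath;
} else {
    // In production, run the string directly; the '?>' prefix drops eval into
    // HTML mode so the generated template output works as-is.
    eval('?>' . $generatedPhp);
}
?>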
Not knowing your system, I won't second-guess you, since you know it better than I do, but it seems like a lot of effort, especially since the above scenario will only happen once per page...ever. Is it really worth it to display that one initial view through eval, and why couldn't you be the initial user who generates the pages?
What is the best way to record errors experienced by the user?
My initial thought was to make a function that recorded the error with a unique number and maybe a dump of the variables into a record on the database.
Is there a better approach? Should I use a text file log instead?
How about overriding the default PHP error handler?
This site should give some basic information: http://www.php.net/manual/en/function.set-error-handler.php and the first comment on http://www.php.net/manual/en/function.set-exception-handler.php
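A bare-bones sketch of overriding the handlers (the log file path is arbitrary and should live outside the web root):
<?php
function my_error_handler($errno, $errstr, $errfile, $errline)
{
    error_log("[$errno] $errstr in $errfile:$errline\n", 3, '/path/outside/webroot/php_errors.log');
    return true; // skip PHP's internal handler
}

function my_exception_handler($e)
{
    error_log($e->getMessage() . "\n" . $e->getTraceAsString() . "\n", 3, '/path/outside/webroot/php_errors.log');
    echo 'Sorry, something went wrong.';
}

set_error_handler('my_error_handler');
set_exception_handler('my_exception_handler');
?>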
You might also want to store database errors, perhaps some kind of custom function that allows you to use code like:
<?php
$objQueryResult = mysql_query("query here") or some_kind_of_function_here();
?>
You might want to store the recorded errors in a file, which is outside your public html root folder, to make sure people can't access it by accident.
I would also assume, you'd want to store a complete stacktrace in such a file, because then you can actually debug the problem.
When overriding the default error handlers, make sure you don't forget to send a nice message to the user (and exit the script when needed).
I would recommend storing:
- $_POST
- $_GET
- A complete dump of debug_print_backtrace()
- Possibly the SQL that triggered this?
I would suggest using debug_print_backtrace() so you get a readable summary of the data. The debug_backtrace() function gives about the same information, but it can sometimes just be too much.
The code you could use to catch backtraces:
<?php
ob_start();
debug_print_backtrace();
$trace = ob_get_contents();
ob_end_clean();
?>
To store this, you could use plain text output if you don't get too many errors; otherwise perhaps use something like SQLite. Just don't use the same SQL connection to store the errors, as that might trigger more problems if you're having webserver-to-SQL connection errors.
Well, at least writing to text files on the local system should be less error prone, thus allowing you to catch DB errors too :)
I would prefer to write a decent dump of the current state to a simple log file. In addition to your "own" state (i.e. your application's variables and objects), you might consider doing a phpinfo() to get inspiration as to which environment and request variables to include.
PEAR::Log is handy for this kind of logging. e.g.
$logger->alert("your message");
$logger->warning("your message");
$logger->notice("your message");
etc.
You can log to a file or to a database; I wrote a PDO-enabled sqlite extension for it, pretty simple.
These are handy to put into exception handling code too.
PEAR::Log
Records: id, logtime, identity, severity 1-7 (e.g. "Alert"), and your message.
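Setting it up is roughly like this (the handler type, file path and ident string here are just examples):
<?php
require_once 'Log.php';

// 'file' handler writing outside the web root; 'myapp' ends up in the identity column.
$logger = Log::singleton('file', '/path/outside/webroot/app.log', 'myapp');

$logger->alert('payment gateway unreachable');
$logger->notice('cache rebuilt');
?>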
I think #Icheb's answer covers it all.
I have tried something new this year in a project that I thought I'd share.
For a PHP based content aggregation / distribution service (an application that runs quietly in the background on some server and that you tend to forget about), we needed an error reporting system that makes sure we notice errors.
Every error that occurs has an Error ID that is specified in the code:
$success = mysql_query("this_and_that");
if (!$success) log_error("Failed Query: " . mysql_error(), "MYSQL_123");
Errors get logged in a file, but more importantly sent out by mail to the administrator, together with a full backtrace and variable dump.
To avoid flooding with mails - the service has tens of thousands of users on a good day - error mails get sent out only once every x hours for each error code. When an error of the same code occurs twice within that timespan, no additional mail will be sent. It means that every kind of error gets recorded, but you don't get killed by error messages when it's something that happens to hundreds or thousands of users.
This is fairly easy to implement; the art is getting the error IDs right. You can, for example, give every failed MySQL query in your system the same generic "MYSQL" error ID. In most cases, that will be too generic and block too much. If you give each MySQL query a unique error ID, you might get flooded with mails and the filtering effect is gone. But when grouped intelligently, this can be a very good setup.
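A rough sketch of such a log_error() helper with per-code throttling (file locations, the admin address and the one-hour window are all made up):
<?php
function log_error($message, $code, $window = 3600)
{
    // Always record the error in the log file.
    file_put_contents('/path/outside/webroot/errors.log',
                      date('c') . " [$code] $message\n", FILE_APPEND);

    // Only mail the administrator if this error code hasn't been mailed within the window.
    $stamp = "/tmp/error_mail_$code";
    if (!file_exists($stamp) || (time() - filemtime($stamp)) > $window) {
        touch($stamp);
        ob_start();
        debug_print_backtrace();
        $trace = ob_get_clean();
        mail('admin@example.com', "Error $code", "$message\n\n$trace");
    }
}
?>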
From the usability point of view, the user should never experience errors.
Depending on the error, you should use different strategies:
- Errors that are non-catchable or difficult to catch from PHP: read the logs for each application
  - Apache
  - MySQL and DB errors, transactions
  - prepare PHP with "site being updated" pages or error controllers for emergencies
- PHP errors
  - these should be detected through exceptions
  - silenced but not forgotten; don't try to fix them on the fly
  - log them and treat them
- Interface errors
  - one piece of advice: allow users to submit suggestions or bugs
I know this doesn't cover everything; it's only an addendum to what the others have suggested.