How to debug AJAX (PHP) code that calls SQL statements? - php

I'm not sure if this is a duplicate of another question, but I have a small PHP file that runs some SQL INSERT and DELETE statements for an image tagging system. Most of the time both the insertions and the deletions work, but on some occasions the insertions don't.
Is there a way to see why an SQL statement failed to execute, similar to the SQL libraries in Python or Java, which tell you why a statement failed (for example: duplicate key insertion, unterminated quote, etc.)?

There are two things I can think of off the top of my head, and one thing that I stole from amitchhajer:
pg_last_error will tell you the last error in your session. This is awesome for obvious reasons, and you're going to want to log the error to a text file on disk in case the issue is something like the DB going down. If you try to store the error in the DB, you might have some HILARIOUS* hi-jinks in the process of figuring out why.
Log every query to this text file, even the successful ones. Find out if the issue affects identical operations (an issue with your DB or connection, again) or certain queries every time (issue with your app.)
If you have access to the guts of your server (or your shared hosting is good,) enable and examine the database's query log. This won't help if there's a network issue between the app and server, though.
But if I had to guess, I would imagine that when the app fails it's getting weird input. Nine times out of ten the input isn't getting escaped properly or - since you're using PHP, which murders variables as a matter of routine during type conversions - it's being set to FALSE or NULL or something and the system is generating a broken query like INSERT INTO wizards (hats, cloaks, spell_count) VALUES ('Wizard Hat', 'Robes', );
*not actually hilarious
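Something along these lines is a minimal sketch of that advice (untested; the connection handle, log path, and function name are placeholders for whatever your app uses):

function run_logged_query($conn, $sql)
{
    $logFile = '/var/log/myapp/sql.log';

    // Log every query, successful or not, before running it.
    file_put_contents($logFile, date('c') . " QUERY: $sql\n", FILE_APPEND);

    $result = pg_query($conn, $sql);

    if ($result === false) {
        // On failure, record what the server said. Since the log lives on
        // disk, it survives even when the database itself is the problem.
        file_put_contents($logFile, date('c') . ' ERROR: ' . pg_last_error($conn) . "\n", FILE_APPEND);
    }

    return $result;
}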

Start monitoring your SQL queries by enabling the query log. There you can see which queries are fired and any errors that occur.
This tutorial on starting the logger will help.
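A rough sketch of switching the general query log on at runtime, assuming MySQL 5.1 or later and an account with the privileges to set global variables (the DSN, credentials, and log path below are placeholders):

$pdo = new PDO('mysql:host=localhost;dbname=test', 'admin_user', 'secret');

// Point the log at a file and switch it on; every statement the server
// receives will be written there, together with the connection it came from.
$pdo->exec("SET GLOBAL general_log_file = '/var/log/mysql/query.log'");
$pdo->exec("SET GLOBAL general_log = 'ON'");

// ... exercise the application, inspect the log ...

$pdo->exec("SET GLOBAL general_log = 'OFF'"); // don't leave it running in production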

Depending on which API your PHP file uses (let's hope it's PDO ;) you could check for errors in your current transaction with something like
$naughtyPdoStatement->execute();
if ($naughtyPdoStatement->errorCode() != '00000') {
    DebuggerOfChoice::log(implode(' ', $naughtyPdoStatement->errorInfo()));
}
When using the legacy APIs, there are equivalents like mysql_errno, mysql_error, pg_last_error, etc. which let you do the same. DebuggerOfChoice::log can of course be whatever log function you'd like to use.
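For comparison, the same check with the old ext/mysql API mentioned above might look like this (a sketch only; ext/mysql was removed in PHP 7, and DebuggerOfChoice is the same placeholder logger as above):

$result = mysql_query($sql);
if ($result === false) {
    DebuggerOfChoice::log(mysql_errno() . ': ' . mysql_error());
}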

Related

Amending the CodeIgniter Active Record Query command?

I am developing a Codeigniter (2.0.2) Application, which will utilise a Master database for all write operations (INSERT/UPDATE/DELETE) and a read replica for all read operations (SELECT).
Now I know I can access two different database objects within the code to route the individual requests to the specific database server, but I'm thinking there has to be a better, automated way. I'll be using MySQL and Active Record, and I also want to build in Memcache checking - although it won't be used immediately, I'd like the option to be there for the future, built in at this stage.
I'm wondering if it's possible to add a hook/library of some kind to intercept $this->db->query so that the following happens:
1) SQL Query received
2) Check if SELECT query
2a) If SELECT, see if Memcache is active; if so, encode the SQL and check Memcache for a response.
2b) If there is no Memcache response, or Memcache is not active, execute the query as normal through the READ MySQL server.
3) If the query was NOT a SELECT, execute it as normal through the WRITE MySQL server.
4) Return response.
I'm sure that, looking at this, it should be quite simple to do, but no matter how I look at it I'm just not seeing a potential answer - but there's got to be one! Can anyone help/assist?
In addition, I also want the ability to log all write SQL commands for troubleshooting; presumably the best way is to introduce 3a) Write SQL command to plain text file ... into the above scheme. I don't believe MySQL actually logs the non-SELECT queries in any way ... does it?
That type of behavior is a little bit beyond the normal scope of CI. Unfortunately, your best bet is to manually extend the database drivers, specifically overriding the function simple_query or _execute (simple_query is a wrapper around _execute which simply ensures initialization). That is really the only place where you can guarantee that you catch all of the queries and branch the logic accordingly. (You may also want to override close, as that is the cleanup script.)
(Personally, I would have the SELECT DB load a secondary DB into itself and just call $write_db->simple_query conditionally; that seems like it would be the least trouble.)
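To make the routing idea concrete, here is a rough, untested sketch of what such an override could look like. The class name, the $write_db property, and how the second (master) connection gets loaded are all assumptions - actually wiring a custom driver subclass into CodeIgniter 2 needs extra boilerplate that is omitted here:

class MY_DB_mysql_driver extends CI_DB_mysql_driver {

    public $write_db; // second connection to the write master, loaded elsewhere

    function simple_query($sql)
    {
        if (stripos(ltrim($sql), 'SELECT') === 0) {
            // Read path: this is where a Memcache lookup could go before
            // falling through to the read replica this driver is connected to.
            return parent::simple_query($sql);
        }

        // Write path (INSERT/UPDATE/DELETE/...): log it, then send it to the master.
        log_message('debug', 'WRITE QUERY: ' . $sql);
        return $this->write_db->simple_query($sql);
    }
}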

Keep checking for errors in queries

I'm a bit obsessed now. I'm writing a PHP-MySQL web application, using PDO, that has to execute a lot of queries. Currently, every time I execute a query, I also check whether that query succeeded or failed. But recently I started thinking there's no reason for it, and that it's a waste of lines to keep checking for an error.
Why should a query go wrong when your database connection is established and you are sure that your database is fine and has all the needed tables and columns?
You're absolutely right and you're following the correct way.
Under normal circumstances there should be no invalid queries at all. Each query should be valid for any possible input value.
But something still can happen:
You can lose the connection during the query
Table can be broken
...
So I suggest you switch PDO to throw exceptions on errors and write one global handler which catches this kind of error and outputs some kind of sorry-page (plus adds a line with some details to a log file).
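A minimal sketch of that setup (the log path and sorry-page output are placeholders):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

set_exception_handler(function ($e) {
    // One global handler: log the details, show a generic sorry page.
    error_log(date('c') . ' ' . $e->getMessage() . "\n", 3, '/var/log/myapp/db-errors.log');
    http_response_code(500);
    echo 'Sorry, something went wrong. Please try again later.';
});

// A failing query now ends up in the handler above instead of
// needing an if () check after every single call:
$pdo->query('SELECT * FROM some_table');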

MySQL Triggers: How to know which script called it?

I have a mysql trigger that logs every time a specific table is updated.
Is there a way to also log WHICH PHP SCRIPT triggered it? (without modifying each php script of course, that would defeat my purpose)
Also, is there a way to log what was the SQL statement right before the UPDATE that triggered it?
Thanks
Nathan
Short answers: no and no. Sorry.
What are you trying to achieve? Perhaps there's another way....
No, but you can get some more specific direction.
First, if you're using persistent connections, turn them off. This will make your logs easier to use.
Second, since it sounds like you have multiple code bases accessing the same database, create a different user for each code base with exactly the same rights and make each code base log in with a different user. Now when you look at the log, you can see which application is doing what.
Third, if you have the query log on, then the UPDATE immediately preceding the trigger will be the UPDATE that caused the trigger.
Fourth, if your apps use any sort of encapsulation for the MySQL connection, it should be trivial to modify it to write the call stack to a file whenever a query is sent to the database.
I've read through a few of the answers and the comments. I had one idea that would be useful only if your queries pass through a single point. For example, if you have a database class that all queries are executed through.
If that is the case, you could possibly add a comment to the query itself. The comment would include the function call trace, and would be added to the query as an SQL comment.
Next, you would turn query logging on and be able to see where each query is getting called from in the log file.
If your queries do not pass through a single point, you may be out of luck.
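For what it's worth, a minimal sketch of that single-point idea, assuming a hypothetical run_query() wrapper that all queries already go through:

function run_query(PDO $pdo, $sql)
{
    // Find out where this call came from (file:line of the call site).
    $trace  = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS);
    $caller = isset($trace[0]['file'])
        ? $trace[0]['file'] . ':' . $trace[0]['line']
        : 'unknown caller';

    // Make sure nothing in the path can close the comment early.
    $caller = str_replace(array('/*', '*/'), '', $caller);

    // The comment travels with the statement, so it shows up in the
    // server's query log right next to the query itself.
    return $pdo->query('/* called from ' . $caller . ' */ ' . $sql);
}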
One final suggestion would be to take a look at MySQL Proxy. I have not used it much but it is designed to do intermediate processing of queries. However, I still think you would need to modify your PHP scripts to pass additional information.

What should I do if anything related to database communication goes wrong?

Here's the problem: when a script starts modifying the database and something goes wrong, the database usually ends up corrupted. For example, let's say we have a User table and a Photos table.
A script creates a user record, and in the next lines it attempts to create a photo record. The photo has a user_id column. Now let's assume something goes wrong and PDO's lastInsertId() doesn't return the id of the user. So what happens in the worst case: we get a user with no photo, and a photo with no valid user_id. Broken reference. Three weeks to debug.
Are there any good strategies to follow to prevent exactly this kind of problem? In my code below, you can see that I at least try to log it to a file and quit the script execution to prevent more damage and DB corruption.
public function lastInsertId() {
    $id = $this->dbh->lastInsertId();
    if (!is_numeric($id)) {
        $this->logError("DB::lastInsertId() did not return an id as expected!");
        die();
    }
    return $id;
}
Maybe I have to use transactions all over the place, any time a query B depends on a query A, and so forth? Is that the solution to go for?
Should I do a "precautionary rollback" before the die() call? I guess it would not hurt much at this point, would it? I'm not sure...
The solution would be to use transactions each time you have several queries for which it should be "all or none", yes -- that's the A of ACID: Atomicity.
You can do a rollback before your die, if you want; it won't change much (a transaction that is not committed will automatically be rolled back by the DB engine), but it will make your code clearer and easier to understand.
As a side note: using die this way is probably not the "right" way to deal with errors: it'll prevent you from displaying any kind of "nice" error page, for instance.
A solution that's more often used is to have some kind of exception thrown when there is such a problem -- and deal with those exceptions in a higher layer of your application (in one single place), to display an error page.
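Applied to the user + photo example from the question, the transactional version might look something like this (a sketch; the column names other than user_id are made up, and $dbh is assumed to be a PDO handle on a transactional engine such as InnoDB):

$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $dbh->beginTransaction();

    $dbh->prepare('INSERT INTO User (name) VALUES (?)')->execute(array('Alice'));
    $userId = $dbh->lastInsertId();

    $dbh->prepare('INSERT INTO Photos (user_id, path) VALUES (?, ?)')
        ->execute(array($userId, '/img/alice.jpg'));

    $dbh->commit();
} catch (Exception $e) {
    // Neither row is kept if anything above failed: that's the Atomicity.
    $dbh->rollBack();
    throw $e; // let the higher-level handler display the error page
}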
Outside of using a transactional engine (InnoDB if you're using MySQL, or just use PostgreSQL, etc.) and wrapping the relevant atomic activities in transactions, there's not a great deal you can do.
As @Seb says, you can create a transactional log and you could even use a master/slave database setup, but this won't really add much in terms of coverage.
You should keep a log of all transactions, so if the automated process goes wrong (even your rollbacks, fallback procedures, etc), you still can revert all effects back manually.

find time to execute MySQL query via PHP

I've seen this question around the internet (here and here, for example), but I've never seen a good answer. Is it possible to find the length of time a given MySQL query (executed via mysql_query) took via PHP?
Some places recommend using php's microtime function, but this seems like it may be inaccurate. The mysql_query may be bogged down by network latency, or a sluggish system which isn't responding to your query quickly, or some other unrelated cause. None of these are directly related to the quality of your query, which is the only thing I really want to test out here. (Please mention in the comments if you disagree!)
My answer is similar, but varied. Record the time before and after the query, but do it within your database query class. Oh, you say you are using mysql_query directly? Well, now you know why you should use a class wrapper around those raw PHP database functions (pardon the snark). Actually, one is already built, called PDO:
http://us2.php.net/pdo
If you want to extend the functionality to do timing around each of your queries... extend the class! Simple enough, right?
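As a minimal sketch of the timing idea -- shown here as a thin wrapper around PDO to keep it short, though you could just as well subclass PDO itself (the class name is made up):

class TimedDb
{
    private $pdo;
    public $timings = array();

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function query($sql)
    {
        $start  = microtime(true);
        $result = $this->pdo->query($sql);

        // Remember each statement together with how long it took.
        $this->timings[] = array($sql, microtime(true) - $start);

        return $result;
    }
}

// Usage:
// $db = new TimedDb(new PDO('mysql:host=localhost;dbname=test', 'user', 'pass'));
// $db->query('SELECT * FROM big_table');
// print_r($db->timings);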
If you are only checking the quality of the query itself, then remove PHP from the equation. Use a tool like the MySQL Query Browser or SQLyog.
Or if you have shell access, just connect directly. Any of these methods will be superior in determining the actual performance of your queries.
At the PHP level you pretty much would need to record the time before and after the query.
If you only care about the query performance itself, you can enable the slow query log in your MySQL server: http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html That will log all queries that take longer than a specified number of seconds.
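On MySQL 5.1 and later the slow query log can also be switched on at runtime with a privileged account (on 5.0 it has to be configured in my.cnf instead); a rough sketch, with the threshold and path as placeholders:

$pdo->exec("SET GLOBAL slow_query_log = 'ON'");
$pdo->exec("SET GLOBAL long_query_time = 1");   // log anything slower than 1 second
$pdo->exec("SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log'");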
If you really need query information maybe you could make use of SHOW PROFILES:
http://dev.mysql.com/doc/refman/5.0/en/show-profiles.html
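From PHP, a profiling run could look roughly like this (a sketch; profiling is enabled per session, and SHOW PROFILES reports a duration for each profiled statement):

$pdo->exec('SET profiling = 1');

$pdo->query('SELECT COUNT(*) FROM big_table');   // the query to measure

foreach ($pdo->query('SHOW PROFILES') as $row) {
    echo $row['Query_ID'] . ': ' . $row['Duration'] . "s  " . $row['Query'] . "\n";
}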
Personally, I would use a combination of microtime-ing, the slow query log, mytop, and analyzing problem queries with the MySQL client (command line).
