MySQL suppress duplicate key error - PHP

When I need to know whether something is unique before it gets inserted, I usually just attempt the insert and then, if it fails, check whether mysql_errno() is 1062. If it is, I know the insert failed on a duplicate key and I can do whatever I need to do.
The most common place for this is a user table. I set the email column as unique, since that's the "username" for logging in. Instead of running additional queries to check uniqueness when processing registration forms, I just compile the query, execute it and check for error number 1062. If it fails with 1062, I tell the user nicely that the email is already registered and all is good.
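A minimal sketch of that pattern, using mysqli since the legacy mysql_* functions were removed in PHP 7 (the users table, column and messages here are assumptions, not from the original code):

```php
<?php
const ER_DUP_ENTRY = 1062; // MySQL's error code for a duplicate key

function isDuplicateKeyError(int $errno): bool
{
    return $errno === ER_DUP_ENTRY;
}

// Usage, assuming a live connection $db = new mysqli(...):
//
// $stmt = $db->prepare('INSERT INTO users (email) VALUES (?)');
// $stmt->bind_param('s', $email);
// if (!$stmt->execute()) {
//     if (isDuplicateKeyError($db->errno)) {
//         echo 'That email address is already registered.';
//     } else {
//         error_log($db->error); // genuinely unexpected
//     }
// }
```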
However, I recently set up a very basic man-in-the-middle SQL query function which gives me and the other developers on the system access to query times, a log of all the SQL queries at the bottom of the page and, most importantly, a function which establishes the MySQL connection to the correct database on demand (rather than having to connect and pass link identifiers around manually).
The SQL error log this function writes to disk is now full of my duplicate-entry errors. This obviously doesn't look good to other people seeing errors (even though they're handled and expected). Is there a way of suppressing these errors somehow while still being able to check mysql_errno()?

While doing a bit of housework on my account here at SO, I thought it best to answer this with my findings so I can close it. This is basically a conclusion drawn from my last comment above.
If you (like me) use certain MySQL error codes in your application to reduce validation queries or code (duplicate key being the most common, I find), the only way to stop an error being thrown is to catch it inside MySQL and handle it there. I won't go into the how-to here, but a good place to get started is:
http://dev.mysql.com/doc/refman/5.0/en/declare-handler.html
Note: just for the new devs out there, don't forget to check out ON DUPLICATE KEY UPDATE (google it). It was something blindly suggested to me elsewhere. It doesn't fit this example, but I've used it for years to save checking for duplicate records before insertion (it does not return a failure on duplicate entries, so it's only useful if you were thinking of using a duplicate error handler to perform an update instead... hence finding your way here).
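For reference, hypothetical sketches of both techniques (the users schema is assumed): a handler that catches error 1062 inside a stored program, and ON DUPLICATE KEY UPDATE, which performs an update instead of raising 1062:

```sql
-- Inside a stored program, a handler can swallow the duplicate-key error:
--   DECLARE CONTINUE HANDLER FOR 1062 SET @dupe = 1;

-- ON DUPLICATE KEY never raises 1062; it performs the update instead:
INSERT INTO users (email, last_seen)
VALUES ('foo@example.com', NOW())
ON DUPLICATE KEY UPDATE last_seen = NOW();
```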

Related

Recover from failed INSERT query without additional db interrogations

I have to perform some INSERTs when creating users in a web application.
If a query fails, I want to know whether it did so because of a duplicate entry on the PRIMARY key or on one of the other two indexes.
I'd like to avoid testing for each one with additional queries, because I need to create around fifty users at a time and it may take too long.
I searched the PHP manual for MySQLi's error handling, but I only found $errno (code 1062 in my case) and $sqlstate (code 25000).
This info does not tell me which of the indexes is the culprit.
Since the $error string reports the value that caused the failure (for example, it says "Duplicate entry 'someValue' for key 'indexName_UNIQUE'"), I was wondering whether I can extract 'someValue' somehow and thereby identify the culprit.
Running strpos() on the message string doesn't look like good practice.
We are using MariaDB.
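Since neither $errno nor $sqlstate identifies the index, one hedged workaround is to pull the value and key name out of the message with a regex rather than a bare strpos() (a sketch; the format string below is what English-locale MySQL/MariaDB builds emit, so it still breaks under other locales):

```php
<?php
// Pulls the offending value and index name out of MySQL's 1062 message.
// Returns [value, keyName] on a match, or null for any other error text.
function parseDuplicateEntry(string $message): ?array
{
    if (preg_match("/Duplicate entry '(.*)' for key '([^']+)'/", $message, $m)) {
        return [$m[1], $m[2]];
    }
    return null;
}
```

For example, parseDuplicateEntry("Duplicate entry 'someValue' for key 'indexName_UNIQUE'") yields ['someValue', 'indexName_UNIQUE'], after which the key name identifies the culprit index.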

Should I rely on MySQL errors in PHP code?

I was wondering if logic duplication can be reduced on this one. Let's say I have a users table with an email column that should be unique per record. What I normally do is have a unique index on the column, plus validation code that checks whether the value is already used:
SELECT EXISTS (SELECT * FROM `users` WHERE `email` = 'foo@bar.com')
Is it possible and practical to skip the explicit check and just rely on the database error when trying to insert a non-unique value? If we repeat the uniqueness logic in two layers (database and application code), it's not really DRY.
I do that pretty often. In my custom database class I throw a specific exception for violated constraints (this can easily be deduced from the numeric error code returned by MySQL), and I catch that exception upon insert.
It has the benefit of simplicity, and it also prevents race conditions: MySQL takes care of data integrity in both respects, the data itself and concurrent access.
The only drawback is that it can be tricky to figure out which index failed when you have more than one (for instance, you may want both a unique email and a unique username). MySQL drivers only report the violated key name in the text of the error message; there is no specific API for it.
In any case, you may want to inform the user about available names at an earlier stage, so a prior check can also make sense.
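This answer's pattern can be sketched like this (an in-memory SQLite database stands in for MySQL so the example is self-contained; both report integrity-constraint violations as SQLSTATE 23000, and the table and exception names are made up):

```php
<?php
class DuplicateEntryException extends RuntimeException {}

function insertEmail(PDO $db, string $email): void
{
    try {
        $stmt = $db->prepare('INSERT INTO users (email) VALUES (?)');
        $stmt->execute([$email]);
    } catch (PDOException $e) {
        if ($e->getCode() === '23000') { // integrity constraint violation
            throw new DuplicateEntryException("Email already taken: $email");
        }
        throw $e; // anything else really is unexpected
    }
}

$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE users (email TEXT UNIQUE)');

insertEmail($db, 'foo@example.com');       // first insert succeeds
try {
    insertEmail($db, 'foo@example.com');   // duplicate: caught and re-thrown
    $duplicateDetected = false;
} catch (DuplicateEntryException $e) {
    $duplicateDetected = true;
}
```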
It makes sense to enforce the uniqueness of the email address in the database. Only that way can you be sure it is really unique.
If you do it only in the PHP code, then any error in that code may corrupt the database.
Doing it in both places is not needed but, in my opinion, does not offend against the DRY rule. For instance, you might want to check for the presence of an email address during registration of a new user rather than rely only on the database reporting an error.
I assume by "DRY" you mean Don't Repeat Yourself. Applying the same logic in more than one place is not intrinsically bad - there's the old adage "measure twice, cut once".
In the more general case, I usually follow the pattern of applying the insert and catching the constraint violation, but for users with email addresses it's a much more complicated story. If the email is the only attribute required to be unique, then we can skip a lot of discussion about a person having more than one account and about working out which attribute is not unique when a constraint violation is reported. That the email is the only unique attribute is implied in your question, but not stated.
Based on your comments you appear to be sending this SQL from your PHP code. Given that, there are 2 real issues with polling the record first:
1) Performance and capacity: it's an extra round trip, parse and read operation on the database.
2) Security: giving your application user direct table access (particularly to tables controlling access) is not good for security. It is much safer to encapsulate this as a stored procedure/function running with definer privileges and returning messages more closely aligned to the application logic. Even if you still go down the route of poll first / insert if absent, you eliminate most of the overhead in issue 1.
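A hypothetical sketch of such an encapsulated insert (the procedure name, definer account and status strings are all invented for illustration):

```sql
DELIMITER $$
CREATE DEFINER = 'app_owner'@'localhost' PROCEDURE create_user(IN p_email VARCHAR(255))
SQL SECURITY DEFINER
BEGIN
    -- Turn the raw 1062 error into an application-level status message
    DECLARE EXIT HANDLER FOR 1062 SELECT 'EMAIL_TAKEN' AS status;
    INSERT INTO users (email) VALUES (p_email);
    SELECT 'OK' AS status;
END$$
DELIMITER ;
```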
You might also spend a moment considering the difference between your query and...
SELECT 1 FROM `users` WHERE `email` = 'foo@bar.com'
On top of the database constraint, you should check whether the given email already exists before trying to insert it. Handling it that way is cleaner and allows better validation and responses for the client, without throwing an error.
The same goes for classic range constraints such as MIN/MAX (note that MySQL ignores such constraints). You should check, validate and return a validation error message to the client before committing any change to the database.

Should I check a unique constraint with php?

Maybe this question has already been asked, but I don't really know how to search for it:
I have the Postgres table "customers", and each customer has its own unique name.
To achieve this, I added a unique constraint on that column.
I access the table with PHP.
When a user now tries to create a new customer with a name that has already been taken, the database reports an "Integrity Constraint Violation" and PHP throws an error.
What I want is to show an error next to the HTML input field, "Customer name already taken", when this happens.
My question is how I should do this.
Should I catch the PDOException, check whether the error code is "UNIQUE VIOLATION" and then display a message according to the exception message, or should I check for duplicate names with an additional statement before I even try to insert a new row?
Which is better practice: making a further SQL statement, or catching and analyzing error codes?
EDIT:
I'm using transactions, and I'm catching any exception in order to roll back.
The question is whether I should filter out unique violations so they don't lead to a rollback.
EDIT2:
If I use the exception method, I would have to analyze the exception message to ensure that the violated unique constraint really belongs to the "name" column.
This is everything I get from the exception:
["23505",7,"FEHLER: doppelter Schlüsselwert verletzt Unique-Constraint <customers_name_unique>\nDETAIL: Schlüssel <(name)=(test)> existiert bereits."]
(In English: "ERROR: duplicate key value violates unique constraint <customers_name_unique> DETAIL: Key <(name)=(test)> already exists.")
The only way to get information about the column is to check whether "customers_name_unique" (the name of the unique constraint) appears in the message.
But as you can also see, the message is in German, so the output depends on the system and might change.
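One way to make that check a little less fragile (a sketch; PDO's pgsql driver exposes no dedicated field for the constraint name, so this still searches the message text, but only for the constraint name, which is not localized, instead of the German prose):

```php
<?php
// The constraint name 'customers_name_unique' comes from the question;
// 23505 is PostgreSQL's SQLSTATE for unique_violation.
function isNameTaken(PDOException $e): bool
{
    // errorInfo: [SQLSTATE, driver code, driver message]
    $info = $e->errorInfo ?? [$e->getCode(), null, $e->getMessage()];
    return $info[0] === '23505'
        && strpos((string) $info[2], 'customers_name_unique') !== false;
}
```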
You should catch the PDO exception.
It's quicker to let the database fail than to look up whether the record already exists.
This also makes the application "less aware" of the business logic in the database. The unique index you tell the database about really is business logic, and since the database is handling that particular rule, it's better to skip the same check in the other layers (the application).
Also, when the database layer handles the exception, you avoid race conditions. If your application checks for consistency itself, you risk another user adding the same record after the first request has checked that the name is available.
The question doesn't really belong here, but I'll answer you.
Exceptions are for situations when something exceptional happens. That means you shouldn't use them to handle situations that may happen often; doing so is like GOTO code. The better solution is to check beforehand whether there is a duplicate row. However, the solution with exceptions is easier, so you need to decide whether you just want something that works or something written the way it should be.
I would catch the exception, because (thanks to concurrency) it can happen anyway, even if you check with an extra query beforehand.
Errors are bad; I'd rather check that the name does not exist before adding it. You should still check for errors on insert, though, to avoid the situation where concurrent scripts try to insert the same name (there is a small window between checking for existence and inserting, since it's not a transaction).
On save, check whether the value exists (by the simple field, in your case the constrained column).
If it does, show the user a notification about the duplication. But don't force the DB server to return you exceptions.

How to debug AJAX (PHP) code that calls SQL statements?

I'm not sure if this is a duplicate of another question, but I have a small PHP file that runs some SQL INSERTs and DELETEs for an image tagging system. Most of the time both the insertions and deletes work, but on some occasions the insertions don't.
Is there a way to view why the SQL statements failed to execute, similar to using SQL functions in Python or Java, where a failure tells you why (for example: duplicate key insertion, unterminated quote, etc.)?
There are two things I can think of off the top of my head, and one thing that I stole from amitchhajer:
pg_last_error will tell you the last error in your session. This is awesome for obvious reasons, and you're going to want to log the error to a text file on disk in case the issue is something like the DB going down. If you try to store the error in the DB instead, you might have some HILARIOUS* hijinks in the process of figuring out why.
Log every query to this text file, even the successful ones. Find out if the issue affects identical operations (an issue with your DB or connection, again) or certain queries every time (issue with your app.)
If you have access to the guts of your server (or your shared hosting is good,) enable and examine the database's query log. This won't help if there's a network issue between the app and server, though.
But if I had to guess, I would imagine that when the app fails it's getting weird input. Nine times out of ten the input isn't getting escaped properly or - since you're using PHP, which murders variables as a matter of routine during type conversions - it's being set to FALSE or NULL or something and the system is generating a broken query like INSERT INTO wizards (hats, cloaks, spell_count) VALUES ('Wizard Hat', 'Robes', );
*not actually hilarious
Start monitoring your SQL queries by enabling the query log. There you can see every query that is fired, along with any errors.
This tutorial on starting the logger will help.
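For MySQL, the general query log can also be switched on at runtime if you have the SUPER privilege (the file path here is just an example):

```sql
SET GLOBAL general_log_file = '/var/log/mysql/query.log';
SET GLOBAL general_log = 'ON';
-- remember to switch it off again on a busy server:
-- SET GLOBAL general_log = 'OFF';
```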
Depending on which API your PHP file uses (let's hope it's PDO ;)) you could check for errors in your current transaction with something like:
$naughtyPdoStatement->execute();
if ($naughtyPdoStatement->errorCode() != '00000') {
    DebuggerOfChoice::log(implode(' ', $naughtyPdoStatement->errorInfo()));
}
When using the legacy APIs, there are equivalents like mysql_errno(), mysql_error(), pg_last_error(), etc., which let you do the same. DebuggerOfChoice::log can of course be whatever log function you'd like to use.

Keep checking for errors in queries

I'm a bit obsessed now. I'm writing a PHP/MySQL web application, using PDO, that has to execute a lot of queries. Currently, every time I execute a query, I also check whether that query went wrong or succeeded. But recently I've started to think there's no reason for it, and that it's a waste of lines to keep checking for errors.
Why should a query go wrong when your database connection is established and you are sure your database is fine and has all the needed tables and columns?
You're absolutely right, and you're following the correct way.
Under correct circumstances there should be no invalid queries at all; each query should be valid for any possible input value.
But something still can happen:
You can lose the connection during the query
A table can become corrupted
...
So I suggest switching PDO to throw exceptions on errors and writing one global handler which catches this kind of error and outputs some kind of sorry-page (and adds a line with details to a log file).
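A minimal sketch of that setup (an in-memory SQLite connection stands in for MySQL; the sorry-page text and log format are assumptions):

```php
<?php
// One global handler: log the detail, show the user a generic page.
set_exception_handler(function (Throwable $e): void {
    error_log(date('c') . ' ' . $e->getMessage()); // details for developers
    http_response_code(500);
    echo 'Sorry, something went wrong.';           // no internals leak out
});

$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// From here on, no per-query "if (!$result) ..." checks are needed:
// any failing query throws a PDOException that reaches the handler.
```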
