I have some code that creates a user and later down the line it checks if the user is real. Essentially this:
// INSERT statement fired in here
$user = self::_createUser( $params );
// Performs a sanity check by hitting DB with
// SELECT for the ID returned from creation within object
if ( !$user->isReal() ) {
    throw new Exception( "User failed to create: " . var_export( $params, 1 ), MYCODE );
}
Since the user was just created, this exception should never be thrown, and it never is in production or in my sandbox environment. However, our test environment uses Jenkins to kick off multiple tests at once, and that is where the lines above run.
The exception will be thrown at random, on different tests at each run of our suite.
We turned on all MySQL logging and found that the sanity SELECT is being called BEFORE the INSERT; however, the SELECT is clearly selecting the proper ID from the DB, which it couldn't have known unless the create had worked.
How can the MySQL server randomly receive queries in the wrong order? I've never seen anything like this before.
EDIT Here is more code for clarification
function _createUser( $params ) {
    // db() returns the connection via Zend, and insert() translates to
    // something like: INSERT INTO users SET name='a'
    // Returns the ID of the inserted row
    $this->_id = self::db()->insert( 'users', $params );
}
function isReal() {
    // Returns false when the row is not there
    return self::db()->fetchRow( "SELECT * FROM users WHERE id={$this->_id}" );
}
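As an aside, a cheaper sanity check is to inspect the insert result on the same connection instead of issuing a second SELECT. This is only a sketch, assuming a Zend_Db-style adapter; `createUserChecked` is an illustrative name, not part of the original code:

```php
<?php
// Sketch only: assumes a Zend_Db-style adapter; names are illustrative.
function createUserChecked($db, array $params)
{
    // Zend_Db_Adapter::insert() returns the number of affected rows;
    // the new ID comes from lastInsertId() on the same connection.
    $affected = $db->insert('users', $params);
    if ($affected !== 1) {
        throw new Exception('User failed to create: ' . var_export($params, true));
    }
    return $db->lastInsertId();
}
```

Because both calls happen on the same connection, this avoids any question of a second session seeing (or not seeing) the uncommitted row.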
Also, the MySQL logs show the queries as I expect in all cases, with NO DEFER.
EDIT 2
Using the command line rather than Jenkins to run the tests in parallel still makes this happen.
Meanwhile, it only runs up to 7 tests at the same time, and there is nowhere in the code that would delete the user at that point. There are no blanket deletes except BEFORE all the tests run.
EDIT 3
Okay, so the tests that run hold some persistent connections: one for MySQL and one for Mongo. Before my suite runs, it wipes MySQL by rebuilding the DB from scratch and erases memcache. It was not doing this for Mongo, which was causing some other random errors. Once I added Mongo to the reset script, the MySQL errors seem to have gone away. This makes zero sense to anyone on my team or to me. Can anyone make sense of it?
I don't know why, but two days later this error went away. It went away as my 3rd edit said: after I made sure that when I emptied MySQL and memcache, I also emptied Mongo. The best guess I have is that when bad Mongo data caused an exception, PHP sputtered in some way. I'm not sure whether it's a PHP bug, or whether I'd sound crazy trying to report it.
Although I'm posting this answer, I'd still appreciate any additional input.
UPDATE AT THE END
I'm following this Automatic Partition Maintenance in MySQL tutorial, which details a generic method for removing and adding MySQL table partitions based on date ranges.
The idea is that you can jettison older table data automatically after a certain length of time, and create new table partitions for current data as needed.
However, since my site will likely be hosted on a "shared" provider package, it seems likely that MySQL events will be unavailable to me.
So I'm cross-fertilizing the stored procedures described in the first tutorial with an alternative method of invoking them, detailed (with some modifications) in this Stack Overflow answer: Partition maintainance script for Mysql
On my local test machine, I want to run the PHP script as a CRON job from Webmin.
When I run the stored procedures from Adminer (which has similar functionality to phpMyAdmin) against the MySQL test database, they execute as expected: partitions are deleted, and the whole process takes a couple of minutes to complete.
However, when I run my modified PHP script from Webmin as a CRON job, nothing seems to happen. There are no errors, but the script returns immediately with "OK".
Similarly, when I run the script from my LAMP machine's shell, it immediately returns with "OK".
This is the PHP script:
#!/usr/bin/env php
<?php
$connection = mysqli_connect('localhost', 'my_username', 'my_password', 'employees');
if (!$connection) {
    die('Connect fail: ' . mysqli_connect_error());
}
$result = mysqli_query($connection, "CALL perform_partition_maintenance('employees', 'titles', 3, 216, 5)")
    or die('Query fail: ' . mysqli_error($connection));
if ($result) {
    echo "OK";
} else {
    echo "FAIL";
}
mysqli_close($connection);
I'd be very grateful for any suggestions about where I might be going wrong.
UPDATE
In line with Nick's suggestion, I've been adding a lot of debug statements. I went a slightly different route because it was a bit easier to do: lots of new INTO OUTFILE statements.
But what I've observed has baffled me. A small segment of the Stored Procedure is below:
OPEN cur1;
read_loop: LOOP
    FETCH cur1 INTO current_partition_name;
    IF done THEN
        LEAVE read_loop;
    END IF;
    IF !@first AND p_seconds_to_sleep > 0 THEN
        SELECT CONCAT('Sleeping for ', p_seconds_to_sleep, ' seconds');
        SELECT SLEEP(p_seconds_to_sleep);
    END IF;
    SELECT CONCAT('Dropping partition: ', current_partition_name);
    ...
    SET @first = FALSE;
END LOOP;
CLOSE cur1;
This is all taken, unmodified, from the web tutorial on Geoff Montee's page, and it works flawlessly in other contexts (i.e., within Adminer and from the SQL console), just not in combination with a PHP script.
However, when I comment out the line that says:
SELECT CONCAT('Dropping partition: ', current_partition_name);
Everything works just fine, but the script chokes when I put that line back in. I can't make any sense of this, particularly since, in testing, I'm writing "current_partition_name" out to a file on disk for the first three iterations of the loop, and referencing the string in that situation doesn't cause any issues. It's very odd.
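One possible culprit, offered as a guess rather than a confirmed diagnosis: each SELECT CONCAT(...) inside the procedure's loop produces an extra result set, and a CALL that returns multiple result sets cannot be handled by a single mysqli_query(). Until every result set is drained, the connection refuses further work, and the call can appear to fail. Draining them with mysqli_multi_query() and mysqli_next_result() would look roughly like this (the function name is made up):

```php
<?php
// Sketch: run a CALL and drain every result set the procedure emits,
// so the connection remains usable. callAndDrain is an illustrative name.
function callAndDrain(mysqli $connection, $callSql)
{
    if (!mysqli_multi_query($connection, $callSql)) {
        die('Query fail: ' . mysqli_error($connection));
    }
    do {
        // Each SELECT inside the procedure yields one result set here.
        if ($result = mysqli_store_result($connection)) {
            mysqli_free_result($result);
        }
    } while (mysqli_more_results($connection) && mysqli_next_result($connection));

    if (mysqli_errno($connection)) {
        die('Query fail: ' . mysqli_error($connection));
    }
}

// Usage, mirroring the script above:
// callAndDrain($connection, "CALL perform_partition_maintenance('employees', 'titles', 3, 216, 5)");
```

This would also explain why commenting out the SELECT CONCAT line makes the PHP invocation work while Adminer (which renders every result set) has no trouble.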
This other (apparently unresolved) Stack Overflow question sounds somewhat similar.
Belatedly, I've become aware that partitioning is not available when a table has foreign keys. I'm not sure how I missed this fundamental detail when I was first exploring partitioning as an option.
It's very unfortunate because it renders the entire exercise redundant. I'll have to investigate some sort of a solution involving conventional table deletes, with all of the associated headaches.
Separately, I'm no closer to understanding why commenting out that particular line from Geoff Montee's stored procedure was pivotal in allowing the function to run successfully when invoked from PHP. I'd be tempted to put it down to an interpreter bug (I'm running MySQL 5.5.62 in my test environment), but as mentioned previously, the stored procedure executes flawlessly when initiated from Adminer.
In the website I am developing, users have to sign up (register).
When a user submits the registration form, they should get logged in automatically with their username.
I insert the data into the MySQL database and then retrieve the same data with a SELECT statement, but the problem is that the SELECT statement executes faster than the INSERT (or something), and it results in a fatal error. I want the PHP script that retrieves the data to wait until the insertion is committed. How can I do that? Thanks.
First check whether the query has been executed successfully or not.
$result = mysql_query('SELECT * FROM users WHERE 1=1');
if (!$result) {
    die('Invalid query: ' . mysql_error());
}
Using PHP, requests to an RDBMS such as MySQL are always performed synchronously. So the behaviour you describe might come from an uncommitted transaction initiated prior to your INSERT SQL statement. Indeed, if the transaction is not finished and you perform a SELECT SQL statement on the previously inserted table, you will see nothing new, because the RDBMS does not yet know whether the inserted data should be committed or not.
The first thing you might want to check is whether a transaction is in progress:
Look in the execution stack to see whether an SQL transaction is begun prior to your call to INSERT.
Check whether the autocommit configuration parameter is set (see https://dev.mysql.com/doc/refman/5.0/en/commit.html).
If you are not performing formal transactions in your code, enabling autocommit might be a solution. Otherwise, simply do not forget to send a COMMIT statement when appropriate.
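To make the commit point concrete, here is a minimal, self-contained sketch of the explicit-transaction pattern. It uses PDO with an in-memory SQLite database purely so it runs anywhere; with MySQL only the DSN would change, and beginTransaction()/commit() work the same:

```php
<?php
// Self-contained demo: SQLite in memory stands in for MySQL here.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');

$db->beginTransaction();
$db->exec("INSERT INTO users (name) VALUES ('a')");
// Until commit(), other connections would not see this row.
$db->commit();

// After the commit, a SELECT on any connection finds the row.
$row = $db->query('SELECT name FROM users WHERE id = 1')->fetch(PDO::FETCH_ASSOC);
```

The key detail for the question above is that a SELECT issued on a different connection before the COMMIT will not see the inserted row, even though the INSERT itself has already executed.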
I am facing a similar problem.
After being unable to fix it properly, I've ended up adding a sleep(1); after my INSERT/UPDATE query.
It seems that the server may not be working properly, because queries are supposed to execute synchronously. I suspect my problem is related to memory or filesystem/cache delays on the VPS. Data takes around 0.100 s to be really updated, when it should be reflected at runtime.
I am not sure where the problem really comes from; I'm just telling you how to delay a PHP script.
Regards.
I have a PHP script designed to scrape data from websites. The script checks a locally-hosted mysql database each time it finds a new item to see whether or not that item has already been downloaded and already exists in the Mysql database. If it sees the item already exists in the database, it should ignore it and move on. This is the code I am using to do that:
$result = mysql_query("SELECT * FROM web_media WHERE sourceForum LIKE '%$ForumtoGrab%' AND titleThreadNum=$threadTitleExists");
if ((!mysql_num_rows($result)) && (mysql_num_rows($result) !== FALSE))
{
    // zero rows and no query error: treat the item as new
}
In other words, if the query comes up with zero results, the item is considered new. This script ran fine at my old hosting company for several months. I have recently moved to a new hosting provider, and I'm suddenly running into a very strange issue. Every 12 hours or so, the expression seems to fail at random, and the script finds a bunch of "new" data that already exists in the MySQL database. I've tried running the query manually, and it finds the pre-existing entry without any problem.
Does anyone have any idea what's going on here? I already checked with the hosting provider, and they say that the number of aborted MySQL connections we have is very low and isn't anything to worry about, so it doesn't seem to be an issue with MySQL itself. I suspect it may be an issue with the MySQL query?
Thanks
Try checking the MySQL error log for errors in your query.
PS: I hope you are preparing $threadTitleExists for use in a MySQL query (something like (int)$threadTitleExists or mysqli_real_escape_string($threadTitleExists)).
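To illustrate the (int) cast from the PS above, here is a sketch with a made-up helper name; for the string parameter, a prepared statement (or mysqli_real_escape_string with a live connection) would be the real fix:

```php
<?php
// Sketch: buildThreadQuery is an illustrative helper, not from the original script.
function buildThreadQuery($forum, $threadTitleExists)
{
    // Casting to int strips anything an attacker (or a stray string) appends.
    $safeThread = (int) $threadTitleExists;
    // Placeholder escaping only; prefer a prepared statement in real code.
    $safeForum = addslashes($forum);
    return "SELECT * FROM web_media WHERE sourceForum LIKE '%$safeForum%' "
         . "AND titleThreadNum=$safeThread";
}

$sql = buildThreadQuery('cars', '12; DROP TABLE web_media');
// The injected tail is gone: the query ends with AND titleThreadNum=12
```

If $threadTitleExists ever arrives empty or non-numeric, the cast also guarantees the query stays syntactically valid (it becomes 0) instead of failing intermittently.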
I'm not sure if this is a duplicate of another question, but I have a small PHP file that calls some SQL INSERT and DELETE for an image tagging system. Most of the time both insertions and deletes work, but on some occasions the insertions don't work.
Is there a way to view why the SQL statements failed to execute, similar to using SQL from Python or Java, where a failure tells you why (for example: duplicate key insertion, unterminated quote, etc.)?
There are two things I can think of off the top of my head, and one thing that I stole from amitchhajer:
pg_last_error will tell you the last error in your session. This is awesome for obvious reasons, and you're going to want to log the error to a text file on disk in case the issue is something like the DB going down. If you try to store the error in the DB, you might have some HILARIOUS* hi-jinks in the process of figuring out why.
Log every query to this text file, even the successful ones. Find out if the issue affects identical operations (an issue with your DB or connection, again) or certain queries every time (issue with your app.)
If you have access to the guts of your server (or your shared hosting is good,) enable and examine the database's query log. This won't help if there's a network issue between the app and server, though.
But if I had to guess, I would imagine that when the app fails it's getting weird input. Nine times out of ten the input isn't getting escaped properly or - since you're using PHP, which murders variables as a matter of routine during type conversions - it's being set to FALSE or NULL or something and the system is generating a broken query like INSERT INTO wizards (hats, cloaks, spell_count) VALUES ('Wizard Hat', 'Robes', );
*not actually hilarious
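The "murdered variable" failure mode is easy to reproduce without a database; the values below are illustrative:

```php
<?php
// A value that silently became NULL after a failed lookup or type juggle.
$spellCount = null;

// Naive string interpolation then produces broken SQL:
$sql = "INSERT INTO wizards (hats, cloaks, spell_count) "
     . "VALUES ('Wizard Hat', 'Robes', $spellCount)";
// $sql is now: INSERT INTO wizards (hats, cloaks, spell_count) VALUES ('Wizard Hat', 'Robes', )
```

With a prepared statement, the NULL would instead be bound as SQL NULL and the statement would remain syntactically valid, which is one more reason to log the full query text on failure.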
Start monitoring your SQL queries by enabling the query log. There you can see all the queries that are fired, and any errors.
This tutorial on enabling the logger will help.
Depending on which API your PHP file uses (let's hope it's PDO ;)), you could check for errors in your current transaction with something like:
$naughtyPdoStatement->execute();
if ($naughtyPdoStatement->errorCode() != '00000') {
    DebuggerOfChoice::log( implode(' ', $naughtyPdoStatement->errorInfo()) );
}
When using the legacy APIs, there are equivalents like mysql_errno, mysql_error, pg_last_error, etc., which let you do the same. DebuggerOfChoice::log can of course be whatever log function you'd like to utilise.
I'm a bit obsessed now. I'm writing a PHP/MySQL web application, using PDO, that has to execute a lot of queries. Currently, every time I execute a query, I also check whether that query went bad or good. But recently I've been thinking there's no reason for it, and that it's a waste of lines to keep checking for an error.
Why should a query go wrong when your database connection is established and you are sure that your database is fine and has all the needed tables and columns?
You're absolutely right and you're following the correct way.
Under correct circumstances, there should be no invalid queries at all; each query should be valid for any possible input value.
But something still can happen:
You can lose the connection during the query
A table can be corrupted
...
So I suggest changing the PDO error mode to throw exceptions and writing one global handler that catches this kind of error and outputs some kind of sorry-page (plus adds a line to a log file with some details).
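A minimal sketch of that setup. SQLite in memory is used here only so the example is self-contained; with MySQL the DSN and credentials would differ:

```php
<?php
$db = new PDO('sqlite::memory:');
// Throw exceptions instead of returning false from every call.
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$caught = null;
try {
    // No per-query checks needed; a missing table or lost connection surfaces here.
    $db->query('SELECT * FROM missing_table');
} catch (PDOException $e) {
    // One place to log details and render the sorry-page.
    $caught = $e->getMessage();
    error_log('[db] ' . $caught);
}
echo $caught === null ? 'all good' : 'sorry-page';
```

In a real application, the catch block would live in one global exception handler (set_exception_handler or your framework's equivalent) rather than around each query, which is precisely what lets you drop the per-query checks.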