Here's a strange case that has never happened to me before. I've been coding with mysqli and PHP for about 2 years now, very basic stuff: prepared statements, query executes, etc. I started with WAMP, then moved to Ubuntu and tried LAMP, but didn't like it, so I went looking for a proper testing environment and found PuPHPet, which was perfect. I've been working with it for a few months and everything has been going great.
In all this time I never had to worry about opening, closing or committing connections like I do in Java. I read somewhere that PHP handles connections itself, opening and closing them when the script runs. I never really paid much attention to this, which I should have. I found this very useful post, PHP + MySQL transactions examples, which I will start using from now on.
But the reason I'm posting here is that all of a sudden autocommit is off. I opened my usual PuPHPet project, tried inserting some new entries, and they did not get inserted. So I ran some tests in Workbench and inserted some entries manually; they got inserted, but my auto-increment (AI) index had advanced, meaning that my app did insert something, or at least tried to, because the AI counter had increased. I got some help from a couple of DBAs and we came to the conclusion that it is the autocommit.
Using this piece of code:
$res = mysqli_query($mysqli, "SELECT @@autocommit");
$row = mysqli_fetch_row($res);
printf("Autocommit: %s\n", $row[0]);
It prints "Autocommit: 0", so it is off, but I didn't change anything. Even in my production environment autocommit is on, or at least my inserts stay there. I don't know why it changed in my test environment, so I'm wondering what could have happened. Anybody got any ideas?
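For now I can at least force autocommit back on explicitly at the start of each script; a workaround rather than an explanation, using the same $mysqli connection as in the snippet above:

// Turn autocommit back on for this connection and verify the change.
mysqli_autocommit($mysqli, true);

$res = mysqli_query($mysqli, "SELECT @@autocommit");
$row = mysqli_fetch_row($res);
printf("Autocommit: %s\n", $row[0]); // should now print 1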
UPDATE AT THE END
I'm following this Automatic Partition Maintenance in MySQL tutorial, which details a generic method for removing and adding mySQL table partitions based on date ranges.
The idea is that you can jettison older table data automatically after a certain length of time, and create new table partitions for current data as needed.
However, since my site will likely be hosted on a "shared" provider package, it seems likely that mySQL events will be unavailable to me.
So I'm cross-fertilizing the Stored Procedures described in the first tutorial with an alternative method of invoking them, detailed in this Stack Overflow answer, with some modifications: Partition maintainance script for Mysql
On my local test machine, I want to run the PHP script as a CRON job from Webmin.
When I run the Stored Procedures from Adminer (which has similar functionality to phpMyAdmin), using the mySQL test database, they execute as expected - partitions are deleted, and the whole process takes a couple of minutes to complete.
However, when I run my modified PHP script from Webmin as a CRON job, nothing seems to happen. There are no errors, but the script returns immediately with "OK".
Similarly, when I run the script from my LAMP machine's shell, it immediately returns with "OK".
This is the PHP script:
#!/usr/bin/env php
<?php
$connection = mysqli_connect('localhost', 'my_username', 'my_password', 'employees');
$result = mysqli_query($connection, "CALL perform_partition_maintenance('employees', 'titles', 3, 216, 5)") or die('Query fail: ' . mysqli_error($connection));
if ($result)
echo "OK";
else
echo "FAIL";
mysqli_close($connection);
I'd be very grateful for any suggestions about where I might be going wrong.
UPDATE
In line with Nick's suggestion, I've been adding a lot of debug statements. I went a slightly different route because it was a bit easier to do - lots of new "into outfile" statements.
But what I've observed has baffled me. A small segment of the Stored Procedure is below:
OPEN cur1;
read_loop: LOOP
FETCH cur1 INTO current_partition_name;
IF done THEN
LEAVE read_loop;
END IF;
IF ! @first AND p_seconds_to_sleep > 0 THEN
SELECT CONCAT('Sleeping for ', p_seconds_to_sleep, ' seconds');
SELECT SLEEP(p_seconds_to_sleep);
END IF;
SELECT CONCAT('Dropping partition: ', current_partition_name);
...
SET @first = FALSE;
END LOOP;
CLOSE cur1;
This is all taken, unmodified, from the web tutorial at Geoff Montee's page, and works flawlessly in other contexts (i.e., within Adminer, from the sql console - just not in combination with a PHP script).
However, when I comment out the line that says:
SELECT CONCAT('Dropping partition: ', current_partition_name);
Everything works just fine, but the script chokes when I put that line back in. I can't make any sense of this. Particularly since - in testing - I'm writing out "current_partition_name" into a file on disk for the first three iterations of the loop, and referencing the string in that situation doesn't cause any issues. It's very odd.
This other (apparently unresolved) stackoverflow question sounds somewhat similar.
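My current guess (unconfirmed) is that every diagnostic SELECT inside the procedure sends an extra result set back to the client, and my single mysqli_query() call never consumes them. If that's the case, something along these lines - same connection details as before, and untested - should drain every result set the CALL produces:

#!/usr/bin/env php
<?php
$connection = mysqli_connect('localhost', 'my_username', 'my_password', 'employees');

// CALL can return several result sets (one per SELECT in the procedure, plus the
// call status), so use the multi-query API and loop until all of them are consumed.
if (mysqli_multi_query($connection, "CALL perform_partition_maintenance('employees', 'titles', 3, 216, 5)")) {
    do {
        if ($result = mysqli_store_result($connection)) {
            while ($row = mysqli_fetch_row($result)) {
                echo implode("\t", $row), "\n"; // print the procedure's progress messages
            }
            mysqli_free_result($result);
        }
    } while (mysqli_more_results($connection) && mysqli_next_result($connection));
    echo "OK\n";
} else {
    echo 'Query fail: ' . mysqli_error($connection) . "\n";
}
mysqli_close($connection);

If that is the problem, it would also explain why the procedure behaves itself in Adminer, which displays each result set as it arrives.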
Belatedly, I've become aware that partitioning is not available when a table has foreign keys. I'm not sure how I missed this fundamental detail when I was first exploring partitioning as an option.
It's very unfortunate because it renders the entire exercise redundant. I'll have to investigate some sort of a solution involving conventional table deletes, with all of the associated headaches.
Separately, I'm no closer to understanding why commenting out that particular line from Geoff Montee's Stored Procedure was pivotal in allowing it to run successfully when invoked from PHP. I'd be tempted to put it down to an interpreter bug (I'm running mySQL 5.5.62 in my test environment), but as mentioned previously, the Stored Procedure executes flawlessly when initiated from Adminer.
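For the record, the direction I'm now leaning towards is a plain cron-driven PHP script that deletes expired rows in batches instead of dropping partitions. This is only a rough sketch - the to_date column and the 18-month retention window are stand-ins for whatever I end up using, and it is untested:

#!/usr/bin/env php
<?php
$connection = mysqli_connect('localhost', 'my_username', 'my_password', 'employees');

// Delete rows older than the retention window in small batches so the table
// isn't locked for minutes at a time.
do {
    mysqli_query(
        $connection,
        "DELETE FROM titles WHERE to_date < DATE_SUB(NOW(), INTERVAL 18 MONTH) LIMIT 10000"
    ) or die('Query fail: ' . mysqli_error($connection));
    $deleted = mysqli_affected_rows($connection);
    sleep(1); // brief pause between batches to keep the server responsive
} while ($deleted > 0);

echo "OK";
mysqli_close($connection);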
So I'm currently having a problem with my DB's auto-increments. (NOTE: THIS WORKED ON MY OLD HOST. I'VE SINCE CHANGED SERVERS AND AM NOW HOSTING ON A UBUNTU SERVER.) I'm not really sure if that makes a difference.
But here is the data that gets "INSERTED" into the DB, or at least what should be:
mysql_query("INSERT INTO table_fruits VALUES ('', '$description', '$keywords', '$fruits')");
So the first '' should be the ID, which auto-increments, but it doesn't.
I've tried removing the '' and leaving just the ,
I've tried totally removing the '',
The only thing I've tried that succeeded is changing it to '0'. For some strange reason that worked, but there are a lot of files and pages to check and edit to make sure it works correctly.
Surely that's not an efficient way to resolve this anyway. I don't have the old hosting anymore, so I can't even check whether the PHP version was causing the problem.
Are there any queries I can run to resolve this dramatic, stressful problem that's been driving me batty for hours now? I appreciate any help.
Actually, you can set the column as auto-increment in the column settings. But I have encountered different problems with different versions of MySQL here. So if you want a reliable result in practice, export the whole table, define the auto-increment in the SQL code, and run the new code.
Note: Make sure you take a backup!
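For example, assuming the first column is called id and is already the primary key (both assumptions - adjust to your actual schema), the redefinition can be run from PHP like any other query:

<?php
// One-off fix: make the id column auto-increment again.
// "id" and its INT type are assumptions - match them to the real column definition.
$db = mysqli_connect('localhost', 'user', 'pass', 'your_database');
mysqli_query($db, "ALTER TABLE table_fruits MODIFY id INT NOT NULL AUTO_INCREMENT")
    or die(mysqli_error($db));
echo "id is now AUTO_INCREMENT";
mysqli_close($db);

Once the column is AUTO_INCREMENT, inserting NULL for it (or listing the other columns explicitly and omitting it) will generate the next id automatically.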
My MySQL DB has inserted rows which should not be there; in particular, there is data in a column that is not generally used. I thought this would make it easy to find which PHP script was inserting the rows, but I have searched all the insert queries for the entire site and cannot find which PHP script is running this insert.
It's also very hard to replicate, as this particular table has many crons updating it.
Can anyone please try to point me in the right direction on how I might go about debugging this? Is there a stack trace I can use to determine the originating PHP script? Because it's hard to replicate and I've spent two days searching for the code causing the inserts, I'm open to suggestions.
I'm normally quite good at debugging, but this bug is like a ghost.
The only thing I can think of, if you have looked at all your code, including all your old cron scripts, is to put an insert trigger on that table and use it to find out what time of day your extra rows get inserted.
Nasty problem!
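Something along these lines, as a rough sketch - the table name mystery_table and the audit columns are placeholders, and it assumes you're allowed to create triggers on that host:

<?php
// One-off setup: log when and by whom every insert on the problem table happens.
$db = mysqli_connect('localhost', 'user', 'pass', 'your_database');

mysqli_query($db, "CREATE TABLE IF NOT EXISTS insert_audit (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    inserted_at DATETIME,
    db_user VARCHAR(255),
    connection_id BIGINT
)") or die(mysqli_error($db));

mysqli_query($db, "CREATE TRIGGER mystery_table_ai AFTER INSERT ON mystery_table
    FOR EACH ROW
    INSERT INTO insert_audit (inserted_at, db_user, connection_id)
    VALUES (NOW(), CURRENT_USER(), CONNECTION_ID())") or die(mysqli_error($db));

echo "Trigger installed";
mysqli_close($db);

Cross-reference the timestamps against your cron schedule; if the different scripts connect with different MySQL users, CURRENT_USER() will point at the culprit directly.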
This might be a horrible idea depending on your error reporting settings and what type of environment you are debugging in, but if you are in a position to have some errors thrown, I'd remove write permissions for the column in question and wait to see which script throws the error. If you are lucky, the error log/report will include a script name and line number.
So I wanted to start a browser game project. Just for practise.
After years of PHP programming, today I heard about transactions and InnoDB for the first time ever.
So I googled it and still have some questions.
My first encounter with it was on a website that said InnoDB would be necessary when programming a browser game, because it might be used by many people at the same time, and if two people access a database table at the same time (with one nanosecond difference, for example), things could get confusing and data might be lost, or your SELECT might not reflect the update made by the access one nanosecond earlier (because that script was still running and couldn't change it yet), and so on.
And apparently, transactions solve this problem by first handling the first access (until it is completed) and then handling the second one. Is this correct?
And another feature is that if you have, for example, 2 queries in your transaction and the second one fails, it "rolls back" and "deletes" (or never applies) the changes of the first (successful) query. Right? So either everything goes as it should, or nothing changes at all. That would be great, I think.
Another question: when should I use transactions? Every time I access the database? Or is it better to use them just for some particular database accesses? And should I always use try {} catch() {}?
And one last question:
How does a transaction proceed?
My understanding is the following:
You start a transaction
You do your queries and change the database or SELECT something
If everything went well, you commit the changes so they get applied to the database
If something goes wrong with the queries, it cancels and jumps to the catch() {}, where you roll back the transaction and the changes don't get applied
Is this correct? Of course, besides the question how to start, commit and rollback a transaction in your code.
Yes, this is correct. You can also create savepoints to save your current point before running a query. I strongly recommend you look into the MySQL reference documentation; it is explained there clearly.
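A minimal sketch of that flow with mysqli (the table, columns and credentials are made up for illustration):

<?php
// Make mysqli throw exceptions so a failed query lands in the catch block.
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);

$db = mysqli_connect('localhost', 'user', 'pass', 'game');

mysqli_begin_transaction($db);           // 1. start the transaction
try {
    // 2. run your queries
    mysqli_query($db, "UPDATE players SET gold = gold - 100 WHERE id = 1");
    mysqli_query($db, "UPDATE players SET gold = gold + 100 WHERE id = 2");

    mysqli_commit($db);                  // 3. everything went well: apply the changes
} catch (mysqli_sql_exception $e) {
    mysqli_rollback($db);                // 4. something failed: undo both queries
    echo "Transaction failed: " . $e->getMessage();
}

mysqli_close($db);

Note that this only works if the tables are InnoDB; MyISAM tables ignore the transaction and commit every statement immediately.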
I have 1 million rows in my MySQL database, and when I export the whole data set it gets stuck partway through, showing the download box for a long time. Sometimes it exports without any issues, but if I do multiple table exports, a couple of tables may export and the others get stuck. Why is this happening, and what is the workaround for it?
Well, I am using phpMyAdmin to export.
It is most likely due to the data size. The web server could hit timeout issues or run out of memory when exporting large amounts of data.
I suggest exporting one table at a time with phpMyAdmin (in SQL format, avoid using XLS), but if it still fails, you may consider using mysqldump.
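For example, from the shell, something along these lines per table (adjust the credentials and names; --single-transaction avoids locking an InnoDB table while it dumps):

mysqldump --single-transaction -u my_user -p my_database my_table > my_table.sql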
What you do is, delete phpmyadmin from your system, write to the developers, and tell them to immediately discontinue development and destroy all copies of the source.
You get everyone who has ever installed phpmyadmin to delete their copies too, and then, bing, the world will be a better place...
It is alas, but a dream.
PHPMyAdmin is a wart on the arse of the universe and should be eliminated; it is a kind of fungus which poisons any data it touches with a painful, lingering death.
Moreover, the developers appear keen to insist that it is actually useful; it has an interface which makes things which fail appear to work, thus fooling the naive user into believing that it has actually DONE what it was asked to do.
Its backups give an overwhelmingly false sense of security; they cannot be considered to be "backups" insofar as one might hope to restore them.