My MySQL db has rows inserted which should not be there; in particular, there is data in a column that is not generally used. I thought this would make it easy to find which PHP script was inserting the rows, but I have searched all the INSERT queries for the entire site and cannot find which PHP script is running the insert.
It's also very hard to replicate, as this particular table has many crons updating it.
Can anyone please point me in the right direction on how I might go about debugging this? Is there a stack trace I can use to determine the originating PHP script? Because it's hard to replicate and I've spent two days searching for the code causing the inserts, I'm open to suggestions.
I'm normally quite good at debugging, but this bug is like a ghost.
The only thing I can think of, if you have looked at all your code including all your old cron scripts, is to put an insert trigger on that table and use it to find out what time of day your extra rows get inserted.
Nasty problem!
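If you go the trigger route, a rough sketch might look like this; the table name (orders), the suspect column (legacy_ref), and the audit table are all placeholders for your own names:

CREATE TABLE insert_audit (
    id INT AUTO_INCREMENT PRIMARY KEY,
    inserted_at DATETIME,
    client_user VARCHAR(255),
    suspect_value TEXT
);

DELIMITER //
CREATE TRIGGER orders_insert_audit
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
    -- USER() records the connecting account, which can help narrow down
    -- which cron or app performed the INSERT
    INSERT INTO insert_audit (inserted_at, client_user, suspect_value)
    VALUES (NOW(), USER(), NEW.legacy_ref);
END//
DELIMITER ;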
This might be a horrible idea depending on your error-reporting settings and what type of environment you are debugging in, but if you are in a position to have some errors thrown, I'd remove write permissions for the column in question and wait to see which script throws the error. If you are lucky, the log/report for the error will include a script name and line.
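One way to do this in MySQL is with column-level privileges: revoke the table-wide write privileges and grant them back on every column except the suspect one, so any statement that touches it fails with a privilege error. A rough sketch, where the user, database, table, and column names are all placeholders:

REVOKE INSERT, UPDATE ON shop.orders FROM 'webapp'@'localhost';
-- grant writes back on every column except the suspect one (legacy_ref)
GRANT INSERT (id, title, body), UPDATE (id, title, body) ON shop.orders TO 'webapp'@'localhost';
-- note: an INSERT without an explicit column list will now also fail,
-- since it implicitly writes all columns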
I have a Debian VPS configured with a standard LAMP stack.
On this server there is only one site (a shop), which has a few cron jobs, mostly PHP scripts. One of them is an update script executed by the Lynx browser, which sends tons of queries.
When this script runs (it takes 3-4 minutes to complete), it consumes all MySQL resources and the site almost doesn't work (pages generate in 30-60 seconds instead of 1-2s).
How can I limit this script (i.e. extend its execution time by limiting the resources available to it) so that other services can run properly? I believe there is a simple solution to the problem but I can't find it. Seems my Google superpowers have been limited the last two days.
You don't have access to modify the offending script, so fixing this requires database administrator work, not programming work. Your task is called tuning the MySQL database.
(I guess you already asked your vendor for help with this, and they said no.)
Run top or htop while the script runs. Is the CPU pinned at 100%? Is RAM exhausted?
1) Just live with it, and run the update script at a time of day when your web site doesn't have many visitors. Fairly easy, but not a real solution.
2) As an experiment, add RAM to your VPS instance. It may let MySQL do things all-in-RAM that it's presently putting on the hard drive in temporary tables. If it helps, that may be a way to solve your problem with a small amount of work, and a larger server rental fee.
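Before paying for a bigger VPS, it may be worth checking whether MySQL is actually spilling temporary tables to disk, and how large the InnoDB buffer pool currently is. The value below is only an example; resizing the buffer pool online needs MySQL 5.7.5+, and on older versions you set it in my.cnf and restart:

SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SET GLOBAL innodb_buffer_pool_size = 2147483648;  -- 2 GB, example value only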
3) Add some indexes to speed up the queries in your script, so each query gets done faster. The question is, what indexes will help? (Just adding indexes randomly generally doesn't help much.)
First, figure out which queries are slow. Give the command SHOW FULL PROCESSLIST repeatedly while your script runs. The Info column in that result shows all the running queries. Copy them into a text file to keep them. (Or you can use MySQL's slow query log, about which you can read online.)
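For reference, the slow query log can usually be switched on at runtime without a restart; the path and threshold below are just examples:

SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time = 2;  -- log statements slower than 2 seconds
SET GLOBAL slow_query_log = 'ON';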
Second, analyze the worst offending queries to see whether there's an obvious index to add. Telling you how to do that generally is beyond the scope of a Stack Overflow answer. You might ask another question about a specific query. Before you do, please read this note about asking good SQL questions, and pay attention to the section on query performance.
4) It's possible your script is SELECTing many rows, or using SELECT to summarize many rows, from tables that also need to be updated when users visit your web site. In that case your visitors may be waiting for those SELECTs to finish. If you could change the script, you could put this statement right before the long-running SELECTs.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
This allows the SELECT statement after it to do a "dirty read", in which it might get an earlier version of an updated row. See here.
Or, if you can figure out how to insert one statement into your obscured script, put this one right after it opens a database session.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Without access to the source code, though, you only have one way to see if this is the problem. That is, access the MySQL server from a privileged account, right before your script runs, and give these SQL commands.
SHOW VARIABLES LIKE 'tx_isolation';
SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
then see if the performance problem is improved. Set it back after your script finishes, probably like this (depending on the tx_isolation value retrieved above)
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;
Warning: a permanent global change to the isolation level might foul up your application if it relies on transaction consistency. This is just an experiment.
5) Harass the script's author to fix this problem.
Slow queries? High CPU? High I/O? Then you must look at the queries. You cannot "tune your way out of a performance problem". Tuning might give you a few percent improvement; fixing the indexes and queries is likely to give you a lot more improvement.
See this for finding the 'worst' queries; then come back with SELECTs, EXPLAINs, and SHOW CREATE TABLEs for help.
So I wanted to start a browser game project, just for practice.
After years of PHP programming, today I heard about transactions and InnoDB for the first time ever.
So I googled it and still have some questions.
My first encounter with it was on a website that said InnoDB is necessary when programming a browser game, because it might be used by many people at the same time, and if two people access a database table at the same time (with one nanosecond difference, for example) it could get confusing: data might be lost, or your SELECT might not reflect the update made one nanosecond earlier (because the other script was still running and couldn't commit its change yet) ... and so on.
And apparently, transactions solve this problem by first handling the first access (until it is completed) and then handling the second one. Is this correct?
And another function is, that if you have for example 2 queries in your transaction and the second one fails, it "rolls back" and "deletes"(or never applies) the changes of the first (successful) query. Right? So either everything goes as it should or nothing changes at all. That would be great I think.
Another question: When should I use transactions? Every time I access the database? Or is it better to use them just for some particular accesses to the database? And should I always use try {} catch() {}?
And one last question:
How does such a transaction proceed?
My understanding is the following:
You start a transaction
You do your queries and change the database or SELECT something
If everything went well, you commit the changes so they get applied to the database
If something went wrong with the queries, it cancels and jumps to the catch() {}, where you roll back the transaction and the changes don't get applied
Is this correct? Of course, besides the question of how to start, commit and roll back a transaction in your code.
Yes, this is correct. You can also create savepoints to save your current position before running a query. I strongly recommend that you look at the MySQL reference documentation, where this is explained clearly.
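To make that concrete, here is a minimal PDO sketch of the start/commit/rollback flow. The connection details and the players table are made up, and the tables involved must use InnoDB, since MyISAM ignores transactions:

$pdo = new PDO('mysql:host=localhost;dbname=game', 'user', 'secret');
// make failures throw exceptions so they reach the catch block
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $pdo->beginTransaction();
    // two changes that must succeed or fail together
    $pdo->prepare('UPDATE players SET gold = gold - 100 WHERE id = ?')->execute([1]);
    $pdo->prepare('UPDATE players SET gold = gold + 100 WHERE id = ?')->execute([2]);
    $pdo->commit();      // both changes become visible at once
} catch (Exception $e) {
    $pdo->rollBack();    // neither change is applied
    // log or rethrow $e as appropriate
}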
I'm not sure if this is a duplicate of another question, but I have a small PHP file that calls some SQL INSERT and DELETE statements for an image tagging system. Most of the time both the insertions and deletions work, but on some occasions the insertions don't.
Is there a way to view why the SQL statements failed to execute, similar to when you use SQL functions in Python or Java, where if a statement fails it tells you why (for example: duplicate key insertion, unterminated quote, etc.)?
There are two things I can think of off the top of my head, and one thing that I stole from amitchhajer:
pg_last_error will tell you the last error in your session. This is awesome for obvious reasons, and you're going to want to log the error to a text file on disk in case the issue is something like the DB going down. If you try to store the error in the DB, you might have some HILARIOUS* hi-jinks in the process of figuring out why.
Log every query to this text file, even the successful ones. Find out if the issue affects identical operations (an issue with your DB or connection, again) or certain queries every time (an issue with your app).
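A rough sketch of that kind of logging with PDO, assuming the connection is not set to throw exceptions (the default before PHP 8); the log path and function name are made up:

function logged_query(PDO $pdo, $sql)
{
    // log every query, successful or not, before running it
    file_put_contents('/var/log/myapp/queries.log', date('c') . ' ' . $sql . "\n", FILE_APPEND);
    $stmt = $pdo->query($sql);
    if ($stmt === false) {
        // log failures to disk too, in case the DB itself is the problem
        file_put_contents('/var/log/myapp/queries.log',
            date('c') . ' FAILED: ' . implode(' ', $pdo->errorInfo()) . "\n", FILE_APPEND);
    }
    return $stmt;
}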
If you have access to the guts of your server (or your shared hosting is good,) enable and examine the database's query log. This won't help if there's a network issue between the app and server, though.
But if I had to guess, I would imagine that when the app fails it's getting weird input. Nine times out of ten the input isn't getting escaped properly or - since you're using PHP, which murders variables as a matter of routine during type conversions - it's being set to FALSE or NULL or something and the system is generating a broken query like INSERT INTO wizards (hats, cloaks, spell_count) VALUES ('Wizard Hat', 'Robes', );
*not actually hilarious
Start monitoring your SQL queries by enabling the query log. There you can see which queries are fired and any errors that occur.
This tutorial on starting the logger will help.
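On most setups the general query log can also be enabled at runtime like this (the file path is an example); note that it records every statement, so the file grows quickly:

SET GLOBAL general_log_file = '/var/log/mysql/all-queries.log';
SET GLOBAL general_log = 'ON';  -- remember to turn it off again afterwards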
Depending on which API your PHP file uses (let's hope it's PDO ;) you could check for errors in your current transaction with something like
$naughtyPdoStatement->execute();
if ($naughtyPdoStatement->errorCode() != '00000') {
    DebuggerOfChoice::log(implode(' ', $naughtyPdoStatement->errorInfo()));
}
When using the legacy APIs, there are equivalents like mysql_errno, mysql_error, pg_last_error, etc., which should enable you to do the same. DebuggerOfChoice::log can, of course, be whatever log function you'd like to utilise.
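For example, the mysqli equivalent of the check above would look roughly like this:

$result = mysqli_query($connection, $sql);
if ($result === false) {
    // mysqli_errno/mysqli_error return the code and message of the last failure
    DebuggerOfChoice::log(mysqli_errno($connection) . ': ' . mysqli_error($connection));
}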
I have developed a news website in a local language (UTF-8) which serves an average of 28k users a day. The site has recently started to show many errors and slow down. I got a call from the host saying that the db is using almost 150GB of space. I believe that's way too much for the db and think there is something critically wrong, but I cannot understand what it could be. The site is in Drupal and the db is MySQL (InnoDB). Can anyone give directions as to what I should do?
UPDATE: Seems like the InnoDB dump is using the space. What can be done about it? What's the standard procedure for dealing with this issue?
The question does not have enough info for a specific answer, maybe your code is writing the same data to the DB multiple times, maybe you are logging to the table and the logs have become very big, maybe somebody managed to get access to your site/DB and is misusing it.
You need to log in to your database and check which table is taking the most space. Use SHOW TABLE STATUS (link), which will tell you the size of each table. Then manually check the data in the table to figure out what is wrong.
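You can also get the per-table sizes in one query from information_schema; replace the schema name with your Drupal database's name:

SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.TABLES
WHERE table_schema = 'your_drupal_db'
ORDER BY data_length + index_length DESC;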
I have a mysql trigger that logs every time a specific table is updated.
Is there a way to also log WHICH PHP SCRIPT triggered it? (Without modifying each PHP script, of course; that would defeat my purpose.)
Also, is there a way to log what was the SQL statement right before the UPDATE that triggered it?
Thanks
Nathan
Short answers: no and no. Sorry.
What are you trying to achieve? Perhaps there's another way....
no, but you can get some more specific direction.
first, if you're using persistent connections, turn them off. this will make your logs easier to use.
second, since it sounds like you have multiple code bases accessing the same database, create a different user for each code base with exactly the same rights, and make each code base log in with a different user. now when you look at the log, you can see which application is doing what (there's a sketch of this after this list).
third, if you have the query log on, then the UPDATE immediately preceding the trigger will be the UPDATE that caused the trigger.
fourth, if your apps use any sort of encapsulation for the mysql connection, it should be trivial to modify it to write the call stack at the time a query is sent to the database to a file.
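here's a sketch of the second point; the user names and passwords are placeholders, and the grants should match whatever rights your apps actually need:

CREATE USER 'shop_web'@'localhost' IDENTIFIED BY 'pick-a-password';
CREATE USER 'shop_cron'@'localhost' IDENTIFIED BY 'pick-another-password';
-- same rights, different identities in the query log
GRANT SELECT, INSERT, UPDATE, DELETE ON shop.* TO 'shop_web'@'localhost';
GRANT SELECT, INSERT, UPDATE, DELETE ON shop.* TO 'shop_cron'@'localhost';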
I've read through a few of the answers and the comments. I had one idea that would be useful only if your queries pass through a single point. For example, if you have a database class that all queries are executed through.
If that is the case, you could possibly add a comment to the query itself. The comment would include the function call trace, and would be added to the query as an SQL comment.
Next, you would turn query logging on and be able to see where each query is getting called from in the log file.
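As a rough sketch, assuming a hypothetical PDO-based wrapper function that every query already passes through:

function traced_query(PDO $pdo, $sql)
{
    // summarise the PHP call stack as file:line pairs
    $trace = array();
    foreach (debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS) as $frame) {
        $trace[] = (isset($frame['file']) ? $frame['file'] : '?') . ':'
                 . (isset($frame['line']) ? $frame['line'] : '?');
    }
    // prepend the trace as an SQL comment; the query log records it verbatim
    $comment = '/* ' . str_replace('*/', '', implode(' <- ', $trace)) . ' */ ';
    return $pdo->query($comment . $sql);
}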
If your queries do not pass through a single point, you may be out of luck.
One final suggestion would be to take a look at MySQL Proxy. I have not used it much but it is designed to do intermediate processing of queries. However, I still think you would need to modify your PHP scripts to pass additional information.