I'm using MySQL 5.0.37 on my server. I can get profiling data by running SET PROFILING=1 in a mysql client window, but that only works for queries executed in that session.
I'm able to log queries without the timings by adding a line similar to "log=/path/to/log" in my my.cnf file.
What I want instead is for MySQL to produce a log file, as queries are executed, that shows each query and the amount of time spent on it; the value could be similar to what is displayed in the Duration column when I run SHOW PROFILES in the mysql client.
The queries are executed via mysqli calls in a PHP program that forms the back-end of a website. That's why I need the timings in a log file.
Does anyone know how I can make mysql produce such a log?
Look into
The general log -- be cautious; it fills up disk fast. (Or it can go to a table.)
The slow query log, but with long_query_time=0 in order to catch everything. Again, it can be a disk hog.
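A minimal my.cnf sketch of the slow-log approach (the path is a placeholder). Note that on MySQL 5.0, long_query_time is an integer with a minimum of 1 second; long_query_time=0 and sub-second timings require 5.1.21 or later:

[mysqld]
# MySQL 5.0-era syntax; on 5.1+ you can use slow_query_log=1 and slow_query_log_file instead
log-slow-queries = /path/to/mysql-slow.log
# minimum allowed value on 5.0; set it to 0 on 5.1.21+ to log every query with its duration
long_query_time = 1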
Related
I have an application written in PHP which contains a function to perform a complex MySQL query to gather statistics and export them as CSV. Usually the process takes a good 20-30 seconds to complete due to the complexity of the query, but I can live with this as it's just one query once a week.
The issue I have is that now and again the server just appears to time out, with the word 'Default' output to the browser and nothing else.
I'm sure this isn't being set/printed in the application logic, because I wrote it myself, and after looking at the database class I searched every single file in the application for the word 'Default' with no results.
I'm also pretty sure it can't be output from the MySQL server, because it can't directly print output without going through PHP, can it?
What could be causing this? I'm thinking the only function that could be printing it is my mysql_query() function. Obviously my aim is to optimize the query to stop the timeout, but I'd also like to find out what is outputting that text, as I don't like errors/messages like that being displayed to our users.
I have an upload location so users can update a portion of my database with an uploaded file. The files are often up to 9 GB, so inserting the 150,000,000 lines can take a few minutes.
After clicking the button on the website to update the database, PHP (using mysqli) basically goes into MySQL lockdown: if I open other tabs, they get nothing until the large update is complete.
However, I know it's not actually locking the database/table, because from the CLI I can still run SELECT COUNT(*) FROM table and it gives me a result right away.
What would be the best method of inserting 150,000,000 records while still letting other php pages access the db (for reading only)?
You can use "INSERT DELAYED". The DELAYED option for the INSERT statement is a MySQL extension to standard SQL that is very useful if you have clients that cannot or need not wait for the INSERT to complete. This is a common situation when you use MySQL for logging and you also periodically run SELECT and UPDATE statements that take a long time to complete.
You can read about this feature in the official documentation here.
;-)
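A sketch of what that looks like (table and column names are hypothetical). Note that DELAYED only works with MyISAM, MEMORY, and ARCHIVE tables, and it was deprecated and later removed in newer MySQL versions:

INSERT DELAYED INTO access_log (user_id, page, hit_time)
VALUES (42, '/index.php', NOW());
-- the client gets control back immediately; the server queues the row
-- and writes it when the table is not in use by any other thread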
The issue was with sessions.
Since the upload validated against the login in the session, and I failed to call session_write_close() before starting the DB writes, the session file remained locked for the entire 9 GB read/write to the DB.
This explains why I could still use the MySQL CLI, and basic PHP (the basic PHP I was using to test had no sessions in it).
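A minimal sketch of the fix (credentials and file names are hypothetical): validate against the session first, then release the session lock before the long-running import, so other requests from the same user aren't serialized behind it.

<?php
session_start();
if (!isset($_SESSION['user_id'])) { // validate the upload against the login
    die('Not logged in');
}
// Release the session file lock now; later requests can read the session
// while this script spends minutes writing to the database.
session_write_close();

$db = new mysqli('localhost', 'user', 'pass', 'mydb'); // hypothetical credentials
// ... the long-running import goes here, e.g.:
$db->query("LOAD DATA LOCAL INFILE '/tmp/upload.csv' INTO TABLE big_table");
?>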
I have a MySQL server running that will be queried regularly through a php front end. I'm slightly worried about server load as there will be a fair amount of people accessing the webpage, with each session querying the database regularly. The results of the query, and in essence the webpage will be the same for all users.
Is there a way of querying the database once and outputting the data/results to a page that all users then connect to and view? Basically, running the query once for all users that visit the webpage, rather than each user querying the database.
Any suggestions appreciated.
Thanks
You don't have to worry.
Databases are intended for exactly that.
Most sites in the world run exactly the same way: a MySQL server queried regularly through a PHP front end. There's nothing wrong with it.
A well-tuned SQL server and properly designed queries will serve much more traffic than you think.
You will need exceptionally high traffic to start worrying about such things.
Don't forget that MySQL has its own query cache (see the snippet below for a quick way to check it).
Also, please note that there are no users "connected" to the webpage: they connect, get the page contents, and disconnect.
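A quick way to see whether the query cache is on and whether it is helping (MySQL 5.x; the query cache was removed entirely in MySQL 8.0):

SHOW VARIABLES LIKE 'query_cache%'; -- query_cache_type = ON and query_cache_size > 0 means it's enabled
SHOW STATUS LIKE 'Qcache%';         -- compare Qcache_hits to Com_select to see the hit rate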
You should give the server a try. If the server is overloaded,
you can always try Memcached. It can be used from PHP or by MySQL directly, and it saves you from hitting the DB server with the same queries over and over, i.e. the load on the server drops drastically.
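A minimal sketch of that pattern in PHP (server address, key, credentials, and query are all hypothetical), assuming the pecl memcached extension and a local memcached instance:

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$key  = 'homepage_rows'; // hypothetical cache key
$rows = $mc->get($key);
if ($rows === false) { // cache miss: hit MySQL once
    $db = new mysqli('localhost', 'user', 'pass', 'mydb'); // hypothetical credentials
    $result = $db->query('SELECT id, title FROM articles ORDER BY id DESC LIMIT 10');
    $rows = array();
    while ($row = $result->fetch_assoc()) {
        $rows[] = $row;
    }
    $mc->set($key, $rows, 60); // cache the result set for 60 seconds
}
// every visitor within the 60-second window is served from the cache
?>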
If the webpage will be the same for all users, why do you even need to have a MySQL backend?
I think the best solution would be to have a standalone script running periodically (e.g. as a cron) which generates the static HTML for your web pages. That way, there is no need for users to query the database when they are just going to end up with the exact same page anyway.
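A minimal sketch of that approach (paths, table, and credentials are hypothetical); cron runs the script, and the web server just serves the resulting static file:

<?php
// generate_page.php -- run from cron, e.g.: */5 * * * * php /path/to/generate_page.php
$db = new mysqli('localhost', 'user', 'pass', 'mydb');
$result = $db->query('SELECT title FROM articles ORDER BY created_at DESC LIMIT 10');

$html = "<ul>\n";
while ($row = $result->fetch_assoc()) {
    $html .= '<li>' . htmlspecialchars($row['title']) . "</li>\n";
}
$html .= "</ul>\n";

// Write to a temp file and rename, so visitors never see a half-written page.
file_put_contents('/var/www/html/latest.html.tmp', $html);
rename('/var/www/html/latest.html.tmp', '/var/www/html/latest.html');
?>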
If it's a large query with joins, you could create a view in MySQL for the queried data and query the view, refreshing it when the data changes. (Note that MySQL views are not materialized: querying the view re-runs the underlying query, so for real caching you'd want a summary table that you rebuild when the data changes.)
Is there a MySQL statement that provides full details of any other open connection or user? Or an equally detailed status report on MyISAM tables specifically? Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose.
What I'm trying to do: remote ODBC connection one is inserting several thousand records, which, due to a slow connection speed, can take up to an hour. TCP connection two, using PHP on the server's localhost, is running SELECT queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like it to first check that there are no pending inserts on those specific tables from any other connection, so it can instead wait until all the data is available. If a table is currently being written to, I'd like to report back to the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, per table, I'd like to get back from a query the timestamp when connection one began the write, the total inserts left to be done, and the total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here.
Obviously, since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose, since it is on localhost, I could exec() a mysql command-line invocation if that's the only way to get this info, but I'd rather not. I could also simply update a custom-made transaction log before and after this massive insert task, which the PHP script can check, but hopefully there's already a built-in MySQL feature I can take advantage of.
Edit: "transaction" was the wrong word - a language collision. I'm not actually using MySQL transactions; what I meant was currently pending tasks, queries, and requests.
You can issue SHOW FULL PROCESSLIST; to show the active connections.
As for the rest, MySQL doesn't know how many inserts are left or how long they'll take. (And if you're using MyISAM tables, they don't support transactions.) The server has no way of knowing whether your PHP script intends to send 10 more inserts or 10,000 - and if you're doing something like INSERT INTO xxx SELECT ... FROM ..., MySQL doesn't track or expose how much is done and how much is left.
You're better off handling this yourself via other tables where you update/insert data about when you started loading, track the state, record when it finished, etc.
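A minimal sketch of such a hand-rolled status table (all names are hypothetical): connection one maintains it during the import, and connection two polls it before running its aggregates.

CREATE TABLE import_status (
  table_name  VARCHAR(64) NOT NULL PRIMARY KEY,
  started_at  DATETIME NOT NULL,              -- when connection one began the write
  rows_total  INT UNSIGNED NULL,              -- filled in by the importer if known
  rows_done   INT UNSIGNED NOT NULL DEFAULT 0,
  finished_at DATETIME NULL                   -- NULL while the import is still running
);

-- connection one, every few thousand rows:
UPDATE import_status SET rows_done = rows_done + 5000 WHERE table_name = 'my_table';

-- connection two, before aggregating:
SELECT started_at, rows_total, rows_done, finished_at
FROM import_status
WHERE table_name = 'my_table';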
If the transactions are being performed on InnoDB tables, you can get full transaction details with SHOW ENGINE INNODB STATUS (SHOW INNODB STATUS on older servers). It's a huge blob of output, but part of it is the transaction/lock status for each process/connection.
I would like to know from my application whether a MyISAM table can accept writes (i.e. is not locked). If an exception is thrown, everything is fine, as I can catch it and log the failed statement to a file. However, if a FLUSH TABLES WITH READ LOCK command has been issued (possibly for a backup), the query I send will pretty much hang forever.
If one table is locked at a time, INSERT DELAYED works well. But when this global lock is applied, my query just waits.
The query I run is an insert statement. If this statement fails or hangs, user experience is degraded. I need a way to send the query to the server and forget about it (pretty much).
Does anyone have any suggestions on how to deal with this?
- set a query timeout?
- run an asynchronous request and allow the lock to expire while the application continues?
- fork my PHP process?
Please let me know if I can provide any clarification or details.
SHOW OPEN TABLES LIKE "table_name";
Gives you results something like:
Database  Table     In_use  Name_locked
test      my_table  1       0
The In_use column will tell you if another connection holds (or is waiting on) a lock on that table.
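A sketch of how that check might look from PHP (table name, credentials, and statement are hypothetical). Note this is only best-effort: the lock state can change between the check and your INSERT.

<?php
$db  = new mysqli('localhost', 'user', 'pass', 'mydb'); // hypothetical credentials
$sql = "INSERT INTO my_table (message) VALUES ('hello')"; // hypothetical statement

$result = $db->query('SHOW OPEN TABLES LIKE "my_table"');
$row = $result->fetch_assoc();

if ($row && $row['In_use'] > 0) {
    // a lock is held (or being waited on): log the statement instead of hanging
    file_put_contents('/path/to/failed.log', $sql . "\n", FILE_APPEND);
} else {
    $db->query($sql); // table looks free; send the insert
}
?>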