I am building a report using PHP and MySQL. I have multiple queries running in one go on one page, and as you can imagine this is putting a lot of stress on the server. What I want to do is start the first query and, before launching the second query, check whether the first has finished, and continue like this until the last query. And just to be clear: one query at a time does not put much stress on the server, but several in one go do. If anybody has an idea or an alternative, please let me know.
By default, PHP will not execute the next MySQL query, or any other code at all, before the previous query has finished.
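For illustration, a minimal sketch (the connection details and table names are made up); each mysqli query call below blocks until the server has finished with it:

$db = new mysqli('localhost', 'user', 'pass', 'reports'); // hypothetical credentials

// Blocks here until the first query has fully finished on the server.
$orders = $db->query('SELECT COUNT(*) FROM orders');       // hypothetical table

// Only reached after the first query returned, so the two never overlap.
$customers = $db->query('SELECT COUNT(*) FROM customers'); // hypothetical table

Only if you explicitly passed the MYSQLI_ASYNC flag would the queries run concurrently.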
In PHP, is there equivalent functionality to sqlsrv_has_rows?
I don't want to know how many rows there are, just whether it has any at all.
I don't really want to fetch a row, as that advances the row pointer.
It seems there is no equivalent. You have to fetch the first row. If you then want to start at the first row again, you would have to call oci_execute again, which is not a great idea if the query takes a long time to run.
So some logic to store the first row would be a likely route to go down.
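For example, a rough sketch along those lines, assuming an already-parsed OCI statement $stmt and a hypothetical handle_row() function:

oci_execute($stmt);
$first = oci_fetch_array($stmt, OCI_ASSOC); // advances the row pointer
$hasRows = ($first !== false);

if ($hasRows) {
    handle_row($first); // use the cached first row first...
    while ($row = oci_fetch_array($stmt, OCI_ASSOC)) {
        handle_row($row); // ...then the remaining rows
    }
}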
I have a fairly large amount of data that I'm trying to insert into MySQL. It's a data dump from a provider that is about 47,500 records. Right now I'm simply testing the insert method through a PHP script just to get things dialed in.
What I'm seeing is that, first, the inserts continue to happen long after the PHP script "finishes". By the time the browser no longer shows an "X" to cancel the request and shows "reload" instead (indicating the script is done from the browser's perspective), I can see inserts still occurring for a good 10+ minutes. I assume this is MySQL caching the queries. Is there any way to keep the script "alive" until all queries have completed? I put a 15-minute timeout on my script.
Second, and more disturbing, is that not every insert makes it. Of the 47,500 records I'll get anywhere between 28,000 and 38,000, but never more, and that number is different each time I run the script. Is there anything I can do about that?
Lastly, I have a couple of simple echo statements at the end of my script for debugging; these never fire, leading me to believe a timeout might be happening (although I don't get any errors about timeouts or memory exhaustion). I'm thinking this has something to do with the problem but am not sure.
I tried changing my table to an ARCHIVE table, but not only did that not help, it also meant losing the ability to update records in the table when I want to. I did it only as a test.
Right now the insert is in a simple loop: it iterates over each record in the JSON data I get from the source and runs an INSERT statement, then moves on to the next iteration. Should I instead use the loop to build one massive INSERT and run a single statement at the end? My concern is that I would exceed the max_allowed_packet setting, which is hard-coded by my hosting provider.
So I guess the real question is: what is the best method to insert nearly 50,000 records into MySQL using PHP, given what I've explained here?
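For what it's worth, a minimal sketch of the batched approach described above (the table, columns, and $json variable are all made up), sending 500 rows per INSERT to stay under max_allowed_packet and wrapping the load in one transaction:

$db = new mysqli('localhost', 'user', 'pass', 'import'); // hypothetical credentials
$records = json_decode($json, true);                     // the provider's data dump

$db->begin_transaction();
foreach (array_chunk($records, 500) as $chunk) {
    $values = array();
    foreach ($chunk as $r) {
        $values[] = sprintf("('%s', '%s')",
            $db->real_escape_string($r['name']),         // hypothetical fields
            $db->real_escape_string($r['value']));
    }
    // One INSERT per 500 rows instead of one per row.
    $db->query('INSERT INTO records (name, value) VALUES ' . implode(', ', $values));
}
$db->commit();

Fewer, larger statements cut round-trips dramatically, and the single transaction avoids a commit after every row.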
Let's say I have a web page with some classes. One loads mysqli, connects to the DB at the beginning, and keeps the connection open. Now the question is:
Is it a good solution to prepare a statement (in a settings class, for example) for fetching values from the 'settings' DB table and keep the statement open until the page finishes (closing the statement and connection in the footer)? Or should I just load all the data from the 'settings' table into a PHP array() once and read it from the array instead of fetching it from the DB each time?
The second question is: if I have one statement open, may I open another statement for another class (for example, a class for fetching text from the DB) and use it the same way as in the previous example? And then, of course, close it at the end of the page.
Are there any performance or security problems you can see here?
As far as I know, nobody does it this way, mostly because the real benefit of multiple executions is not as grand as some imagine and just isn't worth the trouble. For short primary-key lookups run in small numbers (a few dozen at most) you'll hardly be able to tell the difference.
(However, there is no argument against such a practice either: you can do it this way, with a single statement prepared and multiple executions, if you wish.)
Yet a single query fetching no more than a couple hundred records would still be faster than separate queries (even prepared ones) retrieving the same amount. So, as long as your settings stay at a moderate size, it's better to get them all at once.
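For example, a minimal sketch of loading them all at once (the schema is assumed):

$settings = array();
$result = $db->query('SELECT name, value FROM settings'); // hypothetical schema
while ($row = $result->fetch_assoc()) {
    $settings[$row['name']] = $row['value'];
}
// Every later read hits the array, not the database.
$timezone = $settings['timezone'];                        // hypothetical key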
Yes, of course you can have as many statements prepared as you need.
(The only problem could be with fetching results: you always have to call get_result/store_result to make sure there are no results left pending, which would prevent other queries, regular or prepared, from running.)
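A quick sketch of that point, with assumed table names, showing store_result buffering each result set so the next statement can run:

$stmt1 = $db->prepare('SELECT value FROM settings WHERE name = ?'); // hypothetical tables
$stmt2 = $db->prepare('SELECT body FROM texts WHERE id = ?');

$name = 'timezone';
$stmt1->bind_param('s', $name);
$stmt1->execute();
$stmt1->store_result(); // buffer; otherwise $stmt2 would fail with "commands out of sync"
$stmt1->bind_result($value);
$stmt1->fetch();

$id = 1;
$stmt2->bind_param('i', $id);
$stmt2->execute();
$stmt2->store_result();
$stmt2->bind_result($body);
$stmt2->fetch();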
The statement executes as one SQL statement over your DB connection. It's not going to keep going back to the database and grabbing a single row one at a time, so don't worry about that.
In general, you should be loading everything into some data structure. If your query is returning more data than you need, then that's something you need to fix in your query. Don't run SQL that returns a huge set of data, then rely on PHP to go through it row by row and perform some hefty operations on it. Just write SQL that gets what you need in the first place. I realize this isn't always possible, but when people talk about optimizing their website, query optimization is usually at/near the top of that list, so it's pretty important.
You're definitely supposed to execute multiple statements over a single connection. It's silly to keep opening and closing entire DB connections before getting any data.
I have a SELECT that counts the number of rows from 3 tables, using the WP function $wpdb->get_var( $sql ); there are about 10,000 rows in the tables. Sometimes this SELECT takes under 1 second to run, sometimes more than 15. If I run the same SQL in phpMyAdmin it always returns the count in less than 1 second. Where could the problem be?
There are a couple of things you can do.
First, do an analysis of your query. Putting EXPLAIN before the query will output information about how MySQL executes it, and you may be able to spot problems from that.
Read more about EXPLAIN in the MySQL documentation.
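For instance (a sketch; $sql stands for whatever you pass to get_var), you can run EXPLAIN through $wpdb itself:

$plan = $wpdb->get_results('EXPLAIN ' . $sql); // same query, prefixed with EXPLAIN
error_log(print_r($plan, true));               // check the type, key, and rows columns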
Also, WordPress may not have indexed the most commonly used columns.
Try indexing some of the columns which you most commonly use within your query and see if it helps.
For example:
ALTER TABLE wp_posts ADD INDEX (post_author,post_status)
Plugin
You could try a plugin such as Debug Queries, which prints the queries on the front end and helps you debug where things are taking a long time. It's recommended to run this only in a dev environment, not on a live website.
I would also recommend hooking up something like New Relic and trying to profile what's happening on the application side. If New Relic is not an option, you might be able to use xhprof (http://pecl.php.net/package/xhprof) and/or IfP (https://code.google.com/p/instrumentation-for-php/). Very few queries perform the same from an application in production as they do when run directly as SQL. You may have contention, read locks, or any number of other things that cause a query sent from PHP to effectively stall on its way to MySQL, in which case you might literally see the query itself run very fast while the time it takes to actually begin executing it from PHP is very slow. Based on what you're describing, you'll definitely need to profile what's happening on the way from WordPress to MySQL and back, and the tools I mentioned should all be very useful for that.
I have a mysql trigger that logs every time a specific table is updated.
Is there a way to also log WHICH PHP SCRIPT triggered it? (Without modifying each PHP script, of course; that would defeat my purpose.)
Also, is there a way to log what the SQL statement was right before the UPDATE that fired the trigger?
Thanks
Nathan
Short answers: no and no. Sorry.
What are you trying to achieve? Perhaps there's another way....
No, but you can get some more specific direction.
First, if you're using persistent connections, turn them off. This will make your logs easier to use.
Second, since it sounds like you have multiple code bases accessing the same database, create a different user for each code base with exactly the same rights, and make each code base log in as its own user. Now when you look at the log, you can see which application is doing what.
Third, if you have the query log on, then the UPDATE immediately preceding the trigger will be the UPDATE that caused the trigger.
Fourth, if your apps use any sort of encapsulation for the MySQL connection, it should be trivial to modify it to write the call stack to a file whenever a query is sent to the database.
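For example, a rough sketch of that last point, assuming a hypothetical Db wrapper class that every query already passes through:

class Db {
    private $mysqli;

    public function __construct(mysqli $mysqli) {
        $this->mysqli = $mysqli;
    }

    public function query($sql) {
        // Write the query and the PHP call stack to a log file
        // before handing the query to MySQL.
        $trace = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS);
        file_put_contents('/tmp/query-trace.log',
            date('c') . ' ' . $sql . "\n" . print_r($trace, true) . "\n",
            FILE_APPEND);
        return $this->mysqli->query($sql);
    }
}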
I've read through a few of the answers and the comments. I had one idea that would be useful only if your queries pass through a single point: for example, if you have a database class that all queries are executed through.
If that is the case, you could add a comment to the query itself. The comment would include the function call trace and would be appended to the query as an SQL comment.
Next, you would turn query logging on and be able to see in the log file where each query was called from.
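A minimal sketch of that idea, again assuming a hypothetical wrapper method that all queries pass through:

public function query($sql) {
    // Identify the immediate caller as file:line.
    $trace  = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS);
    $caller = $trace[0]['file'] . ':' . $trace[0]['line'];
    // The comment travels with the query, so it shows up in the query log.
    return $this->mysqli->query($sql . ' /* called from ' . $caller . ' */');
}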
If your queries do not pass through a single point, you may be out of luck.
One final suggestion would be to take a look at MySQL Proxy. I have not used it much but it is designed to do intermediate processing of queries. However, I still think you would need to modify your PHP scripts to pass additional information.