I have a SugarCRM installation.
My problem is that when I do a search in SugarCRM, the search query blocks all other queries.
Id User Host db Command Time State Info
49498 sugar xx.xx.xx.xx:59568 sugarcrm Query 5 Sorting result SELECT leads.id ,leads_cstm.cedula_c,leads_cstm.numplanilla_c,leads_cstm.profession_c,leads_cstm.b
49502 sugar xx.xx.xx.xx:59593 sugarcrm Sleep 5 NULL
As you can see, query Id 49502 is, I presume, waiting for query 49498 to finish.
The first query is the search query, which takes a very long time to execute.
The second query is for the index page.
The odd thing is that if I open two terminals and connect to MySQL using the same user as my SugarCRM installation, I can execute both queries concurrently. But if I run a search in the browser, then open a new tab and try to access the index page, that second tab hangs until the first tab completes or gets a timeout from the server.
I have tested this using php both as a module and as cgi.
So I guess it must be something with the mysql_query function itself?
Any ideas? It's very hard to optimize the DB (lots of tables, lots of content), but at the very least the site should be usable concurrently...
Probably because you're using file-based sessions. PHP locks the session file while it's in use, which essentially makes all requests from a particular session run serially.
The only ways around this are either to call session_write_close() to release the session lock on a per-script basis, after any session-changing code is complete, or to implement your own session handler with its own locking logic.
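A minimal sketch of that first approach (the session key, connection credentials and query variable here are placeholders, not SugarCRM internals):

// Read what you need from the session, then release the lock so other
// requests from the same browser session are not serialized behind this one.
session_start();
$userId = $_SESSION['user_id'];   // placeholder session key
session_write_close();            // releases the session file lock

// The long-running search no longer blocks the user's other tabs.
$db = mysqli_connect('localhost', 'sugar', 'password', 'sugarcrm'); // placeholder credentials
$result = mysqli_query($db, $longSearchQuery);                      // placeholder query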
Related
I made a website using mysql (InnoDB), php-mysqli and PDO to track progress data.
At the backend there are many simultaneous SELECT-then-UPDATE queries. Each transaction contains two statements: SELECT id FROM ... WHERE started = 0 FOR UPDATE and UPDATE ... SET started = 1 ...; they are separated because of some pre-processing. The backend works quite well.
Now I'm coding a frontend that monitors the database, and I plan to use statements like SELECT COUNT(started) FROM ... WHERE started = 0. I'm unsure which kind of lock MySQL will use in this situation. I don't need every frontend call to succeed; the success rate of the backend is more important.
To clarify: the "backend" here is several PHP files accessed programmatically as a REST API; the "frontend" is a PHP page accessed by users.
Q: Can these monitoring SELECTs result in backend failures? If so, is there a way to prioritize backend access?
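A rough sketch of the backend pattern described above, using PDO (the table name jobs and columns id/started are assumptions based on the description, and $pdo is an existing PDO connection):

$pdo->beginTransaction();
// Lock one unstarted row so concurrent backend calls don't pick the same id.
$id = $pdo->query("SELECT id FROM jobs WHERE started = 0 LIMIT 1 FOR UPDATE")->fetchColumn();
if ($id !== false) {
    // ... pre-processing happens here ...
    $upd = $pdo->prepare("UPDATE jobs SET started = 1 WHERE id = ?");
    $upd->execute(array($id));
}
$pdo->commit();

// The monitoring query is a plain SELECT, which InnoDB serves as a
// non-locking consistent read:
$count = $pdo->query("SELECT COUNT(started) FROM jobs WHERE started = 0")->fetchColumn();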
When the web server receives a request for my PHP script, I presume the server creates a dedicated process to run the script. If another request for the same script arrives before the first one exits, does another process get started, or will the second request be queued by the server, waiting for the first request to finish? (Question 1)
If the former is correct, i.e. the same script can run simultaneously in different processes, then both instances will try to access my database.
When I connect to the database in the script:
$DB = mysqli_connect("localhost", ...);
query it, perform more or less lengthy calculations, and then update it, I don't want the contents of the database to be modified by another instance of the script running at the same time.
Question 2: Does this mean that from the moment I connect to the database until I close it:
mysqli_close($DB);
the database is blocked against any access from other software components? If so, that effectively prevents the script instances from running concurrently.
UPDATE: #OllieJones kindly explained that the database was not blocked.
Let's consider the following scenario. The script in the first process finds an eligible user in the Users table and starts preparing data to append for that user in the Counter table. At this moment the script in the other process preempts it and deletes the user from the Users table along with the associated data in the Counter table; it is then itself preempted by the first script, which writes data for a user that no longer exists. That data ends up orphaned, i.e. inaccessible.
How can I prevent such contention?
In modern web servers, there's a pool of processes (or possibly threads) handling requests from users. Concurrent requests to the same script can run concurrently. Each request-handler has its own connection to the DBMS (they're actually maintained in a pool, but that's a story for another day).
The database is not blocked while individual request-handlers are using it, unless you block it explicitly by locking a table or doing a request like SELECT ... FOR UPDATE. For more information on this deep topic, read about transactions.
Therefore, it's important to write your database queries in such a way that they won't interfere with each other. For example, if you need to learn the value of an auto-incremented column right after you insert a row, you should use LAST_INSERT_ID() or mysqli_insert_id() instead of trying to query the database: another user may have inserted another row in the meantime.
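For example, a hedged sketch of that (connection details and table/column names are placeholders):

$db = mysqli_connect('localhost', 'user', 'password', 'mydb');   // placeholders
mysqli_query($db, "INSERT INTO Users (name) VALUES ('alice')");  // assumed table/columns
$newId = mysqli_insert_id($db);  // id of the row *this* connection inserted,
                                 // equivalent to SELECT LAST_INSERT_ID() on the same connection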
The system test discipline for scaled-up web sites usually involves a rigorous load test in order to shake out all this concurrency.
If you're doing a bunch of work on a particular entity, in your case a User, you use a transaction.
First you do
BEGIN
to start the transaction. Then you do
SELECT whatever FROM User WHERE user_id = <<whatever>> FOR UPDATE
to choose the user and mark that user's row as busy-being-updated. Then you do all the work you need to do to fill out various rows in various tables relating to that user.
Finally you do
COMMIT
If you messed things up, or don't want to go through with the change, you do
ROLLBACK
and all your changes will be restored to their state right before the SELECT ... FOR UPDATE.
Why does this work? Because if another client does the same SELECT ... FOR UPDATE, MySQL will delay that request until the first one issues either COMMIT or ROLLBACK.
If another client works with a different userid, the operations may proceed concurrently.
You need the InnoDB access method to use transactions: MyISAM doesn't support them.
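Putting the steps above together in PHP with mysqli (a sketch; the connection details and table/column names are assumptions, and mysqli_begin_transaction() needs PHP 5.5+, otherwise send BEGIN as a plain query):

$db = mysqli_connect('localhost', 'user', 'password', 'mydb');   // placeholders

mysqli_begin_transaction($db);                                              // BEGIN
$res = mysqli_query($db, "SELECT * FROM Users WHERE user_id = 42 FOR UPDATE"); // lock this user's row

if (mysqli_num_rows($res) === 1) {
    // ... fill out rows in Counter and other tables for user 42 ...
    mysqli_query($db, "INSERT INTO Counter (user_id, value) VALUES (42, 1)");
    mysqli_commit($db);                                                     // COMMIT the work
} else {
    mysqli_rollback($db);                                                   // ROLLBACK: the user is gone
}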
Multiple reads can be done concurrently; if there is a write operation, it will block all other operations, and a read will block all writes.
I have a very large dataset that I am exporting using a batch process to keep the page from timing out. The whole process can take over an hour. I'm using Drupal batch, which basically reloads the page with a status on how far the process has completed. Each page request essentially runs the query again, which includes a sort that takes a while, and then exports the data to a temp file. The next page load runs the full Mongo query, sorts, skips the entries already exported, and exports more to the temp file. The problem is that each page load makes Mongo rerun the entire query and sort. I'd like the next batch page to pick up the same cursor where it left off and continue pulling the next set of results.
The MongoDB Manual entry for cursor.skip() gives some advice:
Consider using range-based pagination for these kinds of tasks. That is, query for a range of objects, using logic within the application to determine the pagination rather than the database itself. This approach features better index utilization, if you do not need to easily jump to a specific page.
E.g. if your nightly batch process runs over the data accumulated in the last 24 hours, perhaps you can run date-range-based queries (maybe one per hour of the day) and process your data that way. I'm assuming your data contains some sort of usable timestamp per document, but you get the idea.
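A sketch of that with the legacy PHP Mongo driver (the database, collection and the created timestamp field are assumptions):

$m = new MongoClient();
$collection = $m->selectDB('mydb')->selectCollection('mycollection');  // placeholders

// Each batch request asks only for one hour's worth of documents, so there is
// no skip() and no re-sort of the whole result set on every page load.
$start = new MongoDate(strtotime('2014-01-01 01:00:00'));
$end   = new MongoDate(strtotime('2014-01-01 02:00:00'));

$cursor = $collection->find(array(
    'created' => array('$gte' => $start, '$lt' => $end),
))->sort(array('created' => 1));

foreach ($cursor as $doc) {
    // append $doc to the temp export file
}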
Although cursors live on the server and only time out after roughly 10 minutes of inactivity, the PHP driver does not support persisting cursors between requests.
At the end of each request the driver will kill all cursors created during that request that have not been exhausted.
This also happens when all references to the MongoCursor object are removed (e.g. $cursor = null).
This is done because it's unfortunately fairly common for applications not to iterate over the entire cursor, and we don't want to leave unused cursors lying around on the server, as that could have performance implications.
For your specific case, the best way to work around this problem is to improve your indexes so loading the cursor is faster.
You may also want to only select some subset of the data so you have a fixed point you can request data between.
Say, for reports, your first request may ask for all data from 1am to 2am.
Then your next request asks for all data from 2am to 3am and so on and on, like Saftschleck explains.
You may also want to look into the aggregation framework, which is designed to do "online reporting": http://docs.mongodb.org/manual/aggregation/
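For example, a per-hour count could be pushed to the server with a pipeline like this (a sketch using the legacy MongoClient driver; the database, collection and the created field are assumptions):

$m = new MongoClient();
$collection = $m->selectDB('mydb')->selectCollection('mycollection');  // placeholders

$result = $collection->aggregate(array(
    array('$match' => array('created' => array(
        '$gte' => new MongoDate(strtotime('2014-01-01 01:00:00')),
        '$lt'  => new MongoDate(strtotime('2014-01-01 02:00:00')),
    ))),
    array('$group' => array(
        '_id'   => array('$hour' => '$created'),   // bucket by hour of the timestamp
        'count' => array('$sum' => 1),
    )),
));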
I have an upload location so users can update a portion of my database with an uploaded file. The files are often up to 9 GB, so inserting the 150,000,000 lines can take a few minutes.
After clicking the button on the website to update the database, PHP (using mysqli) basically goes into MySQL lockdown. If I open other tabs, they get nothing until the large update is complete.
However, I know it's not actually locking the database/table, because from the CLI I can still run "SELECT count(*) FROM table" and it gives me a result right away.
What would be the best method of inserting 150,000,000 records while still letting other php pages access the db (for reading only)?
You can use "INSERT DELAYED". The DELAYED option for the INSERT statement is a MySQL extension to standard SQL that is very useful if you have clients that cannot or need not wait for the INSERT to complete. This is a common situation when you use MySQL for logging and you also periodically run SELECT and UPDATE statements that take a long time to complete.
You can read about this feature in the official documentation here.
;-)
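A sketch of what that looks like from PHP (the connection details and table/column names are assumptions; note that DELAYED only works with MyISAM, MEMORY and ARCHIVE tables, is deprecated as of MySQL 5.6 and is treated as a plain INSERT from 5.7 on):

$db = mysqli_connect('localhost', 'user', 'password', 'mydb');   // placeholders
// The server queues the row and returns immediately instead of waiting for the write.
mysqli_query($db, "INSERT DELAYED INTO import_log (line, imported_at) VALUES ('...', NOW())");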
The issue was with sessions.
Since the upload validated against the login in the session, and I failed to call session_write_close() before starting the DB writes, the session file remained locked for the entire 9 GB read/write to the DB.
This explains why I could still use MySQL from the CLI, and basic PHP (the basic PHP I was using to test had no sessions in it).
I am creating a custom query builder. When the user has created his query, he can verify the query syntax by clicking a button. When the user clicks the button to verify, an AJAX call is sent to the server with the query and execution of the query starts; during this time the user sees a modal on his screen with a cancel button. If the user clicks the cancel button, I want to send another AJAX call to the server to kill the execution of the query.
Currently I am only able to kill the AJAX call that I originally sent, and my page works fine at the user end,
but I am looking for PHP code to stop the MySQL query on the server side, because some queries can be quite heavy and run for a long time.
First of all, if the purpose is just to check the syntax, do not execute the query! Run EXPLAIN instead, add LIMIT 0, or execute it against an empty database.
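For the EXPLAIN variant, a sketch (the connection details and $userQuery are assumptions):

$db = mysqli_connect('localhost', 'user', 'password', 'mydb');   // placeholders
if (mysqli_query($db, 'EXPLAIN ' . $userQuery)) {
    echo 'Query looks valid';
} else {
    echo 'Query error: ' . mysqli_error($db);
}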
As for the killing, you have to connect to the database with root privileges and issue a KILL command (but you need to know the query id). Or you can kill the entire thread. Take a look at mysqli::kill.
Edit: it seems you don't need root privileges; to see queries by your own user, use the SHOW PROCESSLIST command.
I believe the thread id reported by CONNECTION_ID() is the same as the thread id used by mysqladmin...
You'd need to capture the process id for the connection before starting the query...
$qry="SELECT CONNECTION_ID() AS c";
$row=mysql_fetch_assoc(mysql_query($qry));
$_SESSION['mysql_connection_id']=$row['c'];
then when the user clicks the button do something like....
// Double quotes so $user/$password are interpolated; mysqladmin's command is "kill"
exec("mysqladmin -u $user -p$password kill "
     . $_SESSION['mysql_connection_id']);
But this opens up the possibility of users killing other peoples' queries. You could inject a comment key, e.g. the php session id, at the beginning of each query (mysqladmin truncates text of the query) then use mysqladmin processlist to check the ownership and/or retrieve the thread id before killing it.
No, it's not possible, as you're closing your connection every time execution ends. There's no way (through PHP) to control old "sessions" (connections).
You would have to allow the application to kill database queries, and you need to implement a more complex interaction between Client and Server, which could lead to security holes if done wrong.
The start request should contain a session id and a page id (a secure id, so not 3, 4, 5, but a non-guessable yet unique hash of some kind). The backend then associates this id with the query. This could be done in an extra table in the database, or in Redis if you use Redis anyway, but also via comments in the SQL query, like "Session fid98a08u4j, Page 940jfmkvlz" => s:<session>p:<page>.
/* s:fid98a08u4jp:940jfmkvlz */ select * from ...
If the user presses "Cancel", you send a cancel request with the session and page id to the server. The PHP code then fetches the list of your running SQL queries with SHOW PROCESSLIST and searches for the session and page to extract the query id.
Then the php sends a
kill query <id>
to the MySQL-server.
This might lead to trouble when you're not using transactions, and it might damage replication. And even a killed query might spend some time in the 'killing' state.
So this should be the last resort among several options. But sometimes it has to be done; I once even had a program where you could list your own running queries and kill them, which was needed because of "quotas" (you could not run more than two or three reporting requests at the same time).
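A rough PHP sketch of this tag-and-kill idea (the two parts run in separate requests; $db, $pageId and $userQuery are assumptions):

// 1) The query runner embeds the tag as a comment before executing:
$tag = 's:' . session_id() . 'p:' . $pageId;          // non-guessable per-page hash
mysqli_query($db, '/* ' . $tag . ' */ ' . $userQuery);

// 2) The cancel handler looks the tag up in the process list and kills that query:
$list = mysqli_query($db, 'SHOW FULL PROCESSLIST');
while ($row = mysqli_fetch_assoc($list)) {
    if ($row['Info'] !== null && strpos($row['Info'], $tag) !== false) {
        mysqli_query($db, 'KILL QUERY ' . (int) $row['Id']);   // stops the query, keeps the connection
        break;
    }
}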