Killing a MySQL query during execution with PHP and AJAX - php

I am creating a custom query builder. When the user has finished building a query, he can verify its syntax by clicking a button. When the user clicks that button, an AJAX call is sent to the server with the query and execution of the query starts; during this time the user sees a modal on his screen with a cancel button. If the user clicks cancel, I want to send another AJAX call to the server to kill the execution of the query.
Currently I am only able to abort the AJAX call I originally sent, and my page works fine on the user's end,
but I am looking for PHP code to stop the MySQL query on the server side, because some queries can be quite heavy and run for a long time.

First of all, if the purpose of the query is just to check the syntax, do not execute it! Run EXPLAIN instead, add LIMIT 0, or execute it against an empty database.
As for the killing: you have to connect to the database with root privileges and issue a KILL command (but you need to know the query's id). Or you can kill the entire thread. Take a look at mysqli::kill.
Edit: it seems you don't need root privileges; to see the queries run by your own user, use the SHOW PROCESSLIST command.
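For the syntax-check part, one hedged option is to let the server parse the statement without executing it; queryParses() below is an invented helper for illustration, not an existing API, and $db is assumed to be a connected mysqli instance:

```php
<?php
// Sketch: validate syntax by letting MySQL parse the statement without
// executing it. prepare() fails on syntax errors but touches no rows.
function queryParses(mysqli $db, string $sql): bool
{
    // In PHP >= 8.1 mysqli throws on errors by default; silence that so
    // a syntax error simply yields false here.
    mysqli_report(MYSQLI_REPORT_OFF);
    return $db->prepare($sql) !== false;
}
```

Note this only checks that the statement parses and references existing tables/columns; it does not tell you whether the query is cheap to run.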

I believe the thread id reported by CONNECTION_ID() is the same as the thread id used by mysqladmin.
You'd need to capture the connection id before starting the query:
$qry="SELECT CONNECTION_ID() AS c";
$row=mysql_fetch_assoc(mysql_query($qry));
$_SESSION['mysql_connection_id']=$row['c'];
then when the user clicks the button do something like....
exec("mysqladmin -u $user -p$password kill "
    . $_SESSION['mysql_connection_id']);
But this opens up the possibility of users killing other people's queries. You could inject a comment key, e.g. the PHP session id, at the beginning of each query (mysqladmin truncates the text of the query), then use mysqladmin processlist to check ownership and/or retrieve the thread id before killing it.
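Since the mysql_* functions were removed in PHP 7, a rough mysqli sketch of the same idea might look like this; rememberConnectionId() and killSavedQuery() are invented names, and error handling is omitted:

```php
<?php
// Capture the connection id *before* running the long query and stash
// it in the session so a later "cancel" request can find it.
function rememberConnectionId(mysqli $db): int
{
    $id = (int) $db->query('SELECT CONNECTION_ID()')->fetch_row()[0];
    $_SESSION['mysql_connection_id'] = $id;
    return $id;
}

// Build the KILL statement; the thread id is an int, so no user input
// can reach the SQL string.
function buildKillStatement(int $threadId): string
{
    return 'KILL QUERY ' . $threadId;
}

// The cancel request must open a *fresh* connection (the original one
// is busy running the query). KILL QUERY aborts the statement without
// closing the connection; plain KILL would drop the connection too.
function killSavedQuery(mysqli $freshDb): void
{
    if (isset($_SESSION['mysql_connection_id'])) {
        $freshDb->query(buildKillStatement((int) $_SESSION['mysql_connection_id']));
    }
}
```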

No, it's not possible, as you're closing your connection every time execution ends. There's no way (through PHP) to control old "sessions" (connections).

You would have to allow the application to kill database queries, and you need to implement a more complex interaction between client and server, which could lead to security holes if done wrong.
The start request should contain a session and a page id (a secure id, so not 3, 4, 5, but a non-guessable, unique hash of some kind). The backend then associates this id with the query. This could be done in an extra table in the database, or in Redis if you have it anyway, but also via comments in the SQL query, e.g. "Session fid98a08u4j, Page 940jfmkvlz" => s:<session>p:<page>:
/* s:fid98a08u4jp:940jfmkvlz */ select * from ...
If the user presses "Cancel", you send a Cancel-request with session and page id to the server. The php-code then fetches the list of your running SQL Queries with show processlist and searches for session and page to extract the query id.
Then the php sends a
kill query <id>
to the MySQL-server.
This might lead to trouble when not using transactions, and this might damage replication. And even a kill query might take some time in the state 'killing'.
So this should be the last resort among several options. But sometimes it has to be done: I once even had a program where users could list their own running queries and kill them, which was needed because of "quotas" (you could not run more than two or three reporting requests at the same time).
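The tagging and lookup steps above can be sketched as plain helpers; tagQuery() and findTaggedQueryId() are invented names, and the rows are assumed to come from SHOW PROCESSLIST fetched as associative arrays:

```php
<?php
// Prepend the session/page marker as a comment so it shows up in the
// Info column of SHOW PROCESSLIST.
function tagQuery(string $sql, string $session, string $page): string
{
    return sprintf('/* s:%sp:%s */ %s', $session, $page, $sql);
}

// Given rows from SHOW PROCESSLIST (each with 'Id' and 'Info' keys),
// find the thread id of the query carrying our marker.
function findTaggedQueryId(array $rows, string $session, string $page): ?int
{
    $marker = sprintf('/* s:%sp:%s */', $session, $page);
    foreach ($rows as $row) {
        if (isset($row['Info']) && strpos((string) $row['Info'], $marker) === 0) {
            return (int) $row['Id'];
        }
    }
    return null;
}

// Usage sketch (assumes $db is a connected mysqli instance):
//   $rows = $db->query('SHOW PROCESSLIST')->fetch_all(MYSQLI_ASSOC);
//   $id   = findTaggedQueryId($rows, $sessionId, $pageId);
//   if ($id !== null) {
//       $db->query('KILL QUERY ' . $id);
//   }
```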

Related

when using mysqli, does it make new connections every single time?

Let's say I'm using mysqli and pure php. I'm going to write pseudocode.
in config.php
$msql = new mysqli("localhost", "username", "password", "dbname");
in app.php
include "config.php";
$rows = $msql->query("select * from posts");
foreach ($rows as $r) {
    var_dump($r);
}
Question: every time a user makes a request to mywebsite.com/app.php, does a new mysqli instance get created and the old one destroyed? Or is there only one mysqli instance overall (one single connection to the database)?
Yes, each time your script runs it will make a new connection to the database and a new request to retrieve data.
Even if you don't close the connection at the end of your script, the mysqli connection is destroyed when the script ends, so you can't "trick" it into staying open or working in a cookie-like way.
That is exactly what a script is supposed to do: connect to the db, do its job, leave the db (the connection dies).
On the other hand, if the same script runs two, three or more queries, it's a different story: as mentioned above, the mysqli connection dies at the end of the script, meaning you make one connection, run all of the script's queries, and exit afterwards.
Edit, to answer a comment:
Let's assume I come to your page and a friend of mine arrives at the same time. I connect to your database and request some data, and so does my friend. The procedure:
We each trigger a backend script to run. In each script an instance of mysqli is created, so at this moment we have two instances running, one per user.
That makes total sense; let me elaborate:
Think of a page where you book your holidays. If I want to see ticket prices for France and you want to see ticket prices for England, then the PHP script adds a WHERE clause for each of us, like:
->where('destination', 'France');
After that, the data is sent to the frontend and I can see what I requested. Meanwhile my instance is dead: I queried the database, got my result, and there is nothing more to be done.
The same happens with every user who joins: he or she creates an instance, gets the requested data, and lets the instance die.
Latest edit:
After reading your latest comment I see what your actual issue is. As mentioned above, a mysqli instance created by one user cannot be shared with any other user; it is built to be unique.
If you have that much traffic, what you can do is cache your data. You can use Redis, a database built specifically for that purpose: it is designed to be queried heavily, and you can configure expiry so cached data is deleted after some time if you want.
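A minimal cache-aside sketch of that caching idea, assuming the phpredis extension, a reachable Redis server, and JSON-serializable result rows; both function names are invented:

```php
<?php
// One cache entry per distinct SQL string.
function cacheKey(string $query): string
{
    return 'sqlcache:' . sha1($query);
}

// Check Redis first; on a miss, run the query and store the rows with
// a time-to-live so stale data eventually expires.
function getCachedRows(Redis $redis, mysqli $db, string $sql, int $ttl = 60): array
{
    $key = cacheKey($sql);
    $hit = $redis->get($key);
    if ($hit !== false) {
        return json_decode($hit, true); // serve the cached copy
    }
    $rows = $db->query($sql)->fetch_all(MYSQLI_ASSOC);
    $redis->setex($key, $ttl, json_encode($rows)); // expires after $ttl seconds
    return $rows;
}
```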

laravel cancel database query

I have a site in laravel where a single page makes multiple database queries (select statements which can take a little while to run).
Is there a way to detect if a user:
Refreshes the page
Changes the parameters of the query
Closes the page
hence meaning that the result of the query is not needed, and therefore cancelling it or killing the process?
Note the database is mysql so I can from mysql workbench easily call kill {PID} to end a similar query manually.
Thanks!
Yes, this can all be detected easily with some JavaScript; there is an 'onunload' event that fires when the user tries to leave or close the page, which answers 1 and 3. Changing a field in a form is also easy to detect.
The hard problem will be matching the cancel request with the original query: somehow you have to know the PID of that query, and since the MySQL functions generally don't return until they are done, you can't ask for information like the PID once the query is running.
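One way around that is to capture the PID up front, before the slow queries start. A rough Laravel-flavoured sketch, assuming you key the PID by a per-page token in the cache and kill it over a second database connection (the 'mysql-admin' connection name and both function names are invented):

```php
<?php
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

// Capture this request's MySQL thread id before the slow queries run.
function rememberQueryPid(string $pageToken): void
{
    $pid = DB::selectOne('SELECT CONNECTION_ID() AS id')->id;
    Cache::put('query-pid:' . $pageToken, $pid, 300); // keep for 5 minutes
}

// Called by the cancel endpoint. KILL QUERY must go over a different
// connection than the one executing the query being killed.
function cancelQuery(string $pageToken): void
{
    $pid = Cache::get('query-pid:' . $pageToken);
    if ($pid !== null) {
        DB::connection('mysql-admin')->statement('KILL QUERY ' . (int) $pid);
    }
}
```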
You can detect if a user leaves or reloads a page with Javascript, catching the ONUNLOAD event (documentation here, SO related question here).
To check whether the query parameters change, you should specify whether the user enters them manually in a form or you obtain them some other way; each method calls for a different answer.

Concurrent database access

When the web server receives a request for my PHP script, I presume the server creates a dedicated process to run the script. If, before the script exits, another request to the same script comes, another process gets started -- am I correct, or the second request will be queued in the server, waiting for the first request to exit? (Question 1)
If the former is correct, i.e. the same script can run simultaneously in a different process, then they will try to access my database.
When I connect to the database in the script:
$DB = mysqli_connect("localhost", ...);
query it, conduct more or less lengthy calculations and update it, I don't want the contents of the database to be modified by another instance of a running script.
Question 2: Does it mean that since connecting to the database until closing it:
mysqli_close($DB);
the database is blocked for any access from other software components? If so, it effectively prevents the script instances from running concurrently.
UPDATE: #OllieJones kindly explained that the database was not blocked.
Let's consider the following scenario. The script in the first process finds an eligible user in the Users table and starts preparing data to append for that user in the Counter table. At this moment the script in the other process preempts it and deletes the user from the Users table along with the associated data in the Counter table; it then gets preempted by the first script, which writes data for a user that no longer exists. That data is now orphaned, i.e. inaccessible.
How to prevent such a contention?
In modern web servers, there's a pool of processes (or possibly threads) handling requests from users. Concurrent requests to the same script can run concurrently. Each request-handler has its own connection to the DBMS (they're actually maintained in a pool, but that's a story for another day).
The database is not blocked while individual request-handlers are using it, unless you block it explicitly by locking a table or doing a request like SELECT ... FOR UPDATE. For more information on this deep topic, read about transactions.
Therefore, it's important to write your database queries in such a way that they won't interfere with each other. For example, if you need to learn the value of an auto-incremented column right after you insert a row, you should use LAST_INSERT_ID() or mysqli_insert_id() instead of querying the database for it: another user may have inserted another row in the meantime.
The system test discipline for scaled-up web sites usually involves a rigorous load test in order to shake out all this concurrency.
If you're doing a bunch of work on a particular entity, in your case a User, you use a transaction.
First you do
BEGIN
to start the transaction. Then you do
SELECT whatever FROM User WHERE user_id = <<whatever>> FOR UPDATE
to choose the user and mark that user's row as busy-being-updated. Then you do all the work you need to do to fill out various rows in various tables relating to that user.
Finally you do
COMMIT
If you messed things up, or don't want to go through with the change, you do
ROLLBACK
and all your changes will be restored to their state right before the SELECT ... FOR UPDATE.
Why does this work? Because if another client does the same SELECT ... FOR UPDATE, MySQL will delay that request until the first one either commits or rolls back.
If another client works with a different userid, the operations may proceed concurrently.
You need the InnoDB access method to use transactions: MyISAM doesn't support them.
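In PHP with mysqli, that BEGIN / SELECT ... FOR UPDATE / COMMIT sequence might be sketched like this; updateUserCounters() and the credits column are invented for illustration, and the tables are assumed to be InnoDB:

```php
<?php
function updateUserCounters(mysqli $db, int $userId): void
{
    $db->begin_transaction(); // BEGIN
    try {
        // Lock this user's row; a concurrent transaction doing the same
        // SELECT ... FOR UPDATE on this user_id waits here until we finish.
        $stmt = $db->prepare('SELECT credits FROM User WHERE user_id = ? FOR UPDATE');
        $stmt->bind_param('i', $userId);
        $stmt->execute();
        $credits = $stmt->get_result()->fetch_row()[0];

        // ... fill out the related rows in other tables here ...

        $db->commit(); // releases the row lock
    } catch (Throwable $e) {
        $db->rollback(); // restores everything since begin_transaction()
        throw $e;
    }
}
```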
With MyISAM's table-level locking, multiple reads can run concurrently, but a write blocks all other operations and a read blocks all writes. InnoDB's row-level locking avoids most of this blocking.

does mysql_query in php block concurrent queries?

I have a SugarCRM installation.
My problem is when I do a search in sugarcrm the search query block all other queries.
Id User Host db Command Time State Info
49498 sugar xx.xx.xx.xx:59568 sugarcrm Query 5 Sorting result SELECT leads.id ,leads_cstm.cedula_c,leads_cstm.numplanilla_c,leads_cstm.profession_c,leads_cstm.b
49502 sugar xx.xx.xx.xx:59593 sugarcrm Sleep 5 NULL
As you can see the query Id 49502 is, I presume, waiting for query 49498 to finalize.
first query is a search query which last a looong time to execute
second query is a query for the index page
The odd thing is that if I open two terminals and connect to MySQL using the same user as my SugarCRM installation, I can execute both queries concurrently, but if I make a search in the browser and open a new tab to access the index page, that second tab hangs until the first tab completes execution or gets a timeout from the server.
I have tested this using php both as a module and as cgi.
So I guess it should be something with the mysql_query function itself?
Any ideas? It's very hard to optimize the db (lots of tables, lots of content) but at least the site should be able to be used concurrently...
Probably because you're using file-based sessions. PHP locks session files while they're in use, which essentially makes all requests from a particular session run serially.
The only ways around this are to either call session_write_close() to release the session lock on a per-script basis, after any session-changing code is complete, or to implement your own session handler with its own locking logic.
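A minimal sketch of the session_write_close() fix:

```php
<?php
// Release the session file lock before slow work so other requests
// from the same browser session are not serialized behind it.
session_start();

// Do all session reads/writes *before* the slow query.
$_SESSION['last_search'] = 'reports';

// Commit and unlock the session file. $_SESSION stays readable below,
// but later writes to it will no longer be persisted.
session_write_close();

// ... run the long search query here; a second request from the same
// user (e.g. the index page in another tab) can now proceed in parallel.
```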

Using MySQL last insert id for the same user

I'm using the mysql_insert_id within my code to get an auto increment.
I have read around and it looks like there is no race condition regarding this for different user connections, but what about the same user? Will I be likely to run into race condition problems when connecting to the database using the same username/user but still from different connection sessions?
My application is PHP. When a user submits a web request my PHP executes code and for that particular request/connection session I keep a persistent SQL connection open in to MySQL for the length of that request. Will this cause me any race condition problems?
None. LAST_INSERT_ID() is maintained by the server on a per-connection basis, so inserts made over other connections, even by the same MySQL user, cannot affect the value your own connection sees. Just call it immediately after the INSERT, on the same connection.
According to PHP Manual
Note:
Because mysql_insert_id() acts on the last performed query, be sure to
call mysql_insert_id() immediately after the query that generates the
value.
Just in case you want to double-check, you can use this function to confirm your previous query:
mysql_info
The use of persistent connections doesn't mean that every request will use the same connection. It means that each apache thread will have its own connection that is shared between all requests executing on that thread.
The requests will run serially (one after another) which means that the same persistent connection will not be used by two threads running at the same time.
Because of this, your last_insert_id value will be safe, but be sure to check the result of your INSERT before using it, because it returns the id from the last successful INSERT, even if that wasn't the most recently executed one.
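With mysqli, checking the INSERT before trusting insert_id might be sketched like this; insertPost() and the posts table are invented for illustration:

```php
<?php
function insertPost(mysqli $db, string $title): ?int
{
    $stmt = $db->prepare('INSERT INTO posts (title) VALUES (?)');
    $stmt->bind_param('s', $title);
    if (!$stmt->execute()) {
        return null; // don't read insert_id after a failed INSERT
    }
    // insert_id is tracked per connection, so inserts made over other
    // connections cannot change what we see here.
    return (int) $db->insert_id;
}
```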
