I have a site in laravel where a single page makes multiple database queries (select statements which can take a little while to run).
Is there a way to detect if a user:
1. Refreshes the page
2. Changes the parameters of the query
3. Closes the page
so that, the result of the query no longer being needed, the query can be cancelled or its process killed?
Note that the database is MySQL, so from MySQL Workbench I can easily call KILL {PID} to end a similar query manually.
Thanks!
Yes, this can all be detected easily with some JavaScript; there is an onunload event that is fired when the user tries to leave or close the page, so that answers 1 and 3. Changing a field in a form is also easy to detect.
The hard problem will be matching the cancel request with the original query: somehow you have to know the process ID of that query, and since PHP's MySQL calls generally don't return until they are done, you can't ask for information like the PID from the same script while the query runs.
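A rough sketch of how the kill could be wired up, assuming plain mysqli for brevity (the file names, credentials and session key here are made up, not something Laravel gives you out of the box):

// run_search.php (sketch): remember this connection's thread id, then run the slow query
session_start();
$mysqli = new mysqli('localhost', 'app_user', 'secret', 'app_db');
$_SESSION['search_thread_id'] = $mysqli->thread_id;   // id usable with KILL
session_write_close();                                // don't block the cancel request on the session lock
$slowSql = "SELECT * FROM results WHERE category = 'books'";  // stands in for the real slow SELECT
$result  = $mysqli->query($slowSql);

// cancel.php (sketch): called from JavaScript on unload / parameter change
session_start();
if (!empty($_SESSION['search_thread_id'])) {
    $killer = new mysqli('localhost', 'app_user', 'secret', 'app_db');   // separate connection
    $killer->query('KILL QUERY ' . (int) $_SESSION['search_thread_id']); // stops the statement only
    unset($_SESSION['search_thread_id']);
}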
You can detect whether a user leaves or reloads a page with JavaScript by catching the onunload event.
To check whether query parameters change, you should specify how the user enters them (manually in a form, or some other way); there's a different answer for each method.
I've developed a web application using Apache, MySQL and PHP.
This web app allows multiple users to log in to the application.
Then, through the application, they have access to the Database.
Since race conditions may apply when two or more users try to SELECT/UPDATE/DELETE the same information (in my case a row of a Table), I am searching for the best way to avoid such race conditions.
I've tried using mysqli with autocommit set to OFF and SELECT ... FOR UPDATE, but this fails to work because, to my understanding, with PHP each transaction commits automatically and each connection to the DB is released when the PHP-generated HTML page is delivered to the user.
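Roughly the kind of attempt I mean (a minimal sketch; the table and column names are made up):

// Hold a row lock inside one transaction, then update and commit
$mysqli = new mysqli('localhost', 'app_user', 'secret', 'app_db');
$mysqli->autocommit(false);                 // open an implicit transaction
$itemId = 42;
$stmt = $mysqli->prepare('SELECT qty FROM items WHERE id = ? FOR UPDATE');
$stmt->bind_param('i', $itemId);
$stmt->execute();
$stmt->bind_result($qty);
$stmt->fetch();
$stmt->close();

$newQty = $qty - 1;
$upd = $mysqli->prepare('UPDATE items SET qty = ? WHERE id = ?');
$upd->bind_param('ii', $newQty, $itemId);
$upd->execute();
$mysqli->commit();   // the row lock only lives until here, i.e. within this one request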
After reading some posts, there seem to be two possible solutions for my problem:
Use PDO. To my understanding, PDO can create connections to the DB which are not released when the HTML page loads. Special precautions should be taken though, as locks may remain if e.g. the user quits the page and the PDO connection has not been released...
Add a "locked" column in the corresponding table to flag locked rows. So e.g. when an UPDATE transaction may only be performed if the corresponding user has locked the row for editing. The other users shall not be allowed to modify.
The main issue I may have with PDO is that I have to modify the PHP code in order to replace mysqli with PDO, where applicable.
The issue with scenario 2 is that I also need to modify my DB schema and add code for locking/unlocking, and I have to consider the possibility of "hanging" locked rows, which may mean extra columns in the table (e.g. to store the time the row was locked and the lockedBy information) as well as extra code (e.g. JavaScript on the user's side that keeps updating the locked time so that the user continuously flags the row while using it...).
Your comments based on your experience would be highly appreciated!!!
Thank you.
This might be an opinion rather than a technical answer, but it's too long to write as a comment.
I like to think of it like booking a seat for a movie or a flight: when a user selects a seat and presses Next, the seat is reserved for that user for a certain amount of time, and if the user doesn't finish in the given time, they get a timeout exception without anything being processed further. You can put an edit button beside each row; when a user clicks it, the server side checks whether the row is reserved by someone else and, if not, reserves it for that user. Other users who click the edit button after that won't get an edit form. I don't know how database systems handle this internally, though.
But one way to make sure is to re-read the row after the user edits and commits it, and display the result back to the user. If any lock mechanism prevented the row from being updated, the user will also know it, by not seeing their change in the row.
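A minimal sketch of that reservation idea, using a single atomic UPDATE as the check-and-claim step (the locked_by / locked_at columns and the 5-minute timeout are assumptions, not something the question's schema already has):

$pdo = new PDO('mysql:host=localhost;dbname=app_db', 'app_user', 'secret');
$userId = 'alice';   // whoever is logged in
$rowId  = 42;        // the row being edited

// Claim the row only if it is free, already ours, or the old reservation expired
$claim = $pdo->prepare(
    'UPDATE items
        SET locked_by = :user, locked_at = NOW()
      WHERE id = :id
        AND (locked_by IS NULL
             OR locked_by = :same_user
             OR locked_at < NOW() - INTERVAL 5 MINUTE)'
);
$claim->execute([':user' => $userId, ':same_user' => $userId, ':id' => $rowId]);

if ($claim->rowCount() === 1) {
    // reservation obtained: show the edit form
} else {
    // someone else holds the reservation: show a "row is being edited" message
}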
I have discovered that some old (2009) code that was written for a website did, under certain circumstances, save the SQL of a search query in a $_GET variable!
When the search is carried out, the details are POSTed and then sanitized, and the results are paginated with MySQL's LIMIT clause. If there is more than one page (i.e. more than 30 results), the page links are anchors in the HTML with a GET var containing the SQL statement.
I know, this is absolutely not the way to do this. It's old code that I've only just seen by chance, and it needs to be fixed.
So I've fixed it, sanitized it and used an alternative method to reload the SQL, BUT:
My question is thus:
The page outputs data relating to thumbnail images; everything is output as named array variables (the original clause is a SELECT *), so if someone abuses the GET variable, the page itself will only display the named columns.
I have managed to DELETE rows from the DB using this GET abuse. I would like to think the abuse is only effective when the result doesn't involve any returned output (such as DELETE), but I don't know; so given that the user can put anything into the GET clause but will only see the displayed output of what's coded (i.e. named columns in a 30-row array), what other abuses is this gaping hole open to?
Further details: The code is MySQLi
A tool like SQLMAP can probably take over the entire server and do with it whatever the user wants.
Having an unsanitized database input isn't even hacking anymore, it's waiting for someone to run a script on your machine and basically own it from that point on.
What the attacker can do depends on your database configuration and database user access. If you create a new user with permission to only SELECT that one specific table, and use that user for that particular script, the harm it can do is limited to reading data from that table.
Still, this is bad practice. Never rely on unsanitized input.
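If a stopgap is needed while the query handling is rewritten, the least-privilege idea above would look roughly like this (the account, password and table names are made up):

// One-time setup, run as an admin:
//   CREATE USER 'gallery_ro'@'localhost' IDENTIFIED BY 'long-random-password';
//   GRANT SELECT ON app_db.thumbnails TO 'gallery_ro'@'localhost';

// The legacy page then connects with the read-only account, so an injected
// DELETE/UPDATE/DROP is refused by MySQL (reading that one table is still possible)
$sqlFromGet = $_GET['sql'];   // the legacy behaviour described in the question
$mysqli = new mysqli('localhost', 'gallery_ro', 'long-random-password', 'app_db');
$result = $mysqli->query($sqlFromGet);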
I have a little confusion about the PHP PDO function lastInsertId. If I understand correctly, it returns the last auto-increment id that was inserted into the database.
I usually use this function when I execute a query that inserts a user into my database, as part of the user-registration functionality.
My question is: say I have a hundred people registering on my site at one point, and maybe one user hits the 'Register' button a millisecond after another user. Is there then a chance that lastInsertId will return the id of the other user who registered just momentarily earlier?
Maybe what I am really trying to ask is: does the server handle one request at a time and go through a PHP file one at a time?
Please let me know about this.
Thank you.
Perfectly safe. There is no race condition: it only returns the last inserted id from the PDO object (connection) that made the insert.
It is safe: it is guaranteed to return the value from the current connection.
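A minimal illustration (table and column names are made up): lastInsertId() is tracked per connection, so each request sees the id of its own INSERT even if other registrations land in between.

$pdo = new PDO('mysql:host=localhost;dbname=app_db', 'app_user', 'secret');

$stmt = $pdo->prepare('INSERT INTO users (email) VALUES (:email)');
$stmt->execute([':email' => 'new.user@example.com']);

// Id of the row inserted just above on *this* connection,
// not whatever some other request inserted a millisecond earlier
$newUserId = $pdo->lastInsertId();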
I am creating a custom query builder. When the user has created his query, he can verify the query syntax by clicking a button. When the user clicks the button to verify, an AJAX call is sent to the server with the query and execution of the query starts; during this time the user sees a modal on his screen with a cancel button. If the user clicks the cancel button, I want to send another AJAX call to the server to kill the execution of the query.
Currently I am only able to kill the AJAX call which I originally sent, and my page works fine at the user's end,
but I am looking for PHP code to stop the MySQL query on the server side, because some queries can be quite heavy and run for a long time.
First of all, if the purpose is just to check the syntax, do not execute the query! Run EXPLAIN on it, or add LIMIT 0, or execute it against an empty database.
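For example, a sketch of the EXPLAIN variant (connection details are made up; use a restricted account here, since the text is user-supplied):

// MySQL parses and plans the statement for EXPLAIN but does not execute it,
// so syntax errors and unknown tables/columns are reported without the heavy run
$userQuery = $_POST['query'];   // the query text sent by the builder
$mysqli = new mysqli('localhost', 'builder_check', 'secret', 'app_db');

$check = $mysqli->query('EXPLAIN ' . $userQuery);
if ($check === false) {
    echo 'Query error: ' . $mysqli->error;
} else {
    echo 'Query looks valid';
    $check->free();
}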
As for the killing, you have to connect to the database with root privileges and issue a KILL command (but you need to know the query id). Or you can kill the entire thread; take a look at mysqli::kill.
Edit: it seems you don't need root privileges; to see the queries run by your own user, use the SHOW PROCESSLIST command.
I believe that thread_id reported by CONNECTION_ID() is the same as the thread id used in mysqladmin....
You'd need to capture the process id for the connection before starting the query...
$qry="SELECT CONNECTION_ID() AS c";
$row=mysql_fetch_assoc(mysql_query($qry));
$_SESSION['mysql_connection_id']=$row['c'];
then when the user clicks the button do something like....
// Double quotes so $user and $password are interpolated; mysqladmin's subcommand is kill
exec("mysqladmin -u $user -p$password kill "
    . $_SESSION['mysql_connection_id']);
But this opens up the possibility of users killing other people's queries. You could inject a comment key, e.g. the PHP session id, at the beginning of each query (mysqladmin truncates the text of the query), then use mysqladmin processlist to check ownership and/or retrieve the thread id before killing it.
No, it's not possible, as you're closing your connection every time execution ends. There's no way (through PHP) to control old "sessions" (connections).
You would have to allow the application to kill database queries, and you need to implement a more complex interaction between Client and Server, which could lead to security holes if done wrong.
The start request should contain a session and a page id (a secure id, so not 3 and 4 and 5, but a non-guessable, unique hash of some kind). The backend then connects this id with the query. This could be done in an extra table in the database, or the Redis way if you have Redis anyway, but also via a comment in the SQL query, like "Session fid98a08u4j, Page 940jfmkvlz" => s:<session>p:<page>.
/* s:fid98a08u4jp:940jfmkvlz */ select * from ...
If the user presses "Cancel", you send a cancel request with the session and page id to the server. The PHP code then fetches the list of running SQL queries with SHOW PROCESSLIST and searches for the session and page id to extract the query id.
Then the php sends a
kill query <id>
to the MySQL-server.
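A sketch of that cancel handler (the tag format follows the example above; connection details are made up):

// cancel.php (sketch): find the tagged query in the processlist and kill it
$sessionId = 'fid98a08u4j';   // as passed with the cancel request
$pageId    = '940jfmkvlz';
$tag = 's:' . $sessionId . 'p:' . $pageId;        // same tag as in the SQL comment
$mysqli = new mysqli('localhost', 'app_user', 'secret', 'app_db');

$list = $mysqli->query('SHOW FULL PROCESSLIST');  // FULL so the comment is not truncated
while ($row = $list->fetch_assoc()) {
    if (isset($row['Info']) && strpos($row['Info'], $tag) !== false) {
        $mysqli->query('KILL QUERY ' . (int) $row['Id']);  // stops the statement, keeps the connection
        break;
    }
}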
This might lead to trouble when not using transactions, and it might damage replication. Even a KILL QUERY can take some time while the statement sits in the 'killing' state.
So this should be the last resort among the possible variants. But sometimes it has to be done; I once even had a program where users could list their own running queries in order to kill them, which was needed because of "quotas" (you could not run more than two or three reporting requests at the same time).
I have a table called playlist, and I display its contents using the display_playlist.php file.
[screenshot of display_playlist.php]
Every time the user clicks the 'up' or 'down' button to rearrange the song order, I just update the table. But I feel that updating the DB this often is not recommended, so is there a more efficient way to accomplish this task?
I am still a newbie to AJAX, so if AJAX is the only way to do it, can you please explain it in detail? Thank you in advance.
In relative terms, yes, hitting the database is an expensive operation. However, if the playlist state is meant to be persistent then you have to hit the database at some point, it's just a question of when/how often.
One simple optimization you might try is instead of sending each change the user makes to the server right away, allow them to make however many changes they want (using some client-side javascript to keep the UI in the correct state) and provide a "Save Playlist" button that they can press to submit all of their changes to the server at once. That will reduce database hits, and also the number of round-trips made to the server (in terms of what a user experiences, a round-trip to the server is far more expensive than a database hit).
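A sketch of the save handler for that approach (the table/column names and the POST format are assumptions): the client posts the full ordered list of song ids once, and the server writes all positions in a single transaction.

// save_playlist.php (sketch): write the whole new order in one round-trip
$pdo = new PDO('mysql:host=localhost;dbname=app_db', 'app_user', 'secret');

$songIds = $_POST['song_ids'];    // e.g. [7, 3, 12, 5] in the new display order
$pdo->beginTransaction();
$stmt = $pdo->prepare('UPDATE playlist SET position = :pos WHERE song_id = :id');
foreach ($songIds as $pos => $id) {
    $stmt->execute([':pos' => $pos, ':id' => (int) $id]);
}
$pdo->commit();   // one request from the browser, one commit on the database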
More broadly though, you shouldn't get hung up over hypothetical performance concerns. Is your application too slow to handle its current load (and if so, have you done any profiling to verify that it is indeed this database query that is causing the issue)? If not, then you don't need to worry too much about changing it just yet.
You can have a save button, so instead of updating on each move there is only one update, in which you update every row at once. This also lets you have a cancel button so people can revert to the way it was.
You can let users change things locally as much as they wish, and defer writing the final result to the database until they choose to move on from the page.
If you really want to avoid updating the database, you can try one of the JavaScript-based MP3 players, which let you pass in the paths to the *.mp3 files.
Then I suggest you use jQuery UI's Sortable
and use it to update the song list that is fed to the player.