Maintaining state between browser tabs in PHP

I'm building a web application that allows users to run a query against one of two databases. They are only allowed to submit 1 query to each database at a time. Currently, I'm setting $_SESSION['runningAdvancedQuery'] (for instance) to let me know if they're running a query. This is to prevent someone from opening up a 2nd browser tab and hitting the resource again. It works great except...
If I start a query in a tab, then close that tab before it finishes, the flag does not get unset. Therefore, if I reopen the page, I cannot run any queries because it thinks I have one still running. Anyone have advice for getting around this?

Set this value not to, for example, 1, but to a Unix timestamp, and check by comparing the last-query timestamp to now, requiring some time difference to pass before the next query may execute. Remember to set the block time to a safe value: the longest time a query can take to execute. If the user closes his tab, after a short time he will be "unlocked".
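A rough sketch of that timestamp lock (the 30-minute cap is an assumed "longest a query can run" value):

<?php
session_start();

$maxRuntime = 1800; // assumed worst case: no query should ever run longer than 30 minutes

if (isset($_SESSION['runningAdvancedQuery'])
    && time() - $_SESSION['runningAdvancedQuery'] < $maxRuntime) {
    die('Your other query is still running.');
}

$_SESSION['runningAdvancedQuery'] = time(); // record when the lock was taken
session_write_close();                      // don't block other requests while we work

// ... run the long query here ...

session_start();
unset($_SESSION['runningAdvancedQuery']);   // release the lock on normal completion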

<?php
session_start();
ignore_user_abort(true); // keep running even if the user closes the tab, so the unset below still happens

if (!isset($_SESSION['runningAdvancedQuery'])) {
    $_SESSION['runningAdvancedQuery'] = true;
    // run query
    unset($_SESSION['runningAdvancedQuery']);
} else {
    // show error: "your other query isn't finished yet"
    echo "Your other query isn't finished yet.";
}
?>

Instead of setting $_SESSION['runningAdvancedQuery'] to true, you could set it to the output of SELECT CONNECTION_ID();, and check SHOW PROCESSLIST; to see whether it's still running. This would be an add-on to the other answers though: especially when using persistent connections, other processes could have the connection_id in use.
As the same user, you are always allowed to see your own processes. So, to combine the other checks:
Store the timestamp, the connection_id & the actual SQL query (I'd strip all whitespace & normalize to either upper- or lowercase so that slight differences in presentation don't matter).
Of course use the ignore_user_abort() functionality, do a session_write_close() before the query, restart the session & set it to finished after the query.
On a check for the running query, check whether
The connection-id is still present.
The timestamp + seconds the query is running are reasonably close to the current time.
The query (lowercase, stripped whitespace) is about the same query as requested (use SHOW FULL PROCESSLIST; to get the whole query).
Optionally, take it a step further, and give people the possibility to KILL QUERY <connection_id>; (only if the previous 3 checks all resulted in a positive).
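A sketch of those three checks against the process list (the session key names and the normalizeSql() helper are assumptions; adapt them to however you stored the three values):

// assumed helper: strip whitespace & lowercase so presentation differences don't matter
function normalizeSql($sql) {
    return strtolower(preg_replace('/\s+/', '', (string)$sql));
}

$stillRunning = false;
$result = mysqli_query($db, 'SHOW FULL PROCESSLIST');
while ($row = mysqli_fetch_assoc($result)) {
    $timingOk = abs($_SESSION['queryStartedAt'] + (int)$row['Time'] - time()) < 60;
    if ((int)$row['Id'] === $_SESSION['queryConnectionId'] // connection-id still present
        && $timingOk                                       // elapsed time is plausible
        && normalizeSql($row['Info']) === $_SESSION['queryNormalizedSql']) {
        $stillRunning = true;
        break;
    }
}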

Related

PHP script continuing to run after page close [initiated from AJAX]

Everything I google tells me this should not be happening, however it is.
I'm building a migration tool to build a 'master database'. I have an admin panel, only accessible to a few select people, with a merge button that starts an AJAX call to run the migration PHP function. I'm not positive how long this script will take since I'm still developing it, but nonetheless I'm expecting a minimum of 20 minutes once it's pushed to production and populated with the production database. I do NOT need a lecture on best practices telling me not to do it via a GUI. This will become a cron as well; however, I want to be able to trigger it manually if the admin desires.
So here's my process. The migration function immediately closes the session with session_write_close(), allowing me to run multiple PHP scripts simultaneously. I do this because I start a setInterval that checks a session variable. This is my 'progress', which is just an int recording which loop iteration I'm on. In my migration script I open the session, add 1 to that int, and close the session again, at the end of each loop.
By doing this I have successfully created a progress indicator for my AJAX. Now I noticed something. If I start my migration, then close out of my tab - or refresh - once I reload the page my progress continues to grow. This tells me that the migration script is still executing in the background.
If I close 100% out of my browser, or clear my sessions, I no longer see progress go up. This, however, is not because the script stops; it's because my progress indication relies on sessions, and once I clear my sessions or close my browser my session cookie changes. I know the script is still running because I can query the database manually and see that entries are still being added.
NOW to my question:
I do NOT want this. If my browser closes, if I press refresh, if I lose connection, etc., I want the script to be TERMINATED. I want it to stop mid-process.
I tried ignore_user_abort(false); however, I'm pretty sure this is specific to the command line, and it made no difference for me.
I want it to be terminated because I'm building a 'progress resume' function, where we can choose where to resume the migration from.
Any suggestions?
UPDATE:
I didn't want to go this route, but one solution I just thought of is that I could have another session variable: 'last time client was validated', a timestamp. In my JavaScript, on the client side, every 30 seconds or so I could hit a PHP script to update 'last time client was validated'. And in my migration function, at the beginning of each loop, I could check to make sure that timestamp isn't, say, 60 seconds old. If it IS 60 seconds old or older, I die(), thus stopping my script. This would effectively mean "if no client is updating this timestamp, we can assume the user closed his browser/tab or refreshed". As for the function, I can skip this check when running from the command line (cron). Not the ideal solution, but it is my plan B.
I did go with the solution of pinging from the client to indicate whether or not the client is still alive.
So essentially this is what I did:
From the client, in JavaScript, I set up a setInterval to run every 1.5 seconds that hits a PHP script via AJAX. This PHP script updates a session variable with the current timestamp (this could easily be a database value if you needed it to be; I just didn't want the overhead of another query).
$_SESSION['migration_listsync_clientLastPing_'.$decodedJson['progressKey']] = time();
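That single line is essentially the whole endpoint; a fuller sketch of it (the file name and the JSON decoding are assumptions based on the description above):

<?php
// ping.php - heartbeat endpoint hit by the client's setInterval every 1.5 seconds
session_start();
$decodedJson = json_decode(file_get_contents('php://input'), true);
$_SESSION['migration_listsync_clientLastPing_'.$decodedJson['progressKey']] = time();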
Then, inside my migration function I run a check to see if the 'timestamp' is over 10 seconds old, and if it is I die - thus killing the script.
if (isset($_SESSION['migration_listsync_clientLastPing_'.$progressKey])) {
    $calc = time() - $_SESSION['migration_listsync_clientLastPing_'.$progressKey];
    if ($calc > 10) {
        die();
    }
}
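Because the migration closed the session up front, each iteration has to briefly reopen it to see the freshest ping. A sketch of one loop iteration under that assumption ($rows, $iteration, the progress key name, and migrateRow() are stand-ins for the real migration loop):

foreach ($rows as $row) {
    session_start(); // reopen to read the latest ping and write progress
    if (isset($_SESSION['migration_listsync_clientLastPing_'.$progressKey])
        && time() - $_SESSION['migration_listsync_clientLastPing_'.$progressKey] > 10) {
        die(); // no ping for 10+ seconds: browser closed, refreshed, or connection lost
    }
    $_SESSION['migration_progress_'.$progressKey] = ++$iteration; // progress key name assumed
    session_write_close(); // release the session lock so the ping script can write

    migrateRow($row); // the actual migration work
}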
I added a 'progressKey' param, which is a random number from 1-100 generated when the function is called. This number is generated in JavaScript and passed into both of my AJAX calls. This way, if the user refreshes the page and then immediately presses the button again, we won't have 2 instances of the function running. The 'old' instance will die after a few seconds and the new instance will take over.
This isn't an ideal solution; however, it is an effective one.

Is there a standard way to randomly search my database items with a cron until complete, then reset in PHP?

I already have a screen scraper built using PHP cURL, tied to a MySQL database. I have stored products that need to be updated weekly, rather than what I have now (a form where I input the URL/product and hit go).
My first thought would be to run a standard cron every 30 minutes on a PHP file.
I would like to randomize two things: the delay before the PHP script actually accesses the source site (i.e. 0-20 minutes), so the process timing is random; and the order in which I access my target items/pages, which should be random but still be sure to get all of them weekly and/or consistently before cycling through the list again.
The timer is fairly straightforward and needs no storage of data, but how should I keep track of my items/URIs in this fashion? I was thinking a second cron to clear data while the first just increments, but I still have to set flags as to what was already updated, and I am just not familiar enough to choose where and how to store this data.
I am using MySQL, with HTML5 options, and am on CodeIgniter, so I can also hold data in SQLite as an option, along with cookies if that makes sense. A couple of questions on this part: do I query my database (MySQL) for what I need every time, or do I store it in a JSON file once a week and run from that? This obviously depends on/determines where I flag what was already processed.
You have a list of items to scrape in your MySQL database. Ensure that there is a field that holds the last time each item was scraped.
Set a cron job to run every minute with this workflow:
1. Ensure that the previous run of the script has completed (see step #4). If not, end.
2. Check the last time you scraped any item.
3. Ensure enough time has passed (see step #9). If not, end.
4. Set a value somewhere to show that you are processing (so step #1 of subsequent runs is aware).
5. Select an item to scrape at random, from those that haven't been scraped in n time.
6. Delay a random interval of seconds to ensure all requests aren't always on the minute.
7. Scrape it.
8. Update the time last scraped for that item.
9. Set a random time to wait before the next operation (so step #3 of subsequent runs is aware).
10. Set a value to show that you are not processing (so step #1 of subsequent runs is aware).
11. End.
Once all items have been scraped, you can set a variable to hold the time the batch was completed and use it for n in step #5.
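A minimal sketch of that worker; every table and column name here (items, last_scraped, worker_state, is_processing, next_run_after) is an assumption, and scrape() stands in for your existing cURL code:

<?php
// cron worker - run every minute; implements the numbered workflow above
$pdo = new PDO('mysql:host=localhost;dbname=scraper', 'user', 'pass');

// steps #1-#3: bail out if a previous run is still busy or it's too early
$state = $pdo->query("SELECT is_processing, next_run_after FROM worker_state")->fetch();
if ($state['is_processing'] || time() < (int)$state['next_run_after']) {
    exit;
}

$pdo->exec("UPDATE worker_state SET is_processing = 1"); // step #4

// step #5: pick a random item that hasn't been scraped in the last week
$item = $pdo->query("SELECT id, url FROM items
    WHERE last_scraped < NOW() - INTERVAL 7 DAY
    ORDER BY RAND() LIMIT 1")->fetch();

if ($item) {
    sleep(rand(0, 59));            // step #6: don't always hit on the minute
    scrape($item['url']);          // step #7: your existing cURL code (assumed)
    $stmt = $pdo->prepare("UPDATE items SET last_scraped = NOW() WHERE id = ?");
    $stmt->execute([$item['id']]); // step #8
}

// steps #9-#10: schedule a random wait (0-20 min) and release the lock
$stmt = $pdo->prepare("UPDATE worker_state SET is_processing = 0, next_run_after = ?");
$stmt->execute([time() + rand(0, 20 * 60)]);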

Modify variables from different PHP Sessions

I am making a session system for my website using PHP and MySQL. The idea is that a user session will last around 5 minutes if they are inactive, and a cron job runs every now and then, checks whether sessions have expired, and removes them if they have.
The issue:
Every time someone loads their page it has to check the database to see if their session is still valid. I am wondering if in that cron job task, I could make it find that user's PHP session and change a variable like $_SESSION['isValidSession'] and set it to false.
So once they load the page, it just checks that variable to see whether the session is valid.
Sorry for the wall of text!
TL;DR: I want to modify session variables of different specified sessions.
Thanks.
Every time someone loads their page it has to check the database to see if their session is still valid. I am wondering if in that cron job task, I could make it find that user's PHP session and change a variable like $_SESSION['isValidSession'] and set it to false.
You have to do this regardless. When the users load their page, the system must verify whether the session exists in the database (I assume that you're using a DB).
If you run the cron job every minute, and expire all sessions older than five (which seems rather excessive? I often stay inactive on a site for five, ten, even fifteen minutes if I am reading a long page), this will automatically "mark invalid" (actually remove) the sessions.
Normally you would keep a TIMESTAMP column with the time of last update of that row (meaning that session), and the cron job would DELETE all rows with timestamp older than five minutes ago. When reloading the page, the system would no longer find the relevant session row, and deduce (correctly) that the session has expired.
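In the cron job that cleanup reduces to a single statement (table and column names assumed):

// drop every session that has been idle for more than five minutes
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->exec("DELETE FROM SessionTable WHERE LastUpdate < NOW() - INTERVAL 5 MINUTE");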
However, what you want (reading a session knowing its session ID) can be accomplished by having the cron job (you can code the job in PHP) read in the session: either load it as an extant session given its ID, or read the DB column holding the serialized data with SELECT SessionData FROM SessionTable WHERE id = 'SessionId'; and de-serialize it. Then you modify the inflated object, re-serialize it, and store it back in the database with an SQL UPDATE. Hey presto!, the session has now been modified.
But be aware that this will likely cause concurrency problems with active clients, and cannot be done in SQL in one fell swoop - you can't execute UPDATE Sessions SET isInactive = 1 WHERE expiry... directly. Normally you need to read the rows of interest one by one, unserialize them and store them back, processing them with PHP code.
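A sketch of that read-modify-write cycle from a PHP cron job, assuming the SessionData column holds serialize()-format data as described above (connection details and the session ID are placeholders):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare("SELECT SessionData FROM SessionTable WHERE id = ?");
$stmt->execute(['SessionId']);
$data = unserialize($stmt->fetchColumn()); // inflate the stored structure
$data['isValidSession'] = false;           // the actual modification
$stmt = $pdo->prepare("UPDATE SessionTable SET SessionData = ? WHERE id = ?");
$stmt->execute([serialize($data), 'SessionId']);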
You can do it indirectly with two different workarounds.
One: you change your session code to use unserialized data. This will impact maintainability and performance (you can't "just add" something to a session: you have to create a column for it).
Two: you take advantage of the fact that in serialized form, "0" and "1" have the same length. That is, the serialized session containing isValidSession (name of 14 characters) will contain the text
...{s:14:"isValidSession";b:1;}...
and you can change that piece of string to {s:14:"isValidSession";b:0;}, thus making isValidSession become false. This is not particularly good practice - you're messing with the system's internals. Of course, I don't think anybody expects PHP's serialized-data syntax to change anytime soon (...or do they?).
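In code, that string surgery could look like this (a sketch; it assumes the row really does contain the serialize()-style fragment shown above):

// flip isValidSession from true to false inside the raw serialized blob;
// both fragments are the same length, so the surrounding structure stays intact
$raw = str_replace('s:14:"isValidSession";b:1;', 's:14:"isValidSession";b:0;', $raw);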
You should store the time of each user's last request in the database.
In the cron job, compare each user's last request time to the current time to check whose sessions have expired.
Then update a column in the database to false for the expired users.
After that, you can easily find out which users should be logged out just by checking that column in the database.
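A sketch of that cron update (table and column names are assumptions):

// mark every session whose last request is older than the five-minute timeout
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->exec("UPDATE sessions SET is_valid = 0 WHERE last_request < NOW() - INTERVAL 5 MINUTE");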

Stop 2 identical queries from executing almost simultaneously?

I have developed an AJAX-based game where there is a bug (very remote, but in volume it happens at least once per hour) where for some reason two requests get sent to the processing page almost simultaneously (the last one I tracked, the requests were .0001 ms apart). There is a check right before the query is executed to make sure that it doesn't get executed twice, but since the difference is so small, the check hasn't finished before the next query gets executed. I'm stumped: how can I prevent this, as it is causing serious problems in the game?
Just to be clearer: the query starts a new round in the game, so when it executes twice it starts 2 rounds at the same time, which breaks the game. I need to be able to stop the script from executing if the previous round isn't over, even if that previous round started .0001 ms ago.
The trick is to use an atomic update statement to do something which cannot succeed for both threads. They must not both succeed, so you can simply do a query like:
UPDATE gameinfo SET round=round+1 WHERE round=1234
where 1234 is the current round that was in progress. Then check the number of rows affected. If a thread affects zero rows, it has failed: someone else got there beforehand. I am assuming that the above will be executed in its own transaction, as autocommit is on.
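With PDO, for example, the affected-row check might look like this ($pdo and $currentRound are assumed to exist):

// only one of two racing requests can match the old round number
$stmt = $pdo->prepare("UPDATE gameinfo SET round = round + 1 WHERE round = ?");
$stmt->execute([$currentRound]);
if ($stmt->rowCount() === 0) {
    exit('Another request already started the new round.');
}
// we won the race: safe to set up the new round here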
So all you really need is an application-wide mutex. flock() or sem_acquire() provide this, but only at the system level; if the application is spread across multiple servers you'd need to use memcached or implement your own socket server to coordinate nodes.
Alternatively you could use the database as a common storage area: e.g. with MySQL, acquire a lock on a table, check when the round was last started, if necessary update the row to say a new round is starting (and remember this), then unlock the table. Carry on....
C.
Locks are one way of doing this, but a simpler way in some cases is to make your request idempotent. This just means that calling it repeatedly has exactly the same effect as calling it once.
For example, your call at the moment effectively does $round++; Clearly repeated calls to this will cause trouble.
But you could instead do $round = $newround; Here repeated calls won't have any effect, because the round has already been set, and the second call just sets it to the same value.
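As a sketch, the idempotent version combines nicely with the atomic update from the answer above (table and variable names assumed):

// setting the round to an explicit value is harmless to repeat: the second
// identical request matches zero rows and changes nothing
$stmt = $pdo->prepare("UPDATE gameinfo SET round = ? WHERE round < ?");
$stmt->execute([$newRound, $newRound]);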

php mysql db connect very slow

I have a small PHP framework that basically just loads a controller and a view.
An empty page loads in about 0.004 seconds, using microtime() at the beginning and end of execution.
Here's the "problem". If I do the following:
$link = @mysql_connect($server, $user, $pass, $link);
@mysql_select_db($database, $link);
the page loads in about 0.500 seconds. A whopping 12500% jump in time to render the empty page.
Is this normal or am I doing something seriously wrong here... (I'm hoping for the latter).
EDIT: Could someone say what a normal time penalty is for just connecting to a MySQL DB as above?
Error suppression with @ will slow down your script. Also, an SQL connection is reliant on the speed of the server, so if the server is slow to respond, you will get slow execution of your script.
Actually, I don't get what you're trying to do with the $link variable.
$link = @mysql_connect($server, $user, $pass, $link); is probably wrong and possibly not doing what you want (i.e. nothing, in your example), unless you have more than one link to databases (advanced stuff).
The php.net documentation states:
new_link: If a second call is made to mysql_connect() with the same arguments, no new link will be established; instead, the link identifier of the already opened link will be returned. The new_link parameter modifies this behavior and makes mysql_connect() always open a new link, even if mysql_connect() was called before with the same parameters. In SQL safe mode, this parameter is ignored.
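So, presumably, the intended call either omits the fourth argument or passes true explicitly when a genuinely separate connection is needed; a sketch:

// pass true as new_link only if you really need a second, independent connection
$link = mysql_connect($server, $user, $pass);
mysql_select_db($database, $link);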
On my webserver, the load time is always about the same on average (4000 µsec the first time and 600 µsec the second time).
Half a second to connect to a mysql database is a bit slow but not unusual either. If it's on another server on the network with an existing load, it might be absolutely normal.
I wouldn't worry too much about this problem.
(Oh, an old question! Never mind, still replying.)
Have you tried verifying your data integrity? Do a REPAIR TABLE and an OPTIMIZE TABLE on all tables. I know that sometimes when a table has gone corrupt, the connection can take an enormous amount of time / fail altogether.
The reason may be slow domain (DNS) name resolution.
skip-name-resolve
Add this to my.cnf, then restart mysqld.
And with skip-name-resolve enabled, you can't use hostnames in MySQL user permissions.
