Lock MySQL database table row on SELECT - PHP

I am managing a pool of stateless sessions for a web service, shared between users.
When a user calls the web service, a session is started for them; the response timeout is 5 seconds, so a user can hold a session for 5 seconds at most. When a second user comes in, the system checks whether a session is available and, if so, reuses it.
Now I have a problem here. Say a session is available: user A comes in, the system checks that the session was last used more than 5 seconds ago, and assigns it to user A. At the same time another request hits, the same check passes, and the session is assigned to user B as well.
Now both users are using the same session and the system fails for one of them.
I have tried a SELECT ... FOR UPDATE statement to lock the row.
I have also tried updating the last-used time as soon as the session is selected by the first user, but this didn't work (I think the second user hits the system at exactly the same time).
Can someone advise on this?
Code: check the database for an available session; if one exists, pick it, otherwise insert a new one.
//get 25 sessions from the database, ordered by LastQueryDate
$session = $sessionObj->select('session', '*', '', 'LastQueryDate DESC', '25');
$available_session = array();
//if sessions are available, get the rows from getResult
if ($session) {
    $session_Data = $sessionObj->getResult();
    //now get a session that has been sitting there longer than the response timeout
    $available_session = $sessionObj->getAvailableSession($session_Data);
}
//if there is one, use it; otherwise create a new session and save it in the database
if (!$available_session) {
    $auth->securityAuthenticate();
    $header = $auth->getHeaders();
    $sequence = (int) $header['Session']->SequenceNumber + 1;
    $values[] = $header['Session']->SessionId;
    $values[] = $sequence;
    $values[] = $header['Session']->SecurityToken;
    $rows = "SessionID,SequenceNo,Security_token";
    if ($sessionObj->insert('session', $values, $rows)) {
        $available_session['Session']->SessionId = $header['Session']->SessionId;
        $available_session['Session']->SequenceNumber = $sequence;
        $available_session['Session']->SecurityToken = $header['Session']->SecurityToken;
    }
}
The function that checks session availability in the database:
public function getAvailableSession($session_data) {
    $available_session = array();
    foreach ($session_data as $key) {
        if (!is_array($key)) {
            $key = $session_data;
        }
        $date = date('Y-m-d h:i:s a', time());
        $now = new DateTime($date);
        $last_query_time = new DateTime($key['LastQueryDate']);
        $dteDiff = $last_query_time->diff($now);
        $difference = $dteDiff->format("%H:%I:%S");
        //if the time since the session was last used exceeds the response timeout (RTO), pick it
        if (RTO <= $difference) {
            $available_session['Session']->SessionId = $key['SessionID'];
            $available_session['Session']->SequenceNumber = $key['SequenceNo'];
            $available_session['Session']->SecurityToken = $key['Security_token'];
            //update LastQueryDate immediately; its default value is set to the current timestamp
            $session_value = $key['SessionID'];
            $rows['SequenceNo'] = $key['SequenceNo'];
            $where[0] = "SessionID";
            $where[2] = "'" . $session_value . "'";
            $this->update('session', $rows, $where);
            return $available_session;
        }
    }
    return false;
}
As soon as I find a session that has been idle for more than 5 seconds, I update the database.

Open a transaction, issue a SELECT ... FOR UPDATE query to fetch the session data, and commit the transaction at the end of the script.
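For example, a minimal sketch of that with PDO. The table and column names are taken from the question, but the connection details, the 5-second threshold, and the exact update are assumptions:

<?php
// Assumes a PDO connection to the same `session` table used above (InnoDB).
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    // FOR UPDATE locks the selected row until commit, so a second request
    // running the same statement waits instead of grabbing the same session.
    $stmt = $pdo->prepare(
        "SELECT SessionID, SequenceNo, Security_token
           FROM session
          WHERE LastQueryDate < NOW() - INTERVAL 5 SECOND
          ORDER BY LastQueryDate ASC
          LIMIT 1
            FOR UPDATE"
    );
    $stmt->execute();
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    if ($row) {
        // Mark the session as taken while the row is still locked.
        $update = $pdo->prepare("UPDATE session SET LastQueryDate = NOW() WHERE SessionID = ?");
        $update->execute(array($row['SessionID']));
    }

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}

The row lock only takes effect on InnoDB tables, and every competing request has to read the row with FOR UPDATE; a plain SELECT does not wait for the lock.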

How does PHP & MySQL handle simultaneous requests?

I must be missing something regarding how simultaneous requests are handled by PHP/Symfony, or perhaps how potentially simultaneous queries on the DB are handled...
This code seems to be doing the impossible: it randomly (about once a month) creates a duplicate of the new entity at the bottom. I conclude that it must happen when two clients send the same request at nearly the same time, and both threads execute the SELECT query simultaneously, picking up the entry where stop == NULL; then they both (?) set the stoptime for that entry, and they both write a new entry.
Here's my logical outline as far as I understand things:
Get all entries with NULL for stoptime
Loop over those entries
Only proceed if the day of the entry (UTC) is different from current day (UTC)
Set stoptime for the open entry to 23:59:59 and flush to DB
Build a new entry with the starttime 00:00:00 on the next day
Assert that there are no other open entries in that position
Assert that there are no future entries in that position
Only then - flush the new entry to DB
Controller autocloseAndOpen
//if entry spans daybreak (midnight) close it and open a new entry at the beginning of next day
private function autocloseAndOpen($unit) {
    $now = new \DateTime("now", new \DateTimeZone("UTC"));
    $repository = $this->em->getRepository('App\Entity\Poslog\Entry');
    $query = $repository->createQueryBuilder('e')
        ->where('e.stop is NULL')
        ->getQuery();
    $results = $query->getResult();
    if (!isset($results[0])) {
        return null; //there are no open entries at all
    }
    $em = $this->em;
    $messages = "";
    foreach ($results as $r) {
        if ($r->getPosition()->getACRGroup() == $unit) { //only touch the user's own entries
            $start = $r->getStart();
            //Assert entry spanning datebreak
            //Format to strings first; if $start->format("Y-m-d") is put directly in the
            //comparison clause, PHP compares the DateTime objects, not the formatted output.
            $startStr = $start->format("Y-m-d");
            $nowStr = $now->format("Y-m-d");
            if ($startStr < $nowStr) {
                $stop = new \DateTimeImmutable($start->format("Y-m-d") . " 23:59:59", new \DateTimeZone("UTC"));
                $r->setStop($stop);
                $em->flush();
                $txt = $unit->getName() . " had an entry in position (" . $r->getPosition()->getName() . ") spanning datebreak (UTC). Automatically closed at " . $stop->format("Y-m-d H:i:s") . "z.";
                $messages .= "<p>" . $txt . "</p>";
                //Open new entry
                $newStartTime = $stop->modify('+1 second');
                $entry = new Entry();
                $entry->setStart($newStartTime);
                $entry->setOperator($r->getOperator());
                $entry->setPosition($r->getPosition());
                $entry->setStudent($r->getStudent());
                $em->persist($entry);
                //Assert that there are no future entries before autoopening a new entry
                $futureE = $this->checkFutureEntries($r->getPosition(), true);
                $openE = $this->checkOpenEntries($r->getPosition(), true);
                if ($futureE !== 0 || $openE !== 0) {
                    $txt = "Tried to open a new entry for " . $r->getOperator()->getSignature() . " in the same position (" . $r->getPosition()->getName() . ") next day but there are conflicting entries.";
                    $messages .= "<p>" . $txt . "</p>";
                } else {
                    $em->flush(); //store to DB
                    $txt = "A new entry was opened for " . $r->getOperator()->getSignature() . " in the same position (" . $r->getPosition()->getName() . ")";
                    $messages .= "<p>" . $txt . "</p>";
                }
            }
        }
    }
    return $messages;
}
I'm even running an extra check here with checkOpenEntries() to see whether there are, at this point, any entries with stoptime == NULL in that position. Initially I thought that was superfluous, because I assumed that if one request is running and operating on the DB, the other request would not start until the first had finished.
private function checkOpenEntries($position, $checkRelatives = false) {
    $positionsToCheck = array();
    if ($checkRelatives == true) {
        $positionsToCheck = $position->getRelatedPositions();
        $positionsToCheck[] = $position;
    } else {
        $positionsToCheck = array($position);
    }
    //Get all open entries for position
    $repository = $this->em->getRepository('App\Entity\Poslog\Entry');
    $query = $repository->createQueryBuilder('e')
        ->where('e.stop is NULL and e.position IN (:positions)')
        ->setParameter('positions', $positionsToCheck)
        ->getQuery();
    $results = $query->getResult();
    if (!isset($results[0])) {
        return 0; //tells caller that there are no open entries
    } else {
        if (count($results) === 1) {
            return $results[0]; //if exactly one open entry, return that object to caller
        } else {
            $body = 'Found more than 1 open log entry for position ' . $position->getName() . ' in ' . $position->getACRGroup()->getName() . ' this should not be possible, there appears to be corrupt data in the database.';
            $this->email($body);
            $output['success'] = false;
            $output['message'] = $body . ' An automatic email has been sent to ' . $this->globalParameters->get('poslog-email-to') . ' to notify of the problem, manual inspection is required.';
            $output['logdata'] = null;
            return $this->prepareResponse($output);
        }
    }
}
Do I need to start this function with some kind of "lock database" method to achieve what I am trying to do?
I have tested all the functions, and when I simulate all kinds of states (an entry with NULL for stoptime even when that shouldn't be possible, etc.) everything works out. The majority of the time it all works nicely, but on that one day somewhere in the middle of the month, this thing happens...
You can never guarantee sequential order (or implicit exclusive access). Try that and you will dig yourself deeper and deeper.
As Matt and KIKO mentioned in the comments, you could use constraints and transactions; those should help immensely, as your database will stay clean, but keep in mind that your app needs to be able to catch errors produced by the database layer. Definitely worth trying first.
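Whatever constraint you design (for example a unique key that forbids the duplicate row), the application then has to catch the violation. A hedged, Doctrine-flavoured sketch; the constraint itself and the recovery strategy are assumptions, not code from the question:

use Doctrine\DBAL\Exception\UniqueConstraintViolationException;

try {
    $em->persist($entry);
    $em->flush();
} catch (UniqueConstraintViolationException $e) {
    // Another request already inserted the conflicting row:
    // reload the open entry and work with that instead of creating a new one.
}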
Another way of handling this is to enforce database/application level locking.
Database-level locking is coarser and very unforgiving if you forget to release a lock somewhere (in long-running scripts).
MySQL docs:
If the connection for a client session terminates, whether normally or abnormally, the server implicitly releases all table locks held by the session (transactional and nontransactional). If the client reconnects, the locks are no longer in effect.
Locking the entire table is usually a bad idea altogether, but it is doable. This highly depends on the application.
Some ORMs support object versioning out of the box and throw an exception if the version has changed during execution. In theory, your application would get an exception and, upon retrying, would find that someone else had already populated the field and that the row is no longer a candidate for an update.
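With Doctrine that looks roughly like the sketch below; this is a hedged example, and the version field name and the retry handling are mine, not from the question:

use Doctrine\ORM\Mapping as ORM;
use Doctrine\ORM\OptimisticLockException;

#[ORM\Entity]
class Entry
{
    // Doctrine adds "WHERE version = :expected" to every UPDATE of a versioned
    // entity and increments the column; if another request changed the row first,
    // the UPDATE matches nothing and flush() throws OptimisticLockException.
    #[ORM\Version]
    #[ORM\Column(type: 'integer')]
    private int $version = 1;

    // ... the rest of the entity ...
}

// In the service that closes entries:
try {
    $entry->setStop($stop);
    $em->flush();
} catch (OptimisticLockException $e) {
    // Someone else modified this entry in the meantime; re-fetch it and decide again.
}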
Application-level locking is more fine-grained, but every point in the code needs to honor the lock; otherwise you are back to square one. And if your app is distributed (say on K8S, or just deployed across multiple servers), your locking mechanism has to be distributed as well (not local to one instance).
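One simple way to get a lock that every instance can honor, without extra infrastructure, is a MySQL named (advisory) lock. A hedged sketch; the lock name and the 5-second timeout are made up for illustration:

$conn = $this->em->getConnection();

// GET_LOCK() takes a named, server-wide advisory lock. Because it lives in MySQL,
// every app server sharing the database can honor it, and it is released
// automatically if the connection dies.
if ((int) $conn->fetchOne("SELECT GET_LOCK('poslog_autoclose', 5)") === 1) {
    try {
        // ... run the close-and-reopen logic here ...
    } finally {
        $conn->fetchOne("SELECT RELEASE_LOCK('poslog_autoclose')");
    }
} else {
    // Could not get the lock within 5 seconds; skip this run or retry later.
}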

Alter table after X minutes, even if the user closed the window - is this a good way?

I'm trying to make a mission system where users can accept a mission or not; if it hasn't been accepted after X minutes it should be inactivated. Is this a good way to do it, and would it be able to handle 10K missions per day?
<?php
$mission_id = htmlspecialchars($_POST["mission_id"]);
$user = $_SESSION["user"];
// Verify that the user is the same as the mission agent
$verify = $conn->prepare("SELECT agent FROM missions WHERE id = ? AND active = 0 ORDER BY id LIMIT 1");
$verify->bindParam(1, $mission_id);
$verify->execute();
$verify = $verify->fetch(PDO::FETCH_ASSOC);
if ($verify["agent"] == $user) {
    unset($verify);
    // Do time code: keep running even if the user closes the window.
    ignore_user_abort(true);
    set_time_limit(300);
    $time = 0;
    while ($time < 300) {
        sleep(15);
        $time += 15;
        // check if mission was accepted
        $verify = $conn->prepare("SELECT accepted FROM missions WHERE id = ? ORDER BY id LIMIT 1");
        $verify->bindParam(1, $mission_id);
        $verify->execute();
        $verify = $verify->fetch(PDO::FETCH_ASSOC);
        if ($verify["accepted"] == 0) { // not accepted
            unset($verify);
            // Inactivate mission
            $inactivate = $conn->prepare("UPDATE missions SET active = 0 WHERE id = ?");
            $inactivate->bindParam(1, $mission_id);
            $inactivate->execute();
            unset($inactivate);
        } else {
            break;
        }
    }
} else {
    // Log user out
    header("location: logout.php");
}
?>
Use MySQL's CREATE EVENT functionality, which is perfect for these types of situations. Think of events as scheduled stored procedures (as complicated as you want) that fire on a very flexible, recurring schedule.
This functionality was put in to do away with cron, especially when only database operations need to occur.
A high-level view of it can be seen in an answer I wrote up.
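A hedged sketch of what such an event could look like for this table; the one-minute schedule, the 17-minute window, and the created_at column are assumptions, and the MySQL event scheduler (event_scheduler = ON) has to be enabled:

// One-time setup, e.g. from a migration script, using the same $conn PDO handle.
$conn->exec("
    CREATE EVENT IF NOT EXISTS expire_unaccepted_missions
    ON SCHEDULE EVERY 1 MINUTE
    DO
        UPDATE missions
           SET active = 0
         WHERE accepted = 0
           AND active = 1
           AND created_at < NOW() - INTERVAL 17 MINUTE
");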
I would use an expiry timestamp instead: set it to 17 minutes in the future when you create the mission. If the user doesn't act on it, it expires automatically; if they do, update the expiry to 9999-12-31. Don't use an active/inactive flag at all; just compare against the current time to get all active missions. Then no event is needed, no crontab, no extra code.
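A minimal sketch of that idea with the same PDO handle; the expires_at column name is an assumption:

// When creating the mission: it expires 17 minutes from now unless accepted.
$insert = $conn->prepare(
    "INSERT INTO missions (agent, expires_at) VALUES (?, NOW() + INTERVAL 17 MINUTE)"
);
$insert->execute(array($user));

// When the user accepts: push the expiry effectively forever.
$accept = $conn->prepare("UPDATE missions SET accepted = 1, expires_at = '9999-12-31' WHERE id = ?");
$accept->execute(array($mission_id));

// Listing active missions is then just a comparison against the clock;
// nothing has to run in the background to flip a flag.
$active = $conn->query("SELECT * FROM missions WHERE expires_at > NOW()");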

Reducing Server Response Time When Loading Many Resources

I have two PHP scripts that load many resources from external APIs, causing response times as long as 2.2 to 4 seconds. Any suggestions on how to decrease response times and increase efficiency would be much appreciated.
FIRST SCRIPT
require('path/to/local/API_2');
//Check if user has put a query and that it's not empty
if (isset($_GET['query']) && !empty($_GET['query'])) {
//$query is user input
$query = str_replace(" ", "+", $_GET['query']);
$query = addslashes($query);
//HTTP Request to API_1
//Based on $query
//Max Variable is ammount of results I want to get back in JSON format
$varlist = file_get_contents("http://ADRESS_OF_API_1.com?$query&max=10");
//Convert JSON to Array()
$varlist = json_decode($varlist, true);
//Initializing connection to API_2
$myAPIKey = 'KEY';
$client = new APIClient($myAPIKey, 'http://ADRESS_OF_API_2.com');
$Api = new API_FUNCTION($client);
$queries = 7;
//Go through $varlist and get data for each element in array then use it in HTML
//Proccess all 8 results from $varlist array()
for ($i = 0; $i <= $queries; ++$i) {
//Get info from API based on ID included in first API data
//I don't use all info, but I can't control what I get back.
$ALL_INFO = $Api->GET_FUNCTION_1($varlist[$i]['id']);
//Seperate $ALL_INFO into info I use
$varlist[$i]['INFO_1'] = $ALL_INFO['PATH_TO_INFO_1'];
$varlist[$i]['INFO_2'] = $ALL_INFO['PATH_TO_INFO_2'];
//Check if info exists
if($varlist[$i]['INFO_1']) {
//Concatenate information into HTML
$result.='
<div class="result">
<h3>'.$varlist[$i]['id'].'</h3>
<p>'.$varlist[$i]['INFO_1'].'</p>
<p>'.$varlist[$i]['INFO_2'].'</p>
</div>';
} else {
//In case of no result for specific Info ID increase
//Allows for 3 empty responses
++$queries;
}
}
} else {
//If user didn't enter a query, relocates them back to main page to enter one.
header("Location: http://websitename.com");
die();
}`
NOTE: $result accumulates the HTML output from each pass around the loop.
NOTE: Almost all of the time is spent in the for ($i = 0; $i <= 7; ++$i) loop.
SECOND SCRIPT
//Same API as before
require('path/to/local/API_2');
//Check if query is set and not empty
if (isset($_GET['query']) && !empty($_GET['query'])) {
//$query is specific $varlist[$i]['id'] for more information on that data
$query['id'] = str_replace(" ", "+", $_GET['query']);
$query['id'] = addslashes($query['id']);
//Initializing connection to only API used in this script
$myAPIKey = 'KEY';
$client = new APIClient($myAPIKey, 'http://ADRESS_OF_API_2.com');
$Api = new API_FUNCTION($client);
$ALL_INFO_1 = $Api->GET_FUNCTION_1($query['id']);
$query['INFO_ADRESS_1.1'] = $ALL_INFO_1['INFO_ADRESS_1'];
$query['INFO_ADRESS_1.2'] = $ALL_INFO_2['INFO_ADRESS_2'];
$ALL_INFO_2 = $Api->GET_FUNCTION_2($query['id']);
$query['INFO_ADRESS_2.1'] = $ALL_INFO_3['INFO_ADRESS_3'];
$ALL_INFO_3 = $Api->GET_FUNCTION_3($query['id']);
$query['INFO_ADRESS_3.1'] = $ALL_INFO_4['INFO_ADRESS_4'];
$ALL_INFO_4 = $Api->GET_FUNCTION_4($query['id']);
$query['INFO_ADRESS_4.1'] = $ALL_INFO_5['INFO_ADRESS_5'];
$query['INFO_ADRESS_4.2'] = $ALL_INFO_6['INFO_ADRESS_6'];
$ALL_INFO_5 = $Api->GET_FUNCTION_5($query['id']);
$query['INFO_ADRESS_5.1'] = $ALL_INFO_7['INFO_ADRESS_7'];
}
$result = All of the $query data from the API;
} else {
//If no query relocates them back to first PHP script page to enter one.
header("Location: http://websitename.com/search");
die();
}`
NOTE: Similarly to the first script, most of the time is spent getting info from the secondary API.
NOTE: In the second script, the first API is replaced by a single specific variable from the first script's page, so $varlist[$i]['id'] = $query['id'].
NOTE: Again, $result is the HTML data.
You could also move the API calls out from your normal page load. Respond to the user with a generic page to show something is happening and then make an ajax request to query the APIs and respond with data. There really is no way to speed up an individual external request. Your best bet is to:
try to minimize the number of requests (even if it means you request a little more data once and then filter it on your side, versus sending multiple requests for small subsets of data).
cache any remaining requests and pull from the cache (see the sketch after this list).
respond with a small page to let the user know something is happening and make separate ajax requests for the queried data.
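A hedged sketch of the caching idea for the first script; the temp-directory cache, the 10-minute TTL, and the helper name are assumptions for illustration:

// Cache a single API_2 lookup on disk so repeated queries for the same id
// skip the slow HTTP round trip.
function cached_get_function_1($Api, $id, $ttl = 600) {
    $cacheFile = sys_get_temp_dir() . '/api2_' . md5($id) . '.json';
    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return json_decode(file_get_contents($cacheFile), true);
    }
    $data = $Api->GET_FUNCTION_1($id); // the slow external call
    file_put_contents($cacheFile, json_encode($data));
    return $data;
}

// Inside the loop, replace the direct call:
// $ALL_INFO = $Api->GET_FUNCTION_1($varlist[$i]['id']);
$ALL_INFO = cached_get_function_1($Api, $varlist[$i]['id']);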

Avoid too many SQL queries on page reload

I have a website page that, on load, fires 10 different queries against a table with 150,000,000 rows.
Normally the page loads in under 2 seconds, but if I refresh too often it fires a lot of queries, which can slow the page load to as much as 10 seconds.
How can I avoid firing all of those queries, since it would kill my database?
I have no caching yet. The site works in the following way: I have a table where all URIs are stored. When a user enters a URL, I grab the URI out of the called URL and check whether that URI is stored in the table. If it is, I pull the corresponding data from the other tables of the relational database.
Example code from one of the PHP files that pulls the information from the other tables:
<?php
set_time_limit(2);
define('MODX_CORE_PATH', '/path/to/modx/core/');
define('MODX_CONFIG_KEY', 'config');
require_once MODX_CORE_PATH . 'model/modx/modx.class.php';
// Credentials for the foreign database
$host = 'hostname';
$username = 'user';
$password = 'password';
$dbname = 'database';
$port = 3306;
$charset = 'utf8mb4';
$dsn = "mysql:host=$host;dbname=$dbname;port=$port;charset=$charset";
$xpdo = new xPDO($dsn, $username, $password);
// Catch the URI that was called
$pageURI = $_SERVER["REQUEST_URI"];
// Get the language token saved as TV "area" in the parent and remove it
if (!isset($modx)) return '';
$top = isset($top) && intval($top) ? $top : 0;
$id = isset($id) && intval($id) ? intval($id) : $modx->resource->get('id');
$topLevel = isset($topLevel) && intval($topLevel) ? intval($topLevel) : 0;
if ($id && $id != $top) {
    $pid = $id;
    $pids = $modx->getParentIds($id);
    if (!$topLevel || count($pids) >= $topLevel) {
        while ($parentIds = $modx->getParentIds($id, 1)) {
            $pid = array_pop($parentIds);
            if ($pid == $top) {
                break;
            }
            $id = $pid;
            $parentIds = $modx->getParentIds($id);
            if ($topLevel && count($parentIds) < $topLevel) {
                break;
            }
        }
    }
}
$parentid = $modx->getObject('modResource', $id);
$area = "/" . $parentid->getTVValue('area');
$URL = str_replace($area, '', $pageURI);
$lang = $parentid->getTVValue('lang');
// Query the foreign database (bind the values instead of interpolating them to avoid SQL injection)
$output = '';
$sql = "SELECT epf_application_detail.description
        FROM epf_application_detail
        INNER JOIN app_uri ON epf_application_detail.application_id = app_uri.application_id
        WHERE app_uri.uri = ? AND epf_application_detail.language_code = ?";
$stmt = $xpdo->prepare($sql);
$stmt->execute(array($URL, $lang));
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    $output .= nl2br($row['description']);
}
return $output;
Without knowing what language you're using, here is some pseudo-code to get you started.
Instead of firing a large query every time your page loads, you could create a separate table called something like "cache". You could run your query and then store the data from the query in that cache table. Then when your page loads, you can query the cache table, which will be much smaller, and won't bog things down when you refresh the page a lot.
Pseudo-Code (which can be done on an interval using a cronjob or something, to keep your cache fresh.):
Run your ten large queries
For each query, add the results to cache like so:
query_id | query_data
----------------------------------------------------
1 | {whatever your query data looks like}
Then, when your page loads, have each query read its data from the cache instead.
It is important to note that with a cache table you will need to refresh it often (either whenever you get more data, or on a set interval, like every 5 minutes).
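A hedged sketch of that cache-table idea; the $heavyQueries map, the cache table layout, and the use of JSON for query_data are assumptions that follow the pseudo-code above:

// Refresh job (run from cron, or whenever new data arrives):
// run each heavy query once and store its result as JSON.
foreach ($heavyQueries as $queryId => $sql) {
    $stmt = $xpdo->query($sql);
    $data = json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));
    $save = $xpdo->prepare("REPLACE INTO cache (query_id, query_data, updated_at) VALUES (?, ?, NOW())");
    $save->execute(array($queryId, $data));
}

// Page load: read the pre-computed result instead of hitting the 150,000,000-row table.
$read = $xpdo->prepare("SELECT query_data FROM cache WHERE query_id = ?");
$read->execute(array(1));
$rows = json_decode($read->fetchColumn(), true);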
You should do some server-side caching and optimization.
If I were you, I would install Memcached for your database.
I would also consider static caching like Varnish, which caches every page as static HTML. With Varnish, the 2nd (and 3rd, 4th, ...) request doesn't have to be handled by PHP and MySQL at all, which makes the page load a lot faster the second time (within the cache lifetime, of course).
Last of all, you can help the PHP side handle the data better by installing APC (or another opcode cache).
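For the Memcached suggestion, a minimal hedged sketch; the server address, cache key, and 5-minute TTL are assumptions, and $xpdo, $sql, $URL and $lang refer to the script above:

// Cache the result of the expensive query for 5 minutes.
$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);

$cacheKey = 'app_detail_' . md5($URL . '|' . $lang);
$output = $memcached->get($cacheKey);

if ($output === false) { // cache miss: run the real query
    $stmt = $xpdo->prepare($sql);
    $stmt->execute(array($URL, $lang));
    $output = '';
    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
        $output .= nl2br($row['description']);
    }
    $memcached->set($cacheKey, $output, 300);
}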

How to retrieve data by comparing time in PHP

My code:
$query = "INSERT IGNORE INTO `user_app` (app_id,imei, package_name, sdk_name, sdk_version, app_version)
VALUES
('".$appid."','".$imei."', '".$pkg."', '".$sdkn."', '".$sdkv."', '".$appv."')";
$mysqli -> query($query);
$id = $mysqli -> insert_id ; //get last user insert id
$idt = date('G:i:s', time());
$new = strtotime($idt);
include('requestad.php');
When a new user registers, they get an ad from requestad.php in JSON format. The insert id is saved in a separate variable named $id. If the user hits the application again (the application invokes the script every 30 minutes), they get a JSON ad again. I am trying to make it so that a user gets an ad only once in 24 hours; this should be possible with the insert id and the insert timestamp. I am doing something like this:
if ($new == time()) {
    include('requestad.php');
} elseif ($new < time()) {
    echo 'nothing';
}
But the problem is that I didn't save the exact execution time in a variable, and saving that time is necessary for the comparison. Also, I have to send something blank to the user if they request the resource again. Please have a look at this and help me find an optimal solution.
I haven't applied any logic for this yet. I can achieve it by storing the time (which is static) and comparing it to time(), which is the current time. I am still looking into this.
$store_time = $row['time']; //pick up the stored time for this user
$store_time = strtotime($row['time']); //and convert it with strtotime (skip this if it is already a timestamp)
$new = strtotime("-24 hours", time()); //the moment 24 hours ago
if ($store_time <= $new) {
    //the last ad was served more than 24 hours ago: serve a new one
} else {
    //an ad was already served within the last 24 hours: send a blank response
}
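Alternatively, you can let MySQL do the comparison and skip the PHP date juggling; a hedged sketch that assumes a created_at timestamp column on user_app:

// Serve an ad only if this IMEI has not received one in the last 24 hours.
$check = $mysqli->prepare(
    "SELECT 1 FROM user_app
      WHERE imei = ?
        AND created_at > NOW() - INTERVAL 24 HOUR
      LIMIT 1"
);
$check->bind_param('s', $imei);
$check->execute();
$check->store_result();

if ($check->num_rows === 0) {
    include('requestad.php'); // no ad in the last 24 hours: serve one
} else {
    echo ''; // already served: send a blank response
}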
