I am writing a lottery program in PHP. Because this program will receive a large number of concurrent requests and the stock of each prize is limited (10 in this example), I don't want any prize to exceed its stock, so I put the entire logic inside a Redis transaction (I use predis (https://github.com/nrk/predis) as my PHP Redis client). But it doesn't work: after more than 10 requests to this program I found more than 10 records in the database, which I could not understand. Does anyone know the reason? I would really appreciate an explanation, thank you!
Here is my PHP code:
$this->load->model('Lottery_model');
$money = $this->_get_lottery(); // which prize do you get
if ($money > 0) {
    $key = $this->_get_sum_key($money);
    $dbmodel = $this->Lottery_model;
    // Executes a transaction inside the given callable block:
    $responses = $redis->transaction(function ($tx) use ($key, $money, $customer, $dbmodel) {
        $qty = intval($tx->get($key));
        if ($qty < 10) {
            // does not exceed the stock limit
            $dbmodel->add($customer, $money); // insert record into db
            $tx->incr($key);
        } else {
            log_message('debug', $money . ' dollar exceed the limit');
        }
    });
} else {
    log_message('debug', 'you are fail');
}
After reading the documentation on Redis transactions, I realized the usage in the code above is completely wrong, so I modified it to the version below, using an optimistic lock with check-and-set.
$options = array(
    'cas'   => true,  // Initialize with support for CAS operations
    'watch' => $key,  // Key that needs to be WATCHed to detect changes
    'retry' => 3,
);
try {
    $responses = $redis->transaction($options, function ($tx) use ($key, $money, $username, $dbmodel, &$qty) {
        $qty = intval($tx->get($key));
        if ($qty < 10) {
            $tx->multi();
            $tx->incr($key);
            $dbmodel->add($username, $money); // insert into mysql db
        } else {
            log_message('debug', $money . ' dollar exceed the limit');
        }
    });
} catch (PredisException $e) {
    log_message('debug', 'redis transaction failed');
}
But the problem remains: the number of records in the database exceeds the prize limit, even though the counter stored in Redis does not. What is the common solution to this kind of problem? Do I have to lock the InnoDB table in this case?
You need to understand how Redis transactions work: in a nutshell, all commands making up a transaction are buffered by the client (Predis in your case) and then fired to the server all at once. Your code attempts to use the result of a read request (GET) before the transaction has been executed. Please refer to the documentation for more details: https://redis.io/topics/transactions
Either read the qty outside the transaction, and use WATCH to protect against competing updates, or move this logic in its entirety to a Lua script.
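Here is a minimal sketch of the Lua-script approach (untested; the key and variable names mirror the ones in your snippets). The script runs atomically on the Redis server, so the read-check-increment can never interleave with another request, and the MySQL insert is only performed after Redis has reserved a unit of stock:
$script = <<<'LUA'
local qty = tonumber(redis.call('GET', KEYS[1]) or '0')
if qty < tonumber(ARGV[1]) then
    redis.call('INCR', KEYS[1])
    return 1
end
return 0
LUA;

// EVAL with one key ($key) and the stock limit (10) as the only argument.
$won = $redis->eval($script, 1, $key, 10);

if ($won) {
    $dbmodel->add($username, $money); // insert into MySQL only after the atomic reservation
} else {
    log_message('debug', $money . ' dollar exceed the limit');
}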
Related
I am in the process of transferring data from one database to another. They are different DBs (MSSQL to MySQL), so I can't do direct queries and am using PHP as an intermediary. Consider the following code. For some reason, each pass through the while loop takes twice as much time as the one before.
$continue = true;
$limit = 20000;
while ($continue) {
    $i = 0;
    $imp->endTimer();
    $imp->startTimer("Fetching Apps");
    $qry = "THIS IS A BASIC SELECT QUERY";
    $data = $imp->src->dbQuery($qry, array(), PDO::FETCH_ASSOC);
    $inserts = array();
    $continue = (count($data) == $limit);
    $imp->endTimer();
    $imp->startTimer("Processing Apps " . memory_get_usage());
    if ($data == false) {
        $continue = false;
    } else {
        foreach ($data as $row) {
            // THERE IS SOME EXTREMELY BASIC IF STATEMENTS HERE
            $inserts[] = array(
                "paymentID"       => $paymentID,
                "ticketID"        => $ticketID,
                "applicationLink" => $row['ApplicationID'],
                "paymentLink"     => (int) ($paymentLink),
                "ticketLink"      => (int) ($ticketLink),
                "dateApplied"     => $row['AddDate'],
                "appliedBy"       => $adderID,
                "appliedAmount"   => $amount,
                "officeID"        => $imp->officeID,
                "customerID"      => -1,
                "taxCollected"    => 0
            );
            $i++;
            $minID = $row['ApplicationID'];
        }
    }
    $imp->endTimer();
    $imp->startTimer("Inserting $i Apps");
    if (count($inserts) > 0) {
        $imp->dest->dbBulkInsert("appliedPayments", $inserts);
    }
    unset($data);
    unset($inserts);
    echo "Inserted $i Apps<BR>";
}
It doesn't matter what I set the limit to; the processing portion takes twice as long each time. I am timing each portion of the loop, and selecting the data from the old database and inserting it into the new one take no time at all; the "processing" portion is what doubles every time. Why? If you do some quick math on the timestamps in my logs, each step labeled "Processing Apps" takes twice as long as the one before. (I stopped it a little early on this run, but the final iteration was taking significantly longer.)
Well, I don't know why this works, but if I move everything inside the while loop into a separate function, it DRAMATICALLY increases performance. I'm guessing it's a garbage collection / memory management issue, and that having a function call end helps the garbage collector know it can release the memory. Now when I log the memory usage, it stays constant between calls instead of growing... Dirty PHP...
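For reference, the refactor looks roughly like this (a sketch only: the helper name processBatch is made up and the insert fields are elided). The point is that every per-batch variable goes out of scope, and its memory can be reclaimed, as soon as the function returns:
function processBatch($imp, $limit, &$minID)
{
    $qry  = "THIS IS A BASIC SELECT QUERY";
    $data = $imp->src->dbQuery($qry, array(), PDO::FETCH_ASSOC);
    if ($data == false || count($data) == 0) {
        return false; // nothing left to copy
    }

    $inserts = array();
    foreach ($data as $row) {
        // ... same basic if statements and field mapping as in the original loop ...
        $inserts[] = array(/* same fields as above */);
        $minID = $row['ApplicationID'];
    }

    $imp->dest->dbBulkInsert("appliedPayments", $inserts);

    // Keep looping only while the source still returns full pages.
    return count($data) == $limit;
}

while (processBatch($imp, $limit, $minID)) {
    // each batch's $data and $inserts are freed when processBatch() returns
}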
I just started maintaining a browser game and optimizing its code. I can see some functions run more than a hundred times per request. I have thought about two ways to optimize the code:
1- Rewrite the whole platform of the game. (This doesn't make sense, because it would take at least 9 months and might also introduce unexpected bugs, which I want to avoid.)
2- Work on caching for more efficiency.
I am going to take the second approach. To clarify, here is one function from the game:
function modifyResource($vid, $wood, $clay, $iron, $crop, $mode) {
    if ($wood < 0) $wood = 0;
    if ($clay < 0) $clay = 0;
    if ($iron < 0) $iron = 0;
    //if ($crop < 0) $crop = 0;
    $q = "SELECT * from " . TB_PREFIX . "vdata where wref = $vid";
    $result = $this->query($q, __FILE__, __LINE__);
    $res = $this->mysql_fetch_all($result);
    if ($wood + $res[0]['wood'] > $res[0]['maxstore'] && $mode) $wood = $res[0]['maxstore'] - $res[0]['wood'];
    if ($clay + $res[0]['clay'] > $res[0]['maxstore'] && $mode) $clay = $res[0]['maxstore'] - $res[0]['clay'];
    if ($iron + $res[0]['iron'] > $res[0]['maxstore'] && $mode) $iron = $res[0]['maxstore'] - $res[0]['iron'];
    if ($crop + $res[0]['crop'] > $res[0]['maxcrop'] && $mode) $crop = $res[0]['maxcrop'] - $res[0]['crop'];
    if ($res[0]['wood'] < $wood && !$mode) $wood = $res[0]['wood'];
    if ($res[0]['clay'] < $clay && !$mode) $clay = $res[0]['clay'];
    if ($res[0]['iron'] < $iron && !$mode) $iron = $res[0]['iron'];
    if ($res[0]['crop'] < $crop && !$mode) $crop = $res[0]['crop'];
    if (!$mode) {
        $wood = -$wood;
        $clay = -$clay;
        $iron = -$iron;
        $crop = -$crop;
    }
    $q = "UPDATE " . TB_PREFIX . "vdata set wood = wood + $wood, clay = clay + $clay, iron = iron + $iron, crop = crop + $crop where wref = $vid";
    $r = $this->query($q, __FILE__, __LINE__);
    if ($r) {
        // $this->logging->addResourceLog($vid, $wood, $clay, $iron, $crop, $log);
    }
    return true;
}
As you can see, the function selects from the vdata table to get the necessary information, modifies it, and finally updates the table, so two queries run in this function. vdata is a really big table; in this case it is more than 500 MB for 80,000 rows. The function runs 88 times for every user request, which means this big table is selected from and updated 88 times per request. That costs a lot of hardware resources and also increases loading time.
An idea crossed my mind: store the result of the SELECT query in a variable the first time the function runs, and on subsequent calls only update that variable; when the database class is destroyed, apply all accumulated changes at once. This would save 2×87 queries.
Note:
this in-variable cache only persists until the user's request completes; then all stored (cached) data is destroyed.
This is just an idea; I have no idea how well it would work or whether it would be useful if I coded it. Is there an alternative way, or any suggested plugin or class that does the same thing?
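To make the idea concrete, here is a rough sketch of what such a per-request cache could look like (the class and method names are invented for illustration; it assumes the same query helpers used by modifyResource() above):
class VillageResourceCache
{
    private $db;
    private $rows   = array(); // wref => row loaded from vdata
    private $deltas = array(); // wref => accumulated resource changes

    public function __construct($db)
    {
        $this->db = $db;
    }

    // Load the village row once per request, then serve it from memory.
    public function get($vid)
    {
        if (!isset($this->rows[$vid])) {
            $q = "SELECT * FROM " . TB_PREFIX . "vdata WHERE wref = " . (int) $vid;
            $result = $this->db->query($q, __FILE__, __LINE__);
            $all = $this->db->mysql_fetch_all($result);
            $this->rows[$vid] = $all[0];
            $this->deltas[$vid] = array('wood' => 0, 'clay' => 0, 'iron' => 0, 'crop' => 0);
        }
        return $this->rows[$vid];
    }

    // Record a change in memory instead of issuing an UPDATE immediately.
    public function addDelta($vid, $wood, $clay, $iron, $crop)
    {
        $this->get($vid);
        $this->deltas[$vid]['wood'] += $wood;
        $this->deltas[$vid]['clay'] += $clay;
        $this->deltas[$vid]['iron'] += $iron;
        $this->deltas[$vid]['crop'] += $crop;
    }

    // Write all accumulated changes back with one UPDATE per village,
    // e.g. from the database class's destructor at the end of the request.
    public function flush()
    {
        foreach ($this->deltas as $vid => $d) {
            $q = "UPDATE " . TB_PREFIX . "vdata SET"
               . " wood = wood + " . $d['wood'] . ","
               . " clay = clay + " . $d['clay'] . ","
               . " iron = iron + " . $d['iron'] . ","
               . " crop = crop + " . $d['crop']
               . " WHERE wref = " . (int) $vid;
            $this->db->query($q, __FILE__, __LINE__);
        }
        $this->deltas = array();
    }
}
The max-store clamping that modifyResource() does would still have to be applied against the cached row before recording a delta, and concurrent requests for the same village would not see each other's in-memory changes; that is the main trade-off of this approach.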
Therefore my question has three parts:
1- Would this idea be useful?
2- If yes, is there any class or plugin I could use or learn from?
3- If no, what would be your suggestion to optimize the game?
Some users have a bad WiFi connection when using our system because of geography or internet provider issues. When some of these users save product info, I find that some of the data (fields) is not fully saved into the DB. It is very difficult for me to reproduce the problem because my own connection is good.
So my question is: how can I let the server know that all fields have arrived and been processed before the save/update queries execute?
case "save":
for ($i=0, $n=sizeof($languages); $i<$n; $i++)
{
$language_id = $languages[$i]['id'];
$name = preg_replace( '/\s+/', ' ', tep_db_prepare_input($_POST["products_name"][$language_id]));
$description = tep_db_prepare_input(str_replace(" ", " ", $_POST["products_description"][$language_id]));
$extra_info = tep_db_prepare_input(str_replace(" ", " ", $_POST["products_extra_info"][$language_id]));
tep_db_perform(TABLE_PRODUCTS_DESCRIPTION, $sql_data_array, 'update', 'products_id = "' . $pID . '" AND language_id="'.$language_id.'"');
// update other language that empty
$sql_data_array = array('products_name' => $name);
tep_db_perform(TABLE_PRODUCTS_DESCRIPTION, $sql_data_array, 'update', 'products_id = "'.$pID.'" and products_name = ""');
$sql_data_array = array('products_description' => $description);
tep_db_perform(TABLE_PRODUCTS_DESCRIPTION, $sql_data_array, 'update', 'products_id = "'.$pID.'" and products_description = ""');
$sql_data_array = array('products_extra_info' => $extra_info);
tep_db_perform(TABLE_PRODUCTS_DESCRIPTION, $sql_data_array, 'update', 'products_id = "'.$pID.'" and products_extra_info = ""');
}
... etc
Using a database transaction you can prevent these errors: when something goes wrong, no update at all is applied to the DB.
Note that you can't use transactions on MyISAM tables; you need to use the InnoDB engine.
You need to use a transaction to perform multiple DB queries. The advantage is that if one of the queries fails during execution, all of the previous ones that were executed get undone in the database. That way you don't have any corrupted or incomplete data in your DB.
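As a rough illustration (using PDO rather than the osCommerce tep_db_* helpers from the question, and with made-up connection details and variable names), the pattern looks like this:
$pdo = new PDO('mysql:host=localhost;dbname=shop;charset=utf8', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $pdo->beginTransaction();

    // Run every per-language update inside the same transaction.
    $stmt = $pdo->prepare(
        'UPDATE products_description
            SET products_name = :name
          WHERE products_id = :pid AND language_id = :lid'
    );
    foreach ($languages as $lang) {
        $stmt->execute(array(
            ':name' => $names[$lang['id']],
            ':pid'  => $pID,
            ':lid'  => $lang['id'],
        ));
    }

    // Nothing becomes permanent until the commit succeeds.
    $pdo->commit();
} catch (Exception $e) {
    // If the connection drops or any query fails, no partial data is kept.
    $pdo->rollBack();
    throw $e;
}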
Apart from the obvious, as stated by @Mark Baker, you could use PHP's connection_aborted() function to detect when a user disconnects. Here's an excerpt from the comments section:
--
A trick for detecting whether a connection is closed, without having to send data that would otherwise corrupt the stream (like a binary file), is to use HTTP/1.1 chunked encoding and send a "0" (zero) as a leading chunk size with nothing else.
NOTE: it is not a good idea to check the stream more than once every few seconds; doing so potentially increases the data sent to the user with no gain for them.
A good reason to do it this way is if you are generating a report that takes a long time to run and uses a lot of server resources. It allows the server to detect whether a user canceled the download, and do any cleanup, without corrupting the file being downloaded.
Here is an example:
<?php
ignore_user_abort(true);
header('Transfer-Encoding: chunked');
ob_flush();
flush();

$start = microtime(true);
$i = 0;

// Use this function to echo anything to the browser.
function vPrint($data) {
    if (strlen($data))
        echo dechex(strlen($data)), "\r\n", $data, "\r\n";
    ob_flush();
    flush();
}

// You MUST execute this function after you are done streaming information to the browser.
function endPacket() {
    echo "0\r\n\r\n";
    ob_flush();
    flush();
}

do {
    echo "0";
    ob_flush();
    flush();
    if (connection_aborted()) {
        // This happens when the connection is closed
        file_put_contents('/tmp/test.tmp', sprintf("Conn Closed\nTime spent with connection open: %01.5f sec\nLoop iterations: %s\n\n", microtime(true) - $start, $i), FILE_APPEND);
        endPacket();
        exit;
    }
    usleep(50000);
    vPrint("I get echoed every iteration (every 0.05 seconds)<br />\n");
} while ($i++ < 200);
endPacket();
?>
Note: the line ignore_user_abort(true); allows the script to continue running in the background after the user disconnects; without it (i.e. by default) the PHP process is killed as soon as the disconnect is noticed. This is also why using transactions will solve your problem: a transaction that was started but never completed is simply not committed, so no partial data is saved.
I have a strange bug here related to MySQL and PHP. I'm wondering if it could also be a performance problem on our server's side.
I have a class used to manage rebate promotional codes. The code works fine and does exactly what it is supposed to do. The saveChanges() operation sends an INSERT or UPDATE depending on the state of the object; in the current context it will only insert, because I'm trying to generate coupon codes.
The class's saveChanges() goes like this (I know I shouldn't be using the old mysql_* functions, but I have no choice due to architectural limitations of the software, so please don't complain about that part):
public function saveChanges($asNew = false) {
    // Get the connection
    global $conn_panier;
    // Check if the rebate still exists
    if ($this->isNew() || $asNew) {
        // Check unicity if new
        if (reset(mysql_fetch_assoc(mysql_query('SELECT COUNT(*) FROM panier_rabais_codes WHERE code_coupon = "' . mysql_real_escape_string($this->getCouponCode(), $conn_panier) . '"', $conn_panier))) > 0) {
            throw new Activis_Catalog_Models_Exceptions_ValidationException('Coupon code "' . $this->getCouponCode() . '" already exists in the system', $this, __METHOD__, $this->getCouponCode());
        }
        // Insert the new rebate code
        mysql_query($q = 'INSERT INTO panier_rabais_codes
            (
                `no_rabais`,
                `code_coupon`,
                `utilisation`,
                `date_verrou`
            ) VALUES (
                ' . $this->getRebate()->getId() . ',
                "' . mysql_real_escape_string(stripslashes($this->getCouponCode()), $conn_panier) . '",
                ' . $this->getCodeUsage() . ',
                "' . ($this->getInvalidityDate() === NULL ? '0000-00-00 00:00:00' : date('Y-m-d G:i:s', strtotime($this->getInvalidityDate()))) . '"
            )', $conn_panier);
        return (mysql_affected_rows($conn_panier) >= 1);
    } else {
        // Update the existing rebate
        mysql_query('UPDATE panier_rabais_codes
            SET
                `utilisation` = ' . $this->getCodeUsage() . ',
                `date_verrou` = "' . ($this->getInvalidityDate() === NULL ? '0000-00-00 00:00:00' : date('Y-m-d G:i:s', strtotime($this->getInvalidityDate()))) . '"
            WHERE
                no_rabais = ' . $this->getRebate()->getId() . ' AND code_coupon = "' . mysql_real_escape_string($this->getCouponCode(), $conn_panier) . '"', $conn_panier);
        return (mysql_affected_rows($conn_panier) >= 0);
    }
}
So as you can see, the code itself is pretty simple and clean and returns true if the insert succeeded, false if not.
The other portion of the code generates the codes using a random algorithm and goes like this:
while ($codes_to_generate > 0) {
    // Sleep to delay mysql choking on the input
    usleep(100);
    // Generate a random code
    $code = strtoupper('RC' . $rebate->getId() . rand(254852, 975124));
    $code .= strtoupper(substr(md5($code), 0, 1));
    $rebateCode = new Activis_Catalog_Models_RebateCode($rebate);
    $rebateCode->setCouponCode($code);
    $rebateCode->setCodeUsage($_REQUEST['utilisation_generer']);
    try {
        if ($rebateCode->saveChanges()) {
            $codes_to_generate--;
            $generated_codes[] = $code;
        }
    } catch (Exception $ex) {
    }
}
As you can see here, there are two things to note. The number of codes to generate is only decremented, and the array of generated codes only gets filled, if saveChanges() returns true, so MySQL HAS to report that the information was inserted for this part to happen.
Another tidbit is the first line of the while:
//Sleep to delay mysql choking on the input
usleep(100);
WTF? Well, this post is all about that. My code works flawlessly with small numbers of codes to generate, but if I ask MySQL to save more than a few codes at once, I have to throttle myself using usleep() or MySQL drops some of these rows. It reports that there are affected rows, but it is not saving them.
Under 100 rows I don't need any throttling; beyond that I have to usleep() depending on the number of rows to insert. It must be something simple, but I don't know what. Here is a summary of the number of rows I tried to insert and the minimum usleep throttle I had to use:
< 100 lines: none
< 300 lines: 2 ms
< 1000 lines: 5 ms
< 2000 lines: 10 ms
< 5000 lines: 20 ms
< 10000 lines: 100 ms
Thank you for your time
Are you sure that your codes are all inserted and not updated? Updating a non-existent row does nothing.
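One thing worth checking in the update branch of saveChanges(): an UPDATE that matches no rows is still reported as a success by MySQL, just with zero affected rows, and your update branch returns true as long as mysql_affected_rows() is >= 0. A quick way to see this (the coupon code here is made up):
mysql_query("UPDATE panier_rabais_codes SET utilisation = 1 WHERE code_coupon = 'DOES_NOT_EXIST'", $conn_panier);
var_dump(mysql_affected_rows($conn_panier)); // int(0) -- no row was touched, but no error either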
I am recording unique page views using memcached and storing them in the DB at 15-minute intervals. Whenever the number of users grows, memcached gives me the following error:
Memcache::get(): Server localhost (tcp 10106) failed with: Failed reading line from stream (0)
I am using the following code to insert/update page views in memcached:
if ($memcached->is_valid_cache("visiors")) {
    $log_views = $memcached->get_cache("visiors");
    if (!is_array($log_views)) $log_views = array();
} else {
    $log_views = array();
}
$log_views[] = array($page_id, $time, $other_Stuff);
$memcached->set_cache("visiors", $log_views, $cache_expire_time);
The following code retrieves the array from memcached, inserts up to $insert_limit page views into the DB, and puts the remaining page views back into memcached:
if ($memcached->is_valid_cache("visiors")) {
    $log_views = $memcached->get_cache("visiors");
    if (is_array($log_views) && count($log_views) > 0) {
        $logs = array_slice($log_views, 0, $insert_limit);
        $insert_array = array();
        foreach ($logs as $log) {
            $insert_array[] = '(' . $log[0] . ',' . $log[1] . ', NOW())';
        }
        $insert_sql = implode(',', $insert_array);
        if (mysql_query('INSERT SQL CODE')) {
            $memcached->set_cache("visiors", array_slice($log_views, $insert_limit), $cache_expire_time); // store the remaining values
        }
    }
}
The insert/update causes locking, because I can see lots of scripts waiting for their turn. I also think I am losing page views during the update process. Any suggestions on how to avoid the memcached reading errors and make this code solid?
You are likely running into a connection limit within memcached, your firewall, your network, etc. We have a simple walk-through of the most common scenarios: http://code.google.com/p/memcached/wiki/Timeouts
There's no internal locking that would cause sets or gets to block for any amount of time.