Memcached item deletion doesn't work - PHP

I stumbled upon a weird behavior of the Memcached server (version 1.4.5):
I have a single server, and when I try to delete a stored value the item remains there (and I receive no error).
I wrote a quick PHP script that demonstrates the problem:
$memcache_object = memcache_connect(MEMCACHED_SERVER_ADDRESS, MEMCACHED_SERVER_PORT);
$key = '64b788714dx7cds5350101e37ec0ddd40253123d';
$myObject = memcache_get($memcache_object, $key);
echo count($myObject); // Prints 1000
memcache_delete($memcache_object, $key);
$myObjectSecondTry = memcache_get($memcache_object, $key);
if (empty($myObjectSecondTry)) {
    echo 'Empty'; // It does print 'Empty'
}
memcache_close($memcache_object);
If I run the code once, it prints "1000" and then "Empty" for the second memcache_get() call.
But if I refresh and run it again, the $key still exists on the memcached server and I get the same output.
I also tried reconnecting between each memcache call (i.e. get -> delete -> get), but it didn't help.
The only thing that clears the memory is restarting the Memcached service.
Please advise.

As far as I remember, this was an issue with the timeout parameter, so please try:
memcache_delete($memcache_object, $key, 0);
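For background: memcached 1.4 removed server-side support for delayed deletes, so calling memcache_delete() with a non-zero timeout fails silently against a 1.4.x server, while a timeout of 0 requests an immediate delete. A quick way to verify, as a sketch reusing the question's variables:
$deleted = memcache_delete($memcache_object, $key, 0);
var_dump($deleted); // bool(true) when the delete actually happened
var_dump(memcache_get($memcache_object, $key)); // bool(false) on every subsequent run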

Related

Using WinCache with PHP

I'm using WinCache to store a value persistently. I'm using the code below to store the value:
$newhighlowarray = array();
// high/low calculation
if (wincache_ucache_exists("Highlow")) {
    $existhighlowarray = wincache_ucache_get("Highlow");
    $isexist = true;
    $newhighlowarray = /* Calculations */;
}
wincache_ucache_set("Highlow", $newhighlowarray);
I need to store the value without any expiry time; I update the cache every second because the values change with my stock-market data.
But the cache gets cleared some of the time, and sometimes a 500 Internal Server Error occurs, which also clears the cache. How can I store the value persistently without my cache being cleared? Please help, anyone.
My hosting server is Windows Server with IIS 7.
By default, the wincache_ucache_set function uses ttl=0, which means the entry should never expire.
To get some insight, check the PHP error log when you get the 500 Internal Server Error; there should be some information on why the request failed.
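It's also worth knowing that the WinCache user cache lives in shared memory tied to the IIS worker process, so an application pool recycle (or a crashed worker, which a 500 error can indicate) wipes it; a value that must survive that needs a backing store such as a file or database. A small sketch that passes the ttl explicitly and checks the return value ("Highlow" as in the question):
// ttl of 0 = never expire; log a failure instead of silently losing the value
if (!wincache_ucache_set("Highlow", $newhighlowarray, 0)) {
    error_log('wincache_ucache_set failed for key Highlow');
}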

PHPRedis and SMEMBERS

I'm trying some stuff with Redis and PHP, and I've run into a problem when working with sets and SMEMBERS.
I'm using Symfony2 and SncRedisBundle.
$redis->multi();
// Some stuff
$result = $redis->smembers("myset");
var_dump($result);
die();
$redis->exec();
Here's the dump
object(Redis)[990]
public 'socket' => resource(841, Redis Socket Buffer)
I'm a bit stuck now; I don't know how to work with the result, since there's nothing really visible or explained in the php-redis documentation.
Can someone help me?
You should check the result of $redis->exec() instead of the result of smembers. The principle of MULTI/EXEC blocks is that command executions are delayed until the EXEC command. At this point, all commands are executed atomically and their results are sent back to the client.
See this example: https://github.com/nicolasff/phpredis#transactions
Note that using a MULTI/EXEC block with just one command inside is pointless and does not bring any benefits.
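A minimal sketch of reading the replies after exec(), with the names from the question:
$redis->multi();
$redis->smembers("myset"); // queued; returns the Redis object, not the members
$replies = $redis->exec(); // runs the whole block atomically
var_dump($replies[0]);     // the reply of the first queued command: an array of members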

PHP - Hit counter textfile reset

I have an issue with my non-unique hit counter.
The script is as below:
$filename = 'counter.txt';
if (file_exists($filename)) {
    $current_value = file_get_contents($filename);
} else {
    $current_value = 0;
}
$current_value++;
file_put_contents($filename, $current_value);
When I refresh my website very often (like 10 times per second or even faster), the value in the text file gets reset to 0.
Any guesses for fixing this issue?
This is a pretty poor way to maintain a counter, but your problem is probably that when you fire multiple requests at the site, one of the reads comes back empty because another process has just truncated the file for rewriting (file_put_contents() truncates before it writes), so the count starts again from 0.
If you want this to work consistently, you are going to have to lock the file between the read and the write; see flock() in the PHP manual (a sketch follows below).
Of course, without the file lock you would also be getting incorrect counts anyway, whenever two processes manage to read the same value from the file.
Locking the file would also potentially slow your system down, as two or more processes queue for access to the file.
It would probably be a better idea to store your counter in a database, as databases are designed to cope with this kind of quick-fire access and to ensure every process is properly queued and released.
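Here is a minimal sketch of the flock() approach mentioned above ('counter.txt' as in the question):
$fp = fopen('counter.txt', 'c+'); // create if missing, do not truncate
if ($fp !== false && flock($fp, LOCK_EX)) {
    $current_value = (int) stream_get_contents($fp); // an empty file reads as 0
    ftruncate($fp, 0); // rewrite the file while still holding the lock
    rewind($fp);
    fwrite($fp, (string) ($current_value + 1));
    fflush($fp);
    flock($fp, LOCK_UN);
}
if ($fp !== false) {
    fclose($fp);
}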
Does it help if you add a check that file_get_contents() isn't returning false?
$value = file_get_contents($filename);
if ($value !== false) {
    $current_value = $value;
}

Breaking Up Massive MySQL Update

Right now I have something like this in my CodeIgniter model:
<?php
$array = array(...over 29k IDs...);
$update = array();
foreach ($array as $line) {
    $update[] = array('id' => $line, 'spintax' => $this->SpinTax($string));
    // $this->SpinTax parses the spintax from a string I have. It has to be generated for each row.
}
$this->db->update_batch('table', $update, 'id');
$this->db->update_batch('table', $update, 'id');
?>
The first 20k records get updated just fine, but I get a 504 Gateway Time-out before it completes.
I have tried increasing the nginx server timeout to something ridiculous (like 10 minutes), and I still get the error.
What can I do to make this not time out? I've read many answers and HOW-TOs on segmenting the update, but I continue to get the server timeout. A PHP or CodeIgniter solution would be excellent, and I need to deploy this code to multiple servers that might not be using nginx (I get a similar error in Apache).
Thanks in advance.
You'll likely need to run this through the command line and set_time_limit(0). If you're in CodeIgniter, the user guide explains how to run a controller from the command line: http://codeigniter.com/user_guide/general/cli.html
Now, before you do that: you mentioned you are using array_chunk. If you're getting all the values from the database, there's no need to use array_chunk. Just set a GET variable, for instance
/your/url?offset=1000, and when that finishes, redirect to the same URL but with 2000, and so on until it finishes (see the sketch below).
Not the nicest or cleanest approach, but it will likely get the job done.
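A rough sketch of that offset/redirect idea; the chunk size of 1000, the controller URL, and the loaded url helper are all illustrative assumptions, not part of the original code:
// process one slice of the big $update array per request, then hand off to the next request
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$chunk  = array_slice($update, $offset, 1000);
if (!empty($chunk)) {
    $this->db->update_batch('table', $chunk, 'id');
    redirect('/your/url?offset=' . ($offset + 1000)); // CodeIgniter url helper
}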

PHP script keeps working when it should fail

I know people usually complain about scripts that don't work, but here is a case where a script keeps working even when I want it to stop.
I have a CSV parser that analyzes lines and inserts entries into a DB table. I am using PDO and Zend Framework for the project. The code works fine... too well, in fact.
public function save()
{
    $memory_limit = ini_get('memory_limit');
    ini_set('memory_limit', '512M');
    $sql = "
        INSERT INTO my_table (
            date_start,
            timeframe,
            type,
            country_to,
            country_from,
            code,
            weight,
            value
        ) VALUES (?,?,?,?,?,?,?,?)
        ON DUPLICATE KEY UPDATE
            weight = VALUES(weight),
            value = VALUES(value)
    ";
    if ($this->test_mode) {
        echo $sql;
        return;
    }
    $stmt = new Zend_Db_Statement_Pdo($this->_db, $sql);
    foreach ($this->parsed_data as $entry) {
        $stmt->execute(array_values($entry));
        $affected_rows = $stmt->rowCount();
        if ($affected_rows) {
            $this->_success = true;
        }
    }
    unset($this->parsed_data, $stmt, $sql);
    ini_set('memory_limit', $memory_limit);
}
The script takes several seconds to complete, as I am parsing a big file. The problem appears when I try to stop the script, with ESC or even by closing the page. The script does not stop until it finishes inserting all the entries. Not even an Apache reload fixes this; a full restart probably would.
I am thinking that this is not normal behaviour and that maybe I am doing something wrong, so I am asking for suggestions.
Thanks.
UPDATE
ignore_user_abort is off (the default behaviour), so a user abort should be honoured...
I'm pretty sure that's standard PHP behaviour: just because the browser goes away doesn't mean PHP will stop processing the script. (Although restarting Apache, etc. will achieve this goal.)
To change this behaviour, you can use ignore_user_abort.
That said, "PHP will not detect that the user has aborted the connection until an attempt is made to send information to the client", which I suspect may be the issue you're experiencing.
See the above link and the PHP runtime configuration information for more info.
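If that is the cause, here is a minimal sketch of forcing the detection inside the question's insert loop (the echo/flush exists only to give PHP a chance to notice the closed connection):
foreach ($this->parsed_data as $entry) {
    $stmt->execute(array_values($entry));
    echo ' '; // send a byte so PHP can detect a client abort
    flush();
    if (connection_aborted()) {
        break; // stop inserting once the client has gone away
    }
}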
It is not wrong. Your attempts won't work because:
ESC - it is totally unrelated to the workings of a page; most browsers don't actually react to it
closing (or refreshing) the page - again, not related; the SERVER is doing something, and PHP will NOT stop when the client side stops - the server can't actually know whether the client closed or refreshed the page
Apache reload - won't kill the forked PHP process
a restart WOULD do it - this kills the PHP processes and such, although it is kinda troublesome
The way to handle this (if the long execution is undesirable) is to actually set an execution time limit, using the PHP function set_time_limit(), or to make the parsing more optimal (if it is not); a sketch follows below.
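For example, a sketch of capping the run at the top of save() (the 30-second limit is an arbitrary illustrative value):
public function save()
{
    set_time_limit(30); // abort the script if the inserts run longer than 30 seconds
    // ... rest of the method as in the question ...
}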
