I have memcache (installed on php5) and memcached (installed on php7.2 via libmemcached) both connecting to the same memcached daemon/server.
Memcache::get works perfectly fine and fetches the data as per my expectation. But when I do Memcached::get, it always returns 0.
I have checked that compression is off for both extensions. I also tried toggling Memcached::OPT_BINARY_PROTOCOL on and off for memcached, and it still produces the same result.
Interestingly, when I add a key/value pair using memcached extension and retrieve using the same key, I get proper/correct value that I added.
I am now clueless as to why it fails only for data already stored in the memcached server.
EDIT 1: I telnetted to my memcached server and checked that it actually has the value. I also checked that the result code returned by Memcached::getResultCode does not indicate any kind of failure.
EDIT 2: I may have narrowed it down further. I noticed that when I save ["key1" => "value1"] from the memcache-php5 script, it stores and retrieves the data correctly. But when I try to retrieve that same data with the memcached-php7.2 script, it returns 0.
After that, I removed the data with key "key1" from the memcached server using telnet. I then saved ["key1" => "value1"] using the memcached-php7.2 script, and it can retrieve that data correctly. But when I try to retrieve it with the memcache-php5 script, it returns the serialized data "a:1:{s:4:\"key1\";s:6:\"value1\";}" (this is json_encoded output).
So in order to upgrade, I may have to delete/flush everything and recreate entries in memcached server using memcached extension.
P.S.: I know the differences between these two PHP extensions. I have read all the comments on this question, and it is not a duplicate of mine.
As you already know, memcache and memcached are two different extensions. Even though they are used for the same purpose - connecting to a memcached server - each of them serializes data differently.
That means you can't safely switch between them, without a proper cache flush in the server or independent server instances.
<?php
// Adjust these to point at your own memcached daemon.
$host = 'localhost';
$servers = array(array('localhost', 11211));

$memcache = new Memcache;
$memcacheD = new Memcached;
$memcache->addServer($host);
$memcacheD->addServers($servers);

$checks = array(
    123,
    4542.32,
    'a string',
    true,
    array(123, 'string'),
    (object) array('key1' => 'value1'),
);

foreach ($checks as $i => $value) {
    print "Checking WRITE with Memcache\n";
    $key = 'cachetest' . $i;
    $memcache->set($key, $value);
    usleep(100);
    $val = $memcache->get($key);
    $valD = $memcacheD->get($key);
    if ($val !== $valD) {
        print "Not compatible!";
        var_dump(compact('val', 'valD'));
    }

    print "Checking WRITE with MemcacheD\n";
    $memcacheD->set($key, $value);
    usleep(100);
    $val = $memcache->get($key);
    $valD = $memcacheD->get($key);
    if ($val !== $valD) {
        print "Not compatible!";
        var_dump(compact('val', 'valD'));
    }
}
Related
I'm using Redis for PHP.
I need to check if the key exists in redis list, and if it does not, add it. For now my code looks as follows:
$redis = Redis::connection();
$redis->pipeline(function ($pipe) use ($type, $redis) {
    $list = $pipe->lRange($type . '_unique_list', 0, -1);
    if (!in_array($this->uid, $list)) {
        $pipe->rPush($type . '_unique_list', $this->uid);
    }
});
The problem is that $list taken from $pipe is a Redis object, not an array, so in_array does not work. But if I use $redis->lRange instead, the script becomes too slow.
So my question is, if there is any possibility to check if the key exists in Redis list without retrieving the list into the script? Some special Redis command which I can't find in docs? Or maybe I could replace in_array with something else in this particular situation?
Wrongish answer: you can call LINDEX instead of doing the search in the client.
Righter answer: scanning a linked list is always an expensive operation (O(N)), whether it happens server-side or client-side. Consider using a different data structure, e.g. a Set, for this purpose if your N is large.
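For instance, with a Set the whole check-then-push collapses into a single O(1) command. A minimal sketch using phpredis directly (the Redis host and the `$type`/`$uid` values are hypothetical stand-ins for the ones in the question):

```php
<?php
// Sketch: use a Redis Set instead of a List for uniqueness checks.
// Assumes a Redis server on localhost and the phpredis extension.
$redis = new Redis();
$redis->connect('localhost', 6379);

$type = 'visitors';  // hypothetical, mirrors $type in the question
$uid  = 'user-42';   // hypothetical, mirrors $this->uid

// SADD is idempotent: it adds the member only if it is not already
// in the set, and returns 1 if it was added, 0 if it was present.
$added = $redis->sAdd($type . '_unique_set', $uid);

// If you only need the membership test, SISMEMBER is O(1):
$exists = $redis->sIsMember($type . '_unique_set', $uid);
```

Note that sAdd alone already replaces the in_array + rPush pair, since a Set never stores duplicates.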
Alright, sooooo... The issue is this: I am LPUSH'ing a variable value to a list called "keys". When I try to get and output the value of that list... it claims the list is empty (bool(false)). The syntax seems correct. This code has worked on other occasions (in fact, I am merely going through each function and testing/refactoring/improving what I have already written). I got snagged on this, and I'm completely boggled. Here's the code (with relevant notes):
$kw = $_REQUEST['keyword']; // we're passing a value to this in a query string
if (empty($kw)) {
    $key = 'default';
    createRedis($key);
} else {
    $key = $kw;
    createRedis($key);
}
function createRedis($a) {
    $key = $a;
    $r = new Redis();
    $r->connect('localhost');
    $r->LPUSH('keys', $key); // $key echos a value when one is passed in
    echo $key;               // a query string, BUT....
    $keys = $r->get('keys'); // 'keys'... the redis list
    var_dump($keys);         // throws a bool(false) when dumped
}
Is there something crazy that I'm missing? Redis is tested and working on my server. Otherwise, I am unable to sort out what the heck is wrong with this. Here's the documentation on LPUSH for phpredis (which is what we are using; it is also installed and working): https://github.com/nicolasff/phpredis#lpush
and
the documentation for this on the Redis website (these are CLI examples):
http://redis.io/commands/lpush
Any help is sincerely appreciated. Perhaps I am using an ineffective method to test whether or not the redis list 'keys' is retaining value? (That was the whole purpose of the dump).
You have to use lrange instead of get - GET only works on plain string values, so calling it against a list key fails and phpredis returns bool(false):
$keys = $r->lrange('keys', 0, -1);
I have a small but strange problem with APC. In our code we have to deserialize a few hundred big arrays from JSON, and this operation is really expensive. I tried deserializing once and storing the result in APC, but apc_fetch() returns false on the next request.
$items = $entity->getItems(); // JSON-String
$cacheKey = __FUNCTION__ . '_itemcache_' . $entity->getId() . '_' . md5($items);
$cacheItems = apc_fetch($cacheKey);
if (false === $cacheItems) {
    $cacheItems = json_decode($items, true);
    apc_store($cacheKey, $cacheItems, 3600);
}
// ...
I can see all cached items in apc.php, and I can also fetch them from other applications with the same server config. What could be wrong? This snippet is taken from a Symfony project, but as far as I can see there isn't any other APC code in use.
Any ideas? I already searched here and on Google, but I didn't find anything helpful.
You should check params like:
apc.max_file_size = 512M
and a few others. Also be aware that APC isn't made to handle a single large variable. For big data your best bet is to use a database.
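To see which limits apply on your box, you can dump the relevant INI settings. A quick sketch (the setting names come from the APC extension; which ones exist depends on your APC version and SAPI):

```php
<?php
// Print the APC settings that most often cause silent cache misses.
$settings = array(
    'apc.enabled',      // cache switched off entirely?
    'apc.enable_cli',   // CLI scripts get their own (empty) cache by default
    'apc.shm_size',     // total shared-memory size
    'apc.ttl',          // default time-to-live for user entries
    'apc.max_file_size',
);

foreach ($settings as $name) {
    $value = ini_get($name); // false when the setting is unknown
    printf("%-20s => %s\n", $name, $value === false ? '(not set)' : $value);
}
```

Comparing this output between the web SAPI and the CLI (or between your two applications) will tell you whether they are really looking at the same cache configuration.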
I'm having some trouble with memcache on my php site. Occasionally I'll get a report that the site is misbehaving and when I look at memcache I find that a few keys exist on both servers in the cluster. The data is not the same between the two entries (one is older).
My understanding of memcached was that this shouldn't happen...the client should hash the key and then always pick the same server. So either my understanding is wrong or my code is. Can anyone explain why this might be happening?
FWIW the servers are hosted on Amazon EC2.
All my connections to memcache are opened through this function:
$mem_servers = array(
    array('ec2-000-000-000-20.compute-1.amazonaws.com', 11211, 50),
    array('ec2-000-000-000-21.compute-1.amazonaws.com', 11211, 50)
);

function ConnectMemcache()
{
    global $mem_servers;
    static $memcon = 0; // without static (or global), $memcon is always undefined here
    if ($memcon == 0) {
        $memcon = new Memcache();
        foreach ($mem_servers as $server) {
            $memcon->addServer($server[0], $server[1], true);
        }
    }
    return $memcon;
}
and values are stored through this:
function SetData($key, $data)
{
    global $mem_global_key;
    if (MEMCACHE_ON_OFF) {
        $key = $mem_global_key . $key;
        $memcache = ConnectMemcache();
        $memcache->set($key, $data);
        return true;
    } else {
        return false;
    }
}
I think this blog post touches on the problems you're having.
http://www.caiapps.com/duplicate-key-problem-in-memcache-php/
From the article it sounds like the following happens:
- a memcache server that originally holds the key drops out
- the key is recreated on the 2nd server with updated data
- the 1st server comes back online and rejoins the cluster with the old data
- now the key is saved on 2 servers with different data
Sounds like you may need to use Memcache::flush to clear out the memcache cluster before you write, to help minimize how long duplicates can exist in your cluster.
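A minimal sketch of such a flush, reusing the (redacted) EC2 hostnames from the question. Each node gets its own connection so that no single server is left holding a stale copy:

```php
<?php
// Sketch: flush every node in the cluster individually.
// Hostnames are the redacted placeholders from the question --
// replace them with your real servers.
$mem_servers = array(
    array('ec2-000-000-000-20.compute-1.amazonaws.com', 11211),
    array('ec2-000-000-000-21.compute-1.amazonaws.com', 11211),
);

foreach ($mem_servers as $server) {
    $conn = new Memcache();
    // @ suppresses the connect warning for a node that is down
    if (@$conn->connect($server[0], $server[1])) {
        $conn->flush();  // sends flush_all to this node only
        $conn->close();
    }
}
```

Flushing per node (rather than through the pooled connection) also covers exactly the failure mode above: a node that was out of the pool when the pooled flush ran would otherwise keep its old entries.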
Awhile ago I came across a script that fetched a list of countries/states from a web resource if it wasn't already in the database; the script would then populate the database with those contents and rely on them from then on.
Since I'm working on a localization class of my own, I'll be using the same locale data Zend uses: around ~60 XML files containing localized data such as countries and languages for each locale.
Since the framework I'm working on will rely on these files from now on (which it doesn't yet), and none of the servers currently have this data, should I:
Setup my web application to download these files from a central server where all the content is stored in a .tar.gz, unpack them, store them on the server and then rely on them
Create a separate script to do this, and not actually do this within the application.
Pseudo code:
if ( !data ) {
    resource = getFile( 'http://central-server.com/tar.gz' );
    if ( resource ) {
        resource = unpack( directory, resource )
        return true
    }
    throw Exception('could not download files.')
}
I would go for the first option iff the data needs to be constantly updated; otherwise I would choose your second option.
Here is a method I developed some years ago, that was part of a GeoIP class:
function Update()
{
$result = false;
$databases = glob(HIVE_DIR . 'application/repository/GeoIP/GeoIP_*.dat');
foreach ($databases as $key => $value)
{
$databases[$key] = basename($value);
}
$databases[] = 'GeoIP.dat.gz';
$date = date('ym');
if ((!in_array('GeoIP_' . $date . '.dat', $databases)) && (date('j') >= 2))
{
if ($this->Hive->Filesystem->Write(HIVE_DIR . 'application/repository/GeoIP/GeoIP.dat.gz', file_get_contents('http://www.maxmind.com/download/geoip/database/GeoIP.dat.gz'), false) === true)
{
$handler = gzopen(HIVE_DIR . 'application/repository/GeoIP/GeoIP.dat.gz', 'rb');
$result = $this->Hive->Filesystem->Write(HIVE_DIR . 'application/repository/GeoIP/GeoIP_' . $date . '.dat', gzread($handler, 2 * 1024 * 1024), false);
gzclose($handler);
foreach ($databases as $database)
{
$this->Hive->Filesystem->Delete(HIVE_DIR . 'application/repository/GeoIP/' . $database);
}
}
}
return $result;
}
Basically, Update() was executed on every request; it would then check whether the day of the month was equal to or greater than 2 (MaxMind releases GeoIP databases on the first day of the month) and whether a database for that month didn't already exist. Only if both conditions were true would the method download, unpack and rename the new database, and remove all the old databases from previous months.
In your case, since you're dealing with locales, doing a similar periodical check once in a while might not be a bad idea, since countries change things (names, currencies, calling codes, etc.) a lot.
If this is a library, I would probably make this part of the setup steps. An error can be printed if the data isn't there.
Have an install script do the downloading, or throw an error if the data isn't available. Downloading on demand from within the application could lead to timeouts and would likely turn users away. fsockopen is the easiest way to do this, dealing with the socket by hand, if you don't have cURL set up and can't fopen/fread remote files.
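A minimal fsockopen sketch along those lines (the host and path are placeholders; a real client would also need to handle redirects and chunked encoding):

```php
<?php
// Build a minimal HTTP/1.0 GET request by hand. Kept as a separate
// function so the request format is easy to verify in isolation.
function buildGetRequest($host, $path)
{
    return "GET $path HTTP/1.0\r\n"
         . "Host: $host\r\n"
         . "Connection: close\r\n\r\n";
}

// Sketch: fetch a file over a raw socket and write it to $dest.
// $host and $path are placeholders for your central server.
function downloadFile($host, $path, $dest)
{
    $fp = @fsockopen($host, 80, $errno, $errstr, 10);
    if (!$fp) {
        throw new Exception("could not download files: $errstr");
    }
    fwrite($fp, buildGetRequest($host, $path));

    $response = '';
    while (!feof($fp)) {
        $response .= fread($fp, 8192);
    }
    fclose($fp);

    // Strip the headers: the body starts after the first blank line.
    $body = substr($response, strpos($response, "\r\n\r\n") + 4);
    return file_put_contents($dest, $body) !== false;
}
```

HTTP/1.0 with `Connection: close` is used deliberately here: the server closes the socket when the body ends, so the reader can simply loop until feof() instead of parsing Content-Length.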