I'm noticing an intermittent issue with our Memcached session handler. The error that occurs is:
Unknown: Failed to write session data (memcache). Please verify that
the current setting of session.save_path is correct.
Notes:
It occurs 5 or 6 times a day, affecting various users.
Memcached is not on localhost, i.e. it runs on a different server than the web server.
I'm using the Memcache extension (as opposed to the MemcacheD extension).
I'm using the tcp:// prefix. If you look at this question, you'll see that the "fix" was to add a tcp:// prefix when using the Memcache extension.
My php.ini settings:
session.save_handler = memcache
session.save_path = "tcp://64.233.191.255:11211"
Note that I've also used:
session.save_path = "tcp://64.233.191.255:11211?persistent=1&weight=1&timeout=1&retry_interval=15"
But it doesn't seem to matter.
I checked the memcached.log file, where I found the following error:
Failed to write, and not due to blocking: Connection reset by peer.
Note: This particular error occurs at least once, at the same time (01:07 AM), every day. It then recurs sporadically throughout the day.
Maybe you're running out of file handles? Perhaps the backups make your machine swap, resulting in slower responses, meaning more concurrent connections to the memcached process, resulting in a stampeding herd.
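One way to test that theory is to poll memcached's connection stats around the 01:07 AM window. A rough sketch, assuming the Memcache extension and the host from your save_path:
$mc = new Memcache();
$mc->connect('64.233.191.255', 11211);
$stats = $mc->getStats(); // standard memcached "stats" output
printf("curr_connections: %d, total_connections: %d\n",
    $stats['curr_connections'], $stats['total_connections']);
A spike in curr_connections at that time would support the stampede theory.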
Related
PHP Warning: session_start(): Unable to clear session lock record in
/var/www/efs/html/v43/Api/PortalApi/PortalApi.php on line 39
When multiple requests are made simultaneously, the above issue occurs in one of our dev environments, but the other environments work fine. We could not determine the reason for it.
Your problem is user concurrency.
Setting the following ini settings (also possible from PHP directly) can help mitigate this issue on a high-load project.
I recommend disabling session.lazy_write in php.ini.
You can try this too:
ini_set('memcached.sess_lock_retries', 10);
ini_set('memcached.sess_lock_wait_min', 1000);
ini_set('memcached.sess_lock_wait_max', 2000);
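For completeness, disabling lazy_write from PHP looks like this (a minimal sketch; call it before session_start()):
ini_set('session.lazy_write', '0'); // rewrite the session on every request, even if unchanged
session_start();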
We fixed the issue. The problem: a recent code change in the dev environment added multiple log writes for testing the configuration, and writing those logs directly to EFS caused the slowdown.
Basically, I am trying to read FTP files from a server via a cron job.
I am getting the following error:
Warning: Unknown: Failed to write session data (memcache). Please
verify that the current setting of session.save_path is correct
(tcp://...:11211?persistent=1&weight=1&timeout=1&retry_interval=15)
in Unknown on line 0
I have no idea why I am getting this error. Any idea what is missing here?
Thanks
I'd recommend updating your php.ini to use file-based session storage unless memcache is something you actually need.
Change session.save_handler to files.
http://php.net/manual/en/session.configuration.php
Memcache is typically used when you have multiple servers that need to share the same session data (e.g. behind a load balancer).
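For example, a minimal php.ini change (the save_path below is an assumption; point it at any directory the web server can write to):
session.save_handler = files
session.save_path = "/var/lib/php/sessions"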
"PHP Fatal error: Uncaught exception 'RedisException' with message 'read error on connection'"
The driver here is phpredis
$redis->blpop('a', 0);
This always times out after ~1 minute. My redis.conf says timeout 0, and $redis->getOption(Redis::OPT_READ_TIMEOUT) returns double(0).
If I do this, it has never timed out: $redis->setOption(Redis::OPT_READ_TIMEOUT, -1);
Why do I need -1? Redis documentation says timeout 0 in redis.conf should never time me out.
"By default recent versions of Redis don't close the connection with the client if the client is idle for many seconds: the connection will remain open forever."
The current solution I know of is to disable persistent connections for phpredis, as they have been reported as buggy since October 2011. If you’re using php-fpm or other threaded models, the library specifically disables persistent connections.
Reducing the frequency of this error might be possible by adjusting the php.ini default_socket_timeout value.
Additionally, read timeout configurations in phpredis are not universally supported. The feature (look for OPT_READ_TIMEOUT) was introduced in tag 2.2.3.
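If your phpredis is new enough, a minimal sketch looks like this (host, port, and key name are placeholder assumptions):
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->setOption(Redis::OPT_READ_TIMEOUT, -1); // -1 disables the client-side read timeout
$item = $redis->blpop('queue', 0); // 0 = block server-side until an item arrives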
$redis->connect($host, $port, $timeout1);
// ...
$redis->blpop($key, $timeout2);
Here timeout1 must be longer than timeout2; otherwise the client-side socket can time out while blpop is still legitimately blocking.
After a lot of reading and running my own straces of redis and php, it seemed the issue was easily fixed by this solution. The main issue in my use case was that the redis server was not able to fork a process to save the in-memory writes to the on-disk db.
I left all the timeout values in php.ini and redis.conf as they were, without making the hacky changes suggested, and tried the above solution alone. The 'read error on connection' issue, which was unfixable by changing timeout values across the php and redis conf files, went away.
I also saw suggestions about increasing the file descriptor limit to 100000, etc. I am running my use case on a cloud server with the file descriptor limit at 1024, and it runs perfectly even with that limit.
I added ini_set('default_socket_timeout', -1) to my php program, but I found it didn't work immediately.
However, after 3 minutes, when I ran the php program again, I finally found the reason: the redis connection was not persistent.
So I set timeout=0 in my redis.conf, and the problem was solved!
I have recently tried implementing memcached for session saving in php.
I modified the session.save_handler in my php.ini and for the most part it works correctly. Sessions are saved in it. However, once in a while, I get this weird message for certain sessions:
PHP Warning: Unknown: Failed to write session data (memcached). Please verify that the current setting of session.save_path is correct (x.x.x.x:11211) in Unknown on line 0.
The session data is the same, way under memcached's 1 MB limit, and I have yet to see a pattern in the occurrences of this message... maybe a couple of times every minute. The website is usually under medium load, with 150 concurrent users.
If you are using memcache then save_path must have the tcp:// prefix.
If you are using memcached then the save_path should not have the tcp:// prefix.
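In php.ini terms (host and port here are placeholders):
; Memcache extension: tcp:// prefix required
session.save_handler = memcache
session.save_path = "tcp://127.0.0.1:11211"
; Memcached extension: no prefix
session.save_handler = memcached
session.save_path = "127.0.0.1:11211"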
The answer is that memcached objects can be a maximum of 1 MB by default.
If your array or object exceeds this limit, the object will be removed, magically :)
All the items in your session will be removed. I mention this because I have experienced it myself just now.
I solved it by starting the memcached server with a larger maximum item size:
memcached -I 10m
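If you want to confirm an oversized session is the culprit, a quick sketch that logs the serialized payload size (session_encode() returns what the save handler writes):
session_start();
$size = strlen(session_encode());
error_log("session payload: {$size} bytes"); // compare against memcached's 1 MB default item limit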
I believe it has something to do with the memcached extension not initializing before sessions. I switched from the memcached extension to the memcache extension, and it works:
session.save_handler = memcache
session.save_path="tcp://192.168.1.103:11211"
I had a similar issue with symfony2 and memcached on a docker-compose stack.
The error stated:
Warning: Failed to write session data (user). Please verify that the current setting of session.save_path is correct
And the problem was that I had an outdated ./app/config/parameters.yml
Check your memcached settings to fit your needs, e.g.:
parameters:
    session_memcached.host: '%session_memcached_host%'
    session_memcached.port: '%session_memcached_port%'
    session_memcached.prefix: '%session_memcached_prefix%'
    session_memcached.expire: '%session_memcached_expire%'
I have a Drupal site on a shared web host, and it's getting a lot of connection errors. It's the first time I have seen so many connection timeout errors on a server. I'm thinking it's something in the configuration settings. Non-Drupal parts of the site are not giving as many connection errors.
Since this hosting provider doesn't give me access to the php.ini file, I put one at my docroot to modify the lines that I thought would be causing this:
memory_limit = 128M
max_execution_time = 259200
set_time_limit = 30000
But it didn't work. There is no improvement in the frequency of the timeout errors. Does anyone have any other ideas about this type of error?
Thanks.
You can control the time limit on a script while your script is running. Add a call to set_time_limit near the top of your PHP pages to see if it helps.
Ideally you need to figure out what your actual limits are, as defined by your host. A call to phpinfo() somewhere will let you see all the config settings your server has in place.
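A minimal sketch of both suggestions (the 300-second value is only an example):
set_time_limit(300); // raise the per-request execution limit for this script
phpinfo(INFO_CONFIGURATION); // dump the effective configuration values the host enforces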