EINTR error for semop call - php

I am using the following code fragment in a PHP script to safely update a shared resource.
$lock_id = sem_get(ftok('tmp/this.lock', 'r'));
sem_acquire($lock_id);
// do something
sem_release($lock_id);
When I stress test this code with a large number of requests I get an error:
Warning: semop() failed acquiring SYSVSEM_SETVAL for key 0x1e: No space left on device in blahblah.php on line 1293
The PHP sources show the following code around the "failed acquiring SYSVSEM_SETVAL" warning:
while (semop(semid, sop, 3) == -1) {
    if (errno != EINTR) {
        php3_error(E_WARNING, "semop() failed acquiring SYSVSEM_SETVAL for key 0x%x: %s", key, strerror(errno));
        break;
    }
}
which means semop() fails with EINTR. The man page reveals that this means the semop() system call was interrupted by a signal.
My question is: can I safely ignore this error and retry sem_acquire()?
Edit: I have misunderstood this problem. Please see the clarification I have posted below.
raj

I wouldn't ignore the ENOSPC (you're getting something other than EINTR, as the code shows). You may end up in a busy loop waiting for a resource that you exhausted earlier. If you're out of some space somewhere, you want to make sure you deal with that issue. ENOSPC generally means you are out of...something.
A couple of random ideas:
I am not an expert on the PHP implementation, but I'd try to avoid calling sem_get() each time you want the semaphore. Store the handle instead. It may be that some resource is associated with each call to sem_get, and that is where you're running out of space.
I'd make sure to check the error return from sem_get(). It's only a snippet, but if sem_get() failed, you would get inconsistent results when trying to sem_op() the invalid handle (perhaps EINTR makes sense).
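A minimal sketch combining both ideas might look like this (the lock-file path is taken from the question and must point to an existing file):
<?php
// Create the semaphore handle once and reuse it across acquisitions,
// instead of calling sem_get() on every request.
$key = ftok('tmp/this.lock', 'r');
if ($key === -1) {
    die('ftok() failed: does the lock file exist?');
}

$lock_id = sem_get($key);
if ($lock_id === false) {
    die('sem_get() failed');
}

if (sem_acquire($lock_id)) {
    // ... update the shared resource ...
    sem_release($lock_id);
}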

After posting this question I noticed that I had misread the code as errno == EINTR and jumped to a conclusion. So, as bog has pointed out, the error is ENOSPC and not EINTR. After some digging I located the reason for the ENOSPC: the number of semaphore undo buffers (semmnu) was getting exhausted. I increased semmnu and now the code is running without issues. I used semmni*semmsl as the value of semmnu.


Using msg_send() in PHP results in error 11... Cannot find a solution

I am currently building a small application that uses the message queues built into PHP.
I have one "server" process and one "client" process. Messages flow from server to client.
They are simple JSON objects that are serialised, then sent.
This is the code used:
<?php
$send = msg_send($q, MESSAGE_TYPE_EXECUTION, $update, true, false, $error);
if (isset($error) && $error != 0) {
    echo 'Execution error: ' . $error . PHP_EOL;
}
// $q is the message queue resource
// MESSAGE_TYPE_EXECUTION is the integer 1
// $update is the JSON string
// true means the JSON string is serialised
// false means the call is non-blocking
// $error gets filled when an error occurs (see below)
This works without issue, until it does not.
Sometimes after a couple of minutes, sometimes after a couple of hours, the following error appears:
PHP Warning: msg_send(): msgsnd failed: Resource temporarily unavailable in
/var/www/server.php on line 57
The value of the $error variable is the integer 11.
All messages that follow this error also fail with error 11, until I restart the process, after which everything works again (for a while, until the same error appears again).
I have been searching but cannot find any explanation of what error 11 is, or how it can be managed and fixed without restarting the process.
Any clue, information, example etc is welcome. I would really like for server.php to be reliable.
-- edit --
client.php is the process that fetches the messages (which are all more or less the same, but with different values).
It uses this to fetch the messages from the queue (filled in server.php):
<?php
$update = msg_receive($q, 0, $messagetype, 1024, $message, true,
                      MSG_IPC_NOWAIT | MSG_NOERROR, $error); // flags combined with bitwise OR
if ($update) {
    // Do stuff
}
usleep(1000000); // poll once per second
I have not yet checked memory usage, will look into that
Platform used
PHP 7.1.3
Centos 7
So, the solution was found after some information and leads brought up by @ChrisHaas (read the comments on my original question; thanks again!). After some tinkering, everything is running smoothly now, without error 11 from msg_send().
The PHP msg_send() function is basically a wrapper around the msgsnd system call, so a lot of information can be found in its man page, including the errors you might encounter (in combination with the flags used when reading messages with msg_receive()).
The queue is limited in both total size and the total number of messages it can hold (I have, however, not found a way to increase the total size of the queue).
The reason I was getting error 11 came down to a couple of things:
The client I created was too slow at fetching messages from the queue, so the queue ran into its maximum limit and crapped out. I did not find a way to recover from this situation other than restarting all processes involved, only to repeat the same thing over and over again.
I also increased the read size in msg_receive(), as some messages were big (most were small). If you declare too small a size, the big messages remain in the queue and clog it up until it craps out. Increasing max_size helped with fetching the bigger messages too (see the sketch below).
Long story short: error 11 appears to be related to a full message queue (I still do not have a definitive, documented answer, though).
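Something along these lines illustrates the receive side (a sketch, assuming a 64 KB max_size; pick a value at least as large as your biggest message):
<?php
$maxSize = 65536; // must be >= the largest message you ever send

while (true) {
    $ok = msg_receive($q, 0, $messagetype, $maxSize, $message, true,
                      MSG_IPC_NOWAIT, $error);
    if ($ok) {
        // ... process $message ...
        continue; // keep draining the queue as fast as possible
    }
    if ($error === MSG_ENOMSG) {
        usleep(100000); // queue is empty: back off briefly, then poll again
        continue;
    }
    // Without MSG_NOERROR, an oversized message makes msg_receive() fail
    // instead of being silently truncated; log anything unexpected and stop.
    echo 'msg_receive failed with error ' . $error . PHP_EOL;
    break;
}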
Pointers to fix the issue:
Be sure your read size is large enough to fetch even the biggest messages.
Be sure to read messages out at least as fast as you send them into the queue.
Check your queue(s) with the command ipcs -q in the terminal. It shows the queues currently active; keeping an eye on it lets you watch a queue slowly filling up when there are problems.
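You can also watch the backlog from PHP itself. A sketch using msg_stat_queue() (the alarm threshold is arbitrary):
<?php
// msg_stat_queue() reports the current state of the queue, including the
// number of messages waiting (msg_qnum) and the size limit in bytes (msg_qbytes).
$stat = msg_stat_queue($q);
if ($stat !== false && $stat['msg_qnum'] > 1000) { // arbitrary alarm threshold
    echo 'Queue backlog: ' . $stat['msg_qnum'] . ' messages, limit '
        . $stat['msg_qbytes'] . ' bytes' . PHP_EOL;
    // slow the producer down, alert, or drop low-priority messages here
}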
Wish the documentation on php.net was better in this case...

crons stopping for no reason

Well this isn't true, I'm sure there's a reason, but I can't find it!!
I have a script that can take around 10 minutes to execute. It does a lot of communicating with an API on a service that we use. It pulls a bit of a fingerprint of everything every 24 hours, so what it's doing is pretty beside the point. The problem I'm finding is that the script stops executing somewhat randomly!!
I can't find any errors that would cause my script to stop executing, even with
//for debugging
error_reporting(E_ALL);
ini_set('display_errors', '1');
on for debugging, it's all clean. I've also used
set_time_limit(0);
so that it shouldn't ever time out.
With that said, I'm not sure how to get any more debug info to figure out why it's stopping. I can say that the script should NOT be hitting any memory limits or anything; that should throw an error, and I've gone through and cleaned this script up as much as I can see to clean it up.
So my question is: what are common causes for a cron job ending when it shouldn't? How can I debug this more effectively?
You could try using register_shutdown_function() to define a block of code that will execute when the script shuts down. Then update a variable at the main code execution points in the cron with details of what is going on. In the shutdown function, write that variable to a log and check the log to see what state the program was in when it stopped. Of course, this is based on the assumption that your code is not totally erroring out.
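Something like this, for instance (a minimal sketch; the checkpoint variable and log path are placeholders):
<?php
$lastCheckpoint = 'startup';

register_shutdown_function(function () use (&$lastCheckpoint) {
    // error_get_last() also catches fatal errors that skip normal handlers.
    $line = date('c') . ' shutdown at checkpoint: ' . $lastCheckpoint;
    $error = error_get_last();
    if ($error !== null) {
        $line .= ' | last error: ' . $error['message']
               . ' in ' . $error['file'] . ':' . $error['line'];
    }
    file_put_contents('/tmp/cron_debug.log', $line . PHP_EOL, FILE_APPEND);
});

$lastCheckpoint = 'calling API';
// ... long-running work ...
$lastCheckpoint = 'writing results';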
You could also redirect the standard echo statements and logs into a log file by using
/path/to/cron.php > /path/to/log.txt 2>&1
2>&1 indicates that standard error (2) is redirected to the same file descriptor that is pointed to by standard output (&1). So both standard output and standard error will be redirected to /path/to/log.txt.
UPDATE:
Below is a function/flow that I usually use in my crons:
function addLog($msg)
{
    if (empty($msg)) return;

    $handle = fopen('log.txt', 'a');
    fwrite($handle, $msg . "\r\n");
    fclose($handle);
}
Then I use it like so:
addLog("Initializing...");
init();
addLog("Finished initializing...");
addLog("Calling blah-blah API...");
$result = callBlahBlah();
addLog("blah-blah API returned value". $result);
It is more tedious to have all these logs, but when cron messes up, it really helps!
For example, when you look at your log.txt and see something like:
Initializing...
Finished initializing...
Calling blah-blah API...
and there is no entry that says blah-blah API returned value, then you know that the call to the blah-blah API messed up.
What are common causes for a cron ending when it shouldn't?
The most common in my experience is that the cron user has different permissions or different environment variables than the way that you're executing it from the command line.
Make your cronned program dump its environment to a temporary file and see if it's what you expect.
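For instance, a throwaway snippet like this at the top of the cron script will show what it actually runs as (a sketch; the dump path is a placeholder, and the user lookup needs the posix extension):
<?php
// Dump the environment and effective user the cron job actually sees.
$pw = function_exists('posix_geteuid') ? posix_getpwuid(posix_geteuid()) : null;
$user = $pw ? $pw['name'] : get_current_user();
file_put_contents(
    '/tmp/cron_env.txt',
    'user: ' . $user . PHP_EOL . print_r($_SERVER, true)
);
Run it once from your shell and once from cron, then diff the two files.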

PHP/MongoDB: find() is ignoring timeout setting

I have tried the MongoDB global timeout setting, etc., but it is still ignored by find() queries in my PHP script.
I'd like to do a findOne({...}) or find({...}) lookup and wait at most 20 ms for the DB server before timing out.
How can I make sure PHP does not treat this setting as a soft limit? It's still ignored, and the script is left processing answers even 5 seconds later.
Is this a PHP mongo driver bug?
Example:
MongoCursor::$timeout = 20;
$nosql_server = new Mongo('mongodb://user:pw@' . implode(',', $arr_replicas), array('replicaSet' => 'gmt', 'timeout' => 10)) OR troubles('too slow to connect');
$nosql_db = $nosql_server->selectDB('aDB');
$nosql_collection_mcol = $nosql_db->mcol;
$testFind = $nosql_collection_mcol->find(array('crit' => 123));
// If PHP honoured MongoCursor::$timeout, I'd expect the previous line to be skipped, or a mongo timeout exception to be thrown, whenever the DB does not have the find result cursor ready within 20 ms.
// However, I arrive at this line seconds later, without an exception, whenever the DB has some lock or delay, and the previous line is never skipped.
In the PHP documentation for $timeout, the following explanation is given for the cursor timeout:
Causes methods that fetch results to throw a MongoCursorTimeoutException if the query takes longer than the specified number of milliseconds.
I believe that the timeout is referring to the operations performed on the cursor (e.g. getNext()).
Do not do this:
MongoCursor::$timeout = 20;
That is assigning a static property and won't do you any good, AFAIK.
What you need to realize is that in your code example, $testFind is the MongoCursor object. Therefore, in the code snippet you gave, what you should do is add this after everything else in order to set the timeout of the $testFind cursor:
$testFind->timeout(100);
NOTE: If you want to deal with $testFind as an array, you need to do:
$testFindArray = iterator_to_array($testFind);
That one threw me for a loop for a while. Hope this helps someone.
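Putting that together (a sketch using the legacy driver, with the collection and criteria from the question and the 20 ms budget from the goal):
<?php
try {
    $testFind = $nosql_collection_mcol->find(array('crit' => 123));
    $testFind->timeout(20); // client-side timeout in milliseconds, per cursor

    // The query is lazy: the timeout applies when results are fetched,
    // i.e. on iteration, not on the find() call itself.
    foreach ($testFind as $doc) {
        // ... process $doc ...
    }
} catch (MongoCursorTimeoutException $e) {
    // DB too slow: fall back or degrade gracefully here
}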
Pay attention to the readPreference attribute. The possible values are:
MongoClient::RP_PRIMARY
MongoClient::RP_PRIMARY_PREFERRED
MongoClient::RP_SECONDARY
MongoClient::RP_SECONDARY_PREFERRED
MongoClient::RP_NEAREST

php filesize returning stat failed error

I'm calling the following:
while ((!file_exists('./download/ah141090676723_100.jpg')) || (filesize('./download/ah141090676723_100.jpg') == '1359')) {
    // code that retrieves a remote file and writes it to '/ah141090676723_100.jpg'
}
... and getting a "filesize(): stat failed for ./download/ah141090676723_100.jpg" error.
The problem I'm trying to solve is that the remote server is flaky, and sometimes returns a garbage response (which is always 1359 bytes long). So, I want to check to see if either A) the file doesn't exist (first run through), or B) the file equals garbage (1359); if either is true, attempt to grab and write the file. Rinse and repeat until we get something that's not garbage.
The code actually seems to be working -- the file is retrieved and written, and I haven't had any garbage responses get through this loop -- but the error mystifies me. I thought it might be that on the first run-through the file doesn't exist, so filesize() throws this error. But the "||" operator should be preventing that second evaluation on the first run-through... right?
I should mention that I'm calling "clearstatcache();" inside the loop, after the retrieval/write.
Any help appreciated!
Scott
Change it to
while (file_exists('./download/ah141090676723_100.jpg') && filesize('./download/ah141090676723_100.jpg') == 1359)
as the file_exists() check is always required: filesize() raises a "stat failed" warning when a file does not exist or is not readable.
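To keep the retry behaviour while avoiding the warning, the loop can clear the stat cache and let short-circuiting guard the filesize() call, along these lines (a sketch; the retry cap is an arbitrary safeguard that is not in the original code):
<?php
$file = './download/ah141090676723_100.jpg';
$attempts = 0;

while ($attempts++ < 10) { // arbitrary cap so a flaky server cannot loop forever
    clearstatcache(); // drop stat data cached from the previous iteration
    if (file_exists($file) && filesize($file) != 1359) {
        break; // file present and not the 1359-byte garbage response
    }
    // ... retrieve the remote file and write it to $file ...
}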

totally random timeouts after session_start() is called

I have been trying to fix this weird PHP session issue for some time now.
Setup: running on IIS 6.0, PHP for Windows v5.2.6
Issue:
At totally random times, the next PHP line after session_start() times out.
I have a file, auth.php, that gets included on every page of an extranet site (to check for valid logins).
auth.php
session_start();
if (isset($_SESSION['auth']) && $_SESSION['auth'] == 1) { // <---- times out here
    // do something ...
}
...
When using the site, I get random "maximum execution time of 30 seconds exceeded" errors at line 2: if (isset($_SESSION['auth']) && $_SESSION['auth'] == 1) {
If I modify this script to
session_start();
echo 'testing'; // <---- times out here
if (isset($_SESSION['auth']) && $_SESSION['auth'] == 1) {
    // do something ...
}
...
The random error now happens on line 2 as well (echo 'testing'), which is just a simple echo statement. Strange.
It looks like session_start() is randomly causing issues, making the line of code right after it throw a timeout error (even a simple echo statement)...
This is happening on all sorts of pages on the site (db-intensive, relatively static, ...), which is making it difficult to troubleshoot. I have been tweaking the session variables and timeouts in php.ini without any luck.
Has anyone encountered something like this, or could you suggest possible places to look?
thanks!
A quick search suggests that you should use session_write_close() to close the session when you are done using it if you are on an NTFS file system. Starting a session locks the session file so no other request can access it while your code is running. For some reason, the lock sometimes doesn't release reliably on Windows/NTFS, so you should manually close the session as soon as you are done with it.
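In auth.php that pattern would look something like this (a sketch of the idea, not the original file):
<?php
session_start();

// Copy what you need out of the session, then release the lock so
// concurrent requests from the same client don't queue up behind it.
$isAuthed = isset($_SESSION['auth']) && $_SESSION['auth'] == 1;
session_write_close();

if ($isAuthed) {
    // ... the rest of the page ...
}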
