PHPRedis and SMEMBERS - php

I'm trying some stuff with Redis and PHP, and I've run into a problem working with SETs and SMEMBERS.
I'm using Symfony2 and SncRedisBundle.
$redis->multi();
// Some stuff
$result = $redis->smembers("myset");
var_dump($result);
die();
$redis->exec();
Here's the dump
object(Redis)[990]
public 'socket' => resource(841, Redis Socket Buffer)
I'm a bit stuck now; I don't know how to work with this result, since there's nothing really explained about it in the phpredis documentation.
Can someone help me?

You should check the result of $redis->exec() instead of the result of smembers. The principle of MULTI/EXEC blocks is that command executions are delayed until the EXEC command. At this point, all commands are executed atomically and their results are sent back to the client.
See this example: https://github.com/nicolasff/phpredis#transactions
Note that using a MULTI/EXEC block with just one command inside is pointless and does not bring any benefits.
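For illustration, here is a minimal sketch of that pattern with phpredis (assuming a connected client and an existing set named "myset"):
$redis->multi();              // enter queue mode; queued calls return the Redis object itself
$redis->smembers("myset");    // queued, not executed yet
$results = $redis->exec();    // executes the whole queue atomically

var_dump($results[0]);        // the SMEMBERS reply: an array of set members
Each element of the array returned by exec() corresponds, in order, to one of the queued commands.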

Related

2006 MySQL server has gone away while saving object

I'm getting the error "General error: 2006 MySQL server has gone away" when saving an object.
I'm not going to paste the real code since it's way too complicated, and I can explain with this example, but first a bit of context:
I'm executing a function via the command line using Phalcon tasks. The task creates an object from a model class, and that object calls a casperjs script that performs some actions on a web page. When the script finishes, the object saves some data, and that's where I sometimes get "MySQL server has gone away", but only when casperjs takes a bit longer.
Task.php
function doSomeAction() {
    $object = Class::findFirstByName("test");
    $object->performActionOnWebPage();
}
In Class.php
function performActionOnWebPage() {
    $result = exec("timeout 30s casperjs somescript.js");
    if ($result) {
        $anotherObject = new AnotherClass();
        $anotherObject->value = $result->value;
        $anotherObject->save();
    }
}
It seems like the $anotherObject->save() call is affected by the time exec("timeout 30s casperjs somescript.js") takes to return, when it shouldn't be.
It's not a matter of the data being saved, since with the same input it sometimes fails and sometimes saves successfully; the only difference I see is the time casperjs takes to return a value.
It seems as if Phalcon keeps the MySQL connection open during the whole execution of the Class.php function, provoking the timeout when casperjs takes too long. Does this make any sense? Could you help me fix it or find a workaround?
The problem seems to be that either you are trying to fetch more data in a single packet than your MySQL config allows, or your wait_timeout value is not set appropriately for what your code requires.
Check your wait_timeout and max_allowed_packet values with the commands below:
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
Then increase these values as your requirements dictate in your my.cnf (Linux) or my.ini (Windows) config file, and restart the MySQL service.
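If raising the timeout is not an option, another workaround is to re-establish the connection after the long-running exec() and before saving. A rough sketch, assuming the DB service is registered as "db" in Phalcon's DI container (Phalcon's PDO adapter exposes a connect() method that reopens the connection):
function performActionOnWebPage() {
    $result = exec("timeout 30s casperjs somescript.js");

    // The connection may have hit wait_timeout while casperjs was running,
    // so reconnect before writing anything.
    $this->getDI()->get('db')->connect();

    if ($result) {
        $anotherObject = new AnotherClass();
        $anotherObject->value = $result->value;
        $anotherObject->save();
    }
}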

Interesting thing happened while writing data into Redis within a PHP loop

I wrote a PHP script to pull data from one server (let's call it Server A) to another (Server B). The data on Server A is a Redis list storing all the operating commands that need to be replayed on Server B, such as:
["setex",["session:xxxx",604800,"xxxx"]]
["set",["uid:xxx","xxxxx"]]
["pipeline",[]]
["set",["uid:xxx","xxxxx"]]
["hIncrBy",["Signin:xxxx","totalTimes",1]]
["pipeline",[]]
....
My PHP code is:
while ($i < 1000) {
    $line = $redis['server_a']->rpop('sync:op');
    list($op, $params) = json_decode($line, 1);
    $r = call_user_func_array(array($redis['server_b'], $op), $params);
    $i++;
}
The weird thing is that when call_user_func_array executes a Redis command incorrectly, none of the remaining commands in the queue get written correctly to Server B.
I was stuck on this problem for almost a week looking for answers. After thousands of tests I found that if I remove the "bad commands" that cannot be executed correctly, such as the ["pipeline",[]] rows, all the other commands are inserted properly. That reminded me of Redis transactions: maybe there is some mechanism whereby, once a command executes improperly, all the commands after it are treated as a transaction. So I added an exec() command to the while loop:
while ($i < 1000) {
    $line = $redis['server_a']->rpop('sync:op');
    list($op, $params) = json_decode($line, 1);
    $r = call_user_func_array(array($redis['server_b'], $op), $params);
    $redis['server_b']->exec(); // this is the significant update
    $i++;
}
Then my problem was solved!
My question is: can anybody explain the Redis mechanism here? Is my assumption correct?
Your library is probably using transactions to implement pipelining, for whatever reason. pipeline is not an actual Redis command; see http://redis.io/commands
Just strip out all the pipeline entries with empty arguments, or call ->exec() whenever you have issued a pipeline before.
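For illustration, a sketch of the first suggestion, assuming the same handles and queue format as above: skip the client-side pseudo-commands before replaying.
while ($i < 1000) {
    $line = $redis['server_a']->rpop('sync:op');
    list($op, $params) = json_decode($line, 1);

    // "pipeline" is a phpredis client feature, not a real Redis command;
    // replaying it switches the connection into a buffered mode where
    // later commands are queued instead of being executed immediately.
    if (strcasecmp($op, 'pipeline') !== 0) {
        call_user_func_array(array($redis['server_b'], $op), $params);
    }
    $i++;
}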

Crons stopping for no reason

Well, that isn't strictly true; I'm sure there's a reason, but I can't find it!
I have a script that can take around 10 minutes to execute. It does a lot of communicating with an API on a service that we use; it pulls a sort of fingerprint of everything every 24 hours, so what exactly it's doing is beside the point. The problem I'm finding is that the script stops executing somewhat randomly!
I can't find any errors that would cause my script to stop executing, even with
//for debugging
error_reporting(E_ALL);
ini_set('display_errors', '1');
on for debugging, it's all clean. I've also used
set_time_limit(0);
so that it shouldn't ever time out.
With that said, I'm not sure how to get any more debug info to figure out why it's stopping. I can say that the script should NOT be hitting any memory limits or anything like that; that would throw an error, and I've gone through and cleaned this script up as much as I can.
So my Question is: What are common causes for a cron ending when it shouldn't? How can I debug this more effectively?
You could try using register_shutdown_function() to define a block of code that will execute when the script shuts down. Then update a state variable at the main execution points in the cron with details of what is going on. In the shutdown function, write that state into a log and check the log to see what state the program was in when it stopped. Of course, this is based on the assumption that your code is not erroring out completely.
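A minimal sketch of that idea (the state values and the log path are just examples):
$GLOBALS['cron_state'] = 'starting';

register_shutdown_function(function () {
    $msg = date('c') . ' shutdown at state: ' . $GLOBALS['cron_state'];
    if (($error = error_get_last()) !== null) {
        $msg .= ' / last error: ' . $error['message'];
    }
    file_put_contents('/tmp/cron_shutdown.log', $msg . "\n", FILE_APPEND);
});

$GLOBALS['cron_state'] = 'calling API';
// ... long-running API work ...
$GLOBALS['cron_state'] = 'done';
Whatever state string was set last is what lands in the log, which tells you roughly where execution stopped.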
You could also redirect the standard echo statements and logs into a log file by using
/path/to/cron.php > /path/to/log.txt 2>&1
2>&1 indicates that standard error (2>) is redirected to the same file descriptor that is pointed to by standard output (&1). So both standard output and standard error will be redirected to /path/to/log.txt.
UPDATE:
Below is a function/flow that I usually use in my crons:
function addLog($msg)
{
    if (empty($msg)) return;

    $handle = fopen('log.txt', 'a');
    fwrite($handle, $msg . "\r\n");
    fclose($handle);
}
Then I use it like so:
addLog("Initializing...");
init();
addLog("Finished initializing...");
addLog("Calling blah-blah API...");
$result = callBlahBlah();
addLog("blah-blah API returned value". $result);
It is more tedious to have all these logs, but when cron messes up, it really helps!
For example, if you look at your log.txt and you see something like:
Initializing...
Finished initializing...
Calling blah-blah API...
and there is no entry that says "blah-blah API returned value", then you know that the call to the blah-blah API messed up.
What are common causes for a cron ending when it shouldn't?
The most common in my experience is that the cron user has different permissions or different environment variables than the way that you're executing it from the command line.
Make your cronned program dump its environment to a temporary file and see if it's what you expect.
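One way to do that from inside the script itself (the temp path is just an example):
// Dump the environment the cron run actually sees, then compare it
// with the output of `env` in your interactive shell.
file_put_contents('/tmp/cron_env.txt', shell_exec('env'));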

PHP foreach stack - is it possible that functions called in a foreach loop are still running when the next iteration is called

I am having problems with cURL not being able to connect to a server that returns an XML feed, and I am not sure if my code is stacking up and causing the problem. Is it possible that the final function called in this foreach loop is still running when the next loop iteration comes around?
Is it possible to make sure all functions in the loop complete before the next iteration begins, or does foreach do this by default anyway? I tried returning true from process_xml() and running a test in the loop: if ($this->process_xml($xml_array)) continue;
but it didn't seem to have an effect, and it seems like a bad idea anyway.
foreach ($arrayOfUrls as $url) {
    // Retrieve the XML from the URL as a string.
    if ($url_xml_string = $this->getFeedStringUsing_cURL($url)) {
        $xml_object = simplexml_load_string($url_xml_string);
        $xml_array = $this->feedStringToArray($xml_object);
        // Process the XML.
        $this->process_xml($xml_array);
    }
}
No, this is not possible. Each statement is executed and finished before the next statement is run.
and am not sure if my code is stacking up
Not sure? If it's important to you, why don't you find out? Without knowing what OS you are running on, it's rather hard to advise how you'd go about that, but netstat might be a good starting point.
Is it possible the final function called in this foreach loop is still running
It's highly improbable: PHP scripts run in a single thread of execution unless you tell them otherwise. That said, the curl extension allows you to define callbacks into your PHP code that run before an operation completes, and the curl_multi_ family of functions also allows you to run PHP code while requests are in progress.
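For reference, the curl_multi_ pattern looks roughly like this (a sketch reusing $arrayOfUrls from the question; several requests run concurrently and the loop waits for all of them to finish):
$mh = curl_multi_init();
$handles = array();

foreach ($arrayOfUrls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Drive all transfers until none are still running.
$running = null;
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // avoid busy-waiting
} while ($running > 0);

foreach ($handles as $ch) {
    $xmlString = curl_multi_getcontent($ch);
    // ... simplexml_load_string($xmlString), process, etc. ...
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
Note this only helps if the requests really are independent; it is not needed for the sequential behaviour asked about above.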

PHP exec() error

I'm having a little problem with the following:
When I execute this line:
echo exec(createDir($somevariable));
I get this error:
Warning: exec() [function.exec]: Cannot execute a blank command in /home/mydir/myfile.inc.php on line 32
Any ideas?
Thanks.
exec() expects a string argument, which it passes on to your operating system to be executed. In other words, it is a portal to the server's command line.
I'm not sure what the createDir() function is, but unless it returns a valid command-line string, that's probably why it's failing.
In Linux, you might want to do something like
exec('/usr/bin/mkdir '.$path);
...on the other hand, you should avoid using exec() wherever you can. What you can do here instead is take a look at mkdir().
With exec() you can execute system calls as if you were using the command line. It has nothing to do with executing PHP functions.
To create a directory you could do the following:
exec( 'mkdir [NAME OF DIRECTORY]' );
I'd guess that your createDir() function doesn't return anything. It might also be worth checking that $somevariable is set to something sensible.
You're misunderstanding the purpose of exec(). If all you want to do is create a directory then you should use mkdir().
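For example, a minimal sketch (the 0755 mode and recursive flag are just illustrative defaults):
if (!is_dir($somevariable) && !mkdir($somevariable, 0755, true)) {
    die("Failed to create directory: " . $somevariable);
}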
I think I've derived from other posts and comments what it is you actually want to do:
I think createDir() is a PHP function you've written yourself. It does more than just make a directory - it populates it, and that might take some time.
For some reason you believe that the next command gets run before createDir() has finished working, and you thought that by invoking createDir() using exec() you could avoid this.
Tell me in a comment if this is way out, and I'll delete this answer.
It seems unlikely that createDir() really does keep working after it has returned (if it did, we'd call it 'asynchronous'). It would require the programmer to go out of their way to make it asynchronous, so check that assumption.
Even so, exec() is not for invoking PHP functions. It is for invoking shell commands (the kind of thing you type in at a command prompt). As many of us have observed, it is to be avoided unless you're very careful - the risk being that you allow a user to execute arbitrary shell commands.
If you really do have to wait for an asynchronous function to complete, there are a couple of ways this can be done.
The first way requires that the asynchronous function has been written in an amenable manner. Some APIs let you start an asynchronous job, which will give you a 'handle', then do some other stuff, then get the return status from the handle. Something like:
$handle = doThreadedJob($myParam);
// do other stuff
$results = getResults($handle);
getResults would wait until the job finished.
The second way isn't as good, and can be used when the API is less helpful. Unfortunately, it's a matter of finding some clue that the job is finished, and polling until it is.
while (checkJobIsDone() == false) {
    sleep($someTimeInterval);
}
I'm guessing createDir() doesn't have a return value.
Try exec("mkdir $somevariable");
