I'm currently running PHP Memcache on an Apache server. Since Memcache and Memcached have similar inner workings, this question is about both of them.
I was reading through the documentation of the addServer method of Memcached here, and the second comment in the user-contributed notes section is this:
Important to not call ->addServers() every run -- only call it if no servers exist (check getServerList()); otherwise, since addServers() does not check for dups, it will let you add the same server again and again and again, resulting in hundreds if not thousands of connections to the MC daemon. Especially when using FastCGI.
It is not clear what is meant by "every run". Does it mean calling addServer within the script multiple times or within multiple requests by different users/remote clients? Because consider the following script:
<?php
$memcache_obj = new \Memcache;
//$memcache_obj->connect('localhost', 11211); --> each time new connection, not recommended
$memcache_obj->addServer('localhost', 11211, true, 1, 1, 15, true, function ($hostname, $port) {
    //echo("There was a problem with {$hostname} at {$port}");
    die;
});
print_r($memcache_obj->getExtendedStats());
?>
If, as a client, I make an XMLHttpRequest to the above script, I will get something like this:
Array
(
[localhost:11211] => Array
(
[pid] => 12308
[uptime] => 3054538123
....
So far so good. But if I remove the addServer part and execute the script like this:
<?php
$memcache_obj = new \Memcache;
print_r($memcache_obj->getExtendedStats());
?>
Then I get this:
Warning: MemcachePool::getserverstatus(): No servers added to memcache connection in path/to/php on line someLineNumber
So obviously at least one server has to be added whenever the PHP script is called by a remote client. Then which of the following is true here:
1. We should be careful not to call addServer within the same PHP script too many times. (I am inclined to understand it this way.)
2. We should be careful not to call addServer across multiple requests (for example, two users calling the same PHP script). I can't figure out how that could even be avoided.
You do have to add the server at least once, otherwise you will get this error. As the comment suggests, you should use getServerList() to check whether the servers have already been added, and only add them if they are not present:
<?php
$memcache_obj = new \Memcache;
//$memcache_obj->connect('localhost', 11211); --> each time new connection, not recommended
if (!$memcache_obj->getServerList()) {
    $memcache_obj->addServer('localhost', 11211, true, 1, 1, 15, true, function ($hostname, $port) {
        //echo("There was a problem with {$hostname} at {$port}");
        die;
    });
}
print_r($memcache_obj->getExtendedStats());
?>
Related: is there a good way to make some PHP calls asynchronous and non-blocking?
For example, take a look at this simple code:
<?php
$hosts = [...]; // array of 100+ hosts
foreach ($hosts as $host) {
    $sysNames[$host] = snmpget($host, 'community', "system.sysName.0");
}
echo 'done';
If, for example, 10 hosts are down, that will make a huge delay.
How to make snmpget calls non-blocking?
I've tried React\Promise, but I couldn't find any useful examples to start from. Can anyone suggest a proper implementation using that class?
PHP supports multi-threading using the pthreads extension, but it needs a properly built, thread-safe PHP binary, plus some additional DLLs if you're on Windows.
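As a rough sketch of that idea, assuming pthreads is installed and PHP was built thread-safe (the class name, community string and per-host timeout are illustrative), the SNMP calls can be spread over worker threads so one dead host no longer delays the rest:
<?php
// Sketch only: one worker thread per host, each doing its own snmpget().
class SnmpWorker extends Thread
{
    public $host;
    public $sysName;

    public function __construct($host)
    {
        $this->host = $host;
    }

    public function run()
    {
        // 1-second timeout and a single retry keep dead hosts from blocking for long
        $this->sysName = @snmpget($this->host, 'community', 'system.sysName.0', 1000000, 1);
    }
}

$hosts   = ['10.0.0.1', '10.0.0.2']; // your 100+ hosts
$workers = [];

foreach ($hosts as $host) {
    $workers[$host] = new SnmpWorker($host);
    $workers[$host]->start();            // runs concurrently with the other workers
}

$sysNames = [];
foreach ($workers as $host => $worker) {
    $worker->join();                     // wait for this worker to finish
    $sysNames[$host] = $worker->sysName; // false if the host didn't answer
}

print_r($sysNames);
?>
In practice you would probably batch the hosts into a fixed-size pool rather than starting 100+ threads at once.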
I am unable to understand and run a simple PHP script in FCGI mode. I am learning both Perl and PHP, and I got the Perl version of the FastCGI counter example below to work as expected.
Perl FastCGI counter:
#!/usr/bin/perl
use FCGI;
$count = 0;
while (FCGI::accept() >= 0) {
    print("Content-type: text/html\r\n\r\n",
          "<title>FastCGI Hello! (Perl)</title>\n",
          "<h1>FastCGI Hello! (Perl)</h1>\n",
          "Request number ", ++$count,
          " running on host <i>$ENV{'SERVER_NAME'}</i>");
}
Searching for something similar in PHP, I found talk about "fastcgi_finish_request", but I have no clue how to accomplish the counter example in PHP. Here is what I tried:
<?php
header("content-type: text/html");
$counter++;
echo "Counter: $counter ";
//http://www.php.net/manual/en/intro.fpm.php
fastcgi_finish_request(); //If you remove this line, then you will see that the browser has to wait 5 seconds
sleep(5);
?>
Perl is not PHP. That does not mean you cannot often interchange things and port code between the two, but when it comes to runtime environments there are bigger differences that you cannot just swap out.
FastCGI sits at the request/protocol level, which is fully abstracted away inside the PHP runtime, so you do not get as much control in PHP as you have in Perl with use FCGI;.
Therefore you cannot just port that code.
On top of that, fastcgi_finish_request is totally unrelated to the Perl code. You must have confused it or thrown it in by sheer luck to give it a try; it is not really useful in this counter example.
PHP and HTTP are stateless.
All data is only relevant to the current, ongoing request.
If you need to save state, consider storing the data in a cookie, a session, a cache, or a database.
So the implementation of this "counter" example will be different for Perl and PHP.
Your usage of fastcgi_finish_request won't give you the functionality you expect from Perl.
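If you want the counter to persist between requests, a session-based PHP version could look roughly like this (a sketch only; it assumes PHP's default session handling is good enough for your case):
<?php
// Keep the per-visitor counter in the session, since PHP itself
// keeps no state between requests.
session_start();

$_SESSION['count'] = isset($_SESSION['count']) ? $_SESSION['count'] + 1 : 1;

header('Content-Type: text/html');
echo "<h1>FastCGI Hello! (PHP)</h1>\n";
echo "Request number {$_SESSION['count']} running on host <i>{$_SERVER['SERVER_NAME']}</i>";
?>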
Think about a long-running calculation where you output data in the middle.
You can do that with fastcgi_finish_request: the data is then pushed to the browser while the long-running task keeps running.
The FastCGI connection is opened together with the PHP process.
Normally the connection stays open until PHP finishes, and then the FastCGI connection is closed, unless you hit PHP's execution timeout or the FastCGI connection timeout first.
fastcgi_finish_request handles the case where the FastCGI connection to the browser is closed BEFORE PHP finishes execution.
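As a rough sketch of that long-running case (PHP-FPM only; the result file is purely illustrative):
<?php
// Send everything produced so far to the browser and close the FastCGI connection.
echo "Job accepted, the report will be ready shortly.";
fastcgi_finish_request();

// The client already has its response; this part keeps running server-side.
sleep(30);                                    // stand-in for the long calculation
file_put_contents('/tmp/report.txt', 'done'); // illustrative result file
?>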
Simple Hit Counter Example for PHP
<?php
$hit_count = @file_get_contents('count.txt'); // read the count from the file
$hit_count++;                                 // increment the hit count by 1
echo $hit_count;                              // display it
@file_put_contents('count.txt', $hit_count);  // store the new hit count
?>
Honestly, that's not even how you should do it using Perl either.
Instead, I'd recommend using CGI::Session to track session information:
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use CGI::Carp qw(fatalsToBrowser);
use CGI::Session;
my $q = CGI->new;
my $session = CGI::Session->new($q) or die CGI::Session->errstr;
print $session->header();
# Page View Count
my $count = 1 + ($session->param('count') // 0);
$session->param('count' => $count);
# HTML
print qq{<html>
<head><title>Hello! (Perl)</title></head>
<body>
<h1>Hello! (Perl)</h1>
<p>Request number $count running on host <i>$ENV{SERVER_NAME}</i></p>
</body>
</html>};
Alternatively, if you really want to go barebones, you could keep a local file as demonstrated in: I still don't get locking. I just want to increment the number in the file. How can I do this?
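In PHP, the equivalent of that file-plus-locking approach might look roughly like this (a sketch using flock(); same count.txt as in the hit counter above):
<?php
// Take an exclusive lock so concurrent requests can't read and write
// the count file at the same time.
$fp = fopen('count.txt', 'c+');   // create the file if missing, don't truncate it
if ($fp && flock($fp, LOCK_EX)) {
    $hit_count = (int) stream_get_contents($fp) + 1;
    ftruncate($fp, 0);            // rewrite the file with the new count
    rewind($fp);
    fwrite($fp, (string) $hit_count);
    fflush($fp);
    flock($fp, LOCK_UN);
    fclose($fp);
    echo $hit_count;
}
?>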
I'm moving part of my site from a relational database to Redis and need to insert millions of keys in as short a time as possible.
In my case, the data must first be fetched from MySQL, prepared by PHP and then added to the corresponding sorted sets (time as the score + ID as the value). Currently I'm taking advantage of the phpredis multi method with the Redis::PIPELINE parameter. Despite noticeable speed improvements, it turned out to block reads and slow down loading times while doing the import.
So here comes the question: is using a pipeline in phpredis equivalent to the mass insertion described in http://redis.io/topics/mass-insert?
Here's an example:
phpredis way:
<?php
// All necessary requires etc.
$client = Redis::getClient();
$client->multi(Redis::PIPELINE); // OR $client->pipeline();
$client->zAdd('key', 1, 2);
...
$client->zAdd('key', 1000, 2000);
$client->exec();
vs protocol from redis.io:
cat data.txt | redis-cli --pipe
I'm one of the contributors to phpredis, so I can answer your question. The short answer is that it is not the same but I'll provide a bit more detail.
What happens when you put phpredis into Redis::PIPELINE mode is that instead of sending the command when it is called, it puts it into a list of "to be sent" commands. Then, once you call exec(), one big command buffer is created with all of the commands and sent to Redis.
After the commands are all sent, phpredis reads each reply and packages the results according to each command's specification (e.g. HMGET calls come back as associative arrays, etc.).
The performance of pipelining in phpredis is actually quite good, and should suffice for almost every use case. That being said, you are still processing every command through PHP, which means you will pay the function call overhead of calling the phpredis extension itself for every command. In addition, phpredis will spend time processing and formatting each reply.
If your use case requires importing MASSIVE amounts of data into Redis, especially if you don't need to process each reply (but instead just want to know that all commands were processed), then the mass-import method is the way to go.
I've actually created a project to do this here:
https://github.com/michael-grunder/redismi
The idea behind this extension is that you call it with your commands and then save the buffer to disk, which will be in the raw Redis protocol and compatible with cat buffer.txt | redis-cli --pipe style insertion.
One thing to note is that at present you can't simply replace any given phpredis call with a call to the RedisMI object, as commands are processed as variable argument calls (like hiredis), which work for most, but not all phpredis commands.
Here is a simple example of how you might use it:
<?php
$obj_mi = new RedisMI();
// Some context we can pass around in RedisMI for whatever we want
$obj_context = new StdClass();
$obj_context->session_id = "some-session-id";
// Attach this context to the RedisMI object
$obj_mi->SetInfo($obj_context);
// Set a callback when a buffer is saved
$obj_mi->SaveCallback(
    function ($obj_mi, $str_filename, $i_cmd_count) {
        // Output our context info we attached
        $obj_context = $obj_mi->GetInfo();
        echo "session id: " . $obj_context->session_id . "\n";

        // Output the filename and how many commands were sent
        echo "buffer file: " . $str_filename . "\n";
        echo "commands : " . $i_cmd_count . "\n";
    }
);

// A thousand SADD commands, adding three members each time
for ($i = 0; $i < 1000; $i++) {
    $obj_mi->sadd('some-set', "$i-one", "$i-two", "$i-three");
}

// A thousand ZADD commands
for ($i = 0; $i < 1000; $i++) {
    $obj_mi->zadd('some-zset', $i, "member-$i");
}

// Save the buffer
$obj_mi->SaveBuffer('test.buf');
?>
Then you can do something like this:
➜ tredismi php mi.php
session id: some-session-id
buffer file: test.buf
commands : 2000
➜ tredismi cat test.buf|redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 2000
Cheers!
I'm coding a web application in PHP using MongoDB, and I would like to store very large files (1 GB) with GridFS.
I've got two problems: first, I get a timeout, and I can't find out how to set the cursor timeout of the MongoGridFS class.
<?php
//[...]
$con = new Mongo();
$db = $con->selectDB($conf['base']);
$grid = $db->getGridFS();
$file_id = $grid->storeFile(
    $_POST['projectfile'],
    array('metadata' => array(
        'type'     => 'release',
        'version'  => $query['files'][$time]['version'],
        'mime'     => mime_content_type($_POST['projectfile']),
        'filename' => file_name($projectname) . '-' . file_name($query['files'][$time]['version'])
                      . '.' . getvalue(pathinfo($_POST['projectfile']), 'extension'),
    )),
    array('safe' => false)
);
//[...]
?>
And secondly, I wonder whether it is possible to execute the request in the background. When I store the file with this query, execution blocks and I get a 500 error due to the timeout:
PHP Fatal error: Uncaught exception 'MongoGridFSException' with
message 'Could not store file: cursor timed out (timeout: 30000, time
left: 0:0, status: 0)'
Maybe it would be better to store your files in a directory and put only the location of each file in the database? That would be rather quick.
GridFS queries, by default, are not "safe"; however, they are not a single query in the driver either. This function must run multiple queries within the driver (one to store the fs.files row and one to split the file into fs.chunks). This means the timeout is most likely occurring on a find needed to process further batches of information; it might even be related to a PHP timeout rather than a MongoDB one.
The easiest way to run this in the background is to create a "job", either by calling a cronjob or by using a message queue to hand the work to another service.
As for the timeout: unfortunately the GridFS functions (on your side) don't give you direct access to the cursor being used (other than setting safe). You can set a timeout on the connection, but I don't think that is a wise idea.
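If you do want to experiment with that anyway, a rough sketch with the legacy mongo driver might look like this (both knobs are assumptions about where your timeout actually comes from, not a recommendation):
<?php
set_time_limit(0);           // lift PHP's max_execution_time for this request
MongoCursor::$timeout = -1;  // legacy driver: disable the default 30000 ms client-side cursor timeout

$con  = new Mongo();
$db   = $con->selectDB($conf['base']);
$grid = $db->getGridFS();
$file_id = $grid->storeFile($_POST['projectfile'], array(), array('safe' => false));
?>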
However, if your cursor is timing out it means (as I said) that a find query is probably taking too long, in which case you might want to monitor the MongoDB logs to find out what exactly is timing out. This might just be a simple case of needing better indexes or a more performant setup.
As @Anton said, you can also consider housing large files outside of MongoDB; however, there is no requirement to.
I have a PHP script that accepts a POST request as a listener for a web service, then processes all the data into two final arrays.
I'm looking for a way to initiate a second script that GETs those serialized arrays and does some more processing.
include() will not be good for me, since I actually want to "free" or "end" the first script after passing the data.
Your help is much appreciated, as always. :)
EDIT: OK, so it looks like a queue might be the solution! I've never done anything like this before; any examples or references?
Does it need to happen immediately? If not, you could set up a cronjob that runs every X minutes. You'll have to make some kind of queue in which your first script sticks "requests" for the second script; the cronjob then processes the requests in the queue.
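As a minimal sketch of that queue idea (the spool directory, file naming and the processing function are all illustrative assumptions):
<?php
// listener.php -- the web-service listener drops each job into a spool directory
$job = serialize(array($array_1, $array_2));
file_put_contents('/tmp/queue/job-' . uniqid('', true) . '.dat', $job);
?>

<?php
// worker.php -- run from cron every X minutes, processes whatever has queued up
foreach (glob('/tmp/queue/job-*.dat') as $file) {
    list($array_1, $array_2) = unserialize(file_get_contents($file));
    process_arrays($array_1, $array_2); // hypothetical function doing the extra processing
    unlink($file);
}
?>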
You should get into the habit of writing PHP scripts that are just a collection of functions (no auto-run code, per se). This way you can include a script file at the top of the script you're talking about and then call the function that does what you want.
For instance:
<?php
include('common_functions.php');
$array_1 = whatever_you_do_with_post_values();
$array_2 = other_thing_you_do_with_post_values();
// this function is located in 'common_functions.php'
do_stuff_with_arrays($array_1,$array_2);
?>
In fact, just to be consistent with what I'm saying:
<?php
include('common_functions.php');
do_your_stuff();
function do_your_stuff() {
    $array_1 = whatever_you_do_with_post_values();
    $array_2 = other_thing_you_do_with_post_values();
    // this function is located in 'common_functions.php'
    do_stuff_with_arrays($array_1, $array_2);
}
?>
Obviously you should use better function & variable names, haha.
I'd do it all in one request. It cuts down on latency and makes the whole operation more efficient.
Remember you can have a long-running request and still service other requests. Apache will just spawn another PHP process to handle the other request from the web service even though the first has not completed. As long as the script doesn't lock a shared resource (a database file, etc.) this will work just fine.
That said, if you do split it up, you should use cURL to call the second script and POST the unserialized arrays; cURL will handle the rest.
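A minimal sketch of that cURL hand-off (the URL and field names are illustrative; http_build_query() turns the plain arrays into ordinary POST fields):
<?php
$ch = curl_init('http://localhost/second_script.php'); // hypothetical second script
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'array_1' => $array_1,
    'array_2' => $array_2,
)));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 5); // don't let the listener hang on the hand-off
$response = curl_exec($ch);
curl_close($ch);
?>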