Before using Redis I want to check its availability; if Redis is not available I'll fall back to MySQL, otherwise I'll use Redis. How can I do this with the Predis client?
My first method was:
/**
 * @return bool
 */
public function check(){
    if(!@fsockopen($server['host'], $server['port'], $errno, $errstr, 3)){
        Debug::instance()->log('Redis connect error host: ' . $server['host'] . ' port: ' . $server['port']);
        return false;
    }
    return true;
}
But that was a very bad idea, because every check ties up an extra socket. Now I am trying to find a better method.
This should do the trick ;)
// Redis configuration
$vm = array(
    'host'    => '127.0.0.1',
    'port'    => 6379,
    'timeout' => 0.8 // connect timeout (in seconds) after which an exception is thrown
);
$redis = new Predis\Client($vm);

try {
    $redis->ping();
} catch (Exception $e) {
    // LOG that Redis is down: $e->getMessage();
}

if (isset($e)) {
    // use MySQL
} else {
    // use Redis
}
Use the Predis client and the PING command: return true if PONG comes back as the response, and return false on a CommunicationException.
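A minimal sketch of that check, assuming a Composer-autoloaded Predis client (the function name, config values and logging spot are illustrative, not part of the original answer):

<?php
require 'vendor/autoload.php';

// Returns true if Redis answers PING, false if the connection fails.
function redisAvailable(Predis\Client $client)
{
    try {
        $client->ping();                          // expects +PONG
        return true;
    } catch (Predis\CommunicationException $e) {
        // log $e->getMessage() here if you want a trace of the outage
        return false;                             // caller falls back to MySQL
    }
}

$client   = new Predis\Client(array('host' => '127.0.0.1', 'port' => 6379, 'timeout' => 0.8));
$useRedis = redisAvailable($client);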
Redis can handle far more requests than MySQL, which is why it is so often used as a cache in front of it. On laptop-level hardware I have seen it handle over 1M requests per second from a single instance with no special server-side tuning.
First try to pull the data from Redis. If the connection fails, go to MySQL. If you get a null back, go to MySQL, add the data to Redis (optionally with a timeout depending on your criteria), and return the data to the client.
Please do not do a ping before every command. If you have a valid connection, try to get the data and handle getting nothing back; a null simply means there was no data to pull. A ping before every command is wasteful: you spend a round trip on every call, and the command that fetches the data is often about as fast as the ping itself, especially once you count the cost of testing for the PONG. Since you will also be testing whether you got data back anyway, the ping-before-every-command model is not wise.
Open your connection in your long-running process outside the command, or open the connection, query, and then close the socket each time. I prefer the former, but realize that isn't always possible depending on the framework you are using.
And as someone whose infrastructure has many hundreds of MySQL servers with thousands of DBs: no, you cannot always assume MySQL is there, just as you shouldn't assume Redis will always be there. Crashes happen, networking happens, servers get tripped up or mistakenly bounced.
But you can follow the flow of "is the TCP connection alive", then "give me the data", followed by validating that you got the data. As long as you account for failed connections or timed-out requests, and handle them, you are fine.
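A rough sketch of that read-through flow, assuming Predis and PDO (the key names, table, and TTL are illustrative):

<?php
// Try Redis first; on a miss or a failed connection, fall back to MySQL
// and repopulate the cache when possible.
function getUser(Predis\Client $redis, PDO $pdo, $id)
{
    $redisOk = true;
    try {
        $cached = $redis->get("user:$id");
        if ($cached !== null) {
            return json_decode($cached, true);             // cache hit
        }
    } catch (Predis\CommunicationException $e) {
        $redisOk = false;                                  // Redis is down, skip the write-back
    }

    $stmt = $pdo->prepare('SELECT * FROM users WHERE id = ?');
    $stmt->execute(array($id));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    if ($row && $redisOk) {
        $redis->setex("user:$id", 300, json_encode($row)); // optional 5-minute TTL
    }
    return $row;
}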
As far as how to actually write the code with Predis, I recommend starting with the docs found on the Predis GitHub page.
I have a PHP backend that works with Redis.
But when traffic grew to more than 2000 requests per second, I started receiving this error:
99 - Cannot assign requested address
All sockets are in TIME_WAIT.
Connection example:
$this->_socket = @stream_socket_client(
    'tcp://' . $this->hostname . ':' . $this->port,
    $errorNumber,
    $errorDescription,
    ini_get('default_socket_timeout'),
    STREAM_CLIENT_CONNECT | STREAM_CLIENT_PERSISTENT
);
I found a possible solution: http://redis4you.com/articles.php?id=012&name=redis
But I can't set /proc/sys/net/ipv4/tcp_tw_recycle to 1, because I don't want to lose packets on the network between the application and Redis.
PHP creates a new socket for every API request.
Any ideas?
I don't know your whole design, but here is something you could do:
Create a PHP page that runs continuously (with a while(true) loop).
This page would wait for content from your initial page (where the socket code was before).
Using the pipelining technique, you would send all requests over the same socket (see the sketch after this list).
The only thing missing is how to pass data from the initial page to this new page.
For that last part I see multiple solutions (not sure whether they all work, though):
Use APC to store data from the initial page and read it back from the new one.
Create a SESSION in the new page which would then have two modes: Processing and Submitting. You would then call this page via your local server from inside the initial page.
In both solutions, one instance of this new page must be running locally so that the 'Processing/Waiting' mode is active.
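For the pipelining part, a minimal sketch with Predis (the key names and loop size are illustrative; the original answer does not prescribe a specific client):

<?php
// One connection, many commands: the pipeline batches the commands so each
// incoming API request does not have to open a fresh socket.
$client = new Predis\Client(array('host' => '127.0.0.1', 'port' => 6379));

$responses = $client->pipeline(function ($pipe) {
    for ($i = 0; $i < 1000; $i++) {
        $pipe->set("key:$i", $i);
    }
});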
Fixed the problem.
I set tcp_tw_reuse to 1 and the socket TIME_WAIT timeout to 10 seconds, and PHP works with the socket in persistent mode:
STREAM_CLIENT_CONNECT | STREAM_CLIENT_PERSISTENT
So even at 2000 requests per second it uses no more than 61 sockets.
I've written a little monitoring script in PHP which monitors a virtual directory and its active directories. Everything works fine, but when the virtual directory service freezes, my ldap_connect() is not able to connect and also doesn't get an error back, so my whole script stands still. I think the ldap_connect function runs into a timeout (like when you try to ping an IP that isn't reachable).
That's my connect command:
$connection = ldap_connect($hostname, $port) or die("Could not connect to {$hostname}");
And I haven't found anything in the manual for ldap_connect() about a time limit parameter that defines how long the function should try to connect before it aborts.
However, I wasn't able to come up with a solution using try/catch or something like that. I also didn't want to use the set_time_limit() function, because my script needs to run to the end.
I appreciate any help :)
Thanks and greetings
Tim
See http://www.php.net/manual/en/function.ldap-set-option.php,
in particular the following options:
LDAP_OPT_NETWORK_TIMEOUT
LDAP_OPT_TIMELIMIT
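A small sketch of how these options might be used; the timeout values and the anonymous bind are illustrative, and note that the TCP connection is only opened on bind, not on ldap_connect():

<?php
$conn = ldap_connect($hostname, $port);               // only prepares the handle
ldap_set_option($conn, LDAP_OPT_NETWORK_TIMEOUT, 5);  // give up on the TCP connect after 5 s
ldap_set_option($conn, LDAP_OPT_TIMELIMIT, 10);       // cap server-side search time at 10 s

if (!@ldap_bind($conn)) {                             // the actual connection happens here
    // the server is frozen or unreachable; handle it instead of hanging
}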
http://www.php.net/manual/en/function.ldap-set-option.php
Try setting LDAP_OPT_REFERRALS to 0.
If you don't want your PHP program to wait XXX seconds before giving up when one of your corporate DCs has failed, and since ldap_connect() does not have a mechanism to time out after a user-specified interval, this is my workaround, which shows excellent practical results.
function serviceping($host, $port = 389, $timeout = 1)
{
    $op = fsockopen($host, $port, $errno, $errstr, $timeout);
    if (!$op) {
        return 0;    // DC is N/A
    } else {
        fclose($op); // explicitly close the open socket connection
        return 1;    // DC is up & running, we can safely connect with ldap_connect
    }
}
// ##### STATIC DC LIST, if your DNS round robin is not set up
//$dclist = array('10.111.222.111', '10.111.222.100', '10.111.222.200');
// ##### DYNAMIC DC LIST, reverse DNS lookup sorted by round-robin result
$dclist = gethostbynamel('domain.name');

foreach ($dclist as $k => $dc) {
    if (serviceping($dc) == true) {
        break;       // use the first DC that answers
    } else {
        $dc = 0;
    }
}
// after this loop, either there is at least one DC which is available at present,
// or $dc is 0 and the next line stops the program from further execution
if (!$dc) exit("NO DOMAIN CONTROLLERS AVAILABLE AT PRESENT, PLEASE TRY AGAIN LATER!"); // user being notified

// now ldap_connect will connect successfully to the DC tested above and no timeout will occur
$ldapconn = ldap_connect($dc) or die("DC N/A, PLEASE TRY AGAIN LATER.");
This approach also gives you really nice failover behaviour.
Take for example a company with a dozen DCs distributed across distant locations.
This way your PHP program will always have high availability as long as at least one DC is up.
You'll need to use an API that supports time-outs. Connection time-outs are not supported natively by LDAP (the protocol). The time limit is a client-requested parameter that refers to how long the directory will spend processing a search request; it is not the same as a "connect time-out".
I have a free application that is widely used, and I get around 500 to 1000 concurrent users from time to time.
It is a desktop application that talks to my website API to receive data every 5 to 15 minutes, and sends back a minimal amount of data (about 3 selects at most) every 15 minutes.
Since users can turn the application on and off as they wish, the timer with which each of them queries my API varies, and as a result I have been hitting the maximum connection limit of my hosting plan.
Not wanting to upgrade it for financial reasons, and because it is a non-profit application for the moment, I am looking for other options to reduce the number of connections and to cache whatever information can be cached.
The first thing that came to my mind was to use FastCGI with Perl. I have tested it for some time now and it seems to work great, but I have two problems while using it:
1) If for whatever reason the application goes idle for 60 seconds, the server kills it, and for the next few requests it replies with error 500 until the script is respawned, which takes about 3+ minutes (yes, it really takes that long; I have tried my code locally on my own test server and it comes up instantly, so I am sure it is a server issue with my hosting company, but they don't seem to want to resolve it).
2) The kill timeout, which is set to 300, kills/restarts the script after that period, which results in the behaviour described above in 1) regarding the respawn of the script.
Given that, I am now looking for alternatives that are not based on FastCGI, if there are any.
Also, due to the limitations of the shared host, I can't run my own daemon and my ability to compile anything is very limited.
Are there any good options to achieve this with either Perl or PHP?
Mainly, reduce the open database connections to a minimum while still being able to cache some select queries for returning data... The main job of the application is inserting/updating data anyway, so there isn't much to cache.
This is the simple code I was using for testing:
#!/usr/bin/perl -w

use CGI::Simple;    # Can't use CGI as it doesn't clear the data for the
                    # next request; haven't investigated further, but needed
                    # something working to test and CGI::Simple was
                    # the fastest solution found.
use DBI;
use strict;
use warnings;

use lib qw( /home/my_user/perl_modules/lib/perl/5.10.1 );
use FCGI;

my $dbh = DBI->connect('DBI:mysql:mydatabase:mymysqlservername',
                       'username', 'password',
                       {RaiseError => 1, AutoCommit => 1}
          ) || die &dbError($DBI::errstr);
my $request = FCGI::Request();

while ($request->Accept() >= 0)
{
    my $query  = new CGI::Simple;
    my $action = $query->param("action");
    my $id     = $query->param("id");
    my $server = $query->param("server");
    my $ip     = $ENV{'REMOTE_ADDR'};

    print $query->header();

    if ($action eq "exp")
    {
        my $sth = $dbh->prepare(qq{
            INSERT INTO
                my_data (id, server) VALUES (?, INET_ATON(?))
            ON DUPLICATE KEY UPDATE
                server = INET_ATON(?)});

        my $result = $sth->execute($id, $server, $server)
                     || die print($dbh->errstr);
        $sth->finish;

        if ($result)
        {
            print "1";
        }
        else
        {
            print "0";
        }
    }
    else
    {
        print "0";
    }
}

$dbh->disconnect || die print($DBI::errstr);
exit(0);

sub dbError
{
    my ($txt_erro) = @_;
    my $query = new CGI::Simple;
    print $query->header();
    print "$txt_erro";
    exit(0);
}
Run a proxy. Perl's DBD::Proxy should fit the bill. The proxy server wouldn't be under your host's control, so its 60-seconds-of-inactivity rule shouldn't apply to it.
Alternatively, install a cron job that runs more often than the FastCGI timeout and simply wgets some "make activity" page on your site, discarding the output. Some CRMs do this to force a "check for updates", for example, so it's not completely unusual, though it is somewhat of an annoyance here.
FWIW, you probably want to look at CGI::Fast instead of CGI::Simple to resolve the issue of CGI.pm not handling persistent variables in the expected manner...
I have a simple problem. I use PHP on the server side and produce HTML output. My site shows the status of another server. The flow is:
The user points the browser at www.example.com/status
The browser contacts www.example.com/status
The PHP server receives the request and asks for the status on www.statusserver.com/status
PHP receives the data, transforms it into readable HTML output and sends it back to the client
The user can see the status.
Now, I've created a singleton class in PHP which accesses the status server only every 8 seconds, so it refreshes the status every 8 seconds. If a user requests an update in between, the server returns the status stored locally (on www.example.com).
That's nice, isn't it? But then I did a simple test and opened 5 browser windows to see if it works. And here it comes: the PHP server created a singleton instance for each request. So now 5 clients each request the status from the status server every 8 seconds, which means every 8 seconds I make 5 calls to the status server instead of one!
Isn't there a way to share one instance among all users within an Apache server? That would solve the problem in case 1000 users connect to www.example.com/status...
Thanks for any hints.
=============================
EDIT:
I already use caching on the hard drive:
public function getFile($filename)
{
    $diff = (time() - filemtime($filename));
    //echo "diff:$diff<br/>";
    if ($diff > 8) {
        //echo 'greater than 8<br/>';
        self::updateFile($filename);
    }
    if (is_readable($filename)) {
        try {
            $returnValue = @ImageCreateFromPNG($filename);
            if ($returnValue == '') {
                sleep(1);
                return self::getFile($filename);
            } else {
                return $returnValue;
            }
        } catch (Exception $e) {
            sleep(1);
            return self::getFile($filename);
        }
    } else {
        sleep(1);
        return self::getFile($filename);
    }
}
This is the call in the singleton. I ask for a file and save it on the hard drive, but all the requests call it at the same time and all start querying the status server.
I think the only solution would be a standalone application which updates the file every 8 seconds. All requests would then just read the file and no longer be able to update it.
This standalone tool could be a Perl script or something similar...
PHP requests are handled by different processes, and each of them has its own state; there isn't any resident process like in other web development frameworks. You should handle that behaviour directly in your class, for instance with some caching.
The method which queries the server status should have this logic:
public function getStatus() {
    if (!$status = $cache->load()) {
        // cache miss
        $status = // do your query here
        $cache->save($status); // store the result in cache
    }
    return $status;
}
In this way only one request out of X will fetch the real status. The value of X depends on your cache configuration.
Some cache libraries you can use:
APC
Memcached
Zend_Cache which is just a wrapper for actual caching engines
Or you can store the result in a plain text file, and on every request check the mtime of the file and rewrite it if more than xx seconds have passed.
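A small sketch of that file-based variant; the file path, TTL, and status URL are illustrative:

<?php
// Return a cached status if the file is fresh enough, otherwise
// hit the status server once and rewrite the cache file.
function getStatusCached($file = '/tmp/status.cache', $ttl = 30)
{
    if (is_readable($file) && (time() - filemtime($file)) < $ttl) {
        return file_get_contents($file);                  // still fresh
    }
    $status = file_get_contents('http://www.statusserver.com/status');
    if ($status !== false) {
        file_put_contents($file, $status, LOCK_EX);       // refresh the cache
    }
    return $status;
}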
Update
Your code is pretty strange: why all those sleep calls? And why a try/catch block when ImageCreateFromPNG does not throw?
You're asking a different question now. Since PHP is not an application server and cannot store state across processes, your approach is correct. I suggest you use APC (it uses shared memory, so it would be at least 10x faster than reading a file) to share the status across different processes. With this approach your code could become:
public function getFile($filename)
{
    $latest_update = apc_fetch('latest_update');
    if (false == $latest_update) {
        // cache expired or first request
        apc_store('latest_update', time(), 8); // 8 is the TTL in seconds
        // fetch the file here and save it on local storage
        self::updateFile($filename);
    }
    // here you can process the file
    return $your_processed_file;
}
With this approach, the code in the if branch will be executed by two different processes only if one process is blocked right after the if line, which should not happen because it is almost an atomic operation.
Furthermore, if you want to be absolutely sure you could use something like semaphores to handle that, but it would be an oversized solution for this kind of requirement.
Finally, IMHO 8 seconds is a small interval; I'd use something bigger, at least 30 seconds, but this depends on your requirements.
As far as I know it is not possible in PHP. However, you surely can serialize and cache the object instance.
Check out http://php.net/manual/en/language.oop5.serialization.php
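A minimal sketch of that idea, assuming APC is available (the class name, key, and TTL are illustrative, and StatusChecker is a hypothetical class):

<?php
// Serialize the instance once and share it between requests via APC.
$cached = apc_fetch('status_instance');
if ($cached === false) {
    $status = new StatusChecker();                        // hypothetical class doing the remote query
    apc_store('status_instance', serialize($status), 8);  // 8-second TTL
} else {
    $status = unserialize($cached);
}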
I have a long-running script that seems to occasionally report the following NOTICE-level error:
pg_send_query(): Cannot set connection to blocking mode
It seems to continue to send queries afterward, but it's unclear if it successfully sends the query that generates the error.
What is this a symptom of?
Edit: There are no entries in the postgres log at the time the error occurred, suggesting this is solely a connection error, not something going wrong on postgres' side (e.g. probably not the result of postgres crashing and restarting or something)
Edit: As far as I can tell, my INSERT statements are succeeding, one way or another, when this error is triggered.
Edit: Looks like this may have been fixed in June 2013: https://bugs.php.net/bug.php?id=65015
It is a symptom of pg_send_query() not being able to successfully switch the connection back to blocking mode. Looking at the source code in PHP's pgsql.c, you can find:
/* {{{ proto bool pg_send_query(resource connection, string query)
   Send asynchronous query */
PHP_FUNCTION(pg_send_query)
{
    <... snipped function setup stuff ...>

    if (PQ_SETNONBLOCKING(pgsql, 1)) {
        php_error_docref(NULL TSRMLS_CC, E_NOTICE, "Cannot set connection to nonblocking mode");
        RETURN_FALSE;
    }

    <... snipped main function execution stuff ...>

    if (PQ_SETNONBLOCKING(pgsql, 0)) {
        php_error_docref(NULL TSRMLS_CC, E_NOTICE, "Cannot set connection to blocking mode");
    }

    RETURN_TRUE;
}
So the error gets raised at the end of the function, after the main work is done. This fits with your observation that your INSERT statements get executed.
The whole purpose of the two PQ_SETNONBLOCKING calls is to put the connection into non-blocking mode to allow asynchronous execution, and to put it back to the default blocking behaviour afterwards. From the documentation of PQsetnonblocking (PQ_SETNONBLOCKING is just an alias defined for that function):
Sets the nonblocking status of the connection.

int PQsetnonblocking(PGconn *conn, int arg);

Sets the state of the connection to nonblocking if arg is 1, or blocking if arg is 0. Returns 0 if OK, -1 if error.

In the nonblocking state, calls to PQsendQuery, PQputline, PQputnbytes, and PQendcopy will not block but instead return an error if they need to be called again.

Note that PQexec does not honor nonblocking mode; if it is called, it will act in blocking fashion anyway.
Looking further at the source of PQsetnonblocking (in PostgreSQL's fe-exec.c), there are two possible reasons why the call could fail:
/* PQsetnonblocking:
 *  sets the PGconn's database connection non-blocking if the arg is TRUE
 *  or makes it non-blocking if the arg is FALSE, this will not protect
 *  you from PQexec(), you'll only be safe when using the non-blocking API.
 *  Needs to be called only on a connected database connection.
 */
int
PQsetnonblocking(PGconn *conn, int arg)
{
    bool barg;

    if (!conn || conn->status == CONNECTION_BAD)
        return -1;

    barg = (arg ? TRUE : FALSE);

    /* early out if the socket is already in the state requested */
    if (barg == conn->nonblocking)
        return 0;

    /*
     * to guarantee constancy for flushing/query/result-polling behavior we
     * need to flush the send queue at this point in order to guarantee proper
     * behavior. this is ok because either they are making a transition _from_
     * or _to_ blocking mode, either way we can block them.
     */
    /* if we are going from blocking to non-blocking flush here */
    if (pqFlush(conn))
        return -1;

    conn->nonblocking = barg;

    return 0;
}
So either the connection got lost somehow, or pqFlush did not finish successfully, indicating leftover stuff in the connection output buffer.
The first case would be harmless, as your script would certainly notice the lost connection for later calls and react to that (or fail more noticeable).
This leaves the second case, which would mean you have a connection in a non-default, non-blocking state. I do not know whether this could affect later calls that reuse this connection. If you want to play it safe, close the connection in this case and use a new/other one.
It sounds like you're trying to use the pg_send_query() function for sending asynchronous queries to PostgreSQL. The purpose of this function is to allow your PHP script to continue executing other code while waiting for PostgreSQL to execute your query and make a result ready.
The example given in the docs for pg_send_query() suggests that you shouldn't send a query while PostgreSQL is still chewing on a previous one:
if (!pg_connection_busy($dbconn)) {
    pg_send_query($dbconn, "select * from authors; select count(*) from authors;");
}
Is there a reason you're using pg_send_query() instead of pg_query()? If you can allow your script to block waiting for query execution, I'm guessing (admittedly without having tried it) that you won't see these errors.
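For comparison, a blocking version using pg_query() might look like this (the connection string and query are illustrative):

<?php
$dbconn = pg_connect('host=localhost dbname=mydb user=myuser');

// pg_query() blocks until the result is ready, so the nonblocking-mode
// switching that triggers the NOTICE never happens.
$result = pg_query($dbconn, 'SELECT count(*) FROM authors');
$row    = pg_fetch_row($result);
echo $row[0];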
I recently had the same problem, and with the help of Henrik Opel's answer realized that PHP does not actually wait for the buffer to flush before setting the connection back to blocking mode.
The 'cannot set connection to blocking mode' notice is trivially reproducible with queries large enough to fill the send buffer (padding with spaces at the end is enough). With smaller queries I imagine it depends on load and is rather intermittent.
If you do actually need asynchronous mode, try the patch at https://bugs.php.net/bug.php?id=65015
This can occur if you are using threads and the connection is being reused.
If this is the case, you can use PGSQL_CONNECT_FORCE_NEW like this:
pg_connect("...", PGSQL_CONNECT_FORCE_NEW)
This forces a new database connection resource, but be advised: you could run out of client connections, so be careful using this inside threads, and don't forget to use pg_close().
I encountered the same error message with PHP 5.6.9.
It occurs when a persistent connection made by pg_pconnect() is lost and pgsql.auto_reset_persistent is set to Off.
A connection might get lost when:
the PHP session expires
the connection to the DB times out
the web server or DB server is restarted
You can check php.ini for pgsql.auto_reset_persistent and set it to On.
With pgsql.auto_reset_persistent enabled, each time pg_pconnect() is called the connection link is checked to see whether it is still valid. This adds a little overhead, but it fixes the error message when the connection is lost.
I got that error too. I solved my problem by restarting the web server (Apache).