MongoDB and PHP APC - php

I just ran a test creating 1000 non-persistent connections to MongoDB via nginx/PHP FastCGI, which took about 2.1 seconds on my dev machine. I then tried the same test using persistent connections: same result. I think I read somewhere that persistence in the PHP driver is now always enabled anyway. Next, I tried storing the connections in APC, which resulted in a 7-9ms response time after the first request. Now I'm wondering a few things here:
There's almost never a time I can think of where I'd want to create more than one connection in my app at once, and from what I understand, with a persistent connection the Mongo driver creates new connections as needed anyway.
Creating a single connection seems to take about the same time as pulling the stored connection object from APC. Will caching the connection object ever really provide a benefit?
Caching the connection would, of course, still require some sort of check that it's even still a valid connection. In performing this check each time, I wonder if it would negate the performance gain (if any) from pulling it from cache.
I can't seem to find any material covering this, so I'm assuming it's because my understanding is confused somewhere. Have any of you experimented with this?
Thanks!

First, as far as I know, APC serializes data when storing it, so it would not make any sense to store a connection in APC.
Then, persistent connections are reused by the PHP process across requests, whereas a non-persistent connection is re-established for each request the PHP process receives.
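For context, here is a minimal sketch with the modern mongodb extension (not the legacy driver this question was written against): the extension keeps connections persistent per worker process, keyed by the connection URI and options, so constructing a Manager on every request is cheap after the first one, and no APC layer is needed. The mydb.mycollection namespace is a placeholder.

$manager = new MongoDB\Driver\Manager('mongodb://localhost:27017');
$query = new MongoDB\Driver\Query([], ['limit' => 1]);          // match anything, return one document
$cursor = $manager->executeQuery('mydb.mycollection', $query);  // reuses the pooled connection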

Related

php gRPC connections

I have a question I can't seem to find the answer to.
We are building a gRPC microservice in Go to serve our main application, which is written in PHP. I am running some tests on one of its functions now to see how it performs.
My results indicate that setting up the connection takes about 2 seconds, but after that, each call takes less than a microsecond.
How does it work in a real-life application? Does it open one shared connection that is kept alive for a while, or does each request to our application have to open its own connection to the service?
If each request has to open its own connection, is it possible to get around this to get rid of the overhead that comes with establishing a new connection?
What you need is called a connection pool.
I found this issue after a short search on Google: https://github.com/grpc/grpc/issues/15426, but I am not sure if it is actually about pooling connections.
If you are able to create connection pools, you can just pick an available connection and use it for your request.
[EDITED]
I just found something that may work for your needs:
https://github.com/swoole/swoole-src#the-simplest-example-of-a-connection-pool
If you follow the example of the Redis connection pool you will be able to make a gRPC connection pool.
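For illustration only, here is the core idea of a connection pool sketched in plain PHP. This is a toy, not Swoole's API; a real coroutine pool would wait on an empty queue instead of throwing.

class ChannelPool
{
    private SplQueue $idle;

    public function __construct(callable $factory, int $size)
    {
        $this->idle = new SplQueue();
        for ($i = 0; $i < $size; $i++) {
            $this->idle->enqueue($factory());  // pre-open $size connections
        }
    }

    public function get()
    {
        return $this->idle->dequeue();  // throws if empty in this toy version
    }

    public function put($conn): void
    {
        $this->idle->enqueue($conn);    // hand the connection back for reuse
    }
}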
The latest php-grpc extension will keep your connections around and re-use them between requests.
Those are called "persistent" connections. A previous connection can be re-used if the set of parameters given to the connection constructor is exactly the same.
Source: php-grpc documentation: https://grpc.github.io/grpc/php/class_grpc_1_1_channel.html
By default, the underlying grpc_channel is "persistent". That is, given the same set of parameters passed to the constructor, the same underlying grpc_channel will be returned.
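A quick sketch of what that means in practice, assuming the grpc extension and a hypothetical localhost:50051 service: two channels constructed with identical arguments share one underlying grpc_channel, so the TCP/HTTP2 setup cost is paid once per worker process.

$args = ['credentials' => Grpc\ChannelCredentials::createInsecure()];
$a = new Grpc\Channel('localhost:50051', $args);
$b = new Grpc\Channel('localhost:50051', $args);  // reuses $a's underlying grpc_channel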

Should you cache a MySQL connection in PHP?

Are there any disadvantages to caching a MySQL connection?
For example,
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');  // placeholder credentials
apc_store('mysqli', $mysqli);
This would save some time if there are lots of users on your site all requiring connections. Instead of opening a connection for every user, why not cache it?
I haven't found anything by googling, so just wondering if I've missed something.
Thanks for your time.
A mysqli object is not something you can cache. It's a resource, not a plain object.
Fetching it out of the cache would require it to reconnect to the database server, which means the cached version would need to store a password in plaintext; that alone makes it a security flaw, even if you could cache it.
Also, there's no guarantee that the database server would still be available when you fetch it out of the cache.
And there's no way for multiple PHP requests to share the same cached resource. There are all sorts of problems with this plan.
What does solve your goal is persistent mysqli connections: the PHP runtime environment maintains some number of connections and can reuse them from one PHP request to the next.
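With mysqli, persistence is a one-character change (supported since PHP 5.3): prefix the hostname with p: (placeholder credentials below).

$mysqli = new mysqli('p:localhost', 'user', 'password', 'mydb');  // 'p:' asks for a pooled, reusable connection

Whether persistent connections are allowed at all, and how many may be kept open, is governed by the mysqli.allow_persistent and mysqli.max_persistent ini settings.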

MySQL and memcached for PHP sessions?

For a high-traffic web site we are planning to scale up to two web servers in an HA setup.
One issue we will need to tackle is the management of PHP sessions.
The obvious answer is to move session handling to the DB, which is easy, and example code is widely available on the internet.
On the other hand, we are aware of the benefits of memcached, but once a memcached node fails, users on that node will lose their sessions.
So we are thinking of implementing a setup where sessions are handled in memcached by default but also written to the DB. When we get a memcached MISS, we would try to retrieve the session from the DB.
Does the above make sense, and are there any implementation examples you are aware of?
Thanks in advance.
I refer you to Dormando's oft-cited explanation of how to store sessions in MySQL with memcached caching. The original LiveJournal post is more wordy but more thoroughly explains why storing sessions in memcached only is a bad idea.
In short:
Read session data from memcached first, look in MySQL on a cache miss.
Write session data to memcached on every update.
Only write to MySQL if cache data hasn't been synced for 120 seconds or so.
Run a periodic script that checks MySQL for expired sessions. For each candidate, check memcached for a fresher copy and only expire the sessions that are truly expired. (A sketch of the read/write path follows below.)
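A minimal sketch of that read-through/write-through path, assuming PHP 8, the Memcached extension, PDO, and a hypothetical sessions table; the 120-second write-behind batching described above is left out for brevity.

class CachedSessionHandler implements SessionHandlerInterface
{
    public function __construct(private Memcached $cache, private PDO $db) {}

    public function open(string $path, string $name): bool { return true; }
    public function close(): bool { return true; }

    public function read(string $id): string|false
    {
        $data = $this->cache->get("sess:$id");           // 1. memcached first
        if ($data !== false) {
            return $data;
        }
        $stmt = $this->db->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);                           // 2. MySQL on a cache miss
        $row = $stmt->fetchColumn();
        if ($row !== false) {
            $this->cache->set("sess:$id", $row, 1440);   // repopulate the cache
            return $row;
        }
        return '';
    }

    public function write(string $id, string $data): bool
    {
        $this->cache->set("sess:$id", $data, 1440);      // update the cache on every request
        $stmt = $this->db->prepare('REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())');
        return $stmt->execute([$id, $data]);
    }

    public function destroy(string $id): bool
    {
        $this->cache->delete("sess:$id");
        return $this->db->prepare('DELETE FROM sessions WHERE id = ?')->execute([$id]);
    }

    public function gc(int $max_lifetime): int|false
    {
        $stmt = $this->db->prepare('DELETE FROM sessions WHERE updated_at < NOW() - INTERVAL ? SECOND');
        $stmt->execute([$max_lifetime]);
        return $stmt->rowCount();
    }
}

session_set_save_handler(new CachedSessionHandler($cache, $db), true);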
Sessions are temporary by nature; there is nothing to worry about if, once a month, a memcached server fails and truncates sessions. I'm sure you can use just memcached for sessions, without replication in the DB.
But if you still want to dump sessions to disk, an existing solution you can use is Redis:
Redis works with an in-memory dataset. Depending on your use case, you can persist it either by dumping the dataset to disk ...
Redis also supports trivial-to-setup master-slave replication, with very fast non-blocking first synchronization, auto-reconnection on net split and so forth.

PHP, MySQL and a large number of simple queries

I'm implementing an application that will have a lot of clients querying lots of small data packages from my webserver. Now I'm unsure whether to use persistent data connections to the database or not. The database is currently on the same system as the webserver and could connect via the socket, but this may change in the near future.
As far as I know, mysqli_pconnect was removed a few PHP releases ago because it behaved suboptimally. In the meantime it seems to be back again.
Given my scenario, I suppose I have no option for handling thousands of queries per minute other than loads of persistent connections and a MySQL configuration that reserves only a little in resources per connection, right?
Thanks for your input!
What happened when you tested it?
With the best will in the world, there's no practical way you can convey all the information required for people to provide a definitive answer in an SO response. However, there is usually very little overhead in establishing a MySQL connection, particularly if the server resides on the same system as the database client (in this case the webserver). There's even less overhead if you use a filesystem (Unix) socket rather than a network socket.
So I'd suggest abstracting all your database calls so you can easily switch between connection types, but write your system to use on-demand connections, and ensure your code explicitly releases the connection as soon as practical. Then see how it behaves; a rough sketch of such an abstraction is below.
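A rough sketch of that abstraction, assuming mysqli (the names here are illustrative): one wrapper decides the connection type from config, and callers borrow and release without caring which it is.

function with_db(array $config, callable $work)
{
    // The 'p:' host prefix toggles persistence, so switching connection
    // types later is a config change, not a code change.
    $host = ($config['persistent'] ?? false) ? 'p:' . $config['host'] : $config['host'];
    $db = new mysqli($host, $config['user'], $config['pass'], $config['name']);
    try {
        return $work($db);  // run the caller's queries
    } finally {
        $db->close();       // release the connection as soon as practical
    }
}

$rows = with_db($config, fn(mysqli $db) => $db->query('SELECT 1')->fetch_all());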
C.
Are PHP persistent connections evil?
The problem is there can be only so many connections active between Host “Apache” and Host “MySQL”.
Persistent connections usually cause problems in that you hit the maximum number of connections. Also, in your case they don't give a great benefit, since your database server is on the same host. Stick to normal connections for now.
As they say, your mileage may vary, but I've never had good experiences using persistent connections from PHP, including MySQL and Oracle (both ODBC and OCI8). Every time I've tested it, the system fails to reuse connections. With high load, I end up hitting the top limit while I have hundreds of idle connections.
So my advice is that you actually try it and find out whether your set-up is reusing connections properly. If it isn't working as expected, it won't be a big loss anyway: opening a MySQL connection is not particularly costly compared to other DBMSs.
Also, don't forget to reset all relevant settings when appropriate (whatever session value you change will be waiting for you the next time you establish a connection and happen to reuse that one).
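One cheap way to run that test, assuming mysqli: the server-side connection id stays stable while a pooled connection is being reused, so log it and compare across requests.

$db = new mysqli('p:localhost', 'user', 'password', 'mydb');  // placeholder credentials
error_log('connection id: ' . $db->thread_id);  // same id across requests => the connection was reused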

Approaches for memcached sessions

I was thinking about using memcached to store sessions instead of mySQL, which seemed like a good idea, at first.
When it comes to the failover part of utilizing memcached servers, it's a bit worrying that my sessions will stop working if memcached goes offline. That would certainly affect my users.
There are a few techniques we already utilize to mitigate failure, including having a pool of servers available to compensate in the event of downtime, utilizing sharding/consistent hashing across the server pool, and so on. We would also do some sort of graceful degradation that tells users something has gone wrong and that they are welcome to log in again, in the event they are kicked out due to a memcached server failure.
So how does people generally deal with these issues when storing sessions on memcached servers?
First, if you put something in memcache only, you should be OK losing it. For everything else, there's persistent storage.
Second, memcached simply doesn't fail very often. There aren't any moving parts like disk platters. The only times I've ever lost sessions were due to reboots for kernel upgrades. But losing those sessions wasn't a big deal, because of the first point.
So to answer your question directly, if a datum is OK to lose, storing it in a memcache session only is OK. If it's not OK to lose, store it in persistent storage, and maybe cache it in memcache for speed.
You could create a fail-safe method by using both the DB and memcached: check whether the session object is in memcached; if not, load the session from the DB and then recreate the memcached entry. Just make sure that logging out / signing out flushes/removes the memcached entry...
So check memcached first; if that fails, check the DB... :)
