I am new to PHP, but in other web technologies you can share objects between page instances. For example, in Java JSP pages you can easily have a class that exists as a static class for the whole server instance. How do you do this in PHP?
I am not referring to session variables (at least I don't think so). This is more for the purpose of resource pooling (perhaps a socket to share, or database connections, etc.). So a whole class needs to be shared between subsequent loads, not just some primitive variables that I could store in the session.
I have also looked into PHP singleton classes, but I believe such a class is only shared within the same page load and not across pages.
To make things even clearer: I'm looking for something that would let me share, say, a socket connected to a server from a connectSocket.php page, such that every user who loads that page uses the same socket rather than opening a new one.
This is a bit of a difficult answer, and might not be exactly what you are looking for.
PHP is built upon a 'shared-nothing' architecture. If you require some type of state across your application, you must do this through other means.
First, I would recommend looking at the core of the problem: do you really need it? If the PHP application could die (and lose its state), is it OK to lose the data?
If you must maintain the state even after the application dies, then the best place to put the data is probably MySQL. PHP is intended as a thin layer around your business logic, so I can highly recommend this.
If you don't care about losing the data after a restart, the problem domain you're looking at is probably caching. I would recommend looking into memcached or, if you're on a single machine, APC. APC will definitely work for you with Apache on a single machine, but you will still have to code your application assuming you might lose the data.
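For example, a minimal cache-aside sketch with APCu (the userland successor to APC); the key name, TTL, and loader function are invented for illustration:

<?php
// Try the shared in-memory cache first; fall back to the slow source and repopulate.
$config = apcu_fetch('app_config', $hit);        // $hit is set to true on a cache hit
if (!$hit) {
    $config = load_config_from_database();       // hypothetical slow loader
    apcu_store('app_config', $config, 300);      // cache for 5 minutes; gone after a restart
}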
If you're worried your underlying datastore (MySQL) is too slow, but you still need to maintain the data after a restart, you should look into a combination of these two systems. You can always push and pull your data from the cache, and only send it over to MySQL when it changes.
If the data is purely user- or session-bound, you probably just want to look into the session system.
I've personally developed a reasonably large multi-tenant application, and although it's a pretty complex application, I've never needed the kind of true shared state you're looking for.
Update: Sorry, I did not read your note about sharing a socket. You will need a separate daemon to handle this. Perhaps if you can explain your problem further, there might be other approaches. What type of socket is this?
There's a fundamental difference between web-served Java and web-served interpreted languages like PHP and Perl. In Java, your web server will have an operating environment that maintains state (e.g. Tomcat). With interpreted languages, a request to your web server will generally spawn a new web server thread, which in turn loads a fresh operating environment for that thread, in this case, the PHP environment.
Therefore, in PHP, there is no concept of page instances. Every request to the web server is a fresh start. All the classes are re-loaded, so there is no concept of class sharing, nor is there a concept of resource pooling, unless it is implemented externally.
Sharing sockets between web requests therefore isn't really possible.
This is likely only a partial answer, but you can save an instance of a class into a session variable and access it again on a later request.
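A minimal sketch of that idea, assuming a plain, serializable ShoppingCart class (invented for illustration). Note that this only works for plain data; resources such as sockets or database handles cannot be serialized into a session.

<?php
// The class definition must be loaded before session_start() so the stored
// object can be unserialized back into a ShoppingCart instance.
require 'ShoppingCart.php';
session_start();

if (!isset($_SESSION['cart'])) {
    $_SESSION['cart'] = new ShoppingCart();
}

$_SESSION['cart']->addItem('sku-123');   // the same object data is restored on every later request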
Most of the PHP database libraries use connection pooling already. You call, for example, pg_connect as if you were requesting a new connection, but if the connection string is the same as a connection that already exists, you will get the established connection back instead. If you only care about pooling for database access, then you can just confirm that it exists in the db library you're using.
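For instance, both the pgsql and mysqli extensions expose persistent connections; the credentials below are placeholders:

<?php
// Reuses an existing persistent connection with the same connection string
// inside this web server child process, instead of opening a new one.
$pg = pg_pconnect('host=localhost dbname=app user=web password=secret');

// mysqli does the same when the host is prefixed with "p:".
$mysql = new mysqli('p:localhost', 'web', 'secret', 'app');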
Another (horrible) solution may be to load the object's data into a $_SESSION variable and then use it to rebuild the object on the other page.
In fact, this is the solution I'm going to follow in my project until I find a better one.
Regards!
Related
I have a setup where I have a host which routes multiple requests in a load-balanced fashion. My backend uses PHP. Now, I need to use the $_SESSION object for some of my processing.
Will $_SESSION work where I have 3 backend servers which can receive any request at any time?
If not, can anyone suggest alternatives to handle such cases?
EDIT: I do understand that we can store sessions in a database and find a way to track them. But the problem in a real-time, load-balanced production scenario is the number of calls that go to the DB. That can be a real bummer for my performance. I'm kind of hoping we can handle this at the web server level.
Not sure if it is possible, but if two web servers had some kind of replication mechanism like databases do, it would be brilliant. I wouldn't have to do a thing.
If such a thing does not exist, PHP should be modified to support it. That would actually make it a seriously robust language.
My suggestion is to set up PHP to handle sessions in the database (that way every server can access the session data, independent of which one receives the request).
A good tutorial for that can be found HERE
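For example, a minimal database-backed handler built on SessionHandlerInterface and PDO might look roughly like this; the table name, columns, and credentials are placeholders, and a real handler would also need locking and sensible garbage collection:

<?php
class DbSessionHandler implements SessionHandlerInterface
{
    private $pdo;

    public function __construct(PDO $pdo) { $this->pdo = $pdo; }

    public function open($savePath, $name) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);
        return (string) $stmt->fetchColumn();   // empty string when the session is new
    }

    public function write($id, $data)
    {
        $stmt = $this->pdo->prepare('REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())');
        return $stmt->execute([$id, $data]);
    }

    public function destroy($id)
    {
        return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')->execute([$id]);
    }

    public function gc($maxLifetime)
    {
        $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE updated_at < NOW() - INTERVAL ? SECOND');
        return $stmt->execute([$maxLifetime]);
    }
}

// Every web server points at the same database, so any of them can pick up the session.
$pdo = new PDO('mysql:host=db.internal;dbname=app', 'web', 'secret');
session_set_save_handler(new DbSessionHandler($pdo), true);
session_start();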
Look into either memcache (Is it recommended to store PHP Sessions in MemCache?) or REDIS (https://joshtronic.com/2013/06/20/redis-as-a-php-session-handler/).
There is a good tutorial on setting up memcache on Ubuntu at https://www.globo.tech/learning-center/php-memcached-instances-ubuntu-16/, which also covers using HAProxy as a load balancer (although you may already have a solution).
Perhaps have a read of https://blog.newtonhq.com/session-handling-for-1-million-requests-per-hour-68cdece15030.
You can store sessions in memcache in order to share sessions among servers.
Please have a look at the documentation here.
We have a need to keep a collection of socket objects around that are associated with different client browser sessions, so that when the client's browser makes a subsequent request, we can use the existing socket connection/session to make a request on their behalf. The socket is to something that is not HTTP. Is there a way to store objects like this in PHP that will survive across page requests?
Is there a way to store objects like this in PHP that will survive across page requests?
No.
To quote zombat's answer to a very similar question:
In PHP, there is no concept of page instances. Every request to the web server is a fresh start. All the classes are re-loaded, so there is no concept of class sharing, nor is there a concept of resource pooling, unless it is implemented externally. Sharing sockets between web requests therefore isn't really possible.
If the objects were serializable, you could use PHP's serialize() and unserialize() in conjunction with MySQL memory tables to solve this issue. Other than that, I don't think there's much else you can do.
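A rough sketch of that approach, assuming a MEMORY-engine table named obj_cache with id and data columns (all names and credentials invented):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'web', 'secret');

// Store the object's serialized form under a key (resources such as sockets will not survive this).
$stmt = $pdo->prepare('REPLACE INTO obj_cache (id, data) VALUES (?, ?)');
$stmt->execute(['user42-state', serialize($someObject)]);

// On a later request, restore it.
$stmt = $pdo->prepare('SELECT data FROM obj_cache WHERE id = ?');
$stmt->execute(['user42-state']);
$someObject = unserialize($stmt->fetchColumn());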
In PHP the script dies after the page loads, so there is no way to do this directly. However, you can create a long-living daemon process which opens all the required sockets and keeps them open between page loads. Of course, you'll need to isolate those sockets with some kind of access key to make sure different sessions can't reach other users' sockets. Also keep in mind that the daemon will die at some point, so make sure you have logic to reopen any sockets that were lost. But it can certainly be achieved.
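A bare-bones sketch of such a daemon: it is started from the CLI, keeps one upstream connection open, and answers short-lived PHP pages over a local Unix socket. The socket paths and the one-line protocol are invented; a real daemon needs error handling, per-user access keys, and reconnect logic.

<?php
// daemon.php -- run from the CLI, not through the web server.
$upstream = stream_socket_client('tcp://backend.internal:9000', $errno, $errstr, 30);
$server   = stream_socket_server('unix:///tmp/php-shared.sock', $errno, $errstr);

while (true) {
    $client = @stream_socket_accept($server, 300);   // wait up to 5 minutes for the next page request
    if ($client === false) {
        continue;                                    // timed out; keep waiting
    }
    $request = fgets($client);                       // one line per request from a PHP page
    fwrite($upstream, $request);                     // forward it over the long-lived socket
    fwrite($client, fgets($upstream));               // relay the reply back to the page
    fclose($client);
}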
Thanks.
This isn't a complete answer; but it steps towards an answer.
As has been pointed out ad nauseam elsewhere, the standard, classic model of using PHP via a web server (Apache, Nginx, etc.) does not allow you to do this, because each page hit starts with an entirely fresh set of variables.
Three thoughts:
You need a persistence layer. Obviously this is where you store stuff in a database, or use APC (APCu in PHP7+), Redis, or something similar.
Your problem, however, is that you specify "unserializable."
My next suggestion would be: perhaps you could persist the elements necessary to construct the object, and re-initialise the object for each PHP request. It won't be as amazingly performant as you'd like, but it's the most useful solution without having to rewrite everything. Perhaps you've already tried it.
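A sketch of that pattern with APCu; the handshake function and BackendClient class are invented for illustration:

<?php
// Persist only the cheap, serializable ingredients (here: a token obtained through
// an expensive handshake), and rebuild the unserializable client on every request.
$token = apcu_fetch('backend_token', $hit);
if (!$hit) {
    $token = perform_expensive_handshake();          // hypothetical one-off negotiation
    apcu_store('backend_token', $token, 3600);
}

$client = new BackendClient('tcp://backend.internal:9000', $token);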
The next step is to do something completely out-there. One of the advantages the NodeJS infrastructure has is that the entire server loop persists.
So you could try one of the alternate methods of running PHP, like ReactPHP or PHP FastCGI. (There are others, but I can't remember them off the top of my head. I'll edit this if I remember.)
This involves an entirely different way of writing PHP--as different as NodeJS programming is from stuffing around with jQuery inside your browser. It wouldn't run within Apache. Rather, it would run directly as an app on your Unix server. And you'll have to cater for things like garbage collection so you don't have memory leaks, and write nice tight event loops.
The plus side is, because your thread persists and handles each subsequent request, you're able to handle requests in exactly the way you're after.
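For a feel of what that looks like, here is a rough ReactPHP sketch (class names as of react/http v1.x; the backend address is made up). Anything created before the server starts, such as the socket below, stays alive across requests. A real ReactPHP app would use non-blocking streams instead of fgets(), but the point here is the persistence.

<?php
require 'vendor/autoload.php';

use Psr\Http\Message\ServerRequestInterface;
use React\Http\HttpServer;
use React\Http\Message\Response;
use React\Socket\SocketServer;

// Opened once at startup, then shared by every request this process handles.
$backend = stream_socket_client('tcp://backend.internal:9000', $errno, $errstr, 30);

$http = new HttpServer(function (ServerRequestInterface $request) use ($backend) {
    fwrite($backend, "ping\n");                      // blocking I/O, kept simple for illustration
    return Response::plaintext(fgets($backend));
});

$http->listen(new SocketServer('127.0.0.1:8080'));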
Your comment mentions these are local sockets. And it sounds like the PHP application acts as a socket client. So the only thing which is needed by the PHP session is a common identifier, such as a user ID, and for the sockets to be named consistently.
So, for example:
<?php
session_start();
// Each user's socket is named consistently, e.g. /tmp/socket-<userid>.
$userid = $_SESSION['userid'];
$fp = stream_socket_client("unix:///tmp/socket-" . $userid, $errno, $errstr, 30);
if ($fp) {
    fwrite($fp, "request\n");       // illustrative; the actual protocol depends on your daemon
    $response = fread($fp, 8192);
    fclose($fp);
}
I want to create a live, checkers-like app, which will work like this: there will be multiple icons/avatars displayed on a checkerboard-like surface. I want to have a command prompt beneath this board, or some other sort of interface, that will allow users to control a certain avatar and get it to perform actions. Multiple users will be using it at one time, and each will be able to view the other users' changes/actions to the checkerboard.
What I'm wondering is: what's the best way to do this? I've got my HTML, CSS, and JS approach down, but not my data storage method. I know that, using PHP, I've got the choice of file-based storage, MySQL, or some other method. I need to know which is better, because I don't want server lag, poor response times, or other issues, especially since actions will be performed every second or two by multiple users.
I've done similar stuff before, but I want to hear how more experienced programmers would handle it (advice, etc.).
Sounds like a great project for node.js!
To clarify, node.js is a server-side implementation of javascript. What you'll want is a comet based application (a web-based client application that receives server side pushes instead of the client constantly polling the server), which is exactly what node.js is good at.
Traditional Ajax requires your clients to poll the server for data. This creates enormous overhead for both the client and the server. Allowing the server to push requests directly to the client without the client repeatedly asking solves the overhead issue and creates a more responsive interface. This is accomplished by holding asynchronous client connections on the server and only returning when the server has something to respond with. Once the server responds with data, another connection is immediately created and held by the server again until data is ready to be sent.
You may be able to accomplish the same thing with PHP, but I'm not that familiar with PHP and Comet-type applications; a crude long-polling endpoint in plain PHP might look roughly like the sketch below.
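This is only an illustration; the JSON state file, the version counter, and the timings are all made up:

<?php
// poll.php -- held open until the board state changes or we give up and let the client re-poll.
$lastVersion = (int) ($_GET['version'] ?? 0);
$deadline    = time() + 25;                  // give up before typical 30-second client timeouts

do {
    $state = json_decode(file_get_contents('board-state.json'), true);
    if ($state['version'] > $lastVersion) {
        header('Content-Type: application/json');
        echo json_encode($state);            // push the new state to the waiting client
        exit;
    }
    usleep(250000);                          // wait 250 ms before checking again
} while (time() < $deadline);

http_response_code(204);                     // no change; the client immediately re-polls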
Number of users and hosting costs will play into your file vs DB options. If you're planning on more than a couple of users, I'd stick to the database. There are some NoSQL options available out there, but in my experience MySQL is much faster and more reliable than those options.
Good luck with your project!
http://en.wikipedia.org/wiki/Comet_%28programming%29
http://www.nodejs.org/
http://zenmachine.wordpress.com/2010/01/31/node-js-and-comet/
http://socket.io/ - abstracts away the communication layer with your clients based on their capability (LongPolling, WebSockets, etc.)
MySQL and XCache !!!!
Make sure you use prepared statements so MySQL does not need to parse the SQL again each time. MEMORY tables could also be used for in-memory storage.
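For the prepared-statement point, a quick PDO sketch (the table, columns, and credentials are made up):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=game', 'web', 'secret');

// Prepared once; MySQL parses and plans the statement a single time and reuses it.
$move = $pdo->prepare('UPDATE board SET x = :x, y = :y WHERE avatar_id = :id');

foreach ($incomingMoves as $m) {
    $move->execute([':x' => $m['x'], ':y' => $m['y'], ':id' => $m['id']]);
}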
Of course make use of indexes appropriately.
If the 'gamestate' is not that important you can even store everything in XCache.
Remember that XCache does not store data persistently (it is lost after an Apache restart).
I'm trying to figure out the best way to minimize resource utilization when I have PHP talking to various backend services (e.g. Amazon S3 or any other random web services -- I'd like a general solution). Ideally, I'd like to have a single persistent connection to the backend (or maybe a small pool of persistent connections) with some caching, and then have all of the PHP tasks share it. We can consider it all read-only for the purposes of this question. It's not obvious to me how to do this in PHP. There's the database-specific stuff like mysql_pconnect(), but that doesn't really do it for me.
One idea I've had, which seems somewhat suboptimal (but is still better than having every single request create and destroy a new connection), is to use a local caching proxy (in a separate process) that would effectively do the pooling and caching. PHP would still be opening and closing a connection for every request, but at least it would be to a local process, so it should be a little faster (and it would reduce load on the backends). But it doesn't seem like this kind of craziness should be necessary. There's gotta be a better way. This is easy in other languages. Please tell me what I'm missing!
There's a large ideological disconnect between the various web technologies. Some are essentially daemons that run full-time in the background, and handle requests passed in on their own. Because there's a process always running, you can have a pool of already open existing working connections.
PHP (and normal CGI scripts) does not have a daemon behind the scenes. Every time a request comes in, the PHP interpreter is started up with a clean slate, compiles the scripts, and runs the bytecode. There's no persistence. The PHP database functions that support persistent connections establish the connection at the web server child level (i.e. mod_php attached to an Apache process). This isn't exactly a connection pool, as you can only ever see the persistent connection attached to your own process.
Without having a daemon or similar process sitting behind the scenes to hand out resources, you won't get real connection pooling.
Keep in mind that most new connections to most services are not heavy-weight, and non-database connections that are heavy-weight might not be friendly to the concept of a connection pool.
Before you think about writing your own PHP-based daemon to handle stuff like this, keep in mind that it may already be a solved problem. Python came up with something called WSGI, with a similar implementation in Ruby called Rack. Perl also has something remarkably similar but I can't remember the name of it off the top of my head. A quick look at Google didn't show any PHP implementations of WSGI, but that doesn't mean they don't exist...
Because S3 and other webservices use HTTP as their transport, you won't get a significant benefit from caching the connection.
Although you may be using an API that appears to authenticate as a first step, a look at the S3 documentation shows that the authentication happens with every request, so there is no benefit in authenticating once and reusing a connection.
Web service requests over HTTP are lightweight and typically stateless. Once your request has been answered, no resources (connection or session state) are consumed on the server. This allows the web service implementer to use many machines to answer your request without tying up resources on a particular server.
OK, so I've got this totally rare and unique scenario of a load-balanced PHP website. The bummer is: it didn't use to be load balanced. Now we're starting to get issues...
Currently the only issue is with PHP sessions. Naturally nobody thought of this issue at first so the PHP session configuration was left at its defaults. Thus both servers have their own little stash of session files, and woe is the user who gets the next request thrown to the other server, because that doesn't have the session he created on the first one.
Now, I've been reading the PHP manual on how to solve this situation. There I found the nice function session_set_save_handler(). (And, coincidentally, this topic on SO.) Neat. Except I'll have to call this function in all the pages of the website. And developers of future pages would have to remember to call it all the time as well. Feels kinda clumsy, not to mention probably violating a dozen best coding practices. It would be much nicer if I could just flip some global configuration option and voilà, the sessions all get magically stored in a DB or a memory cache or something.
Any ideas on how to do this?
Added: To clarify - I expect this to be a standard situation with a standard solution. FYI - I have a MySQL DB available. Surely there must be some ready-to-use code out there that solves this? I can, of course, write my own session saving stuff and auto_prepend option pointed out by Greg seems promising - but that would feel like reinventing the wheel. :P
Added 2: The load balancing is DNS based. I'm not sure how this works, but I guess it should be something like this.
Added 3: OK, I see that one solution is to use auto_prepend option to insert a call to session_set_save_handler() in every script and write my own DB persister, perhaps throwing in calls to memcached for better performance. Fair enough.
Is there also some way that I could avoid coding all this myself? Like some famous and well-tested PHP plugin?
Added much, much later: This is the way I went in the end: How to properly implement a custom session persister in PHP + MySQL?
Also, I simply included the session handler manually in all pages.
You could set PHP to handle the sessions in the database, so all your servers share the same session information, since they all use the same database for it.
A good tutorial for that can be found here.
The way we handle this is through memcached. All it takes is changing the php.ini similar to the following:
session.save_handler = memcache
session.save_path = "tcp://path.to.memcached.server:11211"
We use AWS ElastiCache, so the server path is a domain, but I'm sure it'd be similar for local memcached as well.
This method doesn't require any application code changes.
You didn't mention what technology you are using for load balancing (software, hardware, etc.); but in any case, the solution to your problem is to employ "sticky sessions" on the load balancer.
In summary, this means that when the first request from a "new" visitor comes in, they are assigned a specific server from the cluster: all future requests for the lifetime of their session are then directed to that server. In practice this means that applications written to work on a single server can be up-scaled to a balanced environment with zero/few code changes.
If you are using a hardware balancer, such as a Radware device, then the sticky sessions is configured as part of the cluster setup. Hardware devices usually give you more fine-grained control: such as which server a new user is assigned to (they can check for health status etc. and pick the most healthy / least utilised server), and more control of what happens when a server fails and drops out of the cluster. The drawback of hardware balancers is the cost - but they are worth it imho.
As for software balancers, it comes down to what you are using. For Apache there is the stickysession property on mod_proxy, and plenty of articles via Google to get this working with the PHP session (for example).
Edit:
From other comments posted after the original question, it sounds like your "balancing" is done via Round Robin DNS, so the above probably won't apply. I'll refrain from commenting further and starting a flame against round robin dns.
The easiest thing to do is configure your load balancer to always send the same session to the same server.
If you still want to use session_set_save_handler then maybe take a look at auto_prepend.
If you have time and you still want to check more solutions, take a look at
http://redis4you.com/articles.php?id=01..
Using Redis you get fault tolerance. From my point of view, it could be better than the memcache solutions because of this robustness.
If you are using PHP sessions, you could share the /tmp directory (where I think the sessions are stored) over NFS between all the servers in the cluster. That way you don't need a database.
Edited: You can also use an external service like memcachedb (persistent and fast), store the session info in the memcachedb index, and identify it with a hash of the content or even the session ID.
When we had this situation we implemented some code that lives in a common header.
Essentially, for each page we check whether we know the session ID. If we don't, we check whether we're in the situation you describe by looking for stored session data in the DB. Otherwise we just start a new session.
Obviously this requires all relevant data to be copied to the DB, but if you encapsulate your session data in a separate class then it works OK.
You could also try using memcache as the session handler.
Might be too late, but check this out: http://www.pureftpd.org/project/sharedance
Sharedance is a high-performance server to centralize ephemeral key/data pairs on remote hosts, without the overhead and the complexity of an SQL database.
It was mainly designed to share caches and sessions between a pool of web servers. Access to a Sharedance server is trivial through a simple PHP API and it is compatible with the expectations of PHP 4 and PHP 5 session handlers.
When it comes to PHP session handling in a load-balanced cluster, it's best to have sticky sessions. For that, ask the network or datacenter team that maintains the load balancer to enable sticky sessions. Once that is enabled, you won't need to worry about sessions on the PHP end.