How is application lifecycle managed in PHP frameworks? - php

I come from a Java background, where the JVM is a long-running process, servers can be started by user code and application state is normally kept and managed by the framework of choice (e.g. Spring).
In the PHP world, things are stateless: each script execution is volatile in the sense that it does not keep state (unless it uses an external medium, like a DB, an in-memory cache, etc.). The web server invokes the script over CGI/FastCGI (typically through php-fpm to reuse worker processes).
Is it correct that every single HTTP request incurs the overhead of initialising the framework and middleware just for that request? It appears so when reading Laravel's Request Lifecycle, for example.
Doesn't this entail a lot of repetitive overhead for every single request that enters the system (e.g. detecting environment, initialising handlers, routes, ORM, logging, etc.)?
Or am I missing something? Do these frameworks indeed keep state in some manner?
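For concreteness, this is roughly what a Laravel-style front controller (public/index.php) seems to do on every single request, paraphrased from the documented lifecycle; exact class names and structure vary by Laravel version:

require __DIR__.'/../vendor/autoload.php';              // Composer autoloader, every request

$app = require_once __DIR__.'/../bootstrap/app.php';    // build the service container

$kernel = $app->make(Illuminate\Contracts\Http\Kernel::class);

$response = $kernel->handle(                             // bootstrappers, middleware, routing
    $request = Illuminate\Http\Request::capture()
);

$response->send();

$kernel->terminate($request, $response);                 // after-response work; then the process state is discarded

If I read this right, the autoloader, container, kernel and middleware stack are all rebuilt per request and thrown away afterwards.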

Related

Is it possible to share an object between HTTP requests on the server side (PHP, Symfony)?

I want to improve the performance of a web service which needs to create a stateless but very large (in terms of memory) object. This takes quite a lot of time - 0.3s. Replacing the constructor with loading a serialized object improved the time to 0.1s. Is there a way to keep such an object in HTTP server memory (PHP, Symfony 4) and use it across different HTTP requests?
First off: This is usually done with a key-value storage like Redis or Memcache. But that doesn't get rid of the time it takes to unserialize which is probably the bottleneck here.
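A minimal sketch of that approach using the phpredis extension (assumptions: a Redis server on 127.0.0.1, and a hypothetical buildVeryLargeObject() standing in for the slow constructor). Note that the unserialize() on a cache hit is exactly the ~0.1s cost that remains on every request:

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$blob = $redis->get('app.big_object');
if ($blob === false) {
    $object = buildVeryLargeObject();                      // hypothetical 0.3s construction
    $redis->setex('app.big_object', 3600, serialize($object));
} else {
    $object = unserialize($blob);                          // ~0.1s: this cost stays per request
}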
It depends what your setup is. The usual "web server answers a request and boots up PHP via either mod_php, CGI or FPM" can't do this. HTTP is stateless and PHP processes are isolated from each other. If you start your own server directly with PHP with something like ReactPHP, you can share memory because your application will always be "up", much akin to a Java server process. But that changes the whole programming paradigm as suddenly you will have to take care of memory leaks, multithreading and such.
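For illustration, a minimal ReactPHP-style sketch of that "always up" model (assuming react/http and react/socket are installed via Composer; class names differ slightly between versions). The expensive object is built once when the process starts and then reused for every request:

require __DIR__.'/vendor/autoload.php';

$expensiveObject = buildVeryLargeObject();   // hypothetical 0.3s construction, done once at startup

$http = new React\Http\HttpServer(function (Psr\Http\Message\ServerRequestInterface $request) use ($expensiveObject) {
    // The object is already in memory here: no per-request rebuild, no unserialize.
    return new React\Http\Message\Response(
        200,
        ['Content-Type' => 'text/plain'],
        'Served from a long-running PHP process' . PHP_EOL
    );
});

$http->listen(new React\Socket\SocketServer('127.0.0.1:8080'));

// Trade-off noted above: you now own memory leaks, request isolation and restarts yourself.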

Stateless & asynchronous web-server with PHP (and Symfony)

TL;DR: I'm not sure this topic has its place on StackOverflow, but basically it's just a topic of debate and thinking about making PHP apps like we would do with NodeJS for example (stateless request flow, asynchronous calls, etc.)
The situation
We know NodeJS can be used as both a web-server and web-app.
But for PHP, the internal web-server is not recommended for production (so says the documentation).
But, as the Symfony full-stack framework is based on the Kernel, which handles Request objects, we should be able to send lots of requests to the same kernel, if only we could "bootstrap" the PHP web server (not the app) by creating a kernel before listening for HTTP requests. Our router would then only create a Request object and make the kernel handle it.
But for this, a Symfony app has to be stateless: for example, we need Doctrine to effectively clear its unit of work after each request, or we would need to somehow isolate components per request (by identifying a request with a unique reference id? or by using separate PHP processes?), and obviously we would need more asynchronous facilities in PHP, or in the way we use the internal web server.
The main questions I sometimes ask myself, and now ask to the community
To clarify this, I have some questions about PHP:
Why exactly is the internal PHP webserver not recommended for production?
I mean, if we can configure how the server is run and its "router" file, we should be able to use it like any PHP server, yes or no?
How does it behave internally? Is memory shared between two requests?
By using the router, it seems obvious to me that variables are not shared, else we could make nodejs-like apps, but it seems PHP is not capable of doing something like this.
Is it really possible to make a fully stateless application with Symfony?
e.g. I send two different requests to the same kernel object, in this case, is there any possibility that the two requests create a conflict in Symfony core components?
Actually, the idea of "Create a kernel -> start server -> on request, make the kernel handle it" behavior would be awesome, because it would be something quite similar to NodeJS, but actually, the PHP paradigm is not compatible with this because we would need each request to be handled asynchronously. But if a kernel and its container are stateless, then there should be a way to do something like that, shouldn't there?
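As a thought experiment, this is roughly what that would look like with a standard Symfony skeleton (assuming the usual App\Kernel class and hypothetical routes; a sketch of the idea, not something the stock SAPI setup gives you):

use App\Kernel;
use Symfony\Component\HttpFoundation\Request;

require __DIR__.'/vendor/autoload.php';

$kernel = new Kernel('prod', false);                      // container is built once...
$kernel->boot();

foreach (['/articles', '/articles/42'] as $path) {        // hypothetical routes
    $request  = Request::create($path, 'GET');
    $response = $kernel->handle($request);                // ...and reused here, no re-bootstrapping
    echo $response->getStatusCode(), ' ', $path, PHP_EOL;
    $kernel->terminate($request, $response);
    // In a real long-running setup, stateful services (e.g. Doctrine's
    // EntityManager / unit of work) must also be reset between iterations.
}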
Some thoughts
I've heard about React PHP, Ratchet PHP for Websocket integration, Icicle, and PHP-PM, but I have never tried them; it seems a bit too complex to me for now (I may lack some concepts about asynchronicity in apps, that's why my brain won't understand until I have some more answers :D).
Is there any way that these libraries could be used as "wrappers" for our kernel request handling?
I mean, let's create this reactphp/icicle/whatever environment setup, create our kernel like we would do in any Symfony app, and run the app as a web server; when a request is received, we send it asynchronously to our kernel, and as long as the kernel has not sent the response, the client waits for it, even if the response is also sent asynchronously (from nested callbacks, etc., like in NodeJS).
This would make any existing Symfony app compatible with this paradigm, as long as the app is stateless, obviously. (if the app config changes based on a request, there's a paradigm issue in the app itself...)
Is it even a possible reality with PHP libraries rather than using PHP internal web-server in another way?
Why ask these questions?
Actually, it would be kind of a revolution if PHP could implement real asynchronous stuff internally, like Javascript has, but this would also have a big impact on performance in PHP, because of persistent data in our web server and less bootstrapping (require the autoloader, instantiate the kernel, get heavy things from cached files, resolve routing, etc.).
In my thoughts, only $kernel->handleRaw($request); would consume CPU; the whole rest (container, parameters, services, etc.) would already be in memory or, in the case of services, "awaiting instantiation". Then, performance boost, I think.
And it may troll a bit the people who still think PHP is a very bad and slow language to use :D
For readers and responders ;)
If a core PHP contributor reads me, is there any way that internally PHP could be more asynchronous even with a specific new internal API based on functions or classes?
I'm not a pro of all of these concepts, and I hope really good experts are going to read this and answer me!
It could be a great advance in the PHP world if all of this was possible in any way.
Why exactly is the internal PHP webserver not recommended for production? I mean, if we can configure how the server is run and its "router" file, we should be able to use it like any PHP server, yes or no?
Because it's not written to behave well under load, and there are no configuration options that let you handle HTTP request processing before it reaches PHP.
Basically, it lacks features if you compare it to nginx. It would be like comparing a skateboard to a Lamborghini.
It can get you from A to B but.. you get the gist.
How does it behave internally? Is memory shared between two requests? By using the router, it seems obvious to me that variables are not shared, else we could make nodejs-like apps, but it seems PHP is not capable of doing something like this.
The documentation states it's single-threaded, so it appears it would behave the same as if you wrote while(true) { // all your processing here }. It's a toy designed to quickly check a few things when you can't be bothered to set up a proper web server before trying out your code.
Is it really possible to make a fully stateless application with Symfony? e.g. I send two different requests to the same kernel object; in this case, is there any possibility that the two requests create a conflict in Symfony core components?
Why would it go to the same kernel object? Why not design your app in such a way that it's not relevant which object or even processing server gets the request? Why not design for redundancy and high availability from the get go? HTTP = stateless by default. Your task = make it irrelevant what processes the request. It's not difficult to do so, if you avoid coupling with the actual processing server (example: don't store sessions to local filesystem etc.)
Actually, the idea of "Create a kernel -> start server -> on request, make the kernel handle it" behavior would be awesome, because it would be something quite similar to NodeJS, but actually, the PHP paradigm is not compatible with this because we would need each request to be handled asynchronously. But if a kernel and its container are stateless, then there should be a way to do something like that, shouldn't there?
Actually, nginx + php-fpm behaves almost identically to node.js.
nginx uses a reactor to handle all connections on the same thread. Node.js does the exact same thing. What you do is create a closure / callback that is fed into Node's libraries and I/O is handled in a threaded environment. Multithreading is abstracted from you (related to I/O, not CPU). That's why you can experience that Node.js blocks when it's asked to do a CPU intensive task.
nginx implements the exact same concept, except this callback isn't a closure written in javascript. It's a callback that expects an answer from php-fpm within <timeout> seconds. nginx takes care of async for you; your task is to write what you want in PHP. Now, if you're reading a huge file, then async code in your PHP would make sense, except it's not really needed.
With nginx and sending off requests for processing to a fastcgi worker, scaling becomes trivial. For example, let's assume that 1 PHP machine isn't enough to deal with the amount of requests you're dealing with. No problem, add more machines to nginx's pool.
This is taken from nginx docs:
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;
    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
You define a pool of servers and then assign various weights / proxying options related to balancing how requests are handled.
However, the important part is that you can add more servers to cope with availability requirements.
This is the reason why nginx + php-fpm stack is appealing. Since nginx acts as a proxy, it can proxy requests to node.js as well, letting you handle web socket related operations in node.js (which, in turn, can perform an HTTP request to a PHP endpoint, allowing you to contain your entire app logic in PHP).
I know this answer might not be what you're after, but what I wanted to highlight is that the way node.js works (conceptually) is identical to what nginx does when it comes to handling incoming requests. You could make PHP work as Node does, but there's no need for that.
Your questions can be summed up as this:
"Could PHP be more like Node?"
to which the answer is of course "Yes." But that leads us to another question:
"Should PHP be more like Node?"
and now the answer is not that obvious.
Of course in theory PHP could be made more like Node - even to a point to make it exactly the same. Just take the next version of Node and call it PHP 6.0 or something.
I would argue that it would be harmful to both Node and PHP. There is a diversity in the runtime environments for a reason. One of the variations is the concurrency model used in a given environment. Making one like the other would mean less choice for the programmer. And less choice is less freedom of expression.
PHP and Node were created in different times and for different reasons.
PHP was developed in 1995 and the name stood for Personal Home Page. The use case was to add some server-side dynamic features to HTML. We already had SSI and CGI at that point but people wanted to be able to inject right into the HTML - synchronously, as it wouldn't make much sense otherwise - results of database queries and other computations. It isn't a surprise how good it is at this job even today.
Node, on the other hand, was developed in 2009 - almost 15 years later - to create high performance network servers. So it shouldn't surprise us that writing such servers in Node is easy and that they have great performance characteristics. This is why Node was created in the first place. One of the choices it had to make was a 100% non-blocking environment of single-threaded, asynchronous event loops.
Now, single-threaded concurrency is conceptually more difficult than multi-threading. But if you want performance for I/O-heavy operations, then currently you have no other option. You will not be able to create 10,000 threads, but you can easily handle 10,000 connections with Node in a single thread. There is a reason why nginx is single-threaded and why Redis is single-threaded. And one common characteristic of nginx and Redis is amazing performance - but both of those were hard to write.
Now, as far as Node and PHP go, those technologies are so far from each other that it's hard to even comprehend what their fusion would look like. It reminds me of the old April Fools' joke about unifying Perl and Python that so many people believed in.
PHP has its strengths and Node has its strengths. And just like it would be hard to imagine Node with blocking I/O, it would be equally hard to imagine PHP with non-blocking I/O.
To summarize: it could be possible to make PHP like Node, but I wouldn't expect it to happen any time soon - if ever.

Bootstrapping web application - PHP, Ruby, Python, Node.js

Assumption 1: When running PHP via apache or nginx, each incoming request results in the script bootstrapping all of its include files, so essentially there is no shared memory, and the "world is recreated" upon each request.
Assumption 2: Node.js applications are bootstrapped when the server is started. The "world is only created once".
Are Python and Ruby applications bootstrapped in a similar way as PHP or as Node.js?
If possible, would appreciate some guidance regarding terminology: is this basically a question of multi-threaded or concurrency support?
It depends totally on how the application is run.
Most web applications in Python are run as servers which receive requests, rather than being a 'dead' script that gets called on request. "The world is created before the request arrives".
Note I didn't say "only once", as you phrased it.
The reason I phrase it that way is that there are different ways of serving python web applications.
Most python (web) apps are 'WSGI' applications. WSGI is a specification which basically requires the application (or framework) to have a single entry-point function:
def app(environment, start_response):
where environment is all the stuff like the address being asked for, cookies, request type, query args, etc. start_response is a callback function, which the app function needs to call with the response HTTP code, and headers.
start_response('200 OK', [('Content-type', 'text/html')])
for example. Once this has been called, the function either needs to return the body of the response to be sent back to the client, or yield it back as a generator (for very big files).
All of this is usually handled by a WSGI framework, which does all that transparently and provides an easier interface for writing your application logic.
In PHP, all your routing is normally handled by apache (or nginx/php-fpm) running individual script files. This, as you rightly suggest, requires re-creating the whole world each time. With WSGI, the world is already created, and the WSGI server simply calls the application function each time a new request comes in. Most Python-based web frameworks have some kind of router, either the Flask style:
@app.route('/elephants')
def elephants_view():
    return 'view the elephants!'
or Django style routing table:
urls = [
    (r'^/kangaroos$', 'views.kangaroos'),
]

# in views.py:
def kangaroos():
    return 'kangaroos, baby!'
or other ways. There are many different WSGI frameworks which all have their pros and cons. Some of the popular WSGI-based frameworks include Flask, Django, Falcon.
There are many different ways to serve WSGI applications. Flask & Django come with basic development servers, which are single-threaded, and great for development, but not suitable for production.
Since they're single-threaded, "the world is only created once". So global variables last between requests, etc.
There are many other WSGI servers, which can serve any of the frameworks built on top of WSGI. Waitress is a great pure-Python one. uWSGI is another production-grade server, as is Gunicorn, and there are many others.
These servers do NOT guarantee that global state is shared between requests, and will 'create the world' an unspecified (configurable) number of times. Some of them use a fixed number of workers, which the main incoming receiver will pass requests out to; others may spin up new worker threads or processes as they are needed.
Flask, and most of the other Python WSGI frameworks do have the concepts of 'Application Globals' which is how you can store data which must last the whole server lifespan. These special values are shared between 'worlds'. (By using magic rings and pools in a forest).
(Side note: For fun, I started writing a WSGI server using the very cool gevent async library, which does work in the same kind of manner as Node.js in that it is only a single process, which does as much as possible asynchronously (although without Node.js callback style...) in a single thread. It's very short, just one file, so it's pretty easy to see how it all works.)
Ruby is pretty similar to Python in this way, except the protocol is called 'Rack', rather than WSGI, and common servers are 'Puma', 'Unicorn' and 'Rainbows!'. Common Ruby Rack-based frameworks are 'Ruby on Rails', 'Sinatra', and 'Merb'.
One advantage of this kind of model is that you can create 'middleware' which sits between the application responder, and the WSGI (or Rack) server, and "does stuff" to the request on the way (such as minifying javascript, caching, logging, authentication, etc).
Another good introduction to WSGI, and how it works is in 'Full stack Python'.
There are other ways of writing web servers than using WSGI (or Rack). For instance, the Tornado and Twisted frameworks in Python allow a totally different asynchronous style of (web) app to be written. They are also using 'the world is created before requests come in' style servers.

Is there a central / main context in PHP?

Here is my question: Consider Django or web2py in Python (as web frameworks) or Java WEB applications (being simple servlets apps or complex struts2/wicket/whatever frameworks). They share at least two things I like:
There's a Context environment or a way to access data outside of the request or session contexts (i.e. global data, singletons, pools ... anything that can share in-memory values and behavior).
Classes are loaded/initialized ONCE. Perhaps I'm missing something, but AFAIK in PHP a class is loaded and initialized on a PER-REQUEST basis (so, in a regular class, if I e.g. modify a static value, it will live only in the current request, and even a simultaneous request hitting that value will get a different one).
Is there a way to get that in PHP? e.g. in Python/Django I can declare a regular class, and that class can hold static data or be a true singleton (again: perhaps a pool or a kind of central queue manager), and it will be the same object until the Django server dies (note: modules in Python are kept loaded in the Python context when imported).
The fact that PHP's "context" lives on a per-request basis is pretty much core to how the language works with web servers.
If you want to get it working more like Java or other languages where the data doesn't get reset every request, you basically have two options:
Serialize data into a file, DB, whatever, and reload it on the next request
Instead of serving your pages through a web server, write the server using PHP
Serializing data into storage and reloading it on subsequent requests is the typical approach.
Writing a server in PHP itself, while possible, is not something I would recommend. Despite much effort, PHP still has sort of bad memory management, and you are very likely to encounter memory leaks in long-running PHP processes.
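A minimal sketch of option 1, using APCu as the storage (any backend works: a file, Redis, the database). This assumes the APCu extension is enabled and uses a hypothetical buildExpensiveRegistry() helper; note the value is still copied into each request on fetch, it is not one live shared object:

function getSharedRegistry(): array
{
    $registry = apcu_fetch('app.shared_registry', $hit);
    if ($hit) {
        return $registry;                       // reloaded from shared memory, survives between requests
    }

    $registry = buildExpensiveRegistry();       // hypothetical costly initialisation
    apcu_store('app.shared_registry', $registry, 300);   // keep for 5 minutes

    return $registry;
}

function buildExpensiveRegistry(): array
{
    // Placeholder for whatever is expensive to rebuild on every request.
    return ['feature_flags' => ['new_ui' => true], 'routes' => []];
}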

Shared/pooled connections to backend services in PHP

I'm trying to figure out the best way to minimize resource utilization when I have PHP talking to various backend services (e.g. Amazon S3 or any other random web services -- I'd like a general solution). Ideally, I'd like to have a single persistent connection to the backend (or maybe a small pool of persistent connections) with some caching, and then have all of the PHP tasks share it. We can consider it all read-only for the purposes of this question. It's not obvious to me how to do this in PHP. There's the database-specific stuff like mysql_pconnect(), but that doesn't really do it for me.
One idea I've had, which seems somewhat suboptimal (but is still better than having every single request create and destroy a new connection), is to use a local caching proxy (in a separate process) that would effectively do the pooling and caching. PHP would still be opening and closing a connection for every request, but at least it would be to a local process, so it should be a little faster (and it would reduce load on the backends). But it doesn't seem like this kind of craziness should be necessary. There's gotta be a better way. This is easy in other languages. Please tell me what I'm missing!
There's a large ideological disconnect between the various web technologies. Some are essentially daemons that run full-time in the background, and handle requests passed in on their own. Because there's a process always running, you can have a pool of already open existing working connections.
PHP (and normal CGI scripts) does not have a daemon behind the scenes. Every time a request comes in, the PHP interpreter is started up with a clean slate, compiles the scripts, and runs the bytecode. There's no persistence. The PHP database functions that support persistent connections establish the connection at the web server child level (i.e. mod_php attached to an Apache process). This isn't exactly a connection pool, as you can only ever see the persistent connection attached to your own process.
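For illustration, this is what that per-worker persistence looks like with PDO (DSN and credentials are placeholders): the handle is kept open by the mod_php/FPM worker that created it and is reused only by later requests that happen to land on that same worker:

$pdo = new PDO(
    'mysql:host=db.example.internal;dbname=app',   // placeholder DSN
    'app_user',
    'secret',
    [PDO::ATTR_PERSISTENT => true]                 // reuse this worker's existing connection if one exists
);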
Without having a daemon or similar process sitting behind the scenes to hand out resources, you won't get real connection pooling.
Keep in mind that most new connections to most services are not heavy-weight, and non-database connections that are heavy-weight might not be friendly to the concept of a connection pool.
Before you think about writing your own PHP-based daemon to handle stuff like this, keep in mind that it may already be a solved problem. Python came up with something called WSGI, with a similar implementation in Ruby called Rack. Perl also has something remarkably similar but I can't remember the name of it off the top of my head. A quick look at Google didn't show any PHP implementations of WSGI, but that doesn't mean they don't exist...
Because S3 and other webservices use HTTP as their transport, you won't get a significant benefit from caching the connection.
Although you may be using an API that appears to authenticate as a first step, looking at the S3 Documentation, the authentication happens with every request - so no benefit in authenticating once and reusing a connection
Web service requests over HTTP are lightweight and typically stateless. Once your request has been answered, no resources (connection or session state) are consumed on the server. This allows the web service implementer to use many machines to answer your request without tying up resources on a particular server.
