I have two PHP applications on my server. One of them has a REST API which I would like to consume and render in the second application. What is a better way than curling the API? Can I somehow ask php-fpm for the data directly, or something like that?
Doing curl and making the request through the webserver seems wrong.
All of this happens on a single server - I know it probably doesn't scale well, but it's a small project.
Why use REST if you can access the functions directly?
If everything is on the same server then there is no need for some REST, since it makes a somewhat pointless run through the webserver.
But if it is already there and you don't care about the overhead (with not much traffic going on, that can be acceptable), then use file_get_contents instead of curl; it is easier to use, and I doubt there is any meaningful speed difference either way - both are fine.
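For example, a minimal sketch of consuming the API over HTTP from the second application; the URL and the JSON shape here are made up for illustration, and allow_url_fopen (enabled by default) is required:

<?php
// Fetch the other application's (hypothetical) endpoint on the same host.
$json = file_get_contents('http://localhost/api/items');
if ($json === false) {
    throw new RuntimeException('API request failed');
}
$items = json_decode($json, true); // decode to an array and render as needed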
You could also use a second webserver (a second virtualhost) on a different port for internal use. That way things are nicely separated.
(If everything is on different servers, but a local network, then using sockets would be fastest. )
"Doing curl and making the request through the webserver seems wrong" - I disagree with that. You can still achieve what you want using PHP cURL, even if it's on the same server.
I had the same problem, and I solved it by using MySQL to "queue" tasks; the worker could use any polling method, or PHP could spawn a new server-side worker.
Since the results were stored in the same database, the PHP pages could load the results, or the status, at any time.
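A minimal sketch of that queue approach, assuming a tasks table with id, payload, status and result columns (the table layout, the credentials and the do_work() helper are all made up):

<?php
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret');

// Web request: enqueue a task and return immediately.
$pdo->prepare('INSERT INTO tasks (payload, status) VALUES (?, "pending")')
    ->execute([json_encode(['action' => 'resize', 'id' => 42])]);

// CLI worker: poll for pending tasks, process them, store the result.
while (true) {
    $task = $pdo->query('SELECT * FROM tasks WHERE status = "pending" LIMIT 1')->fetch();
    if ($task) {
        $result = do_work(json_decode($task['payload'], true)); // your own processing
        $pdo->prepare('UPDATE tasks SET status = "done", result = ? WHERE id = ?')
            ->execute([json_encode($result), $task['id']]);
    } else {
        sleep(1); // nothing to do, wait a bit
    }
}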
We have a need to keep a collection of socket objects around that are associated with different client browser sessions, so that when the client's browser makes a subsequent request, we can use the existing socket connection/session to make a request on their behalf. The socket is to something that is not HTTP. Is there a way to store objects like this in PHP that will survive across page requests?
Is there a way to store objects like this in PHP that will survive across page requests?
No.
To quote zombat's answer to a very similar question:
In PHP, there is no concept of page instances. Every request to the web server is a fresh start. All the classes are re-loaded, so there is no concept of class sharing, nor is there a concept of resource pooling, unless it is implemented externally. Sharing sockets between web requests therefore isn't really possible.
If the objects were serializable, you could use PHP's serialize() and unserialize() in conjunction with MySQL memory tables to solve this issue. Other than that, I don't think there's much else you can do.
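A rough sketch of that serialize()/MEMORY-table idea, assuming a session_objects MEMORY table (session_id VARCHAR primary key, data VARBINARY, since MEMORY tables don't support BLOB/TEXT); the names and credentials are illustrative, and it only works for objects that are actually serializable:

<?php
session_start();
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret');

// Store a serializable object under the current session id...
$object = new ArrayObject(['host' => '10.0.0.5', 'port' => 4000]); // stand-in object
$stmt = $pdo->prepare('REPLACE INTO session_objects (session_id, data) VALUES (?, ?)');
$stmt->execute([session_id(), serialize($object)]);

// ...and restore it on a later request.
$stmt = $pdo->prepare('SELECT data FROM session_objects WHERE session_id = ?');
$stmt->execute([session_id()]);
$object = unserialize($stmt->fetchColumn());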
In PHP, the script dies after the page is served, so there is no direct way to do it. However, you can create a long-living daemon which opens all the sockets the process requires and keeps them open between page loads. Of course, you'll need to isolate these sockets by some kind of access key to make sure different sessions won't have access to other users' sockets. Also keep in mind that the daemon will die at some point, so make sure you have logic to restart all the sockets which were opened. But it can be achieved for sure.
Thanks.
This isn't a complete answer; but it steps towards an answer.
As has been pointed out ad nauseum elsewhere, the standard, classic model of using PHP, via a web server (Apache, Nginx, etc) does not allow you to do this, because each page hit starts with an entirely fresh set of variables.
Three thoughts:
You need a persistence layer. Obviously this is where you store stuff in a database, or use APC (APCu in PHP7+), Redis, or something similar.
Your problem, however, is that you specify "unserializable."
My next suggestion would be: perhaps you could persist the elements necessary to construct the object, and re-initialise the object for each PHP request. It won't be as amazingly performant as you'd like, but it's the most useful solution without having to rewrite everything. Perhaps you've already tried it.
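For instance, a sketch of that approach using APCu to cache the constructor arguments (the MySocketClient class and the cache key are hypothetical):

<?php
// Fetch the connection parameters from APCu, or seed them on first use.
$params = apcu_fetch('socket_params');
if ($params === false) {
    $params = ['host' => '127.0.0.1', 'port' => 5000];
    apcu_store('socket_params', $params);
}

// Re-initialise the (hypothetical) client for this request only.
$client = new MySocketClient($params['host'], $params['port']);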
The next step is to do something completely out-there. One of the advantages the NodeJS infrastructure has is that the entire server loop persists.
So you could try one of the alternate methods of running PHP, like ReactPHP or PHP FastCGI. (There are others, but I can't remember them off the top of my head. I'll edit this if I remember.)
This involves an entirely different way of writing PHP--as different as NodeJS programming is from stuffing around with jQuery inside your browser. It wouldn't run within Apache. Rather, it would run directly as an app on your Unix server. And you'll have to cater for things like garbage collection so you don't have memory leaks, and write nice tight event loops.
The plus side is, because your thread persists and handles each subsequent request, you're able to handle requests in exactly the way you're after.
Your comment mentions these are local sockets. And it sounds like the PHP application acts as a socket client. So the only thing which is needed by the PHP session is a common identifier, such as a user ID, and for the sockets to be named consistently.
So, for example:
<?php
session_start();
$userid = $_SESSION['userid'];

// Connect to the per-user Unix domain socket.
$fp = stream_socket_client("unix:///tmp/socket-" . $userid, $errno, $errstr, 30);
if ($fp) {
    fwrite($fp, "status\n");       // send whatever request your protocol expects
    $response = fread($fp, 8192);  // read the reply
    fclose($fp);
}
TL;DR: I'm not sure this topic has its place on StackOverflow, but basically it's just a topic of debate and thinking about making PHP apps like we would do with NodeJS for example (stateless request flow, asynchronous calls, etc.)
The situation
We know NodeJS can be used as both a web-server and web-app.
But for PHP, the internal web-server is not recommended for production (so says the documentation).
But, as the Symfony full-stack framework is based on the Kernel, which handles Request objects, it means we should be able to send lots of requests to the same kernel, if only we could "bootstrap" the PHP web server (not the app) by creating a kernel before listening to HTTP requests. Our router would then only create a Request object and make the kernel handle it.
But for this, a Symfony app has to be stateless: for example, we need Doctrine to effectively clear its unit of work after each request, or maybe we would need to somehow isolate some components per request (by identifying a request with a unique PHP object reference id? or by using other PHP processes?), and obviously we would need more asynchronous things in PHP, or in the way we use the internal web server.
The main questions I sometimes ask myself, and now ask the community
To clarify this, I have some questions about PHP:
Why exactly is the internal PHP webserver not recommended for production?
I mean, if we can configure how the server is run and its "router" file, we should be able to use it like any PHP server, yes or no?
How does it behave internally? Is memory shared between two requests?
By using the router, it seems obvious to me that variables are not shared, else we could make nodejs-like apps, but it seems PHP is not capable of doing something like this.
Is it really possible to make a full-stateless application with Symfony?
e.g. if I send two different requests to the same kernel object, is there any possibility that the two requests create a conflict in Symfony's core components?
Actually, the idea of a "create a kernel -> start the server -> on request, make the kernel handle it" behavior would be awesome, because it would be quite similar to NodeJS, but the PHP paradigm is not compatible with this because we would need each request to be handled asynchronously. But if a kernel and its container are stateless, then there should be a way to do something like that, shouldn't there?
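To make the idea concrete, here is a rough, hypothetical sketch of the "boot once, handle many requests" flow (AppKernel stands in for the app's Symfony kernel, and the hard-coded URIs stand in for requests that a real long-running runner such as PHP-PM or ReactPHP would feed in):

<?php
require __DIR__.'/vendor/autoload.php';

use Symfony\Component\HttpFoundation\Request;

$kernel = new AppKernel('prod', false); // bootstrapped a single time
$kernel->boot();

foreach (['/articles', '/articles/42'] as $uri) {   // stand-ins for incoming requests
    $request  = Request::create($uri, 'GET');
    $response = $kernel->handle($request);          // only this part runs per request
    echo $response->getContent();
    $kernel->terminate($request, $response);
}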
Some thoughts
I've heard about ReactPHP, Ratchet for WebSocket integration, Icicle, and PHP-PM, but I've never tried them; it all seems a bit too complex to me for now (I may lack some concepts about asynchronicity in apps, that's why my brain won't understand until I have some more answers :D).
Is there any way that these libraries could be used as "wrappers" for our kernel request handling?
I mean, let's create this reactphp/icicle/whatever environment setup, create our kernel like we would do in any Symfony app, run the app as a web server, and when a request is received, send it asynchronously to our kernel; as long as the kernel has not sent the response, the client waits for it, even if the response is also sent asynchronously (from nested callbacks, etc., like in NodeJS).
This would make any existing Symfony app compatible with this paradigm, as long as the app is stateless, obviously. (if the app config changes based on a request, there's a paradigm issue in the app itself...)
Is this even possible with PHP libraries, rather than using the PHP internal web server in another way?
Why ask these questions?
Actually, it would be kind of a revolution if PHP could implement real asynchronous stuff internally, like Javascript has, but this would also have a big impact on performance in PHP, because of persistent data in our web server and less bootstrapping (requiring the autoloader, instantiating the kernel, getting heavy things from cached files, resolving routing, etc.).
In my thoughts, only the $kernel->handleRaw($request); would consume CPU, the whole rest (container, parameters, services, etc.) would be already in the memory, or, for the case of services, "awaiting to be instantiated". Then, performance boost, I think.
And it may troll the people who still think PHP is a very bad and slow language to use :D
For readers and responders ;)
If a core PHP contributor reads this, is there any way that PHP could internally be more asynchronous, even with a specific new internal API based on functions or classes?
I'm not a pro at all of these concepts, and I hope some real experts are going to read this and answer!
It could be a great advance in the PHP world if all of this was possible in any way.
Why exactly is the internal PHP webserver not recommended for production? I mean, if we can configure how the server is run and its "router" file, we should be able to use it like any PHP server, yes or no?
Because it's not written to behave well under load, and there are no configuration options that let you handle HTTP request processing before it reaches PHP.
Basically, it lacks features if you compare it to nginx. It would be equal to comparing a skateboard to a Lamborghini.
It can get you from A to B, but... you get the gist.
How does it behave internally? Is memory shared between two requests? By using the router, it seems obvious to me that variables are not shared, else we could make nodejs-like apps, but it seems PHP is not capable of doing something like this.
The documentation states it's single-threaded, so it appears that it would behave the same as if you wrote while(true) { // all your processing here }.
It's a toy, designed to quickly check a few things when you can't be bothered to set up a proper web server before trying out your code.
Is it really possible to make a full-stateless application with Symfony? e.g. if I send two different requests to the same kernel object, is there any possibility that the two requests create a conflict in Symfony's core components?
Why would it go to the same kernel object? Why not design your app in such a way that it's not relevant which object or even processing server gets the request? Why not design for redundancy and high availability from the get go? HTTP = stateless by default. Your task = make it irrelevant what processes the request. It's not difficult to do so, if you avoid coupling with the actual processing server (example: don't store sessions to local filesystem etc.)
Actually, the idea of a "create a kernel -> start the server -> on request, make the kernel handle it" behavior would be awesome, because it would be quite similar to NodeJS, but the PHP paradigm is not compatible with this because we would need each request to be handled asynchronously. But if a kernel and its container are stateless, then there should be a way to do something like that, shouldn't there?
Actually, nginx + php-fpm behave almost identically to node.js.
nginx uses a reactor to handle all connections on the same thread. Node.js does the exact same thing. What you do is create a closure / callback that is fed into Node's libraries and I/O is handled in a threaded environment. Multithreading is abstracted from you (related to I/O, not CPU). That's why you can experience that Node.js blocks when it's asked to do a CPU intensive task.
nginx implements the exact same concept, except this callback isn't a closure written in javascript. It's a callback that expects an answer from php-fpm within <timeout> seconds. Nginx takes care of async for you. Your task is to write what you want in PHP. Now, if you're reading a huge file, then async code in your PHP would make sense, except it's not really needed.
With nginx and sending off requests for processing to a fastcgi worker, scaling becomes trivial. For example, let's assume that 1 PHP machine isn't enough to deal with the amount of requests you're dealing with. No problem, add more machines to nginx's pool.
This is taken from the nginx docs:
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;

    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
You define a pool of servers and then assign various weights / proxying options related to balancing how requests are handled.
However, the important part is that you can add more servers to cope with availability requirements.
This is the reason why nginx + php-fpm stack is appealing. Since nginx acts as a proxy, it can proxy requests to node.js as well, letting you handle web socket related operations in node.js (which, in turn, can perform an HTTP request to a PHP endpoint, allowing you to contain your entire app logic in PHP).
I know this answer might not be what you're after, but what I wanted to highlight is that the way node.js works (conceptually) is identical to what nginx does when it comes to handling incoming requests. You could make PHP work as Node does, but there's no need for that.
Your questions can be summed up as this:
"Could PHP be more like Node?"
to which the answer is of course "Yes." But that leads us to another question:
"Should PHP be more like Node?"
and now the answer is not that obvious.
Of course, in theory PHP could be made more like Node - even to the point of making it exactly the same. Just take the next version of Node and call it PHP 6.0 or something.
I would argue that it would be harmful to both Node and PHP. There is a diversity in the runtime environments for a reason. One of the variations is the concurrency model used in a given environment. Making one like the other would mean less choice for the programmer. And less choice is less freedom of expression.
PHP and Node were created in different times and for different reasons.
PHP was developed in 1995 and the name stood for Personal Home Page. The use case was to add some server-side dynamic features to HTML. We already had SSI and CGI at that point but people wanted to be able to inject right into the HTML - synchronously, as it wouldn't make much sense otherwise - results of database queries and other computations. It isn't a surprise how good it is at this job even today.
Node, on the other hand, was developed in 2009 - almost 15 years later - to create high performance network servers. So it shouldn't surprise us that writing such servers in Node is easy and that they have great performance characteristics. This is why Node was created in the first place. One of the choices it had to make was a 100% non-blocking environment of single-threaded, asynchronous event loops.
Now, single-threaded concurrency is conceptually more difficult than multi-threading. But if you want performance for I/O-heavy operations then currently you have no other options. You will not be able to create 10,000 threads, but you can easily handle 10,000 connections with Node in a single thread. There is a reason why nginx is single-threaded and why Redis is single-threaded. And one common characteristic of nginx and Redis is amazing performance - but both of those were hard to write.
Now, as far as Node and PHP go, those technologies are so far from each other that it's hard to even comprehend what their fusion would look like. It reminds me of the old April Fools' joke about unifying Perl and Python that so many people believed in.
PHP has its strengths and Node has its strengths. And just like it would be hard to imagine Node with blocking I/O, it would be equally hard to imagine PHP with non-blocking I/O.
To summarize: it could be possible to make PHP like Node, but I wouldn't expect it to happen any time soon - if ever.
I want to separate out the API calls my site makes to another install as the site has become big and buggy by having everything together. I would like to know what ways are there if any to make two sites communicate when they are on the same server.
I originally was thinking I could get the client-facing site to just include the models from the API site through a custom loader for CodeIgniter, but I am currently leaning towards wanting the API site to take advantage of Laravel which would obviously scrap directly loading them.
Currently I have some calls which use cURL to POST requests; is this the only way? I was hoping to drop the HTTP calls in favour of something more direct.
As I said in my comments to the question, I'm definitely no expert on this kind of stuff, but my original thinking was that IPC-style stuff could be done, maybe using named pipes.
PHP does allow for this in its POSIX and process control functions. Simply use posix_mkfifo to create a named pipe, and then you should be able to use fopen, fread, etc. (along with the stream_* functions if you need to) to write to and read from the pipe. However, I'm not sure how well that works with a single writer and multiple readers, and it's also probably quite a large change to your code to replace the HTTP stuff you currently have.
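A bare-bones sketch of that named-pipe idea (the path and message format are arbitrary; this requires the POSIX extension, so Unix-like systems only, and note that opening a FIFO blocks until the other end is opened):

<?php
$pipe = '/tmp/app-ipc.fifo';

// Create the FIFO once (0600: readable/writable by the owner only).
if (!file_exists($pipe)) {
    posix_mkfifo($pipe, 0600);
}

// Writer side (e.g. the API install):
$w = fopen($pipe, 'w');
fwrite($w, json_encode(['user' => 42, 'action' => 'refresh']) . "\n");
fclose($w);

// Reader side (e.g. the client-facing install), running in another process:
$r = fopen($pipe, 'r');
$message = json_decode(fgets($r), true);
fclose($r);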
So the next possibility is that, if you want to stick with HTTP (and don't mind the small overhead of forming HTTP requests, the headers, etc.), then you should ensure that your server is using local sockets to cut down on transport costs. If your web site's domain name is the same as the server's hostname, this should already be happening (as your /etc/hosts file will have an entry pointing the hostname to 127.0.0.1). However, if not, all you need to do is add such an entry and, as far as I'm aware, it'll work. At the very worst you could hardcode 127.0.0.1 in your code (and ensure your webserver responds correctly to these requests), of course.
A page is sending an AJAX call to the server and should get item info in response. The array to look up/return is a rather big one, and I can’t hold it in the PHP file that accepts the request. So, as far as my knowledge and experience tell, there are 2 methods:
Access database for each request.
Store items in files (e.g. “item12.txt”) and send contents to the user.
My C experience says that opening and closing a file takes much more system time than the rest of the program. How is it in PHP? What is the preferred method (most importantly, resource-wise) – file system or database? Is there any other way you would recommend (e.g. JavaScript directly loading the file with variable array from the server for each request)? Maybe there’s some innovative method lying around you’re aware of?
P.S. On the server-side a number only will be accepted, so no worries regarding someone trying to access files in the server or trying to do some fancy stuff on database.
Sockets
Depending on how many requests you will be handling, you could look into socket connections.
Sockets give you two-way communication between the client and the server, which would allow you to do interactive things as needed.
Socket tutorial 1
Socket tutorial 2
Node.js
node.js is the new kid on the block. You write your own socket webserver and use JavaScript to communicate with it. This is a great alternative to Ajax, as it's much more efficient and reliable.
node.js can be run alongside PHP, and only be used for ajax-like calls.
node.js
node.js socket tutorial
There is nothing innovative here. If you have low-frequency calls to the data and you want super simple access, then use files. But today it is much better to use a database; SQLite is fine, I think. If you need more performance, then use MySQL or a NoSQL solution. Tools are made to solve things; use the right tool for your purpose.
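For example, a minimal SQLite lookup for the use case in the question (the database file, table and columns are assumptions):

<?php
// items.sqlite is assumed to contain a table items(id INTEGER PRIMARY KEY, info TEXT).
$pdo = new PDO('sqlite:' . __DIR__ . '/items.sqlite');
$stmt = $pdo->prepare('SELECT info FROM items WHERE id = ?');
$stmt->execute([(int) $_GET['id']]);   // only a number is accepted, per the question
header('Content-Type: application/json');
echo json_encode(['info' => $stmt->fetchColumn()]);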
I want to run a php script from the command line that is always running and constantly updating a variable.
I then want any php script that is run in the meantime (probably but not necessarily from the web) to be able to read that variable at any time.
Anyone know how I can do this?
Thanks.
Here, you want some kind of inter-process communication mechanism.
You cannot use a PHP variable for that : these are local to the script they're in.
Which means you'll have to use some "external" tool to store your data, like, to name only a few:
a file
a database (SQLite, MySQL, ...)
some shared-memory segment
In each case, you'll have:
One script that writes to the data-storage space -- i.e. your first, always-running script
One or more other scripts that will read from the data store
You should write the variable to a file with the CLI script and read from that with the other script.
Be sure to use flock to prevent race conditions.
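A small sketch of that file-plus-flock approach (the file path is arbitrary, and $newValue stands in for whatever your long-running loop computes):

<?php
// CLI script: update the value under an exclusive lock.
$newValue = date('c');                       // stand-in for your computed value
$fp = fopen('/tmp/shared-value.txt', 'c');   // create if missing, don't truncate yet
flock($fp, LOCK_EX);
ftruncate($fp, 0);
fwrite($fp, $newValue);
flock($fp, LOCK_UN);
fclose($fp);

// Web script: read it back under a shared lock.
$fp = fopen('/tmp/shared-value.txt', 'r');
flock($fp, LOCK_SH);
$value = stream_get_contents($fp);
flock($fp, LOCK_UN);
fclose($fp);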
You can write a PHP socket-based server script which will listen on the desired port. Find an article here.
Then your client PHP script can connect to it, either locally or from the web, and retrieve any data, including variables.
You can use any simple protocol, either designed by you or well known (like XML), to transfer variables.
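A stripped-down sketch of such a server and client (the port and the one-value "protocol" are arbitrary choices):

<?php
// Long-running CLI script: serve the current value of $counter over TCP.
$server  = stream_socket_server('tcp://127.0.0.1:9000', $errno, $errstr);
$counter = 0;
while (true) {
    $conn = @stream_socket_accept($server, 1); // wait up to 1 s for a client
    if ($conn) {
        fwrite($conn, (string) $counter);      // send the variable to the client
        fclose($conn);
    }
    $counter++;                                // keep updating the variable
}

And the reading side, in any other script:

<?php
$conn = stream_socket_client('tcp://127.0.0.1:9000', $errno, $errstr, 5);
echo fread($conn, 1024);
fclose($conn);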
Lots of ideas:
At set intervals it appends/writes to a file.
You use SQLite and write your data to it.
You use a small memcached service as your intermediary.
You go somewhat crazy and write a socket class, listen on a set port, then make non-blocking calls to check.
1-2 are probably the simplest
3 would work great if you need to query the value a lot
4 would be fun, but might not be worth the effort.
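For option 3, for instance, the shared value could look something like this (assuming the memcached daemon and the PHP Memcached extension are installed; the key name is arbitrary):

<?php
$m = new Memcached();
$m->addServer('127.0.0.1', 11211);

// Long-running CLI script: keep the value fresh.
$m->set('shared_variable', microtime(true)); // whatever your loop computes

// Any other script: read it whenever it's needed.
$value = $m->get('shared_variable');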