How many requests can Heroku handle for PHP?

According to the official Heroku website, a dyno can handle thousands of requests per second depending on the language used. I am using PHP. Can you explain how many requests one dyno can handle?
Do all of my applications use the same dyno, or does each use a separate one?

Direct from the Heroku dev center:
A single-threaded, non-concurrent framework like Rails can process one request at a time. For an app that takes 100ms on average to process each request, this translates to about 10 requests per second per dyno.
Load testing your app is the only realistic way to determine request throughput.
If you are using something event-driven, like Node.js, more requests can be handled per dyno.
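As a rough back-of-the-envelope check of the arithmetic quoted above (the numbers are illustrative, not Heroku-specific):

```php
<?php
// Throughput estimate for a single non-concurrent process, mirroring the
// 100ms -> 10 req/s arithmetic above. Illustrative only; load testing
// your own app is the real answer.

$avgRequestSeconds = 0.100; // 100 ms average per request
$concurrentWorkers = 1;     // one request at a time

$requestsPerSecond = $concurrentWorkers / $avgRequestSeconds;
echo $requestsPerSecond . " requests/second\n"; // 10 requests/second
```

For PHP on Heroku, a dyno typically runs several PHP-FPM workers behind the web server, so the effective concurrency (and therefore throughput) scales with the number of workers your dyno's memory allows.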

Related

Real-time apps with Symfony - what technology to use?

I would like to know if someone could explain to me how to build a real-time application with Symfony.
I have looked at a lot of documentation with my best friend Google, but I have not found any sufficiently detailed articles.
I would like something more PHP-oriented, and I saw that there are technologies like ReactPHP / Ratchet (but I cannot find a tutorial clear enough to integrate them into an existing Symfony project).
Do you have any advice on which technologies to use and why? (If you have tutorial links, I'll take them!)
Thank you in advance for your answers!
Every useful Symfony application does some form of I/O. In traditional applications this is most often blocking I/O. Even if it's non-blocking I/O, it doesn't integrate a global event loop that could schedule other things while waiting for I/O.
If you integrate Symfony into an existing event-loop-based WebSocket server, it will work with blocking I/O as a proof of concept, but you will quickly notice it doesn't run well in production, because any blocking I/O blocks your whole event loop and thus blocks all other connected clients.
One solution is rewriting everything to non-blocking I/O, but then you'd no longer be using Symfony. You might be able to reuse some components, but only those not doing any I/O.
Another solution is to use RPC and push WebSocket requests into a queue. The intermediary can be written using non-blocking I/O only; it doesn't have much to do. It basically just forwards WebSocket messages as RPC requests to a queue. Then you have a set of workers pulling from that queue, doing a normal Symfony kernel dispatch and sending the response into a response queue. Each worker can then continue to fetch the next job.
With the second solution you can totally use blocking I/O and all existing Symfony components. You can spawn as many workers as you need and you can even keep them alive between requests. The difference with a queue in between is that one blocking worker doesn't block the responsiveness of the WebSocket endpoint.
If you want multiple WebSocket processes, you'll need separate response queues for them, so the responses are sent back to the right process where the client is connected.
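A minimal sketch of such a worker, assuming BeanstalkD with the Pheanstalk client (v4-style API); the tube names, payload shape, and kernel bootstrapping are illustrative assumptions, not part of the demo mentioned below:

```php
<?php
// Hypothetical blocking worker: reserve a WebSocket message from a request
// tube, run a normal Symfony kernel dispatch, then push the response to the
// response tube of the WebSocket process holding that client.

require __DIR__ . '/../vendor/autoload.php';

use Pheanstalk\Pheanstalk;
use Symfony\Component\HttpFoundation\Request;

$kernel = new AppKernel('prod', false); // your application kernel class
$queue  = Pheanstalk::create('127.0.0.1');
$queue->watch('websocket-requests');

while (true) {
    $job     = $queue->reserve();                  // blocks until a job arrives
    $payload = json_decode($job->getData(), true); // assumed: client, server, uri, body

    // Blocking I/O is fine here - it only stalls this worker,
    // not the event loop of the WebSocket endpoint.
    $request  = Request::create($payload['uri'], 'POST', [], [], [], [], $payload['body']);
    $response = $kernel->handle($request);
    $kernel->terminate($request, $response);

    // Route the response back to the process where the client is connected.
    $queue->useTube('websocket-responses-' . $payload['server']);
    $queue->put(json_encode([
        'client' => $payload['client'],
        'body'   => $response->getContent(),
    ]));

    $queue->delete($job); // acknowledge so the job isn't handed out again
}
```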
You can find a working implementation with BeanstalkD as the queue in kelunik/rpc-demo. src/Server.php is just for demo purposes and can be replaced with an HTTP server at any time. To keep the demo simple it uses a single WebSocket process, but that can be changed as outlined above. You can start php bin/server and php bin/worker, then use telnet localhost 2000 to connect and send messages. It will respond with the same message, base64-encoded by the workers.
The mentioned demo is built on Amp, but the same concepts apply to ReactPHP as well.
In this issue of the official Symfony repository you may find comments and ideas about this: https://github.com/symfony/symfony/issues/17051

Laravel app not handling requests in parallel

I am building a Laravel web app that performs some long-running queries and utilizes a couple of APIs (both internal and external). I am having a hard time figuring out why I can't handle requests in parallel. To shed some light on my issue, here is a high-level overview of my system/problem via an example:
- Page loads
- An AJAX request fires on page load which GETs a BigQuery result set (long-running query), cleans the data, and executes a Python clustering algorithm which creates an image and returns the path to that image to the web app
  - Long running (~15 seconds)
  - Will max out the CPU while performing the Python clustering (at times)
- An AJAX request fires which queries an external API for some information and simply displays it
  - Short running (~1-2 seconds)
The issue is that my AJAX requests are not being handled in parallel. The first one is received and the web app does not begin processing the other until the first is complete. I've checked the network tab in Chrome dev tools, and both requests are being made in parallel, but the web server is not handling them in parallel.
I cannot determine if this is a configuration error with PHP, Artisan, or Laravel, or if I have a whole other problem on my hands. I've done some testing with two simple route closures: one that simply returns a string, and another that returns a string after sleep(10). When I call both with AJAX, the instantly returning route does not return until the long-running request is served (after sleeping).
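For reference, the test was essentially the following two closures (the paths are made up for illustration):

```php
// routes/web.php - minimal reproduction: /fast should return immediately
// even while /slow is still sleeping; here it doesn't.
Route::get('/fast', function () {
    return 'fast';
});

Route::get('/slow', function () {
    sleep(10);
    return 'slow';
});
```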
TL;DR: It's clear both AJAX calls are being fired and received in parallel, but how can I have my Laravel web app handle the requests in parallel (concurrently)?
For HTTP requests that might take a while, use Laravel's job system: dispatch the work as a job, and use either the built-in queue or a third-party service provider to process the jobs. Laravel doesn't handle requests in parallel on its own, which is why jobs were created.
Your problem is similar to the following thread: handle multiple post requests to same url Laravel 5
API Docs:
https://laravel.com/docs/5.1/queues#configuration
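A minimal sketch of that approach, assuming a recent Laravel version; the job class name and its contents are placeholders for the BigQuery/clustering work:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// Hypothetical job that moves the long-running work off the request cycle.
class RunClustering implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public string $datasetId)
    {
    }

    public function handle(): void
    {
        // Fetch the BigQuery result set, run the Python clustering,
        // and store the generated image path somewhere the client
        // can poll for it (database, cache, etc.).
    }
}
```

The controller then returns immediately after something like RunClustering::dispatch($datasetId), and a queue worker (php artisan queue:work) does the heavy lifting, so the short AJAX request is no longer stuck behind the long one.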

Is threading in PHP efficient or not?

I'm writing PHP code for a web server where it's required to run some heavy-duty processes, when requested, before returning the results to the users.
My question is: does the Apache server create a separate thread/process for each client, or should I use multi-threading to separate them?
The processes include executing other applications through the command line and downloading files to the server.
Every request to the web server is handled by a separate process, which will try to use a free core from the CPU; if there isn't a free one currently, it will go into a queue and wait.
You can't have multithreading in PHP with Apache within a single web request. You simply can't. Usually Apache forks a new OS process for each request.
This is configurable, but it is the setup typically chosen when working with PHP, since many functions in the PHP standard library are not thread-safe.
When I had to handle heavy computation, I always chose to make the user request asynchronous and let a separate daemon process do the actual computation in the background. In that case, after the user request, I let the client poll the daemon (through other web requests) to know when the computation is done.
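A bare-bones sketch of that pattern, with made-up file names and a file-based status check standing in for a real daemon:

```php
<?php
// start.php - kick off the heavy work in the background and return at once.
$jobId = bin2hex(random_bytes(8));

// Detach the worker from the web request; redirecting output lets exec()
// return immediately instead of waiting for the child process to finish.
exec(sprintf(
    'php %s %s > /dev/null 2>&1 &',
    escapeshellarg(__DIR__ . '/heavy_job.php'),
    escapeshellarg($jobId)
));

echo json_encode(['job' => $jobId]);
```

```php
<?php
// status.php - the client polls this (e.g. every few seconds) with ?job=<id>
// until the result file written by heavy_job.php appears.
$jobId  = preg_replace('/[^a-f0-9]/', '', $_GET['job'] ?? '');
$result = sys_get_temp_dir() . "/job-$jobId.result";

echo json_encode([
    'done'   => is_file($result),
    'result' => is_file($result) ? file_get_contents($result) : null,
]);
```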

PHP - How to kick off multiple requests to another page, get results as requests are completed, and display on original page?

I've got a small PHP web app I put together to automate some manual processes that were tedious and time-consuming. The app is pretty much a GUI that SSHes out and "installs" software to target machines based off of atomic change numbers from source control (Perforce, if it matters). The app currently kicks off each installation in a new popup window. So, say I'm installing software on 10 different machines: I get 10 different popups. This is getting to be too much. What are my options for kicking these processes off and displaying the results back on one page?
I was thinking I could have one popup that dynamically creates divs for every installation I kick off, then do an AJAX call for each one and display the output for each install in the corresponding div. The only problem is, I don't know how I can kick these processes off in parallel. It'll take way too long if I have to wait for each one to go out, do its thing, and spit the results back. I'm using jQuery if it helps, but I'm looking mainly for high-level architecture ideas at the moment. Code examples are welcome, but pseudocode is just fine.
I don't know how advanced you are, or even whether you have root access to your server (which would be required), but here is one possible approach. It uses several different technologies and would probably be better suited to a large-scale application than a small one, but I'll advise you on it anyway.
The following technologies/stacks are used (in addition to PHP, as you mentioned):
WebSockets (on top of node.js)
JSON-RPC Server (within node.js)
Gearman
What you would do is, from your client (so via JavaScript), when the page loads, make a connection to node.js via WebSockets (you can use something like socket.io for this).
Then, when you decide that you want to do a task (which might take a long time...), you send a request to your server. This might be some JSON-encoded raw body, or it might just be a simple GET /do/something. What is important is what happens next.
On your server, when the job is received, you kick off a new job in Gearman by adding a task to your Gearman server. Gearman then processes your task, and since submitting it is non-blocking, you can respond immediately to the client who made the request, saying "hey, we are processing your job".
Then your server, with all of its Gearman workers, receives the job and starts processing it. This might take, let's say for argument's sake, 5 minutes. Once it has finished, the worker makes a JSON-encoded message which it sends to your node.js server, which receives it via JSON-RPC.
After node.js grabs the message, it can then emit an event via WebSockets to any connections which need to know about it.
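To make the Gearman part concrete, here is a rough sketch using PHP's pecl/gearman extension; the function name, payload, and hosts are made up for illustration:

```php
<?php
// Client side (inside the web request): queue the install and return
// immediately, so the browser isn't left waiting.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('install_software', json_encode([
    'host'   => 'target-machine-01',
    'change' => 12345, // the Perforce change number to install
]));

// Worker side (a long-running CLI process; run one per concurrent install):
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('install_software', function (GearmanJob $job) {
    $params = json_decode($job->workload(), true);
    // ssh out, run the install, then notify the node.js server
    // (e.g. via its JSON-RPC endpoint) so it can push the result
    // to the browser over WebSockets.
});
while ($worker->work());
```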
I needed something like this for a project once and managed to learn the basics of node.js in a day (having already a strong JS background). By the second day I had a complete push/pull messaging job notification platform.

Load balancing in PHP

I have a web service written in PHP/MySQL. The script involves fetching data from other websites like Wikipedia, Google, etc. The average execution time for the script is 5 seconds (currently running on one server). Now I have been asked to scale the system to handle 60 requests/second. Which approach should I follow?
- Split functionality between servers (I create one server to fetch data from Wikipedia, another to fetch from Google, etc., and a main server.)
- Split load between servers (I create one main server which round-robins each request entirely to its child servers, with each child processing one complete request. What about sharing the MySQL database between child servers here?)
I'm not sure what you would really gain by splitting the functionality between servers (option #1). You can use Apache's mod_proxy_balancer to accomplish your second option. It has a few different algorithms to determine which server would be most likely to be able to handle the request.
http://httpd.apache.org/docs/2.1/mod/mod_proxy_balancer.html
Apache/PHP should be able to handle multiple requests concurrently by itself. You just need to make sure you have enough memory and configure Apache correctly.
Your script is not a server; it's acting as a client when it makes requests to other sites. The rest of the time it's merely a component of your server.
Yes, running multiple clients (instances of your script; you don't need more hardware) concurrently will be much faster than running them sequentially. However, if you need to fetch the data synchronously with the incoming request to your script, then coordinating the results of the separate instances will be difficult. Instead, you might take a look at the curl_multi_* functions, which allow you to batch up several requests and run them concurrently from a single PHP thread (see the sketch below).
Alternatively, if you know in advance what the incoming requests to your web service will be, then you should think about implementing scheduling and caching of the fetches so the data is already available when a request arrives.
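A minimal curl_multi sketch of that batching approach; the URLs are placeholders:

```php
<?php
// Fetch several sources concurrently from a single PHP process.

$urls = [
    'wikipedia' => 'https://en.wikipedia.org/w/api.php?action=query&titles=PHP&format=json',
    'example'   => 'https://www.example.com/',
];

$multi   = curl_multi_init();
$handles = [];

foreach ($urls as $key => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture body instead of printing
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_multi_add_handle($multi, $ch);
    $handles[$key] = $ch;
}

// Drive all transfers until every handle has finished.
do {
    $status = curl_multi_exec($multi, $active);
    if ($active) {
        curl_multi_select($multi); // wait for socket activity instead of busy-looping
    }
} while ($active && $status === CURLM_OK);

$results = [];
foreach ($handles as $key => $ch) {
    $results[$key] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($multi, $ch);
    curl_close($ch);
}
curl_multi_close($multi);

// All responses arrive in roughly the time of the slowest request,
// rather than the sum of all of them.
```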
