I use PHP and Laravel for my web service.
I want to know how Laravel stores and processes requests in these situations:
requests to different controllers from many users;
requests to the same controller from the same user.
Does Laravel store these requests in a queue in the order they arrive?
Does Laravel process requests in parallel for different users, and sequentially for the same user?
For example, say there are two requests from the same user, routed to two methods in the same controller. The first request takes a long time to process on the server side, while the second takes very little time. When the user sends the first request and then the second, the server will not start processing the second request until it has finished the first, even though the second one is quick.
So how does Laravel store and process requests?
Laravel does not handle requests directly; that is managed by your web server and PHP. Laravel receives a request already processed by your web server, because it is only a tool, written in PHP, that processes the data related to a request. So, as long as your web server knows how to execute PHP and calls the proper index.php file, Laravel will be booted and will process the request data it receives from the web server.
So, if your web server is able to receive 2 different calls (usually they handle them in the hundreds), it will instantiate 2 PHP (sub)processes, and you should have 2 Laravel instances in memory running in parallel.
So if you have code that depends on other code, which may take too long to execute depending on many other factors, you'll have to deal with that yourself, in your Laravel app.
What we usually do is just add data to a database and then get a result back from a calculation done on data already in the datastore. That way it should not matter in which order the data reaches the datastore; the end result is always the same. If you cannot rely on this kind of approach, you'll have to prepare your app to deal with it.
Everything in PHP starts as a separate process. These processes are independent from each other until some shared resource comes into the picture.
In your case, each user is handled by one session, and sessions are file-based by default. The session file is a shared resource between processes, which means only one PHP call at a time can be processed for a given user.
Multiple users can invoke any number of processes at once, depending on your system's capabilities.
I am trying to create a project where I make an API request to another server and render an HTML interface with the data I get. For example, if a single request takes 2 seconds to complete and 5 people request the same page, would the last person wait 2 seconds, or wait for the other people to finish, i.e. 10 seconds? I couldn't find any info about this and I'm not sure whether Node would be a better option for this project.
Any normal web server + PHP setup will handle several requests in parallel. Each incoming request spawns a new web server thread with an independent PHP instance. There are several different models of how a web server can handle this (workers, threads, events, etc.), but in general that's how it works. There's some limit to how many requests can be handled in parallel, but generally speaking it's significantly more than one.
So, modulo some overhead from running several PHP threads in parallel, each request will be handled in 2 seconds.
A typical pitfall here is session handling: if each PHP instance tries to get a handle on the same session data, they'll block since only one instance at a time can use the normal file-based session store, and subsequent requests will have to wait. To be clear: that's if the same user tries multiple parallel requests; it does not affect different users with different sessions. That goes for any shared resource you may be trying to access.
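As a rough illustration of that pitfall (file names and the 10-second delay are made up for the example), two scripts that both open the default file-based session will be serialized for the same user:

    <?php
    // slow.php - holds the session lock for the whole request
    session_start();
    sleep(10);           // simulate slow work while the session file is locked
    echo 'slow done';

    <?php
    // fast.php - for the same user, session_start() here blocks until
    // slow.php releases the lock, so this "instant" request also takes ~10 s
    session_start();
    echo 'fast done';

A different user (with a different session cookie) hitting fast.php is not affected and gets an immediate response.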
I am building a Laravel web app that performs some long running queries and utilizes a couple (both internal and external) APIs. I am having a hard time figuring out why I can't handle requests in parallel. To shed some light on my issue, here is the high level overview of my system/problem via an example:
- Page loads
- AJAX request called on page load which GETs a BigQuery result set (long-running query), cleans the data and executes a Python clustering algorithm which creates an image and returns the path to that image to the web app
  - Long running (~15 seconds)
  - Will max the CPU while performing the Python clustering (at times)
- AJAX request called which queries an external API for some information and simply displays it
  - Short running (~1-2 seconds)
The issue is that my AJAX requests are not being handled in parallel. The first one is received and the web app does not begin the other until the first is complete. I've checked the network tab in Chrome dev tools and both requests are being made in parallel but the web server is not handling them in parallel.
I cannot determine if this is a configuration error in PHP, Artisan, or Laravel, or if I have a whole other problem on my hands. I've done some testing with two simple route closures: one that simply returns a string and another that returns a string after sleep(10). When I call both with AJAX, the instantly returning route does not return until the long-running request is served (after sleeping).
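Roughly, the test routes look like this (a simplified sketch; the paths are arbitrary):

    Route::get('/fast', function () {
        return 'fast';          // returns immediately
    });

    Route::get('/slow', function () {
        sleep(10);              // simulates the long-running request
        return 'slow';
    });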
TL;DR: It's clear both AJAX calls are being fired and received in parallel, but how can I have my Laravel web app handle the requests in parallel (concurrently)?
For HTTP requests that might take a while, use Laravel's job structure: dispatch the work as a job and use either the built-in queue or a 3rd-party service provider to process the jobs. Laravel doesn't parallelize requests itself, which is why jobs were created.
Your problem is similar to the following thread: handle multiple post requests to same url Laravel 5
API Docs:
https://laravel.com/docs/5.1/queues#configuration
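For example, a rough sketch of a queued job for the long-running clustering step (the class and property names are made up, and the exact boilerplate differs between Laravel versions):

    <?php

    namespace App\Jobs;

    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    class RunClustering implements ShouldQueue
    {
        use InteractsWithQueue, Queueable, SerializesModels;

        protected $parameters;

        public function __construct(array $parameters)
        {
            $this->parameters = $parameters;
        }

        public function handle()
        {
            // The BigQuery fetch and Python clustering run here,
            // in a queue worker instead of inside the HTTP request.
        }
    }

The controller then dispatches the job (via dispatch(new RunClustering($params)) or the DispatchesJobs trait, depending on version) and returns immediately, while a queue worker started with php artisan queue:work processes it in the background.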
I have a webpage that when users go to it, multiple (10-20) Ajax requests are instantly made to a single PHP script, which depending on the parameters in the request, returns a different report with highly aggregated data.
The problem is that a lot of the reports require heavy SQL calls to get the necessary data, and in some cases, a report can take several seconds to load.
As a result, because one client is sending multiple requests to the same PHP script, you end up seeing the reports slowly load on the page one at a time. In other words, the reports are not generated in parallel, which causes the page to take a while to fully load.
Is there any way to get around this in PHP and make it possible for all the requests from a single client to a single PHP script to be processed in parallel so that the page and all its reports can be loaded faster?
Thank you.
As far as I know, it is possible to do multi-threading in PHP.
Have a look at the pthreads extension.
What you could do is have the report-generation part/function of the script execute in parallel. This will make sure each function runs in a thread of its own and will retrieve your results much sooner. Also, set the maximum number of concurrent threads to <= 10 so that it doesn't become a resource hog.
Here is a basic tutorial to get you started with pthreads.
And here are a few more examples which could be of help (notably the SQLWorker example in your case).
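A very rough sketch of that idea, assuming the pthreads extension on a thread-safe (ZTS) PHP build (the class name and queries are made up):

    <?php
    // Each report runs in its own task; a Pool caps concurrency at 10.
    class ReportTask extends Threaded
    {
        public $result;
        private $reportSql;

        public function __construct($reportSql)
        {
            $this->reportSql = $reportSql;
        }

        public function run()
        {
            // The heavy SQL / aggregation for one report would go here.
            $this->result = "rows for: {$this->reportSql}";
        }
    }

    $pool  = new Pool(10, Worker::class);
    $tasks = [];

    foreach (['SELECT ...', 'SELECT ...'] as $i => $sql) {
        $tasks[$i] = new ReportTask($sql);
        $pool->submit($tasks[$i]);
    }

    $pool->shutdown();              // wait for all submitted tasks to finish

    foreach ($tasks as $task) {
        echo $task->result, "\n";
    }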
Server setup
This is more of a server configuration issue and depends on how PHP is installed on your system: If you use php-fpm you have to increase the pm.max_children option. If you use PHP via (F)CGI you have to configure the webserver itself to use more children.
Database
You also have to make sure that your database server allows that many concurrent processes to run. It won’t do any good if you have enough PHP processes running but half of them have to wait for the database to notice them.
In MySQL, for example, the setting for that is max_connections.
Browser limitations
Another problem you're facing is that browsers won't make 10-20 parallel requests to the same host. It depends on the browser, but to my knowledge modern browsers will only open 2-6 connections to the same host (domain) simultaneously. So any further requests will just get queued, regardless of server configuration.
Alternatives
If you use MySQL, you could try to merge all your calls into one request and run the SQL queries in parallel using mysqli::poll().
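A rough sketch of what that can look like with MYSQLI_ASYNC and mysqli_poll() (credentials and queries are placeholders; each query needs its own connection):

    <?php
    $queries = [
        'SELECT COUNT(*) FROM orders',
        'SELECT SUM(total) FROM orders',
    ];

    $links = [];
    foreach ($queries as $i => $sql) {
        $link = mysqli_connect('127.0.0.1', 'user', 'secret', 'reports');
        $link->query($sql, MYSQLI_ASYNC);    // send without waiting
        $links[$i] = $link;
    }

    $pending = $links;
    while ($pending) {
        $read = $errors = $reject = $pending;
        if (mysqli_poll($read, $errors, $reject, 1) < 1) {
            continue;                        // nothing finished yet
        }
        foreach ($read as $link) {
            if ($result = $link->reap_async_query()) {
                print_r($result->fetch_row());
                $result->free();
            }
            unset($pending[array_search($link, $pending, true)]);
        }
    }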
If that’s not possible you could try calling child processes or forking within your PHP script.
Of course PHP can execute multiple requests in parallel, if it runs under a web server like Apache or Nginx. PHP's built-in dev server is single-threaded, but it should only be used for development anyway. If you are using PHP's file-based sessions, however, access to the session is serialized, i.e. only one script can have the session file open at any time. Solution: fetch the information you need from the session at script start, then close the session.
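A minimal sketch of that fix (the session key is a placeholder):

    <?php
    session_start();
    $userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;
    session_write_close();   // release the session lock before the slow work

    // From here on, other requests from the same user are not blocked.
    // (Writes to $_SESSION after this point are no longer persisted.)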
I have a web service written in PHP and MySQL. The script involves fetching data from other websites like Wikipedia, Google, etc. The average execution time for a script is 5 seconds (currently running on 1 server). Now I have been asked to scale the system to handle 60 requests/second. Which approach should I follow?
- Split functionality between servers (create one server to fetch data from Wikipedia, another to fetch from Google, etc., plus a main server).
- Split load between servers (create one main server which round-robins each request entirely to its child servers, with each child processing one complete request. What about sharing the MySQL database between child servers here?)
I'm not sure what you would really gain by splitting the functionality between servers (option #1). You can use Apache's mod_proxy_balancer to accomplish your second option. It has a few different algorithms to determine which server would be most likely to be able to handle the request.
http://httpd.apache.org/docs/2.1/mod/mod_proxy_balancer.html
Apache/PHP should be able to handle multiple requests concurrently by itself. You just need to make sure you have enough memory and configure Apache correctly.
Your script is not a server; it's acting as a client when it makes requests to other sites. The rest of the time it's merely a component of your server.
Yes, running multiple clients (instances of your script - you don't need more hardware) concurrently will be much faster than running them sequentially. However, if you need to fetch the data synchronously with the incoming request to your script, coordinating the results of the separate instances will be difficult. Instead, you might take a look at the curl_multi_* functions, which allow you to batch up several requests and run them concurrently from a single PHP thread.
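A sketch of the curl_multi_* approach (the URLs are placeholders):

    <?php
    $urls = [
        'https://en.wikipedia.org/wiki/PHP',
        'https://www.google.com/search?q=php',
    ];

    $mh      = curl_multi_init();
    $handles = [];

    foreach ($urls as $i => $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        curl_multi_add_handle($mh, $ch);
        $handles[$i] = $ch;
    }

    // Drive all transfers until they are done.
    do {
        $status = curl_multi_exec($mh, $active);
        if ($active) {
            curl_multi_select($mh);         // wait for activity
        }
    } while ($active && $status === CURLM_OK);

    $results = [];
    foreach ($handles as $i => $ch) {
        $results[$i] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);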
Alternatively, if you know in advance what the incoming requests to your web service will be, you should think about scheduling and caching the fetches so the data is already available when a request arrives.
I need some help understanding internal workings of PHP.
Remember, in the old days, we used to write TSR (Terminate and Stay Resident) routines (pre-Windows era)? Once such a program was executed, it stayed in memory and could be re-executed with some hot key (an Alt- or Ctrl- key combination).
I want to use a similar concept in web server applications. Say I have common_functions.php, which consists of functions common to all the web applications running on that Apache/PHP server (like Generate_City_Combo(), Check_Permission() or Generate_User_Permission_list()).
In all the modules' or applications' PHP files, I can write:
require_once('common_functions.php');
which will include that common file in all the modules and applications, and it works fine.
My question is: how does PHP handle this internally?
Say I have:
Two applications AppOne and AppTwo.
AppOne has two menu options AppOne_Menu_PQR and AppOne_Menu_XYZ
AppTwo has two menu options AppTwo_Menu_ABC and AppTwo_Menu_DEF
All four of these menu items call functions (like Generate_City_Combo(), Check_Permission() or Generate_User_Permission_list()) from common_functions.php
Now consider following scenarios:
A) User XXX logs in and clicks on AppOne_Menu_PQR from his personalized dashboard, then follows through all the screens and instructions. This is a series of 8-10 page requests (screens) and it is interactive. After this is over, user XXX clicks on AppTwo_Menu_DEF from his personalized dashboard and, as before, follows through all the screens and instructions (about 8-10 pages/screens). Then user XXX logs off.
B) User XXX logs in and does everything mentioned in scenario A. At the same time, user YYY also logs in (from some other client machine) and does similar things to those mentioned in scenario A.
For scenario A, it is the same session. For scenario B, there are two different sessions.
Assume that all the menu options call Generate_User_Permission_list() and Generate_Footer() or many menu options call Generate_City_Combo().
So how many times will PHP execute/include common_functions.php: per page request? Per session? Or per PHP startup/shutdown? My understanding is that common_functions.php will be executed once for EVERY page request/cycle/load/screen, right? Basically once for each and every interaction.
Remember, functions like Generate_City_Combo() or Generate_Footer() produce the same output or do the same thing irrespective of who is calling or when.
I would like to restrict this to once per application startup and shutdown.
These are just examples. My actual problem is much more complex and involved. In my applications, I would like to call Application_Startup() routines just once, which will create the ideal environment (all lookup and reference data structures, read-only data, the security matrix, menu options, context-sensitive business execution logic, etc.). After that, requests coming to the server need not spend any time or resources creating the environment but can instantly refer to the already-created environment.
Is this feasible in PHP? How? Could you point me to someplace or some books which explain the internal workings of PHP?
Thanks in advance.
PHP processes each HTTP request in a completely separate frame of execution - there is no persistent process running to service them all. (Your webserver is running, but each time it loads a PHP page, a separate instance of the PHP interpreter is invoked.)
If the time it takes for your desired persistent areas to be generated is significant, you may wish to consider caching the output from those scripts on disk and loading the cached version first if it is available (and not out of date).
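A rough sketch of that disk-caching approach, reusing one of the functions named in the question and assuming it echoes its output (the cache path and TTL are arbitrary):

    <?php
    $cacheFile = __DIR__ . '/cache/city_combo.html';
    $ttl       = 3600;                        // regenerate at most once an hour

    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        echo file_get_contents($cacheFile);   // serve the cached copy
    } else {
        ob_start();
        Generate_City_Combo();                // the expensive generation
        $html = ob_get_clean();
        file_put_contents($cacheFile, $html, LOCK_EX);
        echo $html;
    }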
I would say that you are likely prematurely optimizing, but there is hope.
You very frequently want multiple copies of your compiled code in memory since you want stability per request; you don't want separate requests operating in the same memory space and running the risk of race conditions or data corruption!
That said, there are numerous PHP Accelerators out there that will pre-compile PHP code, greatly speeding up include and require calls.
PHP (in almost all cases) is page-oriented. There is no Application_Startup() that will maintain state across HTTP requests.
You can sometimes emulate this by loading/unloading serialized data from a database or $_SESSION, but there is overhead involved. Also, there are cases where a memcached server can optimize this as well, but you typically can't use those with typical virtual hosting services like cPanel.
If I had to build an app like you are talking about, I would serialize the user's choices into the session, and then save whatever needs to persist between sessions in a database.
There are several ORM modules for PHP like Doctrine which simplify object serialization to a database.
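As an illustration of the memcached approach mentioned above (the server address, key and TTL are placeholders, and Generate_User_Permission_list() from the question is assumed to return its data rather than echo it):

    <?php
    $memcached = new Memcached();
    $memcached->addServer('127.0.0.1', 11211);

    $permissions = $memcached->get('user_permission_list');
    if ($permissions === false) {
        $permissions = Generate_User_Permission_list();              // expensive build
        $memcached->set('user_permission_list', $permissions, 600);  // cache for 10 minutes
    }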
I'm necromancing here, but with the advent of pthreads, it seems like there may be a stab in the direction of an actual solution for this, rather than just having to say, in effect, "No, you can't do that with PHP."
A person could basically create their own multi-threaded web server in PHP, just with the CLI tools, the socket_* functions and pthreads. Just listen on port 80, add requests to a request queue, and launch some number of worker threads to process the queue.
The number of workers could be managed based on the request queue length and the operating system's run queue length. Every few seconds, the main thread could pass through a function to manage the size of the worker pool. If the web request queue length was greater than some constant times the operating system's run queue length and the number of workers was less than a configured maximum, it could instantiate another worker thread. If the web request queue length was less than some other (lower) constant times the OS's run queue length and the number of workers was greater than a configured minimum, it could tell one of the worker threads to die when it finishes its current request. The constants and configured values could then be tuned to maximize overall throughput for the server. Something like that.
You'd have to do all your own URI parsing, and you'd have to piece together the HTTP response yourself, etc., but the worker threads could instantiate objects that extend Threaded, or reuse previously instantiated Threaded objects.
Voilà: PHP Tomcat.