PHP requests one by one or simultaneously

I have got some questions about PHP and how requests work under the hood.
1) Let's say I wrote my PHP application and uploaded it to a server. There's a function I wrote, and when a user visits the route that executes that function, something happens.
Question is: if one user makes the request and another user also makes a request, does the second user have to wait until the first user's request is done? (By "request is done" I mean until the function I wrote has executed all the way through.) Is my guess correct that the second request never starts until the first one is done, or does it not matter which function gets executed?
2) I have my PHP application. Imagine two people make a request at the same time that updates data in the database. Let's say I use load balancers: one user's request goes to balancer1 and the other's goes to balancer2. What I want is that if the first user's call updates the database, the second user's request must stop immediately (it shouldn't perform the update).
The scenario is that I have a JWT in my database which is used to make requests to a third-party tool, and it expires after 1 hour. Say the hour has passed. If one user makes a call to refresh the token and, along the way, a second user also makes a call to refresh it, the second user will overwrite the token and the first user's token will become invalid, which is bad.

PHP can handle multiple requests at the same time, but requests from the same user will be processed one by one if the user's PHP session is locked by the first request. The second request will only proceed once the session is closed.
For example, if you run a PHP script with sleep(30) in one browser tab:
<?php
session_start(); // acquires the session file lock
sleep(30);       // the lock is held until this script finishes (or session_write_close() is called)
And another script in another tab:
<?php
session_start(); // blocks here until the first request releases the session lock
echo 'hello';
The second script won't be executed until the first one is finished.
It's important to note this behavior because sessions are used in almost every app.
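A minimal sketch of the usual workaround, assuming the script only needs the session briefly: read what you need, then call session_write_close() before the slow work so that parallel requests from the same browser are not serialized:
<?php
session_start();                    // lock the session only long enough to read it
$userId = $_SESSION['user_id'] ?? null;
session_write_close();              // release the lock; other requests can now proceed

sleep(30);                          // slow work runs without holding the session lock

session_start();                    // reopen briefly if you need to write afterwards
$_SESSION['last_run'] = time();
session_write_close();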

If you have a route served by a controller function, there is a separate instantiation of the controller for each request. For example, if user A and user B request the same route laravel.com/stackoverflow, the controller responds to each request independently, no matter how many users are requesting at the same time. The same principle applies to processes for any service: PHP spawns a worker process (or thread, depending on the server setup) each time a script needs to be processed, and likewise Laravel instantiates the controller for each request.
For the same user sending multiple requests, they will still be processed as described in point 1.
If you want to process particular requests one by one, you can queue the jobs. For example, say you want to process a payment and 5 requests arrive at once. The controller will accept all of them simultaneously, but the controller function can dispatch a queued job for each, and those jobs are processed one by one (see the sketch below).
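A minimal sketch of that pattern in Laravel, assuming a hypothetical ProcessPayment job class and a queue driver configured with a single worker (php artisan queue:work), so the jobs run strictly one after another:
<?php
// app/Jobs/ProcessPayment.php (hypothetical job class)
namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ProcessPayment implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public int $paymentId) {}

    public function handle(): void
    {
        // the actual payment work runs here, one job at a time per worker
    }
}

// In the controller: respond to the HTTP request immediately and let the worker run the job.
ProcessPayment::dispatch($payment->id);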
As for two people requesting the same route that performs a DB update, you can read a nice article here about optimistic and pessimistic locking.
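A minimal sketch of the optimistic approach applied to the token scenario from the question, assuming an existing PDO connection $pdo, a hypothetical api_tokens table, and a hypothetical fetchTokenFromThirdParty() helper. The conditional UPDATE only matches if the token is still the one we read, so whichever concurrent request loses the race simply re-reads the winner's token instead of overwriting it:
<?php
$row = $pdo->query("SELECT id, token, expires_at FROM api_tokens WHERE id = 1")->fetch();

if (strtotime($row['expires_at']) < time()) {
    $newToken = fetchTokenFromThirdParty(); // hypothetical helper

    $stmt = $pdo->prepare(
        "UPDATE api_tokens
            SET token = :new, expires_at = :exp
          WHERE id = 1 AND token = :old"    // the version check: only update if unchanged
    );
    $stmt->execute([
        ':new' => $newToken,
        ':exp' => date('Y-m-d H:i:s', time() + 3600),
        ':old' => $row['token'],
    ]);

    if ($stmt->rowCount() === 0) {
        // Another request refreshed the token first; use theirs instead.
        $newToken = $pdo->query("SELECT token FROM api_tokens WHERE id = 1")->fetchColumn();
    }
}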

I should be voting to close this - it's way too broad... but I'll give it a go.
If the requests depend on a resource which can only perform one task at a time, then they cannot "run" concurrently. You may well have a single CPU core or a single disk - however, at the level of the HTTP request (in the absence of code applying mutex locking) they will appear to run at the same time - that is what multi-tasking is all about. The execution thread will often be delayed waiting for something else to happen, and at that point the OS task scheduler will check whether there are any other tasks waiting to run. You can easily test this yourself:
<?php
$started=time();
sleep(20);
print "Ran for " . (time() - $started) " seconds";
(try accessing this in different browser windows around the same time - or in 2 iframes on the same window)
Compare this with:
<?php
$started=time();
$fh = fopen("/tmp/concurrency_test", "w");
flock($fh, LOCK_EX); // waits here if another request already holds the lock
sleep(20);
flock($fh, LOCK_UN);
print "Ran for " . (time() - $started) . " seconds";
This also demonstrates just one of the reasons why you should not use flat files for storing data on your server. Note that the default session handler in PHP uses file-based locking for as long as the script has the session data open.
Databases employ a variety of strategies to avoid reverting to single-operation queuing - most commonly versioning. That does not address the problem you describe: 2 clients should never be using the same session token - that is why the session token is separate from the credentials in a well-designed system.

Related

Laravel events fired from different HTTP requests sync or async

Two HTTP requests are received by a server running a PHP app that uses Laravel 5.2. Both fire the same type of Event, and both events are intercepted by a single Listener.
Q1: Will the events be processed one after another in the order they were received by the Listener (synchronously), or will they be processed concurrently?
Q2: Is there a way to synchronize a function between requests (or some other way to do it, if the answer to Q1 is "sync")? I mean: to be sure that, no matter how many requests are received simultaneously, the function can be executed by only one request at a time.
UPD. The issue I'm trying to solve: my app has to authenticate against a third-party service. It is critical to establish only one session, which will then be used by other parts of the application. I want to store the session's access token in the DB. The problem is that this operation is not atomic:
1. Try to get token from DB.
2. If the token does not exist:
2A. Authenticate and receive token.
2B. Store token in DB.
3. Return token
Events are not a good way to fire functions in series. However, in PHP (and also JavaScript) event callbacks are executed in the order their events were triggered (so 1 -> 2 results in a -> b).
You'll have to elaborate on why you want to execute a function by only one request at a time. Likely you are looking for a locking or transaction mechanism, which is an RDBMS/SQL feature that prevents editing of records while they have not yet been committed. That way, when 2 requests happen to reach the same PHP function at the same time, you can have them wait on the database until a certain transaction completes, so that no overlap in reads or writes can take place.
See https://laravel.com/docs/5.8/queries#pessimistic-locking and https://laravel.com/docs/5.8/database#database-transactions for the Laravel implementation. There is more information on the MySQL website (assuming InnoDB is being used) :
https://dev.mysql.com/doc/refman/8.0/en/innodb-locking-reads.html
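A minimal sketch of the pessimistic variant for the token case, assuming a hypothetical api_tokens table and a hypothetical fetchTokenFromThirdParty() helper, using Laravel's query builder. The row is locked inside a transaction, so a second concurrent request blocks on the lock and, once the first request has committed, sees the fresh token instead of refreshing again:
<?php

use Illuminate\Support\Facades\DB;

$token = DB::transaction(function () {
    // lockForUpdate() issues SELECT ... FOR UPDATE on InnoDB, taking a row-level lock,
    // so concurrent requests queue up here instead of racing.
    $row = DB::table('api_tokens')->where('id', 1)->lockForUpdate()->first();

    if (strtotime($row->expires_at) >= time()) {
        return $row->token; // another request already refreshed it; reuse that token
    }

    $fresh = fetchTokenFromThirdParty(); // hypothetical helper

    DB::table('api_tokens')->where('id', 1)->update([
        'token'      => $fresh,
        'expires_at' => date('Y-m-d H:i:s', time() + 3600),
    ]);

    return $fresh;
});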

Executing a long action

I'm creating a PHP script that will allow a user to log into a website and execute database queries and do other actions that could take some time to complete. If the PHP script runs these actions and they take too long, the browser page times out on the user end and the action never completes on the server end. If I redirect the user to another page and then attempt to run the action in the PHP script, will the server run it even though the user is not on the page? Could the action still time out?
In the event of long-running server-side actions in a web application like this, a good approach is to separate the queueing of the actions (which should be handled by the web application) from the running of the actions (which should be handled by a different server-side application).
In this case it could be as simple as the web application inserting a record into a database table which indicates that User X has requested Action Y to be processed at Time Z. A back-end process (always-running daemon, scheduled script, whatever you prefer) would be constantly polling that database table to look for new entries. ("New" might be denoted by something like an "IsComplete" column in that table.) It could poll every minute, every few minutes, every hour... whatever is a comfortable balance between server performance and the responsiveness of an action beginning when it's requested.
Once the action is complete, the server-side application that ran the action would mark it as complete in the database and would store the results wherever you need them to be stored. (Another database table or set of tables? A file? etc.) The web application can check for these results whenever you need it to (such as on each page load, maybe there could be some sort of "current status" of queued actions on each page so the user can see when it's ready).
The reason for all of this is simply to keep the user-facing web application responsive. Even if you do things like increase timeouts, users' browsers may still give up. Or the users themselves may give up after staring at a blank page and a spinning cursor for too long. The user interface should always respond back to the user quickly.
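A minimal sketch of the polling side of that pattern, assuming an existing PDO connection $pdo, a hypothetical queued_actions table (id, user_id, action, requested_at, is_complete, result), and a hypothetical runAction() helper; the web application only ever inserts rows into that table:
<?php
// worker.php - run from cron or as a daemon, never from the web server

$pending = $pdo->query(
    "SELECT id, user_id, action FROM queued_actions WHERE is_complete = 0 ORDER BY requested_at"
);

foreach ($pending as $job) {
    // hypothetical helper that does the heavy lifting for this action
    $result = runAction($job['action'], $job['user_id']);

    $stmt = $pdo->prepare(
        "UPDATE queued_actions SET is_complete = 1, result = :result WHERE id = :id"
    );
    $stmt->execute([':result' => $result, ':id' => $job['id']]);
}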
You could look at using something like ignore_user_abort, but that is still not ideal in my opinion. I would look at deferring these actions and running them through a message queue. PHP has a Gearman extension - that is one option. Using a message queue scales well and does a better job of ensuring the requested actions actually get completed.
Lots on SO on the subject... Asynchronous processing or message queues in PHP (CakePHP) ...but don't use Cake :)
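A minimal sketch of the Gearman route, assuming the PECL gearman extension is installed and a gearmand server is listening locally; the web request only queues the job, and a separately started worker process executes it:
<?php
// In the web request: queue the job in the background and respond immediately.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('long_action', json_encode(['user_id' => 42]));

// worker.php - long-running worker started outside the web server.
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('long_action', function (GearmanJob $job) {
    $params = json_decode($job->workload(), true);
    // ...do the slow queries / heavy lifting here...
});
while ($worker->work());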
set_time_limit() is your friend.
If it were me, I would put a loading icon animation in the user interface telling them to wait. Then I would execute the "long process" using an asynchronous AJAX call that would then return an answer, positive or negative, that you would pass to the user through JavaScript.
Just like when you upload pictures to Facebook, you can tell the user what is going on. Very clean!

How should I make a long PHP request via AJAX, periodically check for status updates, and close the script if the request cancels?

Part of the PHP web app I'm developing needs to do the following:
Make an AJAX request to a PHP script, which could potentially take from one second to one hour, and display the output on the page when finished.
Periodically update a loading bar on the web page, defined by a status variable in the long running PHP script.
Allow the long running PHP script to detect if the AJAX request is cancelled, so it can shut down properly and in a timely fashion.
My current solution:
client.php: Creates an AJAX request to request.php, followed by one request per second to status.php until the initial request is complete. Generates and passes along a unique identifier (uid) in case multiple instances of the app are running.
request.php: Each time progress is made, saves the current progress percentage to $_SESSION["progressBar"][uid]. (It must run session_start() and session_write_close() each time.) When finished, returns the data that client.php needs.
status.php: Runs session_start(), returns $_SESSION["progressBar"][uid], and runs session_write_close().
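For reference, a minimal sketch of the request.php / status.php pair described above, assuming the uid arrives as a request parameter and a hypothetical doOneUnitOfWork() stands in for the real work:
<?php
// request.php - long-running job that periodically publishes its progress
$uid = $_GET['uid'];

for ($step = 1; $step <= 100; $step++) {
    doOneUnitOfWork();                      // hypothetical helper

    session_start();                        // reopen the session just long enough to write
    $_SESSION['progressBar'][$uid] = $step; // percentage complete
    session_write_close();                  // release the lock so status.php can read it
}

echo json_encode(['done' => true]);         // plus whatever data client.php needs

// status.php - cheap polling endpoint
session_start();
$progress = $_SESSION['progressBar'][$_GET['uid']] ?? 0;
session_write_close();
echo json_encode(['progress' => $progress]);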
Where it falls short:
My solution fulfills my first two requirements. For the third, I would like to use connection_aborted() in request.php to know if the request is cancelled. BUT, the docs say:
PHP will not detect that the user has aborted the connection until an attempt is made to send information to the client. Simply using an echo statement does not guarantee that information is sent, see flush().
I could simply give meaningless output, but PHP must send a cookie every time I call session_start(). I want to use the same session, BUT the docs say:
When using session cookies, specifying an id for session_id() will always send a new cookie when session_start() is called, regardless of if the current session id is identical to the one being set.
My ideas for solutions, none of which I'm happy with:
A status database, or writing to temp files, or a task management system. This just seems more complicated than what I need!
A custom session handler. This is basically the same as the above solution.
Stream both progress data and result data in one request. This solves everything, but I would essentially be re-implementing AJAX. That can't be right.
Please tell me I'm missing something! Why doesn't PHP know immediately when a connection terminates? Why must PHP resend the cookie, even when it is exactly the same? An answer to any of these questions will be a big help!
My sincere thanks.
Why not set a second session variable from status.php, consisting of the unique request identifier and an access timestamp?
If the client is closed, it stops polling status.php and that session variable stops being updated, which lets request.php shut down cleanly when the variable hasn't been updated for a certain amount of time.
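A minimal sketch of that heartbeat idea, reusing the uid convention from the question: status.php records when it was last polled, and request.php checks that timestamp inside its work loop and bails out if the client has gone quiet (the 10-second threshold and cleanUpAndExit() are hypothetical):
<?php
// status.php - besides returning progress, record the poll time as a heartbeat
session_start();
$_SESSION['lastPoll'][$_GET['uid']] = time();
$progress = $_SESSION['progressBar'][$_GET['uid']] ?? 0;
session_write_close();
echo json_encode(['progress' => $progress]);

// request.php - inside the work loop, check the heartbeat
session_start();
$lastPoll = $_SESSION['lastPoll'][$uid] ?? time();
session_write_close();

if (time() - $lastPoll > 10) { // no poll for 10 seconds: the client is probably gone
    cleanUpAndExit();          // hypothetical shutdown routine
}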

php, multithreading and other doubts

morning
I have some doubts about the way PHP works. I can't find the answer anywhere in books, so I thought I'd hit the stack ;)
so here goes:
let's assume we have one single server with PHP+Apache installed. Here are my beliefs:
1 - PHP can handle only one request at a time. It doesn't matter whether Apache can handle more than 1 thread at a time, because in the end the invoked PHP interpreter is single-threaded.
2 - from belief 1 it follows that if the server receives 4 calls at the very same time, these calls are queued up and executed 1 at a time. Whoever makes the request last gets the response last.
3 - from 1 and 2 it follows that if I cron-call a URL corresponding to a script that does some heavy-lifting/time-consuming stuff, I slow down the server until the moment the script returns.
What's true? What's false?
cheers
My crystal ball suggests that you are using PHP sessions and have simultaneous requests (either iframes or AJAX) getting queued. The problem is that the default session handler uses files, and session_start() locks the data file. You should read your session data quickly and then call session_write_close() to release the file.
I see no reason why PHP would not be able to handle multiple requests at the same time. That said, it may be semi-true when handling requests from a single client, depending on the type of script.
Many scripts use sessions. When session_start() is called, the session is opened and locked. When execution of the script ends, the session is closed and unlocked (this can also be done manually). When there are multiple requests for the same session, the first request opens and locks the session, and the second request has to wait until the session is unlocked. This might give the impression that multiple PHP scripts cannot be executed at the same time, but that is (partly) true only for requests that use the same session (in other words, requests from the same browser). Requests from two different clients (browsers) may be processed in parallel as long as they don't use resources (files, DB tables, etc.) that are locked by other requests.

Dealing with long server-side operations using ajax?

I've a particularly long operation that is going to get run when a user presses a button on an interface, and I'm wondering what would be the best way to indicate this back to the client.
The operation is populating a fact table for a number of years' worth of data, which will take roughly 20 minutes, so I'm not intending the interface to be synchronous. Even though it is generating large quantities of data server side, I'd still like everything to remain responsive, since the data for the month the user is currently viewing will be updated fairly quickly, which isn't a problem.
I thought about setting a session variable after the operation has completed and polling for that session variable. Is this a feasible way to do such a thing? However, I'm particularly concerned about the user navigating away or closing their browser, and then all status about the long-running job is lost.
Would it be better to perhaps insert a record somewhere, logging when the processing has started and finished? Then create some other sort of interface so the user (or users) can monitor the jobs that are currently executing/finished/failed?
Has anyone any resources I could look at?
How'd you do it?
The server side portion of code should spawn or communicate with a process that lives outside the web server. Using web page code to run tasks that should be handled by a daemon is just sloppy work.
You can't expect them to hang around for 20 minutes. Even the most cooperative users in the world are bound to go off and do something else, forget, and close the window. Allowing such long connection times screws up any chance of a sensible HTTP timeout and leaves you open to trivial DOS too.
As Spencer suggests, use the first request to start a process which is independent of the http request, pass an id back in the AJAX response, store the id in the session or in a DB against that user, or whatever you want. The user can then do whatever they want and it won't interrupt the task. The id can be used to poll for status. If you save it to a DB, the user can log off, clear their cookies, and when they log back in you will still be able to retrieve the status of the task.
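A minimal sketch of that first request, assuming an existing PDO connection $pdo, a hypothetical tasks table (id, user_id, status), and a hypothetical run_task.php worker script that does the actual work:
<?php
// start_task.php - the endpoint hit by the AJAX call
session_start();
$taskId = bin2hex(random_bytes(16));

// Record the task so its status can be looked up later, even after logout.
$stmt = $pdo->prepare("INSERT INTO tasks (id, user_id, status) VALUES (:id, :user, 'running')");
$stmt->execute([':id' => $taskId, ':user' => $_SESSION['user_id']]);

// Launch the worker detached from this request; discarding output lets exec() return immediately.
exec('php run_task.php ' . escapeshellarg($taskId) . ' > /dev/null 2>&1 &');

// Hand the id back so the client can poll for status.
echo json_encode(['taskId' => $taskId]);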
Sessions are not that reliable; I would probably design some sort of task list so I can keep records of tasks per user. With this design I would be able to show "done" tasks, to keep the user aware.
I would also move the long operation out of the web server's worker process. This is required because web servers can be restarted.
And, yes, I would request the status from the server every few dozen seconds with AJAX calls.
You can have a JS timer that periodically pings your server to see if any jobs are done. If the user goes away and comes back, you restart the timer. When a job is done, you indicate that to the user so they can click on the link and open the report (I would not recommend forcefully loading something, though it can be done).
From my experience, the best way to do this is to save on the server side which reports are running for each user, along with their statuses. The client then polls this status periodically.
Basically, instead of checkStatusOf(int session), have the client ask the server for getRunningJobsFor(int userId), returning all running jobs and their statuses.
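A minimal sketch of such an endpoint, assuming an existing PDO connection $pdo and reusing the hypothetical tasks table from the sketch above: it returns every job for the logged-in user rather than the status of a single one.
<?php
// jobs_status.php - polled periodically by the client
session_start();

$stmt = $pdo->prepare("SELECT id, status FROM tasks WHERE user_id = :user ORDER BY id DESC");
$stmt->execute([':user' => $_SESSION['user_id']]);

header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));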
