Two HTTP requests are received by a server running a PHP app that uses Laravel 5.2. Both fire the same type of Event, and both events are intercepted by a single Listener.
Q1: Will the events be processed one after another, in the order they were received by the Listener (synchronously), or will they be processed concurrently?
Q2: Is there a way (or, if the answer to Q1 is "synchronously", some other way) to synchronize a function between requests? I mean, to be sure that no matter how many requests are received simultaneously, the function can only be executed by one request at a time.
UPD. The issue I'm trying to solve: my app has to authenticate against a third-party service. It is critical to establish only one session, which will then be used by other parts of the application. I want to store the access token for that session in the DB. The operation is not atomic:
1. Try to get token from DB.
2. If the token does not exist:
2A. Authenticate and receive token.
2B. Store token in DB.
3. Return token
Events are not a good way to fire functions in series. However, in PHP (and also in JavaScript) event callbacks are executed in the order their events were triggered (so 1 -> 2 results in a -> b).
You'll have to elaborate on why you want a function to be executed by only one request at a time. Most likely you are looking for a locking or transaction mechanism, which is an RDBMS/SQL feature that prevents records from being edited while they have not yet been saved. That way, when two requests happen to reach the same PHP function at the same time, you can have them wait on the database until a certain transaction completes, so that no overlap in reads or writes can take place.
See https://laravel.com/docs/5.8/queries#pessimistic-locking and https://laravel.com/docs/5.8/database#database-transactions for the Laravel implementation. There is more information on the MySQL website (assuming InnoDB is being used):
https://dev.mysql.com/doc/refman/8.0/en/innodb-locking-reads.html
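For your token scenario, a minimal sketch of that approach could look like this, assuming a recent Laravel version, a pre-seeded service_tokens table, and a hypothetical authenticateWithThirdParty() helper (these names are mine, not from your code):
use Illuminate\Support\Facades\DB;

$token = DB::transaction(function () {
    // Other requests that reach this row block here until we commit (InnoDB row lock).
    $row = DB::table('service_tokens')
        ->where('service', 'third_party')
        ->lockForUpdate()
        ->first();

    if ($row && $row->token) {
        return $row->token;                     // step 1: token already stored
    }

    $fresh = authenticateWithThirdParty();      // step 2A (hypothetical helper)

    DB::table('service_tokens')
        ->where('service', 'third_party')
        ->update(['token' => $fresh]);          // step 2B

    return $fresh;                              // step 3
});
Note that lockForUpdate() only locks an existing row, so seed one row per service up front; otherwise two concurrent inserts can still race.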
Related
I have some questions about PHP and how requests work under the hood.
1) Let's say I wrote my PHP application and uploaded it to a server. Now there's a function I wrote, and if a user goes to the route that executes that function, something happens.
Question is: if one user makes the request and another user also makes the request, does the second user have to wait until the first user's request is done? (By "request is done" I mean until the function I wrote has executed all the way through.) Is that guess correct, or does the second request start regardless of whether the first one has finished?
2) I have my PHP application. Imagine two people make a request at the same time that updates (not inserts) data in the database. Let's say I use load balancers. If one user's request goes to balancer1 and the other user's request goes to balancer2, what I want is: if the first user's call updates the database, the second user's request has to stop immediately (it shouldn't update it again).
The scenario is that I have a JWT token in my database which is used to make requests to a third-party tool. It expires after 1 hour. Let's say 1 hour has passed. If one user makes a call to update the token, and along the way a second user also makes a call to update the token, the second user will update the token and the first user's token will become invalid, which is bad.
PHP can handle multiple requests at the same time, but requests from the same user will be processed one by one if the user's PHP session is locked by the first request. The second request will only proceed once the session has been closed.
For example, if you run a PHP script with sleep(30) in one browser tab:
<?php
session_start();  // locks this user's session file
sleep(30);
And another script in another tab:
<?php
session_start();  // blocks here until the first script releases the session
echo 'hello';
The second script won't be executed until the first one is finished.
It's important to note this behavior because sessions are used in almost every app.
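If a long request only reads the session, you can release the lock early with session_write_close() so it stops blocking other requests from the same user. A minimal sketch, assuming the default file-based session handler:
<?php
session_start();                    // acquires the session lock
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;
session_write_close();              // releases the lock; later $_SESSION writes won't be saved

sleep(30);                          // long work no longer blocks the user's other requests
echo 'hello, user ' . $userId;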
If you have a route which is served by a controller function, a separate controller instance is created for each request. For example, if user A and user B request the same route laravel.com/stackoverflow, the controller responds to each request independently of how many users are requesting it at the same time. You can think of it like processes for any service: Laravel runs on PHP, the web server hands each request to a PHP worker process, and in the same way Laravel instantiates the controller for each request.
For the same user sending multiple requests, they will still be processed as in point 1.
If you want particular requests to be processed one by one, you can queue jobs. For example, let's say you want to process a payment and 5 requests come in. The controller will accept all the requests simultaneously, but the controller function can dispatch a queued job, and those jobs are processed one by one (see the sketch after this list).
Considering two people requesting the same route which runs a DB update, you can read a nice article here about optimistic and pessimistic locking.
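As a rough sketch of point 3 (ProcessPayment is a hypothetical job class created with php artisan make:job and implementing ShouldQueue; a configured queue connection and a worker started with php artisan queue:work are assumed):
// Inside the controller method handling the payment request
public function pay(Request $request)
{
    // Hand the slow work to the queue and respond immediately;
    // the queue worker processes the jobs one by one.
    dispatch(new ProcessPayment($request->user(), $request->input('amount')));

    return response()->json(['status' => 'queued']);
}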
I should be voting to close this - it's way too broad... but I'll give it a go.
If the requests depend on a resource which can only perform one task at a time, then they cannot "run" concurrently. It is quite possible that you have a single CPU core or a single disk; however, at the level of the HTTP request (in the absence of code to apply mutex locking) they will appear to run at the same time - that is what multi-tasking is all about. The execution thread will often be delayed waiting for something else to happen, and at that point the OS task scheduler will check whether there are any other tasks waiting to be run. You can easily test this yourself:
<?php
$started = time();
sleep(20);
print "Ran for " . (time() - $started) . " seconds";
(try accessing this in different browser windows around the same time - or in 2 iframes on the same window)
Compare this with:
<?php
$started = time();
$fh = fopen("/tmp/concurency_test", "w");
flock($fh, LOCK_EX);   // a second request blocks here until the first releases the lock
sleep(20);
flock($fh, LOCK_UN);
fclose($fh);
print "Ran for " . (time() - $started) . " seconds";
This also demonstrates just one of the reasons why you should not use flat files for storing data on your server. Note that the default session handler in PHP uses file-based locking for as long as the session data is held open by the script.
Databases employ a variety of strategies to avoid falling back to single-operation queuing - most commonly versioning. That does not address the problem you describe: two clients should never be using the same session token - which is why the session token is kept separate from the credentials in a well-designed system.
Maybe it's a stupid question, but anyway, here is my problem. I have multiple classes in my project.
At the beginning, the constructor of the class Calculate($param1, $param2, ...) is called.
This Calculate class is called multiple times via jQuery events (click, change, ...) depending on which new form field is filled. The prices and values are calculated in the background by PHP and shown on the website via AJAX (live while typing).
The connection between the AJAX and the Calculate class is a single file (jsonDataHandler). This file receives the POST values from the AJAX call and returns a JSON string for the website output. So every time I call this jsonDataHandler, a new Calculate object is created with the updated values, but never the first created object. I am now experiencing multiple problems, as you may imagine.
How can I always access the same object, without creating a new one?
EDIT: for technical reasons, I cannot use sessions.
Here is the PHP application lifetime:
The browser sends an HTTP request to the web server
The web server (for example Apache) accepts the request and launches your PHP application (in this case your jsonDataHandler file)
Your PHP application handles the request and generates the output
Your PHP application terminates
The web server sends the response generated by the PHP application to the browser
So the application "dies" at the end of each request; you cannot create an object which will persist between requests.
Possible workarounds:
Persist the data on the server - use sessions or the database (as you said this is not an option for you)
Persist the data on the client - you still create your object for each request, but you keep additional information client-side to be able to restore the state of your object (see more on this below)
Use something like reactphp to keep your application running persistently (this may also not be an option, because you will need a different environment). A variant of this option: switch to another technology which doesn't re-launch the server-side application on each request (node.js, python+flask, etc.).
So, if you can't persist the data on the server, the relatively simple option is to persist the data on the client.
But this will only work if you need to keep the state of your calculator for each individual client (vs keeping the same state for all clients, in this case you do need to persist data on the server).
The flow with client-side state can be this:
Client sends the first calculation request, for example param1=10
Your scripts responds with value=100
Client-side code stores both param1=10 and param1_value=100 into cookies or browser local storage
Client sends the next calculation, for example param2=20; this time the client-side code finds the previous results and sends everything together (param1=10&param1_value=100&param2=20)
On the server you can now re-create the whole sequence of calculations, so you get the same result as if you had a persistent Calculate object
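A rough sketch of what the server side of that could look like (the parameter naming convention and the results() method are assumptions, not your actual code):
<?php
// jsonDataHandler sketch: rebuild the calculation from everything the client sends back
$params = [];
foreach ($_POST as $key => $value) {
    if (strpos($key, 'param') === 0) {   // collect param1, param1_value, param2, ...
        $params[$key] = $value;
    }
}

$calculate = new Calculate($params);      // re-created on every request from the full history
echo json_encode($calculate->results());  // hypothetical method returning the output values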
Maybe you should try saving the values of the parameters of the Calculate object in a database, and every time you make an AJAX call, take the latest values from the DB.
I'm creating a PHP script that will allow a user to log into a website and execute database queries and do other actions that could take some time to complete. If the PHP script runs these actions and they take too long, the browser page times out on the user end and the action never completes on the server end. If I redirect the user to another page and then attempt to run the action in the PHP script, will the server run it even though the user is not on the page? Could the action still time out?
In the event of long-running server-side actions in a web application like this, a good approach is to separate the queueing of the actions (which should be handled by the web application) from the running of the actions (which should be handled by a different server-side application).
In this case it could be as simple as the web application inserting a record into a database table which indicates that User X has requested Action Y to be processed at Time Z. A back-end process (always-running daemon, scheduled script, whatever you prefer) would be constantly polling that database table to look for new entries. ("New" might be denoted by something like an "IsComplete" column in that table.) It could poll every minute, every few minutes, every hour... whatever is a comfortable balance between server performance and the responsiveness of an action beginning when it's requested.
Once the action is complete, the server-side application that ran the action would mark it as complete in the database and would store the results wherever you need them to be stored. (Another database table or set of tables? A file? etc.) The web application can check for these results whenever you need it to (such as on each page load, maybe there could be some sort of "current status" of queued actions on each page so the user can see when it's ready).
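A rough sketch of what that back-end poller could look like (the table, column, and function names here are made up for illustration):
<?php
// worker.php - run from cron or as a long-running CLI process, not via the web server
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

while (true) {
    $stmt = $pdo->query(
        "SELECT id, user_id, action, payload
           FROM queued_actions
          WHERE is_complete = 0
          ORDER BY requested_at
          LIMIT 1"
    );
    $job = $stmt->fetch(PDO::FETCH_ASSOC);

    if ($job) {
        runAction($job);   // hypothetical function that performs the actual work
        $pdo->prepare("UPDATE queued_actions SET is_complete = 1 WHERE id = ?")
            ->execute([$job['id']]);
    } else {
        sleep(60);         // nothing queued; check again in a minute
    }
}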
The reason for all of this is simply to keep the user-facing web application responsive. Even if you do things like increase timeouts, users' browsers may still give up. Or the users themselves may give up after staring at a blank page and a spinning cursor for too long. The user interface should always respond back to the user quickly.
You could look at using something like ignore_user_abort, but that is still not ideal in my opinion. I would look at deferring these actions and running them through a message queue. PHP has a Gearman extension - that is one option. Using a message queue scales well and does a better job of ensuring the requested actions actually get completed.
Lots on SO on the subject... Asynchronous processing or message queues in PHP (CakePHP) ...but don't use Cake :)
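For illustration, a bare-bones Gearman sketch (requires the pecl gearman extension and a running gearmand server; 'run_report' is a made-up job name):
<?php
// Web side: queue the work and return immediately
$client = new GearmanClient();
$client->addServer();                  // defaults to 127.0.0.1:4730
$client->doBackground('run_report', json_encode(['user_id' => 42]));
echo 'queued';

<?php
// Worker side: a separate CLI process that performs the slow work
$worker = new GearmanWorker();
$worker->addServer();
$worker->addFunction('run_report', function (GearmanJob $job) {
    $data = json_decode($job->workload(), true);
    // ... do the long-running work here ...
});
while ($worker->work());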
set_time_limit() is your friend.
If it were me, I would put a loading icon animation in the user interface telling them to wait. Then I would execute the "long process" using an asynchronous AJAX call that would then return an answer, positive or negative, that you would pass to the user through JavaScript.
Just like when you upload pictures to Facebook, you can tell the user what is going on. Very clean!
I am trying to find a solution to prevent race conditions in my application logic (specifically when renewing an OAuth access token) and my back-end database happens to be mongodb.
Coming from a MySQL background, I'm used to using GET_LOCK and its related functions to handle blocking in PHP. Does Mongo have any analog to MySQL's GET_LOCK function, or will I have to use PHP's file locking or something similar?
Is flock() a good (or proper) alternative for this situation, or is that meant only for use when reading and writing to files?
Edit:
The race condition I am trying to prevent is the following:
Instance A notices OAuth access token nearing expiration
Instance B notices OAuth access token nearing expiration
Instance A requests refreshed OAuth access token from remote server and obtains one
Instance B requests a refreshed OAuth access token from the same server and is rejected (the server potentially invalidates the access token from step 3 as a security precaution)
Instance A saves result back to database
Instance B saves result back to database
If you want to simulate a named mutex or lock using MongoDB, I would suggest using findAndModify: create a special collection for it holding a single document; you can even call it db.my_lock.
db.my_lock.save({"IMREFRESHINGAUTHTOKEN":false});
Now, between steps 2 and 3 add a findAndModify to grab the "lock":
db.my_lock.findAndModify({
    query:  {"IMREFRESHINGAUTHTOKEN": false},
    update: {$set: {"IMREFRESHINGAUTHTOKEN": true}, ...}
});
If you get to the "lock" first, you will get back this document (and you will be the one to set the first field to true). I recommend setting a second field with a timestamp, connection number, process ID, or some other identifier that will allow cleaning up after a crashed process, so it won't hold the lock forever.
If you "lose" the race you will get back nothing that matches "IMREFRESHINGAUTHTOKEN":false and you'll know you lost the race and give up (or check the timestamp of the lock and maybe try to see if it's old and stale).
This describes a single stand-alone lock on the whole "collection". Of course, you can instead implement this as an extra field on each stored OAuth token and have as many tokens being refreshed at a time as there are threads noticing they are expiring.
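For completeness, roughly the same thing from PHP using the mongodb/mongodb library, where findOneAndUpdate plays the role of the shell's findAndModify (database and collection names follow the example above):
<?php
$lock = (new MongoDB\Client)->mydb->my_lock;

// Returns the pre-update document if we grabbed the lock, or null if we lost the race.
$previous = $lock->findOneAndUpdate(
    ['IMREFRESHINGAUTHTOKEN' => false],
    ['$set' => ['IMREFRESHINGAUTHTOKEN' => true,
                'lockedAt' => new MongoDB\BSON\UTCDateTime()]]
);

if ($previous !== null) {
    // We hold the lock: refresh the OAuth token here, then release.
    $lock->updateOne(
        ['IMREFRESHINGAUTHTOKEN' => true],
        ['$set' => ['IMREFRESHINGAUTHTOKEN' => false]]
    );
} else {
    // Someone else is refreshing; give up, or inspect lockedAt for staleness.
}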
Hope this helps.
I have done some Google searching on this topic and couldn't find the answer to my question.
What I want to achieve is the following:
the client makes an asynchronous call to a function on the server
the server runs that function in the background (because that function is time-consuming), and the client is not left hanging in the meantime
the client repeatedly makes calls to the server requesting the status of the background job
Can you please give me some advice on resolving my issue?
Thank you very much! ^-^
You are not specifying what language the asynchronous call is in, but I'm assuming PHP on both ends.
I think the most elegant way would be this:
The HTML page loads and defines a random key for the operation (e.g. using rand() or an already available session ID; be careful though, the same user could be starting two operations)
The HTML page makes an Ajax call to the PHP script start_process.php
start_process.php uses exec() to launch /path/to/long_process.php; see the User Contributed Notes on exec() for suggestions on how to start a process in the background. Which one is right for you depends mainly on your OS.
long_process.php frequently writes its status to a status file named after the random key that your Ajax page generated
The HTML page makes frequent calls to show_status.php, which reads the status file and returns the progress.
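start_process.php could look roughly like this on a Unix-like host (paths are placeholders, and here the key is generated server-side rather than by the page, for brevity):
<?php
// start_process.php (sketch): launch the long job in the background and return its key
$key = bin2hex(random_bytes(16));                    // the random key for this operation

$cmd = sprintf(
    'php /path/to/long_process.php %s > /dev/null 2>&1 &',
    escapeshellarg($key)
);
exec($cmd);                                          // returns immediately; the job keeps running

echo json_encode(['key' => $key]);                   // the page then polls show_status.php?key=...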
Have a Google for long-running PHP processes (be warned that there's a lot of bad advice out there on the topic - including the note referred to by Pekka - it will work on Microsoft but will fail in unpredictable ways on anything else).
You could develop a service which responds to requests over a socket (your client would use fsockopen to connect). A simple way of achieving this would be to use Aleksey Zapparov's socket server (http://www.phpclasses.org/browse/package/5758.html), which handles requests coming in via a socket; however, since this runs as a single thread, it may not be appropriate for something which requires a lot of processing. Alternatively, if you are on a non-Microsoft system, you could hang your script off [x]inetd; however, you'll need to do some clever stuff to prevent it terminating when the client disconnects.
To keep the thing running after your client disconnects, the PHP code must be run from the standalone PHP executable (not via the web server). Spawn a process in a new process group (see posix_setsid() and pcntl_fork()). To enable the client to come back and check on progress, the easiest way is to have the server write its status out to somewhere the client can read.
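A bare-bones sketch of that detach step (requires the pcntl and posix extensions, CLI SAPI only; the status file path is arbitrary):
<?php
// Run from the standalone PHP executable; detaches so it survives the client disconnecting
$pid = pcntl_fork();
if ($pid === -1) {
    exit(1);                      // fork failed
} elseif ($pid > 0) {
    exit(0);                      // parent exits; the child keeps running
}

posix_setsid();                   // child becomes leader of a new session / process group

// ... long-running work goes here, periodically writing its status
//     somewhere the client can read, e.g. a file or database row ...
file_put_contents('/tmp/job_status', 'running');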
C.
Ajax call runs longRunningMethod() and gets back an identifier (e.g. an id)
Server runs the method and sets a key in, e.g., shared memory
Client calls checkTask(id)
Server looks up the key in shared memory and checks for a ready status
[repeat 3 and 4 until the long-running method is finished]
longRunningMethod() finishes and sets its state to finished in shared memory.
All Ajax calls are by definition asynchronous.
You could (although it is not a strictly necessary step) use AJAX to initiate the call, and the script could then create a reference to the status of the background job in shared memory (or even a temporary entry in an SQL table, or even a temp file), in the form of a unique job ID.
The script could then kick off your background process and immediately return the job ID to the client.
The client could then call the server repeatedly (via another AJAX interface, for example) to query the status of the job, e.g. "in progress", "complete".
If the background process to be executed is itself written in PHP (e.g. a command line PHP script) then you could pass the job id to it and it could provide meaningful progress updates back to the client (by writing to the same shared memory area, or database table).
If the process to be executed is not itself written in PHP, then I suggest wrapping it in a command-line PHP script so that it can monitor when the process has finished running (and check the output to see whether it was successful) and update the status entry for that task appropriately.
Note: Using shared memory for this is best practice, but it may not be available if you are on shared hosting, for example. Don't forget you'll want a means of cleaning up old status entries, so I would store "started_on"/"completed_on" timestamp values for each one and have it delete stale entries (e.g. those with a completed_on timestamp more than X minutes old; ideally it should also check for jobs that started some time ago but were never marked as completed, and raise an alert about them).