I'm developing an application that will use a server to sync data; the server will use PHP and a MySQL database.
For the sync process I'm thinking of a 3-way communication:
1 - The client sends the data to the server, the server handles the data and replies to the client with an OK or ERROR; at this point the server should begin the transaction.
2 - If the client receives OK, it just updates its internal info (updates the update date and deletes some rows from the database).
3 - The client sends another request to the server (OK or CANCEL); when the server receives this new request, it commits or rolls back the transaction.
Is this possible? Starting the transaction in one request and committing it in another?
If yes, how? Sessions?
Or should I do this in another way?
In PHP you shouldn't store objects in the session (citation needed). So what I would do is store the data in the session, and when you receive the confirmation from the client, retrieve the data and assemble the query (mysqli or PDO, whichever you like).
This will work unless you need to use some data from the database (e.g. last_insert_id). If that's the case I don't know how to do it, and I'm tempted to say it's impossible (I don't remember exactly, but I think PHP closes the DB connection when the script ends).
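A minimal sketch of that two-request flow, assuming a PDO connection and a hypothetical sync_data table; the database is only touched on the confirmation request, so the transaction begins and ends inside a single script run:

<?php
// request_1.php - first request: validate the payload, stash it in the
// session, and answer OK/ERROR without touching the database yet.
session_start();
$data = json_decode(file_get_contents('php://input'), true);
if ($data === null) {
    echo json_encode(array('status' => 'ERROR'));
    exit;
}
$_SESSION['pending_sync'] = $data;
echo json_encode(array('status' => 'OK'));

<?php
// request_2.php - confirmation request: only now run the queries, inside
// one transaction, so OK/CANCEL maps directly to commit/rollback.
session_start();
$confirm = isset($_POST['action']) ? $_POST['action'] : 'CANCEL';
if ($confirm === 'OK' && isset($_SESSION['pending_sync'])) {
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass',
        array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
    $pdo->beginTransaction();
    try {
        $stmt = $pdo->prepare('INSERT INTO sync_data (payload) VALUES (?)'); // table name is an assumption
        foreach ($_SESSION['pending_sync'] as $row) {
            $stmt->execute(array(json_encode($row)));
        }
        $pdo->commit();
    } catch (PDOException $e) {
        $pdo->rollBack();
    }
}
unset($_SESSION['pending_sync']);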
Related
Two HTTP requests are received by a server with a PHP app that uses Laravel 5.2. Both fire the same type of Event. Both events are intercepted by a single Listener.
Q1: Will the events be processed one after another in the order they were received by the Listener (synchronously), or will they be processed concurrently?
Q2: Is there a way (or another way to do it, if the answer to Q1 is "synchronously") to synchronize a function between requests? I mean, to be sure that no matter how many requests are received simultaneously, the function can be executed by only one request at a time.
UPD. The issue I'm trying to solve: my app has to authenticate against a third-party service. It is critical to establish only one session, which will then be used by other parts of the application. I want to store the access token for that session in the DB. So this operation is not atomic:
1. Try to get the token from the DB.
2. If the token does not exist:
2A. Authenticate and receive a token.
2B. Store the token in the DB.
3. Return the token.
Events are not a good way to fire functions in series. However, in PHP (and also JavaScript) event callbacks are executed in the order their events were triggered (so 1 -> 2 results in a -> b).
You'll have to elaborate on why you want a function to be executed by only one request at a time. Likely you are looking for a locking or transaction mechanism, which is an RDBMS/SQL feature that prevents editing of records while they have not yet been saved. That way, when 2 requests happen to reach the same PHP function at the same time, you can have them wait on the database until a certain transaction completes, so that no overlap in reads or writes can take place.
See https://laravel.com/docs/5.8/queries#pessimistic-locking and https://laravel.com/docs/5.8/database#database-transactions for the Laravel implementation. There is more information on the MySQL website (assuming InnoDB is being used):
https://dev.mysql.com/doc/refman/8.0/en/innodb-locking-reads.html
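For the token scenario above, a rough sketch of how a transaction plus lockForUpdate() could serialize the check-then-create steps; the tokens table and the authenticateWithService() helper are assumptions for illustration, not part of the question:

use Illuminate\Support\Facades\DB;

// Hypothetical tokens table with columns: service, token.
// The locked row makes a second request wait until the first one commits,
// so only one request ends up calling the third-party service.
$token = DB::transaction(function () {
    $row = DB::table('tokens')
        ->where('service', 'third_party')
        ->lockForUpdate()
        ->first();

    if ($row && $row->token) {
        return $row->token;                 // 1. token already in the DB
    }

    $token = authenticateWithService();     // 2A. hypothetical helper performing the login

    if ($row) {                              // 2B. store the token
        DB::table('tokens')->where('service', 'third_party')->update(array('token' => $token));
    } else {
        DB::table('tokens')->insert(array('service' => 'third_party', 'token' => $token));
    }

    return $token;                           // 3. return the token
});

Note that the lock only fully serializes things once a tokens row exists; seeding the table with an empty row for the service (or using a separate advisory lock) avoids the corner case where two requests both see no row.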
I'm trying to send certain data from iOS to an online MySQL database. PHP is used on the server to handle receiving and inserting the data.
The thing is that I have several data packages, and the key is to send them one by one, which means I need a mechanism to make the second data package in the queue wait until iOS has received feedback from the server confirming that the first set of data has already been stored in the database.
I initially tried creating a serial dispatch queue, aiming to have the iOS app execute the uploading work in sequence. Although the iOS side did carry out the work in sequence, each task simply "finished" once it had sent out its data package, without waiting to see whether the data had been inserted into the database. The problem is that there will always be some time lapse between sending out the data and the data being fully saved to MySQL on the server, due to issues like the network connection.
So the result is that the data may not be saved in the desired sequence; some later data may be saved earlier than previous data.
I guess what is missing is a "feedback" mechanism from the server side to the iOS side.
Can anybody suggest a way to realize this feedback mechanism, so I can control the serial sequence of the uploading tasks?
Thank you very much!
Regards,
Paul
If you are sending data to a server, then most available frameworks offer a callback. With AFNetworking (or its Swift successor, Alamofire) it would look like this:
[[ConnectionManager instance] GET:@"link" parameters:nil
    success:^(AFHTTPRequestOperation *operation, id responseObject)
    {
        // the server confirmed the data was stored - safe to start the next upload here
    }
    failure:^(AFHTTPRequestOperation *operation, NSError *error)
    {
        // handle the error / retry before moving on
    }];
So you can put your code in the given handlers and chain the requests one after another.
You may also want to create concurrent NSOperations, put them on an NSOperationQueue and set proper dependencies between them, but that is surely more time-consuming.
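On the PHP side, the feedback the question asks for can simply be the response body: only answer after the INSERT has actually succeeded. A minimal sketch, assuming a PDO connection and a hypothetical packages table:

<?php
// receive.php - acknowledge the upload only after the row is in the database.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass',
    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

$payload = file_get_contents('php://input');

try {
    $stmt = $pdo->prepare('INSERT INTO packages (data) VALUES (?)'); // table name is an assumption
    $stmt->execute(array($payload));
    echo json_encode(array('status' => 'ok', 'id' => $pdo->lastInsertId()));
} catch (PDOException $e) {
    http_response_code(500);
    echo json_encode(array('status' => 'error'));
}

The success block on the iOS side then fires only when the server has actually written the row, so the next package can safely be sent from there.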
Maybe it's a stupid question, but anyway, here is my problem. I have multiple classes in my project.
At the beginning the constructor of the class Calculate($param1, $param2, ...) is called.
This Calculate class is called multiple times via jQuery events (click, change, ...) depending on which new form field is filled. The prices and values are calculated in the background by PHP and are shown on the website via AJAX (live while typing).
The connection between the AJAX and the Calculate class is a single file (jsonDataHandler). This file receives the POST values from the AJAX call and returns a JSON string for the website output. So every time I call this jsonDataHandler, a new Calculate object is being created with the updated values, but never the first created object. I am now experiencing multiple problems, as you may imagine.
How can I always access the same object, without creating a new one?
EDIT: for technical reasons, I cannot use sessions.
Here is the PHP application lifetime:
The browser sends an HTTP request to the web server.
The web server (for example Apache) accepts the request and launches your PHP application (in this case your jsonDataHandler file).
Your PHP application handles the request and generates the output.
Your PHP application terminates.
The web server sends the response generated by the PHP application to the browser.
So the application "dies" at the end of each request; you cannot create an object which will persist between requests.
Possible workarounds:
Persist the data on the server - use sessions or the database (as you said this is not an option for you)
Persist the data on the client - you still create your object for each request, but you keep additional information client-side to be able to restore the state of your object (see more on this below)
Use something like reactphp to have your application running persistently (this may also not be an option because you will need to use a different environment). A variant of this option: switch to another technology which doesn't re-launch the server-side application on each request (node.js, python+flask, etc.).
So, if you can't persist the data on the server, the relatively simple option is to persist it on the client.
But this will only work if you need to keep the state of your calculator for each individual client (as opposed to keeping the same state for all clients; in that case you do need to persist the data on the server).
The flow with client-side state can be this:
Client sends the first calculation request, for example param1=10
Your scripts responds with value=100
Client-side code stores both param1=10 and param1_value=100 into cookies or browser local storage
Client sends the next calculation, for example param2=20; this time the client-side code finds the previous results and sends everything together (param1=10&param1_value=100&param2=20)
On the server you can now re-create the whole sequence of calculations, so you get the same result as if you had a persistent Calculate object (a rough sketch follows below)
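A minimal sketch of what jsonDataHandler could look like with that flow; the constructor arguments and the restorePreviousResult()/getValue() methods are assumptions about the Calculate class, not taken from the question:

<?php
// jsonDataHandler.php - rebuild the Calculate object on every request
// from everything the client sends back (current params + previous results).
require_once 'Calculate.php';

$param1      = isset($_POST['param1']) ? (float) $_POST['param1'] : null;
$param2      = isset($_POST['param2']) ? (float) $_POST['param2'] : null;
$param1Value = isset($_POST['param1_value']) ? (float) $_POST['param1_value'] : null;

$calc = new Calculate($param1, $param2);

if ($param1Value !== null) {
    // Re-apply the result from the previous request, so the object ends up
    // in the same state as if it had survived between requests.
    $calc->restorePreviousResult($param1Value); // hypothetical method
}

header('Content-Type: application/json');
echo json_encode(array('value' => $calc->getValue())); // hypothetical method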
Maybe you should try to save the values of the parameters of the Calculate object in the database, and every time you make an AJAX call take the latest values from the DB.
I'm attempting to build a live-updated system in PHP/MySQL (and jQuery).
The question I have is whether the approach I'm using is good/correct or if I should look into another way of doing it.
Using jQuery and AJAX I have made:
setInterval(function() {
    $.get('check_status.php', function(response) {
        // handle the status returned by the server
    });
}, 4000);
In check_status.php I use memcache to check whether the result is 1 or 0:
$memcache = new Memcache;
$memcache->connect('127.0.0.1', 11211) or die("");

// build a per-user cache key
$key = md5($userid . "live_feed");
$result = $memcache->get($key);

if ($result == 1) {
    doSomething();
}
The idea is that user A does something, and that updates the memcache entry for user B.
The memcache entry is then checked every 4 seconds via jQuery, and that way I can do a live feed to user B.
The reason I use memcache is to limit the MySQL queries and thereby limit the load on the database.
So the question is: am I totally off here? Is there a better way of doing this, to reduce the server load?
Thank you
The server load on this is going to be rough because every user will be hitting your web server every 4 seconds. If I were you I would look into two other options.
Option 1, and the better of the two in my opinion, is WebSockets. WebSockets allow for persistent communication between the server and the client. The WebSocket server can be written in PHP or something else. You can then have all clients connect to the same WebSocket server and send data to all connected clients or to individual ones. On the client side this is done with JavaScript, with a Flash fallback for older browsers that don't support WebSockets.
Option 2 is a technique called long polling. Right now your clients have to hit the web server every 4 seconds no matter what. In long polling the client sends an AJAX request and the server doesn't send back a response until your memcache status changes. So basically you put the code you have now in a while loop, with a pause to prevent it from using up 100% of your server resources, and only run doSomething() if something has changed. Then on the client side, when a response arrives, you initiate a new AJAX call that waits for the next response. So while with what you currently have the user hits the server 15 times per minute regardless of activity, with this method the user only hits the server when there is actual activity. This normally saves you a ton of useless round trips (see the sketch below).
Look up both of these options and see which one would work better in your situation. Long polling is easier to implement but not nearly as efficient.
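A rough sketch of what the long-polling variant of check_status.php could look like, under the same memcache setup as in the question; the 25-second budget is an arbitrary assumption, kept below typical PHP/web-server timeouts:

<?php
// check_status.php (long-polling variant): hold the request open until
// the memcache flag changes instead of answering immediately.
$memcache = new Memcache;
$memcache->connect('127.0.0.1', 11211) or die("");

$key     = md5($userid . "live_feed"); // $userid assumed to come from the session
$timeout = 25;                         // seconds to wait before giving up
$start   = time();

while (time() - $start < $timeout) {
    if ($memcache->get($key) == 1) {
        echo json_encode(array('update' => true));
        exit;
    }
    usleep(500000); // sleep 0.5 s so the loop doesn't burn CPU
}

echo json_encode(array('update' => false)); // client simply reconnects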
I'd rather use a version id for the content. For example, say user A's content is retrieved and the version id is set to 1000. This is then saved in memcached with the key userA_1000.
If user B's action, or any other action, affects user A's content, the version id is then bumped to 1001.
The next time you check memcached, you will look for the key userA_1001, which does not exist, and that tells you the content is outdated. Hence you re-create the content, save it in memcached and send it back with the AJAX response.
The user's version key can also be saved in memcached in order not to do a DB query each time.
So
User A's key/val --- user_a_key => userA_1001
User A's content --- userA_1001 => [Content]
When a change has happened that affects user A's content, simply change the version and update the key of the content in memcached:
User A's key/val --- user_a_key => userA_1002
So the next time your AJAX request looks for the user's content in memcached, it will notice that there is nothing saved for the key userA_1002, which will prompt it to re-create the content. If it is found, simply respond saying there is no need to update anything.
Use well-designed class methods to handle when and how the content is updated and how the keys are invalidated.
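A small sketch of the read side of this versioned-key pattern; buildContentForUser() and the key names are assumptions used purely for illustration:

<?php
// One small key holds the current version, the content itself lives
// under a version-specific key.
$memcache = new Memcache;
$memcache->connect('127.0.0.1', 11211);

$contentKey = $memcache->get('user_a_key');                // e.g. "userA_1001"
$content    = $contentKey ? $memcache->get($contentKey) : false;

if ($content === false) {
    // Version was bumped (or nothing cached yet): rebuild and cache it.
    $content    = buildContentForUser('userA');            // hypothetical heavy work
    $contentKey = $contentKey ? $contentKey : 'userA_1000';
    $memcache->set($contentKey, $content);
    $memcache->set('user_a_key', $contentKey);
}

echo json_encode(array('content' => $content));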
I have a contact form which sends non-sensitive data (name, message, a few checkboxes and address fields) to an external CRM system through a cURL request.
The problem is, sometimes the receiving system is down for maintenance, and I have to store the incoming requests somewhere temporarily. Right now I'm doing this manually whenever I get a notice about upcoming maintenance, but this is not an option in the long run.
My question is: what is the best way to do automated storing and sending depending on the server status? I know it should depend on the cURL response code, and if it returns code 200, the script should check for any temporarily stored requests to be sent alongside the current one. But I'm not sure exactly what the best way to implement this is; for example, I wonder whether serialized requests inside a database table are better than building a JSON array and storing it inside a file which is later deleted.
How would you solve this? Any advice and tips you can give me are welcome.
Thanks.
I would use the following procedure to make sure the HTTP request is successfully sent to the target server:
Make sure that the server is up and running; you may want to use get_headers() to perform this check.
Depending on the server's response from the previous step you will perform one of two actions:
1) If the server's response was OK, then go ahead and fire your request.
2) If the server's response was NOT OK, then you may want to store the HTTP request in some serialized form in the database.
Run a cronjob that reads all pending requests from the DB and fires them; upon the server's response, if a request went through successfully, delete its record from the DB table (a sketch follows at the end of this answer).
When the cronjob runs, I would use HttpRequestPool to run all requests in parallel.
I would suggest using DB storage instead of JSON file storage because it will be easier to maintain, especially in the case where some of the HTTP requests are successfully processed by the server and some are not; in that case you want to remove the successful ones and keep the others, which is much easier with a database.
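A minimal sketch of such a retry cronjob, assuming a pending_requests table (id, payload) and a placeholder CRM URL; it uses plain curl_* calls rather than HttpRequestPool to keep the example short:

<?php
// cron_retry.php - re-send stored requests and delete the ones that succeed.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass',
    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

$rows = $pdo->query('SELECT id, payload FROM pending_requests')->fetchAll(PDO::FETCH_ASSOC);

foreach ($rows as $row) {
    $ch = curl_init('https://crm.example.com/endpoint'); // placeholder URL
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $row['payload'],
        CURLOPT_RETURNTRANSFER => true,
    ));
    curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($httpCode == 200) {
        // Delivered successfully: remove the stored request.
        $stmt = $pdo->prepare('DELETE FROM pending_requests WHERE id = ?');
        $stmt->execute(array($row['id']));
    }
}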