I use cURL to authenticate and log in to a CMS via POST.
Then I send another POST to ask the CMS to generate a new code number (e.g. a voucher code) and grab the CSV from a URL that contains about 70 lines per page.
I can explode each line and take the last line to get the code number that was just generated.
My question: if many requests are created by many customers, is it possible to accidentally read the same voucher code? Although cURL fetches the CSV pretty fast, should I make sure one request completes before another starts, like an SQL transaction?
I read somewhere that PHP does not run in parallel, but I am a beginner at all of this, and someone asked me whether my script could cause that problem under multiple requests. Thanks in advance.
The only way you will get the same voucher code is if the remote server generates the same code twice. You are correct in saying that PHP does not run your code in parallel, but concurrency has nothing to do with your specific case: there is no way the two HTTP responses can get mixed up with each other, because they are sent back to you in separate TCP streams. The underlying TCP/IP stack of the OS will prevent collisions.
Regardless of this, you should be able to check for collisions after you have received the data. For example, if you are inserting it into an SQL database, you can create a unique index on the field that holds the code, and the database will prevent you from inserting duplicated rows.
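A minimal sketch of that approach with PDO (the vouchers table, its code column, and the $code variable are assumptions for illustration, not anything from the question):

$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// One-time setup: the unique index makes the database itself reject duplicates.
$pdo->exec('CREATE UNIQUE INDEX idx_voucher_code ON vouchers (code)');

try {
    $stmt = $pdo->prepare('INSERT INTO vouchers (code) VALUES (?)');
    $stmt->execute([$code]);
} catch (PDOException $e) {
    if ($e->getCode() == '23000') {   // SQLSTATE 23000: integrity constraint violation
        error_log('Duplicate voucher code: ' . $code);   // handle the collision here
    } else {
        throw $e;
    }
}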
As a side note, you say you can explode each line, which is true, but you may wish to have a look at fgetcsv() and str_getcsv(), which will parse the line for you and take into account escape sequences and all sorts of edge cases that your code will not account for. If you want to perform multiple cURL requests at once, you may also want to have a look at curl_multi_exec(), which will allow you to execute several requests at once and speed up the execution of your script.
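For instance, a minimal sketch of reading the last CSV line that way, assuming $csv holds the body returned by curl_exec() and that the code sits in the first column:

$lines = array_filter(array_map('trim', explode("\n", $csv)));
$lastLine = end($lines);

// str_getcsv() honours quoting and escaping that a plain explode(',') would miss.
$fields = str_getcsv($lastLine);
$newCode = $fields[0];   // assumption: the voucher code is the first column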
Related
I'm trying to import Excel data into a MySQL database using PHP and Ajax. When the user uploads an Excel file, jQuery fetches each row in a loop and sends it to PHP via Ajax, but there are 500+ rows. Because of that, PHP runs the queries simultaneously, causing the database error "already has more than 'max_user_connections' active connections". Some of the queries work, but some do not.
jQuery fetches each row in a loop and sends it to PHP via Ajax
...this is a design flaw. If you try to generate 500 AJAX requests in a short space of time, it's inevitable, due to their asynchronous nature, that a lot of them will overlap and overload the server and database... but I think you've already realised that from your description.
So you didn't really ask a question, but are you just looking for alternative implementation options?
It would make more sense to either
- just upload the whole file as-is and let the server-side code process it, or
- if you must read it on the client side, at least send all the rows in one AJAX request (e.g. as a JSON array or something); see the sketch below.
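For the second option, a rough server-side sketch (the table, its columns, and the 'rows' POST field are assumptions; the client would send every row in a single JSON array):

// One request, one connection, one prepared statement reused for every row.
$rows = json_decode($_POST['rows'], true);

$pdo = new PDO('mysql:host=localhost;dbname=import', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare('INSERT INTO products (name, price) VALUES (?, ?)');

$pdo->beginTransaction();
foreach ($rows as $row) {
    $stmt->execute([$row['name'], $row['price']]);
}
$pdo->commit();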
We have a SPA with heavy traffic, and occasionally duplicate rows are inserted in several parts of the application.
For example, user registration. Normally the validation mechanism does the trick by checking whether the email address already exists; however, I was able to reproduce the problem by dispatching the same request twice using axios, resulting in a duplicated user in the database.
I initially thought the second request should throw a validation error, but apparently it is too quick and checks for the user before the first request has stored it.
So I put a 500 ms delay between those requests, and it worked: the second request threw a validation error.
My question is: what are the techniques to prevent double inserts IF TWO REQUESTS ARE ALREADY DISPATCHED WITHIN THE SAME FRACTION OF A SECOND?
Of course we have disabled the submit button after the first request (since the beginning), yet people somehow manage to dispatch the request twice.
One option I've utilized in the past is database locking. I'm a bit rusty on this, but in your case:
Request a WRITE LOCK on the table.
Run a SELECT on the table to find the user.
If no user is found, run the INSERT.
Release the WRITE LOCK.
This post on DB locking should give you a better idea of which locks have what effect. Note: some database systems may implement locks differently.
Edit: I should also note that there will be additional performance issues using database locks.
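A rough sketch of that sequence with MySQL table locks (the users table, its columns, and the $email / $passwordHash variables are assumptions, not anything from the question):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Only one request at a time can hold the WRITE lock, so the check and the
// insert below cannot interleave with another request's check and insert.
$pdo->exec('LOCK TABLES users WRITE');
try {
    $stmt = $pdo->prepare('SELECT id FROM users WHERE email = ?');
    $stmt->execute([$email]);

    if ($stmt->fetch() === false) {
        $insert = $pdo->prepare('INSERT INTO users (email, password) VALUES (?, ?)');
        $insert->execute([$email, $passwordHash]);
    }
} finally {
    $pdo->exec('UNLOCK TABLES');
}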
I'm using LAMP and have full access to server configuration and setup.
I am very confused about which is the best method for simply logging some data.
I want to log things like this:
Analytics (server side with PHP) of every visitor.
When creating a new user, store the ID number so an email and SMS message can be sent later by a cron task (this avoids sending the email/SMS during the user's request).
Number of page views of certain 'articles'. Increment once per visit to that page.
As you can see, they are all simple insert/append actions that can be processed later by a cron task.
The application needs to be scalable for the future.
These are my options (and what I have learned):
(1) Database (MySQL). People say don't use this for logging data like above.
(2) Use file_put_contents() WITHOUT a file lock. I'm told this can cause data corruption.
(3) Use file_put_contents() WITH a file lock, but I believe this either results in missed data (the write returns false and the data isn't added if the lock is already held) -OR- it results in PHP having to wait for the lock to be released. I don't think MySQL has to wait to do multiple inserts.
Which is the best option? Does it make a difference if I'm handling tens of requests per second compared to thousands of requests per second, or would I use the same option?
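For reference, a minimal sketch of option (3), appending one log line per event with an exclusive lock (the file path, log format, and $userId are made up for illustration):

// FILE_APPEND adds to the end of the file; LOCK_EX makes concurrent requests
// wait for the lock instead of interleaving their writes mid-line.
$line = date('c') . "\t" . $userId . "\t" . 'page_view' . "\n";
file_put_contents('/var/log/myapp/events.log', $line, FILE_APPEND | LOCK_EX);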
So I was wondering how, or whether, PHP has some sort of mutual exclusion for file reading and writing.
Here's how I plan on using it:
The site I'm working on uses a payment service that requires leaving the server, which makes form submissions difficult: the form does not get submitted to the database until after returning from the payment service. Information CAN be passed through the payment service and regurgitated on the other end; however, only a minimal amount of information can be passed.
My idea of a solution:
Before a registration is passed to the payment service, build and write the SQL statements to a file, with each group of statements for one registration separated by some token.
On return, find the entry based on the information sent through the payment service, execute the statements, and remove that registration's block from the file.
Rephrasing the question:
-So the question is: in this scenario would I need mutual exclusion on the file, and if so, how would I achieve it? Could the file be locked from multiple languages? (The payment service requires returning to a CGI/Perl script, although I could have it include a PHP script that does the actual processing.)
-How would I run through and execute the SQL statements (preferably in Perl)?
-Does my solution even seem like a good one?
Thanks,
Travis
Both PHP and Perl support flock().
However, a better way to do it would be to use the database. Your DB table could have a processed column that indicates whether the payment has been processed. When you send the request to the payment service, add a record with processed = 0. When the registration is returned, update that record to processed = 1.
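A minimal sketch of that processed-column approach from the PHP side (the registrations table, its columns, and the $reference token are assumptions; the update could just as well be issued from the Perl script via DBI):

$pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Before redirecting to the payment service: store the registration, unprocessed.
$stmt = $pdo->prepare(
    'INSERT INTO registrations (reference, name, email, processed) VALUES (?, ?, ?, 0)'
);
$stmt->execute([$reference, $name, $email]);

// After the payment service returns: mark the matching registration as processed.
$stmt = $pdo->prepare(
    'UPDATE registrations SET processed = 1 WHERE reference = ? AND processed = 0'
);
$stmt->execute([$reference]);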
I don’t know much PHP and I want to see how/if the following algorithm can be implemented in PHP:
I am sending a string to a PHP script via HTTP GET. When the script receives the data, I PROCESS it and write the result to a text file. The file already exists; I only update some lines. My problem is: what happens if the server fails while my data is being processed? How can I minimize the damage in case of server/script failure?
Processing the data may take up to one second, so I think there is a high risk that the server will break down during it. Therefore, I am thinking of breaking it into two parts:
One script (let's call it RECEIVER) that receives the data from HTTP GET and stores it in a file (called Jobs.txt). It should finish really fast, as it only has to write 20-50 chars.
A second script (let's call it PROCESSOR) that checks this file every 2-3 seconds to see if new entries were added. If it finds new entries, it processes the data, saves the result and finally deletes the entry from the Jobs file. If the server fails, then on restart I can start my PROCESSOR and resume the work from where it was interrupted.
How does that sound?
Problems: what happens if two users send GET requests to the RECEIVER at the same time? There will be a conflict over who writes to the file. Also, the PROCESSOR may conflict over that file, since it also wants to write to it. How can I fix this?
To send some data to PHP, just use a URL:
http://www.mydomain.com/myscript.php?getparam1=something&getparam2=something_else
To read it in the PHP script (in this example myscript.php):
$first_parameter = $_GET['getparam1'];
$second_parameter = $_GET['getparam2'];
// or use $_REQUEST instead of $_GET
or
$get_array = $_GET;
print_r($get_array);
or
$get_array = explode('&', $_SERVER['QUERY_STRING']);
To write data to a text file, use... error: this is the XXI century!
Consider using a database rather than a text file.
As Michael J.V. suggested, using a DB instead of writing to a file will solve some of the problems: "you won't get half-written data and end up with wingdings in your dataset". The requirements here also map onto the ACID guarantees a database already provides: "as you can see, what you need conforms to the ACID standards".
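A rough sketch of the RECEIVER/PROCESSOR split on top of a database instead of Jobs.txt (the jobs table is an assumption, and process() stands in for the one-second processing step). Because each job is its own row, concurrent RECEIVER requests no longer fight over a single file, and anything still marked done = 0 after a crash is simply picked up again on the next PROCESSOR run:

$pdo = new PDO('mysql:host=localhost;dbname=queue', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// RECEIVER: store the incoming job and return immediately.
$stmt = $pdo->prepare('INSERT INTO jobs (payload, done) VALUES (?, 0)');
$stmt->execute([$_GET['getparam1']]);

// PROCESSOR (run every few seconds from cron or a loop): handle unfinished jobs.
$jobs = $pdo->query('SELECT id, payload FROM jobs WHERE done = 0')->fetchAll();
foreach ($jobs as $job) {
    process($job['payload']);                              // the slow part
    $mark = $pdo->prepare('UPDATE jobs SET done = 1 WHERE id = ?');
    $mark->execute([$job['id']]);
}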