I don’t know much PHP and I want to see how/if the following algorithm can be implemented in PHP:
I am sending a string to a PHP script via HTTP GET. When the script receives the data, I PROCESS it and write the result to a txt file. The file already exists; I only update some lines. My problem is: what happens if the server fails while my data is being processed? How can I minimize the damage in case of server/script failure?
The processing of the data may take up to one second, so I think there is a high risk that the server will break down during it. Therefore, I am thinking of breaking it into two parts:
One script (let's call it RECEIVER) that receives the data from HTTP GET and stores it in a file (called Jobs.txt). It should finish really fast, as it only has to write 20-50 chars.
A second script (let's call it PROCESSOR) that checks this file every 2-3 seconds to see if new entries were added. If it finds new entries, it processes the data, saves the result, and finally deletes the entry from the Jobs file. If the server fails, then on restart I can launch my PROCESSOR again and resume the work from where it was interrupted.
How does that sound?
Problems: what happens if two users send GET requests to the RECEIVER at the same time? There will be a conflict over who writes to the file. The PROCESSOR may also contend for the file, since it wants to write to it as well. How can I fix this?
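For illustration, here is a minimal RECEIVER sketch that serializes concurrent writers with flock(); the ?data= parameter name is an assumption:

// receiver.php - append one job line to Jobs.txt
$job = isset($_GET['data']) ? trim($_GET['data']) : '';   // parameter name assumed
if ($job === '') {
    exit('no data');
}
$fh = fopen(__DIR__ . '/Jobs.txt', 'a');       // append mode: writes go at the end
if ($fh !== false && flock($fh, LOCK_EX)) {    // exclusive lock: one writer at a time
    fwrite($fh, $job . PHP_EOL);
    fflush($fh);
    flock($fh, LOCK_UN);
}
if ($fh !== false) {
    fclose($fh);
}
echo 'queued';

The PROCESSOR would take the same exclusive lock before rewriting Jobs.txt, so the two scripts never interleave their writes.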
To send some data to PHP, just use a URL:
http://www.mydomain.com/myscript.php?getparam1=something&getparam2=something_else
To read it in the PHP script (in this example, myscript.php):
$first_parameter = $_GET['getparam1'];
$second_parameter = $_GET['getparam2'];
// or use $_REQUEST instead of $_GET
or
$get_array = $_GET;
print_r($get_array);
or
$get_array = explode('&', $_SERVER['QUERY_STRING']);
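or, to parse the whole query string in one call (parse_str(), unlike the explode() line above, splits on '&' and also URL-decodes the values):

parse_str($_SERVER['QUERY_STRING'], $get_array);  // fills $get_array like $_GET
print_r($get_array);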
To write to a text file, use:
Error: this is the XXI century!
Consider using a database rather than a text file.
As Michael J.V. suggested, using a DB instead of writing to a file will solve some of the problems: "you won't get half-written data and end up with wingdings in your dataset". The script should be built on the database's ACID guarantees: "as you can see, what you need conforms to the ACID standards".
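For illustration, a minimal sketch of the database approach; the DSN, credentials, and the jobs table are assumptions:

// each job row is written atomically (with a transactional engine such as
// InnoDB): after a crash you either have the whole row or no row, never
// half-written data
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->beginTransaction();
try {
    $stmt = $pdo->prepare('INSERT INTO jobs (payload) VALUES (?)');
    $stmt->execute(array($_GET['getparam1']));
    $pdo->commit();      // durable once this returns
} catch (Exception $e) {
    $pdo->rollBack();    // a failure leaves the table untouched
}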
Caveat: I know this has the potential to be a ridiculously stupid question, but I had the thought and want to know the answer.
Aim: run an interactive session between browser and server with a single request, without AJAX or WebSockets etc.
Scenario: a PHP file on the server receives data by the POST method from a user. The content length in the header is 8MB, so the server keeps the connection open until it has received the full 8MB. But on the user side we are delivering this data very, very slowly (simulating a terrible connection, for example), so the server receives the data bits at a time. [Can this be passed to the PHP file to process bits at a time, or does it only get passed once all the data is received?] The script then does whatever it wants with those bits and delivers them to the browser in an echo loop. At certain time intervals, the user injects new data into the 'stream', surrounded by a continuous stream of padding data.
Is any of that possible? Or even with CGI? I expect this is not really possible, but what stops the process from timing out if someone does have a terrible connection and the POST data is huge?
As far as I know you could do this, but the PHP file you are calling with the POST data will only be invoked by the webserver once it has received all the data. Otherwise, say you were sending an image with POST and your PHP script moved that image from the temporary files directory to another directory before all the data had been received: you would end up with a corrupt image, nothing more, nothing less.
As long as the ini configurations have been altered correctly, I would think so. But it would be a great test to try out!
Problem:
I'm trying to see if I can have a back and forth between a program running on the server-side and JavaScript running on the client-side. All the outputs from the program are sent to JavaScript to be displayed to the user, and all the inputs from the user are sent from JavaScript to the program.
Having JavaScript receive the output and send the input is easily done with AJAX. The problem is that I do not know how to access an already running program on the server.
Attempt:
I tried to use PHP, but ran into some hurdles I couldn't leap over. Now, I can execute a program with PHP without any issue using proc_open. I can hook into the stdin and stdout streams, and I can get output from the program and send it input as well. But I can do this only once.
If the same PHP script is executed(?) again, I end up running the program again. So all I ever get out of multiple executions is whatever the program writes to stdout first, multiple times.
Right now, I use proc_open in the script which is supposed to only take care of input and output because I do not know how to access the stdout and stdin streams of an already running program. The way I see it, I need to maintain the state of my program in execution over multiple executions of the same PHP script; maintain the resource returned by proc_open and the pipes hooked into the stdin and stdout streams.
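A minimal sketch of that one-shot setup (the program path is hypothetical):

$descriptors = array(
    0 => array('pipe', 'r'),   // stdin: this script writes to the program
    1 => array('pipe', 'w'),   // stdout: this script reads from the program
);
$process = proc_open('/usr/local/bin/myprogram', $descriptors, $pipes);
if (is_resource($process)) {
    fwrite($pipes[0], "input for the program\n");
    $output = fgets($pipes[1]);   // works, but only within this one request
    fclose($pipes[0]);
    fclose($pipes[1]);
    proc_close($process);         // the program dies when the request ends
}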
$_SESSION does NOT work. I cannot use it to maintain resources.
Is there a way to have such a back and forth with a program? Any help is really appreciated.
This sounds like a job for websockets
Try something like http://socketo.me/ or http://code.google.com/p/phpwebsocket/
I've always used Node for this type of thing, but from the above two links and a few others, it looks like there are options for PHP as well.
There may be a more efficient way to do it, but you could get the program to write its output to a text file and read the contents of that text file in with PHP. That way you'd have access to the full stream of data from the running program. There are issues with managing the size of the file and handling requests from multiple clients, but it's a simple approach that might be good enough for your needs.
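A minimal sketch of that approach, assuming the program appends to /tmp/output.txt and the client passes back how much it has already read:

$file   = '/tmp/output.txt';                           // path is an assumption
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
clearstatcache();
$size = filesize($file);
if ($size > $offset) {
    // send only the part of the stream the client has not seen yet
    echo file_get_contents($file, false, null, $offset, $size - $offset);
}
// the client remembers $size and sends it back as ?offset=... next time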
You are running the same program again because that's the way PHP works. In your case the client makes an HTTP request and runs the script; a second request will run the script again. I'm not sure continuous interaction is possible, so I would suggest making your script able to handle discrete transactions.
In order to tie together different steps of the same "interaction", you will have to save data about the previous ones in a database. Basically, you need to give a unique hash to every client to identify them in your script; it will then know who is making the request and will be able to tell consecutive requests from one user apart from requests from different users.
If your script is heavy and runs for a long time, consider making two scripts: one heavy and one for interaction (AJAX will query the second one). In this case, the second script will put data into the database and the heavy script will simply fetch it from there.
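A minimal sketch of that interaction script; the table and column names are made up:

// AJAX queries this; the heavy script fills the interactions table
$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$stmt = $pdo->prepare(
    'SELECT output FROM interactions WHERE client_hash = ? ORDER BY id DESC LIMIT 1'
);
$stmt->execute(array($_GET['hash']));   // the unique hash identifying this client
echo $stmt->fetchColumn();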
I used cURL to authenticate and log in via POST to a CMS.
Then another POST to ask the CMS to generate a new code number (e.g. a voucher code) and grab the CSV from a URL that contains about 70 lines per page.
I can explode each line and take the last line to get the newly generated code number.
My question is: if many requests are created by many customers, is it possible to accidentally read the same voucher code? Although cURL gets the CSV file pretty fast, should I make sure that one request completes before another starts, like an SQL transaction?
I read somewhere that PHP does not run in parallel, but I am a beginner at all this, and someone asked me whether my script could cause that with multiple requests. Thanks in advance.
The only way you will get the same voucher code is if the remote server generates the same code twice. You are correct in saying that a single PHP script does not run in parallel, but concurrency has nothing to do with your specific case: there is no way the two HTTP responses can get mixed up with each other, because they are sent back to you in different TCP streams. The underlying TCP/IP stack of the OS will prevent collisions.
Regardless of this, you should check for collisions after you have received the data. For example, if you are inserting it into an SQL database, you can create a unique index on the field that holds the code, and the database will prevent you from inserting duplicate rows.
As a side note, you say you can explode each line, which is true, but you may wish to have a look at fgetcsv() and str_getcsv(), which will parse the line for you and take into account escape sequences and all sorts of edge cases that your code will not account for. If you want to perform multiple cURL requests at once, you may also want to look at curl_multi_exec(), which will allow you to execute several requests at once and speed up the execution of your script.
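A minimal sketch of both suggestions together, assuming $csv holds the body cURL returned and the codes table has a UNIQUE index on its code column:

$lines    = explode("\n", trim($csv));
$lastLine = end($lines);
$fields   = str_getcsv($lastLine);      // handles quoting and escapes for you
$code     = end($fields);               // assumption: the code is the last field

$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
try {
    $stmt = $pdo->prepare('INSERT INTO codes (code) VALUES (?)');
    $stmt->execute(array($code));
} catch (PDOException $e) {
    // duplicate key: another request already claimed this code
}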
I have a script that takes a while to process: it has to take stuff from the DB and transfer data to other servers.
At the moment I have it run immediately after the form is submitted, and it takes as long as the transfer takes before reporting that the data has been sent.
I was wondering: is there any way to make it so it does not do the processing in front of the client?
I don't want a cron job, as the data needs to be sent at the same time, just without the client's page load waiting on it.
A couple of options:
Exec the PHP script that does the DB work from your webpage, but do not wait for the output of the exec (a minimal sketch follows this list). Be VERY careful with this: don't blindly accept any input parameters from the user without sanitising them. I only mention this as an option; I would never do it myself.
Have your DB-updating script running all the time in the background, polling for something that triggers its update. For example, it could check whether /tmp/run.txt exists and start the DB update if it does. You can then create run.txt from your webpage and return without waiting for a response.
Create your DB update script as a daemon.
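A minimal sketch of the first option; worker.php is hypothetical, and any user data should reach it through the database rather than shell arguments:

// the output redirection and trailing "&" are what stop exec() from
// waiting for the worker to finish
exec('php /path/to/worker.php > /dev/null 2>&1 &');
echo 'Data accepted; the transfer is running in the background.';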
Here are some things you can take a look at:
How much data are you transferring? And by transfer, do you mean just copying the data across, or inserting the data from your DB into the destination server and then deleting it from your source?
You can try analyzing your SQL to see if there's any room for optimization.
Then you can check your PHP code as well to see if there's anything, however slight, that might help perform the necessary tasks faster.
Where are the source and destination database servers located (on the network, and geographically, if you happen to know), and how fast can the source and destination servers communicate over the network?
I am working on a tool in PHP that processes a lot of data and takes a while to finish. I would like to keep the user updated on what is going on and which task is currently being processed.
What is, in your opinion, the best way to do it? I've got some ideas but can't decide on the most effective one:
The old way: execute a small part of the script and display a page to the user with a Meta Redirect or a JavaScript timer to send a request to continue the script (like /script.php?step=2).
Sending AJAX requests constantly to read a server file that PHP keeps updating through fwrite().
Same as above but PHP updates a field in the database instead of saving a file.
Does any of those sound good? Any ideas?
Thanks!
Rather than writing to a static file you fetch with AJAX or to an extra database field, why not have another PHP script that simply returns a completion percentage for the specified task? Your page can then update the progress via a very lightweight AJAX request to said PHP script.
As for implementing this "progress" script, I could offer more advice if I had more insight as to what you mean by "processes a lot of data". If you are writing to a file, your "progress" script could simply check the file size and return the percentage complete. For more complex tasks, you might assign benchmarks to particular processes and return an estimated percentage complete based on which process has completed last or is currently running.
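For example, a minimal sketch of such a progress script, assuming the task writes to a file whose final size is known:

// progress.php - returns completion as JSON; path and size are assumptions
clearstatcache();
$current = filesize('/tmp/output.dat');
$total   = 8388608;   // the expected final size in bytes
header('Content-Type: application/json');
echo json_encode(array('percent' => min(100, round($current / $total * 100))));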
UPDATE
This is one suggested method to "check the progress" of an active script which is simply waiting for a response from a request. I have a data mining application that I use a similar method for.
In the script that makes the request you're waiting for (the script you want to check the progress of), you can store a progress variable for the process, either in a file or a database. (I use a database, as I have hundreds of processes running at any time which all need to track their progress, and I have another script that allows me to monitor the progress of these processes.) When the process begins, set this to 1. You can easily select an arbitrary number of 'checkpoints' the script will pass and calculate the percentage given the current checkpoint.

For a large request, however, you might be more interested in knowing the approximate percentage of the request that has completed. One possible solution would be to know the size of the returned content and set your status variable according to the percentage received at any moment: if you receive the request data in a loop, each iteration could update the status. Or, if you are downloading to a flat file, you could poll the size of the file.

This could be done less accurately with time (rather than file size) if you know the approximate time the request should take to complete and simply compare against the script's current execution time. Obviously neither of these is a perfect solution, but I hope they'll give you some insight into your options.
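A minimal sketch of the checkpoint variant; the status file location is an assumption (swap in a database row if, as above, you track many processes):

function set_progress($processId, $percent) {
    file_put_contents('/tmp/progress_' . (int) $processId . '.txt', (int) $percent);
}

$id = getmypid();       // any id unique to this run will do
set_progress($id, 1);   // the process has started
// ... make the request and receive the data ...
set_progress($id, 60);  // a checkpoint has been passed
// ... process and store the results ...
set_progress($id, 100); // done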
I suggest using the AJAX method, but not using a file or a database. You could probably use session values or something like that; that way you don't have to create a connection or open a file to do anything.
In the past, I've just written messages out to the page and used flush() to flush the output buffer. Very simple, but it may not work correctly on every web server or with every web browser (as they may do their own internal buffering).
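A minimal sketch of that technique; the task list and the sleep() are hypothetical stand-ins for the real work:

$tasks = array('first chunk', 'second chunk');   // hypothetical work items
while (ob_get_level() > 0) {
    ob_end_flush();                // make sure PHP's own buffering is off
}
foreach ($tasks as $i => $task) {
    sleep(1);                      // stand-in for the real work on $task
    echo 'Finished step ' . ($i + 1) . "<br>\n";
    flush();                       // push the message to the browser now
}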
Personally, I like your second option the best. Should be reliable and fairly simple to implement.
I like option 2 - using AJAX to read a status file that PHP writes to periodically. This opens up a lot of different presentation options. If you write a JSON object to the file, you can easily parse it and display things like a progress bar, status messages, etc...
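For instance, the worker could overwrite one small JSON status file at each step (path and fields are assumptions), and the AJAX poller just fetches and parses it:

file_put_contents('/tmp/status.json', json_encode(array(
    'step'    => 3,
    'total'   => 10,
    'message' => 'Transferring data to the remote server...',
)));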
A 'dirty' but quick-and-easy approach is to just echo out the status as the script runs along. So long as you don't have output buffering on, the browser will render the HTML as it receives it from the server (I know WordPress uses this technique for its auto-upgrade).
But yes, a 'better' approach would be AJAX, though I wouldn't say there's anything wrong with 'breaking it up' using redirects.
Why not incorporate 1 & 2, where AJAX sends a request to script.php?step=1, checks response, writes to the browser, then goes back for more at script.php?step=2 and so on?
If you can do away with IE, then use server-sent events. It's the ideal solution.
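A minimal server-sent events sketch; the browser side would use new EventSource('progress.php'), and the loop body here is a placeholder for real work:

header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
for ($percent = 0; $percent <= 100; $percent += 20) {
    echo "data: {$percent}\n\n";   // every SSE message ends with a blank line
    flush();
    sleep(1);                      // stand-in for the actual processing
}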