I have a troubling problem at hand. I am using a WebSocket server that runs in PHP. The issue is that I need a setInterval/setTimeout function similar to JavaScript's, but within my PHP socket server.
I do not have the time or resources to convert my entire project over to Node.js/JavaScript; it would take forever. I love PHP so much that I do not want to make the switch. Everything else works fine, and I feel it's not worth rewriting everything just because I cannot use a setInterval-like function inside PHP.
Since the PHP socket server runs through the shell, I can approximate a setInterval-type function with a loop:
http://pastebin.com/nzcvXRph
This code does work as intended, but it seems like overkill resource-wise, and I suspect that while loop will eat a lot of CPU.
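For reference, the loop is roughly this shape (a hedged sketch; the actual pastebin code may differ):

$last = microtime(true);
while (true) {
    // ... socket handling goes here ...
    if ((microtime(true) - $last) >= 0.5) {
        $last = microtime(true);
        // work that should run every 500 ms
    }
    // no sleep anywhere, so the loop spins as fast as the CPU allows
}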
Is there any way I can recompile PHP from source and include a "while2" loop that iterates only every 500 milliseconds instead of instantly?
Recompiling PHP from source just to add a custom loop construct is not a realistic option.
If you want to delay each iteration of the loop, you could use the sleep function.
For example, if I want to print 10 numbers, one every 2 seconds, the code below does the job.
for ($i = 1; $i <= 10; $i++) {
    print($i); // print the current number
    sleep(2);  // wait 2 seconds before the next one
}
Check the PHP docs here.
EDIT
Following up on what I mentioned in the replies: if you want each user to have their own instance of the runtime, then threads would be an option. There are very few examples of multi-threaded applications in PHP, so I would recommend checking out some examples in Java; it shouldn't be hard to understand. Here is a good video tutorial.
For PHP
php.net/threads
Check out the contributor notes; sometimes people write good examples there.
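For a rough illustration, a minimal pthreads sketch might look like this (assuming the pthreads extension is installed, which requires a thread-safe PHP build):

class TickerThread extends Thread
{
    public function run()
    {
        for ($i = 0; $i < 10; $i++) {
            echo "tick $i\n";
            usleep(500000); // 500 ms between ticks
        }
    }
}

$t = new TickerThread();
$t->start(); // executes run() in a new thread
// the main thread is free to do other work here
$t->join();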
Related
We have a query that takes about 10 minutes to complete, but it seems PHP doesn't want to wait around for it. If the user does wait, PHP responds with the actual query in plain text. No errors or anything like that are reported.
We thought it was an Apache thing and tried to reconfigure it; that didn't work, so we swapped over to nginx, which gives the same result.
I call set_time_limit(0); at the very beginning of the script.
What is it I'm missing?
EDIT:
I'd like to clarify that PHP isn't doing any computation. It's simply waiting for the result of a query from the database and then formatting the result into an Excel file.
Try setting "max_execution_time" with
ini_set("max_execution_time", 0);
but PHP was not built for this kind of work; try a different language for it (Python, Node.js?).
Have you tried optimizing the query, or running it in several steps? You could generate some tables with partial results, then re-run the query against those.
Unfortunately, PHP is not built for heavy processing. I also hit this problem in some of my own code.
My suggestions are:
Use a micro-service architecture (Node.js, Java, etc.)
Use queuing and split the processing into smaller sections (see the sketch after the links below)
But in general, it is better to entrust heavy processing to another programming language, like Java or Python.
You can read more about this on these sites:
https://www.reddit.com/r/PHP/comments/2pyuy0/heavy_data_processing_in_php/
Best way to manage long-running php script?
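As a minimal sketch of the "smaller sections" idea (the table and column names are made up for illustration):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$batchSize = 10000;
$offset = 0;
do {
    $stmt = $pdo->prepare('SELECT id, payload FROM big_table ORDER BY id LIMIT :limit OFFSET :offset');
    $stmt->bindValue(':limit', $batchSize, PDO::PARAM_INT);
    $stmt->bindValue(':offset', $offset, PDO::PARAM_INT);
    $stmt->execute();
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    foreach ($rows as $row) {
        // process/aggregate one row at a time
    }
    $offset += $batchSize;
} while (count($rows) === $batchSize); // a short final batch means we're done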
I'm building a feature of a site that will generate a 500+ page booklet as a PDF (using TCPDF). The layout is very simple, but just due to the number of records I think it qualifies as a "long-running PHP process". This will only need to be done a handful of times per year, and if I could just have it run in the background and email the admin when done, that would be perfect. I considered cron, but this is a user-triggered feature.
What can I do to keep my PDF rendering for as long as it takes? I am "good" with PHP but not so much with *nix. Even a tutorial link would be helpful.
Honestly, from a scalability perspective, you should avoid doing this in the web request entirely. I'd use a database table to "schedule" the job with its parameters and have a separate script continuously checking that table. Then use JavaScript to poll your application until the file is "ready", and once it is, let the JavaScript pull the file down to the client.
Otherwise it will be incredibly hard to maintain and troubleshoot this process while you're wondering why your web server is suddenly so slow. Apache doesn't make it easy to determine which process is eating up which CPU.
Also, by using a database you can do things like limit the number of concurrent workers, or even speed up rendering by letting multiple processes each render a PDF page and then having yet another process reassemble them.
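A minimal sketch of that polling worker (the pdf_jobs table and the generate_pdf() helper are hypothetical):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
while (true) {
    $job = $pdo->query("SELECT id, params FROM pdf_jobs WHERE status = 'pending' LIMIT 1")
               ->fetch(PDO::FETCH_ASSOC);
    if ($job) {
        $pdo->exec("UPDATE pdf_jobs SET status = 'running' WHERE id = " . (int)$job['id']);
        generate_pdf(json_decode($job['params'], true)); // hypothetical renderer
        $pdo->exec("UPDATE pdf_jobs SET status = 'done' WHERE id = " . (int)$job['id']);
    } else {
        sleep(5); // nothing pending; poll again shortly
    }
}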
Good luck!
What you need is to change the maximum execution time allowed for PHP scripts. You can do that by several means: from the script itself (prefer this if it works) or by changing php.ini.
BEWARE: changing the execution time might seriously lower the performance of your server. A script is allowed to run only for a certain time (30 seconds by default) before it is terminated by the parser; this helps prevent poorly written scripts from tying up the server. You should know exactly what you are doing before you change this.
You can find some more info about:
setting max-execution-time in php.ini here http://www.php.net/manual/en/info.configuration.php#ini.max-execution-time
limiting the maximum execution time by set_time_limit() here http://php.net/manual/en/function.set-time-limit.php
PS: This should work if you use PHP itself to generate the PDF. It will not work if the heavy lifting happens outside the script (called by exec(), system(), and similar).
This question is already answered, but prompted by other questions and answers here, here is what I did, and it worked great (I did the same thing using pdftk, but on a smaller scale):
I put the following code in an iframe:
set_time_limit(0);          // ignore PHP's execution time limit
//ignore_user_abort(true);  // optional: keep going even if the user pulls the plug
while (ob_get_level()) {    // remove all output buffers
    ob_end_clean();
}
ob_implicit_flush(true);    // flush output to the browser immediately
This avoided the page load timeout. You might want to put a countdown or progress bar on the parent page. I originally had the iframe issuing progress updates back to the parent, but browser updates broke that.
I have a PHP script that runs as a background process. The script simply uses fopen to read from the Twitter Streaming API; essentially it is an HTTP connection that never ends. Unfortunately I can't post the script because it is proprietary. On Ubuntu the script runs normally and uses very little CPU. However, on BSD the exact same script always uses nearly 100% CPU. The script works just fine on both machines. Can anyone think of something that might point me in the right direction? This is the first PHP script I have written to run continuously in the background.
The script is an infinite loop: it reads data out and writes it to a JSON file every minute, and it writes to a MySQL database whenever a reconnect happens, which is usually after days of running. The script does nothing else and is not very long. I have little experience with BSD or with writing PHP scripts that run infinite loops. Thanks in advance for any suggestions; let me know if this belongs on another StackExchange. I will try to answer any questions as quickly as possible, because I realize the question is very vague.
Without seeing the script, it is very difficult to give you a definitive answer; however, what you need to do is ensure that your script is waiting for data appropriately. What you should absolutely, definitely not do is call stream_set_timeout($fp, 0); or stream_set_blocking($fp, 0); on your file pointer.
The basic structure of a script to do something like this that should avoid racing would be something like this:
// Open the file pointer and set blocking mode
$fp = fopen('http://www.domain.tld/somepage.file', 'r');
stream_set_timeout($fp, 1);
stream_set_blocking($fp, 1);

while (!feof($fp)) { // this should loop until the server closes the connection
    // This line should be pretty much the first line in the loop.
    // It will try to fetch a line from $fp, blocking for up to 1 second
    // or until one is available. This should help avoid racing.
    // You can also use fread() in the same way if necessary.
    if (($str = fgets($fp)) === false) continue;

    // rest of app logic goes here
}
You can use sleep()/usleep() to avoid racing as well, but the better approach is to rely on a blocking function call to do your blocking. If it works on one OS but not on another, try setting the blocking modes/behaviour explicitly, as above.
If you can't get this to work with a call to fopen() passing an HTTP URL, it may be a problem with the HTTP wrapper implementation in PHP. To work around this, you could use fsockopen() and handle the request yourself. This is not too difficult, especially if you only need to send a single request and read a constant stream response.
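A minimal sketch of that approach (the host and path are placeholders):

$fp = fsockopen('www.domain.tld', 80, $errno, $errstr, 30);
if ($fp === false) {
    die("Connection failed: $errstr ($errno)\n");
}

// send a single GET request by hand
fwrite($fp, "GET /somepage.file HTTP/1.1\r\nHost: www.domain.tld\r\nConnection: close\r\n\r\n");

stream_set_timeout($fp, 1);
stream_set_blocking($fp, 1);

while (!feof($fp)) { // read the streaming response, blocking as before
    if (($line = fgets($fp)) === false) continue;
    // handle $line (headers first, then the never-ending body)
}
fclose($fp);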
It sounds to me like one of your functions is blocking briefly on Linux, but not BSD. Without seeing your script it is hard to get specific, but one thing I would suggest is to add a usleep() before the next loop iteration:
usleep(100000); //Sleep for 100ms
You don't need a long sleep... just enough so that you're not using 100% CPU.
Edit: Since you mentioned you don't have a good way to run this in the background right now, I suggest checking out this tutorial for "daemonizing" your script. Included is some handy code for doing this. It can even make a file in init.d for you.
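In case the link goes stale, the core of the technique is the usual fork-and-detach dance, roughly like this (requires the pcntl and posix extensions, CLI only):

$pid = pcntl_fork();
if ($pid === -1) {
    die("fork failed\n");
} elseif ($pid > 0) {
    exit(0); // parent exits; the child keeps running in the background
}
posix_setsid(); // detach the child from the controlling terminal
chdir('/');     // don't keep a working directory pinned
// run the normal read loop from here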
What does the code that does the actual reading look like? Do you just hammer the socket until you get something?
One really effective way to deal with this is to use the libevent extension, but that's not for the faint of heart.
On my website, various PHP scripts from various programmers (from whom I have bought project scripts) are running. Some use a session (session_start, etc.).
Some use external included PHP files, do their math in there, and return or echo some things. Some run only when asked to, like the search script.
Is there an easy way for me to temporarily monitor all the various scripts' delays in milliseconds, so that I can see what's going on under the water?
I once saw a programmer working on something, and below the page there were long lists of entries with various millisecond numbers, etc.
Q1. Is there a default PHP function for this? How do I call/toggle it?
Q2. What are the various methods with which such measurements are made?
Q3. How reliable are they? Are those milliseconds theoretical, or actual real-world results?
Thanks for your insight!
Sam
No default method that I can think of, but it's easy. At the start of your script, simply place this:
$s = microtime(true);
and at the end
$e = microtime(true);
echo round($e - $s, 2) . " Sec";
Normally you would leave the second parameter of round() as it is, but if you find that your script reports the time as '0 Sec', increase the number until you get an answer. Check this for more.
If you're running an Apache web server, then you should have the Apache benchmarking tool (ab), which can give some very accurate information about script timings and can even simulate numbers of concurrent users.
From a web browser, the Firebug extension of Firefox can also be useful as a tool for seeing how long your own requests take.
Neither of these methods is purely a timer for the PHP code, though.
The easiest/fastest way is to install a debugging extension that supports profiling, like XDebug. You can then run a profiling tool (e.g.: KCachegrind) to profile your scripts, graph the results, and figure out what uses the most memory, execution time, etc.
It also provides various other functionality, such as stack traces.
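For example, profiling can be switched on with Xdebug 2-style php.ini settings roughly like these (Xdebug 3 uses xdebug.mode=profile instead); the resulting cachegrind output files can then be opened in KCachegrind:

; hedged example of Xdebug 2-era profiler settings
zend_extension=xdebug.so
xdebug.profiler_enable=1
xdebug.profiler_output_dir=/tmp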
What is the best way to break up a recursive function that is using a ton of resources?
For example:
function do_a_lot() {
    // a lot of code and processing is done here
    // it takes a lot of execution time
    if ($true) {
        // if true, we have to do all of that processing again
        do_a_lot();
    }
}
Is there any way to make the server take the brunt of only the first execution and then break the recursion up into separate processes? Or am I dreaming?
Honestly, if your function is using up that much of your system's resources, I would most likely refactor the code. It's not true multithreading, but you could perhaps look at using popen to fork your process, as sketched below.
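A minimal popen() sketch (worker.php is a hypothetical script that performs one chunk of the processing):

$handle = popen('php worker.php --chunk=1', 'r');
if ($handle === false) {
    die("could not start worker\n");
}
while (!feof($handle)) {
    echo fgets($handle); // read whatever the worker prints
}
pclose($handle);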
One of the rules of PHP is "share nothing": every PHP process is independent and shares nothing with the others. So if you want to break your execution across several PHP processes, you'll have to store the data somewhere. It can be memcached storage, a database, or the session, as you prefer.
Then you'll need to 'fork' your PHP process. There are solutions available to get this done on the server side, but IMHO they are all hacks: dangerous and not in the spirit of the PHP/web way, with the exception of 'work queue' tools.
I think the nicest way is to break your task up with AJAX. This allows you a clean user interface and avoids any long response timeout in the web process: show a 'working zone' to your user, then request the first step of the job via AJAX, store the response server-side, then request the next step, store the new response, and so on, step by step. You can even add a 'stop that stuff' function on the client side. A rough sketch of such a step endpoint follows.
You can also search Google for 'php work queue'.
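A hedged sketch of the step endpoint (do_step() and TOTAL_STEPS are hypothetical):

session_start();
$step = isset($_SESSION['step']) ? $_SESSION['step'] : 0;
$_SESSION['partial'][] = do_step($step); // run one slice of the job, store the partial result server-side
$_SESSION['step'] = $step + 1;
header('Content-Type: application/json');
echo json_encode(array('step' => $step, 'done' => ($step + 1) >= TOTAL_STEPS));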
If it's a long-running task, divide and conquer with Gearman. For example:
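A minimal sketch using the PECL gearman extension (the server address and task name are examples):

// client side: submit the job and return immediately
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('render_pdf', json_encode(array('booklet_id' => 42)));

// worker side: a long-running CLI process that picks jobs up
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('render_pdf', function (GearmanJob $job) {
    $params = json_decode($job->workload(), true);
    // ... do the heavy lifting here ...
});
while ($worker->work());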