I have been playing with this for a while now, and I thought I would ask the community.
Log::write("perf", "start" );
for($i=0;$i<100000000;$i++) {}
Log::write("perf", "finish");
As you can see, it's a basic log write, then a CPU-bound loop from 0 to 100 million.
If I execute this program on the command line with PHP, it completes in about 2.2 seconds, and if I execute the same file with HHVM, it completes in 0.326 seconds. Considerably faster!
However, when I run this exact same setup in my web instance, in the middle of the application (note: I use the perf log guards, so I know the other sections are not affecting it), the same program section takes a whopping 5 seconds under Apache 2 with FCGI.
For the life of me I can't figure out why it would be so slow. As a note, waiting for the JIT warm-up (the first 20 or so requests) to finish does not speed up the program...
Any idea what I might be doing wrong here?
As a note, this is running on Ubuntu 16.04 on an AWS c4 instance.
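To rule out the logging layer itself, you could time just the loop with microtime(true) in both the CLI and the web code path. A minimal sketch, reusing the Log::write call from above:

$t0 = microtime(true);
for ($i = 0; $i < 100000000; $i++) {}
$t1 = microtime(true);
// Log the raw wall-clock duration so CLI and web runs are directly comparable.
Log::write("perf", "loop: " . round($t1 - $t0, 3) . "s");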
Lately, any date operation or calculation in my script hangs until it is killed for taking too long ('Maximum execution time of 30 seconds exceeded'). It happens both when running the script from Apache and on the command line. All of the date and date/time operations used to work.
The code that hangs can be as simple as new DateTime() or date('Y'), although time() does work.
I'm running PHP 5.6 on a Raspberry Pi 3 with Raspbian Jessie with Apache 2.4.10.
The only date-related change to the system that I can think of is that I added an RTC module and followed the steps in the Adafruit tutorial to configure it. I don't know how that would have affected PHP though. Python's time.localtime() and just a plain ol' date or hwclock on the command line still work as expected.
Edit: I noticed that if I leave the command-line version running and watch it with top, PHP uses 100% of the CPU and the RAM usage increases steadily.
Update: I rolled back the configuration to not use the RTC module, disabled I2C, and reinstalled fake-hwclock. The problem persists. Running php -r 'echo date("Y");' takes all the CPU and steadily consumes RAM until manually killed. The problem doesn't appear to be related to the hardware RTC module.
I figured it out! I ran php -r 'echo date("Y");' through strace and noticed that it was recursively loading time zones. For example:
openat(AT_FDCWD, "/usr/share/zoneinfo/localtime", ...
eventually followed by
openat(AT_FDCWD, "/usr/share/zoneinfo/localtime/localtime", ...
and then
openat(AT_FDCWD, "/usr/share/zoneinfo/localtime/localtime/localtime", ...
You see the pattern. It turns out the problem was that I had a symlink in my /usr/share/zoneinfo folder called localtime that pointed to... you guessed it: /etc/localtime, so it kept loading the time zones over and over again, which explains the CPU usage and the continually increasing RAM usage.
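For anyone diagnosing something similar, a small hypothetical helper (not part of the original fix) can walk a symlink chain with readlink() and report a loop. Note this sketch does not resolve relative link targets:

// Follow a chain of symlinks; return the path where a loop starts, or null.
function findSymlinkLoop($path, $maxDepth = 10) {
    $seen = [];
    while (is_link($path) && $maxDepth-- > 0) {
        if (isset($seen[$path])) {
            return $path; // already visited: we are going in circles
        }
        $seen[$path] = true;
        $path = readlink($path);
    }
    return null;
}
var_dump(findSymlinkLoop('/usr/share/zoneinfo/localtime'));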
I have a Laravel 4 web project that implements a Laravel command.
When running in the development Homestead VM, it runs to completion (about 40 seconds total time).
However, when running it on the production server, it quits with 'Killed' output on the command line.
At first I thought it was the max_execution_time in the CLI php.ini, so I set it to 0 (for unlimited time).
How can I find out what is killing my command?
I run it in an SSH terminal using the standard artisan invocation:
php artisan commandarea:commandname
Does Laravel 4 have a command time limit somewhere?
The VPS is an Ubuntu 4.10 machine with MySQL, nginx, and PHP-FPM.
So, firstly, thank you to everyone who pointed me in the right direction regarding PHP and Laravel memory usage tracking.
I have answered my own question in the hope that it will benefit Laravel devs in the future, as my solution was hard to find.
After running 'dmesg' to show system messages, I found that the PHP script was being killed by Linux.
So, I added memory logging calls into my script before and after each of the key areas of my script:
Log::info('Memory now at: ' . memory_get_peak_usage());
Then I ran the script while watching the log output and also the output of the 'top' command.
I found that even though my methods were ending and the variables were going out of scope, the memory was not being freed.
Things that I tried that DIDN'T make any difference in my case (a sketch of these attempts follows this list):
unset($varname) on variables after I had finished with them, hoping to get the GC to kick in
adding gc_enable() at the beginning of the script and then calling gc_collect_cycles() after a significant number of vars are unset
disabling MySQL transactions, thinking maybe they were memory intensive. They weren't.
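For reference, a minimal sketch of what those attempts looked like (loadChunk and processChunk are hypothetical stand-ins for my own methods):

gc_enable(); // enable PHP's cycle-collecting garbage collector

$data = loadChunk();  // hypothetical: build up a large structure
processChunk($data);  // hypothetical: do the work
unset($data);         // drop the reference once finished
gc_collect_cycles();  // force a collection pass
Log::info('Memory now at: ' . memory_get_peak_usage());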
Now, the odd thing was that none of the above made any difference. My script was still using 150 MB of RAM by the time it was killed!
The solution that actually worked:
Now, this is definitely a Laravel-specific solution.
But my script's purpose is basically to parse a large XML feed and then insert thousands of rows into MySQL using the Eloquent ORM.
It turns out that Laravel keeps a log of queries and related objects to help you inspect query performance.
By turning this off with the following 'magic' call, I got my script down from 150 MB to around 20 MB!
This is the 'magic' call:
DB::connection()->disableQueryLog();
I can tell you, by the time I found this call I was grasping at straws ;-(
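Putting it together, the import loop ended up looking roughly like this (a sketch; FeedItem and the chunking are illustrative, not my exact code):

DB::connection()->disableQueryLog(); // stop Laravel 4 from retaining every executed query

foreach ($chunks as $chunk) {            // $chunks: batches of rows parsed from the XML feed
    foreach ($chunk as $attributes) {
        FeedItem::create($attributes);   // hypothetical Eloquent model
    }
    Log::info('Memory now at: ' . memory_get_peak_usage());
}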
A process may be killed for several reasons:
Out of Memory
There are two ways to trigger this error: exceed the amount of memory allocated to the PHP script in php.ini, or exhaust the available system memory. Check the PHP error log and the php.ini file to rule out the first possibility, and check the dmesg output for the second (the kernel's OOM killer logs there when it kills a process).
Exceeded the execution time-out limit
In your post you indicate that you disabled the timeout via the max_execution_time setting, but I have included it here for completeness. Be sure that the setting in php.ini is correct, and (for those using a web server rather than the CLI) restart the web server to ensure that the new configuration is active.
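A quick way to confirm which limits the running PHP actually sees, using standard functions (a minimal sketch):

// Print the effective limits as the running PHP sees them.
echo 'memory_limit: ' . ini_get('memory_limit') . PHP_EOL;
echo 'max_execution_time: ' . ini_get('max_execution_time') . PHP_EOL;

// For a long-running CLI job, the timeout can also be lifted at runtime:
set_time_limit(0);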
An error in the stack
If your script is error-free and not encountering either of the above errors, ensure that your system is running as expected. When using a web server, restart the web server software. Check the error logs for unexpected output, and stop or upgrade related daemons as needed.
Had this issue on a Laravel/Spark project; just wanted to share in case others hit it.
Try a refresh/restart of your dev server (if running Vagrant or Ubuntu) before more aggressive approaches.
I accidentally ran an install of dependency packages on a Vagrant server, and I also removed and replaced a mirrored folder repeatedly during install errors. My error was on Laravel/Spark 4.x. I was able to run migrations on other projects, but on one particular project nearly all commands were getting 'killed' very quickly, within about 300 ms. Reading other users' reports, I was dreading trying to find the issue or corruption. In my case, a quick vagrant reload did the trick, and the 'killed' issue was resolved.
WampServer 2.5 starts a php-win.exe process on (re-)start.
This process uses a lot of CPU and I/O, causing slowdowns on one of my hard disks and effectively 100% use of one of my CPU cores.
I typically just kill the process manually, which doesn't seem to affect anything in any way.
But I would rather the process didn't start at all, or somehow use less CPU and I/O.
What does the php-win.exe process do and how I can change it?
php-win.exe is the PHP command-line executable, but one that does not need a DOS box (console window) to run in.
It is used by WampServer because most of WampServer's own tooling is written in PHP.
If this initial processing is causing the problems you describe, that is odd, as the startup processing done in PHP usually takes milliseconds.
It suggests that something is wrong somewhere.
These WampServer startup processes do report errors to the PHP error log, so I would suggest you look there for clues.
After reading others with this question, most of the advice was to be patient.
It's been running for roughly 15 hours now and still nothing.
I've got it on an AWS EC2 micro instance; I know the RAM is low, but I added a 512 MB swap.
Both the memory and CPU sit around 90%.
Is it safe to say that it's not going to finish, or will it eventually do something? Is there any way to log this process to see what it's doing?
Composer is supposed to finish a reasonably sized install job within seconds. It might need some minutes if there is a really huge number of packages to install (by huge I mean more than I have experienced yet, i.e. probably more than 100 packages).
The process uses an unusual amount of RAM compared with other PHP scripts, i.e. hundreds of megabytes is not uncommon.
I'd advise not running Composer on such a poorly equipped machine. Adding swap space will not help; it will just make the machine read and write to disk heavily, slowing the whole process down by orders of magnitude (100 or 1000 times slower). You should run the install step on your development machine and then copy everything to the Amazon instance.
How long does it run on your local machine?
I have created a PHP script to import an RSS feed into the database. The feed, which is huge (from 2004 to 2010, approx. 2 million records), has to be inserted into the database. I have been running the script in the browser, but at the pace it is inserting (approx. 1 record per second) I suspect it would take another 20-25 days even running 24 hours a day. I tried running it in several browser windows at the same time and finished only 70,000 records in the last two days. I am not sure how the server would react if I ran 10-12 instances of it simultaneously.
A programmer at my client's end says that I could run it directly on the server through the command line. Could anyone tell me how much difference that would make, and what the command-line syntax is? I am on Apache with PHP/MySQL. I searched the web for a similar answer, but the results were confusing to me, as I am not a system administrator or that good with Linux, although I have done tasks like setting up SVN repositories and installing some Apache modules on a server in the past, so I hope I can manage this if someone tells me how to do it.
Difference in speed: minimal. All you save on is blocking on network I/O and connection setup (and the Apache overhead, which is negligible).
How to do it:
bash> php -f /path/to/my/php/script.php
You may only have the php5-mod package installed, which is PHP for Apache; you may have to install the actual command-line interpreter, though a lot of distros install both. Personally, I think you have an efficiency problem in the algorithm. Something that takes days and days can usually be sped up by caching and worst-case performance analysis (Big-O notation).
Also, vanilla PHP isn't very fast. There are lots of ways to make it really fast, but if you're doing heavy computation, you should consider C/C++, C#/Mono (maybe), or possibly Python (which can be pre-compiled, though it may not actually be much faster).
Exploring those other avenues is highly recommended.
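On the efficiency point: one insert per second usually means one round-trip and one implicit transaction per row. Batching rows through a prepared statement inside explicit transactions typically helps enormously. A rough sketch (table, columns, and credentials are made up):

$pdo = new PDO('mysql:host=localhost;dbname=feeds', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare('INSERT INTO items (title, published_at) VALUES (?, ?)');
$pdo->beginTransaction();
$count = 0;
foreach ($records as $record) {      // $records: the parsed feed entries
    $stmt->execute([$record['title'], $record['date']]);
    if (++$count % 1000 === 0) {     // commit every 1000 rows
        $pdo->commit();
        $pdo->beginTransaction();
    }
}
$pdo->commit();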
Only providing the filename to execute is sufficient:
php -f <YourScriptHere.php>
See the documentation for more command line options.
To run a PHP script from the command line, just execute:
php yourscript.php
If you want to keep this process running in background do:
php yourscript.php &
You can then run several processes at the same time. To identify the instances of the script that are currently running execute:
ps aux | grep yourscript.php
However, if you think it takes too long, try to find out whether there is a bottleneck in your code, and optimize it.
In Linux:
php -f file.php
Type
php --help
for other options
You may also need the -n option (no php.ini file), or options to specify where the php-cli.ini or php.ini file can be found.