My long-running Laravel 4 command keeps being killed - php

I have a Laravel 4 web project that implements a Laravel command.
When running in the development Homestead VM, it runs to completion (about 40 seconds total time).
However, when running it on the production server, it quits with a 'killed' output on the command line.
At first I thought it was the max_execution_time in the CLI php.ini, so I set it to 0 (for unlimited time).
How can I find out what is killing my command?
I run it in an SSH terminal using the standard artisan invocation:
php artisan commandarea:commandname
Does Laravel 4 have a command time limit somewhere?
The VPS is an Ubuntu 4.10 machine with MySQL, nginx and PHP-FPM.

So, firstly, thank you to everyone who pointed me in the right direction regarding PHP and Laravel memory usage tracking.
I have answered my own question hoping that it will benefit Laravel devs in the future, as my solution was hard to find.
After typing 'dmesg' to show system messages, I found that the PHP script was being killed by Linux.
So, I added memory logging calls into my script before and after each of the key areas of my script:
Log::info('Memory now at: ' . memory_get_peak_usage());
Then I ran the script while watching the log output and also the output of the 'top' command.
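The pattern looked roughly like this (parseFeed() and insertRows() below are hypothetical stand-ins for the key areas of the script, not the actual method names):
Log::info('Before parsing feed, peak memory: ' . memory_get_peak_usage());
$items = $this->parseFeed($feedXml);      // hypothetical key area
Log::info('After parsing feed, peak memory: ' . memory_get_peak_usage());
$this->insertRows($items);                // hypothetical key area
Log::info('After inserting rows, peak memory: ' . memory_get_peak_usage());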
I found that even though my methods were ending and the variables were going out of scope, the memory was not being freed.
Things that I tried that DIDN'T make any difference in my case:
unset($varname) on variables after I had finished with them - hoping to get GC to kick in
adding gc_enable() at the beginning of the script and then adding gc_collect_cycles() calls after a significant number of vars were unset
disabling MySQL transactions - thinking maybe that was memory intensive - it wasn't
Now, the odd thing was that none of the above made any difference. My script was still using 150MB of RAM by the time it was killed!
The solution that actually worked:
Now this is definitely a laravel specific solution.
But my script's purpose is basically to parse a large XML feed and then insert thousands of rows into MySQL using the Eloquent ORM.
It turns out that Laravel creates logging information and objects to help you see the query performance.
By turning this off with the following 'magic' call, I got my script down from 150MB to around 20MB!
This is the 'magic' call:
DB::connection()->disableQueryLog();
I can tell you, by the time I found this call, I was grasping at straws ;-(
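For context, a minimal sketch of where that call might sit inside a Laravel 4 command (the class name and method bodies here are illustrative, not the original script):
<?php
use Illuminate\Console\Command;

class ImportFeedCommand extends Command {

    protected $name = 'commandarea:commandname';
    protected $description = 'Parse a large XML feed and insert rows via Eloquent.';

    public function fire()
    {
        // stop Laravel from keeping an in-memory log of every query it runs
        DB::connection()->disableQueryLog();
        Log::info('Memory after disabling query log: ' . memory_get_peak_usage());
        // ... parse the XML feed and insert thousands of rows with Eloquent here ...
    }
}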

A process may be killed for several reasons:
Out of Memory
There are two ways to trigger this error: exceed the amount of memory allocated to the PHP script in php.ini, or exceed the available system memory. Check the PHP error log and php.ini file to rule out the first possibility, and use the dmesg output to check for the second possibility.
Exceeded the execution time-out limit
In your post you indicate that you disabled the timeout via the max_execution_time setting, but I have included it here for completeness. Be sure that the setting in php.ini is correct and (for those using a web server instead of a CLI script) restart the web server to ensure that the new configuration is active.
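As a quick sanity check (a small sketch, not from the original answer), you can print the limits the CLI process itself actually sees:
<?php
// check-limits.php - run with: php check-limits.php
echo 'memory_limit: ' . ini_get('memory_limit') . PHP_EOL;
echo 'max_execution_time: ' . ini_get('max_execution_time') . PHP_EOL;
echo 'current peak usage: ' . memory_get_peak_usage(true) . ' bytes' . PHP_EOL;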
An error in the stack
If your script is error-free and not encountering either of the above errors, ensure that your system is running as expected. When using a web server, restart the web server software. Check the error logs for unexpected output, and stop or upgrade related daemons as needed.

Had this issue on a Laravel/Spark project; just wanted to share in case others run into it.
Try a refresh/restart of your dev server (if running Vagrant or Ubuntu) before more aggressive approaches.
I accidentally ran an install of dependency packages on a Vagrant server. I also removed and replaced a mirrored folder repeatedly during install errors. My error was on Laravel/Spark 4.~. I was able to run migrations on other projects, but I was getting 'killed' very quickly, within a 300ms timeframe, on one particular project for nearly all commands. Reading other users' reports, I was dreading trying to track down the issue or corruption. In my case, a quick vagrant reload did the trick, and the 'killed' issue was resolved.

Related

Laravel + Beanstalkd: Trouble getting it to start

I set up beanstalkd with laravel on my local environment a month back for testing purposes. Composer required it, and the note I left myself to turn the queue on was "php artisan queue:work --queue=beanstalkd --tries=3". It was working great!
However, I restarted my computer for the first time since I got it running, and I have now confirmed the queue isn't running (not a surprise), and I just need to get it started again. Running that command I posted above in my terminal just causes the given command to sit idle, which definitely wasn't happening before, and it definitely doesn't turn beanstalkd on.
My best guess is that I'm missing a step I don't remember doing, but I can't seem to find anything that works while googling for the solution. I've been tinkering for hours now on what I know is a really simple fix.
Thanks in advance.
That command will run the workers - but unless the beanstalkd server is also running, there is nothing for them to connect to.
How that would be started depends on how you have it set up. One common way on a Linux-like system would be with /etc/init.d/beanstalkd start. It can be set up to auto-start on a server, but again, that depends on which OS you are using, how you have it installed and what systems you normally use.
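For reference, the beanstalkd connection in Laravel's app/config/queue.php looks roughly like this (the host and queue values below are assumptions; the point is that the host it names must actually have the beanstalkd daemon running):
'default' => 'beanstalkd',

'connections' => array(

    'beanstalkd' => array(
        'driver' => 'beanstalkd',
        'host'   => 'localhost',   // beanstalkd must be running and reachable here
        'queue'  => 'default',
        'ttr'    => 60,
    ),

),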

After upgrading to a new PHP version, cron jobs stopped working

Last year I purchased an encrypted script which runs two cron jobs. A month ago the crons stopped working, and when I talked to the hosting company they said it's a script problem. The PHP cron file works fine without any errors when visited by browser, and the script provider told me that this issue should be fixed by the hosting service and refused to help!
Here is the command used; it runs every 10 minutes: /home/username/public_html/cron/cron.php
cPanel Version 64.0 (build 24)
Apache Version 2.4.25
PHP Version 5.6.30
My question is: is it true that upgrading the PHP version will affect cron jobs, and how can I solve this?
Thanks
In short, yes, upgrading PHP can affect your scripts -- the crons aren't run by Apache or PHP; the crons run at the OS level.
Your PHP upgrade is most likely affecting the crons in one of two ways:
The upgrade was large, like PHP 5.6 to PHP 7.0, and there's a deprecation warning somewhere (which will show up in the crons' log), or the script is running some code that's now fully deprecated; most likely a query or a class/method named after a reserved word. Your logs will have more info; just make sure you have debugging turned on, otherwise your errors will be suppressed.
The new PHP settings from the upgrade have disabled some of the behaviour allowed by the older PHP version, such as getting away with empty or unassigned variables, and now your script is running into errors (i.e. using a variable that doesn't exist, such as $_REQUEST['something'], which would have been empty but now raises an error that affects the rest of the script).
To fix this you need to know what the problem is. The easiest way is to access the log files that crons often create. If you don't get that with your host, ask them for it, or ask them to send you a copy of the error that's being created -- a quick Google on the error will tell you what the problem is. But without knowing more about the script or the error log, you probably won't get a better answer.
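If you can edit the cron script itself, a minimal sketch of forcing errors into a log you can read (the log path below is hypothetical) is:
<?php
// put at the very top of cron.php
error_reporting(E_ALL);
ini_set('display_errors', '1');  // echoed output ends up in the cron email/output
ini_set('log_errors', '1');
ini_set('error_log', '/home/username/cron-errors.log'); // hypothetical path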
The old command is working; it was just me. I did a copy-paste from my old backup and forgot the 'php' at the start of the command! Nothing has changed; the command should look like this, e.g.: php /home/username/public_html/cron/cron.php

Php ZipArchive's open/addFile method crashes with fatal error on big datasets

We have a PHP (version 5.3.10) CLI application doing some heavy work on an Ubuntu 12.04 64-bit machine. This script can run for a long time depending on the dataset it receives. These datasets are zip files with a lot of XML, image and MS doc files.
Earlier this script used a few system commands (shell, perl, java) to complete its task. We did not have problems then. Recently, we upgraded these scripts to use RabbitMQ for multiple concurrent invocations, moved from cron-based operation to supervisord for automatic recovery and monitoring, and also used PHP's core libraries and functions as much as possible to avoid shell invocations.
Now, after deploying to production, we found that the script fatally crashed on a line where ZipArchive was used to create an archive. To be specific, only on its "open" and "addFile" methods. We tested this many times with the problematic dataset and found that this is where the real problem is.
The error thrown was "Fatal Error: Maximum execution time of 300 seconds exceeded". We know about PHP's limit on execution time and we double-checked php.ini and all the settings under the "/etc/php5/conf.d" folder, and everywhere we had "max_execution_time" set to 0. We also checked that the script's SAPI mode was "cli" using "php_sapi_name()". ini_get("max_execution_time") also returns 0.
Even when the script is managed by supervisord, the above mode and execution limit are the same. We could not find out from where this "max_execution_time" limit of 300 seconds is being triggered.
One more thing: the script actually ran for more than 600 seconds before it crashed with this message. We also suspect that this only happens when ZipArchive takes more than 300 seconds by itself, but we are not sure. Also, the partial zip archive it creates when this happens is between 280 MB and 290 MB. So we downloaded the PHP source from its repository and did a quick grep to see if ZipArchive's code base had any such limits. We found none.
We are now trying to replace the ZipArchive PHP code with a shell command as a workaround. We are yet to test it. I will post our findings here soon.
Have any of you faced such issues before? Is this something related to ZipArchive? Is it recommended to use ZipArchive for creating huge archives? The partial zip file it created before the crash was between 280 MB and 290 MB.
I had the same problem once when using ZipArchive with files > 500 MB. In some cases it also acts up when the size is considerably smaller but the number of files is higher. Finally I ended up creating a wrapper over the Linux zip/unzip commands and used them, so at the core it is basically just doing an exec() at the OS level. Never had problems with that. Of course you need a sysadmin to set up permissions and all, but it's a stable solution.
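As an illustration of that workaround, here is a rough sketch of such a wrapper (it assumes the zip binary is installed; the function and variable names are made up for the example):
<?php
// hypothetical wrapper: build the archive by shelling out instead of using ZipArchive
function zipDirectory($sourceDir, $zipPath)
{
    $cmd = sprintf('zip -r %s %s 2>&1', escapeshellarg($zipPath), escapeshellarg($sourceDir));
    exec($cmd, $output, $exitCode);
    if ($exitCode !== 0) {
        throw new RuntimeException('zip failed: ' . implode("\n", $output));
    }
}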

PHP CLI "Out of memory" error on VPS server running CentOS with cPanel

I'm running Symfony2.1 on a hosted virtual server running CentOS with cPanel. Everything runs fine except that I can no longer use Symfony CLI commands. I'm getting this:
Fatal error: Out of memory (allocated 20185088) (tried to allocate 71 bytes)
error (for example when I try to run php app/console cache:clear --env=prod or other useful Symfony commands).
I've used the same commands through the CLI for months without any problem until a few days ago when this error came out. I haven't been able to overcome this error since then.
I want to point out that this is not a PHP memory_limit related error (see below), it being an "Out of memory" error and not an "Allowed memory size exhausted" error.
The exact same commands run perfectly when added to a cron job (this is the temporary workaround I'm using now), meaning that the scripts themselves are not to be blamed.
What I've tried to do so far:
Increasing memory_limit in php.ini (or directly on the command line): as I suspected from the type of error I'm having, this had no effect
Searching for any other php.ini file that may override this setting for the CLI: there was none
Monitoring memory usage with free -m: there was plenty of memory available (which was expected, as the Symfony commands run perfectly through cron jobs)
Trying to find out if WHM/cPanel might have, through an update for example, set some memory restriction for user accounts: I found that this might be the case when using a jailed SSH shell (but I'm not, I'm using a normal SSH shell), or when enabling Shell Fork Bomb protection (but no, this is disabled on my server)
Checking ulimit settings on my server/account with ulimit -a: there are limits, but they amount to 256M, whereas my "Out of memory" error indicates that no more than 20M seems to be allowed to the PHP CLI
Checking if there was any memory restriction somewhere in .conf files, especially in /etc/security/limits.conf: there was none
Checking for any other file that could set up that kind of memory restriction: this time I indeed found one, namely /usr/local/cpanel/etc/login_profile/limits.sh, which seems to set such a limit (ulimit -n 100 -u 35 -m 20000 -d 20000 -s 8192 -c 20000 -v unlimited 2>/dev/null) => I thought I'd finally found the culprit, as the numbers (20000) seem to correspond, but editing this file as root and logging in again to my account through SSH had, again, no effect (see the sketch just after this list for another way to inspect the limits the CLI actually sees)
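A quick, Linux-specific sketch for checking the limits the PHP CLI process itself runs under (it only reads the standard procfs file, nothing else is assumed):
<?php
// sketch: print the kernel limits applied to this very PHP CLI process
echo file_get_contents('/proc/self/limits');
echo 'memory_limit as PHP sees it: ' . ini_get('memory_limit') . PHP_EOL;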
My question :
I've now run out of options. I contacted my host but they are as clueless as I am (a lot more really, they don't even know about Symfony). Is there anyone out there who experienced the same problem?
If, by any chance, /usr/local/cpanel/etc/login_profile/limits.sh has gotten me close to the answer, what am I missing? Do I need to reboot the server or restart cPanel, instead of just re-logging into my account?
On a side note, is this something that might be related to a WHM/cPanel update (as stated, everything was working perfectly on the command line until a few days ago)?
Thanks for any help, and sorry for the long question.
PS: I've found other similar questions on Stack Overflow, but each of them was solved using one of the things I've already tried. So I guess I'm experiencing a different problem.
It turns out that, following an unwanted cPanel update, my SSH port was changed from the default value and I was unable to log in. I restarted SSH in safe mode in order to log in.
Today, on a hunch, I looked at the SSH port in /etc/ssh/sshd_config, restarted SSH in "normal" mode, and logged in with the correct port: the scripts now run perfectly.
So the cause was using SSH in safe mode, just that. Hopefully this very specific problem will be useful to someone else: never use SSH safe mode for daily use unless you absolutely need to.

How to run php script on server through command line

I have created a PHP script to import an RSS feed into the database. The feed, which is huge (from 2004 to 2010, approx. 2 million records), has to be inserted into the database. I have been running the script in a browser, but at the pace it is inserting (approx. 1 record per second) it will take another 20-25 days to input this data even if I run it 24 hrs a day. I have tried running it in different browser windows at the same time and have finished only 70,000 records in the last two days. I am not sure how the server would react if I ran 10-12 instances of it simultaneously.
A programmer at my client's end says that I could run it directly on the server through the command line. Could anyone tell me how much difference it would make if I run it through the command line? Also, what is the command-line syntax to run it? I am on Apache, PHP/MySQL. I searched over the web for a similar answer but the results seem quite confusing to me, as I am not a system administrator or that good with Linux, although I have done tasks like setting up SVN repositories and installing some Apache modules on the server in the past, so I hope I can manage this if someone tells me how to do it.
Difference in speed: minimal. All you save on is blocking on network I/O and connection overhead (and the Apache overhead, which is negligible).
How to do it:
bash> php -f /path/to/my/php/script.php
You may only have the php5-mod package installed, which is PHP for Apache; you may have to install the actual command-line interpreter, although a lot of distros install both. Personally I think you have an efficiency problem in the algorithm. Something taking days and days seems like it could be sped up by caching & worst-case performance analysis (Big-O notation).
Also, vanilla PHP isn't very fast. There are lots of ways to make it really fast, but if you're doing heavy computation, you should consider C/C++, C#/Mono (maybe), or possibly Python (can be pre-compiled, though it may not actually be much faster).
But exploring these other options is highly recommended.
Only providing the filename to execute is sufficient:
php -f <YourScriptHere.php>
See the documentation for more command line options.
To run a PHP script on the command line, just execute:
php yourscript.php
If you want to keep this process running in the background, do:
php yourscript.php &
You can then run several processes at the same time. To identify the instances of the script that are currently running, execute:
ps aux | grep yourscript.php
However, if you think it takes too long, try to find out whether there's any bottleneck in your code and optimize it.
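If the bottleneck turns out to be doing one INSERT per record, one common optimization (a sketch that is not part of the original answers; the table and column names are hypothetical) is to batch many rows into a single INSERT with PDO:
<?php
// sketch: batch many feed items into one multi-row INSERT (hypothetical schema)
$pdo = new PDO('mysql:host=localhost;dbname=feeds', 'user', 'pass');

function insertBatch(PDO $pdo, array $rows)
{
    if (empty($rows)) {
        return;
    }
    // one (?, ?) group per row, e.g. (?, ?), (?, ?), (?, ?)
    $placeholders = implode(', ', array_fill(0, count($rows), '(?, ?)'));
    $stmt = $pdo->prepare("INSERT INTO feed_items (title, published_at) VALUES $placeholders");
    $params = array();
    foreach ($rows as $row) {
        $params[] = $row['title'];
        $params[] = $row['published_at'];
    }
    $stmt->execute($params);
}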
In Linux:
php -f file.php
Type
php --help
for other options
You may also need the -n option (no php.ini file) or options to specify where php-cli.ini or php.ini file can be found.
