Last year I purchased an encrypted script which runs two cron jobs. A month ago the crons stopped working, and when I talked to the hosting company they said it's a script problem. The PHP cron file works fine without any errors when visited in a browser, but the script provider told me this issue should be fixed by the hosting service and refuses to help!
Here is the command used; it runs every 10 minutes: /home/username/public_html/cron/cron.php
cPanel Version 64.0 (build 24)
Apache Version 2.4.25
PHP Version 5.6.30
My question: is it true that upgrading the PHP version can affect a cron job, and how can I solve this?
Thanks.
In short, yes, upgrading PHP can affect your scripts -- the crons aren't run by Apache or PHP; they are run at the OS level.
Your PHP upgrade is most likely affecting the crons in one of two ways:
The upgrade was large, like PHP 5.6 to PHP 7.0, and there's a deprecation warning somewhere (which will show up in the crons' log), or the script is running some code that has now been removed entirely; most likely a query or a class/method named after a reserved word. Your logs will have more info; just make sure you have debugging turned on, otherwise your errors will be suppressed.
The new PHP settings from the upgrade have disabled some of the leniencies of an older PHP version, such as getting away with empty or unassigned variables, and now your script is running into errors (e.g. reading a variable that doesn't exist, such as $_REQUEST['something'], which would previously have been treated as empty but now raises an error that affects the rest of the script); see the sketch below.
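For example, a minimal sketch of the defensive rewrite (the 'something' key is just an illustrative name):
// Old PHP tolerated reading an undefined index silently; newer versions raise a notice/warning.
$value = isset($_REQUEST['something']) ? $_REQUEST['something'] : '';
// Or, on PHP 7+, with the null coalescing operator:
$value = $_REQUEST['something'] ?? '';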
To fix this you need to know what the problem is. The easiest way is to check the log files that crons often create. If you don't get those with your host, ask them for access, or ask them to send you a copy of the error being generated -- a quick Google search on the error will tell you what the problem is. Without knowing more about the script or the error log, you probably won't get a better answer.
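If the host won't hand over the logs, one workaround is to have the cron script log its own errors; the log path below is an assumption, so pick any location writable by your account:
// Place near the top of cron.php; the error_log path is an assumption.
error_reporting(E_ALL);
ini_set('display_errors', '1');
ini_set('log_errors', '1');
ini_set('error_log', '/home/username/cron-errors.log');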
The old command was working; the problem was mine. I copy-pasted it from an old backup and forgot the 'php' at the start of the command! Nothing had changed on the server; the command should look like this, for example: php /home/username/public_html/cron/cron.php
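For reference, a minimal sketch of the corrected crontab entry (the PHP binary location is an assumption; run 'which php' to find the real path on your host):
*/10 * * * * /usr/local/bin/php /home/username/public_html/cron/cron.php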
Related
Recently we updated our server from PHP 5.6 to PHP 7.4, and since this upgrade we have been experiencing some very strange behaviour in some scripts.
The scripts themselves are fully PHP 7 compatible; no errors whatsoever are printed or even logged when the problem occurs.
What happens is as follows:
A script is started and calls several functions. When one function simply takes too long to finish, the main script stops; no error or output is given to indicate that something went wrong.
It does not matter whether we run this script via a GUI or via the CLI; the result is the same in both cases.
The script stops/breaks (on the CLI you are dropped back to the prompt) every time a called function (it does not matter which function) simply takes too long to finish. As mentioned, the cause is NOT a PHP code error; the code is valid.
When the same script is run using PHP 5.6, it keeps waiting until the called function is finished and then continues normally, as it is supposed to.
It looks like there is a (new) setting somewhere in PHP 7 which limits the execution time a called function may run, otherwise I cannot explain this behaviour. The problem is which setting this is exactly and how to change it; the obvious settings we have already changed.
Does anyone have an idea where to look or search for this kind of setting?
The system is running on CentOS 8 using PHP 7.4.13 (or PHP 5.6). When using an older PHP 7 version (7.2) the problem is the same; only PHP 5.6 does not have this problem at all.
Error reporting is turned on, and no errors are logged whatsoever. If we run a script on PHP 7.4, it stops from the CLI after a short period (1-2 minutes), while on PHP 5.6 the same script runs for a long time (which it should in this case). Our developers found that when a function that calls another function to check whether an email account exists (a HELO check) takes longer than 2-3 seconds, the entire PHP script is stopped on 7.4, whereas PHP 5.6 just waits longer and runs the entire script.
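One candidate worth checking, purely as an assumption based on the HELO check blocking on a socket, is default_socket_timeout, which caps how long PHP socket connects and reads may block:
// Check the effective value (PHP ships with a default of 60 seconds):
echo ini_get('default_socket_timeout'), "\n";
// Raise it for this run if the HELO checks legitimately take longer:
ini_set('default_socket_timeout', '300');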
I set up beanstalkd with Laravel on my local environment a month back for testing purposes. I required it via Composer, and the note I left myself for turning the queue on was "php artisan queue:work --queue=beanstalkd --tries=3". It was working great!
However, I restarted my computer for the first time since I got it running, and I have now confirmed the queue isn't running (not a surprise); I just need to get it started again. Running the command I posted above in my terminal just causes it to sit idle, which definitely wasn't happening before, and it definitely doesn't turn beanstalkd on.
My best guess is that I'm missing a step I don't remember doing the first time, but I can't seem to find anything that works while googling for the solution. I've been tinkering for hours on what I know is a really simple fix.
Thanks in advance.
That command will run the workers - but unless the server is also running, there is nothing for it to connect to.
How that would be started depends on how you have it set up. One common way on a Linux-like system would be /etc/init.d/beanstalkd start. It can be set up to auto-start on a server, but again, that depends on which OS you are using, how you have it installed, and what systems you normally use.
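As a minimal sketch, assuming a systemd-based Linux or a Homebrew install on macOS (the service name may differ on your system):
sudo systemctl start beanstalkd     # start the server now
sudo systemctl enable beanstalkd    # auto-start it on boot
# or, on macOS with Homebrew: brew services start beanstalkd
php artisan queue:work --queue=beanstalkd --tries=3   # then start the worker again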
I have a Laravel 4 web project that implements a Laravel command.
When run in the development Homestead VM, it runs to completion (about 40 seconds total time).
However, when running it on the production server, it quits with a 'killed' output on the command line.
At first I thought it was the max_execution_time in the CLI php.ini, so I set it to 0 (for unlimited time).
How can I find out what is killing my command?
I run it in an SSH terminal using the standard artisan invocation:
php artisan commandarea:commandname
Does Laravel 4 have a command time limit somewhere?
The VPS is an Ubuntu 4.10 machine with MySQL, nginx and PHP-FPM.
So, firstly, thank you to everyone who pointed me in the right direction regarding PHP and Laravel memory usage tracking.
I have answered my own question hoping that it will benefit Laravel devs in the future, as my solution was hard to find.
After typing 'dmesg' to show system messages, I found that the PHP script was being killed by Linux.
So, I added memory-logging calls before and after each of the key areas of my script:
Log::info('Memory now at: ' . memory_get_peak_usage());
Then I ran the script while watching the log output and also the output of the 'top' command.
I found that even though my methods were ending and the variables were going out of scope, the memory was not being freed.
Things that I tried that DIDN'T make any difference in my case (a sketch of these calls follows the list):
unset($varname) on variables after I had finished with them - hoping to get GC to kick in
adding gc_enable() at the beginning of the script and then adding gc_collect_cycles() calls after a significant number of vars are unset.
Disabling MySQL transactions - thinking maybe they were memory intensive - they weren't.
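For reference, a sketch of those GC attempts (all real PHP calls; in this case they made no difference):
gc_enable();             // turn on the cyclic garbage collector
// ... heavy parsing/inserting work here ...
unset($parsedChunk);     // $parsedChunk is an illustrative variable name
gc_collect_cycles();     // force a collection pass over unreachable cycles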
Now, the odd thing was that none of the above made any difference. My script was still using 150 MB of RAM by the time it was killed!
The solution that actually worked:
Now, this is definitely a Laravel-specific solution.
But my script's purpose is basically to parse a large XML feed and then insert thousands of rows into MySQL using the Eloquent ORM.
It turns out that Laravel keeps logging information and objects in memory to help you inspect query performance.
By turning this off with the following 'magic' call, I got my script down from 150 MB to around 20 MB!
This is the 'magic' call:
DB::connection()->disableQueryLog();
I can tell you by the time I found this call, I was grasping at straws ;-(
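Putting it together, a minimal sketch of how the calls fit inside a Laravel 4 command (the parse/insert step is just a placeholder):
DB::connection()->disableQueryLog(); // stop Laravel retaining every executed query in memory
Log::info('Memory at start: ' . memory_get_peak_usage());
// ... parse the XML feed and insert rows via Eloquent here ...
Log::info('Peak memory at end: ' . memory_get_peak_usage());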
A process may be killed for several reasons:
Out of Memory
There are two ways to trigger this error: exceed the amount of memory allocated to the PHP script in php.ini, or exceed the available system memory. Check the PHP error log and php.ini file to rule out the first possibility, and use the dmesg output to check for the second (quick checks below).
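A quick way to check both, assuming a typical Linux box (the grep patterns vary slightly by kernel version):
php -r 'echo ini_get("memory_limit"), PHP_EOL;'      # effective PHP limit
dmesg | grep -i -E 'out of memory|killed process'    # kernel OOM-killer entries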
Exceeded the execution time-out limit
In your post you indicate that you disabled the timeout via the max_execution_time setting, but I have included it here for completeness. Be sure that the setting in php.ini is correct and (for those using a web server rather than a CLI script) restart the web server to ensure that the new configuration is active, as sketched below.
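You can also confirm from inside the script itself that the timeout really is disabled (the CLI SAPI defaults to 0, i.e. unlimited):
echo ini_get('max_execution_time'), "\n"; // 0 means no limit
set_time_limit(0); // belt-and-braces: disable the timeout at runtime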
An error in the stack
If your script is error-free and not encountering either of the above errors, ensure that your system is running as expected. When using a web server, restart the web server software. Check the error logs for unexpected output, and stop or upgrade related daemons as needed.
Had this issue on a Laravel/Spark project. Just wanted to share in case others have this issue.
Try a refresh/restart of your dev server (if running Vagrant or Ubuntu) before more aggressive approaches.
I accidentally ran an install of dependency packages on a Vagrant server. I had also removed and replaced a mirrored folder repeatedly during install errors. My error was on Laravel/Spark 4.~. I was able to run migrations on other projects, but on one particular project I was getting 'killed' very quickly, in a 300 ms timeframe, for nearly all commands. Reading other users' reports, I was dreading trying to find the issue or corruption. In my case, a quick vagrant reload did the trick, and the 'killed' issue was resolved.
Firstly, forgive me if my terminology isn't entirely accurate. I have only limited knowledge of this subject, but I will try my best to convey the problems we are having. My server administrator is trying to deploy PHP 5.5.9 on a live server. Originally the intention was to install PHP 5.4.x, but we opted for the latest version instead (a manual compile is required regardless, due to the OS).
The OS is openSUSE 12.1 and the server is a Plesk server (Plesk version 11.0.9) with Apache 2.2.1. This particular OS does not have the ability to update PHP automatically, so everything has to be done manually. Since we didn't want to risk screwing up the server (currently running PHP 5.3.8), we opted to install a second version of PHP alongside the current one. The instructions we followed are outlined here: http://kb.parallels.com/en/114753
After numerous failed attempts due to missing libraries during compilation, we were finally able to compile PHP 5.5.9 without error and then proceeded to run tests with 'make test'.
Unfortunately, the test results came back with 32 failures, and 20% of the total tests were skipped. A total of 13011 tests were run, 10410 of which completed. The TEST SUMMARY can be downloaded from here: http://uploaded.net/file/v6ug55l8
Anyway, deciding we might as well give it a try, we applied the changes as indicated in the first link above to the vhost.conf. However, it didn't work, and the vhost then returned Internal Server Errors for every page regardless of script or extension. The error logs sadly do not indicate any errors, only a whole ton of internal server errors recorded by mod_security. We did notice a huge number of these in the error log: "Warning: SuexecUserGroup directive requires SUEXEC wrapper." But it doesn't seem to be related, as the same error goes back several weeks.
So, we're stuck without any idea what to do next. Our next attempt will be to compile PHP 5.4.x instead, in case something in 5.5.9 is clashing with the setup...
Any and all advice will be appreciated. As per the opening statement, I'm not an expert here, so if you need any additional information about the machine and its server, feel free to ask. Thank you for your attention!
Problem solved. The vhost's cgi-bin needed to be chmod 755, not 775.
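For anyone hitting the same thing: suEXEC refuses to execute scripts in group-writable directories, which is why 775 fails while 755 works. The vhost path below is an assumption based on a default Plesk layout:
chmod 755 /var/www/vhosts/example.com/cgi-bin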
I have a huge PHP script which I had been running on Apache 2.2.12, and I recently upgraded to Apache 2.2.14. However, my PHP script no longer works: it stops at a certain point every time. I have been trying to work out what the difference is between these two Apache versions; I have looked at this CHANGELOG and have not been able to determine it.
When I look in my Apache error log, I find this just before my PHP script fails to do anything else:
Parent: child process exited with status 255
The errors after this are just notices and they end after a few of these.
What changes in Apache do you think could cause this? I was reading around and found a few people saying that newer versions of Apache on Vista (Windows) can detect when a PHP script is in an infinite loop and will kill that child process; is this true?
Thanks all for any input.
Update
Apologies, this is no longer relevant. I think it's a PHP issue. I switched from 5.2 to 5.3 and I think it's to do with modules. I will be opening another question.
Try this: download http://windows.php.net/downloads/snaps/php-5.2-win32-VC6-x86-latest.zip, then copy libmysql.dll into your PHP directory and restart Apache.
Actually, it does not relate to modules, but to certain validations made by the core interpreter, which break many non-strict processes in different PHP applications. Beware of updating PHP to 5.3! Mwahahahaha!