Problem with Gearman: GEARMAN_INVALID_ARGUMENT - php

I have automated PHP crons that run at midnight every day. I am using Gearman in one of them, but recently I've been getting warnings that break my script:
warning: GearmanClient::doBackground(): add_task(GEARMAN_INVALID_ARGUMENT) invalid workload -> libgearman/add.cc:199 in /var/www/html/client.php
When I run the cron manually, no such error is encountered. I am not able to understand what exactly the issue could be here.
Is it Gearman? Or something with the server?
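A minimal sketch of the call in question, assuming a local gearmand and a hypothetical job name and payload. The warning text ('invalid workload') suggests checking that the workload is a non-empty string before queuing, since an upstream value that silently becomes null or false under cron would trigger exactly this:

$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);           // default gearmand host/port

$workload = json_encode($payload);                // $payload: hypothetical job data
if (!is_string($workload) || $workload === '') {
    // Guard against handing doBackground() an empty or non-string workload.
    error_log('Refusing to queue job: empty workload');
} else {
    $client->doBackground('my_job', $workload);   // 'my_job': hypothetical job name
}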

Related

PHP: extension sysvmsg

On one of my personal projects I use a simple queue implemented with the sysvmsg functions. The specific function, msg_receive(), is used to wait for a message to arrive in the queue. Once in a while this wait fails with error code 43 and no error text.
I can trigger error 43 deliberately by running two processes (you can see this on GitHub); that is expected. But if I run a single process under supervisor, I get the same error.
I created a simple script on GitHub which is executed by supervisor; the process waits to receive a message, and it sometimes happens that the function returns error code 43. Is there anything that cleans up the resources?
I have no idea what causes this, but here is what I tried:
I checked that supervisor runs only one PHP process and that the PID stays the same.
I tested it on ARM Raspbian 32-bit (PHP 7.3.19) and Ubuntu 64-bit (PHP 7.4.9); both behave the same.
Thank you for your help. I hope the behavior is well described.
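The wait loop looks roughly like this (a minimal sketch; the queue key and maximum message size are hypothetical). One hedged observation: on Linux, error code 43 matches EIDRM ('identifier removed'), which is raised when the queue is removed while msg_receive() is blocked on it, so anything that deletes and recreates the queue is worth checking:

$queue = msg_get_queue(0x1234);   // hypothetical queue key

while (true) {
    $msgType = null;
    $message = null;
    $errorCode = 0;

    $ok = msg_receive(
        $queue,      // queue handle
        0,           // receive the next message of any type
        $msgType,    // filled with the received message type
        16384,       // maximum message size to accept
        $message,    // filled with the unserialized payload
        true,        // unserialize the payload
        0,           // no flags: block until a message arrives
        $errorCode   // filled with the error number on failure
    );

    if (!$ok) {
        // 43 (EIDRM) would mean the queue went away mid-wait, e.g. via
        // msg_remove_queue() in another process.
        error_log("msg_receive failed with code $errorCode");
        break;
    }

    // ... process $message ...
}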

After upgrading to a new PHP version, cron jobs stopped working

Last year I purchased an encrypted script which runs two cron jobs. A month ago the crons stopped working, and when I talked to the hosting company they said it's a script problem. The PHP cron file works fine without any errors when visited in a browser, but the script provider told me this issue should be fixed by the hosting service and refuses to help!
Here is the command used; it runs every 10 minutes: /home/username/public_html/cron/cron.php
cPanel Version 64.0 (build 24)
Apache Version 2.4.25
PHP Version 5.6.30
My question: is it true that upgrading the PHP version will affect cron jobs, and how can I solve this?
Thanks
In short, yes, upgrading PHP can affect your scripts -- the crons aren't run by Apache or PHP; the crons run from the OS level.
Your PHP upgrade is most likely affecting the crons in one of two ways:
The upgrade was large, like PHP 5.6 to PHP 7.0, and there's a deprecation warning somewhere (which will output in the crons' log), or the script is running some code that's now fully deprecated; most likely a query or a class/method named after a reserved word. Your logs will have more info; just make sure you have debugging turned on, otherwise your errors will be suppressed.
The new PHP settings from the upgrade have disabled some of the behaviors allowed by an older PHP version, such as getting away with empty or unassigned variables, and now your script is running into errors (i.e. using a variable that doesn't exist, such as $_REQUEST['something'], which would have been empty before but now raises an error that affects the rest of the script).
To fix this you need to know what the problem is. The easiest way is to access the log files that crons often create. If you don't get that with your host, ask them for it, or ask them to send you a copy of the error that's being created -- a quick Google on the error will tell you what the problem is. But without knowing more about the script or the error log, you probably won't get a better answer.
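As a concrete illustration of the second point, older scripts often need a guard like this (hypothetical key name) to keep behaving under stricter settings:

$something = isset($_REQUEST['something']) ? $_REQUEST['something'] : '';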
The old command is working; it was just me. I copy-pasted from my old backup and forgot the php at the start of the command! Nothing has changed; the command should be like this, e.g.: php /home/username/public_html/cron/cron.php
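For anyone comparing, the complete crontab entry would then look like this (a sketch using the path from above):

*/10 * * * * php /home/username/public_html/cron/cron.php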

AWS SDK for PHP in scheduled task not working

I have a simple PHP script that I created to do some basic maintenance on my DynamoDB account, and I would like it to run daily. I have it set up on my host, which is GoDaddy Windows-based/Plesk. When I go directly to the page via a browser it runs perfectly and does exactly what I want. However, I set it up as a scheduled task and I get this error every time it runs:
Status: 500 Internal Server Error
Content-type: text/html
PHP Parse error: syntax error, unexpected '$value' (T_VARIABLE) in **path removed**\AWS\Aws\functions.php on line 36
I have googled and found tons of sites with this error, but they all say it was related to the PHP version being earlier than 5.5, since PHP 5.5 is the minimum required. However, I am running PHP 5.6; plus, like I said, it runs fine if I just go through a browser. It's the scheduled task that doesn't work...
I'm guessing there is something different about the way a task is pathed or run, but I can't figure out what it is. Does anyone have any insight?
Well, technically the error being thrown is the 'unexpected T_VARIABLE' error referenced at Reference - What does this error mean in PHP?, but that's not really the issue. That PHP file is inside the AWS SDK, so I don't think the answer is that it's coded wrong. Also, it runs on a normal page, just not in Task Scheduler, so really the question is: what am I doing wrong with Task Scheduler that is making it error out?
OK, I'm dumb; I knew it would be something simple. In case anyone has the same issue: the path I put for the PHP executable in Task Scheduler was the one for PHP 5.4, not 5.6. I had my hosting set to PHP 5.6 but was telling Task Scheduler to run PHP 5.4, hence the error. Updated the exe path and it works now.
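For the curious, this is the kind of construct that produces exactly that message on an old interpreter. This is an illustration, not the actual SDK source: generators (the yield keyword) were added in PHP 5.5, so a PHP 5.4 parser stops at the variable that follows the keyword it doesn't know:

function values(array $items)
{
    foreach ($items as $value) {
        yield $value;   // parses on PHP >= 5.5; on 5.4 this line fails with
                        // "syntax error, unexpected '$value' (T_VARIABLE)"
    }
}

foreach (values(array(1, 2, 3)) as $v) {
    echo $v, "\n";
}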

Laravel 5.3 scheduler Runs once on Windows then ends without any further processing

OS: Windows 8.1
Laravel-version: 5.3.15
Hello,
I'm having a bit of trouble getting Laravel's scheduler to properly run on my machine and STAY RUNNING in a way where I can see some sort of output to indicate that it's actually working. Every time I run php artisan schedule:run it calls any listed commands once and then just dies. It doesn't repeat or anything. Example attempt below:
The Code:
The Result When Trying to Run the Scheduler:
What I've tried:
I've tried multiple suggested fixes from the link below, some of which are referenced throughout Stack Overflow as well, including adding a batch file, setting PHP's path in my system variables, etc., as suggested in the link below by Kryptonit3:
https://laracasts.com/discuss/channels/general-discussion/running-schedulerun-on-windows
However, nothing works. Even in the Windows Task Scheduler tool it will just run once, end any further processing in the CLI, and then close the window. There is no actual indication that this scheduler is running in the background indefinitely (sort of like a background job, only without the queues).
Questions:
1 - With no indication (the echoed "I am Scheduled"), how do I know if this is actually working?
2 - Why does this command die after it successfully completes once?
3 - Does my Laravel version have anything to do with the scheduler not running permanently when I try using php artisan schedule:run in the CLI or Windows Task Scheduler?
4 - Mainly, what I'm trying to accomplish here is that every minute the system scans a single DB table's field. How do I accomplish this? (A sketch follows the note below.)
5 - Can someone please give me some clarification, or point me in the right direction, so I can get this thing really running indefinitely without it just ending after the first run?
Note: I do not want to use Laravel Forge or any other external service for something so simple. That would be overkill and unnecessary, I feel.
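To make question 4 concrete, here is a minimal sketch of app/Console/Kernel.php in Laravel 5.3 style; the table and column names are hypothetical. It also answers question 2: schedule:run fires any due tasks once and exits by design, so an external scheduler (cron, or Windows Task Scheduler here) has to invoke it every minute:

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        $schedule->call(function () {
            // Scan a single table's field every minute (hypothetical names).
            $pending = DB::table('orders')->where('status', 'pending')->count();
            Log::info('I am Scheduled - pending rows: ' . $pending);
        })->everyMinute();
    }
}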
After doing some rather serious digging, I found that everything Kryptonit3 said in the step-by-step answer I referenced was correct. However, I did not know I needed to restart my computer after creating a new task with Windows Task Scheduler; I thought this would happen automatically. It would've saved me two days of debugging had I known, but regardless: if someone else comes across this problem, they will know to reboot the computer.
Reference:
https://laracasts.com/discuss/channels/general-discussion/running-schedulerun-on-windows

My long-running Laravel 4 command keeps being killed

I have a Laravel 4 web project that implements a Laravel command.
When running in the development homestead vm, it runs to completion (about 40 seconds total time).
However when running it on the production server, it quits with a 'killed' output on the command line.
At first I thought it was max_execution_time in the CLI php.ini, so I set it to 0 (for unlimited time).
How can I find out what is killing my command?
I run it in an SSH terminal using the standard artisan invocation:
php artisan commandarea:commandname
Does laravel 4 have a command time limit somewhere?
The VPS is an Ubuntu 4.10 machine with MySQL, nginx and PHP-FPM.
So, firstly, thank you to everyone who pointed me in the right direction regarding PHP and Laravel memory usage tracking.
I have answered my own question hoping that it will benefit laravel devs in the future, as my solution was hard to find.
After typing 'dmesg' to show system messages, I found that the PHP script was being killed by Linux.
So, I added memory logging calls into my script before and after each of the key areas of my script:
Log::info('Memory now at: ' . memory_get_peak_usage());
Then I ran the script while watching the log output and also the output of the 'top' command.
I found that even though my methods were ending and the variables were going out of scope, the memory was not being freed.
Things that I tried that DIDN'T make any difference in my case:
unset($varname) on variables after I had finished with them, hoping to get GC to kick in
adding gc_enable() at the beginning of the script and then adding gc_collect_cycles() calls after a significant number of vars are unset
disabling MySQL transactions, thinking maybe they were memory intensive; they weren't
Now, the odd thing was that none of the above made any difference. My script was still using 150MB of RAM by the time it was killed!
The solution that actually worked:
Now this is definitely a laravel specific solution.
But my script's purpose is basically to parse a large XML feed and then insert thousands of rows into MySQL using the Eloquent ORM.
It turns out that Laravel creates logging information and objects to help you see the query performance.
By turning this off with the following 'magic' call, I got my script down from 150MB to around 20MB!
This is the 'magic' call:
DB::connection()->disableQueryLog();
I can tell you by the time I found this call, I was grasping at straws ;-(
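For context, the call sits at the top of the import routine; a minimal sketch in Laravel 4 style, where the feed items and the FeedRow model are hypothetical:

DB::connection()->disableQueryLog();   // stop Laravel buffering every executed query in memory

Log::info('Memory before import: ' . memory_get_peak_usage());

foreach ($feedItems as $item) {        // $feedItems: rows parsed from the XML feed
    FeedRow::create(array(             // FeedRow: hypothetical Eloquent model
        'title' => $item['title'],
        'body'  => $item['body'],
    ));
}

Log::info('Memory after import: ' . memory_get_peak_usage());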
A process may be killed for several reasons:
Out of Memory
There are two ways to trigger this error: exceed the amount of memory allocated to the PHP script in php.ini, or exceed the available system memory. Check the PHP error log and php.ini file to rule out the first possibility, and use dmesg output to check for the second.
Exceeded the execution time-out limit
In your post you indicate that you disabled the timeout via the max_execution_time setting, but I have included it here for completeness. Be sure that the setting in php.ini is correct and (for those using a web server instead of a CLI script) restart the web server to ensure that the new configuration is active.
An error in the stack
If your script is error-free and not encountering either of the above errors, ensure that your system is running as expected. When using a web server, restart the web server software. Check the error logs for unexpected output, and stop or upgrade related daemons as needed.
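Regarding the first possibility, a quick sketch of checking the PHP-side limit against actual usage from inside the script itself:

echo 'memory_limit: ' . ini_get('memory_limit') . "\n";
echo 'peak usage:   ' . memory_get_peak_usage(true) . " bytes\n";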
Had this issue on a Laravel/Spark project; just wanted to share in case others have this issue.
Try a refresh/restart of your dev server if running Vagrant or Ubuntu before more aggressive approaches.
I accidentally ran an install of dependency packages on a Vagrant server. I also removed and replaced a mirrored folder repeatedly during install errors. My error was on Laravel/Spark 4.~. I was able to run migrations on other projects, but I was getting 'killed' very quickly, within a 300ms timeframe, on one particular project for nearly all commands. Reading other users' reports, I was dreading trying to track down the issue or corruption. In my case, a quick vagrant reload did the trick; the 'killed' issue was resolved.
