Laravel + Beanstalkd: Trouble getting it to start - php

I set up beanstalkd with Laravel on my local environment a month back for testing purposes. I pulled the package in with Composer, and the note I left myself for turning the queue on was "php artisan queue:work --queue=beanstalkd --tries=3". It was working great!
However, I recently restarted my computer for the first time since getting it running, and I've confirmed the queue is no longer running (no surprise); I just need to get it started again. Running the command above in my terminal now just sits idle, which definitely wasn't happening before, and it definitely doesn't turn beanstalkd on.
My best guess is that I'm missing a step I don't remember doing the first time, but I can't find anything that works by googling. I've been tinkering for hours on what I know is a really simple fix.
Thanks in advance.

That command will run the workers - but unless the beanstalkd server is also running, there is nothing for them to connect to.
How to start the server depends on how you have it set up. One common way on a Linux-like system is /etc/init.d/beanstalkd start. It can also be set up to start automatically at boot, but again, that depends on which OS you are using, how you installed beanstalkd, and what systems you normally use.
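For example, depending on the distro, one of these should bring the server up (a sketch; service names and init setups vary, so treat the details as assumptions about your machine):

sudo /etc/init.d/beanstalkd start      # SysV-style init script
sudo service beanstalkd start          # Debian/Ubuntu service wrapper
beanstalkd -l 127.0.0.1 -p 11300       # or run the daemon directly in a terminal

Once the server is listening on 11300 (the default port), the queue:work command should connect instead of sitting idle.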

Related

How does Selenium work when used with Behat and Mink?

I have the task of running tests against a Drupal 8 website.
I have a linux box.
I have successfully configured Behat + Mink.
My tests run OK when I'm using the default goutte driver in behat.yml, but when I add the @javascript tag so a scenario runs through selenium2, it takes far too long (up to 25 minutes for a login test).
So I read the docs to see if I did something wrong but can't understand how it works.
I have installed Selenium for Python 3 and I can do a very simple get-and-assert of a webpage with Firefox in headless mode; it runs reasonably fast (under a minute), so I don't know what could be wrong in my PHP setup, which is installed via Composer.
The question is: do I need the Selenium Server that all the tutorials talk about? (Those tutorials are dated.) In the Selenium docs it says that Selenium Server is optional and that I only need it if doing non-remote. What would that be? Does non-remote mean it is not meant to be run on a dedicated server? I only need to run my tests on the machine hosting the app.
Also, why could it be taking so long to run a simple test? What logs can I look at?
You need a Selenium server plus a driver for the specific browser when you are running on your local PC.
You need a Selenium server running, so either start one yourself (local/non-remote) or point to a machine that already runs one (remote), for example when using services like BrowserStack or SauceLabs.
@javascript is there so that Behat knows to start a driver with JS enabled.
If it takes that long to run a login test then you are doing something wrong; maybe you have some fixed waits, or wait conditions that are never true and so run until they time out.
Run Behat with the -vvv flag to increase the verbosity of the logs.
Debug step by step and see where the issue is; try on your local PC first.
You should check Behat tutorials; different frameworks handle things differently: with some you need only the driver, with some both the driver and the Selenium server, and with some neither, because they ship scripts that download and start the server automatically.
Also check some best practices if you are new to automation.
Another related question is this one.
For starting Selenium, check this.
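For reference, a minimal behat.yml sketch for this kind of split setup (assuming Behat 3 with the Mink extension; the base_url, host and browser values are placeholders):

default:
  extensions:
    Behat\MinkExtension:
      base_url: http://localhost             # your app's URL - an assumption
      default_session: goutte
      javascript_session: selenium2          # used by scenarios tagged @javascript
      sessions:
        goutte:
          goutte: ~
        selenium2:
          selenium2:
            wd_host: http://127.0.0.1:4444/wd/hub   # the running Selenium server
            browser: firefox

With this layout, untagged scenarios stay on the fast goutte driver, and only @javascript scenarios pay the cost of driving a real browser through the Selenium server.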

after upgrade to new php cron jobs stopped working

Last year I purchased an encrypted script which runs two cron jobs. A month ago the crons stopped working. I talked to the hosting company and they said it's a script problem. The PHP cron file works fine without any errors when visited in a browser, but the script provider told me this issue should be fixed by the hosting service and refused to help!
Here is the command used; it runs every 10 minutes: /home/username/public_html/cron/cron.php
cPanel Version 64.0 (build 24)
Apache Version 2.4.25
PHP Version 5.6.30
My question: is it true that upgrading the PHP version can affect cron jobs, and how can I solve this?
Thanks.
In short, yes, upgrading PHP can affect your scripts -- the crons aren't run by Apache or PHP; they run at the OS level.
Your PHP upgrade is most likely affecting the crons in one of two ways:
The upgrade was large (like PHP 5.6 to PHP 7.0) and there's a deprecation warning somewhere (which will show up in the crons' log), or the script is running some code that's now fully removed - most likely a query, or a class/method named after a reserved word. Your logs will have more info; just make sure you have debugging turned on, otherwise your errors will be suppressed.
The new PHP settings from the upgrade have disabled some of the lenient behaviour allowed by the older version, such as getting away with empty or unassigned variables, and now your script is running into errors (e.g. reading a variable that doesn't exist, such as $_REQUEST['something'], which used to come back silently empty but now triggers an error that affects the rest of the script).
To fix this you need to know what the problem is. The easiest way is to check the log files that crons often create. If you don't get those with your host, ask them for access, or ask them to send you a copy of the error being generated - a quick Google of the error will tell you what the problem is. Without knowing more about the script or the error log, you probably won't get a better answer.
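To make sure those errors actually land somewhere readable, here is a sketch of what to enable at the top of the cron entry point (the log path is an assumption):

// Surface everything the upgrade may have broken, and log it to a file.
error_reporting(E_ALL);
ini_set('display_errors', '1');
ini_set('log_errors', '1');
ini_set('error_log', '/home/username/cron-errors.log'); // path assumed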
It turns out the old command works; the problem was me. I copy-pasted from my old backup and forgot the php at the start of the command! Nothing had changed; the command should look like this, for example: php /home/username/public_html/cron/cron.php
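For anyone landing here later, the full crontab entry would look something like this sketch (the binary path and log file are assumptions; cPanel fills in the schedule fields through its UI):

*/10 * * * * /usr/bin/php /home/username/public_html/cron/cron.php >> /home/username/cron.log 2>&1

Without the leading php, the shell tries to execute the .php file directly, which fails unless the file has a shebang line and execute permission.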

Laravel 5.3 scheduler Runs once on Windows then ends without any further processing

OS: Windows 8.1
Laravel-version: 5.3.15
Hello,
I'm having a bit of trouble getting Laravel's scheduler to run properly on my machine and STAY RUNNING in a way where I can see some sort of output indicating that it's actually working. Every time I run php artisan schedule:run it calls any listed commands once and then just dies. It doesn't repeat or anything. Example attempt below:
The Code:
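(The code from the original post isn't reproduced here. Judging by the echoed "I am Scheduled" mentioned in the questions below, the Kernel schedule was presumably something like this sketch - the exact closure body is an assumption:)

protected function schedule(Schedule $schedule)
{
    // Echo a marker every minute so the scheduler's activity is visible.
    $schedule->call(function () {
        echo 'I am Scheduled';
    })->everyMinute();
}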
The Result When Trying to Run the Scheduler:
What I've tried:
I've tried multiple suggested fixes from the link below, some of which are also referenced throughout Stack Overflow, including adding a batch file, setting PHP's path in my system variables, etc., as suggested by Kryptonit3:
https://laracasts.com/discuss/channels/general-discussion/running-schedulerun-on-windows
However, nothing works. Even with the "Windows Task Scheduler" tool it will just run once, end any further processing in the CLI, and then close the window. There is no actual indication that the scheduler is running in the background indefinitely (sort of like a background job, only without the queues).
Questions:
1 - With no indication (the echoed "I am Scheduled"), how do I know whether this is actually working?
2 - Why does this command die after it successfully completes once?
3 - Does my Laravel version have anything to do with the scheduler not running permanently when I try php artisan schedule:run in the CLI or the Windows Task Scheduler?
4 - Mainly, what I'm trying to accomplish is that every minute the system scans a single DB table's field. How do I accomplish this?
5 - Can someone please give me some clarification, or point me in the right direction, so that I can get this thing running indefinitely without it just ending after the first run?
Note: I do not want to use Laravel Forge or any other external service for something so simple. That would be overkill and unnecessary, I feel.
After doing some rather serious digging, I found that everything Kryptonit3 said in the step-by-step answer I referenced was correct. However, I did not know that I needed to restart my computer after creating a new task with the Windows Task Scheduler. I thought that would happen automatically. It would've saved me two days of debugging had I known, but regardless, if someone else comes across this problem they will now know to reboot the computer.
Reference:
https://laracasts.com/discuss/channels/general-discussion/running-schedulerun-on-windows
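One piece of context that explains the "runs once then dies" behaviour: schedule:run is designed to execute once - it fires whatever tasks are due and then exits - so something external (cron on Linux, Task Scheduler on Windows) has to invoke it every minute. A sketch of creating such a task from an elevated command prompt (the paths are assumptions):

schtasks /create /sc minute /mo 1 /tn "Laravel Scheduler" /tr "C:\php\php.exe C:\projects\myapp\artisan schedule:run"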

Something preventing php from seeing if a process exists by using file_exists on /proc/pid on some RHEL/CentOS installations

What I am trying to do
The web application that I am developing relies on background workers for certain tasks and I've built a self-diagnostic suite for it that includes checking on the workers' health.
One of the steps includes checking if the process of the worker is still running.
Current implementation
file_exists("/proc/$pid");
The issue
This seems to work fine for most customers and on my dev machines (both Ubuntu and CentOS), but I've had two reports (one on CentOS 6.7, the other on RHEL 6.6) of the diagnostic always returning a negative result.
I cannot reproduce the issue on my own systems, so I am wondering if there is any hardening that could cause this behaviour. Has anyone run into this before?
Workarounds that I have tried
Switching to ps -p to check on the process by PID (if it returns more than one line, the process is running) - this works fine as long as SELinux is not enabled, so sadly it isn't a solution for me.
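(For reference, a sketch of that workaround in PHP; the line-count test relies on ps printing a header row:)

// `ps -p` prints a header line plus one line per matching process,
// so more than one line of output means the PID is alive.
exec('ps -p ' . (int) $pid, $output);
$running = count($output) > 1;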
I hope that someone has come across this before; please let me know if you have any ideas. Thank you in advance!
It looks like it was simply the open_basedir directive blocking access to /proc.
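If relaxing the restriction is acceptable, appending :/proc to the open_basedir value in php.ini restores the original check. Otherwise, here is a sketch of an alternative that never touches the filesystem at all (assuming the posix extension is loaded):

// posix_kill() with signal 0 performs existence/permission checking
// without actually delivering a signal to the process.
$running = posix_kill($pid, 0);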

My long running laravel 4 command keeps being killed

I have a Laravel 4 web project that implements a Laravel command.
When run in the development Homestead VM, it runs to completion (about 40 seconds total time).
However, when running on the production server, it quits with a 'killed' output on the command line.
At first I thought it was max_execution_time in the CLI php.ini, so I set it to 0 (for unlimited time).
How can I find out what is killing my command?
I run it in an SSH terminal using the standard artisan invocation:
php artisan commandarea:commandname
Does laravel 4 have a command time limit somewhere?
The VPS is an Ubuntu 4.10 machine with MySQL, nginx and php-fpm.
So, firstly, thank you to everyone who pointed me in the right direction regarding PHP and Laravel memory usage tracking.
I have answered my own question in the hope that it will benefit Laravel devs in the future, as my solution was hard to find.
After typing 'dmesg' to show system messages, I found that the PHP script was being killed by Linux.
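(A sketch of filtering the kernel log for OOM-killer activity; the exact message wording varies between kernel versions:)

dmesg | grep -i -E 'out of memory|killed process'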
So, I added memory logging calls before and after each of the key areas of my script:
Log::info('Memory now at: ' . memory_get_peak_usage());
Then I ran the script while watching the log output and also the output of the 'top' command.
I found that even though my methods were ending and the variables were going out of scope, the memory was not being freed.
Things that I tried that DIDN'T make any difference in my case:
unset($varname) on variables after I had finished with them - hoping to get GC to kick in
adding gc_enable() at the beginning of the script and then calling gc_collect_cycles() after a significant number of vars were unset
disabling MySQL transactions - thinking maybe they were memory intensive - they weren't
Now, the odd thing was that none of the above made any difference. My script was still using 150MB of RAM by the time it was killed!
The solution that actually worked:
Now, this is definitely a Laravel-specific solution.
But my script's purpose is basically to parse a large XML feed and then insert thousands of rows into MySQL using the Eloquent ORM.
It turns out that Laravel creates logging information and objects to help you see query performance.
By turning this off with the following 'magic' call, I got my script down from 150MB to around 20MB!
This is the 'magic' call:
DB::connection()->disableQueryLog();
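(In a Laravel 4 command this typically goes at the top of fire(), before the heavy work starts. A sketch - the class name is made up:)

class ParseFeedCommand extends Command {

    public function fire()
    {
        // Stop Laravel from retaining every executed query in memory.
        DB::connection()->disableQueryLog();

        // ... parse the XML feed and insert rows here ...
    }
}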
I can tell you that by the time I found this call, I was grasping at straws ;-(
A process may be killed for several reasons:
Out of Memory
There are two ways to trigger this error: exceed the amount of memory allocated to the PHP script in php.ini, or exceed the available system memory. Check the PHP error log and php.ini file to rule out the first possibility, and use the dmesg output to check for the second.
Exceeded the execution time-out limit
In your post you indicate that you disabled the timeout via the max_execution_time setting, but I have included it here for completeness. Be sure that the setting in php.ini is correct and (for those using a web server instead of a CLI script) restart the web server to ensure that the new configuration is active.
An error in the stack
If your script is error-free and not encountering either of the above errors, ensure that your system is running as expected. When using a web server, restart the web server software. Check the error logs for unexpected output, and stop or upgrade related daemons as needed.
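(A quick way to confirm what the CLI is actually running with - the CLI often reads a different php.ini than the web server:)

php -i | grep -E 'memory_limit|max_execution_time|Loaded Configuration File'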
Had this issue on a Laravel/Spark project; just wanted to share in case others hit it.
Try a refresh/restart of your dev server (if running Vagrant or Ubuntu) before more aggressive approaches.
I had accidentally run an install of dependency packages on a Vagrant server, and I had also removed and replaced a mirrored folder repeatedly during install errors. My error was on Laravel/Spark 4.~. I was able to run migrations on other projects, but on one particular project I was getting 'killed' very quickly (within a 300ms timeframe) for nearly all commands. Reading other users' reports, I was dreading tracking down the issue or corruption. In my case, a quick vagrant reload did the trick, and the 'killed' issue was resolved.
