In my application (Laravel 5.1) I have different commands which are fairly simple:
Get ip from RabbitMQ
Attempt to establish a connection to that ip
Update DB entry
Since it can take a while to connect to the IP (up to 30 seconds), I am forced to launch multiple instances of that script. For that I've created console commands which are launched by systemd.
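For reference, each instance is started by a systemd unit along these lines (a minimal sketch; the paths and command name are illustrative, not my exact setup):

[Unit]
Description=Connection worker

[Service]
ExecStart=/usr/bin/php /var/www/app/artisan connect:worker
Restart=always

[Install]
WantedBy=multi-user.target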
Once I went to production I was quite surprised by the amount of memory these scripts were consuming. Each script (as reported by memory_get_usage) was using about 21 MB on startup. Considering that ideally I would need to run about 50-70 of those scripts at the same time, it's quite a big issue for me.
Just as a test I installed a clean Laravel 5.1 project and launched its default artisan inspire command; PHP reported 19 MB.
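For reference, the numbers were read inside each command's handle() method roughly like this (a minimal sketch; the real commands obviously do more):

public function handle()
{
    // memory_get_usage(true) reports memory allocated from the OS;
    // memory_get_usage(false) reports what PHP itself is currently using
    $this->info(sprintf('Allocated: %.1f MB', memory_get_usage(true) / 1048576));
    $this->info(sprintf('In use:    %.1f MB', memory_get_usage(false) / 1048576));
}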
Is this normal for Laravel, or am I missing something crucial here?
Related
The MySQL DB CPU runs at 95%+ basically at all times, even when there's seemingly no activity in the app. It doesn't happen right away, only once the app has been running for a while, but then it stays at 95% CPU even when there's seemingly no activity.
The number of active sessions / connections gradually climbs from dozens to even hundreds. Looking at the MySQL processes on RDS reveals a dozen processes trying to use 8% of the DB CPU each for some reason.
I've checked for Laravel jobs via php artisan queue:listen but nothing appears.
Checked the database and query logs; there are many entries which suggest a job or something occurring in a loop, but no indication as to the source of those jobs, since the queries being run are generic and could be called from many different places in the application.
We do not believe this is due to user activity, but if it is, it's some kind of user action which results in some kind of server loop.
Checked application and error logs and nothing in particular stands out.
I still don't know the root cause of why this is occurring, but I have discovered the following which is enough for me to solve the problem:
There is a scheduled custom command ($schedule->command(...)->everyTenMinutes()) registered in app/Console/Kernel.php, with the command class itself under app/Console/Commands/.
At some point, the job either fails to complete on time (so running commands gradually build up) and/or hits an error and gets stuck processing essentially the same records again and again in a loop. The command's signature:
protected $signature = 'minute:mycustomjob';
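It is registered in the scheduler roughly like so (reconstructed from the description above):

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    $schedule->command('minute:mycustomjob')->everyTenMinutes();
}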
Over a period of several hours, the multiple instances of the same command end up using 100% of the DB CPU due to the never-ending loop of queries. I verified the running processes by running the following in the CLI: ps -ef | grep 'artisan' which listed about a dozen instances of this very CPU- and DB-intensive process running on the server during peak load times.
Killing the artisan processes matching the command's "signature" name dropped the CPU usage back down to 0%, further proving the job was the culprit:
sudo kill -9 `ps -ef | awk '/[a]rtisan minute:mycustomjob/{print $2}'`
The potential solutions I have in mind are as follows (the overlap fix is sketched below):
Re-write the job so it does not error
Re-write the job to be more efficient so it completes within 10 minutes or less
Lower the frequency at which the job executes
Upgrade Laravel to a newer version which supports preventing task overlaps: https://laravel.com/docs/9.x/scheduling#preventing-task-overlaps
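For the overlap option, the scheduler's withoutOverlapping() method is the mechanism the linked docs describe; a sketch based on the signature above:

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // Skip this run entirely if the previous run is still going
    $schedule->command('minute:mycustomjob')
             ->everyTenMinutes()
             ->withoutOverlapping();
}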
I feel a little bit silly for asking this question, but I can't seem to find an answer on the internet for this problem. After searching for several hours I figured out that on a Linux server you use Supervisor to run "php artisan queue:listen" (either with or without the daemon option) continuously on your website to handle jobs pushed to the queue. This is all well and good, but what if I want to do this on a Windows Azure web app? After searching around, the solutions I found were:
Make a cron job to run "php artisan queue:listen" every minute (or every X minutes); I really dislike this solution and want to avoid it, especially if the site gets more traffic;
Add a WebJob that runs "php artisan queue:listen" continuously (the problem here is I don't know how to write the script for the WebJob...);
I want to ask you guys which of these is the correct solution, whether there is a better one, and, if the WebJob is the best option, how to write the script for it? Thanks in advance.
In short, Supervisor is a modern alternative to nohup (no hang up) with a few other bits and pieces tacked on. There are other tools that can keep a task running in the background (as a daemon); the solution I use for Windows-based projects (very few, to be honest) is Forever, which I discovered via: https://stackoverflow.com/a/18226392/5912664
C:\myprojectroot > forever -c php artisan queue:listen --queue=some_nice_queue --tries=3
How?
Install Node for Windows, then install Forever globally with npm:
C:\myprojectroot > npm install -g forever
If you're stuck getting Node running on Windows, I recommend the Windows package manager Chocolatey:
https://chocolatey.org/packages?q=node
Be sure to check for any logfiles that Forever creates, as I once left one long enough to consume 30 GB of disk space!
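To keep those logs in one known place (and append rather than scatter new files), something along these lines should work, assuming Forever's usual -l/-o/-e/-a log flags; the C:\logs paths are placeholders:

C:\myprojectroot > forever start -a -l C:\logs\forever.log -o C:\logs\out.log -e C:\logs\err.log -c php artisan queue:listen --queue=some_nice_queue --tries=3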
For Azure you can add a new WebJob to your web app and upload a .cmd file containing a command like this:
php %HOME%\site\wwwroot\artisan queue:work --daemon
and define it as a triggered WebJob with a 0 * * * * * CRON frequency. That worked for me. Best.
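If I remember correctly, the schedule can also ship with the WebJob itself, as a settings.job file uploaded next to the .cmd:

{ "schedule": "0 * * * * *" }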
First of all you cannot use a WebJob with Laravel on Azure. The Azure PHP Web App is hosted on Linux. WebJobs do not work with Linux at this moment.
The best way to do cron jobs in Laravel on Azure is to create an Azure Logic App. You use the Recurrence trigger and then an HTTP action to send a POST request to your Laravel Web App. You use this periodic heartbeat to run whatever actions you need to do. Be sure to add authentication to your POST request.
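On the Laravel side, the receiving end could look roughly like this (a sketch: the route, the X-Heartbeat-Token header and the config key are hypothetical names, not a standard API):

// routes/web.php
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

Route::post('/heartbeat', function (Request $request) {
    // Reject requests that don't carry the shared secret configured in the Logic App
    $token = (string) config('services.heartbeat_token');
    abort_unless($token !== '' && hash_equals($token, (string) $request->header('X-Heartbeat-Token')), 403);

    // Run the periodic work here; keep it well under the PHP time limit
    return response()->noContent();
});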
The next problem you will have is that the POST is synchronous, so the work you are doing cannot be extensive or your HTTP request will time out, or you will hit the time limit on PHP scripts (60 seconds).
The solution is not Laravel Jobs because here again you need something running in the background to process the queues.
The solution is also not PHP threads. The standard Azure PHP Web App does not support PHP Threads. You can of course build your own Web App and enable PHP threads, but this is really swimming upstream.
You simply have to live with synchronous logic. So the work you are doing with the heartbeat should take no more than about 60 seconds.
If you need more extensive processing then you really need to off load it to another place: another Web App, an Azure Function, etc.
But why not do that in the first place? The reason is cost and complexity. If you have something simple...like a daily report...you simply connect the report to the heartbeat and all the facilities for producing the report are right there in Laravel. To separate the daily report into its own container would require setup and the Web App it runs in would incur costs...not worth it in my view for something simple.
I now have a stable Beanstalkd and Laravel 4 Queue setup running on one machine. My question is, how can I install the Laravel 4 workers on a second machine and make them listen to my Beanstalkd? Maybe a very obvious question to some but I can't figure it out. I noticed there was a connection field in the php artisan queue:listen command. Do I have to use that?
how can I install the Laravel 4 workers on a second machine and make them listen to my Beanstalkd?
You'll need to have a working instance of your Laravel application on the same server as the listener/workers.
This means deploying your application both to the web server and to server that is listening for jobs.
Then, on the listening server, you can call php artisan queue:listen in order to listen for new jobs and create a worker to handle the job.
I noticed there was a connection field in the php artisan queue:listen command. Do I have to use that?
On top of that, as with most artisan commands, you will likely also need to define which environment the queue:listen command should run under:
$ php artisan queue:listen --env=production
In this way, the Laravel app that handles the workers (the app on the listening server) will know which configuration to use, including which database credentials. This also likely means that both the web server and your job/listening server need access to your database.
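Concretely, both deployments would point their beanstalkd connection at the same queue host, e.g. in Laravel 4's app/config/queue.php (the host and queue name here are placeholders):

'default' => 'beanstalkd',

'connections' => array(
    'beanstalkd' => array(
        'driver' => 'beanstalkd',
        'host'   => '10.0.0.5', // the machine running Beanstalkd
        'queue'  => 'default',
    ),
),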
Lastly, you could also create two separate Laravel applications - one for your web application and one purely to handle processing jobs. Then each could have its own configuration, and you'd have two (probably smaller?) code bases. But still, that's two code bases instead of one.
In that regard, do whatever works best for your situation.
I'm fully aware that PHP has a range of functions available to issue commands to the DOS back-end of the Windows operating system; alas, in my experience, these run in a completely separate session.
I've been researching the methodology of issuing commands to an already-running command prompt and printing out the results. My current setup is as follows:
Windows Server 2008 R2 (IIS, PHP 5.5, MSSQL & MySQL server)
an already running command prompt screen initialized by the following:
C:\Datalog\sys\dedi.exe -logfile=C:\inetpub\wwwroot\Syslog\
The problem now is that the functions I'm aware of, such as exec(), system() and passthru(), only run commands in a separate environment.
Why don't I start the executable with PHP?
This could be done with PHP and/or an AJAX solution, but the problem is that when navigating away from the page the executable will close, and when navigating to the page again, it might spawn duplicate running environments.
So, my overall question: is it possible to use PHP to issue commands to an already-running command prompt screen which is kept alive by the operating system?
The short answer is no, this is not possible. The web server will launch new processes separate from any other shell. You could write a command line app that runs continuously in a command prompt and takes IPC messages from the web app to get instructions, but this is probably too convoluted given your main concern:
the problem is that when navigating away from the page the executable will close, and when navigating to the page again, it might spawn duplicate running environments
These are concerns that can be resolved in other ways. Processes can be launched asynchronously so they run apart from the web application and continue after the connection is closed. To prevent "duplicate running environments", the launched process or the web app can use semaphores or other techniques to prevent duplicate runs.
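A sketch of that idea for this particular setup (the tasklist check and start /B detachment are standard Windows facilities; the paths are taken from the question):

// Skip the launch if dedi.exe is already running
exec('tasklist /FI "IMAGENAME eq dedi.exe" /NH', $out);
if (stripos(implode("\n", $out), 'dedi.exe') === false) {
    // start /B launches the process without waiting for it,
    // so it keeps running after this PHP request finishes
    pclose(popen('start /B C:\\Datalog\\sys\\dedi.exe -logfile=C:\\inetpub\\wwwroot\\Syslog\\', 'r'));
}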
I'm working on a small weekend project: basically an online IDE that allows you to run PHP, Ruby or Python code from the browser. I have everything set up and working, but the way I created the system, if a user runs a badly-written script or a script with heavy computation, the system might slow down for everyone until the timeout is reached (15 seconds).
My system does not pass the fibonacci test. How can I run each process in isolation, so that a user can submit:
while (true) { fibonacci() } // pseudo-code
Without crashing the server? I have considered the following courses of action:
Running each process inside a Docker (https://www.docker.io) container, but I'm not sure how Docker deals with slow containers
Running each process inside a VM
Running each process in an instantly-created EC2 instance (which is not really an option, since this is slow and expensive)
You should spawn another process, e.g. with Python's multiprocessing module, then run the user's code within that spawned process, keeping the submitted code "isolated". Still, keep in mind that you should always run this inside a virtual machine, because running it outside of one is unsafe on many levels.
Using this method you can also lower the process's priority; since you are on Linux, this should keep each process from slowing down your overall machine while the timeout runs. This assumes that you are indeed running a Linux system.
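Since the IDE front end is PHP, the same idea can be sketched by shelling out with a lowered priority and a hard kill, assuming the coreutils nice and timeout commands are available ($userCode stands in for the submitted source):

// Write the user's submission to a temporary file
$script = tempnam(sys_get_temp_dir(), 'user_') . '.php';
file_put_contents($script, $userCode);

// nice -n 19: lowest CPU priority; timeout 15: hard-kill after 15 seconds
exec(sprintf('nice -n 19 timeout 15 php %s 2>&1', escapeshellarg($script)), $output, $exitCode);

// coreutils timeout exits with status 124 when the limit was hit
if ($exitCode === 124) {
    $output[] = '*** Execution timed out after 15 seconds ***';
}
unlink($script);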
Try limiting the process to just one of your CPU cores.
You can use taskset to do that:
http://linux.die.net/man/1/taskset
You can also isolate one of your CPU cores using isolcpus (and your system processes won't use that core), and use taskset to run PHP/Ruby/Python code in that CPU core.
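For example, assuming core 3 was isolated at boot with the kernel parameter isolcpus=3, the interpreter can be pinned to it like so (user_script.php is a placeholder):

taskset -c 3 php user_script.php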
Learn more about isolcpus:
Whole one core dedicated to single process
https://askubuntu.com/questions/165075/how-to-get-isolcpus-kernel-parameter-working-with-precise-12-04-amd64