Laravel, artisan "generating optimized class loader" after every command - php

I have an annoyance with Laravel at the moment (running 4.1.27). After almost every command I issue in the terminal involving php artisan, it finishes by writing "Generating optimized class loader", which takes 20-40 seconds, making me wait a lot every time.
Does anyone know why this is?

In my case the problem was in the PHP core. I updated it from 5.5.13 to 5.5.16 (the latest at the time) and all the problems are gone.
Here is a detailed description.
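For anyone hitting the same thing, a quick first step is to confirm which PHP build artisan actually runs under; the upgrade commands below are only illustrative and depend on your OS and packaging:
# Check the PHP version used on the command line:
php -v
# On a Debian/Ubuntu box of that era, the upgrade might look roughly like:
sudo apt-get update && sudo apt-get install php5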

Related

Why does Laravel migrate:fresh take a long time for 422 tables?

I have a Laravel project and I'm using this command to migrate the database every time I want to create a new project:
php artisan migrate:fresh
It was working fine, but after 3 years of working on it I've reached 422 tables, and now a migrate job takes at least 5 to 10 minutes to complete.
I can wait without problems on localhost, but my biggest problem is when I use the LaravelInstaller library to add an installer feature to the project.
It takes a long time and eventually returns the following error:
internal server error 500
So to make the installation survive such long durations, I have to raise max_execution_time in php.ini.
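The directive I end up raising looks like this (the value is just an example):
max_execution_time = 300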
Is there any way to speed up the migration?
Thanks
This is an answer to the speed-up issue, not the internal server error 500, because there are no details about what's causing that error.
I'm assuming you're using at least Laravel 8, and based on that you can try the schema dump feature, which creates a dump of the current database schema.
Here's the command:
php artisan schema:dump
# The command below is taken from the linked docs; don't hesitate to read
# them if you need more help.
# Dump the current database schema and prune all existing migrations...
php artisan schema:dump --prune
You have this issue simply because a huge list of migration files has built up over the years. In general, a migrate:fresh should not take that long, even for 422 tables, but repeating this process has made the list bigger and bigger.
Here's the link to the reference.
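As a follow-up on where the speed-up comes from: after pruning, a fresh migrate first loads the generated schema file and only then runs the migrations created after the dump. Roughly:
# Drops all tables, loads the dumped schema, then runs any newer migrations:
php artisan migrate:fresh
# The dump lives under database/schema/ (the exact file name, e.g.
# mysql-schema.sql, depends on your database driver and Laravel version).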

Can I remove artisan package:discover from a Laravel app?

I'm facing an intermittent issue with Laravel and I would like to know if you have already faced the same issue and how you solved it.
My app was developed using Docker, and when I deploy it for review on GitLab, a series of jobs runs to get the app running on AWS.
It's all fun and games until "artisan package:discover --ansi" gets stuck for 3 minutes and npm also decides to have its go, freezing everything for about 10 minutes.
In my case, since I'm using Laravel as an API and have no views, I know I can remove npm from the equation.
But what about artisan package:discover? Can I remove it?
Guess I should have tested it before posting here.
I just removed the line in the scripts section of composer.json, as indicated by #lagbox, and it worked flawlessly.
Thanks!
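For context, in a stock Laravel composer.json the line in question sits in the scripts section and looks roughly like this (the exact hooks vary by Laravel version):
"scripts": {
    "post-autoload-dump": [
        "Illuminate\\Foundation\\ComposerScripts::postAutoloadDump",
        "@php artisan package:discover --ansi"
    ]
}
Removing the package:discover entry means packages are no longer auto-discovered on every composer dump-autoload, so keep that in mind if you later add a package that relies on auto-registration.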

Is it possible to check if a cron job is running in a Symfony application?

I am currently working on a project that has over 20 crons, some of them pretty long processes. It was built on Symfony 2.8, so we decided to upgrade it to 3.4 LTS.
After the upgrade we noticed that if there is an ongoing cron job (a long process) and we push some changes to the prod environment, we get this error:
Fatal Compile Error: require(): Failed opening required '/.../cache/prod/
It turns out that when we deploy the changes, the XXXXXX part of the cached container (in var/cache/prod/ContainerXXXXXX) changes. In other words, we clear the cache during the deploy, and a new container is then generated in the cache directory. More about this problem: https://github.com/symfony/symfony/issues/25654
So I came up with the idea of adding a script with a while loop (?) that checks whether any crons are running and only starts the deploy once none are.
But the question is: is there a way to check this in the current situation?
There are many ways to achieve this. Use any kind of semaphore (a file, a database record) to store the "running" status, and have the same process clear it when it's done.
Any deployment job should then check the value of this semaphore before continuing. A simple flat file would be easiest, since you may not have access to more sophisticated features during deployment, but reading a text file should be easy no matter what kind of deployment process you are using. A minimal sketch of the idea follows.
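This is only a sketch, assuming a hypothetical lock path; the names are illustrative, not from the original project:
<?php
// At the start of each long-running cron command:
$lock = '/var/run/myapp/cron.lock'; // hypothetical path

if (file_exists($lock)) {
    exit(0); // another run is still in progress
}
file_put_contents($lock, getmypid());

try {
    // ... do the long-running work ...
} finally {
    unlink($lock); // clear the "running" flag, even if the work throws
}
The deploy script then only has to refuse to run while that file exists. If a cron process is killed hard, the stale file must be removed by hand; using flock() on the file instead avoids that problem, at the cost of a little more code.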

Laravel 5 - Class changes ignored

Bit of an odd one. Any changes I make to Job classes are ignored, regardless of whether I run composer dump-autoload, artisan clear-compiled, etc. Going by this answer, I've double-checked that OPcache is disabled.
The only way I can get Homestead to acknowledge the changes is to halt and up the box, which is obviously very time-consuming.
Does anyone know how to get around this?
For completeness: I'm running Homestead 0.2.6 on Windows 10, and this was never a problem until yesterday.
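For anyone wanting to rule OPcache in or out quickly, one way is to check from inside the VM (via vagrant ssh); note that the CLI and PHP-FPM configs can differ, so a phpinfo() page is the more reliable check for web requests:
php -i | grep -i opcache.enable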

proc_open Too many open files in a long-running Symfony Console app

I built a console app with the Symfony Console component. It is supposed to run for 5 hours, but after about 2 hours I get a proc_open(): unable to create pipe Too many open files error in Symfony\Component\Console\Application.php on line 985.
I tried gc_collect_cycles() in my loop, but got the same error.
Is this a Symfony Console component bug, or should I simply not run an app for this long (but I have to)?
I had this same error with a Symfony web app, and restarting PHP resolved it.
I appreciate that you can't do that in the middle of your command's 5-hour run, but maybe doing it immediately beforehand will get PHP into a clean enough state to give you the full 5 hours?
(Also, this post is the only one I found about my problem, so I wanted to add this here in case others have the same issue!)
This issue is related to:
https://bugs.php.net/bug.php?id=47396
Apparently you're working with a lot of resources in your app. It's not a Symfony Console bug; it's a PHP bug.
You can use another programming language, or modify your program so that it opens fewer files, as sketched below.
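If you stay with PHP, the usual way to open fewer files is to make sure each spawned process is fully cleaned up inside the loop, so pipes and process handles don't accumulate. A minimal sketch, with an illustrative command and descriptor layout:
<?php
// Descriptor spec for the child process.
$descriptors = [
    0 => ['pipe', 'r'], // stdin
    1 => ['pipe', 'w'], // stdout
    2 => ['pipe', 'w'], // stderr
];

$process = proc_open('php worker.php', $descriptors, $pipes); // command is illustrative

if (is_resource($process)) {
    fclose($pipes[0]);                        // nothing to write to stdin
    $output = stream_get_contents($pipes[1]); // drain stdout
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($process);                     // releases the process and its descriptors
}
Every pipe that is opened but never fclose()d, and every process that is never proc_close()d, keeps a file descriptor alive; in a loop that runs for hours this is what eventually exhausts the per-process limit.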
