Why does laravel migrate:fresh take so long for 422 tables? - php

I have a Laravel project, and I'm using this command to migrate the database every time I create a new instance of the project:
php artisan migrate:fresh
It was working fine, but after 3 years of working on it I've reached 422 tables, and now a migration takes at least 5 to 10 minutes to complete.
I can wait without problems on localhost, but my biggest problem is when I use the LaravelInstaller library to add an installer feature to the project.
It takes a long time and eventually returns the following error:
internal server error 500
So, in order to make the installation work with such long durations, I have to raise the max execution time limit in php.ini.
Is there any way to speed up the migration?
Thanks

This answer addresses the speed-up issue, not the internal server error 500, because no details were given about what's causing that error.
I'm assuming you're using at least Laravel 8; based on that, you can try Schema Dump, a feature added to create a dump of the current schema.
Here's the command:
php artisan schema:dump
# The command below is from the docs; in case you need more help, don't
# hesitate to read them.
# Dump the current database schema and prune all existing migrations...
php artisan schema:dump --prune
You have this issue simply because you've accumulated a huge list of migration files over the years. In general, a migrate:fresh should not take that long, even for 422 tables, but repeating this process has kept making the list bigger, and every file adds overhead.
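For completeness, here's roughly what the workflow looks like after pruning (a sketch; the dump file lands in database/schema/ and its exact filename depends on your database driver and Laravel version):

```shell
# Collapse all existing migration files into a single SQL schema file
# and delete the individual migration files:
php artisan schema:dump --prune

# From now on, a fresh migrate loads that one dump first and then runs
# only the migrations created after the dump, instead of replaying
# hundreds of individual files:
php artisan migrate:fresh
```

Keep the dump file in version control so fresh checkouts of the project can migrate from it too.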

Related

Is it possible to check if cronjob is running in Symfony application?

I am currently working on a project that has over 20 crons, some of them pretty long processes. It was built on Symfony 2.8, so we decided to upgrade it to 3.4 LTS.
After the upgrade we noticed that if there is an ongoing cron job (a long process) and we push some changes to the Prod environment, we get this error:
Fatal Compile Error: require(): Failed opening required '/.../cache/prod/
It turns out that when we deploy the changes, the cached container (in var/cache/prod/ContainerXXXXXX) changes its XXXXXX value. In other words, we clear the cache (during deploy) and a new container is then generated in the cache directory. More about this problem: https://github.com/symfony/symfony/issues/25654
So I came up with the idea of adding a script with a while loop (?) that checks whether any crons are running and, if not, runs the deploy.
But the question is: is there a way to check this in the current situation?
There are many ways to achieve this. Use any kind of semaphore (a file, a database record) to store the "running" status, and have the same process clear it when it's done.
Any deployment job should check the value of this semaphore before continuing. A simple flat file would be easiest, since you may not have access to more sophisticated features during deployment, but reading a text file should be easy no matter what kind of deployment process you are using.
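As a concrete sketch of the flat-file approach on a Linux host, using flock(1) so the lock is released automatically even if the cron job crashes (the lock path and job body are just examples):

```shell
#!/bin/sh
LOCK=/tmp/myapp-cron.lock   # example path; cron and deploy must both see it

# Cron wrapper: hold the lock for the duration of the job. The kernel
# releases it when fd 9 closes, even if the job dies mid-run.
(
  flock 9
  echo "cron job running"
  # ... long-running cron work here ...
) 9>"$LOCK"

# Deploy script: -n makes flock fail immediately instead of blocking,
# so the deploy can bail out while a cron job still holds the lock.
if flock -n "$LOCK" -c true; then
  echo "no cron running, safe to deploy"
else
  echo "cron job still running, aborting deploy" >&2
  exit 1
fi
```

Drop `-n` in the deploy branch if you'd rather have the deploy wait for the running cron to finish instead of aborting.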

Laravel 4.2 log files not being created

After resolving an error that was causing my log files to explode (2+ GB log files all full of the same exception log), I deleted the affected log files to avoid having it eat up unnecessary space. This was an error that was a couple of days old, and it affected multiple log files.
Since deleting the log files, however, it looks as if Laravel has stopped logging anything. The daily log for the day I made the deletion seems to have stopped at the moment I deleted the older daily files, and no new daily files have been created since.
I've rerun composer install, artisan dump-autoload, artisan clear-compiled, and artisan cache:clear, to no avail. The permission settings all look fine as near as I can tell, and are assigned to the correct user. There were no configuration changes made at all, literally the only difference was in deleting the old daily log files.
Can anybody point me in the right direction with this? I can provide more information as necessary if I'm missing anything relevant.
It turned out this wasn't a logging error at all; the problem was that my cron scheduler had been disabled and key processes weren't running. A different problem to solve, but I'm glad to know I didn't break logging.

How to keep your old migration as reference?

I have 38 migration scripts that I wrote in Laravel 4.
I don't want to throw them away, but I don't want to run them either. I just want to keep them as references.
If I place them in the migrations folder in Laravel, they will run when I do
php artisan migrate and that will break parts of my database, as they have already been run.
I'm wondering if there is a way to mark them as already run, so Laravel will not try to run them again.
I noticed the migrations table in my database. Can I do something with it?
What is the best thing to do with those 38 migrations? Feel free to give me any suggestions.
Your question is a little confusing: Laravel will only run each migration once. It keeps track of which migrations have run in the migrations table. The whole idea of migrations is a series of date-sortable scripts that you can run at any time, start to finish, to rebuild your database, and that you can add to without needing to rerun them all (so your data is preserved).
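To make that concrete, here's a sketch of what the migrations table holds, using sqlite3 on a throwaway database (Laravel's real table has the same relevant `migration` and `batch` columns; the migration name below is made up). Inserting a row by hand with a migration's filename (minus `.php`) marks it as already run, so `php artisan migrate` will skip it:

```shell
DB=/tmp/migrations-demo.db   # throwaway database, not your real one
rm -f "$DB"

# Minimal stand-in for Laravel's migrations table.
sqlite3 "$DB" "CREATE TABLE migrations (id INTEGER PRIMARY KEY AUTOINCREMENT, migration TEXT NOT NULL, batch INTEGER NOT NULL);"

# One row per migration that Laravel should consider already run.
sqlite3 "$DB" "INSERT INTO migrations (migration, batch) VALUES ('2014_05_01_000000_create_example_table', 1);"

# This is the list 'php artisan migrate' consults before running anything.
sqlite3 "$DB" "SELECT migration FROM migrations;"
# → 2014_05_01_000000_create_example_table
```

So for your 38 old scripts, the rows are presumably already there from when they first ran, and migrate will leave them alone; the table is only a problem if those rows were lost.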
If you're running
php artisan migrate
and Laravel is running a migration it has already run, something is very broken with your system.
(Speculation follows.) It seems more likely that your latest migration halted halfway through at a point where MySQL couldn't roll back the changes, and Laravel is trying to rerun it. Try a
php artisan migrate:rollback
Fix the error in the breaking migration, and you'll be all set.

proc_open Too many open files in long running Symfony Console app

I built a console app with the Symfony Console component. It's supposed to run for 5 hours, but after running for 2 hours I get a proc_open(): unable to create pipe Too many open files error in Symfony\Component\Console\Application.php on line 985.
I tried gc_collect_cycles() in my loop, but got the same error.
Is this a Symfony Console component bug, or should I just not run an app for this long (but I have to)?
I had this same error with a Symfony web-app, and restarting PHP resolved it.
I appreciate that you can't do that in the midst of your command's 5-hour run, but maybe doing it immediately beforehand will get PHP in a clean enough state to give you the full 5 hours?
(Also, this post is the only one I found about my problem, so I wanted to add this here in case others have the same issue as me!)
This issue is related to:
https://bugs.php.net/bug.php?id=47396
Apparently you're working with a lot of resources in your app. It's not a Symfony Console bug; it's a PHP bug.
You can use another programming language, or modify your program to open fewer files.
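If you want to see how close the process gets to the limit before it dies, here's a rough way to watch it on Linux (assuming /proc is mounted):

```shell
# The per-process open-file limit that proc_open() is running into:
ulimit -n

# Count the descriptors a process currently has open; $$ is this shell,
# so substitute the PID of your long-running console command:
ls /proc/$$/fd | wc -l
```

If that count climbs steadily during the run, the app is leaking descriptors (unclosed pipes from proc_open(), files, sockets) and closing them is the real fix; if it plateaus just under the limit, raising it with `ulimit -n 4096` before starting the command is a reasonable stopgap.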

Laravel, artisan "generating optimized class loader" after every command

I have an annoyance with Laravel at the moment (running 4.1.27). After almost every php artisan command I issue in the terminal, it ends with writing "generating optimized class loader", which takes 20-40 seconds, making me wait a lot every time.
Does anyone know why this is?
In my case the problem was in the PHP core. I updated it from 5.5.13 to 5.5.16 (the current latest) and all problems are gone.
