Is there a way to clear the cache automatically on Symfony? - php

I'm working on a "legacy" Symfony application (it uses Symfony 4, but it's no longer maintained). The problem is that the cache folder grows every day, reaching 50 GB after a few months.
It's running as a DEV environment, but since the original developer left the company we would like to just "patch" the problem by cleaning the cache after X time instead of switching to a production environment (which could lead to different problems and might not solve the cache issue anyway). Something like rotating Symfony logs, where you can configure Symfony to log to a different file every day and remove old files automatically.

There is no ready-made way to do this from within the application.
Just clear the cache every now and then (bin/console cache:clear). You could even schedule this as a cron job to run overnight; the process usually takes only a few seconds.
Make sure you run the command as the same user that runs the application (e.g. www-data); if you run it as root, the cache will be warmed up with the wrong permissions.
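For example, a cron entry along these lines would clear the cache nightly; the project path, the PHP binary location and the www-data user are assumptions to adapt to your setup:
# /etc/cron.d/symfony-cache-clear (illustrative): clear the dev cache every night at 03:00,
# running as www-data so the rewarmed cache ends up with the right ownership
0 3 * * * www-data /usr/bin/php /var/www/myapp/bin/console cache:clear --env=dev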
Mind you, running a production system in development mode is inherently dangerous, as it is more likely to leak configuration data in unexpected situations.

Related

Is it possible to check if cronjob is running in Symfony application?

I am currently working on a project that has over 20 crons. Some of them are pretty long processes. It was built on Symfony 2.8, so we decided to upgrade it to 3.4 LTS.
After the upgrade we noticed that if there is an ongoing cron job (a long process) and we push some changes to the prod environment, we get this error:
Fatal Compile Error: require(): Failed opening required '/.../cache/prod/
It turns out that when we deploy the changes, the cached container directory (var/cache/prod/ContainerXXXXXX) gets a new XXXXXX value. In other words, we clear the cache during the deploy and Symfony then generates a new container in the cache directory. More about this problem: https://github.com/symfony/symfony/issues/25654 .
So I came up with the idea of adding a script with a while loop that checks whether any crons are still running and, if none are, runs the deploy.
But the question is: is there a way to check this in the current situation?
There are many ways to achieve this. Use any kind of semaphore (a file, a database record) to store the "running" status, and have the same process clear it when it's done.
Any deployment job should check the value of this semaphore before continuing. A simple flat file would be easiest, since you may not have access to more sophisticated features during deployment, but reading a text file should be easy no matter what kind of deployment process you are using.
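A minimal sketch of the flat-file approach, assuming a lock file such as var/cron.lock (the path is illustrative, not something from the original project):
<?php
// cron wrapper: mark the job as running while it works, clear the flag when done
$lock = __DIR__ . '/var/cron.lock';              // assumed location of the semaphore file
file_put_contents($lock, (string) getmypid());   // record "running" (and which PID)
try {
    // ... the actual long-running cron work goes here ...
} finally {
    @unlink($lock);                              // always remove the flag, even on failure
}
The deploy job then only has to wait until that file is gone before continuing, e.g. while [ -f var/cron.lock ]; do sleep 10; done in the deploy script.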

phpunit & paratest & Laravel - Random failures in creating the test storage directory

I am using paratest, a parallel-testing add-on for PHPUnit, with a Laravel application to speed up the execution of our test suite.
This works most of the time, but occasionally I get the following failure:
League\Flysystem\Exception: Impossible to create the root directory "/codebuild/output/src0123456/src/github.com/org/repo/storage/framework/testing/disks/local". file_get_contents(/codebuild/output/src0123456/src/github.com/org/repo/.env): failed to open stream: No such file or directory
/codebuild/output/src0123456/src/github.com/org/repo/vendor/league/flysystem/src/Adapter/Local.php:112
/codebuild/output/src0123456/src/github.com/org/repo/vendor/league/flysystem/src/Adapter/Local.php:78
/codebuild/output/src0123456/src/github.com/org/repo/vendor/laravel/framework/src/Illuminate/Filesystem/FilesystemManager.php:167
/codebuild/output/src0123456/src/github.com/org/repo/vendor/laravel/framework/src/Illuminate/Support/Facades/Facade.php:261
/codebuild/output/src0123456/src/github.com/org/repo/vendor/laravel/framework/src/Illuminate/Support/Facades/Storage.php:70
/codebuild/output/src0123456/src/github.com/org/repo/tests/TestCase.php:42
The failure at tests/TestCase.php line 42 relates to this line, which creates the local storage folder for testing:
Storage::persistentFake();
I think the second half of the error that mentions the .env file is unrelated, as the exception picks up the last logged error rather than the error that caused the failure.
This only happens every now and again, so it must be a sequencing or timing issue.
The tests are running and failing inside an AWS CodeBuild environment against PHP 7.3 and 7.4.
Anyone have any ideas?
In case anyone else comes across this: it was resolved by creating the test storage directory before executing the tests.
mkdir -p storage/framework/testing/disks/local
vendor/bin/paratest
It's a little brittle but so far has worked perfectly for us.
In my experience this is usually not an issue with the file system. Most of the time it was a test not cleaning up correctly.
Depending on the file system and Paratest, your tests are executed in a different order, and then these errors happen.
There are a few things you can do to track this down:
Enable --debug mode when executing the tests in your build environment and check all the tests that were executed before the failure.
As of PHPUnit 7.3, use --order-by=random locally to execute your tests in a different order than the one they appear in when read from the file system. Execute it a few times; maybe you can then reproduce the issue locally.
I see that you're using Flysystem: try to execute the tests with the Memory filesystem adapter to make sure it is really not a file system problem.
Make sure that every test creates the filesystem layout beforehand (TestCase::setUp()) and cleans it up afterwards (TestCase::tearDown()), otherwise one test can influence another; there is a sketch after this list.
Make sure your tests don't depend on values that may change. For example, I had problems with tests involving dates: tests executed on Jenkins at 23:59 failed because the date switched to the next day while they ran. In such cases, pass the date to work with into the test.
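As a rough sketch of the setUp()/tearDown() point above for a Laravel base test case (the class layout is the default Laravel skeleton; creating and deleting the shared storage path this way is an assumption you may need to adapt when running parallel workers):
<?php
namespace Tests;

use Illuminate\Foundation\Testing\TestCase as BaseTestCase;
use Illuminate\Support\Facades\File;
use Illuminate\Support\Facades\Storage;

abstract class TestCase extends BaseTestCase
{
    use CreatesApplication;

    protected function setUp(): void
    {
        parent::setUp();
        $dir = storage_path('framework/testing/disks/local'); // directory from the error above
        if (!is_dir($dir)) {
            mkdir($dir, 0777, true);                           // create it up front, recursively
        }
        Storage::persistentFake();
    }

    protected function tearDown(): void
    {
        // remove whatever this test wrote so the next test starts from a clean disk
        File::deleteDirectory(storage_path('framework/testing/disks/local'));
        parent::tearDown();
    }
}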

Deploy Simple Symfony application to Azure : Slow?

Last week, I tried to deploy a simple Symfony app on Azure.
I chose the B2 App Service plan (2 cores / 3.5 GB RAM).
PHP version: 5.6.
First, it took forever to complete the composer install. (I tried moving to the S3 plan; it was a little faster, but not very different.)
So I tried to optimize the PHP config: opcache, realpath_cache_size, etc. (Xdebug was already disabled).
I even tried to enable WinCache, but with no real improvement.
So now my app is deployed, but it is too slow to be usable.
A simple php app/console (in dev mode) takes ~23 seconds.
It seems to recreate the cache every time. On my local Unix environment (similar specs), it takes 6 seconds when the cache is cold and 500 ms when the dev cache is warm.
I think the main problem is a filesystem issue, because removing the dev cache folder takes 16 seconds.
On my local Unix environment, with similar specs, it takes ~200 ms to remove the same folder.
Like I said, I tried the S3 plan with a small improvement, but not enough to explain this slowness.
One weird thing is that if I rerun php app/console just after it finishes, the command takes 5 seconds (much better). But if I rerun it 5 seconds after it finished, it takes 23 seconds.
I already tried these solutions (even though the environment is different):
https://stackoverflow.com/a/17021255/6309878
Update: I tried to point the Symfony app/cache folder to the local filesystem D:\local\cache, but there was no improvement; it may even be worse.
Please try the steps below and let me know if they improve the performance:
1) In the wwwroot directory of your site, create a .user.ini file (if it doesn’t already exist) and add “wincache.fcenabled=0”. This will disable Wincache.
2) Go to Azure Portal and go to the Application Settings for your app. In the App Settings section, add “WEBSITES_DYNAMIC_CACHE” with a value of 1.
3) Restart the site.

Revalidate opcache only after git push

I'm using PHP with OPcache. I deploy my website to production simply by git-pushing to master (well, not quite: it happens after unit tests, but never mind). In the php.ini file, the OPcache settings are all about "time" and "frequency", but I just want to reset the cache after a git pull on my server.
So I think I just need to call opcache_reset() after the git pull on my production server and set opcache.validate_timestamps to 0 (never revalidate the cache).
I haven't read anything describing this approach, so I have doubts: I don't know if it's good practice. Did I miss something? Is there any risk, or is it OK?
Thanks a lot!
P.S.: I'm using a PHP framework and Composer (composer install runs just after the git pull).
In order to get the greatest benefit from OPCache you should disable opcache.validate_timestamps. If you subsequently call opcache_reset() from a script every time you deploy your code to the server, then your OPCache is cleared once for each new set of files, and the system doesn't waste resources constantly checking the files.
There are a couple of "gotchas", however:
First of all, make sure that the call to opcache_reset() actually happens, or else you'll be running the old code. If you have a script to execute your deploy, make sure it fails loudly if this step doesn't execute.
Secondly, depending on exactly how PHP is running (mod_php vs php-fpm), you may need to execute the opcache_reset() function via a web request rather than via the command line. For example, the most obvious solution to clear the cache is to have a simple PHP file like the following:
<?php
// refuse to run anywhere but the command line
if (PHP_SAPI !== "cli") die("Not accessible from web");
opcache_reset();
and execute that file on each code pull. Depending on the version of PHP and how it's run that may only clear the cache for the command line and not for your running web version.
If clearing from the command line doesn't work, consider creating a similar script and calling it via the web using curl or wget. For example, curl http://example.com/clear_cache.php?secret=abc123. If you create the script to be web accessible, then make sure it checks a secret key to prevent someone from loading up your server by constantly clearing your cache.
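A minimal sketch of such a web-accessible script, using the file name and secret from the example URL above (both are placeholders; keep the real secret out of version control):
<?php
// clear_cache.php - call via: curl "http://example.com/clear_cache.php?secret=abc123"
if (!isset($_GET['secret']) || $_GET['secret'] !== 'abc123') {
    header('HTTP/1.0 403 Forbidden');
    exit('Forbidden');
}
if (function_exists('opcache_reset') && opcache_reset()) {
    echo 'OPcache cleared';
} else {
    header('HTTP/1.0 500 Internal Server Error');
    echo 'OPcache is not available or the reset failed';
}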
Finally, as others have suggested, to make your builds totally repeatable between testing and deployment, consider having the end of the test process create a .zip file of the entire code used for testing, including the libraries pulled down by composer. Rather than git pull on your server, just unzip the file over the code root. I realize that git pull && composer update is easy. However, as others have suggested, if a library gets updated between the time tests were run and the time of deployment, then your code may no longer work as expected.
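A rough sketch of that zip-based flow (paths and file names are assumptions):
# at the end of the test job: package exactly the code that was tested, vendor/ included
zip -rq build.zip . -x ".git/*"
# on the server: unpack the tested artifact over the code root instead of running git pull
unzip -oq build.zip -d /var/www/myapp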

svn export makes my page blank

I'm using LAMP with CodeIgniter for one of my projects; version controlled by SVN. Every time I execute svn export file:///svnrepo/project/trunk/www . --force when in the www directory and then reload the web page, it goes blank.
The website only shows up after I do a service httpd restart (using CentOS 5).
I want to be able to execute the svn export from a Phing build script in the future, and I don't want to have to get root privileges and restart Apache every time I do a build.
Is what I'm experiencing a common problem? How do I solve it without restarting Apache?
Edit:
It seems someone has had this problem before: http://codeigniter.com/forums/viewthread/181642/
OK, I got it. SVN maintains a file's last-modified time, which throws off the APC cache. So to solve it, we update the last-modified time of all the files after running the SVN export. Here is my final script:
#!/bin/sh
# export the latest code, then touch everything so APC sees fresh modification times
svn export --force file:///home/steve/repo/example/trunk \
/home/steve/public_html/example.com/public/
find /home/steve/public_html/example.com/public | xargs touch
You can find more details here.
An alternative solution would be to set apc.stat=0 (reference) in the apc.ini, and then use apc_clear_cache() (reference) to force the removal of the opcode cache.
What's awesome about this solution is that when apc.stat is set to 0, it disables the check on each request to determine if the file has been modified. This results in a huge performance boost.
Additionally, using apc_clear_cache() to clear the APC cache tends to result in a cleaner build. I've run into wonky race conditions where certain files get built out that have dependencies on others that have not yet been built out, resulting in a spate of FATAL errors. The only caveat here is that apc_clear_cache() needs to be run via Apache, so you'll need to implement a wget or something similar for this.
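A minimal sketch of the kind of web-called script that implies (the file name and the shared-secret check are assumptions, much like the deploy-time cache-clear scripts above):
<?php
// apc_clear.php - fetch via: wget -qO- "http://example.com/apc_clear.php?secret=abc123"
// The secret check keeps random visitors from flushing the cache repeatedly.
if (!isset($_GET['secret']) || $_GET['secret'] !== 'abc123') {
    header('HTTP/1.0 403 Forbidden');
    exit('Forbidden');
}
if (function_exists('apc_clear_cache')) {
    apc_clear_cache();          // clear the opcode cache
    apc_clear_cache('user');    // clear the user/data cache as well
    echo 'APC cache cleared';
} else {
    echo 'APC is not loaded';
}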
