I was trying to get a Laravel image up and running on Docker for Windows (Windows 11), and after configuration it was taking several seconds to load each page (insanely slow, well outside typical Laravel response times). I am using the WSL2 backend and have generous resources allocated to the VM. Why is it running so slow?
I inspected the resource allocation and it shouldn't be a problem (50% of memory and all cores). I have a fairly beefy machine. I tried a reinstall and a fresh Docker image, and closed all competing tasks.
TL;DR: don't keep your project files on the Windows drive mount (/mnt/c) inside WSL. Either use Docker without the WSL backend or deploy the files (e.g. over SSH) to the WSL-native filesystem.
The issue turned out to be that my project files lived under /mnt/c in WSL, and, for some reason, file access and modification through that mount is VERY inefficient. I moved my files to a local folder inside the VM (for me, my home folder) and page loads dropped to sub-second. Even the move itself (mv from one to the other) took a few minutes and my fans were going crazy.
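For anyone hitting the same thing, a minimal sketch of the fix (paths are examples, adjust to your project):

    # from inside the WSL2 distro: move the project off the Windows mount
    mv /mnt/c/Users/me/my-laravel-app ~/my-laravel-app
    cd ~/my-laravel-app
    # bring the stack back up so the bind mount points at the new path
    docker compose up -d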
I am following this tutorial (https://tighten.co/blog/converting-a-legacy-app-to-laravel/) to migrate a legacy app to Laravel and have made it as far as the "Spring Cleaning" section. My legacy code is in a legacy directory inside my Laravel build.
Our development environment uses Docker Compose to build a container on the host machine (in my case a Mac, but it can be Windows or Linux depending on the developer). The source code is in a volume mounted to the container, so a developer's updates are visible in the browser as soon as they reload the page.
When I try to get Laravel to load its default route (the basic welcome route Laravel ships with), I get page load times in excess of 6 minutes.
I have tried using cached volumes like so: ./:/var/www/portal/:cached
I have also tried following this tutorial (https://www.jeffgeerling.com/blog/2020/revisiting-docker-macs-performance-nfs-volumes) for using an NFS mount. I got it working, but the page load time was still over 6 minutes for the default page.
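One mitigation I've seen suggested (not something from either tutorial) is to overlay heavy directories like vendor with a container-local volume so they never cross the host/VM boundary; a sketch, with the service name as a placeholder:

    # docker-compose.yml (excerpt)
    services:
      app:
        build: .
        volumes:
          - ./:/var/www/portal/:cached    # host code, relaxed consistency
          - /var/www/portal/vendor        # anonymous volume keeps vendor inside the VM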
What is causing this extremely slow page load time? How are Dockerized Laravel development environments supposed to be structured to avoid this issue with Docker's VM on Mac and Windows?
I didn't have this problem when developing just in the legacy app, and I'm not inclined to think that Laravel itself causes such a large slowdown. My suspicion is the Docker VM on Mac, but I haven't been able to prove that yet.
Docker on OS X is just slow at bind-mounted file I/O. Newer Docker versions have improved this.
Are you actually deploying this Dockerized? If not, consider just using artisan serve.
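For example, assuming PHP and the project's dependencies are installed on the host:

    # serve the app directly from the host filesystem, bypassing Docker's file sharing
    php artisan serve --host=127.0.0.1 --port=8000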
Last week, I tried to deploy a simple Symfony app on Azure.
I chose the App Service B2 plan (2 cores / 3.5 GB RAM).
PHP version: 5.6.
First, it took forever to complete the composer install. (I tried moving up to the S3 plan; it was a little faster, but not very different.)
So I tried to optimize the PHP config: opcache, realpath_cache_size, etc. (Xdebug was already disabled).
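Roughly the kind of settings I was adjusting (values are illustrative, not a recommendation):

    ; php.ini / .user.ini tuning attempts
    opcache.enable=1
    opcache.memory_consumption=128
    realpath_cache_size=4096k
    realpath_cache_ttl=600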
I even tried to enable WinCache, but with no real improvement.
So now my app is deployed, but it is too slow to be usable.
A simple php app/console (in dev mode) takes ~23 seconds.
It seems to recreate the cache every time. On my local Unix environment (similar specs), it takes 6 seconds when the cache is cold and 500 ms when the dev cache is warm.
I think the main problem is a filesystem issue, because just removing the dev cache folder takes 16 seconds.
On my local Unix environment, with similar specs, removing the same folder takes ~200 ms.
Like I said, I tried the S3 plan with a small improvement, but not enough to explain this slowness.
One weird thing: if I rerun php app/console immediately after it finishes, the command takes 5 seconds (much better). But if I rerun it 5 seconds after it finished, it takes 23 seconds again.
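For what it's worth, the timings above come from simple runs like this (output redirected just to keep things readable):

    time php app/console > /dev/null   # cold: ~23s
    time php app/console > /dev/null   # immediate rerun: ~5s
    sleep 5
    time php app/console > /dev/null   # after a short pause: ~23s again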
I have already tried these solutions (even though the environment is different):
https://stackoverflow.com/a/17021255/6309878
Update: I tried pointing the Symfony app/cache folder at the local filesystem (D:\local\cache), but there was no improvement; it may even be worse.
Please try the steps below and let me know if they improve performance (see the sketch after this list):
1) In the wwwroot directory of your site, create a .user.ini file (if it doesn't already exist) and add "wincache.fcenabled=0". This disables WinCache's file cache.
2) In the Azure Portal, go to your app's Application Settings. In the App Settings section, add "WEBSITES_DYNAMIC_CACHE" with a value of 1.
3) Restart the site.
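For step 1, the .user.ini contains just the one directive; steps 2 and 3 can also be done from the Azure CLI (app and resource group names below are placeholders):

    # wwwroot/.user.ini
    wincache.fcenabled=0

    # step 2 via the Azure CLI instead of the portal
    az webapp config appsettings set --name my-app --resource-group my-rg --settings WEBSITES_DYNAMIC_CACHE=1

    # step 3: restart the site
    az webapp restart --name my-app --resource-group my-rg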
I currently have a problem with how to set up my virtual environment for my LAMP stack. The site I'm developing is a PHP web application that uses MySQL for the database. Right now, I have a single VM running CentOS inside VirtualBox. I run the web/MySQL server in it and have my code folder set up with Git.
The host OS (Mac/Windows) is currently set up with a Samba share to access the code inside the Git folder on the VM. From there, I use SourceTree and PHPStorm to edit files and make commits. Permissions on the files/folders are set with a force mask (which doesn't seem to be possible with NFS).
(Samba server on CentOS (guest OS), Samba client on Windows/Mac (host OS).)
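A sketch of a Samba share with forced permissions, in case it helps frame the question (share name and path are placeholders, not my exact config):

    [code]
       path = /home/dev/code
       writable = yes
       create mask = 0664
       directory mask = 0775
       force create mode = 0664
       force directory mode = 0775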
The problems occur occasionally when I run my environment like this. My Git repository has developed weird errors and corruption (HEAD detaching, index files corrupting or coming up too small, other .git files corrupting). There is also the issue that the host filesystem is case insensitive while the CentOS guest treats file names as case sensitive.
Ultimately, my question is: how can I set up my development environment to execute the code inside my CentOS guest OS, keep file permissions from being a hassle when I'm committing/executing, allow an environment that matches my multi-server setup (i.e. multiple VirtualBox instances), and run into no or only minor problems, as if I were developing directly under CentOS?
I would prefer to keep running VirtualBox, develop using PHPStorm/SourceTree on my host OS, and avoid anything that corrupts my Git repository.
I'm using Phing as my build tool for a website I'm developing. I have a server running on localhost to test things on my own system, and I have a test environment on the server it's eventually going to run on. Deploying to that test environment currently means tarring all the built files, uploading the tar to the server, and extracting it there.
However, since I'm also using quite a few images, this takes pretty long: 10 seconds for a local deploy vs. 4 minutes for a remote one. Is there any way to compare the files in two directories and only tar the ones that are newer in one of them (so I can keep a shadow copy of the build dir to compare file dates), or some other best practice?
Something else I've been thinking about trying is uploading the site using Git. Any ideas on that?
Yesterday I had the same problem; this answer resolved it:
Phing - Deploy with FTP but only overwrite when size has changed
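Alternatively, rsync over SSH already does this only-what-changed comparison, and Phing can shell out to it via an exec task; a sketch (host and paths are placeholders):

    # transfer only files that differ from what's already on the server
    rsync -az --delete ./build/ deploy@example.com:/var/www/site/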
I have a virtual machine (VirtualBox) running the latest Ubuntu Server as my personal/development web server.
My configuration is the following:
My web server (Zend Server CE) runs on the virtual machine.
My files are on my host machine (Win7), and all the virtual machine does is serve the files from a shared folder.
So if I go to /home/ronaldo/htdocs/project_name I can see all my files for that project; the shared folder in this case is the main folder, htdocs.
I use VirtualHosts, so every new project gets an address such as project_name.local.
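For reference, each project's VirtualHost looks something like this (Zend Server CE sits on top of Apache; names are illustrative):

    <VirtualHost *:80>
        ServerName project_name.local
        DocumentRoot /home/ronaldo/htdocs/project_name
    </VirtualHost>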
Every now and then, when I add new files to a project (say, using Dreamweaver, PHPDesigner or even Internet Explorer), the new file is not recognized on the server.
It was working fine before. My last major change was upgrading the server to the latest version some weeks ago.
Now, let's say I'm working on an OpenCart e-commerce project and I am creating a new module, with controllers, views, etc. The new files are not recognized until I reboot my server.
When I list the files on my web server using ls, the new files appear in red.
Why is that?
Does anyone have a similar setup, or can anyone suggest a workaround so I don't have to reboot the server?
Unmount and remount the shared folder: http://www.virtualbox.org/manual/ch04.html#sf_mount_manual
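Something like the following, assuming the share is named htdocs (adjust the share name and mount point to your setup):

    sudo umount /home/ronaldo/htdocs
    sudo mount -t vboxsf htdocs /home/ronaldo/htdocs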
You may get more specific help on the VirtualBox forums.
Or, if it's an option, you can create a Windows share and mount that from Linux.