Error:
Updating vlucas/phpdotenv (v2.4.0 => v2.5.1): The following exception
is caused by a lack of memory or swap, or not having swap configured
Check
https://getcomposer.org/doc/articles/troubleshooting.md#proc-open-fork-failed-errors
for details
Currently the AWS Instance RAM: 2GB
We have since fixed the issue by upgrading the RAM from 2GB => 4GB, but I have a few doubts, because increasing RAM should not be the solution for a small site.
Could you please check the following points:
1) What is the recommended memory required by Laravel for updating packages (composer.phar update)?
2) Does Laravel also use swap memory during composer update?
I had the same problem and never found the reason. You could run composer update --profile or even composer update --profile -vvv to have a detailed list of what's going on under the hood. The first one will show you how much memory is used.
It shouldn't be over 600 MB, but you will still run out of memory. Since the Laravel app goes into maintenance mode, it shouldn't add to that. You could run top -ac in a second shell and see what happens there.
The hosting company couldn't help me either as they blamed it on me lol.
But there is a way around it. Upload only the composer.lock and, on the production server, run composer install --no-dev. It will give you a warning about outdated packages; answer 'yes' and your production server will be updated without a glitch. This command also runs the Laravel scripts from composer.json and updates the installed package versions.
Ideally, composer update is only run in your development environment; when you push your code to the AWS server, the command you should use is composer install (which doesn't use as much memory).
If you want to know the difference between update or install, refer to this link: What are the differences between composer update and composer install?
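If upgrading the instance RAM is not an option, the proc-open error above can usually be worked around by adding swap to the server, which the Composer troubleshooting page linked in the error message also suggests. A rough sketch (the 1 GB size and the /swapfile path are only examples; adjust to your setup):
sudo fallocate -l 1G /swapfile   # create a 1 GB file to use as swap
sudo chmod 600 /swapfile         # restrict permissions, required by swapon
sudo mkswap /swapfile            # format the file as swap space
sudo swapon /swapfile            # enable it immediately
free -m                          # verify the swap now shows up
To keep the swap across reboots, you would also add an entry for it in /etc/fstab.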
I'm trying to add a new composer dependency to my project, but it doesn't work. There's no error or anything - it just silently does nothing:
[vagrant@localhost project]$ composer require bshaffer/oauth2-server-bundle
Using version ^0.4.0 for bshaffer/oauth2-server-bundle
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
[vagrant@localhost project]$ composer update --verbose
Loading composer repositories with package information
Updating dependencies (including require-dev)
That's it. That's all the output. What is going on? Why isn't composer downloading and installing the package?
Additional info:
The machine has PHP 5.6 installed.
The project has Symfony 2.8 installed and a bunch of other libraries.
There is a composer.lock file, but no matter what I do, the bshaffer/oauth2-server-bundle doesn't get added to it. I'm afraid to completely delete and recreate the file.
Deleting vendor/ and other auto-generated files, and then running composer install, results in all the libraries being reinstalled except bshaffer/oauth2-server-bundle.
It was a memory issue after all. The project is in a virtual machine with 2GB of RAM allocated. Since there is basically nothing else there, that's enough for smooth daily operation. Except composer, apparently. I increased the amount of available RAM to the VM, and here are the results:
At 2GB - terminates silently, no error messages or anything
At 4GB - terminates with an error message that there's not enough memory to fork.
At 8GB - Works as expected, woohoo!
Seriously, 8GB? Bloatware these days! When I grew up...
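Before throwing more RAM at the VM, it may also be worth checking whether PHP's memory_limit is what is stopping Composer; a sketch (the package name is just the one from the question, and the COMPOSER_MEMORY_LIMIT variable is only honoured by newer Composer releases):
php -d memory_limit=-1 $(which composer) require bshaffer/oauth2-server-bundle -vvv   # run Composer with the PHP memory limit disabled
COMPOSER_MEMORY_LIMIT=-1 composer update -vvv                                         # alternative: let Composer itself lift the limit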
Composer is doing its work but not printing the output. To see the output, simply use verbose mode, so modify your command to
composer require bshaffer/oauth2-server-bundle -vvv
You can also try deleting the lock file (composer.lock) before re-running the command.
Please delete the composer.lock file and then try running composer install.
I have a small project made in Symfony2, and when I try to build it on my server it always fails when unzipping Symfony. The build used to be OK, and suddenly composer won't unzip Symfony, even though I didn't change anything. I tried building with Jenkins and also manually from bash, with the same result. It's not a permissions problem, and the internet connection on my server is fine.
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
- Installing symfony/symfony (v2.3.4)
Downloading: 100%
[Symfony\Component\Process\Exception\ProcessTimedOutException]
The process "unzip '/path/vendor/symfony/symfony/6116f6f3
d4125a757858954cb107e64b' -d 'vendor/composer/b2f33269' && chmod -R u+w 'vendor/composer/b2f33269'" exceeded the timeout of 300 seconds.
Check with composer update/install -o -vvv whether the package is being loaded from Composer's cache.
If yes, try clearing Composer's cache, or try adding --cache-dir=/dev/null.
To force downloading an archive instead of cloning sources, use the --prefer-dist option in combination with --no-dev.
Otherwise you could try raising composer's process timeout value:
export COMPOSER_PROCESS_TIMEOUT=600 # default is 300
composer config --global process-timeout 2000
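Put together, a sketch of that sequence using the flags suggested above:
composer clear-cache
COMPOSER_PROCESS_TIMEOUT=600 composer install --prefer-dist --no-dev -o -vvv   # retry with dist archives, a longer timeout and verbose output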
The easiest method is to add a config option to your composer.json file: add "process-timeout": 0. That's all; it works anywhere.
{
.....
"scripts": {
"start": "php -S 0.0.0.0:8080 -t public public/index.php"
},
"config": {
"process-timeout":0
}
}
Composer itself imposes a limit on how long it will allow the remote git operation to run. A look at the Composer documentation confirms that the environment variable COMPOSER_PROCESS_TIMEOUT governs this. The variable is set to a default value of 300 (seconds), which is apparently not enough for a large clone operation over a slow internet connection.
Raise this value using:
COMPOSER_PROCESS_TIMEOUT=2000 composer install
It's an old thread, but I found out that the reason for the timeout was a running PHP debugger (PhpStorm was listening for Xdebug connections), which caused the process timeout. When I closed PhpStorm or disabled the Xdebug extension, no timeout occurred.
Old thread, but a new problem for me: none of the solutions here worked when trying to install google/apiclient (it failed on google/apiclient-services) on an Ubuntu VM inside a Windows 10 host.
After noticing Windows' "antimalware executable" taking up considerable CPU cycles when doing this composer install/update, I disabled "real-time protection" on the Windows 10 machine, and my composer update/install worked!!
Hope that helps someone.
Deleting composer cache worked for me.
rm -rf ~/.composer/cache/*
The Symfony Process component has its timeout set to 60 seconds by default. That's why you get errors like this:
[Symfony\Component\Process\Exception\ProcessTimedOutException]
The process "composer update" exceeded the timeout of 60 seconds.
Solution
Set timeout to 5 minutes or more
$process = new Process("composer update");
$process->setTimeout(300); // 5 minutes
$process->run();
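Note that on newer versions of symfony/process (4.2 and later) the string constructor used above is deprecated and eventually removed; a sketch of the equivalent call, assuming the same Symfony\Component\Process\Process import as above:
$process = Process::fromShellCommandline("composer update"); // factory for shell command strings in newer versions
$process->setTimeout(300); // 5 minutes
$process->run();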
I agree with most of what has been suggested above, but I had the same issue, and what worked for me was deleting the vendor folder and re-running composer install.
None of the solutions worked for me on Windows 10 WSL Ubuntu (disabling the firewall, removing debuggers, clearing the cache, increasing the timeout, deleting vendor).
The only way that worked was deleting vendor and composer.lock from the main machine, copying composer.json to a fresh machine, installing PHP and Composer there, running composer install (it should take less than 1 second to execute), then copying the vendor dir back to the original machine and running composer update.
The problem here is slow NFS. Composer writes its cache into the NFS directory. You should install Composer globally and point its cache path elsewhere.
This doesn't work:
php composer.phar install
Using this:
composer install
Before running this, you must configure Composer globally. See https://getcomposer.org/doc/00-intro.md#globally
Also, you must add these lines to your global config.json:
"config": {
"cache-dir": "/var/cache/composer"
}
Works for me.
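Alternatively, instead of editing config.json by hand, the same setting can be applied from the command line (a sketch; the /var/cache/composer path is just the example used above):
composer config --global cache-dir /var/cache/composer   # point Composer's cache at local, non-NFS storage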
I'm setting up a server on Amazon and I'm having an error that I cannot solve.
[screenshot of the error omitted]
For those who do not know the command, it just runs a script that creates tables in the database. I do not understand how a simple routine that creates 50 tables in the database can consume so many resources.
The error occurs when I try to run the command: php artisan migrate
I have 50 migrations, but I do not believe the problem is the number of migrations.
For hours I've been looking for a solution on Google, but I cannot find one.
I already tried memory_limit = 128M in the php.ini file, and it did not solve the problem.
On my local machine everything works perfectly. I am using a t2.micro (free tier) machine from Amazon AWS.
Do you have swap enabled on your AWS instance? What is your PHP version? There is a similar bug in PHP 5.4.11. Also, what is your Composer version?
First, I would say increase your memory limit in php.ini and check the time limit:
memory_limit = 2G
set_time_limit(0);
ini_set('memory_limit', '20000M');
Solution by OP.
What was causing the problem was the missing MySQL driver for PHP. With this command everything worked:
apt-get install php7.0-mysql
Sounds like you have an issue with your Laravel installation. This is hard to diagnose without being local to the machine, but try the following commands to clear the caches from Composer and Laravel:
Dump-autoload your Laravel installation. This regenerates the list of all classes that need to be included in the project:
composer dump-autoload
Clear your Laravel installation's cache:
php artisan cache:clear
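If you also want to clear Composer's own cache, which the answer above mentions but does not show a command for, this should do it:
composer clear-cache   # empties Composer's download cache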
I am developing a PHP website. I have a version on my laptop, where I develop everything, and a web server, which runs the site.
I have found that I can use Composer to install PHPUnit only on my laptop and not on my web server, using the "require-dev" option (see: Using "require-dev" to install packages in composer).
However, this comes with some downsides:
From now on I have to call composer update --no-dev on the web server, and if I forget --no-dev then the dev packages are also installed on the web server.
I have to use $ ./vendor/bin/phpunit to call PHPUnit.
I have to install PHPUnit for each project on my laptop.
Wouldn't it be much better to just install PHPUnit on Ubuntu with sudo apt-get install phpunit? This way I would not have to worry about using the --no-dev option on the server, and I could simply call it with $ phpunit. Am I missing anything important here?
The fast answer is:
You can have the version of PHPUnit you want in one project and a different version in another. And --no-dev should be used in production anyway, because you don't want to install all the dev dependencies in production.
If you don't want to call ./vendor/bin/phpunit, add a script to your composer.json and then run the tests with composer test, or whatever script name you create (a sketch follows below).
Explained in the first point. It really makes sense, especially when you work with legacy code that only works with particular versions of PHP/PHPUnit, etc.
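For the second point, a minimal composer.json sketch (the "test" script name and the vendor/bin/phpunit path are just examples):
{
    "scripts": {
        "test": "vendor/bin/phpunit"
    }
}
After that, composer test (or composer run-script test) runs the project-local PHPUnit without typing the vendor path.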
I usually install phpunit, and other tools in the 'require-dev' section, but another entirely reasonable option is to download the phpunit.phar file from the website, and check it in with the rest of your code - updating it manually occasionally.
A local (or global) Composer install will allow for better control of exactly which version is available though, and you can see when it, or your other dependencies are out of date with composer outdated.
As for a production deployment, you should be automating it as much as possible, to make sure that exactly the same thing happens every time. With that, it's just another few characters in your deployment script or other mechanism.
I need to discuss something very important for me (and probably for all users who use Composer in their projects). I am working with Laravel. Whenever I work on my local machine, I run composer update after adding a new library to vendor.
Now it works fine on my local machine. But when I upload these files to my server, it throws errors like "Undefined class ....". I don't know how to run the composer update command on the server (it also might not be safe).
So may I just know which files are updated by running composer update on the command line? What other files need to go live to avoid this error?
You may:
Run composer update on your local server only, whenever you want.
Commit/push every file (including composer.lock) except the vendor directory
Deliver to your production server (without vendor, with composer.lock)
Then run composer install on your production server. It will update all your dependencies according to your composer.lock (so the exact same versions as your last update on your local server).
So in other words: you should not run composer update on your server, just composer install on every delivery (and you will need to keep your composer.lock)
Hope it will help you.
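A rough sketch of that flow, assuming Git is used to deliver the code (adapt the commands to your own deployment mechanism):
# on your local machine
composer update                       # resolves versions and rewrites composer.lock
git add composer.json composer.lock   # commit the manifest and the lock file, not vendor/
git commit -m "update dependencies"
git push
# on the production server
git pull
composer install --no-dev -o          # installs exactly the versions pinned in composer.lock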
Run composer dump-autoload before composer update. If it still doesn't work, then try to clear Composer's cache using composer clear-cache.
composer update will update the version of every package in your vendor folder.
Normally, unless you are sure you need to update the versions of every package, you should use composer install.
The "undefined class" error is normally caused by app.php: a service provider is registered in app.php, but the class (package) has not been installed.
To solve your problem, try these three things:
composer install --no-scripts;
Comment out the service providers that have not yet been installed in app.php, then run composer install
composer dump-autoload