Faster composer install - php

A composer install normally takes a few minutes, and on a production environment that feels too slow.
Is it possible to run a composer install in a temp directory and then switch it in? If that is possible, the downtime should be about zero.
Or is there any other way to make a composer install faster?

I created a composer plugin to download packages in parallel.
https://packagist.org/packages/hirak/prestissimo
$ composer global require hirak/prestissimo
Please try it. In my environment, composer install became 10 times faster.

You can sometimes speed up composer install significantly by using the --prefer-dist flag, which just happens to be recommended for production use:
--prefer-dist: Reverse of --prefer-source, composer will install from dist if possible. This can speed up installs substantially on build servers and other use cases where you typically do not run updates of the vendors.
composer install docs here: http://getcomposer.org/doc/03-cli.md#install
Edited To Clarify Sometimes
I say it sometimes speeds up composer install because plenty of factors go into it feeling slow, not least network performance and the current GitHub status. A slow install can be really frustrating, but it's not always because of Composer.
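For example, a typical production install combining that flag with two other common options (--no-dev and --optimize-autoloader are additions here, not part of the quoted docs):

$ composer install --prefer-dist --no-dev --optimize-autoloader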

You are asking two different and unrelated things.
Yes, it is a solution to build the next version of your site in a separate directory and then put it in place after moving the old version out of the way. It is, actually, the best solution.
This is how the deployment scripts I build work:
1. Prepare the next version of the site in a separate directory (let's say /var/www/new). The following sub-steps and their order are not set in stone; some projects need a different flow:
- get the latest version of the code from the repo;
- remove the files that are not needed on the live site (.gitignore, IDE project files, placeholders etc.);
- run composer install;
- copy/generate the configuration files containing the settings for the live servers (the ones stored in the repository contain dummy values);
- change the user and permissions of all files; make some directories writable by the web server;
- create symlinks, directories etc.; for example, the directory that contains the user-uploaded files lives outside the server directory, and a symlink to it is created during deploy to make its content available through the web server.
2. Move the live code out of the way; I use mv to move the entire directory (/var/www/html to /var/www/old, for example).
3. Move the prepared new version into the proper place (mv /var/www/new /var/www/html).
4. Move the previous version into the archive (after removing the content of vendor and other files that do not change or are external).
The advantages:
the downtime is zero (microseconds, probably, between steps #2 and #3);
a failed build doesn't affect the live site (if handled properly);
reverting to the previous version (if needed) can be done easily.
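A minimal shell sketch of steps 2-4, using the /var/www paths from the example above (the archive path is an assumption):

# the swap itself: the downtime is the gap between these two commands
mv /var/www/html /var/www/old
mv /var/www/new /var/www/html

# archive the previous version, minus the vendor content
rm -rf /var/www/old/vendor
mv /var/www/old /var/www/archive/$(date +%Y%m%d-%H%M%S)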
Regarding the other question, the only way I know to speed Composer up is to avoid running it with a PHP binary that loads the xdebug extension. The xdebug extension shouldn't be loaded on the production server anyway.
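To check whether your CLI PHP loads xdebug, and to run Composer against a php.ini that does not (the ini path here is hypothetical; adjust it to your setup):

$ php -m | grep -i xdebug
$ php -c /etc/php/cli-no-xdebug.ini /usr/local/bin/composer install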

Related

Using Composer locally then uploading files through FTP

Bit of a strange setup, but I have come across a project where Composer was used in a local environment to get the project started. The original developer did not have SSH access to the production server, so he ran Composer locally and uploaded the 'vendor' directory from his desktop to the server using FTP.
I now need to add the PHPMailer package, so I have done the following locally on my Mac:
cd Desktop/
composer require phpmailer/phpmailer
This has created the following structure on my desktop:
Desktop/composer.json
Desktop/composer.lock
Desktop/vendor/autoload.php
Desktop/vendor/composer/*
Desktop/vendor/phpmailer/*
Which of these do I need to upload through FTP? I realise vendor/phpmailer/* is the package I want, so that will need uploading.
What about the others? I already have an autoloader configured, so I'm guessing vendor/autoload.php is not required here?
As for composer.json, I could add the package to what's already there, e.g.
"require": {
*other packages here*
"phpmailer/phpmailer": "^5.2"
}
But I wasn't sure if that's necessary, given that I'm not going to be using SSH/Composer on the server to run any updates?
The usual workflow would be:
1. Check out the current version from version control.
2. Add dependencies via the command line: composer require new/package.
3. This will download the new package and update the autoloading.
4. Test the result locally or on a test website environment.
5. If happy with the result, upload the whole folder to the production server.
There may be several exceptions from this general workflow:
ad 1: If there is no version control, you'd probably be better off starting a local git repo right now and downloading the current production state into it as the first commit. Not having version control will make things harder, especially going back to known working versions. And because the files on the production server are probably unmanaged, you should also check the vendor folder into your newly created version control, just to avoid undoing any changes that may have been made to these files.
ad 2: Manually editing the composer.json file is sometimes a faster way to get what you want if you know what you are doing, but you'd have to edit the JSON correctly. For me it is usually too much hassle if I already have a command line ready. The command will also select a matching version that fits the already installed dependencies, whereas manual editing may lead to version conflicts that you'd have to untangle. Remember to only install dependencies that work with the PHP version in production. You should probably run composer config platform.php X.Y.Z to record the production PHP version in the composer.json file, which prevents Composer from installing dependency versions based on your development PHP. Adding the -g switch will add this setting to your global (user) configuration instead, affecting all Composer operations you start, including those for other projects.
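For example, if production runs PHP 5.6.30 (the version number is just an illustration):

$ composer config platform.php 5.6.30

which records the following in composer.json:

"config": {
    "platform": {
        "php": "5.6.30"
    }
}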
ad 3: Manual editing will require you to run composer update on the command line anyway, so there's probably no reason not to simply run composer require instead.
ad 4: How this could be done is entirely dependent on what environment you have to work with.
ad 5: At this stage you have assembled all the files necessary to create a working website. Uploading them to production will always result in a working website unless the upload itself fails somehow. You could also use an "upload first to a temporary folder, then move on the server" approach if you fear FTP would be unreliable. Some people take a different approach: they have a git repository on the production server and simply push the version that should go live to that remote repo, where post-push scripts run composer install. This automated approach also works (though not via FTP), but it carries a higher risk of something failing during deployment and probably has no easy way back to the previous situation.
So in the end I'd say that uploading the whole folder structure via FTP (well, that protocol is insecure itself, better replace it with FTPS (FTP with SSL), SFTP or SCP) is better compared to running Composer on the production server.
Your specific question regarding which folders to upload: All of them. Especially upload the whole vendor folder. It contains the current autoloader and all dependency packages the software needs. If you worked correctly, you downloaded the existing composer.json and composer.lock file together with everything else and added the new dependency to it. This has changed both these files, added the new package to the vendor folder and the classes to the autoloader.
Don't fiddle with uploading only parts of the vendor folder, or manually editing a component of the autoloading. You will only create surprises for the developer coming after you if you do some aspect incorrectly, and it also takes more time. Composer is a very good tool to manage dependencies - use it!
You could copy the composer.json from the server to your local machine, add the requirements and run composer update.
After that you can copy all the files (composer.json, composer.lock and the vendor folder) to your server.
Or you can copy the local vendor/phpmailer into the server's vendor folder, look up the phpmailer entry in your local vendor/composer/autoload_psr4.php and add it to the server's vendor/composer/autoload_psr4.php.
With this method you also have to add PHPMailer's own dependencies the same way; composer show phpmailer/phpmailer lists them under "requires" (note that composer depends phpmailer/phpmailer answers the reverse question: which installed packages depend on PHPMailer).

Composer autoloader, github and deployment

I am sorry if this has been asked before but after searching for a while I couldn't really find answers to my dilemma.
I am part of a team working on a PHP project and we use GitHub for our version control. We would like to implement a PSR-4 autoloader, and since every single guide uses Composer, so would we. While searching I learned that the vendor folder should not be committed to GitHub, only the composer.json, and that every developer needs to install Composer on their own computer.
Does that require the autoloader to be created again on every developer's computer?
And finally, when the project is done, we would like to upload it onto our website, but the only way we can do that is through FTP.
Which files should be uploaded to the live website, and what happens to the autoloader?
You need to commit the composer.lock file as well. That's super-important - it means that whenever someone else checks out the code they get the same set of dependencies (including their exact versions) installed to their /vendor directory.
That's why you don't need the /vendor directory to be committed - the lock file takes care of ensuring the dependencies are fixed.
The composer.json defines numerous potential versions of your dependencies that meet your requirements. Running composer update essentially checks whether a more recent version is available that meets those requirements. That's the difference between install and update: install works off the lock file and knows exactly what to look for; update works off the json file and can return different results at different points in time.
In your composer.json you can define the autoloader by telling it where your root namespace lives.
"autoload": {
"psr-4": {
"RootNamespace\\": "library/src"
}
},
When your colleagues have run composer install it will create the autoloader for them in a consistent path.
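If you later change the autoload section of composer.json without adding or removing packages, the autoloader can be regenerated on its own:

$ composer dump-autoload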
You have options for deployment:
You can either upload the composer.lock file and run a composer install on production, or do it ahead of time and upload your vendor directory as part of the build.
I do the latter: if there is an issue at this point, I want to know about it before any files are changed on the production server. The alternative could leave a botched upgrade on production with missing dependencies. It is safer to install those dependencies first and transfer everything in one go.
As an aside, I also like to install a fresh release to a separate folder on production named after the git commit and then symlink it as part of the deployment step. This ensures you don't have a half-updated application whilst you wait for the rest of the files to be uploaded. This approach would also eliminate the issue mentioned before, meaning you could do your composer install from production.
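A minimal sketch of that aside; the releases directory and the current symlink (which the web server's document root would point to) are assumptions:

# name the release folder after the commit being deployed
COMMIT=$(git rev-parse --short HEAD)
mkdir -p /var/www/releases/$COMMIT
# ... build the release there, composer install included, then:
ln -sfn /var/www/releases/$COMMIT /var/www/current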

Deploying Composer project to Web

I am new to Composer and would like to know how you deploy a project to a production server using Composer.
When deploying, does Composer also push the needed dependency packages to the server?
Would/can Composer build the application with a minification process?
I think the current best practice is to not run Composer on the target production server. The regular process of deploying a web application usually requires several independent steps, and Composer is only suitable for some of them, regardless of what people make it do additionally.
You mention minification, and I would add the process of pulling in JavaScript dependencies in general. This is no domain for Composer. It has been done in the past to offer Composer packages that contain jQuery, but this requires additional work to put jQuery in the right directory afterwards, adding the need to run post-install scripts or add installers that need configuration. I guess the right way would be to use Bower for this.
So the deployment would be at least a three step process.
1. Use Composer to install PHP dependencies.
2. Use Bower to install JavaScript dependencies.
3. Use rsync, SFTP, SCP or FTP(S) to move all the files to the server.
Any optimization steps would be done prior to moving the files onto the server inside the deployment script.
And if anything fails during the collection of dependencies, be it unexpected downtime of GitHub or your deployment server running out of disk space, you don't end up with a half-deployed new website version. You can stop the deployment script before syncing if anything is missing or has gone wrong.
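Put together, a deployment script along these lines (tool choices and the target path are assumptions):

#!/bin/sh
set -e                                    # abort before syncing if any step fails
composer install --no-dev --prefer-dist   # step 1: PHP dependencies
bower install                             # step 2: JavaScript dependencies
# minify / build assets here, then:
rsync -az --delete ./ deploy@example.com:/var/www/html/   # step 3: move the files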
Yes: composer install gets all dependencies on your test server. Check it, and if everything works, sync all the files to your production server; this saves you from unexpected problems when running composer install directly on production.
Another way is to checksum all the files on your test server after composer install, then run composer install on your production server and checksum the files there as well. Now compare the two sets of checksums generated separately on the test and production servers; if they match, congratulations, the deploy is OK.
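A sketch of that checksum comparison, using sha256 sums (file names and the vendor path are assumptions):

# on the test server, after composer install:
find vendor -type f -print0 | sort -z | xargs -0 sha256sum > test.sums
# repeat on the production server to produce prod.sums, then:
diff test.sums prod.sums && echo "deploy is ok"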

Composer & composer.lock in GIT and merge conflicts

Here's our situation :
We have 3 different Laravel projects and all 3 projects rely on our Core project.
This Core project is a separate Laravel package hosted on our private repo and is used as a dependency for other projects.
Before, whenever something changed in the Core project we would just run composer update ourvendor/ourcorepackage on our servers for each project to pull in the core changes. However, lately Composer seems to suffer from serious memory issues when we try to run the update on our DigitalOcean staging environment with 512 MB RAM. See: https://github.com/composer/composer/issues/1898
The solution I always come across is people saying that you should always run composer install on your production servers. I can relate to that in terms of security, because it can be dangerous to update to a new version of some 3rd-party package that could break your code. But in our case we only update our own core package, so we know what we're doing; still, this memory issue forces us to use the composer install method because it is less memory-demanding.
So basically this is our current workflow:
1. When something changes in our core package, we run composer update ourvendor/ourpackage on each project LOCALLY. This generates a composer.lock file.
2. We commit the composer.lock file to our repo.
3. On the servers for each project we run a git pull followed by composer install. This only updates our core package, runs much faster, and has no memory issues compared to composer update.
However this solution raises 2 issues :
Since we're working with multiple devs on the same project, we sometimes end up with merge conflicts in the composer.lock file when pulling the changes in locally.
Running a git pull on the server gives an error : Your local changes to the following files would be overwritten by merge: composer.lock
Please, commit your changes or stash them before you can merge.
So what am I supposed to do here? Remove the composer.lock file on the server before the pull?
How should we handle the merge conflicts for the composer.lock file?
It's a shame that composer update suffers from memory issues, because that method seems much more logical: just update the package you want, with no hassle around the composer.lock file.
Please advise what a correct workflow with Git and Composer should look like in our case, and how to solve the conflicts above.
Many thanks for your input
How can you test that a core update (or an update of any other dependency) doesn't break things in the projects using it if the developers don't do this step themselves?
That's why the usual workflow expects composer update to be run on a development machine with enough RAM (probably more than 1 GB set as the memory limit for PHP), triggered manually by the developer (and if it is triggered automatically by a continuous integration build, the memory requirements apply to that machine as well).
There is no way around this memory requirement. A web server with only 512 MB RAM installed might be able to function as a staging server with barely any concurrent users present, but it shouldn't be used to update the Composer dependencies.
Personally I fix merge conflicts in composer.lock with a very simple system: delete the lock file and run composer update. This updates all dependencies to the latest versions that satisfy the version requirements and creates a new working composer.lock file that gets committed during the merge.
I am not afraid to potentially update everything, because either it works as expected, or my tests will catch errors quickly.
I do select the 3rd party packages I use carefully:
they have to tag their versions, preferably using semantic versioning.
I do not use any branches for release versions (the rare occasions that someone used them during development were painful)
they should ship a new major version if they make backwards incompatible changes
the locally developed packages also follow these requirements
This works with around 270 packages served by our local Satis instance (probably also a factor to consider when trying to reduce the memory footprint: only the packages known to Composer can end up in memory. Compare the ten thousand packages potentially available on packagist.org with 270 local packages). 60 of the 270 packages are developed locally by 20 developers who release new versions at random times. Update failures in the last 2 years have been very rare and are handled like other bugs: if a tagged version turns out to be incompatible, we release a bugfix release reverting the change, and tag the original change with a new major release if the incompatible change is necessary.
So the workflow you ask for is probably like this:
Anytime, any developer should be able to run composer update on their local machine.
They should be able to detect if this breaks things on their local machine.
If nothing is broken, they commit the changes, including the composer.lock file, to Git.
The staging server only runs composer install and will use exactly the versions that the developer used on their machine.
If nothing is broken on staging, that version is ready to be used on production.
Merging an already committed version on another developer's machine will likely show merge conflicts with composer.lock.
Resolve conflicts on all other files.
The composer.lock file should be deleted.
From here, the workflow is like above, i.e.:
The developer should be able to run composer update on their local machine.
They should be able to detect if this breaks things on their local machine.
If nothing is broken... and so on.
Another approach (without doing composer update):
1. Copy your new (and deleted) lines from composer.json into a separate text file.
2. Use the entire remote composer.json and composer.lock files.
3. While still in merge-conflict mode, run composer install.
4. For every new package you wrote down in step 1, run composer require vendor/package:version.
5. For every removed package you wrote down in step 1, run composer remove vendor/package.
6. Test, commit, done!
This method keeps the locked versions from the remote branch (master or develop, say) and only updates your new packages.
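In git terms, that approach looks roughly like this (package names and versions are placeholders):

# during the merge conflict, take the remote side of both files
git checkout --theirs composer.json composer.lock
git add composer.json composer.lock
composer install
# re-apply your own changes noted in step 1
composer require "vendor/new-package:^1.2"
composer remove vendor/dropped-package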
Sometimes composer update can break things.
What I do is:
1. Discard all of my changes to composer.lock.
2. Merge composer.json.
3. Run composer install so that packages get installed according to composer.lock.
4. For each package you changed, run composer require vendor/package if you added it, or composer remove vendor/package if you removed it.
If there are multiple packages, they will be picked up from composer.json automatically.

Do you have to run Composer on localhost and on production?

I'm new to Composer (getcomposer.org) and wasn't sure how it works if I install a package locally using Composer and then push my codebase to my production server using Git. Do I have to run Composer again on the production server?
cheers,
J
When you setup your project, you add your dependencies into your composer.json file in your local project directory.
Once you have done this, you will need to run composer update. You can also run composer install, however, without a composer.lock file, composer install actually runs composer update.
Composer update goes out and resolves all the dependencies of all the libraries you are using, downloads them to the /vendor directory, creates an autoloader script and generates the composer.lock file.
For your project what you want to do is version your composer.json AND your composer.lock file.
On your production server, you will always run composer install, which ensures that the libraries on your production server are exactly the same ones you used in your development process.
composer install is also a lot faster, as it does not have to do any dependency-resolution work and can almost always just pull a specific commit; it doesn't have to look at version strings. Thus it is usually very fast, once a server has already gone through it once.
In development, the only time you should run composer update is when you introduce a new library, OR when an underlying library has changed and you know you need Composer to recalculate the dependencies. composer update always recalculates and downloads the latest revisions of any library available, even if the version constraint did not change. This means there is the potential for something to break, necessitating as full a set of regression tests as you have available. In short, something having nothing to do with what you're actually changing could break, so you only want to introduce the potential for change when you are forced to.
Of course, if you did introduce a new library, you have no choice but to run composer update.
Once you run composer update, your composer.lock file will be updated (as expected) and the production server will pick this up when you run composer install on it.
As others stated, put the vendor directory in your .gitignore. The point is that these are external libraries that you depend on, but they do not belong in your project and should not be versioned. In the old days some people used git submodules, which is a big PITA you really want to avoid, not to mention that submodules don't address the dependencies of the libraries you included.
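Summarized as commands, the local-update / production-install workflow described above looks like this (the install flags are common additions for production, not requirements):

# local machine: resolve, test, and lock
composer update
git add composer.json composer.lock
git commit -m "update dependencies"

# production server: reproduce exactly what was locked
git pull
composer install --no-dev --optimize-autoloader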
It depends on how you are working. If you, as getcomposer.org suggests, are ignoring the vendor folder, then you need to run it again. If you are versioning the vendor folder, then you don't need to run it again.
Keep in mind that Composer takes charge of managing your dependencies' versions, so there is no need to put your dependencies' files under version control. If you put these files under git, you will only make your repository bigger.
Read https://getcomposer.org/doc/01-basic-usage.md#installing-dependencies.
For clarification: when you ignore the vendor folder, Git doesn't track the files under it, so if you clone the repo it will be as if Composer was never executed.
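The vendor-ignoring setup mentioned here boils down to a single .gitignore line:

/vendor/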
