I'm using LAMP with CodeIgniter for one of my projects, version-controlled with SVN. Every time I execute svn export file:///svnrepo/project/trunk/www . --force from within the www directory and then reload the web page, it goes blank.
The website only shows up again after I do a service httpd restart (using CentOS 5).
I want to be able to run the svn export from a Phing build script in the future, and I don't want to have to get root privileges and restart Apache every time I do a build.
Is what I'm experiencing a common problem? How do I solve it without restarting Apache?
Edit:
It seems someone has had this problem before: http://codeigniter.com/forums/viewthread/181642/
OK, I got it. SVN maintains a file's last-modified time, which throws off the APC cache. So to solve it, we update the last-modified time of all the files after we run an SVN export. Here is my final script:
#!/bin/sh
svn export --force file:///home/steve/repo/example/trunk \
  /home/steve/public_html/example.com/public/
find /home/steve/public_html/example.com/public | xargs touch
You can find more details here.
An alternative solution would be to set apc.stat=0 (reference) in the apc.ini, and then use apc_clear_cache() (reference) to force the removal of the opcode cache.
What's awesome about this solution is that when apc.stat is set to 0, it disables the check on each request to determine if the file has been modified. This results in a huge performance boost.
Additionally, using apc_clear_cache() to clear the APC cache tends to result in a cleaner build. I've run into wonky race conditions where certain files will get built out that have dependencies on others that have not yet been built out. This results in a spate of FATAL errors. The only caveat here is that apc_clear_cache() needs to be run via Apache, so you'll need to use a wget call or something similar for this.
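For example, the export script above could be extended along these lines. This is just a minimal sketch: it assumes a small web-reachable script (here called clear_apc.php) whose only job is to call apc_clear_cache(), and the paths and URL are placeholders.
#!/bin/sh
# export the new code, then ask Apache (which runs APC) to drop its opcode cache
svn export --force file:///home/steve/repo/example/trunk \
  /home/steve/public_html/example.com/public/
# clear_apc.php is assumed to contain nothing but a call to apc_clear_cache()
wget -q -O /dev/null http://example.com/clear_apc.php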
Related
Is there any way to set up git such that it listens for updates from a remote repo and will pull whenever something changes? The use case is that I want to deploy a web app using git (so I get version control of the deployed application) but want to put the "central" git repo on GitHub rather than on the web server (GitHub's interface is just soooo nice).
Has anyone gotten this working? How does Heroku do it? My Google-fu is failing to give me any relevant results.
Git has "hooks", actions that can be executed after other actions. What you seem to be looking for is "post-receive hook". In the github admin, you can set up a post-receive url that will be hit (with a payload containing data about what was just pushed) everytime somebody pushes to your repo.
For what it's worth, I don't think auto-pull is a good idea -- what if something wrong was pushed to your branch? I'd use a tool like Capistrano (or an equivalent) for such things.
On Unix-likes you can create a cron job that calls "git pull" (every day, every week, or whatever) on your machine. On Windows you could use Task Scheduler or the "AT" command to do the same thing.
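On the Unix side, for example, a crontab entry along these lines would do it (the repository path and the five-minute interval are just placeholders):
# edit the crontab with `crontab -e`, then add:
*/5 * * * * cd /home/user/mysite && git pull --quiet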
There are continuous integration programs like Jenkins or Bamboo which can detect commits and trigger operations like build, test, package, and deploy. They do what you want, but they are heavy on dependencies, hard to configure, and in the end they may use a periodic check against the git repository, which has the same effect as calling git pull from cron every minute.
I know this question is a bit old, but you can use git, a webhook, and PHP to auto-pull your project (assuming your project involves a web server).
See my gist here:
https://gist.github.com/upggr/a6d92e2808e9628ebe0d01fd93569f4a
As some have noticed after trying this, if you use PHP's exec(), it turns out that solving the permissions is not that simple.
The user that will execute the command might not be your own, but www-data or apache.
If you have root/sudo access, I recommend you read this blog post by Jonathan.
When you aren't allowed to (or can't) solve the permissions
My solution was a bit creative. I noticed I could create a script under my username with a loop, and git pull would work fine. But that, as pointed out by others, raises the problem of running a lot of useless git pulls every, say, 60 seconds.
So here are the steps to a more delicate solution using webhooks:
Deploy key: go to your server and type:
ssh-keygen -t rsa -b 4096 -C "deploy" to generate a new deploy key; no need for write permissions (read-only is safer). Copy the public key to your GitHub repository settings, under "Deploy keys".
Webhook: go to your repository settings and create a webhook. Let's assume the payload address is http://example.com/gitpull.php
Payload: create a PHP file with the example code below in it. The purpose of the payload is not to git pull, but to warn the following script that a pull is necessary. Here is the simple code:
gitpull.php:
<?php
/* Deploy (C) by DrBeco 2021-06-08 */
echo("<br />\n");
chdir('/home/user/www/example.com/repository');
touch('GITPULLMASTER'); /* create the flag file that gitpull.sh watches for */
?>
Script: create a script in your preferred folder, say, /home/user/gitpull.sh with the following code:
gitpull.sh
#!/bin/bash
cd /home/user/www/example.com/repository
while true ; do
    if [[ -f GITPULLMASTER ]] ; then
        git pull > gitpull.log 2>&1
        mv GITPULLMASTER GITPULLMASTER.`date +"%Y%m%d%H%M%S"`
    fi
    sleep 10
done
Detach: the last step is to run the script in detached mode, so you can log out and keep the script running in the background.
There are two ways of doing that; the first is simpler and doesn't need the screen utility installed:
disown:
run ./gitpull.sh & to put it in background
then type disown -h %1 to detach and you can log out
screen:
run screen
run ./gitpull.sh
type control+a d to detach and you can log out
Conclusion
This solution is simple, you avoid messing with keys, passwords, permissions, sudo, root, etc., and it also prevents the script from flooding your server with useless git pulls.
The way it works is that it checks whether the file GITPULLMASTER exists; if not, it goes back to sleep. Only if it exists does it do a git pull.
You can change the line:
mv GITPULLMASTER GITPULLMASTER.`date +"%Y%m%d%H%M%S"`
to
rm GITPULLMASTER
if you prefer a cleaner directory. But I find it useful for debugging to keep the pull date recorded (and untracked).
For our on-premises Windows test servers, we use Windows Task Scheduler tasks, set to run every 3 minutes, pulling from Bitbucket Cloud to repositories on those servers. While not instantaneous, it meets our needs, and has proven to be reliable.
I'm running Grav CMS on a Linode Ubuntu 16.04 server where PHP7 (php-fpm + nginx) returns cached results when listing directory contents. I first encountered the problem with FilesystemIterator, but it isn't limited to that class - the same problem appears when I use scandir.
Basically what happens is that any time I sync new content to the server, whether I use rsync or FTP, PHP will return the old contents of a particular folder. I've tried calling clearstatcache, but it didn't help – even if I called it from the appropriate PHP file, just before I scanned the directory.
touch'ing the files to update their mtime doesn't help either. Restarting the php-fpm service does work, however.
Is it possible that PHP caches the contents of the directory in some other way? Could it be the file system that is fooling PHP somehow?
There is, but it's not something I would use in production.
Run sync first: $ sync
This command writes any cache data that hasn't been written to the disk out to the disk.
Free pagecache: echo 1 > /proc/sys/vm/drop_caches
Free dentries and inodes: echo 2 > /proc/sys/vm/drop_caches
Free pagecache, dentries and inodes: echo 3 > /proc/sys/vm/drop_caches
That being said I would also take a look at realpath_cache.
It turns out the problem I was having wasn't due to the OS or PHP, but to the Grav CMS itself – specifically how it caches page objects. It doesn't invalidate cached pages if all that has changed is the associated media files. Turning off the global cache setting for Grav helped this issue, but I've also opened an issue on the Grav repo to see whether this is intended behaviour or not.
Last week, I tried to deploy a simple Symfony app on Azure.
I chose the App Service B2 plan (2 cores / 3.5 GB RAM).
PHP version: 5.6.
First, it took forever to complete the composer install. (I tried the S3 plan; it was a little faster but not very different.)
So I tried to optimize the PHP config: opcache, realpath_cache_size, etc. (Xdebug already disabled).
I even tried to enable WinCache, but with no real improvement.
So now my app is deployed, but it is too slow to be usable.
A simple php app/console (in dev mode) takes ~23 seconds.
It seems to recreate the cache every time. On my local Unix environment (similar specs), it takes 6 seconds when the cache is cold and 500 ms when the dev cache is warm.
I think that the main problem is a filesystem issue, because removing the dev cache folder takes 16 seconds.
On my local Unix environment, with similar specs, it takes ~200 ms to remove the same folder.
Like I said, I tried the S3 plan, with a small improvement, but not enough to explain this slowness.
One weird thing: if I rerun the command php app/console just after it finishes, the command takes 5 seconds to run (much better). But if I rerun it 5 seconds after it finishes, it takes 23 seconds.
I already tried these solutions (even if the environment is different):
https://stackoverflow.com/a/17021255/6309878
Update: I tried to set the Symfony app/cache folder to the local filesystem D:\local\cache, but there was no improvement; it may be worse.
Please try the steps below and let me know if it improves the performance:
1) In the wwwroot directory of your site, create a .user.ini file (if it doesn't already exist) and add "wincache.fcenabled=0". This will disable WinCache.
2) Go to the Azure Portal and open the Application Settings for your app. In the App Settings section, add "WEBSITES_DYNAMIC_CACHE" with a value of 1.
3) Restart the site.
I'm using PHP with OPcache. I only git-push to master to deploy my web site to production (not really, it's just after unit tests, but never mind). In the php.ini file, the OPcache settings are about "time" and "frequency", but I just want to reset the cache after a git pull on my server.
So I think I just need to call opcache_reset() after the git pull on my production server and set opcache.validate_timestamps to 0 (never revalidate the cache).
I haven't read anything about doing it that way, so I have doubts: I don't know if it's good practice. Did I miss something? Is there any risk, or is it OK?
Thanks a lot!
P.S.: I'm using a PHP framework and Composer (composer install runs just after the git pull).
In order to get the greatest benefit from OPcache, you should disable opcache.validate_timestamps. If you subsequently call opcache_reset() from a script every time you deploy your code to the server, then your OPcache is cleared once for each new set of files, and the system doesn't waste resources constantly checking the files.
There's a couple of "gotchas", however:
First of all, make sure that the call to opcache_reset() happens, or else you'll be running the old code. If you have a script to execute your deploy, make sure it fails loudly if this step doesn't execute.
Secondly, depending on exactly how PHP is running (mod_php vs. php-fpm), you may need to execute the opcache_reset() function via a web request rather than via the command line. For example, the most obvious solution to clear the cache is to have a simple PHP file like the following:
<?php
// this version refuses to run via the web; it is meant for the command line
if (php_sapi_name() != "cli") die("Not accessible from web");
opcache_reset();
and execute that file on each code pull. Depending on the version of PHP and how it's run, that may only clear the cache for the command line and not for your running web version.
If clearing from the command line doesn't work, consider creating a similar script and calling it via the web using curl or wget. For example, curl http://example.com/clear_cache.php?secret=abc123. If you create the script to be web accessible, then make sure it checks a secret key to prevent someone from loading up your server by constantly clearing your cache.
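Putting those pieces together, the deploy step on the server might look roughly like the sketch below. The paths, URL, and secret are placeholders, and clear_cache.php is assumed to check the secret before calling opcache_reset().
#!/bin/sh
set -e                      # fail loudly if any step breaks
cd /var/www/app
git pull
composer install --no-dev
# reset OPcache through the web server rather than the CLI
curl -fsS "http://example.com/clear_cache.php?secret=abc123" > /dev/null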
Finally, as others have suggested, to make your builds totally repeatable between testing and deployment, consider having the end of the test process create a .zip file of the entire code used for testing, including the libraries pulled down by composer. Rather than git pull on your server, just unzip the file over the code root. I realize that git pull && composer update is easy. However, as others have suggested, if a library gets updated between the time tests were run and the time of deployment, then your code may no longer work as expected.
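A rough sketch of that zip-based flow (the host, version number, and paths are placeholders):
# on the build/test machine, once the tests have passed
zip -rq build-1.2.3.zip . -x '.git/*'
scp build-1.2.3.zip deploy@example.com:/tmp/
# on the production server: unpack over the code root
cd /var/www/app && unzip -oq /tmp/build-1.2.3.zip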
I've heard the phrase "deploying applications" which sounds much better/easier/more reliable than uploading individual changed files to a server, but I don't know where to begin.
I have a Zend Framework application that is under version control (in a Subversion repository). How do I go about "deploying" my application? What should I do if I have an "uploads" directory that I don't want to overwrite?
I host my application through a third party, so I don't know much other than FTP. If any of this involves logging into my server, please explain the process.
Automatic deploy plus a run of the tests on a staging server is known as continuous integration. The idea is that if you check in something that breaks the tests, you get notified right away. For PHP, you might want to look into Xinc or phpUnderControl.
You'd generally not want to automatically deploy to production though. The normal thing to do is to write some scripts that automate the task, but that you still need to initiate manually. You can use frameworks such as Phing or other build tools for this (a popular choice is Capistrano), but you can also just whisk a few shell scripts together. Personally I prefer the latter.
The scripts themselves could do different things, depending on your application and setup, but a typical process would be:
ssh to production server. The rest of the commands are run at the production server, through ssh.
run svn export svn://path/to/repository/tags/RELEASE_VERSION /usr/local/application/releases/TIMESTAMP
stop services (Apache, daemons)
run unlink /usr/local/application/current && ln -s /usr/local/application/releases/TIMESTAMP /usr/local/application/current
run ln -s /usr/local/application/var /usr/local/application/releases/TIMESTAMP/var
run /usr/local/application/current/scripts/migrate.php
start services
(Assuming you have your application in /usr/local/application/current)
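Pulled together, a rough sketch of such a deploy script might look like this (the repository URL and paths are the placeholders used above, and apachectl stands in for whatever actually stops and starts your services):
#!/bin/sh
# run on the production server, typically invoked over ssh
TIMESTAMP=$(date +%Y%m%d%H%M%S)
RELEASE=/usr/local/application/releases/$TIMESTAMP

svn export svn://path/to/repository/tags/RELEASE_VERSION "$RELEASE"
apachectl stop                      # stop Apache and any daemons
unlink /usr/local/application/current && ln -s "$RELEASE" /usr/local/application/current
ln -s /usr/local/application/var "$RELEASE/var"
/usr/local/application/current/scripts/migrate.php
apachectl start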
I wouldn't recommend automatic updating. Just because your unit tests pass doesn't mean your application is 100% working. What if someone checks in a random new feature without any new unit tests, and the feature doesn't work? Your existing unit tests might pass, but the feature could be broken anyway. Your users might see something that's half-done. With automatic deployment from a check-in, you might not notice for a few hours if something made it live that shouldn't have.
Anyhow, it wouldn't be that difficult to get an automatic deployment going if you really wanted. You'd need a post-check-in hook, and really the steps would be:
1) Do an export from the latest check-in
2) Upload export to production server
3) Unpack/config the newly uploaded export
I've always performed the last steps manually. Generally it's as simple as SVN export, zip, upload, unzip, configure, and for the last two steps I just alias a couple of bash commands together. Then I swap out the root app directory with the new one, making sure I keep the old one around as a backup, and it's good to go.
If you're confident in your ability to catch errors before they'd automatically go live, then you could look at automating that procedure. It gives me the jibbly-jibblies though.
At my webdev company we recently started using Webistrano, which is a web GUI for the popular Capistrano tool.
We wanted an easy-to-use, fast deployment tool with a centralized interface, accountability (who deployed which version), rollback to previous versions, and preferably free. Capistrano is well known as a deployment tool for Ruby on Rails applications, but it is not centralized and is targeted mainly at Rails apps. Webistrano enhances it with a GUI and accountability, and adds basic support for PHP deployment (use the 'pure file' project type).
Webistrano is itself a Ruby on Rails app that you install on a development or staging server. You add a project for each of your websites. To each project you add stages, such as Prod and Dev.
Each stage can have different servers to deploy to, and different settings. Write (or modify) a 'recipe', which is a Ruby script that tells Capistrano what to do. In our case I just used the supplied recipe and added a command to create a symlink to a shared uploads dir, just like you mentioned.
When you click Deploy, Webistrano SSHes into your remote server(s), does an svn checkout of the code, and runs any other tasks that you require, such as database migrations, symlinking, or cleanup of previous versions. All of this can be tweaked, of course; after all, it's simply scripted.
We're very happy with it, but it took me a few days to learn and set up, especially since I wasn't familiar with Ruby and Rails. Still, I can highly recommend it for production use in small and medium companies, since it has proven very reliable and flexible, and has repaid the initial investment many times over, not only by speeding up deployments but also by reducing mistakes/accidents.
This sort of thing is what you would call "Continuous Integration". Atlassian Bamboo (commercial), Sun Hudson (free), and CruiseControl (free) are all popular options (in order of my preference) and have support for handling PHPUnit output (because PHPUnit supports JUnit output).
The deployment stuff can be done with a post-build trigger. Like some other people in this thread, I would exercise great caution before doing automated deployments on check-in (and tests passing).
Check out Fredistrano, it's a Capistrano clone.
It works great (the installation is a little bit confusing, but after that it runs great).
http://code.google.com/p/fredistrano/
To handle uploads, the classic solution is to move the actual uploads directory out of the main webspace, leaving the webspace just for a fresh version to be checked out (as I do in the script below), and then use Apache to 'Alias' the uploads back into place as part of the website.
Alias /uploads /home/user/uploads/
You have fewer choices if you don't have as much control of the server, however.
I've got a script I use to deploy a given revision to the dev/live sites (they both run on the same server).
#!/bin/sh
REV=2410
REVDIR=$REV.20090602-1027
REPOSITORY=svn+ssh://topbit@svn.example.com/var/svn/website.com/trunk
IMAGES=$REVDIR/php/i
STATIC1=$REVDIR/anothersite.co.uk
svn export --revision $REV $REPOSITORY $REVDIR
mkdir -p $REVDIR/tmp/templates_c
chown -R username: $REVDIR
chmod -R 777 $REVDIR/tmp $REVDIR/php/cache/
chown -R nobody: $REVDIR/tmp $REVDIR/php/cache/ $IMAGES
dos2unix $REVDIR/bin/*sh $REVDIR/bin/*php
chmod 755 $REVDIR/bin/*sh $REVDIR/bin/*php
# chmod -x all the non-directories in images
find $IMAGES -type f -perm -a+x | xargs -r chmod --quiet -x
find $STATIC1 -type f -perm -a+x | xargs -r chmod --quiet -x
ls -l $IMAGES/* | grep -- "-x"
rm dev && ln -s $REVDIR dev
I put in the revision number, and the date/time, which is used for the checked-out directory name. The chmods in the middle also make sure the permissions on the images are OK, as they are also symlinked to our dedicated image server.
The last thing that happens is that the old symlink .../website/dev/ is relinked to the newly checked-out directory. The Apache config then has a doc root of .../website/dev/htdocs/.
There's also a matching .../website/live/htdocs/ docroot, and again, 'live' is another symlink. This is my other script that will remove the live symlink, and replace it with whatever dev points to.
#!/bin/sh
# remove live, and copy the dir pointed to by dev, to be the live symlink
rm live && cp -d dev live
I'm only pushing a new version of the site every few days, so you might not want to be using this several times a day (my APC cache wouldn't like more than a few versions of the site around), but for me, I find this to be very much problem-free for my own deployment.
After 3 years, I've learned a bit about deployment best practices. I currently use a tool called Capistrano because it's easy to set up and use, and it nicely handles a lot of defaults.
The basics of an automated deployment process go like this:
Your code is ready for production, so it is tagged with the version of the release: v1.0.0
Assuming you've already configured your deployment script, you run your script, specifying the tag you just created.
The script SSH's over to your production server which has the following directory structure:
/your-application
    /shared/
        /logs
        /uploads
    /releases/
        /20120917120000
        /20120918120000   <-- latest release of your app
            /app
            /config
            /public
            ...etc
    /current --> symlink to latest release
Your Apache document root should be set to /your-application/current/public
The script creates a new directory in the releases directory with the current datetime. Inside that directory, your code is updated to the tag you specified.
Then the original symlink is removed and a new symlink is created, pointing to the latest release.
Things that need to be kept between releases go in the shared directory, and symlinks are created to those shared directories.
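Stripped of the tool, the core of what happens on the server is roughly the sketch below. The directory names follow the layout above; the repository URL and the git checkout of the tag are assumptions, since Capistrano can fetch the code in several ways.
#!/bin/sh
APP=/your-application
RELEASE=$APP/releases/$(date +%Y%m%d%H%M%S)

# fetch the tagged code into a fresh timestamped release directory
# (the repository URL and tag are placeholders)
git clone --branch v1.0.0 --depth 1 git@example.com:me/app.git "$RELEASE"
# link persistent data (logs, uploads) into the new release
ln -s "$APP/shared/logs"    "$RELEASE/logs"
ln -s "$APP/shared/uploads" "$RELEASE/public/uploads"
# point "current" at the new release; -n keeps ln from following the old symlink
ln -sfn "$RELEASE" "$APP/current"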
It depends on your application and how solid the tests are.
Where I work, everything gets checked into the repository for review and is then released.
Auto-updating out of a repository wouldn't be smart for us, as sometimes we just check in so that other developers can pull a later version and merge their changes in.
To do what you are talking about would need some sort of secondary check-in and check-out to allow for collaboration between developers in the primary check-in area, although I don't know anything about that or whether it's even possible.
There are also issues with branching and other such features that would need to be handled.