Deploy a media folder with Capistrano without storing it on GitHub - php

I work on a PHP website that deploys to our servers with Capistrano. The site has a very large media folder that needs to be moved into the current deploy every time. Currently, a shell script does a 'mount --bind' on every deploy, and before each deploy it does a 'umount' of that folder. The problem is that this isn't reliable, and sometimes on cleanup it rm's my media folder. I thought about putting the media folder in the Git repository, but it is 500 MB and needs to change as users create accounts.
So, here are the options I have thought of and want your opinions on - or any better option you can think of:
An .htaccess rewrite that reroutes any request for the media folder to a subdomain that hosts the folder. I just don't know whether this rewrite will work for creating files and directories or only for reading them.
I tested this today and it worked for reading from the subdomain, but I could not get creating or writing to work.
Find a better way to deploy the media folder without having to rely on shell commands that seem to fail and destroy the folder (see the sketch after this list).
Restarting the server on every deploy (this unmounts all folders), then just doing the mount after the deploy once the server has restarted. This is time-consuming, and with many deploys it wouldn't work well because I have staging and production on the same server: if I restart the server while testing on staging, it also restarts my production site, unmounting production while I am testing on staging.
Those are the only options I could think of. Any help would be great.
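For option 2, one way to avoid both the bind mounts and the repository is Capistrano's shared-path convention: keep the media folder in a shared directory outside the releases and symlink it into each new release (Capistrano 3 automates exactly this via its linked_dirs setting). A minimal sketch of the idea, with paths as assumptions:

    # one-time: move the media folder into a shared directory outside the releases
    mkdir -p /var/www/site/shared
    mv /var/www/site/current/media /var/www/site/shared/media

    # on every deploy, after the new release directory is created:
    # -s symlink, -f replace an existing target, -n don't follow an existing link
    ln -sfn /var/www/site/shared/media /var/www/site/releases/<timestamp>/media

Because only the symlink inside each release is removed when old releases are cleaned up, the shared media copy itself is never touched.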

Related

Best way to transfer staging files with changes to prod?

I have two directories in my www folder - html and staging. My domain has staging.*.com linked to the staging directory and *.com linked to the html directory.
My question is simple. Is there a simple way to make changes in the staging directory and then, when I want to transfer those changes to prod, just copy the files over?
I'm running Ubuntu on an AWS instance.
Right now I'm dragging the files manually in an FTP transfer app, but this really isn't sustainable long term.
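A minimal sketch of that copy step using rsync on the instance itself, assuming the layout described above lives under /var/www (paths and flags are assumptions to adapt):

    # preview first; drop --dry-run to actually copy. --delete removes files
    # that no longer exist in staging, and version-control metadata is skipped
    rsync -av --dry-run --delete --exclude='.git' /var/www/staging/ /var/www/html/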

Laravel / Nginx serving wrong site

I have tried everything.
I recently tried to deploy a new Laravel application on my Ubuntu Digital Ocean droplet. I have done this many times in the past and have done nothing out of the ordinary with this project. I'm using https://gist.github.com/jamieshepherd/50419bb148a4f43e8266 as a template nginx configuration, which I base all of my sites off of. However, this time when I deployed and went to my domain, I was served another client's site. The site had no images, styles, scripts, etc. - but sure enough, this was the wrong site. Weirdly, I could go to /images/example.jpg and get the correct file, which would be in /public/images/example.jpg - but the actual routes were being served from a completely different folder.
There is absolutely no reference to the other client in my project anywhere, I feel like it has something to do with Laravel or PHP5-FPM setting some kind of root directory to the other folder.
I have gone to /public/index.php and just echoed and died a message, and this works fine. When I revert the change and let the application run, it again displays the other client's application. Here's a list of other things I've tried:
Temporarily renamed the other client's folder - as expected, NewSite then throws a 404 error
Temporarily disabled the other client's nginx block - NewSite continues to serve the other client's site (remember, not the css/images/scripts, weirdly)
Reinstalled PHP5-FPM
Printed the document_root - everything looks normal (/web/NewSite/public)
Tried to run the site off a different port
Pulled a working Laravel application from a completely different server, put it on this server, and set the domain to point to this working application - it still serves the other client's site
Grep'd for any references to the client site in any of the project folders just to check I wasn't going insane; confirmed I'm not insane, though after all this I'm not sure anymore
Really I'm lost, I have no idea how to debug this any more.
Site trying to deploy: http://paragon.gg
Client site that it's serving: https://swellhunter.co.uk
Client site NGINX conf: https://gist.github.com/jamieshepherd/4ff5430ddb13ed04f22c
Edit: Update: To make matters even more bizarre, I have temporarily removed swellhunter from sites-enabled and visited paragon.gg - it still serves swellhunter's index. WHAT? There is absolutely no reference to the site at all now; I've even moved the paragongg folder to a completely different place to check. How on earth is it even finding the view with no reference to go on?
This seems like an issue with Laravel rather than with nginx or your VPS, since you pointed out that a plain PHP file is served correctly.
SSH to your droplet and run your Laravel app using php artisan serve (note it listens on port 8000 by default, so pass --port=3000 if you want it on 3000).
Then access your droplet on that port.
If the problem persists there too, you can rule out server issues and concentrate on debugging the Laravel app. Maybe there is something wrong with the Laravel app itself - some unwanted changes, etc. Did you verify your Laravel app?
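A minimal sketch of that check (the path comes from the document_root mentioned above; the host and port flags are assumptions):

    # bypass nginx and PHP-FPM entirely and serve the app with PHP's built-in server
    cd /web/NewSite
    php artisan serve --host=0.0.0.0 --port=3000
    # then browse to http://<droplet-ip>:3000 and see which site answers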
Can you paste an ls -l of the directory? Have you checked what you have in the Laravel app/config/app.php for 'url'? I just took a look at paragon.gg and tried https://paragon.gg - it gave me a cert error, but after I accepted it the site looked fine. Is that not the result you wanted?

Creating a safe dev environment

We are a small team developing a WordPress site. Till now we have been editing the same files online, which inevitably led to mistakes. We thought to use Git/GitHub to stop stepping on each other's toes and manage source code efficiently. However, I cannot run the site locally on XAMPP, as I am getting numerous PHP errors. What would you recommend in this case?
Maybe creating another folder on the server with identical content, just for testing? Is it possible, then, to run Git on the server?
I work in a team of devs and this is what we do:
Locally, each of us sets up our own WordPress install under a separate vhost with locally modified DNS, and gets just plain-jane WordPress running perfectly.
Create a dev testing site on a live server on the internet (often prefixing the real URL, e.g. dev-www.mysite.com)
Create a Git repository, and have it auto push-deploy to the dev testing site (we create the folder structure in Git from wp-content down, and only keep custom themes/plugins in it); a hook sketch follows this list
Configure Git to MANUALLY push changes to the production server (for going live)
On all systems we install the wp-migrate-db-pro plugin. Locally, devs only PULL from the shared dev site to their local install (which means planning what/when/where you create content).
Copy around the /uploads folder, in case there are a bunch of images/media uploaded.
This works even though some of us run Windows/IIS/WordPress, Mac/Apache/WordPress, and Linux/Apache/WordPress.
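One common way to wire up the auto push-deploy step above is a bare repository on the dev server with a post-receive hook; a minimal sketch, with paths as assumptions:

    #!/bin/sh
    # hooks/post-receive in the bare repo on the dev server: check the pushed
    # master branch out into the dev site's wp-content directory
    GIT_WORK_TREE=/var/www/dev-www.mysite.com/wp-content git checkout -f master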
As a general idea a deployment process could go like this:
developers push their git changes to the server 'staging' area
either manually or using git's post-receive hooks, changes are pushed as needed to a virtual WordPress web host that acts as a test or 'staging' server. This can be password- or geolocation-protected
once it's tested properly, the code can be moved to the production web host files; this can be accomplished with a simple script that goes: stop web server - backup files - nuke files - move files from staging - start web server (a sketch follows)
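A minimal sketch of that go-live script, with paths and the service name as assumptions:

    #!/bin/sh
    set -e  # stop at the first failure rather than nuking files after a bad backup
    service apache2 stop                                       # stop web server
    tar -czf /backups/site-$(date +%F).tar.gz /var/www/html    # backup files
    rm -rf /var/www/html/*                                     # nuke files
    cp -a /var/www/staging/. /var/www/html/                    # move files from staging
    service apache2 start                                      # start web server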

What is the best development environment for Drupal to be able to move it to a different server at go live?

When I've worked on Drupal sites before, if there was internal access to the server, or if remote desktop access was available, I've always developed on the machine the site would run from when live, and just not made it public on the server.
However, what is the best thing to do if you don't have access to the server yet, for example if the client hasn't got anything in place?
I need to be able to build and test the solution on my local machine, or on my VPS which I have RDP access to, and be able to move it over with as much ease as possible to the clients server when ready.
Any tips or best practices? As far as I'm aware, Drupal doesn't have any specific migration tools - I could be wrong though.
I don't work with Drupal, but for Prestashop, Wordpress, Zencart, etc. I always use the same workflow:
I set up a vhost on my virtual server, usually using a subdomain of my own domain (like customer.mydomain.com), install the software with its DB etc. on the server, and set up FTP access.
I get a local copy of the files, which I maintain in a local git repository, pushing to github for backup purposes mainly.
I work with ZendStudio and configure a remote server and set it up to upload the files when I save them, so I can check them pretty much as if I were working locally. But the main advantage of this approach is that I can share the project with the customer as it progresses.
When I have to move to the final server, at least with WordPress, I have to search/replace the domain name, which WordPress saves in the DB. But I do it locally: I download the entire DB as an SQL file through phpMyAdmin, open it, search/replace, and upload it again via phpMyAdmin to the permanent server.
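That local search/replace can be scripted; a minimal sketch, with file and domain names as assumptions. One caveat: WordPress stores some options as PHP-serialized strings, and a plain text replace can corrupt those when the replacement changes the string length, so a serialization-aware tool is safer beyond simple cases.

    # swap the dev domain for the live one in the dump before re-importing it
    sed 's|customer.mydomain.com|www.customer.com|g' dump.sql > dump-live.sql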
With ZenCart and others the problem is the config file, which stores some paths. For long projects or long-term customers I modify the config file to use one set of config details or another depending on the server name.
adding to the above comment...
Check the "backup and migrate" module and the "backup files" module. "Backup and migrate" is useful in any setup...
With this I was able to do a barebones Drupal install and then migrate/replace the database with the one backed up from my local system... if the databases are named differently you will still need to edit settings.php.
"Backup files" is useful for themes and content assets like images etc., but is essentially just a wrapper around gzip.
I typically develop on my local machine and then upload to server once complete.
All you need to do is change the folder name in /sites/ and change the settings.php file to reflect the server settings/domain.
Something you should be aware of:
If you upload files on your local installation, the file paths will be wrong on the server and you will need to execute a one-off MySQL replace query (see the sketch below).
Make sure you use relative paths in any hard-coded links.
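A minimal sketch of such a one-off replace query, with database, table, and path names as assumptions (here aimed at Drupal 7's body field table; point it at wherever your content stores absolute paths):

    mysql -u user -p drupal_db -e "
      UPDATE field_data_body
      SET body_value = REPLACE(body_value,
        'http://local.dev/sites/default/files',
        'http://www.example.com/sites/default/files');"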

What is a good solution for deploying a PHP/MySQL site via FTP?

I am currently working on a web application that uses PHP and MySQL, but I do not have shell access to the server (working on that problem already...). Currently, I have source control with Subversion on my local computer, and a database on my local computer that I make all changes to. Then, once I've tested all the updates locally, I deploy the site manually: I use FileZilla to upload the updated files, then dump my local database and import it on the deployment server.
Obviously, my current solution is nowhere near ideal. For one major thing, I need a way to avoid copying my .svn files... Does anyone know what the best solution for this particular setup would be? I've looked into Capistrano a bit, and Ant, but for both of those it looks like my lack of shell access would be a problem...
I'm using Weex to synchronize a server via FTP. Weex is basically a non-interactive FTP client that automatically uploads and deletes files/directories on the remote server. It can be configured to not upload certain paths (like SVN directories), as well as to keep certain remote paths (like Log directories).
Unfortunately I have no solution at hand to synchronize MySQL databases as well...
Maybe you could log your database changes in "SQL patch scripts" (or use complete dumps), upload those with Weex, and call a remote PHP script that executes the SQL patches afterwards.
I use rsync in production but you could do this:
Add a config table into your site to hold what level of DB you are currently at.
During development, store each set of SQL changes in a single file (I use something like delta_X-up.sql). These will stay in your SVN as well. So, for example, if you are at delta_5 and add a table between the current release and the new release, all the SQL needed will be put in delta_6-up.sql
When it comes time to build, export the repo instead of using a checkout. This lets you ignore all the SVN cruft that comes along, since you won't need that in production.
Use Weex to push those changes into production (this is where I would use rsync, but you don't have that option). Call a remote script that checks your config DB to see what delta level you are currently at, parses the directory with your delta_X-up.sql files, and sees if there are any new ones. If there are, it reads them and runs the SQL inside (a sketch follows).
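A sketch of that delta-runner logic, with table and file names as assumptions; it is shown as a shell script for brevity, but since there is no shell access here, the same loop would live in the remote PHP script:

    #!/bin/sh
    # apply every delta_N-up.sql above the level recorded in the config table
    LEVEL=$(mysql -N -u deploy mysite -e "SELECT value FROM config WHERE name='db_delta'")
    # note: the glob sorts lexically, so zero-pad the numbers (delta_006-up.sql)
    # once you expect to pass delta_9
    for f in deltas/delta_*-up.sql; do
      N=$(basename "$f" .sql | tr -cd '0-9')   # delta_6-up.sql -> 6
      if [ "$N" -gt "$LEVEL" ]; then
        mysql -u deploy mysite < "$f"
        mysql -u deploy mysite -e "UPDATE config SET value=$N WHERE name='db_delta'"
        LEVEL=$N
      fi
    done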
You can do a Subversion export (rather than a checkout) to a different directory from your working copy; it will strip out all the .svn stuff for you.
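For example, with placeholder paths:

    # export a clean copy of the working copy, with no .svn directories in it
    svn export /path/to/working-copy /path/to/clean-export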
