3 Magento Servers Migrated To 1 - Product Imagery - php

I recently started working for a company that sells a large number of products online. As the business grew, instead of looking towards a more scalable hosting solution, they just added VPS instances with their hosting provider, created SSH tunnels between the (now three) VPS machines to facilitate a local MySQL connection, and assigned different domains to different machines.
I've sorted out a large, scalable server that we're now migrating to, and I've set up the new server to be configured in the same way as the three VPS servers we have.
I rsync'd the contents of one server's Magento installation down to my local machine, did some testing, changed some configuration options, and once uploaded everything works on the new server. The issue I'm having is that on some sites (there are around 15 spread across the three VPS machines) there are no product images.
I assume this is because Magento has put these product images in the file system of the server that each domain is assigned to.
My question really is: is it safe to rsync the contents of ./media/catalog from each VPS server to the main server, or does Magento create a file structure of its own that might get overwritten in the process?
Hope you guys can point me in the right direction.
Cheers,
Dave

I work with a similar setup. We have a minimum of 3 load-balanced servers that operate from one database over an SSH tunnel.
You're right about where Magento is storing the images. When Magento creates images in /magento/media/ it creates two nested folders, one for each of the first two characters of the filename. E.g. a file named "Image1.jpg" uploaded to the catalog will go to:
/path/to/magento/media/catalog/i/m/Image1.jpg
Instead of rsyncing from all the other servers back, we have set up our admin area to resolve directly to the main server, and we rsync the media files from the main server out to the others every 5 minutes or so.
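As a rough sketch (hostnames and paths are placeholders, and it assumes SSH keys are already in place), the one-off merge onto your new main server and a periodic push like ours would look something like:

# One-off pull of each VPS's catalog images onto the new main server.
# --ignore-existing skips files already on the target, and since there
# is no --delete flag, nothing on the target is ever removed.
rsync -az --ignore-existing vps1:/path/to/magento/media/catalog/ /path/to/magento/media/catalog/
rsync -az --ignore-existing vps2:/path/to/magento/media/catalog/ /path/to/magento/media/catalog/

# Cron entry on the main server, pushing media out every 5 minutes.
*/5 * * * * rsync -az /path/to/magento/media/ web2:/path/to/magento/media/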
It's not a perfect solution by any means, but it works for us at the moment. We've looked at sharing a file system between the servers, but that has its own drawbacks.
Hope this helps.

Related

Multiple Website Environment: Virtual hosts vs All websites inside 'htdocs'

I am using MAMP on my Mac for local website testing.
Now, I've always set up multiple websites in the same root folder, 'htdocs'. Ex: htdocs/project1, htdocs/project2.
However I am reading about virtual hosts and how to use a different root for each website.
Are there any huge advantages to doing so, apart from:
having a different URL - ex: www.localhost2.example (if a localhost2 name is possible)?
having a different database environment for each website? (although I typically use just one database per website, and I don't mind having all databases in the same phpMyAdmin)
What are the practical advantages?
It is a good idea for creating a test environment, but it has pros and cons. If you're going to instantiate one VM per website, you will run into resource problems when running them all at once.
I usually instantiate a virtual machine when I need a specific machine config for the web app (e.g. Linux-specific software like WHMCS), but for regular testing of websites I deploy them on localhost; they consume fewer resources that way.
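If you do want to try name-based virtual hosts under MAMP, a minimal sketch looks something like this (the project hostnames are made up, and the config path can vary between MAMP versions):

# Append two name-based vhosts to MAMP's Apache vhost config.
cat <<'EOF' >> /Applications/MAMP/conf/apache/extra/httpd-vhosts.conf
<VirtualHost *:80>
    ServerName project1.test
    DocumentRoot "/Applications/MAMP/htdocs/project1"
</VirtualHost>
<VirtualHost *:80>
    ServerName project2.test
    DocumentRoot "/Applications/MAMP/htdocs/project2"
</VirtualHost>
EOF

# Map the hostnames to the local machine.
echo "127.0.0.1 project1.test project2.test" | sudo tee -a /etc/hosts

The practical advantage is that each site gets its own document root, so root-relative URLs and per-site configs behave the same locally as they will in production.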

Can a computer run a localhost (MAMP etc.) as a backup if the internet is not available - and update the server database?

I have been looking into creating a custom MySQL point-of-sale system so that there is one centralised database for inventory levels across multiple stores, online sales, etc.
The biggest problem I see is the unlikely event that the internet drops out in the bricks-and-mortar stores. If this were to happen, could it be set up so that the POS system runs off a local MySQL database on that computer (using MAMP or something similar) and then, once the internet is available again, automatically syncs the databases to update sales and inventory levels?
In regards to 'how is the actual POS system going to be accessed without internet': I was thinking that the POS system would run on the server when the internet is available, and when the net drops out it would run from files stored on the machine, pointing to the local database on that machine.
Yes. A minimal, viable solution would be to have all of the POS data entered locally as well as on the remote database; then it serves as a sort of backup in case anything happens to the central DB.
As for automating the 'fix' of the central DB after an outage, maybe the best way is to have the central system request sales data from the local DBs of each store. If the workflow is set up like this, you don't really have to do anything 'special' about internet outages.
The problem here is obviously writes. You can use replication to always have a local readable copy of the database, but it's tricky to have multiple masters when using replication. I haven't used MySQL Cluster, but it may be what you need.
But since the problem is writes you can possibly implement the writing part of the POS system as a service you send messages to. When the network is down, queue the messages and send them when online.
An easier solution may actually be to always ensure network stability. Set up some mobile (GSM/3G) connection for failover and possibly even a standard POTS telephone line as well.
MySQL Master/Slave Replication would seem the logical approach.
Your MAMP code works directly against the local (slave) database; and when the MAMP server has access to the master database, the databases stay synchronised. If the slave loses access to the Internet, it's still working locally; and when the internet connection is restored it will synchronise again with the master.
With a little care in the database design (particularly autoincrement pks), you can run multiple local/slave servers all with their own local store data, and the master is a repository of data across all stores.
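A minimal sketch of pointing a store's local slave at the central master (the host, credentials, and binlog coordinates are placeholders; the master also needs server-id and log-bin set in its my.cnf, and every server needs a unique server-id):

# On each store's local MySQL, attach the slave to the central master.
mysql -u root -p <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='central.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
SQL

For the autoincrement point, giving each store a distinct auto_increment_offset, with auto_increment_increment set to the number of stores, keeps locally generated primary keys from colliding when the data is brought back together.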

Deploy Content to Multiple Servers (EC2)

I've been working on a cloud-based (AWS EC2) PHP web application, and I'm struggling with one issue when it comes to working with multiple servers (all under an AWS Elastic Load Balancer). On one server, when I upload the latest files, they're instantly in production across the entire application. But this isn't true when using multiple servers - you have to upload files to each of them, every time you commit a change. This could work alright if you don't update anything very often, or if you just have one or two servers. But what if you update the system multiple times in one week, across ten servers?
What I'm looking for is a way to 'commit' changes from our dev or testing server and have them 'pushed' out to all of our production servers immediately. Ideally the update would be applied to only one server at a time (even though it just takes a second or two per server) so the ELB will not send traffic to it while files are changing, so as not to disrupt any production traffic flowing through the ELB.
What is the best way of doing this? One of my thoughts would be to use SVN on the dev server, but that doesn't really 'push' to the servers. I'm looking for a process that takes just a few seconds to commit an update and subsequently begin applying it to the servers. Also, for those of you familiar with AWS, what's the best way to update an AMI with the latest updates so the auto-scaler always launches new instances with the latest version of the software?
There have to be good ways of doing this... I can't really picture sites like Facebook, Google, Apple, Amazon, Twitter, etc. updating hundreds or thousands of servers manually, one by one, when they make a change.
Thanks in advance for your help. I'm hoping we can find some solution to this problem... what must be at least 100 Google searches by both me and my business partner over the last day have proven mostly unsuccessful.
Alex
We use scalr.net to manage our web servers and load balancer instances. It has worked pretty well so far. We have a server farm for each of our environments (2 production farms, staging, sandbox) and preconfigured roles for the web servers, so it's super easy to open new instances and scale when needed. Each web server pulls code from GitHub when it boots up.
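That boot-time pull can be as simple as a couple of lines in the instance's startup script (the path and branch here are placeholders):

# Run at boot, before the server starts taking traffic.
cd /var/www && git pull origin master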
We haven't completed all the deployment changes we want to do, but basically here's how we deploy new versions into our production environment:
We use Phing to update the source code and run the deployment on each web server. We created a task that executes a git pull and runs database changes (the dbdeploy Phing task). http://www.phing.info/trac/
We wrote a shell script that executes Phing and added it to Scalr as a script. Scalr has a nice interface for managing scripts.
#!/bin/sh
# Run the production deploy target from the project's Phing build file.
cd /var/www
phing -f /var/www/build.xml -Denvironment=production deploy
Scalr has an option to execute scripts on all the instances in a specific farm, so for each release we just push to the master branch on GitHub and execute the Scalr script.
We want to create a GitHub hook that deploys automatically when we push to the master branch. Scalr has an API that can execute scripts, so it's possible.
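For the 'one server at a time' part of the question, a rough sketch of a rolling update against a classic ELB could look like the following (the load balancer name, instance IDs, and paths are placeholders; it assumes the AWS CLI is configured and you have SSH access to each instance):

#!/bin/sh
# Roll a new build out one instance at a time: pull the instance out
# of the load balancer, sync the code across, then put it back.
for i in i-0aaa1111 i-0bbb2222; do
    aws elb deregister-instances-from-load-balancer \
        --load-balancer-name my-elb --instances "$i"
    host=$(aws ec2 describe-instances --instance-ids "$i" \
        --query 'Reservations[0].Instances[0].PublicDnsName' --output text)
    rsync -az --delete ./build/ "deploy@$host:/var/www/"
    # (in practice, wait for connection draining before re-registering)
    aws elb register-instances-with-load-balancer \
        --load-balancer-name my-elb --instances "$i"
done

Deregistering first means the ELB stops sending traffic to an instance while its files are changing, which is exactly the behaviour asked for above.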
Have a good look at KwateeSDCM. It enables you to deploy files and software on any number of servers and, if needed, to customize server-specific parameters along the way. There's a post about deploying a web application on multiple Tomcat instances, but the tool is language-agnostic and will work just as well for PHP, as long as you have SSH enabled on your AWS servers.

Moving PHP site from one server to another

I have no idea about PHP.
I have a PHP site hosted on Server A. Now I want to transfer the hosting to another company offering Windows hosting on Server B.
Can I just transfer all the contents of the WWWROOT folder of Server A to Server B? Will the site work that way? Or do I have to get the source, compile it, and publish it?
Thanks!
The process is this:
copy the content from server A to B, including a database dump (see the sketch after this list)
make sure your site is working correctly on server B
set a redirect from server A to server B (usually in the .htaccess file)
edit the DNS entries to point to server B
wait until the DNS changes have been picked up (note: as suggested by Emil, you can reduce this time by lowering the TTL setting on the DNS entries)
remove the content from server A (end hosting)
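A sketch of the copy and dump step from the first point above (hostnames, paths, and database names are placeholders, and it assumes shell access on both ends):

# On server A: dump the database and push everything across.
mysqldump -u dbuser -p mydb > mydb.sql
rsync -az /var/www/ user@serverB:/var/www/
scp mydb.sql user@serverB:/tmp/

# On server B: load the dump into a freshly created database.
mysql -u dbuser -p -e 'CREATE DATABASE mydb'
mysql -u dbuser -p mydb < /tmp/mydb.sql

The redirect on server A can then be a single mod_alias line in its .htaccess, with the target being wherever server B answers:

Redirect 301 / http://www.example.com/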
PHP is not (usually) compiled; you should be able to simply copy the files and directories over, and they should at least run. You may have to set up a database and the connections to it, change some configuration in the scripts, and you may or may not run into incompatibilities between different PHP versions and/or UNIX/Windows problems, depending on how portably the scripts were written.
You don't need to compile anything; it's enough to copy the project directory from one server to the other. One thing that can stop your project working on the other hosting is if some PHP extensions required by your project are not installed there.
And of course, if your project uses any databases, they must be created on the new server.
PHP scripts are source code and are compiled when needed, so simply moving the files is enough. Problems may occur if it is a package that has been installed on that server and stores absolute paths to other files in various config files.
Also, issues will occur if the files are talking to a local SQL server or the like.
Many hosting companies offer a free (or sometimes paid) service to copy your website across, including any databases. Ask your hosting company for help.
No need to compile; however, you have to make sure that the new server meets all the requirements of your application (e.g. server modules) and that paths are correctly configured. Under some circumstances the PHP version can matter as well. Only one way to find out!
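A quick way to check those requirements is to compare PHP on both servers before switching anything over:

# Show the PHP version and the list of loaded extensions; the output
# should match (or at least cover your app's needs) on the new server.
php -v
php -m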

How to create a beta (testing) website?

How do I create a beta (testing) website that uses the same webroot and cake folder?
The beta (testing) website would probably be at http://beta.example.com or http://example.com/beta.
A method I have been using for a couple of years is to set up staging server instances. These could be either separate physical servers, or on the same server using hostname checks. However, it is good practice to have separate web roots and separate databases under each instance. You'll be asking for trouble if different aspects of your site are shared between staging instances!
My setup is the following:
Development: a computer with the source code on it, set up to serve http://websitename.dev (a local domain).
Preview: a separate server, used to provide a preview of a website, or of a change to a website, before doing the extra work of putting it live. http://websitename.preview.mycompanyname.com
Next: on the same server as the live website, under a different web root and connected to a different database. The reason for this instance is that SO MANY TIMES a site has worked on the development machine, but when it is put live something on the live server makes the site DIE. http://websitename.next.mycompanyname.com
Live: the usual live server setup. http://websitename.com
This is all achieved by assigning DNS records correctly (to point to the right servers) and configuring the web server application to listen for each hostname and serve the correct web root.
A testing or "staging" server should be set up completely independently of the production server. You should not reuse any component of the live system, which even includes the database. Just set up a copy of the production system with a separate database, separate files, if possible a separate (but identical) server.
The point of a test system is to test code that is possibly buggy and may delete all your live data, shoot your dog and take your lunch hostage. Also, your test system may not be compatible with the production system, depending on what you're going to change down the road.
As such, create a new virtual host in your Apache config (or whatever you're using) and set it up exactly like the production system. Done.
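A minimal sketch of such a vhost (the hostname, paths, and Debian-style Apache layout are placeholders):

# Create the beta vhost alongside an otherwise identical copy of production.
cat <<'EOF' > /etc/apache2/sites-available/beta.conf
<VirtualHost *:80>
    ServerName beta.example.com
    DocumentRoot /var/www/beta
    # The beta webroot carries its own config pointing at the separate
    # beta database, so nothing is shared with production.
</VirtualHost>
EOF
a2ensite beta && service apache2 reload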
