So I just set up my XAMPP Apache server to load all the documents I create on my Google Drive. For example, if I type 127.0.0.1, it will show me all my web files on my Google Drive. I set this up so I can develop across my laptop, which I use at school, and my desktop, which I use at home, without having to copy files back and forth between computers. This works the way I want it to, but I forgot one thing: how am I supposed to sync the databases that I create? My question to you is: how can I sync my databases to the cloud or somewhere else so I don't have to export and import every time I switch devices?
Also, I would like to stay away from using hosting, as I won't be online all the time.
The database server (the application itself) expects exclusive access to the data files. If you try to synchronize a data file between two systems, you're going to have issues and probably data loss.
What you could do is synchronize the data directory and make sure you're only running one server at a time. So when you're done working on the laptop, shut down the MySQL server process/service (mysqld), wait for it to finish synchronizing, and then start up the mysqld on the desktop. I suspect this will work, but it's a pretty non-standard usage so anything could happen.
To make it easier, I'd definitely consider writing a wrapper script/batch file that first tests for the presence of a lock file, then (if none exists) creates one and starts mysqld, and when exiting makes sure mysqld is stopped before deleting the lock file.
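A minimal sketch of such a wrapper, assuming a Linux-style shell and that the data directory lives inside your Google Drive folder (all paths here are assumptions; on Windows/XAMPP the same logic would go into a batch file using the XAMPP control scripts or net start/net stop):
#!/bin/sh
# start-db.sh - sketch of a lock-file wrapper around mysqld (paths are assumptions)
LOCK="$HOME/GoogleDrive/mysql-data/.in-use.lock"
if [ -e "$LOCK" ]; then
  echo "Data directory appears to be in use by: $(cat "$LOCK")" >&2
  exit 1
fi
hostname > "$LOCK"                                     # claim the data directory for this machine
mysqld_safe --datadir="$HOME/GoogleDrive/mysql-data" &
echo "Press Enter when you are done working..."
read dummy
mysqladmin -u root -p shutdown                         # stop mysqld cleanly
rm -f "$LOCK"                                          # release the claim; let Drive finish syncing before switching machines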
Anyway, to make this happen you would first stop mysqld everywhere, take the one MySQL data directory that you wish to use, copy it to your Google Drive, then edit all of your MySQL configuration files to point to the new data directory instead of the old one. Whether XAMPP makes this more difficult than it should be, I'm not sure, but with stock MySQL it should be pretty trivial.
Remember that just because it's possible doesn't make it a good idea, and likewise that just because it's not a good idea doesn't mean it won't work. So I'm saying it's not a good idea to do this, but if done with proper attention it will "probably" work.
Hope that helps.
I have a PHP app which is working fine for me, both on my test system and a production system.
But another user of my app wrote to me that it creates a lot of .nfs00000* files on his system and slows down loading of the page.
My app does not create any files on the filesystem; all data is stored in MySQL. So I was really surprised by this. But that user removed my PHP app from his website and the problem disappeared.
I will be honest -- I know nothing about .nfs00000* files and I was not able to find anything reasonable about them on Google. Can someone please explain what they are, why they are created, and whether I can do anything to avoid their creation?
Thanx, Honza
Maybe this can help:
Under Linux/Unix, if you remove a file that a currently running process still has open, the file isn't really removed. Once the process closes the file, the OS then removes the file handle and frees up the disk blocks. This process is complicated slightly when the file that is open and removed is on an NFS mounted filesystem. Since the process that has the file open is running on one machine (such as a workstation in your office or lab) and the files are on the file server, there has to be some way for the two machines to communicate information about this file. The way NFS does this is with the .nfsNNNN files. If you try to remove one of these files, and the file is still open, it will just reappear with a different number. So, in order to remove the file completely you must kill the process that has it open.
If you want to know what process has this file open, you can use 'lsof .nfs1234'. Note, however, this will only work on the machine where the process that has the file open is running. So, if your process is running on one machine (e.g. bobac) and you run the lsof on some other burrow machine (e.g. silo or prairiedog), you won't see anything.
(Source)
If your app is deleting or modifying some files it could be the cause of the problem.
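If the user needs to clean those files up, the quoted advice boils down to something like this, run on the machine where the offending process lives (the file name and PID are just examples):
lsof .nfs000000000189806400000085    # shows which process still holds the deleted file open
kill 12345                           # stop (or restart) that process using the PID lsof reported; the .nfs file then disappears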
I have a website that is currently utilizing two servers, an application server and a database server; however, the load on the application server is increasing, so we are going to add a second application server.
The problem I have is that the website has users upload files to the server. How do I get the uploaded files on both of the servers?
I do not want to store images directly in a database as our application is database intensive already.
Is there a way to sync the servers across each other or is there something else I can do?
Any help would be appreciated.
Thanks
EDIT: I am adding the following links for people that helped me understand this question more:
Synchronize Files on Multiple Servers
and
Keep Uploaded Files in Sync Across Multiple Servers - LAMP
For anyone reading this post: NFS seems to be the better of the two.
NFS will keep files in sync, but you could also use FTP to upload the files to all servers; NFS looks like the way to go, though.
This is a question for Server Fault.
Anyway, I think you should definitely consider getting into the "cloud".
Syncing uploads from one server to another is simply unreliable - you have no idea what kind of errors you can get or why you get them. Also, the syncing process will load both servers. For me the proper solution is going to the cloud.
Should you choose the syncing method, you have a couple of solutions:
Use rsync to sync the files you need between the servers (a minimal sketch follows this list).
Use crontab to sync the files every X minutes/hours/days.
Copy the files upon some event (user login, etc.)
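For the rsync option, a cron-driven push from the server that received the uploads to the other one might look like this (hosts, paths and the user are placeholders, and it assumes passwordless SSH between the servers):
# /etc/cron.d/sync-uploads - push new uploads to the second app server every 5 minutes
*/5 * * * * www-data rsync -az /var/www/uploads/ app2.example.com:/var/www/uploads/
Keep in mind that if uploads can land on either server, you would need a job like this in both directions, which is part of why this approach is fragile.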
I got this answer from Server Fault:
The most appropriate course of action in a situation like this is to break the file share into a separate service of its own. Don't duplicate files if you have a network that can let the files be "everywhere (almost) at once." You can do this through NFS/CIFS or through a proper storage protocol like iSCSI. Mount as local storage in the appropriate directory. Depending on the performance of your network and your storage needs, this could add a couple of undetectable milliseconds to page load time.
So using NFS to share server files would work, OR, as stated by @kgb, you could designate one single server to hold all uploaded files and have the other servers pull from it (just make sure you run a cron job or something to back up the files).
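For the NFS route, the whole trick is just mounting the shared export where your application expects its uploads directory (the server name and paths below are placeholders):
# one-off mount of the shared uploads export on each application server
sudo mount -t nfs fileserver.example.com:/exports/uploads /var/www/uploads
# or make it permanent with a line in /etc/fstab:
# fileserver.example.com:/exports/uploads  /var/www/uploads  nfs  defaults  0  0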
Most sites solve this problem by using a 3rd party designated file server like Amazon S3 for the user uploads.
Another answer could be to use a piece of software called BTSync; it is very easy to install and use and could allow you to easily keep files in sync across as many servers as you need to. It takes only 3 terminal commands to install and is very efficient.
Take a look here
and here
You can use the DB server for storage... not in the DB, I mean; have a web server running there too. It is not going to increase CPU load much, but it is going to require a better channel.
You could do it with rsync. People have suggested using NFS, but that way you create a single point of failure: if the NFS server goes down, both your servers are screwed. Correct me if I'm wrong.
What is the best process for updating a live website?
I see that a lot of websites (e.g. StackOverflow) have warnings that there will be downtime for maintenance in advance. How is that usually coded in? Do they have a config value which determines whether to display such a message in the website header?
Also, what do you do if your localhost differs from the production server, and you need to make sure that everything works the same after you transfer? In my case, I set up development.mydomain.com (.htaccess authentication required), which has its own database and is basically my final staging area before uploading everything to the live production site. Is this a good approach to staging?
Lastly, is a simple SFTP upload the way to go? I've read a bit about some more complex methods like using server-side hooks in Git. I'm not sure how this works exactly or whether it's the approach I should be taking.
Thanks very much for the enlightenment..
babonk
This is (approximately) how it's done on Google App Engine:
Each time you deploy an application, it is associated with a subdomain according to its version:
version-1-0.example.com
version-1-1.example.com
while example.com is associated with one of the versions.
When you have a new version of the server-side software, you deploy it to version-2-0.example.com, and when you are ready to put it live, you associate example.com with it.
I don't know the details, because Google App Engine does that for me, I just set the current version.
Also, when SO or another big site has downtime, it is more likely to be a hardware issue than a software one.
That will really depend on your website and the platform/technology behind it. For a simple website, you just update the files with FTP, or if the server is locally accessible, you just copy your new files over. If your website is hosted by some cloud service, then you have to follow whatever steps they offer, because a cloud-based hosting service usually won't let you access the files directly. For a complicated website that has a backend DB, it is not uncommon that whenever you update code, you have to update your database as well. In order to make sure both are updated at the same time, you will have to take your website down. To minimize the downtime, you will probably want a well-tested update script to do the actual work. That way you can take down the site, run the script and fire it up again.
With PHP (and Apache, I assume), it's a lot easier than some other setups (having to restart processes, for example). Ideally, you'd have a system that knows to transfer just the files that have changed (i.e. rsync).
I use Springloops (http://www.springloops.com/v2/) to host my git repository and automatically deploy over [S/]FTP. Unless you have thousands of files, the deploy feels almost instantaneous.
If you really wanted to, you could have an .htaccess file (or equivalent) to redirect to a "under maintenance" page for the duration of the deploy. Unless you're averaging at least a few requests per second (or it's otherwise mission critical), you may not even need this step (don't prematurely optimize!).
If it were me, I'd have an .htaccess file that holds redirection instructions, and set it to only redirect during your maintenance hours. When you don't have an upcoming deploy, rename the file to ".htaccess.bak" or something. Then, in your PHP script:
<?php if (file_exists('/path/to/.htaccess')) : ?>
<h1 class="maintenance">Our site will be down for maintenance...</h1>
<?php endif; ?>
Then, to get REALLY fancy, set up a Springloops pre-deploy hook to make sure your maintenance redirect is in place, and a post-deploy hook to change it back on success.
Just some thoughts.
-Landon
I maintain a custom PHP application (built for me) that is hosted on a web server. Sometimes I add new features or fix bugs, and after testing locally I upload the changes to the web server. It's not a critical application (it's a game), but most of the time there are some people connected.
The steps that I make to upgrade the application:
Access via FTP (FileZilla)
Upload a .htaccess file that redirects everyone (except my IP) to a mantain.html file
Check that access is denied for every IP except mine.
Backup old code
Upload new code
Go to phpMyAdmin
Backup DB
Execute scripts for the DB
Test that all works fine (if not -> revert the backups)
Remove the .htaccess file
I usually spend an average of 30 minutes doing these steps, and I'm wondering if there is any way to optimize, automate, or do something else to spend less time. Also, I know that if I can automate some steps, they are less prone to errors.
Several other answers suggest PHP-specific deployment tools, but as I'm not very familiar with PHP, I'll offer some general tips. These suggestions may be made redundant by some of the other tools already suggested, though.
First off, don't upload a new .htaccess file every time--just have two of them on your server. Perhaps call them .htaccess-permanent and .htaccess-maintenence. Then create a symlink to the one that ought to be active. Once you've tested that access is properly denied, you don't have to repeat that manual testing phase every single time you do an upgrade.
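A sketch of that setup, using the file names from this answer (the web root path is an assumption):
cd /var/www/mysite
# .htaccess-permanent   - the normal rules
# .htaccess-maintenence - redirects everyone (except your IP) to the maintenance page
ln -sfn .htaccess-permanent .htaccess      # normal operation
ln -sfn .htaccess-maintenence .htaccess    # flip to maintenance mode before an upgrade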
I'd also write a shell script to do most everything for me. My new workflow would look like this:
Upload new code to server in a directory called new/
Log in to the server via shell, and execute the upgrade script
Test the new site
Run upgrade-finalize
The end.
Now for the interesting part, the upgrade script will do this:
It will delete the .htaccess symlink, and re-create it, pointing to .htaccess-maintenence.
It will copy the current code in current/ to backup/
It will back up the DB, using the exact same commands that PHPMyAdmin uses
It will move the contents of new/ (which you just uploaded) to current/
It will execute the scripts for the DB
And the upgrade-finalize script will simply:
Delete the .htaccess symlink, and re-create it, pointing to .htaccess-permanent once again
The only possibly tricky part here will be getting the exact commands that PHPMyAdmin uses to back up your database, but it's probably a simple mysqldump command, and you can probably get that info from PHPMyAdmin or some logs, or something. Sorry, I don't know more about PHPMyAdmin to help in this specific area.
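Putting the steps above together, the two scripts might look roughly like this (the directory layout, database name and credentials are assumptions, and the mysqldump call is just a plain-vanilla backup, not necessarily exactly what PHPMyAdmin runs):
#!/bin/sh
# upgrade.sh - sketch of the upgrade steps described above
set -e
cd /var/www/mysite
ln -sfn .htaccess-maintenence .htaccess                # 1. switch to the maintenance rules
rm -rf backup && cp -a current backup                  # 2. back up the current code
mysqldump -u appuser -p appdb > backup/db-backup.sql   # 3. back up the DB
rm -rf current && mv new current                       # 4. promote the freshly uploaded code
mysql -u appuser -p appdb < current/upgrade.sql        # 5. run the DB migration script(s)
echo "Upgraded. Test the site, then run upgrade-finalize.sh"

#!/bin/sh
# upgrade-finalize.sh - switch back to the normal rules once the new version checks out
cd /var/www/mysite
ln -sfn .htaccess-permanent .htaccess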
Look into a deployment tool like Capistrano that allows you to automate those steps.
I usually spend an average of 30 minutes doing these steps, and I'm wondering if there is any way to optimize, automate, or do something else to spend less time.
There are many ways. For starters, steps one through eight can be done in a single shell script. You could check out Phing, an automated deployment system. Also, you might want to delve into continuous integration for even more control over how and when the software is deployed.
Doing this manually is, like you say, asking for trouble.
For starters, you could upload your files into a new webroot and, when done, switch over the DocumentRoot in Apache, leaving the site available during the copy process. For any shared files you could use a symlink to a common folder (e.g. uploaded images).
You could probably take the backup during operation as well, if you don't care about consistency in the database. For migrations that don't "break" functionality, you could also run the migration and test it on your new webroot under another hostname, if consistency isn't a problem.
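A sketch of the webroot switch, assuming a Debian-style Apache layout and that the new code has already been uploaded to its own directory (all paths and names are assumptions):
# shared files (e.g. user uploads) live outside the webroots and get symlinked into each release
ln -s /var/www/shared/uploads /var/www/mysite-new/uploads
# point the vhost at the new webroot and reload Apache without dropping active connections
sudo sed -i 's|DocumentRoot /var/www/mysite-old|DocumentRoot /var/www/mysite-new|' /etc/apache2/sites-available/mysite.conf
sudo apache2ctl graceful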
The best option is always to use multiple web servers so that you can take one offline for testing while the other one is operational, but you will still have problems with consistency. However, I assume that is not an option, since you don't mention it.
I've successfully made changes to my httpd.conf file in order to modify the DocumentRoot to my Dropbox folder. No longer does localhost point to /etc/www, but rather /home/Dropbox/www...
This is convenient because no matter which computer I'm on, the changes to my web files are synchronized, and Dropbox keeps a transparent versioning system in the background.
I'm wondering if it is also possible to store MySQL data (not necessarily the actual binaries) in my Dropbox folder. Data synchronization would be equally useful if this were possible. What kind of changes would one make to have databases, tables, and other user-generated content pushed off to a Dropbox folder, rather than my local hard drive?
It is probably easier, and more reliable, to use a remote MySQL database. Most web hosts offer MySQL services, and some are even free. Syncing MySQL databases is a pain, no matter how you do it! If you start copying the data files themselves, you are just waiting for corruption!
It is possible if you can successfully copy MySQL's data folder and point MySQL at the new location; however, you may encounter problems with concurrency. It's not a recommended way. Why don't you use a version control system such as SVN or Git, together with a MySQL server that allows remote connections?
Databases get updated very frequently, and Dropbox will be forced to sync just as frequently; it will sometimes fail to sync, and your connection will be eaten up by Dropbox updates. That's really not good practice.
Sure you can: edit your my.cnf file and change datadir from what it was (perhaps /var/lib/mysql/) to /home/Dropbox/mysql...
Dropbox cannot handle file ownership (and permissions), so if your original database file was owned by mysql:mysql, after every sync the owner would be youruser:yourgroup, the permissions would be set to 664, and the database would be read-only for mysql!
The solution is to add the user mysql to the group yourgroup, and then it works with 664 permissions and you don't have to manually change the ownerships back to mysql every time.
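A rough sketch of both steps on a Linux box (the paths, user and group names, and config file location are assumptions, and service names vary by distribution):
sudo systemctl stop mysql                                      # stop MySQL before touching the data directory
sudo rsync -a /var/lib/mysql/ /home/youruser/Dropbox/mysql/    # copy the existing data directory into Dropbox
sudo sed -i 's|^datadir.*|datadir = /home/youruser/Dropbox/mysql|' /etc/mysql/my.cnf   # point datadir at the new location
sudo usermod -a -G yourgroup mysql                             # give the mysql user access to files Dropbox re-owns as youruser:yourgroup
sudo systemctl start mysql
On some distributions you may also have to loosen the AppArmor or SELinux policy before mysqld is allowed to use a data directory under /home.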