I've successfully modified my httpd.conf file to point the DocumentRoot at my Dropbox folder. localhost no longer points to /etc/www, but rather to /home/Dropbox/www...
This is convenient because no matter which computer I'm on, the changes to my web files are synchronized, and Dropbox keeps a transparent versioning system in the background.
I'm wondering if it is also possible to store MySQL data (not necessarily the actual binaries) in my Dropbox folder. Data synchronization would be equally useful there. What kind of changes would one make to have databases, tables, and other user-generated content pushed off to a Dropbox folder rather than my local hard drive?
It is probably easier, and more reliable, to use a remote MySQL database. Most web hosts offer MySQL services; some are even free. Syncing MySQL databases is a pain no matter how you do it! If you start copying the data files themselves, you are just asking for corruption!
It is possible if you can copy MySQL's data folder and point the server at it, but you may run into concurrency problems. It's not a recommended approach. Why not use a version control system such as SVN or Git, together with a MySQL server that allows remote connections?
Databases get updated very frequently, so Dropbox will be forced to sync constantly; it will sometimes fail to sync, and your bandwidth will be wasted on Dropbox updates. That's really not good practice.
Sure you can: edit your my.cnf file and change datadir from what it was (perhaps /var/lib/mysql/) to /home/Dropbox/mysql...
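For example, on a typical Linux setup the move could look something like this (paths and the service name are illustrative, and on some distros you may also have to loosen AppArmor/SELinux rules for the new datadir):

```bash
# Stop MySQL before touching the data directory
sudo service mysql stop

# Copy the existing data directory into the Dropbox folder, preserving
# ownership and permissions (paths are illustrative; adjust to your system)
sudo cp -a /var/lib/mysql /home/Dropbox/mysql

# In my.cnf (e.g. /etc/mysql/my.cnf), under the [mysqld] section, set:
#   datadir = /home/Dropbox/mysql

sudo service mysql start
```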
Dropbox cannot handle file ownership (or permissions), so if your original database files were owned by mysql:mysql, after every sync the owner would be youruser:yourgroup, the permissions would be set to 664, and the database would be read-only for mysql!
The solution is to add the user mysql to the group yourgroup; then it works with 664 permissions and you don't have to manually change the ownership back to mysql every time.
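For example ("yourgroup" is the placeholder group name from above):

```bash
# Add the mysql user to your own group so 664-mode files stay writable for it
sudo usermod -a -G yourgroup mysql

# Restart MySQL so the new group membership takes effect
sudo service mysql restart
```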
So I just set up my XAMPP Apache server to load all the documents I create on my Google Drive. For example, if I type 127.0.0.1, it will show me all my web files on my Google Drive. I set this up so I can develop across my laptop, which I use at school, and my desktop, which I use at home, without having to copy files back and forth between computers. This works the way I want it to, but I forgot one thing: how am I supposed to sync the databases that I create? My question to you is, how can I sync my databases to the cloud or somewhere else so I don't have to export and import every time I switch devices?
Also, I would like to stay away from using hosting, as I won't be online all the time.
The database server (the application itself) expects exclusive access to the data files. If you try to synchronize a data file between two systems, you're going to have issues and probably data loss.
What you could do is synchronize the data directory and make sure you're only running one server at a time. So when you're done working on the laptop, shut down the MySQL server process/service (mysqld), wait for it to finish synchronizing, and then start up mysqld on the desktop. I suspect this will work, but it's a pretty non-standard usage, so anything could happen.
To make it easier, I'd definitely consider writing a wrapper script/batch file that first tests for the presence of a lock file, then (if none exists) creates one and starts mysqld, and on exit makes sure mysqld is stopped before deleting the lock file.
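A rough sketch of such a wrapper, assuming the data directory lives inside the synced Google Drive folder and MySQL runs as a system service (the paths and service name are made up for illustration):

```bash
#!/bin/sh
# start-mysql-locked.sh: only start mysqld if no other machine holds the lock.
# The lock file lives inside the synced folder so both machines can see it.
LOCK=/home/you/GoogleDrive/mysql-data/mysql.lock   # illustrative path

if [ -e "$LOCK" ]; then
    echo "Lock file exists. Is MySQL still running on the other machine?" >&2
    exit 1
fi

hostname > "$LOCK"          # record which machine holds the lock
sudo service mysql start

echo "Press Enter when you are done working, to stop MySQL and release the lock."
read dummy

sudo service mysql stop
rm -f "$LOCK"
```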
Anyway, to make this happen you would first stop mysqld everywhere, take the one MySQL data directory that you wish to use, copy it to your Google Drive, then edit all of your MySQL configuration files to point to the new data directory instead of the old one. Whether XAMPP makes this more difficult than it should be, I'm not sure, but with stock MySQL it should be pretty trivial.
Remember that just because it's possible doesn't make it a good idea, and likewise, just because it's not a good idea doesn't mean it won't work. So I'm saying it's not a good idea to do this, but if done with proper attention it will "probably" work.
Hope that helps.
So, I'm writing a PHP script which will be tied to a cron job that will back up my site's db on a regular basis. The db will get saved to a new SQL file daily, just in case anything unfortunate should happen to the live version. I am aware of how bad it would be for someone to get hold of one of these files, exposing both the db structure and user email addresses (passwords are encrypted).
I am not extraordinarily security savvy, and this is one of those things you HAVE to get right the first time around. I'm not too proud to admit when it's best to ask for help, so I figured I'd inquire with the trusted Stack Overflow community. (I realize it's likely there is a similar question somewhere, but I have been unable to find it.)
What steps do I need to take to make sure these files can't be accessed? Note, it is an Apache server. Is it enough to store them in a directory outside of the web root that is limited to group read/write (no public read)? Or is it necessary to password-protect the directory or even encrypt the actual files? I'd really rather not encrypt the files if I don't have to, since that would just make them more of a pain to use, but if it's needed...
Also relevant, access to these files is NOT being built into an application interface. I don't need or want to have them accessible by an http request. FTP only. So my question isn't regarding any password protection of a UI.
Thank you all so much for your time!
Storing on the server
If you must, store them outside of the web root and download them with something like rsync over ssh.
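For example, something along these lines could pull the dumps down over SSH (user, host and paths are placeholders):

```bash
# Pull the dump files from the server to your local machine over SSH
rsync -avz -e ssh user@example.com:/var/backups/db/ /home/you/db-backups/
```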
Best option (assuming you're running MySQL)
Don't store them on the server, but rather run a cron on your local machine and use ssh and MySQL to do the dump to your local system. That way there is no ominous file someone can have that contains all of your data (unless of course your local network is compromised).
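A hedged sketch of such a local cron job, assuming SSH key-based login and that the MySQL credentials live in a ~/.my.cnf on the server (names and paths are placeholders):

```bash
# Crontab entry on your LOCAL machine: dump the remote DB over SSH every night at 02:00.
# Note that % must be escaped as \% inside crontab.
0 2 * * * ssh user@example.com "mysqldump mydb" | gzip > /home/you/db-backups/mydb-$(date +\%F).sql.gz
```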
Another option (again another example with MySQL)
You might also look into doing database replication with your local machine by setting up a local MySQL server.
I have a website that is currently utilizing two servers, an application server and a database server. However, the load on the application server is increasing, so we are going to add a second application server.
The problem I have is that the website has users upload files to the server. How do I get the uploaded files onto both of the servers?
I do not want to store images directly in a database as our application is database intensive already.
Is there a way to sync the servers across each other or is there something else I can do?
Any help would be appreciated.
Thanks
EDIT: I am adding the following links, which helped me understand this question more:
Synchronize Files on Multiple Servers
and
Keep Uploaded Files in Sync Across Multiple Servers - LAMP
For anyone reading this post: NFS seems to be the better of the two.
NFS will keep files in sync. You could also use FTP to upload the files to all servers, but NFS looks like the way to go.
This is a question for Server Fault.
Anyway, I think you should definitely consider getting into the "cloud".
Syncing uploads from one server to another is simply unreliable: you have no idea what kind of errors you can get or why you get them. The syncing process will also load both servers. For me, the proper solution is going to the cloud.
Should you choose the syncing method, you have a couple of options (a minimal rsync/cron sketch follows this list):
Use rsync to sync the files you need between the servers.
Use crontab to sync the files every X minutes/hours/days.
Copy the files upon some event (user login etc)
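A sketch of the rsync-plus-cron variant (hostnames, users and paths are placeholders; it assumes SSH key auth and that uploads are only ever written on server A, i.e. a one-way sync):

```bash
# On application server A: push new uploads to server B every 5 minutes.
*/5 * * * * rsync -az --delete /var/www/uploads/ deploy@app-b.example.com:/var/www/uploads/
```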
I got this answer from Server Fault:
The most appropriate course of action in a situation like this is to break the file share into a separate service of its own. Don't duplicate files if you have a network that can let the files be "everywhere (almost) at once." You can do this through NFS/CIFS or through a proper storage protocol like iSCSI. Mount as local storage in the appropriate directory. Depending on the performance of your network and your storage needs, this could add a couple of undetectable milliseconds to page load time.
So using NFS to share server files would work, OR, as stated by @kgb, you could designate one single server to hold all uploaded files and have the other servers pull from it (just make sure you run a cron job or something to back up the files).
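If you go the NFS route, mounting the shared export on each application server looks roughly like this (server name and paths are placeholders):

```bash
# On each application server: mount the shared uploads export from the file server.
sudo mount -t nfs fileserver.internal:/export/uploads /var/www/uploads

# Or make it permanent via /etc/fstab:
# fileserver.internal:/export/uploads  /var/www/uploads  nfs  defaults,_netdev  0  0
```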
Most sites solve this problem by using a designated third-party file server like Amazon S3 for the user uploads.
Another answer would be to use a piece of software called BTSync. It is very easy to install and use, and it lets you keep files in sync across as many servers as you need. It takes only three terminal commands to install and is very efficient.
You can use the DB server for storage... not in the DB itself, I mean; run a web server on it too. It is not going to increase CPU load much, but it is going to require more bandwidth.
You could do it with rsync. People have suggested using NFS, but that way you create a single point of failure: if the NFS server goes down, both your servers are screwed... correct me if I'm wrong.
I maintain a custom PHP application (built for me) that is hosted on a web server. Sometimes I add new features or fix bugs, and after testing locally I upload the changes to the web server. It's not a critical application (it's a game), but most of the time there are some people connected.
The steps that I make to upgrade the application:
Access via FTP (Filezilla)
Upload a .htaccess file that redirects everyone (except my IP) to a mantain.html file
Check that access is denied for every IP except mine.
Backup old code
Upload new code
Go to phpMyAdmin
Backup DB
Execute scripts for the DB
Test that all works fine (if not -> revert the backups)
Remove the .htaccess file
I usually spend an average of 30 minutes doing these steps, and I'm wondering if there is any way to optimize or automate them to spend less time. I also know that if I can automate some steps, they are less prone to errors.
Several other answers suggest PHP-specific deployment tools, but since I'm not very familiar with PHP, I'll offer some general tips. These suggestions may be made redundant by some of the tools already suggested, though.
First off, don't upload a new .htaccess file every time; just keep two of them on your server. Perhaps call them .htaccess-permanent and .htaccess-maintenance. Then create a symlink to the one that ought to be active. Once you've tested that access is properly denied, you don't have to repeat that manual testing phase every single time you do an upgrade.
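A sketch of the symlink juggling, assuming your Apache configuration allows the .htaccess to be a symlink (filenames follow the naming above):

```bash
# One-time setup: keep both variants on the server and point .htaccess at the live one
ln -s .htaccess-permanent .htaccess

# Entering maintenance mode: repoint the symlink
ln -sfn .htaccess-maintenance .htaccess

# Leaving maintenance mode: point it back
ln -sfn .htaccess-permanent .htaccess
```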
I'd also write a shell script to do most everything for me. My new work flow would look like this:
Upload new code to server in a directory called new/
Log in to the server via shell, and execute the upgrade script
Test the new site
Run upgrade-finalize
The end.
Now for the interesting part, the upgrade script will do this:
It will delete the .htaccess symlink and re-create it, pointing to .htaccess-maintenance.
It will copy the current code in current/ to backup/
It will back up the DB, using the exact same commands that PHPMyAdmin uses
It will move the contents of new/ (which you just uploaded) to current/
It will execute the scripts for the DB
And the upgrade-finalize script will simply:
Delete the .htaccess symlink, and re-create it, pointing to .htaccess-permanent once again
The only possibly tricky part here will be getting the exact commands that PHPMyAdmin uses to back up your database, but it's probably a simple mysqldump command, and you can probably get that info from PHPMyAdmin or some logs, or something. Sorry, I don't know more about PHPMyAdmin to help in this specific area.
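A rough sketch of what the upgrade script could look like; the directory layout (current/, new/, backup-*) follows the steps above, while APPROOT, the database name/user and the migrations.sql filename are placeholders of mine:

```bash
#!/bin/sh
# upgrade.sh: rough sketch only. Directory names follow the workflow above;
# APPROOT, DB, dbuser and migrations.sql are illustrative placeholders.
set -e

APPROOT=/var/www/mygame
DB=mygame_db
STAMP=$(date +%Y%m%d-%H%M%S)

cd "$APPROOT"

# 1. Switch the site into maintenance mode
ln -sfn .htaccess-maintenance .htaccess

# 2. Back up the current code
cp -a current "backup-$STAMP"

# 3. Back up the database (essentially what phpMyAdmin's export does)
mysqldump -u dbuser -p "$DB" > "backup-$STAMP.sql"

# 4. Put the newly uploaded code live
rm -rf current
mv new current

# 5. Apply the DB migration scripts uploaded alongside the code
mysql -u dbuser -p "$DB" < current/migrations.sql
```

The upgrade-finalize script is then nothing more than the single ln -sfn back to .htaccess-permanent shown earlier.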
Look into a deployment tool like Capistrano that allows you to automate those steps.
I usually spend an average of 30 minutes doing these steps, and I'm wondering if there is any way to optimize or automate them to spend less time.
There are many ways. For starters, steps one through eight can be done in a single shell script. You could check out Phing, an automated deployment system. Also, you might want to delve into continuous integration for even more control over how and when the software is deployed.
Doing this manually is, like you say, asking for trouble.
For starters, you could upload your files into a new webroot and, when done, switch over the DocumentRoot in Apache, leaving the site available during the copy process. For any shared files you could use a symlink to a common folder (e.g., uploaded images).
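One common way to implement that switch, rather than editing the vhost each time, is to have DocumentRoot point at a symlink and flip the symlink; a minimal sketch, with illustrative paths and release names:

```bash
# Apache vhost (set up once):  DocumentRoot /var/www/site/current
# where "current" is a symlink to whichever release directory is live.

# 1. Upload the new code next to the live release
rsync -a /tmp/new-release/ /var/www/site/releases/r42/

# 2. Link shared content (e.g. uploaded images) into the new release
ln -sfn /var/www/site/shared/uploads /var/www/site/releases/r42/uploads

# 3. Flip the webroot symlink; the switch is effectively atomic
ln -sfn /var/www/site/releases/r42 /var/www/site/current
```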
You could probably take the backup during operation as well, if you don't care about consistency in the database. For migrations that don't "break" functionality, you could also run the migration and test it on your new webroot under another hostname, if consistency isn't a problem.
The best option is always to use multiple web servers so that you can take one offline for testing while the other one is operational, but you will still have problems with consistency. However, I assume that is not an option since you don't mention it.
I've been working on a website on my own XAMPP server on my computer, with my own database and everything, and so far it's been pretty smooth, surprisingly. Now I want to upload it to a host; I found a free web host and I was able to upload the site through Dreamweaver/FTP. I exported my DB into an SQL dump and then ran it on the live DB so that they would have the same data.
I'm curious, what's the best way to keep these DB's in sync?
1) In my header.php, I specify some connection variables for my local DB, and I have to make sure to change them when I upload header.php to the site so it has the correct connection variables for the remote DB. Maybe I should have a file on my hosting server and a file on my local server that specify the connection information, and just never touch them?
2) If I change something in my local DB, I have to copy the SQL and run it in my remote one as well. Is there a good way to handle this?
Thanks again!
http://www.databasejournal.com/features/mysql/article.php/3355201/Database-Replication-in-MySQL.htm
For #1, you can either do that (the route most people take) or make the config file check the IP before loading server-specific configuration. If the IP is 127.0.0.1, it loads your development configuration; if it is the IP of the host, it loads a different config.
I personally do not know of a better way to handle #2. So, this answer will have to be incomplete.
1: Yes, create a config.php file with the server-specific information and include it when you need it. This is incredibly common and normal. Ideally, you can keep this file a little separate from your other files so that it's easy to grab all of your application files and copy them to the live server without also copying the config file. Keep a backup of your live config file somewhere, because one day you will overwrite it, and it's much better for your heart if you don't have to scramble to figure out what the live database password was.
2: There are some automated ways of handling this, but they're very complicated. What I usually do is create an empty text file named changes.sql or something. As I make changes to the dev database, I paste the CREATE TABLE and ALTER TABLE, etc. queries into the changes.sql file. This way I have one file with all the changes I need to make to the live server when I'm ready to update the live site. After I do the update, I save the changes.sql file somewhere and create a new empty file for the next changes.
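For example, the workflow could look like this (the database name, user and the example statements are placeholders of mine):

```bash
# changes.sql accumulates the schema changes made on dev, e.g.:
#   ALTER TABLE users ADD COLUMN last_login DATETIME NULL;
#   CREATE TABLE audit_log (id INT AUTO_INCREMENT PRIMARY KEY, message TEXT);
# (table and column names above are made up for illustration)

# When it's time to update the live site, apply the whole file in one go:
mysql -u liveuser -p live_db < changes.sql
```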
More on 2: You can also just do a dump of the whole dev database and copy it live. Most sites, though, have data on the live server that should not be destroyed or copied to dev: user information, orders, login tracking, user comments, whatever. So you generally do not want to just replace all your live data with dev data.
I usually keep the template stuff separate from the db connection, global variable stuff, and session stuff with an include file like 'init.php' or 'config.php'. When you update your stuff, most likely you won't need to overwrite that file.
I use Linux, so I use 'mysqldump' to get a .sql file, upload it to the server, then just run 'mysql -u user -p databasename < database.sql'. It would be great if there were a quicker way that I don't know of.
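Spelled out, that round trip is roughly the following (user names, host and database name are placeholders; if the free host doesn't offer SSH, the last two steps happen through its import tool instead):

```bash
# On the local machine: dump the dev database to a file
mysqldump -u devuser -p databasename > database.sql

# Copy it to the host
scp database.sql user@example.com:~/

# On the host: load it into the live database
mysql -u liveuser -p databasename < database.sql
```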