Backup MySQL Database to Dropbox - php

In the past I have received a lot of help from the SO community, so once I figured this out, I thought here's my opportunity to give back a little. Hopefully it helps someone.
The issue I faced: my core site is built on WordPress, with a separate database for an e-commerce section of the site, and I wanted to back up the entire site (all files, both databases, etc.) to Dropbox on a daily basis.
After a lengthy search, I couldn't find anything that did exactly what I was looking for.
Disclaimer: You don't need to be running WordPress or an e-commerce site for this to work. It will work on any MySQL database(s) and requires PHP.
I came across the WordPress Backup to Dropbox plugin, which got me about 90% there. The plugin lets me back up all the files on the site, plus it does a WordPress database backup at a frequency you schedule.
The problem is that the plugin only does a backup of the WordPress database, but not my e-commerce database.
I also found a MySQL backup to Dropbox tutorial (credit where it's due), which some of the code below is based on. It is a great tutorial, but I wanted to back up and delete the backup at different times - the tutorial backed up and deleted everything at the same time.

The solution I came up with is not specific to WordPress or an e-commerce site. Anyone who has a MySQL database and can run PHP should be able to benefit from this, perhaps with a few tweaks, but the end result should be the same.
To store a backup of the e-commerce database, I created a folder in my site's root directory (/temp - call it whatever you want). Then I had to actually create the database backup. Open up a text editor and create a file called backup_dropbox.php.
backup_dropbox.php
<?php
// location of your /temp directory relative to this file. In my case this file is in the same directory.
$tempDir = "";
// username for e-commerce MySQL DB
$user = "ecom_user";
// password for e-commerce MySQL DB
$password = "ecomDBpa$$word";
// e-commerce DB name to backup
$dbName = "ecom_db_name";
// e-commerce DB hostname
$dbHost = "localhost";
// e-commerce backup file prefix
$dbPrefix = "db_ecom";
// create backup sql file
$sqlFile = $tempDir.$dbPrefix.".sql";
$createBackup = "mysqldump -h ".$dbHost." -u ".$user." --password='".$password."' ".$dbName." > ".$sqlFile;
exec($createBackup);
//to backup multiple databases, copy all of the above code for each DB, rename the variables to something unique, and set their values to whatever is appropriate for the different databases.
?>
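For example, adding a second database backup could look something like this (a rough sketch; these lines would go inside backup_dropbox.php before the closing ?>, and the credentials and names below are placeholders you'd replace with your own):
// second database - placeholder values, replace with your own
$user2 = "other_user";
$password2 = 'otherDBpassword';
$dbName2 = "other_db_name";
$dbHost2 = "localhost";
$dbPrefix2 = "db_other";
$sqlFile2 = $tempDir.$dbPrefix2.".sql";
$createBackup2 = "mysqldump -h ".escapeshellarg($dbHost2)." -u ".escapeshellarg($user2)." --password=".escapeshellarg($password2)." ".escapeshellarg($dbName2)." > ".escapeshellarg($sqlFile2);
exec($createBackup2);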
Now this script should create a backup of the database "ecom_db_name" whenever it is run. I want it to run on a schedule, a couple of minutes before my WordPress backup starts at 7am. You can either use WP-Cron (if your site gets enough traffic to reliably trigger it at the right time) or schedule a cron job.
I am no expert on cron jobs and these types of commands, so there may be a better way. I have used this on two different sites and run them two different ways. Play around with what works best for you.
The first way is on a directory that is not password protected, the second is for a password protected directory. (Replace username and Password with your username and password, and obviously set example.com/temp/backup_dropbox.php to wherever the file resides on your server).
Cron Job to run backup_dropbox.php 5 minutes before WP backup
55 6 * * * php /home/webhostusername/public_html/temp/backup_dropbox.php
OR
55 6 * * * wget -q -O /dev/null http://username:Password@example.com/temp/backup_dropbox.php
Now the cron job is set up to run backup_dropbox.php and create my database backup every day at 6:55am. The WordPress to Dropbox backup that starts at 7am usually takes about 5-6 minutes, but could take a little longer.
I want to delete my .sql backup files after they have successfully been backed up to Dropbox so they're not sitting out there forever for someone to somehow open/download.
Fire up the text editor again, and create another file called clr_bkup.php.
clr_bkup.php
<?php
$tmpDir = "";
//delete the database backup file
unlink($tmpDir.'db_ecom.sql');
// if you had multiple DB backup files to remove just copy the line above for each backup, and replace 'db_ecom.sql' with your DB backup file name
?>
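If you'd rather not list each file by name, a small variation (just a sketch) is to sweep the /temp directory for any leftover .sql files:
<?php
$tmpDir = "";
// remove every leftover .sql backup file in the temp directory
foreach (glob($tmpDir.'*.sql') as $backupFile) {
    unlink($backupFile);
}
?>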
Since the WordPress backup takes a few minutes to finish up, I want to run a cron job to execute clr_bkup.php at 10 past 7, which should give it enough time. Again, the first cron job below is for an unprotected directory, and the second for a password protected directory.
Cron Job to run clr_bkup.php 10 minutes after WP backup starts
10 7 * * * php /home/webhostusername/public_html/temp/clr_bkup.php
OR
10 7 * * * wget -q -O /dev/null http://username:Password@example.com/temp/clr_bkup.php
Sequence of events
To help wrap your head around what's going on, here's the timeline:
6:55am: Cron Job is scheduled to run backup_dropbox.php, which creates a backup file of my database.
7:00am: WordPress Backup to Dropbox runs, and backs up all files that have changed since the last backup, which includes my 5 minute old, newly created database backup.
7:10am: By now the Dropbox backup has finished up, so the Cron Job is scheduled to run clr_bkup.php, which removes the backup file from the server.
Variables, Notes, and Misc. Info
Timing
The first thing that hung me up was getting the timing right. For simplicity, I used the times in the example above as if everything was happening in the same time zone. In reality, my web host's server is in the US West Coast, while my WordPress timezone is set to the US East Coast (a 3 hour difference). My actual cron jobs are set to run 3 hours earlier (server time) than what is displayed above. This will be different for everyone. The best bet is to know the time difference up front.
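For example, with that 3-hour offset, the first cron job shown above actually goes into my crontab in server (Pacific) time as:
55 3 * * * php /home/webhostusername/public_html/temp/backup_dropbox.php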
Run Backup with a Time Check
In the directory that is not password protected, I wanted to keep the backup_dropbox.php script from running at any other time of day than 6:55am (by someone visiting it in a browser at 10am, for example). So I added a time check to the beginning of backup_dropbox.php: if it isn't 6:55am, the rest of the code doesn't execute. I modified backup_dropbox.php to:
<?php
$now = time();
$hm = date('H:i', $now); // 24-hour format, so this only matches 6:55am and not 6:55pm
if ($hm != '06:55') {
    echo "error message";
} else {
    // DB BACKUP code from above goes here
}
?>
I suppose you could also add this to the clr_bkup.php file to only let it delete the backup files at 7:10am, but I didn't really see the need since the only time clr_bkup.php will do anything is between 6:55-7:10am anyhow. Up to you though if you decide to go that route.
Not on WordPress?
There are a number of free and paid services that will backup your website either to Dropbox or another similar service like Google Drive, Amazon S3, Box, etc., or some will store the files on their servers for a fee.
Backup Machine, Codeguard, Dropmysite, Backup Box, or Mover to name a few.
Want Redundant Offsite Backups?
There are plenty of services that will allow you to automatically create remote redundant backups on any of the cloud storage sites listed above.
For example if you backup your site to Dropbox, you can use a service called If This Then That (IFTTT) to automatically add files uploaded to a particular Dropbox folder to Google Drive. That way should Dropbox ever have an issue with their servers, you'll also have a Google Drive backup. Backup Box listed above could also do something like this.
Hope this helps
There may be a better way of doing all of this. I was in a pinch and needed to figure something out that works reliably, which this does. If there are any improvements that can be made, please share in the comments.

I think this post explains a solution which can help you:
http://ericsilva.org/2012/07/05/backup-mysql-database-to-dropbox/

Related

General error: 5 database is locked in PDO using sqlite [duplicate]

When I enter this query:
sqlite> DELETE FROM mails WHERE (id = 71);
SQLite returns this error:
SQL error: database is locked
How do I unlock the database so this query will work?
On Windows you can try this program http://www.nirsoft.net/utils/opened_files_view.html to find out which process is handling the db file. Try closing that program to unlock the database.
On Linux and macOS you can do something similar; for example, if your locked file is development.db:
$ fuser development.db
This command will show what process is locking the file:
> development.db: 5430
Just kill the process...
kill -9 5430
...And your database will be unlocked.
I caused my sqlite db to become locked by crashing an app during a write. Here is how I fixed it:
echo ".dump" | sqlite old.db | sqlite new.db
Taken from: http://random.kakaopor.hu/how-to-repair-an-sqlite-database
The SQLite wiki DatabaseIsLocked page offers an explanation of this error message. It states, in part, that the source of contention is internal (to the process emitting the error). What this page doesn't explain is how SQLite decides that something in your process holds a lock and what conditions could lead to a false positive.
This error code occurs when you try to do two incompatible things with a database at the same time from the same database connection.
Changes related to file locking were introduced in v3, may be useful for future readers, and can be found here: File Locking And Concurrency In SQLite Version 3
If you want to remove a "database is locked" error then follow these steps:
Copy your database file to some other location.
Replace the database with the copied database. This will dereference all processes which were accessing your database file.
Deleting the -journal file sounds like a terrible idea. It's there to allow sqlite to roll back the database to a consistent state after a crash. If you delete it while the database is in an inconsistent state, then you're left with a corrupted database. Citing a page from the sqlite site:
If a crash or power loss does occur and a hot journal is left on the disk, it is essential that the original database file and the hot journal remain on disk with their original names until the database file is opened by another SQLite process and rolled back. [...]
We suspect that a common failure mode for SQLite recovery happens like this: A power failure occurs. After power is restored, a well-meaning user or system administrator begins looking around on the disk for damage. They see their database file named "important.data". This file is perhaps familiar to them. But after the crash, there is also a hot journal named "important.data-journal". The user then deletes the hot journal, thinking that they are helping to cleanup the system. We know of no way to prevent this other than user education.
The rollback is supposed to happen automatically the next time the database is opened, but it will fail if the process can't lock the database. As others have said, one possible reason for this is that another process currently has it open. Another possibility is a stale NFS lock, if the database is on an NFS volume. In that case, a workaround is to replace the database file with a fresh copy that isn't locked on the NFS server (mv database.db original.db; cp original.db database.db). Note that the sqlite FAQ recommends caution regarding concurrent access to databases on NFS volumes, because of buggy implementations of NFS file locking.
I can't explain why deleting a -journal file would let you lock a database that you couldn't before. Is that reproducible?
By the way, the presence of a -journal file doesn't necessarily mean that there was a crash or that there are changes to be rolled back. Sqlite has a few different journal modes, and in PERSIST or TRUNCATE modes it leaves the -journal file in place always, and changes the contents to indicate whether or not there are partial transactions to roll back.
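If you want to see which journal mode a particular database is using, you can just ask it; from PHP with PDO that's something like this (the path is a placeholder):
<?php
// report the journal mode of an SQLite database (delete, persist, truncate, wal, ...)
$pdo = new PDO('sqlite:/path/to/mydata.db');
echo $pdo->query('PRAGMA journal_mode')->fetchColumn();
?>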
The SQLite db files are just files, so the first step would be to make sure the file isn't read-only. The other thing to do is to make sure that you don't have some sort of GUI SQLite DB viewer with the DB open. You could have the DB open in another shell, or your code may have the DB open. Typically you would see this if a different thread, or an application such as SQLite Database Browser, has the DB open for writing.
My lock was caused by the system crashing and not by a hanging process. To resolve this, I simply renamed the file then copied it back to its original name and location.
Using a Linux shell that would be:
mv mydata.db temp.db
cp temp.db mydata.db
If a process has a lock on an SQLite DB and crashes, the DB stays locked permanently. That's the problem. It's not that some other process has a lock.
I had this problem just now, using an SQLite database on a remote server, stored on an NFS mount. SQLite was unable to obtain a lock after the remote shell session I used had crashed while the database was open.
The recipes for recovery suggested above did not work for me (including the idea to first move and then copy the database back). But after copying it to a non-NFS system, the database became usable and no data appears to have been lost.
Some operations, like INDEXing, can take a very long time - and the whole database is locked while they run. In instances like that, it might not even use the journal file!
So the best/only way to check if your database is locked because a process is ACTIVELY writing to it (and thus you should leave it the hell alone until it's completed its operation) is to md5 (or md5sum on some systems) the file twice.
If you get a different checksum, the database is being written, and you really really REALLY don't want to kill -9 that process because you can easily end up with a corrupt table/database if you do.
I'll reiterate, because it's important - the solution is NOT to find the locking program and kill it - it's to find if the database has a write lock for a good reason, and go from there. Sometimes the correct solution is just a coffee break.
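A rough sketch of that check (my wording; the path is a placeholder):
<?php
// hash the database file twice, a few seconds apart;
// differing hashes mean something is actively writing to it - leave it alone
$db = '/path/to/mydata.db';
$first = md5_file($db);
sleep(5);
$second = md5_file($db);
echo ($first === $second) ? "no active writes detected\n" : "database is being written to - leave it alone\n";
?>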
The only way to create this locked-but-not-being-written-to situation is if your program runs BEGIN EXCLUSIVE, because it wanted to do some table alterations or something, then for whatever reason never sends an END afterwards, and the process never terminates. All three conditions being met is highly unlikely in any properly-written code, and as such 99 times out of 100 when someone wants to kill -9 their locking process, the locking process is actually locking your database for a good reason. Programmers don't typically add the BEGIN EXCLUSIVE condition unless they really need to, because it prevents concurrency and increases user complaints. SQLite itself only adds it when it really needs to (like when indexing).
Finally, the 'locked' status does not exist INSIDE the file as several answers have stated - it resides in the Operating System's kernel. The process which ran BEGIN EXCLUSIVE has requested from the OS a lock be placed on the file. Even if your exclusive process has crashed, your OS will be able to figure out if it should maintain the file lock or not!! It is not possible to end up with a database which is locked but no process is actively locking it!!
When it comes to seeing which process is locking the file, it's typically better to use lsof rather than fuser (this is a good demonstration of why: https://unix.stackexchange.com/questions/94316/fuser-vs-lsof-to-check-files-in-use). Alternatively if you have DTrace (OSX) you can use iosnoop on the file.
I added "Pooling=true" to connection string and it worked.
This error can be thrown if the file is in a remote folder, like a shared folder. I changed the database to a local directory and it worked perfectly.
I found the documentation of the various states of locking in SQLite to be very helpful. Michael, if you can perform reads but can't perform writes to the database, that means that a process has gotten a RESERVED lock on your database but hasn't executed the write yet. If you're using SQLite3, there's a new lock called PENDING where no more processes are allowed to connect but existing connections can still perform reads, so if this is the issue you should look at that instead.
I had this problem in an app that accesses SQLite from two connections - one read-only and one for reading and writing. It looked like the read-only connection was blocking writes from the second connection. It turned out that you have to finalize or, at least, reset prepared statements IMMEDIATELY after use. While a prepared statement is left open, it keeps the database blocked for writing.
DON'T FORGET TO CALL:
sqlite3_reset(xxx);
or
sqlite3_finalize(xxx);
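If you're on PHP/PDO rather than the C API, the equivalent housekeeping is to release the statement as soon as you're done with it; roughly (placeholder path, table from the question):
<?php
$pdo = new PDO('sqlite:/path/to/mydata.db');
$stmt = $pdo->prepare('SELECT * FROM mails WHERE id = ?');
$stmt->execute(array(71));
$rows = $stmt->fetchAll();
$stmt->closeCursor(); // free the result set so it stops blocking writers
$stmt = null;         // and drop the statement itself when finished
?>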
I just had something similar happen to me - my web application was able to read from the database, but could not perform any inserts or updates. A reboot of Apache solved the issue at least temporarily.
It'd be nice, however, to be able to track down the root cause.
The lsof command on my Linux environment helped me figure out that a hung process was keeping the file open.
I killed the process and the problem was solved.
This link solved the problem: When Sqlite gives : Database locked error
It solved my problem and may be useful to you.
And you can use BEGIN TRANSACTION and END TRANSACTION to avoid locking the database in the future.
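With PDO that just means wrapping your writes explicitly, for example:
<?php
$pdo = new PDO('sqlite:/path/to/mydata.db'); // placeholder path
$pdo->beginTransaction();
$pdo->exec('DELETE FROM mails WHERE id = 71');
$pdo->commit(); // or $pdo->rollBack() if something went wrong
?>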
It should be an internal problem of the database...
For me it manifested after trying to browse the database with "SQLite Manager"...
So, if you can't find another process connected to the database and you just can't fix it,
try this radical solution:
Export your tables (you can use "SQLite Manager" on Firefox)
If the migration altered your database schema, delete the last failed migration
Rename your "database.sqlite" file
Execute "rake db:migrate" to make a new working database
Give the database the right permissions so the tables can be imported
Import your backed-up tables
Write the new migration
Execute it with "rake db:migrate"
In my experience, this error is caused by having opened multiple connections.
e.g.:
1 or more sqlitebrowser (GUI)
1 or more electron thread
rails thread
I am not sure about the details of how SQLite3 handles multiple threads/requests, but when I closed the sqlitebrowser and the Electron thread, Rails ran fine and didn't block any more.
I ran into this same problem on Mac OS X 10.5.7 running Python scripts from a terminal session. Even though I had stopped the scripts and the terminal window was sitting at the command prompt, it would give this error the next time it ran. The solution was to close the terminal window and then open it up again. Doesn't make sense to me, but it worked.
I just had the same error.
After 5 minutes of googling I found that I hadn't closed a shell that was using the db.
Just close it and try again ;)
I had the same problem. Apparently the rollback function overwrites the db file with the journal, which is the same as the db file but without the most recent change. I've implemented this in my code below and it's been working fine since then, whereas before my code would just get stuck in a loop as the database stayed locked.
Hope this helps
my python code
##############
#### Defs ####
##############
import time
import sqlite3 as sqlite

def conn_exec(connection, cursor, cmd_str):
    done = False
    try_count = 0.0
    while not done:
        try:
            cursor.execute(cmd_str)
            done = True
        except sqlite.IntegrityError:
            # Ignore this error because it means the item already exists in the database
            done = True
        except Exception as error:
            if try_count % 60.0 == 0.0:  # print error every minute
                print("\t", "Error executing command", cmd_str)
                print("Message:", error)
            if try_count % 120.0 == 0.0:  # if we have waited for 2 minutes, roll back
                print("Forcing Unlock")
                connection.rollback()
            time.sleep(0.05)
            try_count += 0.05

def conn_commit(connection):
    done = False
    try_count = 0.0
    while not done:
        try:
            connection.commit()
            done = True
        except sqlite.IntegrityError:
            # Ignore this error because it means the item already exists in the database
            done = True
        except Exception as error:
            if try_count % 60.0 == 0.0:  # print error every minute
                print("\t", "Error committing to the database")
                print("Message:", error)
            if try_count % 120.0 == 0.0:  # if we have waited for 2 minutes, roll back
                print("Forcing Unlock")
                connection.rollback()
            time.sleep(0.05)
            try_count += 0.05

##################
#### Run Code ####
##################
db_path = "my_database.db"  # set this to your SQLite database file
connection = sqlite.connect(db_path)
cursor = connection.cursor()
# Create tables if database does not exist
conn_exec(connection, cursor, '''CREATE TABLE IF NOT EXISTS fix (path TEXT PRIMARY KEY);''')
conn_exec(connection, cursor, '''CREATE TABLE IF NOT EXISTS tx (path TEXT PRIMARY KEY);''')
conn_exec(connection, cursor, '''CREATE TABLE IF NOT EXISTS completed (fix DATE, tx DATE);''')
conn_commit(connection)
One common reason for getting this exception is when you are trying to do a write operation while still holding resources for a read operation. For example, if you SELECT from a table, and then try to UPDATE something you've selected without closing your ResultSet first.
I was having "database is locked" errors in a multi-threaded application as well, which appears to be the SQLITE_BUSY result code, and I solved it with setting sqlite3_busy_timeout to something suitably long like 30000.
(On a side-note, how odd that on a 7 year old question nobody found this out already! SQLite really is a peculiar and amazing project...)
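If you're going through PDO (as in the question), the same idea seems to be exposed as the driver's timeout attribute, which for SQLite is given in seconds rather than milliseconds:
<?php
$pdo = new PDO('sqlite:/path/to/mydata.db'); // placeholder path
// wait up to 30 seconds for a lock to clear instead of failing immediately
$pdo->setAttribute(PDO::ATTR_TIMEOUT, 30);
?>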
Before going down the reboot option, it is worthwhile to see if you can find the user of the sqlite database.
On Linux, one can employ fuser to this end:
$ fuser database.db
$ fuser database.db-journal
In my case I got the following response:
philip 3556 4700 0 10:24 pts/3 00:00:01 /usr/bin/python manage.py shell
Which showed that I had another Python program with pid 3556 (manage.py) using the database.
An old question with a lot of answers; here are the steps I recently followed after reading the answers above. In my case the problem was due to CIFS resource sharing, which hasn't been reported previously, so I hope it helps someone.
Check no connections are left open in your Java code.
Check no other processes are using your SQLite db file with lsof.
Check that the user owner of your running JVM process has r/w permissions over the file.
Try to force the locking mode when opening the connection:
final SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(false);
config.setLockingMode(LockingMode.NORMAL);
connection = DriverManager.getConnection(url, config.toProperties());
If you're using your SQLite db file over an NFS shared folder, check this point of the SQLite FAQ, and review your mounting configuration options to make sure you're avoiding locks, as described here:
//myserver /mymount cifs username=*****,password=*****,iocharset=utf8,sec=ntlm,file,nolock,file_mode=0700,dir_mode=0700,uid=0500,gid=0500 0 0
I got this error in a scenario a little different from the ones described here.
The SQLite database rested on an NFS filesystem shared by 3 servers. On 2 of the servers I was able to run queries on the database successfully; on the third one, though, I was getting the "database is locked" message.
The thing with this 3rd machine was that it had no space left on /var. Every time I tried to run a query on ANY SQLite database located on this filesystem I got the "database is locked" message, and also this error in the logs:
Aug 8 10:33:38 server01 kernel: lockd: cannot monitor 172.22.84.87
And this one also:
Aug 8 10:33:38 server01 rpc.statd[7430]: Failed to insert: writing /var/lib/nfs/statd/sm/other.server.name.com: No space left on device
Aug 8 10:33:38 server01 rpc.statd[7430]: STAT_FAIL to server01 for SM_MON of 172.22.84.87
After the space situation was handled everything got back to normal.
If you're trying to unlock the Chrome database to view it with SQLite, then just shut down Chrome.
Windows
%userprofile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Web Data
or
%userprofile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Chrome Web Data
Mac
~/Library/Application Support/Google/Chrome/Default/Web Data
From your previous comments you said a -journal file was present.
This could mean that you have opened an (EXCLUSIVE?) transaction and have not yet committed the data. Did your program or some other process leave the -journal behind?
Restarting the sqlite process will look at the journal file and clean up any uncommitted actions and remove the -journal file.
As Seun Osewa has said, sometimes a zombie process will sit in the terminal with a lock acquired, even if you don't think it possible. Your script runs, crashes, and you go back to the prompt, but there's a zombie process spawned somewhere by a library call, and that process has the lock.
Closing the terminal you were in (on OSX) might work. Rebooting will work. You could look for "python" processes (for example) that are not doing anything, and kill them.

SimplePie CRON job in Windows XAMPP

I am attempting to set up a CRON job on XAMPP in Windows but I am having some trouble. I have the timing set up so that it should run every 5 minutes, and this part works because I see my command prompt pop up every five minutes.
This is the code for the CRON.BAT file that runs. Both locations are correct for their respective files.
C:\xampp\php\php.exe C:\xampp\htdocs\codeigniter214\update_simplepie_cache.php
This is my update_simplepie_cache.php file. I'm pretty sure this is the part that's failing, because the MySQL database isn't updating even though the feeds have new items in them. I tried to follow the SimplePie instructions, but it hasn't worked so far.
<?php
$this->load->library('rss');
$feed = $this->rss;
$cache_location = 'mysql://root@127.0.0.1:3306/news_test'; // change to your cache location
$feed->set_feed_url('http://www.theverge.com/rss/frontpage', 'http://gigaom.com/tag/rss-feeds/feed/');
$feed->set_cache_location($cache_location);
$feed->set_cache_duration(9999999); // force cache to update immediately
$feed->set_timeout(5); // optional, if you have a lot of feeds a low timeout may be necessary
$feed->init();
?>
Can anyone see what I'm missing here? Thank you.

How do you monitor a file on a web server and log every access, ideally by IP address, in a database (MySQL)?

For security reasons, there is a certain file on my web server I want to be able to monitor access to. Every time it is accessed, I want to have an entry added to a MySQL log table. This way, I can actively respond to security breaches from within the web application.
The Apache HTTP Server provides logging capabilities.
The server access log records all requests processed by the server. The location and content of the access log are controlled by the CustomLog directive. The LogFormat directive can be used to simplify the selection of the contents of the logs. This section describes how to configure the server to record information in the access log.
It can be used to write the log to a file. If you need to store it in a MySQL table, run a cron job to import the file into the database.
Further information on logs is here:
http://httpd.apache.org/docs/1.3/logs.html#accesslog
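The import script itself can stay small. Here's a rough sketch of what that cron job could look like (the log path, database credentials, and table layout are all assumptions you'd adapt to your setup):
<?php
// rough sketch: import accesses of one file from the Apache access log into MySQL
$logFile   = '/var/log/apache2/access.log';  // adjust to wherever CustomLog writes
$watchPath = '/importantfile.jpg';           // the file you want to monitor
$stateFile = __DIR__ . '/access_log.offset'; // remembers how far we read last time

$pdo = new PDO('mysql:host=localhost;dbname=security_log', 'dbuser', 'dbpass');
$insert = $pdo->prepare('INSERT INTO file_access_log (ip, request_line, logged_at) VALUES (?, ?, NOW())');

$offset = is_file($stateFile) ? (int) file_get_contents($stateFile) : 0;
$fh = fopen($logFile, 'r');
fseek($fh, $offset);

while (($line = fgets($fh)) !== false) {
    if (strpos($line, $watchPath) === false) {
        continue; // not a request for the monitored file
    }
    $parts = explode(' ', $line);
    $ip = $parts[0]; // the client IP is the first field in the common/combined log formats
    $insert->execute(array($ip, trim($line)));
}

file_put_contents($stateFile, ftell($fh)); // remember where we stopped for the next run
fclose($fh);
?>
Run it from cron every minute or so and the table stays reasonably close to real time.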
It's been removed in PHP 7, but for anyone else who finds this post, there are a number of options within the FAM (now PECL) extension. This function http://php.net/manual/en/function.fam-monitor-file.php seems to describe what is needed here.
Additionally you can access a lot of detail about the file's status with http://php.net/manual/en/function.stat.php. Put this within a cron- or sleep-driven script and you can then see when it's changed.
The file may be accessed from three points:
Direct filesystem access
Call to the url like www.example.com/importantfile.jpg (apache serves the file)
Call to some php script on your server www.example.com/readfile.php?name=important.jpg which reads the file.
If you are concerned only about case 2 then check the solution of Rishi Dua.
But if you want more than that, then you should write a script with a fileatime() call and add it to cron to run every minute, for example.
The pseudocode for it:
<?php
$previous_access_time = get_previous_access_time(); // the last access time you remembered in a db, text file, or whatever
$current_access_time = fileatime('path/to/very_important_file.jpg');
if ($previous_access_time != $current_access_time) {
    log_access_to_db();
    save_new_access_time(); // update the new last access time
}
This solution, however, has some problems.
The first is that you can only get the access time, not the user id or IP of whoever accessed the file.
The second is that, as the manual says, some Unix systems do not update the access time, so the solution would fail there.
If you are seriously concerned about security, then I think you have to look at a proper audit utility.

find public html folder using php's ftp functions

I have a php script that logs into my servers via the ftp function, and backs up the entire thing easily and quickly, but I can't seem to find a way to let the script determine the folder where the main index file is located.
For example, I have 6 servers with a slew of FTP accounts all set up differently. Some log into an FTP root that has httpdocs/httpsdocs/public_html/error_docs/sub_domains and folders like that, and some log in directly to the httpdocs folder where the index file is. I only want to back up the main working web files, not all the other stuff that may be in there.
I've set up a way to define the working directory, but that means I have to have different scripts for each server or backup I want to do.
Is it possible to have the php script find or work out the main web directory?
One option would be to set up a database that has either the directory to use or nothing if the ftp logs in directly to that directory, but I'm going for automation here.
If it's not possible I'll go with the database option though.
You cannot figure out through FTP alone what the root directory configured in apache is - unless you fetch httpd.conf and parse it, which I'm fairly sure you don't want to do. Presumably you are looping to do this backup from multiple servers with the same script?
If so, just define everything in an array, and loop it with a foreach and all the relevant data will be available in each iteration.
So I would do something like this:
// This will hold all our configuration
$serverData = array();
// First server
$serverData['server1']['ftp_address'] = 'ftp://11.22.33.44/';
$serverData['server1']['ftp_username'] = 'admin';
$serverData['server1']['ftp_password'] = 'password';
$serverData['server1']['root_dir'] = 'myuser/public_html';
// Second server
$serverData['server2']['ftp_address'] = 'ftp://11.22.22.11/';
$serverData['server2']['ftp_username'] = 'root';
$serverData['server2']['ftp_password'] = 'hackmeplease';
$serverData['server2']['root_dir'] = 'myuser/public_html';
// ...and so on
// Of course, you could also query a database to populate the $serverData array
foreach ($serverData as $server) {
// Process each server - all the data is available in $server['ftp_address'], $server['root_dir'] etc etc
}
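The processing inside that loop would then do the actual FTP work; roughly something like this (the local /backups path is a placeholder, and a real script would need a recursive download helper for subdirectories):
foreach ($serverData as $name => $server) {
    $host = parse_url($server['ftp_address'], PHP_URL_HOST);
    $conn = ftp_connect($host);
    if (!$conn || !ftp_login($conn, $server['ftp_username'], $server['ftp_password'])) {
        echo "Could not connect/log in to {$name}\n";
        continue;
    }
    ftp_pasv($conn, true);
    ftp_chdir($conn, $server['root_dir']); // jump straight to the configured web root
    $items = ftp_nlist($conn, '.');
    if ($items !== false) {
        foreach ($items as $item) {
            // pull each file into a per-server backup folder (placeholder path);
            // real directory trees would need a recursive helper, omitted here
            @ftp_get($conn, "/backups/{$name}/" . basename($item), $item, FTP_BINARY);
        }
    }
    ftp_close($conn);
}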
No, you can't do it reliably without knowledge of how Apache is setup for each of those domains. You'd be better off with the database/config file route. One-time setup cost for that plus a teensy bit of maintenance as sites are added/modded/removed.
You'll probably spend days getting a detector script going, and it'll fail the next time some unknown configuration comes up. Attempting to create an AI is hard... you have to get it to the Artificial Stupidity level first (e.g. the MS Paperclip).

PHP Windows copy network

$tmpUploadFolder = "C:\\www\\intranet\\uploads";
//$finalUploadFolder = "file:////server//photos//overwrite";
$finalUploadFolder = "file://server/photos/overwrite";
//$finalUploadFolder = "\\\\server\\photos\\overwrite";
//$finalUploadFolder = "\\server\photos\overwrite";
//$finalUploadFolder = "P:\\overwrite";
//$finalUploadFolder = "P:/overwrite";
$from = $tmpUploadFolder . "\\" . $_REQUEST['ext'];
$to = $finalUploadFolder. "\\" . $_REQUEST['ext'];
copy($from, $to);
I am trying to do a PHP upload using a jQuery tool. The jQuery tool nicely places the file into the PHP upload dir before the page submit. So I want to (upon post of the form) quickly move the file from its tmp folder location (it'll already be there, you see) to its final destination on an image store server (I use the $_REQUEST['ext'] variable to hold the filename jQuery used).
Rest assured these paths are good; they work fine in DOS. As you can see, I have tried every UNC syntax I know.
I cannot for the life of me get PHP to work. I have written a VBS "copy file" script and tried to trigger it under the Windows Script Host via system() in PHP, I've downloaded the old-school runas.exe and tried to get it to copy via system(), I have used UNC paths, network shares, and mapped network drives, and I have made the Apache service "log on as" Administrator and even as a custom ad-hoc user made just for this and given full permissions.
It works fine if I change P:\ to C:\
I KNOW IT'S EFFECTIVE PERMISSIONS RE: APACHE - BUT WE DO NOT RUN ACTIVE DIRECTORY AND I CAN'T GET IT TO WORK
it simply will not let me copy this file onto a network and this is a major major MAJOR problem child for me.
Is there a solution? If you are going to help me with things like "it's file permissions", then I am going to need a breakdown of exact and careful instructions, because I am pulling my hair out: I know it's file permission rights, but I just can't get it to work.
I am tired now.. please help?
OK, I figured it out, so for the benefit of those coming after me, here is the solution THAT WORKS:
1. Make sure the Windows "Apache2.2" service is running as an administrator user (I made a user called apacheusr, gave it a password, and popped it into the local Administrators group). You do this by right-clicking Properties on the "Apache2.2" service in Administrative Tools -> Services, going to the Log On tab -> This account, and picking apacheusr.
2. Because I don't run Active Directory, I made this apacheusr user on BOTH machines (the PHP server and the image server) as a local administrator user, gave them BOTH the same username and password, and ticked "password never expires".
3. I then logged in/out of Windows at least once with both of these accounts (don't ask me why, but it seemed to help; it stopped the runas.exe - which I gave up on - moaning in DOS).
4. Finally, right-click and share the destination folder on the image server and make damn well sure this apacheusr can get into that folder. The simplest way to check is to log in as apacheusr on your PHP server and try to go to your image server folder - you then need to be on the image server and tick everything correctly in the share/permissions bit.
THEN the final bit is (where $_REQUEST['ext'] is a file name, e.g. "pic.jpg"):
$tmpUploadFolder = "C:\\www\\intranet\\uploads";
$finalUploadFolder = "\\\\server\\photos\\overwrite";
$from = $tmpUploadFolder . "\\" . $_REQUEST['ext'];
$to = $finalUploadFolder. "\\" . $_REQUEST['ext'];
copy($from, $to);
The above code works!
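One extra precaution I'd suggest on top of this: since $_REQUEST['ext'] ends up in a filesystem path, run it through basename() first so nobody can sneak ../ segments into it:
$fileName = basename($_REQUEST['ext']); // strips any directory components from the submitted name
$from = $tmpUploadFolder . "\\" . $fileName;
$to = $finalUploadFolder . "\\" . $fileName;
copy($from, $to);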
In what environment do you run PHP? Apache? IIS? These run most of the time as a service with system credentials and cannot access network shares...
Change the webserver account to a user that can write and it should work (with one of those URLs, at least).
