I had to write a script that separates certain images on a network file server and backs them up while preserving the directory structure. To do this, I mount the file server as a folder on the Linux box where the script will run. The file server is a Windows machine.
The file server was mounted like this:
mount -t cifs '//xxx.xxx.xxx.xxx/pictures$' -o username=imageuser,password='pa$$word' images
If I run a copy command like this:
cp images/somefolder/subfolder/someimage.jpg images/differentfolder/subfolder/someimage.jpg
My question is this:
Will "someimage.jpg" be simply be copied from one location to the other on the windows machine, or will the image be downloaded to the linux box over the network and then uploaded to the new location on the windows machine?
Edit: If the file will be round-tripped, I would like to know how to avoid that, or at least be pointed in the right direction so I can read up on a way to do it.
Neither cp nor the SMB protocol is smart enough to realize that the source and destination of the file are on the same remote server. cp will simply do its usual thing and slurp all the data from the source file (copying it to the client machine), then spit it back out into the target file on the server. So yes, it will be a round trip through the client.
A better solution for this sort of thing is using an SSH remote command, turning it into a purely server-side operation:
ssh imageuser@x.x.x.x 'cp sourcefile targetfile'
You can still keep the file server mounted on your local machine to see which files you're dealing with, but do all the copy/move operations via ssh commands for efficiency. Since the server is a Windows machine, you'll probably have to install Cygwin and get an SSH server running.
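For example, assuming the share lives on the server's D: drive (a hypothetical layout; under Cygwin, Windows drives appear as /cygdrive/<letter>), the server-side copy would look roughly like this:
ssh imageuser@xxx.xxx.xxx.xxx 'cp /cygdrive/d/pictures/somefolder/subfolder/someimage.jpg /cygdrive/d/pictures/differentfolder/subfolder/someimage.jpg'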
Related
I used to have a Windows server to which I uploaded some old PHP web files. I could then access them, edit them, and view them online via my host name.
After much debate and reasoning, we had to change the server's OS from Windows to Linux. After the change was completed, a backup of the server was restored onto the new Linux installation, where all my old files were kept.
I could view these files online just as I did when the server ran Windows.
The only problem I encountered was the following:
a) I downloaded my files from the server using PuTTY,
b) I deleted the old copy on my Linux server,
c) I then re-uploaded the same file that used to be on the server, without making any change whatsoever to it, to the exact place where it was,
d) When I try to access it via its web address as I did earlier, it throws an error message saying "The page isn't working".
I don't know much about Linux and therefore I am stuck. I don't know what the problem is. I can't understand why I can view all the files via their web addresses when they were placed there by the backup, but when I download one, delete it from the server, and re-upload the exact same file to the exact place where it used to work, I get an error message.
Extra info: I connect to this Linux server from a Windows machine using PuTTY.
I found the problem. Having migrated from a Windows server to a Linux CentOS server, I didn't know that you have to configure the ownership and permissions of each file in order for it to be served on the web. By default, my uploaded files were owned by "user", while the server was configured to serve only files owned by "root". I solved this by typing the following command in the terminal.
NOTE: "You have to be in the folder where the file you are going to change ownership is."
sudo chown root:root filename.php
sudo -> execute with superuser privileges
chown -> change the ownership of the file to ...
root:root -> ... user root, group root, instead of user
filename.php -> the name of my file
Executing this corrected the error. Hope it helps someone else, since I couldn't find anything related.
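If many files are affected, a recursive chown can fix a whole directory tree at once; a sketch, assuming /var/www/html is the web root (adjust to your actual path), with ls -l to verify the owner afterwards:
sudo chown -R root:root /var/www/html
ls -l /var/www/html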
I was experimenting with shell_exec and commands, and I can't seem to get this to work. I'm using the PHP shell_exec() function to run a screen-capture command that takes a snapshot of the desktop. When running the script locally through Coda, it works fine. Then I ran it through my installation of Apache from htdocs, and it runs, yet it doesn't save the image anywhere. For browsers, do I need to change the directory at all? This is what my super simple script looks like. Is this even possible?
<?php
$command = "screencapture -iWP ~/Random/test.png";
shell_exec($command);
?>
I don't currently have access to a Mac to test, but I'd be extremely surprised and more than a little concerned if that did work.
On the server, Apache should be running under a different user ID from the logged-in user, which means the attempt to grab the framebuffer should fail.
If it did at least write (or try to write) an image, it will be at ~/Random/test.png relative to the Apache user's home directory; e.g. if Apache runs as a user called apache, the target filename is ~apache/Random/test.png.
OS X is basically UNIX, and a key feature of UNIX-like operating systems is security. The video framebuffer should only be accessible to processes running under the UID of the logged-in user (or root). Daemon processes like Apache httpd should run under their own, non-root UID.
You probably have to specify the full path to the executable. Specifying the full path for the output PNG would help too, because ~ will resolve against the Apache user's home directory, not yours.
Run the command
which screencapture
to find the path where the executable is located. The output file must be in a directory writable by the user Apache runs as (usually apache). You can check which user Apache runs as by looking at the Apache configuration, or just by running "top" (as root).
Hope that helps.
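Putting both suggestions together, a minimal sketch (assuming which reports /usr/sbin/screencapture; the interactive -iWP flags are dropped since there is no user session to interact with):
<?php
// Assumes `which screencapture` printed /usr/sbin/screencapture and that
// /tmp is writable by the Apache user. -x silences the shutter sound.
$command = "/usr/sbin/screencapture -x /tmp/test.png";
shell_exec($command);
?>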
I have a simple script which copies a file from one SMB mount to another. The source file system is the same, but the web server is different. I'm using PHP to process the file by copying it to a temp directory and then performing additional tasks on it. This setup was working at one point, but it seems it's no longer working correctly. Can someone point me in the right direction?
fstab mounts:
//192.168.0.x/share /media/folder smbfs username=user,password=mypass
//192.168.0.x/share2 /media/folder2 smbfs username=user,password=mypass
php code:
copy('/media/folder/filename.txt','/media/folder2/temp/filename.txt');
Error:
Warning: copy(/media/folder2/temp/filename.txt): failed to open stream: Permission denied in /www/myphp.php on line xx
Folder permissions (not the mount, but the source folder on the fileserver):
/media/folder = 777
/media/folder2/temp = 777
system("cp /media/folder/filename.txt /media/folder2/temp/filename.txt");
Might work for you.
Sounds like a question that is specific to permissions and the OS rather than PHP. What web server? What user is the server running as? nobody:nobody? Can nobody:nobody or www-root:www-root read/write data in the directories you are trying to access?
sudo su - nobody
probably won't work, as that account will most likely have a /bin/false shell.
nobody may not be the right account. Run ps auxw | grep apache | awk '{print $1}' to see which user it is running as, then try changing over to that account with sudo.
Before PHP can write the files, you need to ensure that the user the web server runs as has permission to read and write the directory you are copying into.
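As a quick check from inside PHP itself, a sketch using the mount paths from the question:
<?php
// Which user is the web server actually running as?
echo exec('whoami'), "\n";
// Can that user write into the destination directory?
var_dump(is_writable('/media/folder2/temp'));
?>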
I changed the command to:
copy('/media/folder/filename.txt','/tmp/filename.txt');
Apparently it's more difficult to process files directly on an SMB share than I thought. Since the copy now lands in /tmp, the file will be removed when the machine reboots, or possibly at regular intervals, depending on the system setup.
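For anyone doing the same, a minimal sketch with error reporting, so a permission problem shows up immediately rather than silently:
<?php
$src = '/media/folder/filename.txt';
$dst = '/tmp/filename.txt'; // /tmp is typically cleaned at reboot
if (!copy($src, $dst)) {
    $err = error_get_last();
    die('Copy failed: ' . $err['message']);
}
?>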
I'm looking for a way to read the contents of a file located on a network share. I can use the IP address of the share host and the share folder to get to the location, but I don't know the correct function and syntax to do it. file_get_contents? fopen?
$text = fopen('//128.251.xxx.xxx/Common/sample.txt', 'r');
or something like that?
UPDATE:
I also cannot access the file through a browser window although I know the file is located in that exact directory...
The best way (depending on your needs) is to simply mount it on your local machine, and then read from the mount. That lets all the network interaction be abstracted away.
So, in Linux:
mount -t cifs //192.168.XXX.XXX/share /path/to/mount -o user=username,password=password
Then, in PHP, just access it from the mount point:
$data = file_get_contents('/path/to/mount/path/to/file.txt');
According to the PHP filesystem manual, UNC/SMB paths should be accessible. Try the following:
\\server\share\file.txt
It may be one of those cases where // is mistaken for a reference to the root directory. And given that this is an SMB path, I would use the traditional backslashes.
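Note that in a single-quoted PHP string it is safest to double every literal backslash, and UNC paths like this generally only resolve when PHP itself is running on Windows (on Linux, mount the share as shown above). A sketch using the IP from the question:
$text = file_get_contents('\\\\128.251.xxx.xxx\\Common\\sample.txt');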
I want to copy a zip file from a remote server to my local system using SCP.
I have a PHP file where I use the PHP function exec().
When I run upload.php as http://www.abc.com/upload.php,
the zip file should be copied to my local Linux folder. My path is
/var/www/html/mydirectory/
How can I do this?
You can use PHP's PECL ssh2 extension, which provides ssh2_scp_send.
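A minimal sketch, assuming your local machine runs an SSH server; the host, credentials, and file paths below are placeholders to replace with your own:
<?php
// The web server pushes the zip to the local machine over SCP.
$conn = ssh2_connect('your.local.ip.address', 22);
ssh2_auth_password($conn, 'localuser', 'localpassword');
ssh2_scp_send($conn, '/path/on/server/file.zip',
              '/var/www/html/mydirectory/file.zip', 0644);
?>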
In order to automate any SSH connection, such as scp, you have to set up a pair of auth keys.
This will allow your remote computer to connect to your local computer without a password prompt. A simple Google search will show you how to set this up. The resource I used is http://linuxproblem.org/art_9.html.
The auth keys allow the computers to recognize each other and handshake without a user prompt, but remember that doing this gives your remote machine passwordless SSH access to your home computer, so handle permissions carefully.
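The usual steps, run on the remote machine, look roughly like this (user and host are placeholders):
ssh-keygen -t rsa
ssh-copy-id user@your.local.ip.address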
An alternative to scp, if you don't need encryption, is to use wget on your local computer to grab the file from your remote computer's web directory.
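For example (URL and destination path are hypothetical):
wget http://www.abc.com/files/file.zip -O /var/www/html/mydirectory/file.zip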
To me, it seems like you are asking how to download a zip file from your remote web server. In that case, you could simply give the browser the direct path to the zip and let it download it. You can't push a file from the server to the local machine with SCP. Use HTTPS if you're concerned about security. If the zip file is outside of the web directory, you can use PHP to read the file (assuming Apache has access to it) and then output it to the browser.
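A minimal sketch of that last approach (the path is a placeholder; Apache must be able to read the file):
<?php
$file = '/path/outside/webroot/file.zip'; // placeholder path
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="file.zip"');
header('Content-Length: ' . filesize($file));
readfile($file);
?>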