This is the situation:
I have a LAMP server, which serves HTML, PHP, etc... I also have a remote folder, somewhere on the web, which contains a directory full of PHP files, images, an MVC folder structure (CodeIgniter), etc...
What I want to do is, instead of downloading those PHP files and uploading them to my LAMP server every time, use those PHP files directly and serve them from my LAMP server.
Again, I want the PHP files from a folder on another server (where I only have access to the direct link to each individual file) to be served by my LAMP server, so that if I access my website, for instance www.website.com/page1, it gets the folder structure or all the PHP files from the remote web server and they get served from within my server.
I know this sounds a little complicated, but I'm not sure what to use... Maybe a reverse proxy? Or do you think I should download the files directly and keep them constantly in sync? If anyone comes up with a good solution I may even pay that person...
EDIT(1)
Good answers so far... but I think I didn't phrase the question well, so here it goes again:
I have access to a "list" of PHP files, and in order to get them I need to authenticate myself using OAuth via PHP. Once I am authenticated, I can retrieve a list of PHP, HTML, etc. files, each one of them having a public URL that anyone can access. So the thing is that instead of downloading all the files in that repository and serving them, I want to be able to reuse that repository's web space and just serve those files myself. Basically I want something like symbolic links to URLs, which I think is not possible, but to be able to just read the files and serve the PHP logic, even though the files are elsewhere.
I'm concerned about the security issues involved, but if someone could help me I would be thankful... Also, if you are interested in what I'm doing, I can always use a partner for this project, which I intend to use for charity, but I can still pay that person.
This is not a smart thing to do. You open yourself up to potential security issues, but at a minimum, you will significantly slow your site down.
I would recommend that you simply synchronize the files between the two servers over SSH with a script.
Edit: ManseUK's suggestion of rsync is also a good one.
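If you go the synchronization route, here is a minimal sketch of a cron-driven sync wrapper, assuming key-based SSH access and rsync installed on both machines; the user, host and paths are placeholders I made up:

<?php
// sync_remote.php - run from cron, e.g. every 15 minutes.
// Mirrors the remote application folder into the local docroot.
// HYPOTHETICAL values: adjust user, host and paths to your setup.
$remote = 'deploy@remote.example.com:/var/www/app/';
$local  = '/var/www/html/app/';

// -a preserves permissions/timestamps, -z compresses, --delete mirrors removals.
$cmd = sprintf(
    'rsync -az --delete -e ssh %s %s 2>&1',
    escapeshellarg($remote),
    escapeshellarg($local)
);

exec($cmd, $output, $status);
if ($status !== 0) {
    error_log("rsync failed:\n" . implode("\n", $output));
}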
If you have FTP access to the remote server, you could mount the folder using FUSE (e.g. curlftpfs) and let Apache serve it as usual.
Do you have the ability to mount the remote folder as an NFS volume, or perhaps with SSHFS? If those options are available, either could work for you. You'd mount the remote folder locally and tell your local web server to serve files from that path.
Not that it would be the most efficient setup in the world, but I don't know why you have all this split apart in the first place. ;)
You could write a cronjob to grab the remote file list every X minutes/hours/days, store the results locally, and then write a simple script to parse those results on request. Alternatively, you could still use an NFS or SSHFS mount to read the remote paths in real time and build whatever URLs you need.
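For the cron-based variant, here is a rough sketch of the "grab the remote file list and cache it" script; the API URL, bearer-token handling and cache path are assumptions, since I don't know how your remote repository actually exposes its list:

<?php
// cache_remote_list.php - fetch the remote file list and cache it locally.
// HYPOTHETICAL endpoint and token handling; replace with your OAuth flow.
$listUrl   = 'https://remote.example.com/api/files';
$token     = getenv('REMOTE_API_TOKEN');
$cacheFile = '/var/cache/myapp/remote-files.json';

$ch = curl_init($listUrl);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => ['Authorization: Bearer ' . $token],
    CURLOPT_TIMEOUT        => 30,
]);
$json = curl_exec($ch);

// Only overwrite the cache on a successful fetch.
if ($json !== false && curl_getinfo($ch, CURLINFO_HTTP_CODE) === 200) {
    file_put_contents($cacheFile, $json, LOCK_EX);
}
curl_close($ch);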
The situation is a bit complicated here.
We are receiving large folders with lots of files on a remote file server (accessible over FTP; let's call it FTP1).
Those folders can have a complex tree structure and weigh between 50 Mo and 4 Go.
With PHP, the goal is to remove unwanted files (.exe, .pdf, ...),
then take all the remaining files, put them in the root folder, and reorganize them into a newly defined tree structure.
After this process, the web server should send everything to another remote FTP server (FTP2).
Then the folders/files can be removed from FTP1.
With Laravel and Storage all of this is easy to build, but my main concern is speed.
Is it better to:
Copy the files to the web server, run the process, copy them to the remote server, and then clean up
Process directly on FTP1 and then copy to FTP2
Copy to FTP2 and then process directly on FTP2
I don't have that much experience in IT infrastructure/architecture, but both FTP servers are only accessible over the internet and will never be on the same network as the web server.
The connection between the FTP servers and the web server is supposed to be highly available, but we all know what that means...
I don't expect a definitive answer, more a guideline or the usual way of dealing with this case.
I don't quite understand you. By 50 Mo do you mean 50 MB? Anyway, since you are just looking for a rough guide, from what I see you should:
Reduce the need for FTP, because FTP is extremely slow. If you can get away with one FTP transfer instead of two, it is definitely better.
Check your FTP servers' hardware specifications. You obviously want the faster one to do the processing.
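For the processing step described above, a rough Laravel sketch of the filter-and-forward pass could look like this; the disk names ftp1 and ftp2 are assumptions that would have to be defined in config/filesystems.php, and note that every byte still streams through the web server, which is why fewer FTP hops help:

<?php
// Rough sketch: drop unwanted extensions and flatten everything
// into the root of FTP2. Disk names 'ftp1' / 'ftp2' are assumptions.
use Illuminate\Support\Facades\Storage;

$unwanted = ['exe', 'pdf'];
$all      = Storage::disk('ftp1')->allFiles(); // walks the whole tree recursively

foreach ($all as $path) {
    $ext = strtolower(pathinfo($path, PATHINFO_EXTENSION));
    if (in_array($ext, $unwanted, true)) {
        continue; // skip unwanted file types entirely
    }

    // put() accepts a stream resource, so multi-GB files are not
    // loaded into memory; they are piped through the web server.
    $stream = Storage::disk('ftp1')->readStream($path);
    Storage::disk('ftp2')->put(basename($path), $stream);
    if (is_resource($stream)) {
        fclose($stream);
    }
}

// Once everything is confirmed on FTP2, clean up FTP1.
Storage::disk('ftp1')->delete($all);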
I develop some Python applications, so I know how to do this in Python locally, but I am working with some PHP developers (I know nothing of PHP) who say this can't be done in PHP. This is the idea: a PHP-driven remote website creates/hosts files. Using a web browser, I want to download from this website a series of folders and files onto the local machine, overwriting already existing files/folders with the same name. So in my browser I click a download button, which asks me to browse to a local or network folder to download the folders and files to. Currently we just download a single .zip file containing all these files and folders, which we have to unzip and manually move, copy, paste, etc.; very messy and cumbersome. There must be a better way with PHP and some other language?
No, it's not possible for PHP (a server-side language) to reach into the client machine through the browser and directly manipulate its file system, hard drive, or anything like that. That is not the way it works.
Just think about it for a moment: if that could be done, we would have a serious security threat. For example, we visit a page like somebadassdude.com and they have a PHP script that creates unlimited folders and files to fill up our entire hard drive... and that is a mild example.
Fortunately, browsers don't allow this, by security design.
Look at how the communication actually works: the browser and the server only talk to each other through HTTP requests and responses. There is no channel between them like that of a local program running on the client OS. You deal with the user's browser, and there is no way to command the browser to manipulate the client's hard disk; if that could happen, think of the security concern I mentioned before.
To be clearer: your PHP script runs on your server, not on the client machine. It only responds when a user/browser requests a specific resource on your server, and it answers with an HTTP response, which can contain HTML, JSON, a file (to be downloaded or opened by an external program), or whatever else.
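For instance, the closest PHP gets to "putting a file on the client" is returning it in the HTTP response and letting the browser decide where to save it; a minimal sketch, with a made-up path:

<?php
// download.php - the most PHP can do: stream a server-side file
// to the browser as an attachment. Where it lands is up to the user.
$file = '/var/www/exports/report.zip'; // hypothetical path on the server

if (!is_file($file)) {
    http_response_code(404);
    exit;
}

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('Content-Length: ' . filesize($file));
readfile($file);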
You have limited options:
If it is something for an intranet or a local network, and you have access to that network (locally or remotely, e.g. through a VPN), you could share a folder over the network. That way you can use a PHP or Python script to create the folders and copy the files into them, without having to download a zip and unzip it manually from the browser.
Use a Java applet. Why? Because a Java applet runs on the client side, so you have access to the user's computer (if the user allows it), and you can certainly manipulate (create, delete, read, etc.) folders and files on their hard drive. So when the user chooses the files to download, you fire up the Java applet and let it request from the server the files the user has marked. Once the files are downloaded, it creates or overwrites them on the client machine.
Create and run a program on the client machine instead of a web page; this way you gain the needed flexibility, but of course it has its own complexities.
So IMHO the Java applet is maybe the best-suited solution for you:
You don't have to change your current setup much.
It doesn't require a large time investment.
It is cross-platform: Java works on plenty of operating systems, and Java applets run in the most popular browsers.
By the way, I personally dislike Java, but it's a tool, and you have to use the right tool for the job.
Cheers.
I am designing a web-based file-management system that can be conceptualised as 3 different servers:
The server that hosts the system interface (built in PHP) where users 'upload' and manage files (no actual files are stored here, it's all meta).
A separate staging server where files are placed to be worked on.
A file-store where the files are stored when they are not being worked on.
All 3 servers will be *nix-based on the same internal network. Users, based in Windows, will use a web interface to create an initial entry for a file on Server 1. This file will be 'uploaded' to Server 3 either from the user's local drive (if the file doesn't currently exist anywhere on the network) or another network drive on the internal network.
My question relates to the best programmatic approach to achieve what I want to do, namely:
When a user uploads a file (selecting the source via a web form) from the network, the file is transferred to Server 3 as an inter-network transfer, rather than passing through the user (which I believe is what would happen if it was sent as a standard HTTP form upload). I know I could set up FTP servers on each machine and attempt to FXP files between locations, but is this preferable to PHP executing a command on Server 1 (which will have global network access), to perform a cross-network transfer that way?
The second problem is that these are very large files we're talking about, at least a gigabyte or two each, and so transfers will not be instant. I need some method of polling the status of the transfer, and returning this to the web interface so that the user knows what is going on.
Alternatively this upload could be left to run asynchronously to the user's current view, but I would still need a method to check the status of the transfer to ensure it completes.
So, if using an FXP solution, how could polling be achieved? If using a file move/copy command from the shell, is any form of polling possible? PHP/JQuery solutions would be very acceptable.
The final part of this question relates to Windows network drive mapping. A user may select a file from an arbitrarily mapped network drive. Their G:\ may correspond to \\server4\some\location\therein, but presumably any drive path given to the server via a web form will only send the G:\ file path. Is there a way to determine the 'real path' of mapped network drives?
Any solution would be used to stage files from Server 3 to Server 2 when the files are being worked on - the emphasis being on these giant files not having to pass through the user's local machine first.
Please let me know if you have comments and I will try to make this question more coherent if it is unclear.
As far as I’m aware (and I could be wrong) there is no standard way to determine the UNC path of a mapped drive from a browser.
The only way to do this would be to have some kind of control within the web page. Could be ActiveX or maybe flash. I’ve seen ActiveX doing this, but not flash.
In the past, when designing web-based systems that need to know the UNC path of a user's mapped drive, I've had to keep a translation of drive letter to UNC path stored server side. I did have the luxury, though, of knowing which drive would map to which UNC path. If the user can set arbitrary paths then this obviously won't work.
Ok, as I’m procrastinating and avoiding real work I’ve given this some thought.
I’ll preface this by saying that I’m in no way a Linux expert and the system I’m about to describe has just been thought up off the top of my head and is not something you’d want to put into any kind of production. However, it might help you down the right path.
So, you have 3 servers: the Interface Server (LAMP stack, I'm assuming?), your Staging Server and your File Store Server. You will also have client machines and network shares. For the purpose of this design, your network shares are hosted on *nix boxes that your File Store can scp from.
You’d create your frontend website that tracks and stores information about files etc. This will also hold the details about which files are being copied, which are in Staging and so on.
You'll also need some kind of service running on the File Store Server. I'll call this the File Copy Service. It will be responsible for copying the files from the servers hosting the network shares.
Now, you’ve still got an issue with how you figure out what path the users file is actually on. If you can stop users from mapping their own drives and force them to use consistent drive letters then you could keep a translation of drive letter to UNC path on the server. If you can’t, well I’ll let you figure that out. If you’re in a windows domain you can force the drive mappings using Group Policies.
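As a rough illustration of that server-side translation, something like the sketch below would do; the drive letters and UNC paths are invented:

<?php
// Server-side lookup table: mapped drive letter => UNC path.
// Example mappings only; in a domain they would mirror whatever
// Group Policy assigns to the users.
$driveMap = [
    'G:' => '\\\\server4\\projects',
    'H:' => '\\\\server5\\media',
];

function resolveClientPath(string $clientPath, array $driveMap): ?string
{
    $drive = strtoupper(substr($clientPath, 0, 2)); // e.g. "G:"
    if (!isset($driveMap[$drive])) {
        return null; // unknown mapping - nothing we can do server-side
    }
    // Swap the drive letter for the UNC prefix, keep the rest of the path.
    return $driveMap[$drive] . substr($clientPath, 2);
}

// resolveClientPath('G:\\some\\location\\file.dat', $driveMap)
// returns \\server4\projects\some\location\file.dat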
Anyway, the process for the system would work something like this.
User goes to system and selects a file
The Interface Server takes the file path and calls the File Copy Service on the File Store Server
The File Copy Service connects to the server that hosts the file and initiates the copy. If they're all *nix boxes you could easily use something like scp. Now, I haven't actually looked up how to do it, but I'd be very surprised if you can't get a running percentage-complete total from scp as it's copying. With this running total, the File Copy Service keeps updating the database on the Interface Server with how the copy is doing, so the user can see the progress from the Interface Server (a rough polling sketch follows these steps).
The File Copy Service can also be used to move files from the File Store to the staging server.
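One crude way to report progress without parsing scp's output is to compare the size of the growing destination file against the expected total. Here is a hedged sketch of a polling endpoint the web interface could hit via jQuery; the transfer lookup, paths and parameter names are all assumptions (in practice they would come from the Interface Server's database):

<?php
// progress.php?id=123 - report how much of a file has arrived so far.
// HYPOTHETICAL in-memory lookup; replace with a database query.
$transfers = [
    '123' => ['dest' => '/data/store/incoming/bigfile.mov', 'total' => 2147483648],
];

$id = isset($_GET['id']) ? $_GET['id'] : '';
if (!isset($transfers[$id])) {
    http_response_code(404);
    exit;
}

clearstatcache(); // filesize() results are cached within a request otherwise
$dest   = $transfers[$id]['dest'];
$total  = $transfers[$id]['total'];
$copied = is_file($dest) ? filesize($dest) : 0;

header('Content-Type: application/json');
echo json_encode([
    'copied'  => $copied,
    'total'   => $total,
    'percent' => $total > 0 ? round($copied / $total * 100, 1) : 0,
]);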
As I said, this is very roughly thought out. The above would work, but it all depends a lot on how your systems are set up.
Having said all that though, there must be software that would do this out there. Have you looked?
If I am right, the architecture is the one you described: an interface server (1), a staging server (2) and a file store (3), plus the users' network shares.
1.)
First let's solve the issue of "inter-server transfer".
I would solve this by mounting the file systems of Server 2 and Server 3 on Server 1 via NFS.
https://help.ubuntu.com/8.04/serverguide/network-file-system.html
That way PHP can store files directly on the file system and doesn't need to know which server the files really live on.
/etc/exports
on Server 2 and Server 3 (each exports its own data directory to Server 1):
/var/lib/data/server2 192.168.IPofServer.1(rw,sync)
/var/lib/data/server3 192.168.IPofServer.1(rw,sync)
exportfs -ra
/etc/fstab
of Server 1 (one mount point per remote server):
192.168.IPofServer.2:/var/lib/data/server2/ /directory/with/files/server2 nfs rsize=8192,wsize=8192,timeo=14,intr
192.168.IPofServer.3:/var/lib/data/server3/ /directory/with/files/server3 nfs rsize=8192,wsize=8192,timeo=14,intr
mount -a
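With those mounts in place, the PHP on Server 1 just writes to what looks like a local path; a minimal sketch, where the mount point matches the fstab lines above and the form field name is made up:

<?php
// Handle the web form upload on Server 1 and drop the file straight
// onto the NFS mount that physically lives on Server 3 (the file store).
$target = '/directory/with/files/server3/' . basename($_FILES['upload']['name']);

if (move_uploaded_file($_FILES['upload']['tmp_name'], $target)) {
    echo 'Stored on the file store via NFS.';
} else {
    http_response_code(500);
    echo 'Upload failed.';
}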
2.)
Getting an upload progress bar for really large files:
here are some possibilities for showing a progress bar for HTTP uploads.
For a resume function, however, you would have to use a Flash plugin.
http://fineuploader.com/#demo
https://github.com/valums/file-uploader
Or you can build it yourself using the APC extension:
http://www.amwsites.com/blog/2011/01/use-a-combination-of-jquery-php-apc-uploadprogress-to-show-progress-bar-during-an-upload/
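A rough sketch of the APC route, assuming apc.rfc1867 = 1 is enabled in php.ini; the field name APC_UPLOAD_PROGRESS and the upload_ key prefix are APC's defaults, everything else is made up:

<?php
// --- upload_form.php ------------------------------------------------------
// The hidden APC_UPLOAD_PROGRESS field must come BEFORE the file input
// so APC starts tracking the upload under the given key.
$key = uniqid();
?>
<form action="upload.php" method="post" enctype="multipart/form-data">
    <input type="hidden" name="APC_UPLOAD_PROGRESS" value="<?php echo $key; ?>">
    <input type="file" name="bigfile">
    <input type="submit" value="Upload">
</form>

<?php
// --- progress.php?key=... (a separate script, polled via jQuery) -----------
// apc_fetch() returns false until APC has recorded any data for the key.
$status = apc_fetch('upload_' . $_GET['key']);
header('Content-Type: application/json');
echo json_encode($status ?: array('current' => 0, 'total' => 0, 'done' => 0));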
3.)
Letting the server load files from a network drive:
I would try this with a Java applet that figures out the real network path and sends it to the server, so the server can fetch the file in the background.
But I have never done anything like this before and have no further information.
I am planning to build a CMS system.
The CMS system will be located and fully administrated at:
www.mycompany.com/customer
The final website will be located at:
www.customer.com
I need to figure out the best way to copy files (eg. images)
from: www.mycompany.com/customer/media
to: www.customer.com/media
Note: The CMS and customer page will be located on different hosts. And I want to build this function using PHP.
Some thoughts:
The optimal solution would be if the two directories could be cloned automatically, no matter how the images are uploaded or updated. Maybe if there is a way to detect changes to www.mycompany.com/customer/media, then www.customer.com/media could be notified about it and send a request to update the image.
Another wish would be that images in www.mycompany.com/customer/media could only be accessed when logged in to the CMS :S
Any tips?
You should not use FTP (insecure) or PHP for the replication;
try rsync instead:
What is rsync ?
http://en.wikipedia.org/wiki/Rsync
rsync is a software application and network protocol for Unix-like and Windows systems which synchronizes files and directories from one location to another while minimizing data transfer using delta encoding when appropriate. An important feature of rsync not found in most similar programs/protocols is that the mirroring takes place with only one transmission in each direction. rsync can copy or display directory contents and copy files, optionally using compression and recursion.
In other words, it is designed for mirroring or replication (it is the industry standard).
In general:
set up a public key so the source server can SSH into the destination server
set up a cronjob on the source server to do the rsync
What does the cronjob do?
In a nutshell, it should rsync the selected source directory to the destination server.
A quick example:
* * * * * rsync -avz /home/www.mycompany.com/www $HOST:/home/www.customer.com/www
(The first path is the source directory on the source server; $HOST:/home/www.customer.com/www is the destination server and directory.)
However, rsync is too big a topic to describe in a few sentences; as a start, you can take a look at:
http://www.cyberciti.biz/tips/linux-use-rsync-transfer-mirror-files-directories.html (as a start)
Another possibility is to make use of version control software, like:
git
svn
Or make use of a CDN (as @Amir Raminfar has mentioned), which is itself already a complete solution for file distribution.
Just make an FTP or SCP call from mycompany.com to customer.com and upload the file.
Alternatively, you can use a CDN, which is probably a little too much work, but it will give you an optimal solution with great performance. You can upload to a common server and have customer.com pull the images from there. This also gives you the benefit of parallel downloads for the assets.
Finally, you can run some kind of web service on customer.com that can accept uploads (a rough sketch of both ends follows below).
I wouldn't do what you suggested (a notification system), because it won't be exactly instantaneous and there will be some delay before the image actually shows up.
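If you go the web-service route, here is a rough sketch of both ends; the endpoint URL, field names and the shared secret are placeholders, and real authentication should be stronger than this:

<?php
// --- On mycompany.com: push a freshly uploaded media file to customer.com.
function pushToCustomer($localFile)
{
    $ch = curl_init('https://www.customer.com/media-receiver.php'); // hypothetical endpoint
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => [
            'secret' => 'shared-secret-placeholder',
            'image'  => new CURLFile($localFile),
        ],
    ]);
    $ok = curl_exec($ch) !== false && curl_getinfo($ch, CURLINFO_HTTP_CODE) === 200;
    curl_close($ch);
    return $ok;
}

// --- On customer.com (media-receiver.php): accept the push and store it.
if (!isset($_POST['secret']) || $_POST['secret'] !== 'shared-secret-placeholder') {
    http_response_code(403);
    exit;
}
move_uploaded_file(
    $_FILES['image']['tmp_name'],
    __DIR__ . '/media/' . basename($_FILES['image']['name'])
);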
You could create a DB for www.mycompany.com, but the host must allow remote access to the DB.
Create a cron job on customer.com which checks for new records in the remote DB. This could be limited to registered users.
You should create an .htaccess file for the www.mycompany.com images folder to allow only the site itself and customer.com to download (i.e. copy) the images.
This way all registered users can download any image, which may not be what you want.
If you need to limit access per image, then you should store the images in the DB and generate an encrypted URL for each image and request. In that case you would have to use the remote DB or a SOAP service for fetching the images.
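One concrete variation on the "encrypted URL" idea is an HMAC-signed, expiring link, which avoids storing the images in the DB at all; a rough sketch, where the secret, parameter names and paths are invented:

<?php
// Shared secret known to both the CMS and the image endpoint (placeholder).
const IMAGE_URL_SECRET = 'change-me';

// CMS side: build a link that is only valid for a limited time.
function signedImageUrl($image, $ttl = 300)
{
    $expires = time() + $ttl;
    $sig = hash_hmac('sha256', $image . '|' . $expires, IMAGE_URL_SECRET);
    return '/media.php?img=' . rawurlencode($image) . "&exp=$expires&sig=$sig";
}

// Serving side (media.php): verify before handing out the file.
function verifyAndServe()
{
    $image    = isset($_GET['img']) ? $_GET['img'] : '';
    $expires  = isset($_GET['exp']) ? (int) $_GET['exp'] : 0;
    $sig      = isset($_GET['sig']) ? $_GET['sig'] : '';
    $expected = hash_hmac('sha256', $image . '|' . $expires, IMAGE_URL_SECRET);

    if ($expires < time() || !hash_equals($expected, $sig)) {
        http_response_code(403);
        exit;
    }
    // basename() keeps requests inside the media folder.
    readfile(__DIR__ . '/media/' . basename($image));
}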
I'm building a web server out of a spare computer in my house (with Ubuntu Server 11.04), with the goal of using it as a file sharing drive that can also be accessed over the internet. Obviously, I don't want just anyone being able to download some of these files, especially since some would be in the 250-750MB range (video files, archives, etc.). So I'd be implementing a user login system with PHP and MySQL.
I've done some research on here and other sites and I understand that a good method would be to store these files outside the public directory (e.g. /var/private vs. /var/www). Then, when the file is requested by a logged in user, the appropriate headers are given (likely application/octet-stream for automatic downloading), the buffer flushed, and the file is loaded via readfile.
However, while I imagine this would be a piece of cake for smaller files like documents, images, and music files, would this be feasible for the larger files I mentioned?
If there's an alternate method I missed, I'm all ears. I tried setting a folder's permissions to 750 and similar, but I could still view the file over normal HTTP in my browser, as if I were considered part of the group (and when I set the permissions so that I can't access the file, neither can PHP).
Crap, while I'm at it, any tips for allowing people to upload large files via PHP? Or would that have to be done via FTP?
You want the X-Sendfile header. It will instruct your web server to serve up a specific file from your file system.
Read about it here: Using X-Sendfile with Apache/PHP
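A minimal sketch of that approach, assuming mod_xsendfile is installed and XSendFilePath is configured to allow the private directory; the path and login check are placeholders:

<?php
// download.php - after your own PHP/MySQL login check has passed,
// hand the actual transfer off to Apache via mod_xsendfile.
$file = '/var/private/videos/episode1.mkv'; // hypothetical protected file

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('X-Sendfile: ' . $file); // Apache streams the file; PHP's work ends here
exit;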
That could indeed become an issue with large files.
Isn't it possible to just use FTP for this?
HTTP isn't really meant for large files but FTP is.
The solution you mentioned is the best possible one when the account system is handled via PHP and MySQL. If you want to keep it away from PHP and let the web server do the job, you can password-protect the directory via an .htaccess file. That way the files won't go through PHP, but honestly there's nothing you should be worried about. I recommend you go with your method.
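For reference, a minimal sketch of the readfile approach described in the question; the session check and path are placeholders, and readfile() streams the file rather than loading it into memory as long as output buffering is switched off first:

<?php
// serve.php - gatekeep a file that lives outside the document root.
session_start();
if (empty($_SESSION['user_id'])) {   // placeholder for your login check
    http_response_code(403);
    exit;
}

$file = '/var/private/archive.zip';  // hypothetical protected file

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('Content-Length: ' . filesize($file));

// Drop any output buffers so the file is streamed, not buffered in memory.
while (ob_get_level() > 0) {
    ob_end_clean();
}
readfile($file);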