Send file from PHP page to another - php

I need to send a file from one PHP page (on which the client uploads their files) to another PHP page on another server, where the files will finally be stored.
To communicate I currently use the JSON-RPC protocol; is it wise to send the file this way?
$string = file_get_contents("uploaded_file_path");
send the string to the remote server, and then
file_put_contents("file_name", $received_string_from_remote);
I understand that this approach takes twice as long as uploading directly to the second server.
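Roughly, this is what I have in mind (the method and parameter names below are only placeholders); the binary contents would need to be base64-encoded to travel inside the JSON:
// Server 1: wrap the uploaded file in a JSON-RPC request.
$payload = json_encode([
    'jsonrpc' => '2.0',
    'method'  => 'storeFile',   // placeholder method name
    'params'  => [
        'name' => basename($_FILES['file']['name']),
        'data' => base64_encode(file_get_contents($_FILES['file']['tmp_name'])),
    ],
    'id' => 1,
]);
// ...POST $payload to the remote JSON-RPC endpoint...

// Server 2: decode the request (normally read from php://input) and write the file.
$params = json_decode($payload, true)['params'];
file_put_contents('/var/uploads/' . $params['name'], base64_decode($params['data']));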
Thanks
[edit]
details:
I need to write a service allowing a PHP (possibly Joomla) user to use a simple API to upload files and send some other data to my server, which analyzes them, puts them in a DB, and sends back a response.
[re edit]
I need to create a simple method allowing the end user to do that; they will use the interface on server 1 (the uploading side), use PHP, and stop there, so no remote SSH mounts or other strange, funny stuff.

If I were you, I'd send the file directly to the second server and store its file name and/or some hash of the file name (for easier retrieval) in a database on the first server.
Using this approach, you could query the second server from the first one for the status of the operation. This way, you can leave the file processing to the second machine, and assign user interaction to the first machine.
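A minimal sketch of that idea, assuming cURL is available; the server 2 URL, database credentials and table name are placeholders:
// Server 1: forward the uploaded file to server 2 and record its metadata locally.
$tmpPath  = $_FILES['file']['tmp_name'];
$fileName = basename($_FILES['file']['name']);
$hash     = sha1_file($tmpPath);

$ch = curl_init('https://server2.example.com/receive.php');   // placeholder URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, [
    'file' => new CURLFile($tmpPath, $_FILES['file']['type'], $fileName),
]);
$response = curl_exec($ch);
curl_close($ch);

// Keep only the metadata on server 1, so it can later query server 2 for status.
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');   // placeholder credentials
$stmt = $pdo->prepare('INSERT INTO files (name, hash) VALUES (?, ?)');
$stmt->execute([$fileName, $hash]);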

As I said in my comment, THIS IS NOT RECOMMENDED, but anyway...
You can use sockets reading byte by byte:
http://php.net/manual/en/book.sockets.php
or you can use ftp:
http://php.net/manual/en/book.ftp.php
Anyway, the question with your approach is whether to run the process asynchronously or synchronously with the user's navigation. I really suggest you pass it via SQL or FTP and give the user a response based on another event (like a file watcher, then an email, etc.) or via SQL (binary, BLOB, etc.).
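For completeness, a rough sketch of the socket route (host, port and paths are placeholders; error handling and partial-write handling are omitted):
// Sender (server 1): push the file over a raw TCP socket in small chunks.
$sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($sock, '192.168.0.2', 9000);        // placeholder host/port
$fp = fopen('/tmp/uploaded_file', 'rb');           // placeholder path
while (!feof($fp)) {
    socket_write($sock, fread($fp, 8192));
}
fclose($fp);
socket_close($sock);

// Receiver (server 2): accept the connection and write the bytes to disk.
$server = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_bind($server, '0.0.0.0', 9000);
socket_listen($server);
$client = socket_accept($server);
$out = fopen('/var/uploads/received_file', 'wb');  // placeholder path
while (($chunk = socket_read($client, 8192)) !== '' && $chunk !== false) {
    fwrite($out, $chunk);
}
fclose($out);
socket_close($client);
socket_close($server);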

Use SSHFS on machine 1 to map a file path to machine 2 (using SSH) and save the uploaded file to machine 2. After the file is uploaded, trigger machine 2 to do the processing and report back as normal.
This would allow you to upload to machine 1, but actually stream it to machine 2's HD so it can be processed faster on that machine.
This will be faster than any SQL or manual file copy solution, because the file transfer happens while the user is uploading the file.

If you don't need the files immediately after receiving them (for processing etc.), then you can save them all in one folder on Server 1 and set up a cron job to scp the contents of the folder to Server 2. All this assumes you are using Linux servers; this is one of the most secure and efficient ways to do it.
For more info please take a look at http://en.wikipedia.org/wiki/Secure_copy or google scp.

Related

Best way to "pipe" file contents from a remote server, via a 2nd server, output to browser

I have numerous storage servers, and then more "cache" servers which are used to load balance downloads. At the moment I use RSYNC to copy the most popular files from the storage boxes to the cache boxes, then update the DB with the new server IDs, so my script can route the download requests to a random box which has the file.
I'm now looking at better ways to distribute the content, and wondering if it's possible to route requests to any box at random and have the download script check whether the file exists locally. If it doesn't, it would "get" the file contents from the remote storage box and output them in real time to the browser, while keeping the file on the cache box so that the next time the same request is made it can just serve the local copy rather than connecting to the storage box again.
Hope that makes sense(!)
I've been playing around with RSYNC, wget and cURL commands, but I'm struggling to find a way to output the data to browser as it comes in.
I've also been reading up on reverse proxies with nginx, which sounds like the right route... but it still sounds like they require the entire file to be downloaded from the origin server to the cache server before anything can be output to the client(?). Some of my files are 100GB+ and each server has a 1Gbps bandwidth limit, so at best it would take 100s to download a file of that size to the cache server before the client sees any data at all. There must be a way to "pipe" the data to the client as it streams?
Is what I'm trying to achieve possible?
You can pipe data without downloading the full file using streams. One example for downloading a file as a stream would be the Guzzle sink feature. One example for uploading a file as a stream would be the Symfony StreamedResponse. Using those the following can be done:
Server A has a file the user wants
Server B gets the user request for the file
Server B uses Guzzle to setup a download stream to server A
Server B outputs the StreamedResponse directly to the user
Doing so will serve the download in real time without having to wait for the entire file to be finished. However, I do not know if you can stream to the user and store the file on disk at the same time. There is a stream_copy_to_stream function in PHP which might allow this, but I don't know that for sure.
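A rough sketch of that flow (URLs and file paths are placeholders; Guzzle and Symfony HttpFoundation assumed to be installed via Composer), which also writes the bytes to a local cache file while echoing them to the client:
use GuzzleHttp\Client;
use Symfony\Component\HttpFoundation\StreamedResponse;

$client = new Client();
// Ask Guzzle for a streaming response instead of buffering the whole body.
$origin = $client->request('GET', 'http://storage1.example.com/files/big.iso', [
    'stream' => true,
]);

$body  = $origin->getBody();
$cache = fopen('/var/cache/files/big.iso', 'wb');   // local copy for future requests

$response = new StreamedResponse(function () use ($body, $cache) {
    // Relay each chunk to the browser and keep a copy on the cache box.
    while (!$body->eof()) {
        $chunk = $body->read(8192);
        fwrite($cache, $chunk);
        echo $chunk;
        flush();
    }
    fclose($cache);
});
$response->headers->set('Content-Type', 'application/octet-stream');
$response->send();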

Inter-network File Transfers using PHP with polling

I am designing a web-based file-managment system that can be conceptualised as 3 different servers:
The server that hosts the system interface (built in PHP) where users 'upload' and manage files (no actual files are stored here, it's all meta).
A separate staging server where files are placed to be worked on.
A file-store where the files are stored when they are not being worked on.
All 3 servers will be *nix-based on the same internal network. Users, based in Windows, will use a web interface to create an initial entry for a file on Server 1. This file will be 'uploaded' to Server 3 either from the user's local drive (if the file doesn't currently exist anywhere on the network) or another network drive on the internal network.
My question relates to the best programmatic approach to achieve what I want to do, namely:
When a user uploads a file (selecting the source via a web form) from the network, the file is transferred to Server 3 as an inter-network transfer, rather than passing through the user (which I believe is what would happen if it was sent as a standard HTTP form upload). I know I could set up FTP servers on each machine and attempt to FXP files between locations, but is this preferable to PHP executing a command on Server 1 (which will have global network access), to perform a cross-network transfer that way?
The second problem is that these are very large files we're talking about, at least a gigabyte or two each, and so transfers will not be instant. I need some method of polling the status of the transfer, and returning this to the web interface so that the user knows what is going on.
Alternatively this upload could be left to run asynchronously to the user's current view, but I would still need a method to check the status of the transfer to ensure it completes.
So, if using an FXP solution, how could polling be achieved? If using a file move/copy command from the shell, is any form of polling possible? PHP/jQuery solutions would be very acceptable.
My final part to this question relates to Windows network drive mapping. A user may map a drive (and select a file from) an arbitrarily specified mapped drive. Their G:\ may relate to \server4\some\location\therein, but presumably any drive path given to the server via a web form will only send the G:\ file path. Is there a way to determine the 'real path' of mapped network drives?
Any solution would be used to stage files from Server 3 to Server 2 when the files are being worked on - the emphasis being on these giant files not having to pass through the user's local machine first.
Please let me know if you have comments and I will try to make this question more coherent if it is unclear.
As far as I’m aware (and I could be wrong) there is no standard way to determine the UNC path of a mapped drive from a browser.
The only way to do this would be to have some kind of control within the web page. Could be ActiveX or maybe flash. I’ve seen ActiveX doing this, but not flash.
In the past when designing web based systems that need to know the UNC path of a user’s mapped drive I’ve had to have a translation of drive to UNC path stored server side. I did have a luxury though of knowing which drive would map to what UNC path. If the user can set arbitrary paths then this obviously won’t work.
Ok, as I’m procrastinating and avoiding real work I’ve given this some thought.
I’ll preface this by saying that I’m in no way a Linux expert and the system I’m about to describe has just been thought up off the top of my head and is not something you’d want to put into any kind of production. However, it might help you down the right path.
So, you have 3 servers, the Interface Server (LAMP stack I’m assuming?) your Staging Server and your File Store Server. You will also have Client Machines and Network Shares. For the purpose of this design your Network Shares are hosted on nix boxes that your File Store can scp from.
You’d create your frontend website that tracks and stores information about files etc. This will also hold the details about which files are being copied, which are in Staging and so on.
You'll also need some kind of Service running on the File Store Server. I'll call this the File Copy Service. This will be responsible for copying the files from your servers hosting the network shares.
Now, you’ve still got an issue with how you figure out what path the users file is actually on. If you can stop users from mapping their own drives and force them to use consistent drive letters then you could keep a translation of drive letter to UNC path on the server. If you can’t, well I’ll let you figure that out. If you’re in a windows domain you can force the drive mappings using Group Policies.
Anyway, the process for the system would work something like this.
User goes to system and selects a file
The Interface Server takes the file path and calls the File Copy Service on the File Store Server
The File Copy Service connects to the server that hosts the file and initiates the copy. If they're all nix boxes you could easily use something like SCP. Now, I haven't actually looked up how to do it, but I'd be very surprised if you can't get a running total of percentage complete from SCP as it's copying. With this running total the File Copy Service can keep updating the database on the Interface Server with how the copy is doing, so the user can see this from the Interface Server. (A cruder, size-based polling alternative is sketched below.)
The File Copy Service can also be used to move files from the File Store to the staging server.
As I said, very roughly thought out. The above would work, but it all depends a lot on how your systems are set up etc.
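If scp's own progress output turns out to be hard to capture from a background service, a crude but workable alternative is to record the expected size when a copy is queued and poll the destination file's size; a minimal sketch (the paths, the expected-size lookup and the endpoint name are placeholders):
// status.php - polled from the Interface Server (e.g. via jQuery) while the copy runs.
$destination  = '/data/filestore/incoming/bigfile.dat';   // where the copy is being written
$expectedSize = 2147483648;                                // recorded when the copy was queued

clearstatcache();                                          // filesize() results are cached otherwise
$copied  = file_exists($destination) ? filesize($destination) : 0;
$percent = $expectedSize > 0 ? round($copied / $expectedSize * 100) : 0;

header('Content-Type: application/json');
echo json_encode(['copied' => $copied, 'total' => $expectedSize, 'percent' => $percent]);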
Having said all that though, there must be software that would do this out there. Have you looked?
If I am right, this is the architecture:
1.)
First, let's solve the issue of "inter-server transfer".
I would solve this by mounting the file systems from Servers 2 and 3 on Server 1 via NFS.
https://help.ubuntu.com/8.04/serverguide/network-file-system.html
So PHP can store files directly on the file system and doesn't need to know which server the files really live on.
/etc/exports
of Server 2 + 3
/directory/with/files 192.168.IPofServer.1(rw,sync)
exportfs -ra
/etc/fstab
of Server 1
192.168.IPofServer.2:/var/lib/data/server2/ /directory/with/files nfs rsize=8192,wsize=8192,timeo=14,intr
192.168.IPofServer.3:/var/lib/data/server3/ /directory/with/files nfs rsize=8192,wsize=8192,timeo=14,intr
mount -a
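With the mounts in place, the upload handler on Server 1 can write straight into the shared directory; a minimal sketch using the example mount point above:
// Server 1: the NFS mount makes the remote directory look local,
// so a plain move_uploaded_file() lands the file on Server 2 or 3.
$target = '/directory/with/files/' . basename($_FILES['file']['name']);
if (move_uploaded_file($_FILES['file']['tmp_name'], $target)) {
    echo "Stored at $target";
} else {
    http_response_code(500);
    echo 'Upload failed';
}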
2.)
Get upload progress for really large files.
Here are some possibilities for a progress bar for HTTP uploads.
But for a resume function you would have to use a Flash plugin.
http://fineuploader.com/#demo
https://github.com/valums/file-uploader
Or you can build it yourself using the APC extension:
http://www.amwsites.com/blog/2011/01/use-a-combination-of-jquery-php-apc-uploadprogress-to-show-progress-bar-during-an-upload/
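A rough sketch of the APC route (requires the APC extension with apc.rfc1867 enabled; the upload form must contain a hidden APC_UPLOAD_PROGRESS field, placed before the file input, whose value is a unique key):
// progress.php - polled via AJAX while the upload runs.
$key    = $_GET['key'];                   // the same unique key used in the hidden form field
$status = apc_fetch('upload_' . $key);    // 'upload_' is the default apc.rfc1867_prefix

header('Content-Type: application/json');
if ($status !== false && $status['total'] > 0) {
    echo json_encode(['percent' => round($status['current'] / $status['total'] * 100)]);
} else {
    echo json_encode(['percent' => 0]);
}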
3.)
Let the server load files from a network drive.
This I would try with a Java applet, to figure out the real network path and send it to the server, so the server can fetch the file in the background.
But I have never done anything like this before and have no further information.

PHP File Upload in Sharded Server Configuration

We use multiple servers to handle incoming web requests which are load-balanced in a round-robin fashion. I've run into an issue that I'm not sure how to solve.
Using AJAX (qqFileUploader), I am uploading a file. By default it goes into the /tmp folder which is fine. The problem is when I try to retrieve that file, that retrieval request gets handled by the next server in line which does not have the file I uploaded. If I keep repeating the request over and over again, it will eventually reach the original server (via round robin load balancing) where the file was stored and then I can open it. Obviously that is not a good solution.
Here is essentially the code: http://jsfiddle.net/Ap27Z/. I removed some of it for brevity. You will see that the uploader object makes a call to a PHP file to do the file upload and then after the file upload is complete, another AJAX call is made to a script to process the .csv file. This is where the process is getting lost in the round-robin.
I read a few questions here on SO relating to uploading files to memory and it seems that it is essentially not currently feasible. Is there another option I can use to upload a file and handle it all within the same request?
The classic solution to this type of problem is to use sticky sessions on your load balancer. That may not be a good solution for you as it would be modifying your whole setup to fix a small problem.
I would suggest adding a sub-domain for each machine: uploads go to www.example.com, and each server is also allocated its own subdomain (www1.example.com, www2.example.com) which always resolves directly to that server rather than going through round-robin DNS.
As part of the success result, you could pass back the server name that points to the exact server, rather than the load-balanced name, and then all subsequent Ajax calls that reference the uploaded data use that server specific domain name, rather than the generic load balanced domain name.
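A minimal sketch of that hand-off (the subdomain naming, the 'qqfile' field name and the processing endpoint are assumptions; adapt them to your setup):
// upload.php - runs on whichever machine the load balancer happened to pick.
$token = md5(uniqid('', true));
move_uploaded_file($_FILES['qqfile']['tmp_name'], sys_get_temp_dir() . '/' . $token . '.csv');

header('Content-Type: application/json');
echo json_encode([
    'success' => true,
    'token'   => $token,
    // A name that routes straight back to this machine, e.g. "www2.example.com".
    'server'  => gethostname() . '.example.com',
]);

// The client then calls e.g. http://www2.example.com/process.php?token=... directly,
// skipping the round-robin for every follow-up request about this file.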
Is there another option I can use to upload a file and handle it all
within the same request?
Sure, why not? The code that handles the POSTing of the data can do whatever you want it to do.
There are (at least) 2 solutions to your problem:
You change the load-balancing.
There are several load balancing proxies out there which support session affinity a.k.a. "sticky sessions". That means that a user always gets the same server within a session.
Two programs that can act in this way are HAProxy (related question here on SO) and nginx with a custom module (tutorial here).
You change the files' location.
The other choice would be to change the location of your stored files to some place that all of your servers can access via the same path. This could be, for example, an NFS mount or a database (with the files stored as BLOBs). This way, it doesn't matter which server processes the request, as all of them have access to the file.
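For the database variant, a minimal sketch (PDO/MySQL assumed; the DSN, table name and columns are placeholders):
// Any server behind the load balancer can store the uploaded file as a BLOB...
$pdo  = new PDO('mysql:host=db.internal;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO uploads (name, contents) VALUES (?, ?)');
$stmt->bindValue(1, $_FILES['file']['name']);
$stmt->bindValue(2, file_get_contents($_FILES['file']['tmp_name']), PDO::PARAM_LOB);
$stmt->execute();
$id = $pdo->lastInsertId();

// ...and any other server can read it back by id, regardless of which
// machine the load balancer sends the follow-up request to.
$stmt = $pdo->prepare('SELECT contents FROM uploads WHERE id = ?');
$stmt->execute([$id]);
$contents = $stmt->fetchColumn();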

Upload to ftp using php

Is it possible to upload files (even big files) to a ftp using PHP?
Been reading about ftp_connect() and it looks like I can, or can't I?
I had a look at this example; it's in Italian but you can read the code anyway. If that does what I'm asking, will I have to add an HTML form? Basically, I need to be able to pick up a file from my computer via a web page and upload it to an FTP server.
Anyone?
Thanks
Especially on large files, you should make sure that the maximum execution time for the script is big enough to complete the transfer before the script is aborted. You can choose the maximum execution time in the php.ini file.
You will have to use an HTML form if you want to pick up the file from your computer via a web page.
As soon as the form is submitted, you can access the file using the $_FILES array. You can use this information to get a temporary path to where the file is stored, and can read it from there to upload it to a remote server using the FTP functions.
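A minimal sketch of that flow (the form field name, FTP host, credentials and remote directory are placeholders; error handling kept short):
// upload.php - receives the HTML form POST and pushes the file on to the FTP server.
$tmpPath  = $_FILES['userfile']['tmp_name'];              // field name from your form
$fileName = basename($_FILES['userfile']['name']);

set_time_limit(0);                                         // large files can take a while

$conn = ftp_connect('ftp.example.com');                    // placeholder host
if (!$conn || !ftp_login($conn, 'ftpuser', 'ftppass')) {   // placeholder credentials
    die('Could not connect or log in to the FTP server');
}
ftp_pasv($conn, true);                                     // passive mode is usually friendlier to firewalls
if (ftp_put($conn, '/incoming/' . $fileName, $tmpPath, FTP_BINARY)) {
    echo 'File uploaded to the FTP server';
} else {
    echo 'FTP upload failed';
}
ftp_close($conn);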
You could also split the two processes by using the PHP script only to drop the file into the local file system, and then use a second program which runs locally to do the upload. This has the advantage that you won't run into problems when multiple users upload simultaneously and your FTP server is set up to allow only one simultaneous connection. You could also write the second script in PHP and run it via a cron job, for example once every 30 minutes.
If your goal is a direct stream from your computer to the FTP server, this is not easily possible using a pure PHP / HTML solution since the PHP script is only invoked when the file transfer from your computer to the machine serving the PHP script is complete.

PHP upload field destination of other server

I have a website which I'll call website.com that is located on server1. website.com has a field to upload a file. When someone uploads a file on website.com, I don't want the file uploaded to server1, I want it to upload to another server, server2. What is the best way to do this? Can I do this using php, a shell script?
After the file is uploaded to server2, I have a shell script to execute on the file which I will also eventually have to figure out how to run from server1.
I hope this makes sense, thanks in advance.
Another possible way to do this is by uploading the file to your website.com site and using cURL to send the image to the other server. Once this completes you can remove the image again.
See CURL PHP send image for more information.
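A rough sketch of that cURL hand-off (the URL, field name and paths are placeholders), including removing the local copy afterwards:
// Server 1: after the normal upload, forward the file to server 2 and delete the local copy.
$localPath = '/var/www/uploads/' . basename($_FILES['image']['name']);
move_uploaded_file($_FILES['image']['tmp_name'], $localPath);

$ch = curl_init('https://server2.example.com/receive.php');   // placeholder URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, ['image' => new CURLFile($localPath)]);
$ok = curl_exec($ch) !== false && curl_getinfo($ch, CURLINFO_HTTP_CODE) === 200;
curl_close($ch);

if ($ok) {
    unlink($localPath);   // remove the temporary copy from server 1
}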
-- UPDATE --
For an SSH connection you need to install additional libraries in order to allow PHP to make SSH connections. An excellent tutorial can be found here.
-- UPDATE 2 --
The question intrigued me, so I expanded my research. There seems to be another PHP library, phpseclib, around on SourceForge. In the documentation on page 5 there is some information on how it works.
The only good way to make this work is to read the image as binary, send it over to the other server as text, and write that into a file, recreating the image from the original source.
Also place the image in a public folder that accepts calls from your website1 domain; this way you also prevent hot-linking of your images and save considerable data.
I also came across this for help with phpseclib.
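If you go the phpseclib route, a minimal SFTP sketch (using the current namespaced phpseclib 3 API rather than the older SourceForge release; host, credentials and paths are placeholders):
use phpseclib3\Net\SFTP;

// Server 1: push the uploaded file to server 2 over SFTP.
$sftp = new SFTP('server2.example.com');            // placeholder host
if (!$sftp->login('deployuser', 'secret')) {        // placeholder credentials
    exit('SFTP login failed');
}

$sftp->put(
    '/var/www/uploads/' . basename($_FILES['image']['name']),   // remote path
    $_FILES['image']['tmp_name'],                                // local uploaded temp file
    SFTP::SOURCE_LOCAL_FILE                                      // treat the 3rd argument as a local path, not raw data
);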
In the end I wouldn't choose a solution like this. I would move your website from server1 to server2, just to keep everything in one place.
Can't you put the script that handles the upload on Server 2?
You can have your HTML pages with the form served from server 1, but call the PHP for the upload on server 2.
Update
For example...
Server 1 has a file index.php which has a form:
<form action='http://server2.com/some_directory/uploader.php' method='POST' enctype='multipart/form-data'>
.... Some form code
</form>
The form on index.php points to a PHP script on server 2, via a URL. That PHP script can now handle the input.
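On server 2, uploader.php can then be a standard upload handler; a minimal sketch (the field name, target directory and shell script path are placeholders):
// uploader.php on server 2: receives the cross-server form POST.
$targetDir = '/var/www/uploads/';
$fileName  = basename($_FILES['userfile']['name']);   // 'userfile' is the form field name

if (move_uploaded_file($_FILES['userfile']['tmp_name'], $targetDir . $fileName)) {
    // Kick off the shell script mentioned in the question, if desired.
    // exec('/usr/local/bin/process_upload.sh ' . escapeshellarg($targetDir . $fileName));
    echo 'Upload received on server 2';
} else {
    http_response_code(400);
    echo 'Upload failed';
}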
Of course this will only work if server2 is accessible from the internet; if not, you will have to use some sort of shell script on server 1 to move the files over the internal network after they are uploaded to server 1.
