I need to know how much bandwidth is used when uploading a file through a form.
Let me explain in a bit more detail. I have a PHP file containing an upload form, hosted on a web host. When the user uploads a file, it goes through this form and on to another server via FTP; basically, I'm opening an FTP connection inside the PHP file that is stored on the web host.
How much bandwidth is used if I upload a 100MB file? And who uses the bandwidth needed to upload it: the receiving server (the one we upload to through FTP in the PHP file), the web host (where the PHP file that opens the FTP connection is hosted), or both?
When you use 100MB of bandwidth (the user's transfer to the web host) and another 100MB of bandwidth (the web host's transfer to the other server), that's 200MB of bandwidth at the web host: 100MB in, 100MB out. Sometimes your provider will bill those separately.
100 + 100 = 200. It really is that simple.
(Note that there is overhead in all cases, but not a ton.)
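To make the flow concrete, here is a minimal sketch of the relay pattern the question describes, using PHP's built-in FTP functions; the host, credentials, and the `userfile` field name are placeholders:

```php
<?php
// upload.php - receives the browser upload, then relays it over FTP.
// A 100MB file costs this web host ~100MB inbound (from the user)
// plus ~100MB outbound (to the FTP server) = ~200MB total.

$tmp  = $_FILES['userfile']['tmp_name'];            // file as received from the browser
$name = basename($_FILES['userfile']['name']);

$conn = ftp_connect('ftp.example.com');             // placeholder host
if (!$conn || !ftp_login($conn, 'user', 'pass')) {  // placeholder credentials
    die('FTP connection failed');
}
ftp_pasv($conn, true);                              // passive mode is usually firewall-friendlier

// This transfer consumes the web host's outbound bandwidth.
if (!ftp_put($conn, $name, $tmp, FTP_BINARY)) {
    die('FTP transfer failed');
}
ftp_close($conn);
echo 'File relayed';
```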
I'm new to Linux/Apache/PHP installations.
I have a Nextcloud installation. If I upload a large file using the browser, the upload speed is about 2-3 MB/s (HTTP/2). When I tried HTTP/1.1, the upload speed was about 10 MB/s. If I upload the same file using WinSCP, the upload speed reaches 50 MB/s.
So there is a huge difference in upload speed. Any idea how I can improve the upload speed from the browser?
Phpinfo as image: https://drive.google.com/file/d/1njwVwY8x6TxXWp5-9yVRmxio2I766nv4/view?usp=sharing
Nextcloud chunks the file before sending it to the server. There are some common issues with this:
- if you have an antivirus, it scans every chunk rather than the whole file, which takes more time.
- if you use object storage, the chunks are saved in it, fetched back to reconstruct the real file, and then the file is saved back to the object storage. This takes a very long time.
And there is the browser's AJAX limit (used to send the chunks): browsers typically allow only about 6 XHR connections open at the same time.
You can try increasing the chunk size: https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/big_file_upload_configuration.html#configuring-nextcloud
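Per the linked documentation, the chunk size is an app setting changed with the occ tool, along these lines (the www-data user and the 20MB value are install-specific examples; the value is in bytes):

```bash
# Raise the upload chunk size to 20MB (20971520 bytes).
sudo -u www-data php occ config:app:set files max_chunk_size --value 20971520
```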
Good luck
I'm working on a project that processes some files on the server at the user's request, then uploads the resulting ZIP file as a blob to Azure for storage. A few cases involve extremely large files which take about an hour to upload. It would be helpful if I could at any random moment run a separate process that queries Azure for the upload progress while the main process on the local server is still preoccupied with uploading the file.
Is there a way to do this in PHP? (If it's of any help, this project is running on Phalcon.)
Is there a way to monitor the progress of an upload to an Azure cloud server via PHP?
I don't think it's possible, because uploading a file is a single task: even though internally the file is split into multiple chunks and those chunks get uploaded, the code waits for the entire task to finish.
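That said, if you drive the chunking yourself instead of letting the SDK hide it, you can record progress after each block and let the separate process read it. A rough sketch, assuming the microsoft/azure-storage-blob SDK (the container/blob names and the progress file are placeholders, and method signatures may differ between SDK versions):

```php
<?php
require 'vendor/autoload.php';

use MicrosoftAzure\Storage\Blob\BlobRestProxy;
use MicrosoftAzure\Storage\Blob\Models\Block;

$client = BlobRestProxy::createBlobService(getenv('AZURE_STORAGE_CONNECTION_STRING'));

$local     = '/tmp/result.zip';        // placeholder path
$handle    = fopen($local, 'rb');
$blockSize = 4 * 1024 * 1024;          // 4 MB per block
$total     = filesize($local);
$sent      = 0;
$blocks    = [];

while (!feof($handle)) {
    $content = fread($handle, $blockSize);
    if ($content === false || $content === '') {
        break;
    }
    // Block IDs must be base64-encoded and all the same length.
    $blockId = base64_encode(sprintf('%08d', count($blocks)));
    $client->createBlobBlock('mycontainer', 'result.zip', $blockId, $content);

    $blocks[] = new Block($blockId, 'Uncommitted');
    $sent    += strlen($content);

    // Write progress somewhere the separate monitoring process can read it.
    file_put_contents('/tmp/upload_progress.json',
        json_encode(['sent' => $sent, 'total' => $total]));
}
fclose($handle);

// The blob only becomes visible in Azure once the block list is committed.
$client->commitBlobBlocks('mycontainer', 'result.zip', $blocks);
```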
I have logic to upload images to S3 in PHP. I need to upload the same set of images to an SFTP server as well. As I see it, there are 2 options: the first is to add logic that uploads each image from my local server to the SFTP server at the same time as I upload it to S3; the other is to write a script that transfers the images from S3 to the SFTP server. I need the same set of images to be on both the server and S3.
Of the 2 approaches, which one is optimal? Is there any other way to approach my requirement? Is there any sample PHP script available for local-to-SFTP file transfer? If yes, please provide the code.
I cannot say for sure which one is optimal, but I can definitely see a potential issue with option #1. If you perform the second upload (i.e. from your "local" server to the SFTP server) during the first upload, you make PHP wait on that operation before returning the response to the client. This could cause some unnecessary hanging for the user agent connecting to the local server.
I would explore option #2 first. If possible, look into SSHFS, which is a way to mount a remote filesystem over SSH; it uses SFTP to transfer files. With that, all you have to do is write the file once to the local server's filesystem and again to the mounted remote filesystem, and SSHFS takes care of the transfer for you.
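As for sample code for a local-to-SFTP transfer, here is a minimal sketch using phpseclib 3 (the host, credentials, and paths are placeholders):

```php
<?php
require 'vendor/autoload.php';

use phpseclib3\Net\SFTP;

$sftp = new SFTP('sftp.example.com');   // placeholder host
if (!$sftp->login('username', 'password')) {
    exit('SFTP login failed');
}

// SOURCE_LOCAL_FILE makes put() stream the file from disk instead of
// treating the second argument as literal string contents.
$ok = $sftp->put('/remote/images/photo.jpg',
                 '/local/images/photo.jpg',
                 SFTP::SOURCE_LOCAL_FILE);

echo $ok ? 'Transferred' : 'Transfer failed';
```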
Which upload method is preferable: individual file upload through FTP, or zip file upload through the file manager?
Are any files lost when using zip file upload?
Uploading a zip file can be faster, as the total size you transfer will be smaller; just keep in mind that you'll have to unzip the file after the transfer.
And if you zipped all your files correctly, nothing will be lost.
It's a lot quicker to upload a zipped file and extract it on the server. If only FTP could support remote unzipping.
If it's a large file, I tend to upload the .zip via FTP and then extract it via cPanel.
If you can, upload the zip file via FTP.
First of all, text files compress well, so you save on the size that needs to be uploaded to the server.
When moving separate files via FTP, there is usually a separate connection to the server for each file, so it will be very slow.
Also, if you can, avoid the file manager that most hosting providers offer, because moving files via the browser has a 30-second timeout (unless it's increased, which is still not recommended). Use it only when there is no other way to extract the zip file, since FTP itself cannot unzip. But even then, it will take some time to upload big files.
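If neither cPanel nor another extraction tool is available, a tiny PHP script uploaded next to the archive can do the unzipping server-side; a minimal sketch with placeholder paths:

```php
<?php
// unzip.php - extract an uploaded archive on the server itself,
// avoiding one FTP connection per file.
$zip = new ZipArchive();

if ($zip->open(__DIR__ . '/site.zip') === true) {
    $zip->extractTo(__DIR__ . '/public_html');
    $zip->close();
    echo 'Extracted';
} else {
    echo 'Could not open archive';
}
```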
My PHP is configured with a limit of 2MB for file uploads.
If I try to upload (through a PHP script) a file which is more than 2MB, the browser doesn't stop when it gets to 2MB; it uploads the entire thing, and then my PHP script says it's too large.
My question is, why does the browser not stop at 2MB and reject the file? Since the file won't be stored if it's over the limit, where does this data being uploaded actually go?
My VPS is configured with 512MB RAM and 7GB storage. Does this mean someone can upload a file bigger than 512MB or 7GB and it will kill the server because it runs out of memory/space?
PHP only gets the request after it has completed. If you want to abort earlier, there are mechanisms in your web server, like Apache's LimitRequestBody or nginx's client_max_body_size. Those fail quite ungracefully, though; to make it more user-friendly, another option is to use chunked uploads. There are several options mentioned in this question.
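Once the fully transferred request does reach PHP, the over-limit case shows up as an error code in $_FILES; a minimal sketch, with `file` as a placeholder field name:

```php
<?php
// php.ini directives that govern this: upload_max_filesize and post_max_size.
// PHP only evaluates them AFTER the whole request body has been received.
// (If post_max_size itself is exceeded, $_FILES arrives empty.)
$err = $_FILES['file']['error'] ?? UPLOAD_ERR_NO_FILE;

switch ($err) {
    case UPLOAD_ERR_OK:
        move_uploaded_file($_FILES['file']['tmp_name'],
                           '/var/uploads/' . basename($_FILES['file']['name']));
        echo 'Stored';
        break;
    case UPLOAD_ERR_INI_SIZE:   // larger than upload_max_filesize
    case UPLOAD_ERR_FORM_SIZE:  // larger than the form's MAX_FILE_SIZE field
        echo 'File too large';
        break;
    default:
        echo "Upload failed with code $err";
}
```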