I'm using PHP, and my script outputs a list of links to files that the user can download. To download each file, as a user I would have to copy the URL and paste it into something like Free Download Manager.
I would like to improve the user experience and have a "Download" button that would handle or initiate the download process.
I'm thinking of either writing PHP code to act as a download manager, or tying the "Download" button to the functionality of a Firefox (or similar) add-on that acts as a download manager. Does anyone have good suggestions for the sort of thing I'm trying to do?
Update:
Let's say that:
- my script presents the user with a list of files that can be downloaded.
- next to each file, there's a checkbox, then at the bottom a button that says "download selected".
If this is the setup I have, and I use force download, then clicking the "download selected" button will force-download 12 files at the same time, which is not exactly like a download manager. I'm thinking this probably requires something that takes both PHP and Firefox behavior into account.
You can use PHP's header() to force a download, one file at a time, and repeat it for multiple files.
Some links for you to reference:
http://w-shadow.com/blog/2007/08/12/how-to-force-file-download-with-php/
http://www.ryboe.com/tutorials/php-headers-force-download
Another good example from php.net: readfile()
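A minimal sketch along the lines of those tutorials; the whitelist, file names, and directory are placeholders for this example:

```php
<?php
// download.php?file=report.pdf — force the browser to save instead of display.
$allowed = ['report.pdf', 'data.csv'];        // whitelist: never trust raw input
$file = basename($_GET['file'] ?? '');        // basename() strips any path parts
if (!in_array($file, $allowed, true)) {
    http_response_code(404);
    exit;
}
$path = __DIR__ . '/files/' . $file;

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $file . '"');
header('Content-Length: ' . filesize($path));
readfile($path);                              // streams the file to the client
```

Note that this forces one download per request; for the "download selected" case, the page would have to trigger one such request per checked file.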
I have spent three weeks looking for this:
I have a page which has to download a file after verifying the data given by the user. I do the validation and execute an external script which gives me a URL, and I use that URL to download another file which I'm going to execute later.
I've tried to download the file with cURL, wget, file_put_contents, JavaScript, Ajax and jQuery with no luck. The file is 150MB+, so I created a nice progress bar to tell the user how the download is going, and I already have a method to read the downloaded size.
In fact I'm able to download the file with cURL, but the problem is that "do_form.php" won't load until the file is completely downloaded. So I want to load the page first, then download the file in the background, so I can show the user the progress of the download.
Please tell me that this is possible... Thanks!!
Have a look at Gearman. Maybe a gearman background job fits your needs. This job can even run on the same machine as the web server. Of course the early response to the page has the disadvantage that it contains no success information. You will need to poll for that information using AJAX again.
If you decide to do so, you should also have a look at GearmanManager as this eases things a lot.
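A minimal sketch of queuing the transfer with the PECL Gearman extension's doBackground(); the function name "fetch_file", the payload keys, and the URLs/paths are assumptions for this example:

```php
<?php
// Queue the big download as a Gearman background job, then let
// do_form.php render immediately. A separate worker process registered
// for "fetch_file" would perform the actual cURL transfer.
session_start();

$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);        // default gearmand host/port

$payload = json_encode([
    'url'  => 'http://example.com/big-file.bin', // placeholder: the URL your
                                                 // external script returned
    'dest' => '/var/downloads/big-file.bin',     // where the worker should save it
]);

// doBackground() returns a job handle right away instead of blocking.
$handle = $client->doBackground('fetch_file', $payload);
$_SESSION['job_handle'] = $handle;            // poll progress via AJAX later
```

The page can then poll an AJAX endpoint that reports the downloaded size, as you already do for the progress bar.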
I have a program that users want to download. Instead of offering a link to download and they download the program and enter the information, I would rather have my PHP script do it.
My website requires a username and password to log in, and there is a private page where a user can download a file. When a user goes to the "download" page, there are specific options on the page the user must choose. Once the choices are selected and the user clicks "Download", I want PHP to go to my program's source, add the missing data inside the source, compile it into a .exe, and bring it back to the user.
Thing is, I have no idea how to do this, and I'm worried that PHP will put in the information for one user and accidentally give one user's information to everyone else.
How can I make it that each binary is different?
I'm currently making my program on Windows (using Code::Blocks), but I'm going to be hosting the files on a Linux or FreeBSD server. Just wanted to mention that in case there is something I need to know.
Since you're in effect allowing the user to enter arbitrary code onto your server, this seems like a serious security risk. Why is customization of the executable needed? Could it be a configuration file?
If you want to go ahead with this, it can be done pretty easily. Perhaps use a templating engine like Mustache (not the C++ language feature, templates). Write your source files to include these Mustache placeholders.
When the user has posted the info, you just need to run the command to apply the template to get the finished source, then run your makefile to generate the finished executable. Then you can serve it like any other file.
If you want to make sure users can't get the same binary, an easy way would be to delete the finished executable after each run. There are better ways of doing this, but this one is easy.
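A rough sketch of that pipeline, assuming the Ruby `mustache` CLI and a working Makefile already exist on the server; the file names, the `user_key` field, and the `OUTPUT` make variable are made up for this example:

```php
<?php
// Render the per-user value into the source, then compile.
// All paths and names here are hypothetical.
$data = ['user_key' => bin2hex(random_bytes(16))];      // unique per request
file_put_contents('build/data.yml', json_encode($data)); // JSON is valid YAML

$out = tempnam(sys_get_temp_dir(), 'bin_');
// Apply the template, then build; escapeshellarg() guards the output path.
exec('mustache build/data.yml src/config.h.mustache > src/config.h');
exec('make -C build OUTPUT=' . escapeshellarg($out), $lines, $status);

if ($status === 0) {
    register_shutdown_function('unlink', $out);          // never reuse a binary
    // ...serve $out with force-download headers here...
}
```

Deleting the binary on shutdown is the easy safeguard mentioned above; a job queue with per-user build directories would be more robust under concurrent requests.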
I have an interesting question for you all.
My client wants a way to automatically save a PDF that is opened up in Chrome. Currently he achieves this by either clicking on the Save button in Acrobat (in the browser), or by right clicking, thus bringing up the context menu and clicking Save As --> PDF. He would like this process automated.
I did a lot of searching and I cannot find a solution to this question.
So I ask all of you, is there a way to automatically save a PDF that is embedded in a HTML page?
I'm assuming I would have to do some screen scraping to get the HTML page, which would include the link with the PDF but I'm not sure where to go from there.
Any help will be greatly appreciated.
Thank you very much, and have a great day.
Web pages require user interaction before writing files to the user's hard drive. This is a security measure to keep any ol' web page from putting files on your disk.
It is possible to make a file "Save As" dialog pop up when the user clicks on something, ready to save a particular file to disk, but in some browsers the user will still have to confirm the operation. In other browsers, downloaded files can be pre-configured to go to a particular "downloads" directory, and the user does not have to take another step after the browser initiates the save.
If, when the browser requests a file, the response is given a header:
Content-Disposition: attachment; filename="xxx.pdf"
the browser will try to save it. I don't know what happens if the web server does this for one request among many in a web page. The way I've seen this used is: the user clicks a download button, the browser requests a specific URL that indicates to the server that the user wants to save the file, and then, and only then, the web server includes this header.
You can do this with .htaccess. Add the following to your .htaccess file.
AddType application/octet-stream .pdf
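If mod_headers is available, an alternative is to keep the real MIME type and force the save via Content-Disposition instead; a sketch, not tested against your setup:

```apache
# Requires mod_headers; forces "Save As" for PDFs while keeping the real type
<FilesMatch "\.pdf$">
    Header set Content-Disposition "attachment"
</FilesMatch>
```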
I need to let a user download a file (for example, a PDF). Which will take longer:
- sending the file through PHP (with specific headers),
- or putting it in a public HTTP folder and giving the user the public link to download it (without PHP's help)?
In the first case, the original file can stay in a private zone.
But I'm thinking it will take some time to send the file through PHP.
So how can I measure the time PHP spends sending the file, and how much memory it consumes?
P.S. In the first case, when PHP sends the headers and the browser (if a PDF plugin is installed) tries to open the file inside the browser, is PHP still working, or does it push out the whole file immediately after the headers are sent? And if the plugin is not installed and the browser shows a "Save As" dialog, is PHP still working?
There will be very little difference if you are worried about download speeds.
I guess it comes down to how big your files are, how many downloads you expect, whether your documents should be publicly accessible, and the download speed of the client.
Your main issue with PHP is the memory it consumes: each download will tie up a PHP process, which could be 8M-20M depending on what your script does, whether you use a framework, etc.
Out of interest, I wrote a symfony application to offer downloads, and to do things like concurrency limiting, bandwidth limiting etc. It's here if you're interested in taking a look at the code. (I've not licensed it per se, but I'm happy to make it GPL3 if you like).
I'm sorry to bother you with my issues, but I'm facing a problem that I'm having some trouble fixing.
I have a website with a login-restricted area.
Once the user is logged in, he can access my company's files (big files).
But to avoid the link being spread all over the internet, when a user wants to download a file located at an external URL, he clicks on a URL which contains the name of the file hashed with MD5. That redirects to a PHP script which generates the download headers and streams the file using fsockopen.
However, this does not support resuming downloads, which is not very practical when downloading files of 2 or 3 GB, or when you are using a download manager.
How can I enable resuming?
I have seen some PHP scripts using the fread() method, but I don't think it would be a good idea in my case, because for big files it could bog down the server. When you do a progressive fread() on a 2 GB file, good luck for the process when 30 people are downloading the file at the same time.
If you use fopen() and fseek() like this, you're essentially doing the same thing any web server does to answer HTTP Range requests.
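A minimal sketch of honoring a Range request so download managers can resume; $path is assumed to be an already-validated local file, and the chunked fread() loop keeps memory flat regardless of file size:

```php
<?php
// Serve $path with resume support (HTTP 206 Partial Content).
$path = '/data/files/big.bin';                 // placeholder path
$size = filesize($path);
$start = 0;
$end   = $size - 1;

if (isset($_SERVER['HTTP_RANGE']) &&
    preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    $start = (int)$m[1];
    if ($m[2] !== '') { $end = (int)$m[2]; }
    http_response_code(206);                   // Partial Content
    header("Content-Range: bytes $start-$end/$size");
}

header('Accept-Ranges: bytes');
header('Content-Length: ' . ($end - $start + 1));
header('Content-Type: application/octet-stream');

$fp = fopen($path, 'rb');
fseek($fp, $start);                            // jump to the resume offset
$left = $end - $start + 1;
while ($left > 0 && !feof($fp)) {
    $chunk = fread($fp, min(8192, $left));     // small chunks, constant memory
    echo $chunk;
    $left -= strlen($chunk);
}
fclose($fp);
```

Because only 8K is held in memory at a time, 30 concurrent downloads of a 2 GB file cost 30 processes, not 30 file-sized buffers.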
You could make it so that it doesn't allow the file to be downloaded unless they are logged in.
So instead of providing them with a direct link to the file, they get a link like foo.com/download.php?myfile.ext
And the download.php would check the session before providing the user with the file download.
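A sketch of such a download.php; the session key name and the private directory are assumptions:

```php
<?php
// download.php — gate the file behind the login session.
session_start();
if (empty($_SESSION['user_id'])) {             // hypothetical session key
    http_response_code(403);
    exit('Please log in first.');
}

$file = basename($_GET['file'] ?? '');         // strip any path components
$path = '/srv/private/' . $file;               // files live outside the web root
if ($file === '' || !is_file($path)) {
    http_response_code(404);
    exit;
}

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $file . '"');
header('Content-Length: ' . filesize($path));
readfile($path);
```

Keeping the files outside the public web root is what makes the session check enforceable; a direct link to them simply doesn't exist.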