I'm extracting a .zip file server-side using this:
<?php
$unzip = shell_exec("unzip zipp1.zip");
?>
It's working fine, but it doesn't overwrite existing files (and I need it to!).
Everything is in the same folder, chmod 777.
Can I add something to make it overwrite?
Thanks!
If you really need to use the shell, you can run unzip -o zipp1.zip.
However, PHP has a library for working with Zip archives, called ZipArchive: http://php.net/manual/en/class.ziparchive.php
Its extractTo() method overwrites existing files by default: http://php.net/manual/en/ziparchive.extractto.php
It's usually a good security practice to disable shell_exec(), so using PHP's libraries is recommended.
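For completeness, here is a minimal sketch of the ZipArchive route (assuming zipp1.zip sits next to the script and you want to extract into that same folder):
<?php
$zip = new ZipArchive();
if ($zip->open('zipp1.zip') === true) {
    $zip->extractTo(__DIR__);   // overwrites existing files by default
    $zip->close();
} else {
    echo 'Could not open zipp1.zip';
}
?>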
Is it possible to download a file using Wget? I want to download that file into my default browser's download directory.
It's possible, but it wouldn't achieve the effect you desire.
Running wget would cause the file to be downloaded to the server (which is something you'd be better off using the cURL library for instead of shelling out to an external binary).
If you want the browser to download it, then you need to output the file from PHP, not save it to a file on the server.
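A minimal sketch of that approach (the path and filename here are placeholders; adjust the Content-Type to your actual file):
<?php
// hypothetical file on the server that should be offered as a download
$file = __DIR__ . '/example.zip';

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('Content-Length: ' . filesize($file));
readfile($file);   // the browser saves this to its own download directory
exit;
?>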
Try something like this:
shell_exec('wget -P path_to_default_download_directory google.com');
I need to download files in bulk, each 0-2.5 MB, from a URL to my server (Linux CentOS, but it could be any other distribution too).
I would like to use wget (if you have another solution, please post it).
My first approach is to test it with only 1 file:
wget -U --load-cookies=cookies.txt "url"
This is the Shell Response:
The problem is that it doesn't download the file, only an empty HTML page. The necessary cookie is saved in the correct format in the file, and the download works in the browser.
Once downloading that single file works, I want to use a text file with all the URLs (e.g. urls.txt), where the URLs are like the one above with only one parameter changing. Then I would also like it to download maybe 10-100 files at a time.
If you have a solution in PHP or Python for this, it will help me too.
Thank you for your help!
I have solved it now with aria2. It's a great tool for such things.
Basically:
for i in foo bar 42 baz; do
wget -other -options -here "http://blah/blah?param=$i" -O $i.txt
done
Note the -O parameter, which lets you set the output filename: foo.txt is a little easier to work with than data-output?format=blahblahblah.
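Since a PHP solution was asked for as well, here is a rough sketch using the cURL extension, assuming urls.txt holds one URL per line and cookies.txt is the same Netscape-format cookie file you pass to wget. It downloads the files one after another; fetching 10-100 in parallel would need curl_multi or a tool like aria2.
<?php
$urls = file('urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

foreach ($urls as $i => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookies.txt');  // read cookies from this file
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0');   // some servers refuse requests without a user agent
    $data = curl_exec($ch);
    if ($data !== false) {
        file_put_contents('download_' . $i . '.bin', $data);  // output name is just an example
    }
    curl_close($ch);
}
?>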
I'm trying to write a script that allows an admin of a photo uploading system download all their photos at once.
Currently I am using
system('zip -r '.$_SERVER['DOCUMENT_ROOT'].'/zip.zip '.$_SERVER['DOCUMENT_ROOT'].'/images/photo-uploads');
to zip the files, but this seems to echo the names and locations of all the files onto the page.
Is there any way to get around this? If not, what is the best way to zip files on the server?
You might use exec('zip -r '.$_SERVER['DOCUMENT_ROOT'].'/zip.zip '.$_SERVER['DOCUMENT_ROOT'].'/images/photo-uploads'); instead.
You can also use the ZipArchive extension (if you are allowed to) instead of calling the system zip like that, which makes your code non-portable.
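A rough sketch of the ZipArchive version, using the same paths as your example; RecursiveDirectoryIterator is used here so subfolders are picked up too. Nothing is echoed to the page this way.
<?php
$source = $_SERVER['DOCUMENT_ROOT'] . '/images/photo-uploads';
$zip = new ZipArchive();

if ($zip->open($_SERVER['DOCUMENT_ROOT'] . '/zip.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE) === true) {
    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($source, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($files as $file) {
        // store each file under a path relative to the photo-uploads folder
        $zip->addFile($file->getPathname(), substr($file->getPathname(), strlen($source) + 1));
    }
    $zip->close();
}
?>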
You can also use output buffering:
ob_start();
system('zip -r '.$_SERVER['DOCUMENT_ROOT'].'/zip.zip '.$_SERVER['DOCUMENT_ROOT'].'/images/photo-uploads');
ob_end_clean();
This will stop any output from being shown from the system command.
For a particular project I have, no server side code is allowed. How can I create the web site in php (with includes, conditionals, etc) and then have that converted into a static html site that I can give to the client?
Update: Thanks to everyone who suggested wget. That's what I used. I should have specified that I was on a PC, so I grabbed the windows version from here: http://gnuwin32.sourceforge.net/packages/wget.htm.
If you have a Linux system available to you use wget:
wget -k -K -E -r -l 10 -p -N -F -nH http://website.com/
Options
-k : convert links to relative
-K : keep an original versions of files without the conversions made by wget
-E : rename html files to .html (if they don’t already have an htm(l) extension)
-r : recursive… of course we want to make a recursive copy
-l 10 : the maximum level of recursion. if you have a really big website you may need to put a higher number, but 10 levels should be enough.
-p : download all necessary files for each page (css, js, images)
-N : Turn on time-stamping.
-F : When input is read from a file, force it to be treated as an HTML file.
-nH : By default, wget puts files in a directory named after the site’s hostname. This disables the creation of those hostname directories and puts everything in the current directory.
Source: Jean-Pascal Houde's weblog
Build your site, then use a mirroring tool like wget or lwp-mirror to grab a static copy.
I have done this in the past by adding:
ob_start();
at the top of the pages, and then in the footer:
$page_html = ob_get_contents();
ob_end_clean();
file_put_contents($path_where_to_save_files . $_SERVER['PHP_SELF'], $page_html);
You might want to convert .php extensions to .html before baking the HTML into the files.
If you need to generate multiple pages with variables, one quite easy option is to append the md5 hash of all GET variables to the filename; you just need to change the links in the HTML too. So you can convert:
somepage.php?var1=hello&var2=hullo
to
somepage_e7537aacdbba8ad3ff309b3de1da69e1.html
Ugly, but it works.
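A sketch of how that filename could be derived, hashing the query string ($page_html and $path_where_to_save_files are the variables from the snippet above):
<?php
$base = basename($_SERVER['PHP_SELF'], '.php');            // e.g. "somepage"
$hash = $_GET ? '_' . md5(http_build_query($_GET)) : '';   // hash of the GET variables, appended to the name
file_put_contents($path_where_to_save_files . '/' . $base . $hash . '.html', $page_html);
?>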
Sometimes you can use PHP to generate javascript to emulate some features, but that cannot be automated very easily.
Create the site as normal, then use spidering software to generate an HTML copy.
HTTrack is software I have used before.
One way to do this is to create the site in PHP as normal, and have a script actually grab the webpages (through HTTP - you can use wget or write another php script that just uses file() with URLs) and save them to the public website locations when you are "done". Then you can just run the script again when you decide to change the pages again. This method is quite useful when you have a slowly changing database and lots of traffic, as you can eliminate all SQL queries on the live site.
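A rough sketch of such a build script (the URLs and output paths here are made up for illustration):
<?php
// map of output files to the live PHP pages that generate them
$pages = array(
    'index.html'   => 'http://localhost/mysite/index.php',
    'about.html'   => 'http://localhost/mysite/about.php',
    'contact.html' => 'http://localhost/mysite/contact.php',
);

foreach ($pages as $output => $url) {
    $html = file_get_contents($url);   // requires allow_url_fopen; cURL works just as well
    if ($html !== false) {
        file_put_contents('/var/www/static/' . $output, $html);
    }
}
?>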
If you use MODX, it has a built-in function to export static files.
If you have a number of pages, with all sorts of request variables and whatnot, probably one of the spidering tools the other commenters have mentioned (wget, lwp-mirror, etc) would be the easiest and most robust solution.
However, if the number of pages you need to get is low, or at least manageable, you've got a few options which don't require any third-party tools (not that you should discount them just because they are third party):
1. You can use PHP on the command line to get it to output directly into a file:
php myFile.php > myFile.html
Using this method could get painful (though you could put it all into a shell script), and it doesn't allow you to pass variables in the same way (e.g. php myFile.php?abc=1 won't work).
2. You could use another PHP file as a "build" script which contains a list of all the URLs you want, then grabs them via file_get_contents() or file() and writes them to local files. Using this method, you can also get it to check whether a file has changed (md5_file() should work for that; see the sketch after this list), so you'll know what to give your client, should they only want updates.
3. Further to #2, before you write the output to file, scan it for local URLs and add those to your list of files to download. While you're there, change those URLs to link to what you'll eventually name your output, so you have a functioning web of pages at the end. A note of caution here: if this is sounding good, you could probably just use one of the tools which already exist and do this for you.
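A sketch of the change check mentioned in option 2 (the URL and paths are illustrative):
<?php
$new_html = file_get_contents('http://localhost/mysite/page.php');
$target   = '/var/www/static/page.html';

// only rewrite the static file (and flag it for the client) when the content actually changed
if ($new_html !== false && (!file_exists($target) || md5($new_html) !== md5_file($target))) {
    file_put_contents($target, $new_html);
    echo "updated: $target\n";
}
?>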
As an alternative to wget, you could use (Win|Web)HTTrack to grab the static pages. HTTrack even corrects links to files and documents to match the static output.
You can use python or visual basic (or your choice) to create your static files all at once then upload them.
For a project with 11 million business listings in excel files I used VBA to extract the spreadsheet data into 11 mil small .php files, then zipped, ftp'd, unzipped.
https://contactlookup.us
Voila - a super fast business directory
I started with Jekyll, but after about half a million entries the generator got bogged down. For 11 million it looked like it would take about 2 months to finish the build!
I do it on my own web site for certain pages that are guaranteed not to change: I simply run a shell script that could be boiled down to (warning: bash pseudocode):
find site_folder -name \*.static.php -print -exec Staticize {} \;
with Staticize being:
# This replaces .static.php with .html
TARGET_NAME="`dirname "$1"`/"`basename "$1" .static.php`".html
php "$1" > "$TARGET_NAME"
wget is probably the most complete method. If you don't have access to that, and you have a template based layout, you may want to look into using Savant 3. I recommend Savant 3 highly over other template systems like Smarty.
Savant is very lightweight and uses PHP as the template language, not some proprietary sublanguage. The method you would want to look up is fetch(), which will "compile" your template and place the result in a variable that you can output.
http://www.phpsavant.com/
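A rough sketch of that workflow, assuming Savant3 is installed and on the include path (the template name and variable are made up):
<?php
require_once 'Savant3.php';

$tpl = new Savant3();
$tpl->assign('title', 'My static page');   // variables available inside the template

$html = $tpl->fetch('page.tpl.php');       // "compile" the template into a string
file_put_contents('page.html', $html);     // bake it out as a static file
?>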