I have a PHP script running that lists files in a certain directory on the server. Is there any way to access the file's icon metadata? There are lots of issues with this, I suppose (e.g. it depends on the OS hosting the script, it depends on whether the file is using a custom icon, and you'd still have to convert the .icns file to something that can be displayed in a browser), but any suggestions are welcome. I guess I could display a different icon depending on the file extension, but it would be nice to do it automatically.
You might be able to plug into the /icons folder that most Apache installations have set up for their default directory listings.
It's not OS-dependent, at least.
You should be able to craft a URL that displays an icon for a particular extension.
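Something along these lines, as an untested sketch. It assumes the stock "Alias /icons/" from the default httpd.conf is active and that the standard icon filenames (text.gif, image2.gif, unknown.gif, ...) are present; adjust the map to whatever your install actually ships.

<?php
// Sketch: map a file extension to one of Apache's stock /icons images.
function apache_icon_url($filename) {
    $map = array(
        'txt'  => 'text.gif',
        'html' => 'layout.gif',
        'jpg'  => 'image2.gif',
        'png'  => 'image2.gif',
        'gif'  => 'image2.gif',
        'zip'  => 'compressed.gif',
        'mp3'  => 'sound2.gif',
        'exe'  => 'binary.gif',
    );
    $ext  = strtolower(pathinfo($filename, PATHINFO_EXTENSION));
    $icon = isset($map[$ext]) ? $map[$ext] : 'unknown.gif';
    return '/icons/' . $icon;
}

// Usage in a directory listing ('/var/www/files' is a placeholder path):
foreach (scandir('/var/www/files') as $file) {
    if ($file === '.' || $file === '..') continue;
    echo '<img src="' . apache_icon_url($file) . '" alt=""> ' . htmlspecialchars($file) . "<br>\n";
}
?>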
GDLib or ImageMagick is possibly what you are looking for... But if you want to access metadata, GDLib won't help. Not sure about ImageMagick.
Actually, you can create thumbnails with their help and cache them somewhere to avoid performance issues.
If you want to extract IPTC information from a file, you can use the getimagesize function with its optional second parameter.
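For example (photo.jpg is a placeholder filename; the APP13 block is only present if the image actually carries IPTC data):

<?php
// Sketch: read IPTC metadata via getimagesize()'s optional second parameter.
$path = 'photo.jpg';
$size = getimagesize($path, $info);
if (isset($info['APP13'])) {
    $iptc = iptcparse($info['APP13']);
    // Common tags: 2#120 = caption, 2#025 = keywords, 2#080 = byline
    if (isset($iptc['2#120'][0])) {
        echo 'Caption: ' . $iptc['2#120'][0] . "\n";
    }
} else {
    echo "No IPTC block found.\n";
}
?>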
I'm planning to add a file manager (a very basic one) because I have never used the FTP functions, and this looks easier (the FTP connection is lost when the script finishes). I would simply use POST requests (or what should I use?) instead of the FTP functions. Is that a good idea? Does anyone know of restrictions?
As far as I can see, the FTP functions are only for posting and receiving files.
What you need to do is add a dynamic form where you can select multiple files and upload them to a specific directory of your choice.
You will need to get all available directories and the files in them, probably with some kind of recursive function. A more optimal way is to get only the directories/files of the current folder and, when you click on a folder, fetch its files/folders.
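A rough sketch of the recursive variant (the starting path is a placeholder, and any access checks are left out):

<?php
// Recursively list directories and files under a base folder.
function list_tree($base) {
    $tree = array();
    foreach (scandir($base) as $entry) {
        if ($entry === '.' || $entry === '..') continue;
        $path = $base . DIRECTORY_SEPARATOR . $entry;
        if (is_dir($path)) {
            $tree[$entry] = list_tree($path);   // recurse into subfolder
        } else {
            $tree[] = $entry;                   // plain file
        }
    }
    return $tree;
}

print_r(list_tree('/var/www/uploads'));
?>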
Can it be done? Sure. Is it a good idea? No. People will have a way to upload malicious files, and we are not talking about images here: PHP scripts, shell scripts, executable viruses and so on...
If you are doing this only for yourself, I suggest using an FTP client for posting and receiving files.
I wouldn't recommend it, but if you do go ahead, it's probably best to use a third-party tool rather than write your own:
PHP File Manager
PHPfileNavigator2
FileManager
...
Keep in mind that both PHP and your web server can put certain restrictions on the size of the files you can transfer; it is of course possible to change these in the configuration files.
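You can check the PHP-side limits from a script (changing them happens in php.ini, .htaccess or the web server config, not at runtime):

<?php
// The upload-related limits you may need to raise, read from the current config.
echo 'upload_max_filesize: ' . ini_get('upload_max_filesize') . "\n";
echo 'post_max_size:       ' . ini_get('post_max_size') . "\n";
echo 'max_file_uploads:    ' . ini_get('max_file_uploads') . "\n";
echo 'memory_limit:        ' . ini_get('memory_limit') . "\n";
?>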
I don't know anything about ImageMagick. I need to convert images that users upload to GIF format, and to resize and optimize those images as well. Before I look further into it, can someone please clarify whether ImageMagick is installed software or a standalone script? Meaning, do I need to have it installed on my server, or can I just upload the files and use the script's commands?
I'm referring to ImageMagick for PHP.
It depends what you mean by "install."
If you are talking about the PHP extensions, then yeah, those have to be "installed."
If you are talking about the standalone binary programs, then no, they don't have to be "installed." If you can find a copy of the programs designed for the exact type and version of the operating system your server runs, you could place them somewhere accessible, give them execute permissions, and call them from your script. However, some shared hosting providers prohibit you from running compiled binaries in any way whatsoever, so this might not be such a good idea.
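As a sketch of that approach (the binary path is an assumption, adjust it to wherever you placed the program, and note that some hosts disable exec() entirely):

<?php
// Call a standalone ImageMagick "convert" binary you have execute permissions on.
$convert = '/home/youruser/bin/convert';            // hypothetical location
$in  = escapeshellarg('upload.png');
$out = escapeshellarg('upload.gif');
exec("$convert $in -resize 200x200 $out", $output, $status);
if ($status !== 0) {
    echo "convert failed with exit code $status\n";
}
?>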
Your best bet is going to be either convincing your hosting provider to install it, or switching providers to one that already has it installed.
Try MagickStudio:
To convert, edit, or compose your image directly from a Web page, press Browse to browse and select your image file or enter the URL of your image. Next, set any of the optional parameters below. Finally, press view to continue.
I wrote a script in php that allows me to get a list of files in a directory as an array, parse each one for a particular string, and then it displays all of the files that contain the search string.
My IT staff won't let me install php on the server though. Can this be done with javascript without ActiveX? Everything I could find on this is pretty old.
Alternatively, is there a way to make PHP functions like opendir and readdir work on a remote server?
Thanks
Neither JavaScript nor ActiveX will help you do what you need to do if the directories are on a remote server. So no, it can't be done.
This would be quite the security violation if you could. You will need to have something installed on the server that has file system access. If this is on an internal network, you may be able to simply enable directory browsing... but there aren't any client-side-only solutions to this.
I need to include PDF files in some web pages, and I'm getting into trouble.
The app is a simple newspaper archive, in which I can read a page right in the browser or download it as a PDF file, one file per page. What my customer can provide me is one PDF file for each page; what my customer wants from me is to navigate them through indexes (with page thumbnails) and read a chosen one directly in the page. I'm using PHP/MySQL.
I started out trying the <object> tag with type="application/pdf", but I found it's deprecated because it's not cross-platform at all (there's no support in Linux browsers, and even my Windows Firefox 3.5 couldn't show me anything).
I guessed I could transform the PDF into something different (HTML or simply images are good enough), but the only thing I found is ImageMagick, which I can't use since it must be installed on the server and I'm not the admin of that machine.
So, I'm finally looking for suggestions
Thanks
Display the PDF inline using an IFRAME. The thumbnail you can generate with ImageMagick; you should be able to use the command-line version of ImageMagick to resize and convert to JPG.
Edit:
Your best bet is to talk to the server admin and have them install PHP support for ImageMagick (the imagick extension); then you can use it as a class.
If you can't get support to install on the server, you will have to use the command line version.
You might be able to Google around for a library that wraps the command line, but it would be trivial to write it yourself.
With this in place you can create a large readable black and white png for each page. It should click through to the pdf.
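A sketch of that last idea, assuming the command-line "convert" is on the PATH and Ghostscript is available for PDF rendering (the filenames are placeholders):

<?php
// Render the first page of a PDF to a greyscale PNG, then link it back to the PDF.
$pdf = 'page-001.pdf';
$png = 'page-001.png';
if (!file_exists($png)) {
    $cmd = 'convert -density 150 ' . escapeshellarg($pdf . '[0]')
         . ' -colorspace Gray -resize 800x ' . escapeshellarg($png);
    exec($cmd, $out, $status);
}
echo '<a href="' . htmlspecialchars($pdf) . '">'
   . '<img src="' . htmlspecialchars($png) . '" alt="Page preview"></a>';
?>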
I've bumped into a problem while working on a project. I want to "crawl" certain websites of interest and save them as "full web pages", including styles and images, in order to build a mirror of them. It has happened to me several times that I bookmarked a website to read it later, and after a few days the website was down because it got hacked and the owner didn't have a backup of the database.
Of course, I can read the files with PHP very easily with fopen("http://website.com", "r") or fsockopen(), but the main target is to save the full web pages, so that in case one goes down it can still be available to others, like a "programming time machine" :)
Is there a way to do this without read and save each and every link on the page?
Objective-C solutions are also welcome since I'm trying to figure out more of it also.
Thanks!
You actually need to parse the HTML and all the CSS files that are referenced, which is NOT easy. However, a fast way to do it is to use an external tool like wget. After installing wget you could run this from the command line:
wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://example.com/mypage.html
This will download mypage.html and all linked CSS files, images, and the images linked inside the CSS.
After installing wget on your system, you could use PHP's system() function to control wget programmatically.
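Roughly like this (the URL and output directory are placeholders; escapeshellarg() keeps the URL from breaking the command):

<?php
// Drive wget from PHP with the same flags as above, plus an output directory.
$url = 'http://example.com/mypage.html';
$dir = '/var/www/mirror';
$cmd = 'wget --no-parent --timestamping --convert-links --page-requisites'
     . ' --no-directories --no-host-directories -erobots=off'
     . ' --directory-prefix=' . escapeshellarg($dir)
     . ' ' . escapeshellarg($url);
system($cmd, $status);
if ($status !== 0) {
    echo "wget exited with code $status\n";
}
?>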
NOTE: You need at least wget 1.12 to properly save images that are referenced through CSS files.
Is there a way to do this without read and save each and every link on the page?
Short answer: No.
Longer answer: if you want to save every page in a website, you're going to have to read every page in a website with something on some level.
It's probably worth looking into the Linux app wget, which may do something like what you want.
One word of warning: sites often have links out to other sites, which have links to other sites, and so on. Make sure you put some kind of "stop if different domain" condition in your spider!
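A minimal sketch of that check: compare the host of a discovered link against the host you started crawling from (relative links have no host and are allowed through).

<?php
function same_domain($link, $start_url) {
    $a = parse_url($link, PHP_URL_HOST);
    $b = parse_url($start_url, PHP_URL_HOST);
    return $a === null || strtolower($a) === strtolower($b);
}

var_dump(same_domain('http://example.com/about', 'http://example.com/')); // true
var_dump(same_domain('http://other.org/page', 'http://example.com/'));    // false
?>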
If you prefer an Objective-C solution, you could use the WebArchive class from WebKit.
It provides a public API that allows you to store whole web pages as a .webarchive file (like Safari does when you save a webpage).
Some nice features of the webarchive format:
completely self-contained (incl. CSS, scripts, images)
QuickLook support
Easy to decompose
Whatever app is going to do the work (your code, or code that you find) is going to have to do exactly that: download a page, parse it for references to external resources and links to other pages, and then download all of that stuff. That's how the web works.
But rather than doing the heavy lifting yourself, why not check out curl and wget? They're standard on most Unix-like OSes, and do pretty much exactly what you want. For that matter, your browser probably does, too, at least on a single page basis (though it'd also be harder to schedule that).
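If you do want to roll your own, here is a rough sketch of the "download a page, then collect the referenced resources" step, using PHP's curl and DOM extensions (both assumed to be enabled on your server); each collected URL would still have to be fetched and saved in turn.

<?php
$url = 'http://example.com/';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$html = curl_exec($ch);
curl_close($ch);
if ($html === false) die("fetch failed\n");

$doc = new DOMDocument();
@$doc->loadHTML($html);                 // suppress warnings from messy markup

$resources = array();
foreach ($doc->getElementsByTagName('img') as $img) {
    $resources[] = $img->getAttribute('src');
}
foreach ($doc->getElementsByTagName('link') as $link) {
    $resources[] = $link->getAttribute('href');   // stylesheets, icons, ...
}
foreach ($doc->getElementsByTagName('script') as $script) {
    if ($script->getAttribute('src')) $resources[] = $script->getAttribute('src');
}
print_r($resources);
?>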
I'm not sure if you need a programming solution to "crawl websites" or you personally need to save websites for offline viewing, but if it's the latter, there's a great app for Windows, Teleport Pro, and SiteCrawler for Mac.
You can use IDM (Internet Download Manager) for downloading full web pages; there's also HTTrack.