I'm using PHP. I want to read a URL (e.g. http://www.cnn.com), create a new PDF from the page, and store the PDF on the server.
Try this: http://code.google.com/p/wkhtmltopdf/. It's open source, and seems to be based on the WebKit engine, which is used in many browsers today (Chrome, Safari, iOS, Android). It also has PHP bindings!
Here's the link to the GitHub page: https://github.com/mreiferson/php-wkhtmltox.
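If you'd rather skip the bindings, here's a minimal sketch of shelling out to the wkhtmltopdf binary from PHP. It assumes the binary is installed and on the PATH; the output path is hypothetical.

<?php
// Render a URL to a PDF and store it on the server via wkhtmltopdf.
$url     = 'http://www.cnn.com';
$pdfPath = '/var/www/pdfs/cnn.pdf';   // hypothetical output location

// escapeshellarg() guards against shell injection if $url ever comes from user input.
$cmd = sprintf('wkhtmltopdf %s %s 2>&1', escapeshellarg($url), escapeshellarg($pdfPath));
exec($cmd, $output, $exitCode);

if ($exitCode !== 0) {
    die('wkhtmltopdf failed: ' . implode("\n", $output));
}
echo "PDF stored at $pdfPath";
?>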
You can't just create a PDF from a web page, especially if you don't want to make a JPEG of the whole thing and stuff that into a PDF.
Related
I have a website, textscloud.com. On this website I make images with the PHP GD library. Here is a link to a demo:
On this page I allow the user to download the image on which the text will be printed. The download link looks like
This download.php file sets a header so that the image generated with the PHP GD library is downloaded, like this:
header("Content-type: image/png");
But Google doesn't crawl these images. Does anyone know a solution? I can't store these images on the server.
You don't mention how you are feeding the beast, so I suggest you start by providing Google a sitemap via their Webmaster Tools. You can specifically list the images that you want crawled; Google provides good help articles to get you going.
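For example, Google's image sitemap extension lets you list generated images explicitly even when they have no static file behind them. A sketch, with placeholder URLs based on your site:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>http://textscloud.com/some-text-page</loc>
    <image:image>
      <image:loc>http://textscloud.com/download.php?text=demo</image:loc>
    </image:image>
  </url>
</urlset>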
Google can't index images that are not stored permanently; I'm quite sure it can't even index images without context (i.e. ones that are not part of a describing/linking page).
You can try to:
Send a cache header to allow caching of the image.
Rewrite the actual URL to something like http://textscloud.com/get_img/download/VkZaU1FtUXdNVVZWVkZKT1ZWUXdPUT09.png (it should match your filename header); a sketch of both follows below.
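A hedged sketch of both ideas: the header values are standard HTTP, the rewrite rule assumes Apache with mod_rewrite, and the code parameter name is made up.

<?php
// In download.php: send cache headers so the generated image may be cached.
header('Content-Type: image/png');
header('Cache-Control: public, max-age=86400');                       // one day
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 86400) . ' GMT');
// ... build and output the GD image as before ...
?>

# .htaccess sketch (illustrative): make the URL look like a static .png file
RewriteEngine On
RewriteRule ^get_img/download/(.+)\.png$ download.php?code=$1 [L]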
If you look at Evernote, or use their Web Clipper browser add-on, it can save a webpage completely, with all the styles and images of the clipped page. So, for example, if I save this very page with it, it will be saved as is.
Does anyone have an idea of how Evernote does it? I want to do it with either PHP or JavaScript, but I'm not sure how to save a webpage with all its styles and images. I know about Internet Explorer's MHT format, but that's not what Evernote does.
So basically, it would be great if one could save a webpage with all styles and images (excluding dynamic content such as JS) in a single file and be able to open it in any major browser. Any pointer to such a script would also be helpful.
I have also noticed a similar thing in Gmail: when you copy any part of a page and paste it into Gmail Compose, it renders as it was; the same happens in MS Word too.
Thanks for your help and hints :)
Replace linked stylesheets with style blocks containing the CSS copied from the linked stylesheets. Replace image sources with data URLs.
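A minimal sketch of the image-inlining half (the helper name is made up):

<?php
// Turn a local image file into a data: URL suitable for an <img src="...">.
function imageToDataUrl($path) {
    $mime = mime_content_type($path);                  // e.g. image/png
    $data = base64_encode(file_get_contents($path));
    return "data:$mime;base64,$data";
}

echo '<img src="' . imageToDataUrl('logo.png') . '">';
?>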
Or just shell out a call to wget -mk and mirror the site:
<?php
// -m mirrors the site recursively; -k converts links so the local copy works offline.
system('wget -mk http://foo.com/bar');
?>
You could do file_get_contents() and then recursively download whatever scripts/images you need.
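A rough sketch of that approach with DOMDocument. It's deliberately naive: it assumes absolute image URLs and ignores CSS and scripts, which real code would also need to handle.

<?php
// Fetch a page, then fetch each <img> it references.
$html = file_get_contents('http://example.com/page.html');

$doc = new DOMDocument();
@$doc->loadHTML($html);   // suppress warnings from real-world markup

foreach ($doc->getElementsByTagName('img') as $img) {
    $src = $img->getAttribute('src');
    if (preg_match('#^https?://#', $src)) {   // relative URLs would need resolving
        file_put_contents(basename($src), file_get_contents($src));
    }
}
?>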
I'm currently trying to deliver MP4 video for use in HTML5 video (using video-js) via a PHP script for controlling video access. After some research I was able to get this working, with the help of the stackoverflow article found here. If I navigate to the PHP script, I can view the video as if I were viewing it via its absolute path (for instance localhost/myvideo.mp4 rather than localhost/myscript.php) in Firefox, Safari and IE. My problem is with Google Chrome, which simply shows a blacked out screen with a small media player in the centre, and does nothing.
I did try using a quick rewrite such as localhost/avideo.mp4 which routes to the PHP script, but unfortunately this didn't change anything.
Here's my script:
if (is_file($uri)) {
    header('Content-Type: video/mp4');

    if (isset($_SERVER['HTTP_RANGE'])) {
        // The client asked for part of the file (e.g. when seeking),
        // so answer with a 206 Partial Content response.
        $this->rangeDownload($uri);
        exit;
    } else {
        // No Range header: send the whole file in chunks.
        header("Content-Length: " . filesize($uri));
        $this->readfile_chunked($uri);
        exit;
    }
} else {
    // error: file not found
}
The rangeDownload method has been taken directly from appendix A of this link as suggested in the aforementioned stackoverflow article.
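For context, the core of a byte-range responder like rangeDownload boils down to something like the sketch below; the linked appendix handles more edge cases (multiple ranges, stricter validation).

<?php
// Answer a request like "Range: bytes=1000-" with a 206 Partial Content.
// Assumes the Content-Type header has already been sent by the caller.
function serveRange($uri) {
    $size = filesize($uri);

    if (!preg_match('/bytes=(\d*)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
        header('HTTP/1.1 416 Requested Range Not Satisfiable');
        header("Content-Range: bytes */$size");
        exit;
    }

    if ($m[1] === '') {                       // "bytes=-500": the last 500 bytes
        $start = $size - (int)$m[2];
        $end   = $size - 1;
    } else {
        $start = (int)$m[1];
        $end   = ($m[2] === '') ? $size - 1 : (int)$m[2];
    }

    header('HTTP/1.1 206 Partial Content');
    header('Accept-Ranges: bytes');
    header("Content-Range: bytes $start-$end/$size");
    header('Content-Length: ' . ($end - $start + 1));

    // Fine for a sketch; chunked reads are kinder to memory for large videos.
    $fp = fopen($uri, 'rb');
    fseek($fp, $start);
    echo fread($fp, $end - $start + 1);
    fclose($fp);
}
?>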
Maybe the problem is with the URL (more specifically, the extension). Normally, you would use the Content-Disposition header, but I understand that this is not desirable when delivering content to mobiles.
Try using localhost/myscript.php/myvideo.mp4
It is important not to use the "Content-Disposition" HTTP header, since some phones refuse to accept content when using it.
By including the filename on the URL, you will trick the phone to think it's a real file and to accept it.
Now, when you send the download URL to the customer, you don't normally know yet what device the customer has, so you don't know what file formats the device will support. Therefore, you can't include the filename on that URL, and once again, you will need an intermediate download page. Once more, we will use a URL like:
http://wap.mydomain.tld/get.php/123456abcdef
This time, when the customer connects to download the content, the get.php script will not create a temporary file, but point to another script which streams the file contents.
Supposing the resultant content to download will be "image.jpg", the intermediate download page could point the customer to a URL like:
http://wap.mydomain.tld/download.php/123456abcdef/image.jpg
From ( http://mobiforge.com/developing/story/content-delivery-mobile-devices )
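In PHP terms the trick looks roughly like this sketch: everything after the script name arrives in $_SERVER['PATH_INFO'], so the script can serve the real bytes while the device sees a normal-looking .mp4 URL. The file lookup here is hypothetical.

<?php
// Sketch for URLs like /myscript.php/myvideo.mp4 - PATH_INFO holds "/myvideo.mp4",
// which is only there to make the URL end in a real-looking filename.
$pathInfo = isset($_SERVER['PATH_INFO']) ? $_SERVER['PATH_INFO'] : '';

// Resolve the real file from a token/session server-side, NOT from the URL.
$uri = '/var/media/myvideo.mp4';   // hypothetical

header('Content-Type: video/mp4');
header('Content-Length: ' . filesize($uri));
readfile($uri);
?>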
I understand you're using video-js, but I recommend using html5media (also check out the GitHub page for more info). I had to make videos available on a website for work, and I tried a few things, including video-js, but html5media was the only one that I could get working in all browsers.
A side note that might help others: one of the requirements was that we host all files ourselves, so that we wouldn't be reliant on third-party servers to serve JavaScript files or Flash players. I can't remember if this was easy with video-js, but I know that with html5media we were able to download Flowplayer and have everything on our servers.
And to generate the three recommended video formats (MP4, WebM and Theora), I used Miro Video Converter.
How could I use imagegrabscreen to get a thumbnail image and a full-size image of a specific website?
I was thinking that I could have an array that I feed the wanted URIs into, but I am a bit stuck on how I would set the width and height of the image I need to grab. I also think that I would need a thumbnail class and a full-image class and call them when required.
Any better ideas?
Keep in mind that imagegrabscreen is Windows-only. If you have multiple displays set up, this function will only grab the primary display. Also, for this to work, your Apache service must be set to "Allow service to interact with desktop", otherwise you will just get a blank image.
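With those caveats, here's a minimal sketch of grabbing the screen and saving both a full-size image and a thumbnail (the file names and the 200px width are just examples):

<?php
// Windows-only: capture the primary display and write full size + thumbnail.
$full = imagegrabscreen();
if ($full === false) {
    die('imagegrabscreen failed - check the "interact with desktop" setting');
}
imagepng($full, 'fullsize.png');

// Scale to a 200px-wide thumbnail, preserving the aspect ratio.
$w  = imagesx($full);
$h  = imagesy($full);
$tw = 200;
$th = (int) ($h * $tw / $w);
$thumb = imagecreatetruecolor($tw, $th);
imagecopyresampled($thumb, $full, 0, 0, 0, 0, $tw, $th, $w, $h);
imagepng($thumb, 'thumbnail.png');

imagedestroy($full);
imagedestroy($thumb);
?>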
This discussion covers the use of imagegrabscreen pretty well: Getting imagegrabscreen to work
There are a lot of other discussions about saving webpages as images, too - here are a few:
Website screenshots
Web Page Screenshots with PHP?
How can I generate a screenshot of a webpage using a server-side script?
PHP: How to capture browser window screen with php?
What is the best way to create a web page thumbnail?
Screenshot of current page using PHP
shell tool which renders web site including javascript
In any languages, Can I capture a webpage and save it image file? (no install, no activeX)
I want to open a PDF file in an iframe or an ExtJS window and let the user click to add labels. What scripts can I use?
I'm coding with ExtJS / PHP / MySQL, and I use the FPDF/FPDI libraries to write on a PDF file.
Any ideas? Help please.
Thanks
Simply pointing an iframe at a PDF file only works when the user has allowed their web browser to embed Adobe Reader (I'm not even sure that other PDF readers support this at all). This might be the common configuration for IE users, but in other browsers, and especially on other OSes, it's not as common.
Another option is to use a service that renders your PDF as a web page. For example, using Google Docs it's dead easy:
<iframe
src="http://docs.google.com/gview?url=http://yourdomain.com/file.pdf&embedded=true"
style="width:600px; height:500px;"></iframe>