How to get a file from xooplate.com using PHP - php

I see that http://xooplate.com/templates/download/13693 returns a file.
I have tried using
$ch = curl_init("http://xooplate.com/templates/download/13693");
$fp = fopen("example_homepage.zip", "wb"); // binary mode so the bytes aren't mangled on Windows
curl_setopt($ch, CURLOPT_FILE, $fp); // write the response body straight into the file
curl_setopt($ch, CURLOPT_HEADER, 0); // keep the HTTP headers out of the file
curl_exec($ch);
curl_close($ch);
fclose($fp);
function get_include_contents($filename) {
    if (is_file($filename)) {
        ob_start();
        include $filename;
        $contents = ob_get_contents();
        ob_end_clean();
        return $contents;
    }
    return false;
}
$string = get_include_contents('http://xooplate.com/templates/download/13693');
but it is not working. I would appreciate any help, thanks all.

Updated Answer:
Hey buddy, I checked that site. They attach an onClick jQuery handler to the download button (widget.js, line 27) with a posting-domain security check. I tried to trigger the onClick event, but the domain-check security function on the server only allows http://www.stumbleupon.com and http://xooplate.com to post the data using AJAX, so it wouldn't be possible using cURL or PHP DOM. Sorry. :) Great security.
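For reference, that kind of check is presumably done against the Referer/Origin header of the request. A minimal sketch of what spoofing it with cURL would look like (illustrative only; as the answer above explains, the server-side check still blocks the request):
$ch = curl_init("http://xooplate.com/templates/download/13693");
curl_setopt($ch, CURLOPT_REFERER, "http://xooplate.com"); // pretend the request came from an allowed domain
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow any redirect to the actual file
$data = curl_exec($ch);
curl_close($ch);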
The solution below is not the answer!
Hi, I found a solution using the PHP Simple HTML DOM library. Just download it and use the following code:
<?php
include("simple_html_dom.php");
$html = file_get_html('http://xooplate.com/templates/details/13693-creative-circular-skills-ui-infographic-psd');
foreach($html->find('a[class="btn_green nturl"]') as $element) // quote the class value since it contains a space
    echo $element->href . '<br>';
?>

Related

Getting whole HTML element with PHP

I want to get the whole <article> element, which represents one listing (containing the image, title, link, and description), but it doesn't work. Can someone help me please?
<?php
$url = 'http://www.polkmugshot.com/';
$content = file_get_contents($url);
$first_step = explode( '<article>' , $content );
$second_step = explode("</article>" , $first_step[3] );
echo $second_step[0];
?>
You should definitely be using cURL for this type of request.
function curl_download($url){
    // is cURL installed?
    if (!function_exists('curl_init')){
        die('cURL is not installed!');
    }
    $ch = curl_init();
    // URL to download
    curl_setopt($ch, CURLOPT_URL, $url);
    // User agent
    curl_setopt($ch, CURLOPT_USERAGENT, "Set your user agent here...");
    // Include header in result? (0 = no, 1 = yes)
    curl_setopt($ch, CURLOPT_HEADER, 0);
    // Should cURL return or print out the data? (true = return, false = print)
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // Timeout in seconds
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    // Download the given URL, and return output
    $output = curl_exec($ch);
    // Close the cURL resource, and free system resources
    curl_close($ch);
    return $output;
}
For best results for your question, combine it with the Simple HTML DOM parser and use it like:
$output = curl_download('http://www.polkmugshot.com/');
$html = str_get_html($output); // parse the downloaded HTML string first
// Find all images
foreach($html->find('img') as $element)
    echo $element->src . '<br>';
// Find all links
foreach($html->find('a') as $element)
    echo $element->href . '<br>';
Good Luck!
I'm not sure I get you right, but I guess you need a PHP DOM parser. I suggest this one (it is a great PHP library for parsing HTML).
Also, you can get the whole HTML code like this:
$url = 'http://www.polkmugshot.com/';
$html = file_get_html($url);
echo $html;
Probably a better way would be to parse the document and run some XPath queries over it afterwards, like so:
$url = 'http://www.polkmugshot.com/';
$xml = simplexml_load_file($url);
$articles = $xml->xpath("//article");
foreach ($articles as $article) {
    // do something useful here
}
Read about SimpleXML here.
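Note that simplexml_load_file only succeeds on well-formed XML, which real-world HTML rarely is. A minimal sketch of the same XPath idea using DOMDocument (which tolerates broken HTML) together with DOMXPath, assuming the same URL:
$url = 'http://www.polkmugshot.com/';
$doc = new DOMDocument();
@$doc->loadHTMLFile($url); // @ silences warnings about invalid HTML
$xpath = new DOMXPath($doc);
foreach ($xpath->query('//article') as $article) {
    echo $doc->saveHTML($article); // print each matched element's markup
}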
Extract the articles with DOMDocument. Working example:
<?php
$url = 'http://www.polkmugshot.com/';
$content = file_get_contents($url);
$domd = @DOMDocument::loadHTML($content); // @ suppresses warnings from invalid HTML
foreach($domd->getElementsByTagName("article") as $article){
    var_dump($domd->saveHTML($article));
}
and as pointed out by @Guns, you'd better use cURL, for several reasons:
1: file_get_contents will fail if allow_url_fopen is not set to true in php.ini.
2: until around PHP 5.5.0, file_get_contents kept reading from the connection until it was actually closed, which for many servers can be many seconds after all content is sent, while cURL only reads up to the limit given by the Content-Length HTTP header, which makes for much faster transfers (luckily this was fixed).
3: cURL supports gzip and deflate compressed transfers, which again makes for much faster transfers (when the content is compressible, such as HTML), while file_get_contents will always transfer plain; see the sketch below.
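To illustrate point 3, a minimal sketch of enabling compressed transfers with cURL, using the question's URL; passing an empty string to CURLOPT_ENCODING advertises every encoding the cURL build supports (gzip, deflate) and decompresses the response automatically:
$ch = curl_init('http://www.polkmugshot.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_ENCODING, ""); // "" = offer all supported encodings
$html = curl_exec($ch);
curl_close($ch);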

How to get content from a specific link?

I have a link in a variable, e.g.
$link = 'http://google.com';
and I try to get the content from this link with the function fopen, e.g.:
$var = fopen("'".$link."'", "rb");
echo stream_get_contents($var);
but without success. The error is:
Warning: file_get_contents('http://google.com'): failed to open stream: No such file or directory in /var/www/...
If I use the URL directly,
$var = fopen('http://google.com', "rb");
echo stream_get_contents($var);
this works perfectly.
How do I fix this, or what method should I use if the link is in a variable?
Based on your posted code, the problem is the pair of literal quotes you concatenate around $link: PHP ends up looking for a resource named 'http://google.com', quotes included. Pass the variable directly:
<?php
$link = "http://www.google.com";
$var = fopen($link, "rb");
echo stream_get_contents($var);
?>
This always worked for me.
$url = 'http://google.com';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the content instead of printing it
$data = curl_exec($ch);
curl_close($ch);
echo $data;

fopen alternative for FPDF (PHP)

I am using FPDF to extract info from a PNG file. Unfortunately, the server has fopen disabled. Can anyone recommend a good way of getting around this? Any help would be much appreciated. Thanks in advance!
function _parsepng($file)
{
    // Extract info from a PNG file
    $f = fopen($file,'rb');
    if(!$f)
        $this->Error('Can\'t open image file: '.$file);
    $info = $this->_parsepngstream($f,$file);
    fclose($f);
    return $info;
}
You can try cURL, but usually if your hosting company disables one, they also disable the other.
I really don't know if it will work :-)
but you may try fsockopen:
http://www.php.net/manual/en/function.fsockopen.php
file:// can be used as the protocol.
Hope this helps.
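A minimal sketch of what that could look like over plain HTTP on port 80, with a hypothetical host and path (an HTTP/1.0 request is used so the server won't send a chunked body):
$fp = fsockopen('www.example.com', 80, $errno, $errstr, 10);
if (!$fp) {
    die("fsockopen failed: $errstr ($errno)");
}
// HTTP/1.0 request, so the body arrives unchunked
fwrite($fp, "GET /image.png HTTP/1.0\r\nHost: www.example.com\r\nConnection: close\r\n\r\n");
$response = stream_get_contents($fp);
fclose($fp);
// split the headers from the body and keep only the body
$body = substr($response, strpos($response, "\r\n\r\n") + 4);
file_put_contents('/tmp/myfile.png', $body);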
Ended up using a cURL workaround. Thanks everyone for the input!
function _parsepng($file)
{
    $ch = curl_init($file);
    $fp = fopen('/tmp/myfile.png', 'wb');
    curl_setopt($ch, CURLOPT_HEADER, false);
    curl_setopt($ch, CURLOPT_FILE, $fp); // stream the download straight into the file
    curl_exec($ch); // with CURLOPT_FILE set, curl_exec returns true/false, not the data
    curl_close($ch);
    fclose($fp); // flush and close the download before re-opening it for reading
    // Extract info from a PNG file
    $f = fopen('/tmp/myfile.png','rb');
    if(!$f)
        $this->Error('Can\'t open image file: '.$file);
    $info = $this->_parsepngstream($f,$file);
    fclose($f);
    return $info;
}
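In case the hard-coded /tmp path is a concern, a variant sketch of the same workaround that downloads into a php://temp stream and hands it straight to _parsepngstream (assuming _parsepngstream only needs a readable handle, which is how FPDF uses it):
function _parsepng($file)
{
    $ch = curl_init($file);
    $fp = fopen('php://temp', 'w+b'); // in-memory stream, spills to disk only if large
    curl_setopt($ch, CURLOPT_HEADER, false);
    curl_setopt($ch, CURLOPT_FILE, $fp);
    curl_exec($ch);
    curl_close($ch);
    rewind($fp); // rewind so _parsepngstream reads from byte 0
    $info = $this->_parsepngstream($fp, $file);
    fclose($fp);
    return $info;
}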

Get the filesize of a js file on another domain using PHP

How do I get the filesize of a js file on another website? I am trying to create a monitor to check that a js file exists and that it is more than 0 bytes.
For example on bar.com I would have the following code:
$filename = 'http://www.foo.com/foo.js';
echo $filename . ': ' . filesize($filename) . ' bytes';
You can use an HTTP HEAD request.
<?php
$url = "http://www.neti.ee/img/neti-logo.gif";
$head = get_headers($url, 1);
echo $head['Content-Length'];
?>
Notice: this is not a real HEAD request, but a GET request that PHP parses for its Content-Length header. Unfortunately the PHP function name is quite misleading. This might be sufficient for small js files, but for bigger files use a real HTTP HEAD request with cURL, because then the server only has to send the headers rather than the whole file.
For that case, use the code provided by Jakub.
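Alternatively, if you want to stay with get_headers() but avoid downloading the body, you can make it send a real HEAD request by changing the default stream context first. A sketch, using the same image URL as above:
stream_context_set_default(array('http' => array('method' => 'HEAD')));
$head = get_headers('http://www.neti.ee/img/neti-logo.gif', 1); // now issues a true HEAD request
echo $head['Content-Length'];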
Just use cURL; here is a perfectly good example:
Ref: http://www.php.net/manual/en/function.filesize.php#92462
<?php
$remoteFile = 'http://us.php.net/get/php-5.2.10.tar.bz2/from/this/mirror';
$ch = curl_init($remoteFile);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); //not necessary unless the file redirects (like the PHP example we're using here)
$data = curl_exec($ch);
curl_close($ch);
if ($data === false) {
    echo 'cURL failed';
    exit;
}
$contentLength = 'unknown';
$status = 'unknown';
if (preg_match('/^HTTP\/1\.[01] (\d\d\d)/', $data, $matches)) {
    $status = (int)$matches[1];
}
if (preg_match('/Content-Length: (\d+)/', $data, $matches)) {
    $contentLength = (int)$matches[1];
}
echo 'HTTP Status: ' . $status . "\n";
echo 'Content-Length: ' . $contentLength;
?>
Result:
HTTP Status: 302
Content-Length: 8808759
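As an aside, the header parsing can be skipped entirely; a sketch of the same check using curl_getinfo, with the same mirror URL (CURLINFO_CONTENT_LENGTH_DOWNLOAD returns -1 when the server sends no Content-Length):
$ch = curl_init('http://us.php.net/get/php-5.2.10.tar.bz2/from/this/mirror');
curl_setopt($ch, CURLOPT_NOBODY, true); // HEAD request, no body
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
$length = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
curl_close($ch);
echo 'HTTP Status: ' . $status . "\n";
echo 'Content-Length: ' . $length;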
Another solution. http://www.php.net/manual/en/function.filesize.php#90913
This is just a two-step process:
Crawl the js file and store it in a variable.
Check whether the length of the js file is greater than 0.
That's it!
Here is how you can do it in PHP:
<?php
$data = file_get_contents('http://www.foo.com/foo.js');
if (strlen($data) > 0):
    echo "yay";
else:
    echo "nay";
endif;
?>
Note: you can use an HTTP HEAD request as suggested by Uku, but if you also need the js file's content, you would have to crawl it again anyway. :(

Are there any open-source PHP web proxies ready to use?

I need a PHP web proxy that reads the HTML, shows it to the user, and rewrites all the links so that when the user clicks the next link the proxy handles the request again. Just like this code, but it should additionally rewrite all the links.
<?php
// Set your return content type
header('Content-type: text/html');
// Website url to open
$daurl = 'http://www.yahoo.com';
// Get that website's content
$handle = fopen($daurl, "r");
// If there is something, read and return
if ($handle) {
    while (!feof($handle)) {
        $buffer = fgets($handle, 4096);
        echo $buffer;
    }
    fclose($handle);
}
?>
I hope I have explained it well. I am asking so as not to reinvent the wheel.
An additional question: will this kind of proxy deal with content like Flash?
For an open source solution, check out PHProxy. I've used it in the past and it seemed to work quite well from what I can remember.
It will sort of work; you need to rewrite any relative path to an absolute one, and I think cookies won't work in this case. Use cURL for these operations...
function curl($url){
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $data = curl_exec($ch);
    curl_close($ch); // close the handle before returning; code placed after return would never run
    return $data;
}
$url = "http://www.yahoo.com";
echo curl($url);
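To sketch the link-rewriting part: a naive pass with DOMDocument that prefixes relative hrefs with the proxied site's base URL. It deliberately ignores ../ paths, scheme-relative URLs, anchors, images, and forms; a real proxy like PHProxy handles all of those.
function rewrite_links($html, $base){
    $doc = new DOMDocument();
    @$doc->loadHTML($html); // @ silences warnings about sloppy HTML
    foreach ($doc->getElementsByTagName('a') as $a) {
        $href = $a->getAttribute('href');
        // only touch relative links; absolute URLs pass through unchanged
        if ($href !== '' && !preg_match('#^https?://#i', $href)) {
            $a->setAttribute('href', rtrim($base, '/') . '/' . ltrim($href, '/'));
        }
    }
    return $doc->saveHTML();
}
echo rewrite_links(curl("http://www.yahoo.com"), "http://www.yahoo.com");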
