I am writing a parser that collects video URLs. I can't save the resulting videos to the folder I want. How can I fix this? Here is the code:
foreach ($href as $key => $link) {
    $doc = file_get_html('https://example.com/'.$link);
    foreach ($doc->find("#video source") as $el) {
        $video[] = "https:".$el->src;
    }
}
// finally I get
/* array(
    "https://example/video1.mp4",
    "https://example/video2.mp4",
    "https://example/video3.mp4")
*/
$dirSubtitles = $_SERVER['DOCUMENT_ROOT'].'/video/';
foreach ($video as $key => $address) {
    $url  = $address;
    $path = $dirSubtitles;
    $fp = fopen($path."video".$key, 'w');
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_FILE, $fp);
    $data = curl_exec($ch);
    curl_close($ch);
    fclose($fp);
}
You might want to read about file_put_contents().
I hope this points you in the right direction.
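As a minimal sketch of that suggestion (the helper name and the extension handling here are my own, not from the question): fetch each URL and write it out in one call with file_put_contents(), keeping the file extension from the URL so the result is recognized as a video.

```php
<?php
// Hypothetical helper, not from the original post: download one video
// and save it as video<KEY>.<ext> inside $dir. Returns the saved path,
// or null when the fetch fails.
function save_video(string $url, string $dir, int $key): ?string
{
    // keep the original extension (falls back to mp4 for bare URLs)
    $path = parse_url($url, PHP_URL_PATH) ?: $url;
    $ext  = pathinfo($path, PATHINFO_EXTENSION) ?: 'mp4';

    $data = @file_get_contents($url); // also accepts local paths
    if ($data === false) {
        return null;                  // unreachable URL: skip it
    }

    $dest = rtrim($dir, '/') . "/video{$key}.{$ext}";
    file_put_contents($dest, $data);
    return $dest;
}
```

With the question's array this replaces the whole cURL loop: foreach ($video as $key => $address) { save_video($address, $dirSubtitles, $key); }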
Related
I am trying to save the content of any website to a .txt file using cURL in PHP.
There is some issue in the code: yesterday the data showed up in web_content.txt, but I don't know why or where the error comes from, because now the site's content is no longer written to the .txt file.
Here is my code. Please check it and help me figure out how to fix the error and get the data.
<?php
if ($_REQUEST['url_text'] == "") {
    echo "<div class='main_div border_radius'><div class='sub_div'> Please type any site URL.</div></div>";
} else {
    $site_url = $_REQUEST['url_text'];
    $curl_ref = curl_init(); // init the curl handle
    curl_setopt($curl_ref, CURLOPT_URL, $site_url); // getting site
    curl_setopt($curl_ref, CURLOPT_CONNECTTIMEOUT, 2);
    curl_setopt($curl_ref, CURLOPT_RETURNTRANSFER, true);
    $site_data = curl_exec($curl_ref);
    if (empty($site_data)) {
        echo "<div class='main_div border_radius'><div class='sub_div'> Data is not available.</div></div>";
    } else {
        echo "<div class='main_div border_radius'><div class='sub_div'>".$site_data."</div></div>";
    }
    $fp = fopen("web_content.txt", "w") or die("Unable to open file!"); // open a file in write mode
    curl_setopt($curl_ref, CURLOPT_FILE, $fp);
    curl_setopt($curl_ref, CURLOPT_HEADER, 0);
    curl_exec($curl_ref);
    curl_close($curl_ref);
    fclose($fp);
}
?>
if (isset($_GET['start_tag_textbox']) && ($_GET['end_tag_textbox'])) {
    get_tag_data($_GET['start_tag_textbox'], $_GET['end_tag_textbox']);
}
header('content-type: text/plain');
function get_tag_data($start, $end) {
    $file_data = "";
    $tag_data = "";
    $file = fopen("web_content.txt", "r");
    $file_data = fread($file, filesize("web_content.txt"));
    fclose($file);
    $last_pos = strpos($start, '>');
    $start = substr_replace($start, ".*>", $last_pos);
    preg_match_all('#'.$start.'(.*)'.$end.'#U', $file_data, $matches);
    for ($i = 0; $i < count($matches[1]); $i++) {
        echo $matches[1][$i]."\n";
    }
}
?>
<?php
$url = "http://www.ucertify.com";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$output = curl_exec($ch);
curl_close($ch);
$file = fopen("web_content.txt","w");
fwrite($file,$output);
fclose($file);
?>
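The snippet above fetches the page once. The question's version calls curl_exec() twice on the same handle (once for the echo, once for the file), which issues two HTTP requests and can leave the file empty. A sketch of the single-request variant (the function wrapper is mine; variable names follow the question):

```php
<?php
// One curl_exec() serves both the echo and the file write.
function fetch_and_store(string $site_url, string $file = 'web_content.txt')
{
    $ch = curl_init($site_url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); // the question's 2s is very tight
    $site_data = curl_exec($ch);
    curl_close($ch);

    if ($site_data !== false && $site_data !== '') {
        echo $site_data;                      // show the page
        file_put_contents($file, $site_data); // and keep a copy of the same string
    }
    return $site_data;
}
```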
I am using a function inside a PHP class to read images from an array of URLs and write them to the local machine.
Something like below:
function ImageUpload($urls)
{
    $image_urls = explode(',', $urls);
    foreach ($image_urls as $url)
    {
        $url = trim($url);
        $img_name = //something
        $source = file_get_contents($url);
        $handle = fopen($img_name, "w");
        fwrite($handle, $source);
        fclose($handle);
    }
}
It successfully reads and writes 1 or 2 images, but then raises a 500 Internal Server Error while reading the 2nd or 3rd image.
There is nothing relevant in the Apache log file. I also replaced the file_get_contents call with the following cURL statements, but the result is the same (although cURL seems to read one more image than file_get_contents):
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 500);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
$source = curl_exec($ch);
curl_close($ch);
unset($ch);
The problem only occurs when reading from http URLs; if the images are stored locally, reading and writing them works fine.
I don't see any handle for reading in the loop: your $handle = fopen($img_name, "w"); is only for writing. You would also need $handle = fopen($img_name, "r"); for reading, because you cannot fread() from a handle opened with "w".
Additional answer:
Could you modify it to the following (and see if it works)? Note that stream_context_create() expects an array of options, not the array of URLs:
.........
$img_name = //something
$opts = array('http' => array('method' => 'GET'));
$context = stream_context_create($opts);
$source = file_get_contents($url, false, $context);
.....
.....
I have made some changes to your code; hope that helps :)
$opts = array(
    'http' => array(
        'method' => "GET",
        'header' => "Content-Type: text/html; charset=utf-8"
    )
);
$context = stream_context_create($opts);
$image_urls = explode(',', $urls);
foreach ($image_urls as $url) {
    // second argument is use_include_path, which should be false here
    $result = file_get_contents(trim($url), false, $context);
    if ($result === FALSE) {
        print "Error with this URL : " . $url . "<br />";
        continue;
    }
    $handle = fopen($img_name, "a+");
    fwrite($handle, $result);
    fclose($handle);
}
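If the 500 keeps appearing, it helps to know exactly which URL dies. A sketch of the cURL variant with per-URL error reporting (the helper name is mine, not from the thread; note the timeouts are in seconds, so the 500 above would wait over eight minutes):

```php
<?php
// Hypothetical helper: fetch one image, return its bytes or null,
// and log curl_error() so the failing URL shows up in the error log.
function fetch_image(string $url): ?string
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); // seconds, not 500
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);        // cap the whole transfer
    $data = curl_exec($ch);
    if ($data === false) {
        error_log("fetch failed for $url: " . curl_error($ch));
        $data = null;
    }
    curl_close($ch);
    return $data;
}
```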
Here is my code; it's not working properly. Can anybody help me identify the problem?
$urls = file('list.txt', FILE_IGNORE_NEW_LINES);
foreach ($urls as $url) {
    copy(trim($url), "c:/data/$url");
    echo "$url is done";
    ob_flush();
    flush();
}
Some URLs do not exist.
I want each file from a URL to be saved under the name of that URL.
A URL will look like: http://site.com/index.htm
Here's a PHP cURL function:
/* gets the data from a URL */
function get_urlData($url) {
$ch = curl_init();
$timeout = 5;
curl_setopt($ch,CURLOPT_URL,$url);
curl_setopt($ch,CURLOPT_RETURNTRANSFER,1);
curl_setopt($ch,CURLOPT_CONNECTTIMEOUT,$timeout);
$data = curl_exec($ch);
curl_close($ch);
return $data;
}
Put it in your PHP file before your existing foreach.
Your code adapted:
$urls = file('list.txt', FILE_IGNORE_NEW_LINES);
foreach ($urls as $url) {
    $data = get_urlData($url);
    // $data is the page content, not a path, so write it out directly;
    // the URL has to be encoded to be usable as a Windows file name
    file_put_contents("c:/data/" . urlencode($url), $data);
    echo "$url is done";
    ob_flush();
    flush();
}
Not tested but should work just fine.
$urls = file('list.txt', FILE_IGNORE_NEW_LINES);
foreach ($urls as $url) {
    $con = file_get_contents($url);
    if ($con !== false) {
        // encode the URL so it is a valid file name
        if (file_put_contents("c:/data/" . urlencode(trim($url)), $con)) {
            echo "$url is done";
            // don't really see the point of these, but okay...
            ob_flush();
            flush();
        }
    }
}
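Both approaches still have to deal with the snag the question started from: "c:/data/$url" embeds ':' and '/' in the file name, which Windows rejects. One way out (my suggestion, not from the thread) is to urlencode() the URL for storage and urldecode() it back later:

```php
<?php
// ':' and '/' are illegal in Windows file names, so percent-encode the
// whole URL; the mapping is reversible with urldecode().
$url  = 'http://site.com/index.htm';
$name = urlencode($url);

// $name is "http%3A%2F%2Fsite.com%2Findex.htm" - safe as a file name;
// urldecode($name) gives back the original URL
```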
That should do it.
Is it possible to make cURL access a URL and return the result as a file resource, the way fopen() does?
My goals:
Parse a CSV file
Pass it to fgetcsv
My obstruction: fopen is disabled
My chunk of code (using fopen):
$url = "http://download.finance.yahoo.com/d/quotes.csv?s=USDEUR=X&f=sl1d1t1n&e=.csv";
$f = fopen($url, 'r');
print_r(fgetcsv($f));
Then I tried this with cURL:
$curl = curl_init();
curl_setopt($curl, CURLOPT_VERBOSE, true);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, false);
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_POSTFIELDS, $param);
curl_setopt($curl, CURLOPT_URL, $url);
$content = #curl_exec($curl);
curl_close($curl);
But, as usual, $content comes back as a string.
Now, is it possible for cURL to return a file resource pointer, just like fopen()? I'm on PHP < 5.3 (5.1.x-something), so I can't use str_getcsv(), which is 5.3-only.
My error
Warning: fgetcsv() expects parameter 1 to be resource, boolean given
Thanks
Assuming that by fopen is disabled you mean "allow_url_fopen is disabled", a combination of CURLOPT_FILE and php://temp make this fairly easy:
$f = fopen('php://temp', 'w+');
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_FILE, $f);
// Do you need these? Your fopen() method isn't a post request
// curl_setopt($curl, CURLOPT_POST, true);
// curl_setopt($curl, CURLOPT_POSTFIELDS, $param);
curl_exec($curl);
curl_close($curl);
rewind($f);
while ($line = fgetcsv($f)) {
print_r($line);
}
fclose($f);
Basically this creates a pointer to a "virtual" file, and cURL stores the response in it. Then you just reset the pointer to the beginning and it can be treated as if you had opened it as usual with fopen($url, 'r');
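The rewind is the step people usually miss. Here is a network-free illustration of the same pattern (the CSV sample is made up): anything written to the php://temp stream can be read back with fgetcsv() once the pointer is reset.

```php
<?php
// Write CSV text into the temp stream, exactly as CURLOPT_FILE would
$f = fopen('php://temp', 'w+');
fwrite($f, "s,l1\nUSDEUR=X,0.92\n");

rewind($f); // without this, fgetcsv() starts at EOF and returns false

$rows = [];
while (($line = fgetcsv($f)) !== false) {
    $rows[] = $line;
}
fclose($f);
// $rows is [["s", "l1"], ["USDEUR=X", "0.92"]]
```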
You can create a temporary file using fopen() and then fwrite() the contents into it. After that, the newly created file will be readable by fgetcsv(). The tempnam() function should handle the creation of arbitrary temporary files.
According to the comments on str_getcsv(), users without access to the function could try the version below. There are also various other approaches in the comments; make sure you check them out.
function str_getcsv($input, $delimiter = ',', $enclosure = '"', $escape = '\\', $eol = '\n') {
    if (is_string($input) && !empty($input)) {
        $output = array();
        $tmp = preg_split("/".$eol."/", $input);
        if (is_array($tmp) && !empty($tmp)) {
            // each() was removed in PHP 8; foreach is the equivalent here
            foreach ($tmp as $line_num => $line) {
                if (preg_match("/".$escape.$enclosure."/", $line)) {
                    while ($strlen = strlen($line)) {
                        $pos_delimiter = strpos($line, $delimiter);
                        $pos_enclosure_start = strpos($line, $enclosure);
                        if (
                            is_int($pos_delimiter) && is_int($pos_enclosure_start)
                            && ($pos_enclosure_start < $pos_delimiter)
                        ) {
                            $enclosed_str = substr($line, 1);
                            $pos_enclosure_end = strpos($enclosed_str, $enclosure);
                            $enclosed_str = substr($enclosed_str, 0, $pos_enclosure_end);
                            $output[$line_num][] = $enclosed_str;
                            $offset = $pos_enclosure_end + 3;
                        } else {
                            if (empty($pos_delimiter) && empty($pos_enclosure_start)) {
                                $output[$line_num][] = substr($line, 0);
                                $offset = strlen($line);
                            } else {
                                $output[$line_num][] = substr($line, 0, $pos_delimiter);
                                $offset = (
                                    !empty($pos_enclosure_start)
                                    && ($pos_enclosure_start < $pos_delimiter)
                                )
                                    ? $pos_enclosure_start
                                    : $pos_delimiter + 1;
                            }
                        }
                        $line = substr($line, $offset);
                    }
                } else {
                    $line = preg_split("/".$delimiter."/", $line);
                    /*
                     * Validating against pesky extra line breaks creating false rows.
                     */
                    if (is_array($line) && !empty($line[0])) {
                        $output[$line_num] = $line;
                    }
                }
            }
            return $output;
        } else {
            return false;
        }
    } else {
        return false;
    }
}
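For completeness: on PHP 5.3 and later none of this is needed, because str_getcsv() ships with the language. A quick sketch with made-up data:

```php
<?php
// Built-in since PHP 5.3: split lines yourself, then let str_getcsv
// handle delimiters and enclosures on each one.
$csv  = "name,qty\n\"widget, large\",3";
$rows = array_map('str_getcsv', explode("\n", $csv));
// $rows is [["name", "qty"], ["widget, large", "3"]]
```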
I am using this code to scrape amazon.com:
$ch = curl_init(); // create a new cURL resource
$url = 'http://www.amazon.com/s/ref=sr_pg_1?rh=n%3A133140011%2Ck%3Aenglish+literature&sort=paidsalesrank&keywords=english+literature&ie=UTF8&qid=1327432144';
// set URL and other appropriate options
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$data = curl_exec($ch); // grab URL and pass it to the browser
//echo $data; ok till now
curl_close($ch);
$dom = new DOMDocument();
@$dom->loadHTML($data); // avoid warnings
$xpath = new DOMXPath($dom);
// getting titles
$book_t = $xpath->query('//div[@class="title"]/a[@class="title"]');
foreach ($book_t as $tag) {
    print_r(trim($tag->nodeValue));
    echo '<br/>';
}
$author = $xpath->query('//div[@class="title"]/span[@class="ptBrand"]');
foreach ($author as $tag) {
    echo '<br/>';
    //print_r($tag->nodeValue);
    $s = $tag->nodeValue;
    print_r(str_replace('by ', '', $s));
    echo '<br/>';
}
Up to this step I am okay; now I want to save this to a CSV file, but I don't know how to do it. Can somebody please help me? How should I code it? If you provide the code, my learning will be better.
Also, does this code need improvement? If yes, how?
There's a function fputcsv(); you can use it like this:
<?php
$list = array(
    array('aaa', 'bbb', 'ccc', 'dddd'),
    array('123', '456', '789'),
    array('"aaa"', '"bbb"')
);
$fp = fopen('file.csv', 'w'); // your csv file
foreach ($list as $fields) {
    fputcsv($fp, $fields);
}
fclose($fp);
?>
Each array in $list becomes one line in the CSV file.
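Applied to the scraper above, the two result sets can be written out one row per book. The $titles/$authors sample data below is made up; in the real script the values would come from the two XPath loops:

```php
<?php
// Pair each scraped title with its author and emit one CSV row per book.
$titles  = array('Hamlet', 'Macbeth');
$authors = array('William Shakespeare', 'William Shakespeare');

$fp = fopen('books.csv', 'w');
foreach ($titles as $i => $title) {
    $author = isset($authors[$i]) ? $authors[$i] : ''; // tolerate length mismatch
    fputcsv($fp, array($title, $author));
}
fclose($fp);
```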