Download Campaign Performance Reports using Bing Ads in PHP

I have been stuck on this for weeks. Please let me know if anyone can help me out.
I tried the samples they provide. I'm trying to download only campaign performance reports, and I am able to download a ZIP file which has a CSV file in it. Here is another direct example I followed for keywords, and I did the same for campaign performance. It gives me a link to download the report. When I try the URL manually I can download the file, but I cannot download it through my code.
function DownloadFile($reportDownloadUrl, $downloadPath) {
    if (!$reader = fopen($reportDownloadUrl, 'rb')) {
        throw new Exception("Failed to open URL " . $reportDownloadUrl . ".");
    }
    if (!$writer = fopen($downloadPath, 'wb')) {
        fclose($reader);
        throw new Exception("Failed to create ZIP file " . $downloadPath . ".");
    }
    $bufferSize = 100 * 1024;
    while (!feof($reader)) {
        if (false === ($buffer = fread($reader, $bufferSize))) {
            fclose($reader);
            fclose($writer);
            throw new Exception("Read operation from URL failed.");
        }
        if (fwrite($writer, $buffer) === false) {
            fclose($reader);
            fclose($writer);
            // Note: the original code only constructed this exception without throwing it
            throw new Exception("Write operation to ZIP file failed.");
        }
    }
    fclose($reader);
    fflush($writer);
    fclose($writer);
}
But I couldn't download the file, and I can't move forward from here, so any help, whether downloading the reports in some other form or simple changes to the current method, is greatly appreciated. Thanks in advance.

When I tried this I also had problems with the DownloadFile function, so I replaced it with a different version:
function DownloadFile($reportDownloadUrl, $downloadPath) {
    $url = $reportDownloadUrl;
    // An example of the path format:
    // $path = '/xxx/yyy/reports/keywordperf.zip';
    // using the server path, not a path relative to the file
    $path = $downloadPath;
    $fp = fopen($path, 'w');
    $ch = curl_init($url);
    // The original used CURLOPT_SSLVERSION 3 (SSLv3), which modern servers
    // reject; forcing TLS 1.2 (or omitting the option) works today.
    curl_setopt($ch, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);
    curl_setopt($ch, CURLOPT_FILE, $fp);
    if ($result = curl_exec($ch)) {
        $status = true;
    } else {
        $status = false;
    }
    curl_close($ch);
    fclose($fp);
    return $status; // the original returned the bare constant "status"
}

Related

Best way to check if URL is a video file in PHP?

I'm trying to find a way to be (almost) sure that a URL is a real video file.
I have of course checked get_headers to see if the URL exists and to inspect the Content-Type header:
function get_http_response_code($theURL)
{
    $headers = get_headers($theURL);
    return substr($headers[0], 9, 3);
}

function isURLExists($url)
{
    if (intval(get_http_response_code($url)) < 400)
    {
        return true;
    }
    return false;
}

function isFileVideo($url)
{
    $headers = get_headers($url);
    $video_exist = implode(',', $headers);
    if (strpos($video_exist, 'video') !== false)
    {
        return true;
    }
    else
    {
        return false;
    }
}
Maybe I'm answering my own question, but perhaps there is a more robust solution (mainly for the video type).
I don't know if it's possible, but could I download just the file's metadata first and run the test on that?
Thanks a lot!
Of course you can't be sure, but the best practice is to check the first bytes of the file and identify the MIME type from that information.
An example can be found in this Q&A: https://stackoverflow.com/a/8225754/2797243
You can try this code:
<?php
function getUrlMimeType($url) {
    $buffer = file_get_contents($url);
    $finfo = new finfo(FILEINFO_MIME_TYPE);
    return $finfo->buffer($buffer);
}
?>
You need to enable the fileinfo extension in your php.ini (php_fileinfo.dll on Windows).
If you want to download only a portion of the file, use:
$filename = $url;
$portion = 8192; // read up to 8192 bytes
$handle = fopen($filename, "rb");
$contents = fread($handle, $portion);
fclose($handle);
If you want to take a portion from inside the file at $url, use:
$filename = $url;
$from = 10000; // skip the first 10000 bytes
$to = 9999;    // then read up to 9999 bytes
$handle = fopen($filename, "rb");
$skip = fread($handle, $from);
$contents = fread($handle, $to);
fclose($handle);
Then you can check the MIME type of the file.
Thanks.
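As a variant of the partial-read idea above, `file_get_contents` also accepts an offset and a maximum length, so you can hand just the first bytes to finfo without downloading the whole file. A minimal sketch, assuming an 8 KB limit is enough for detection (the function name and limit are my own choices, not from the original answer):

```php
<?php
// Sketch: detect a MIME type from only the first bytes of a resource.
// The 8192-byte cap is an assumption; raise it if detection fails.
function getPartialMimeType($url, $maxBytes = 8192) {
    // The 5th argument of file_get_contents caps how much is read
    $buffer = file_get_contents($url, false, null, 0, $maxBytes);
    if ($buffer === false) {
        return null; // could not read the resource
    }
    $finfo = new finfo(FILEINFO_MIME_TYPE);
    return $finfo->buffer($buffer);
}
```

For a real video URL this should return something like `video/mp4`, which you can then match against `video/` prefixes.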

How to download an image from a URL in PHP?

I am downloading the image from a URL using the file_get_contents function. But in some cases I get an error like this:
"file_get_contents(http://www.aaaaa.com/multimedia/dynamic/02456/RK_2456652g.jpg): failed to open stream: HTTP request failed!".
I only get this error for some of the URLs; other images download fine with this code.
I get the error when I run the code from cron, but when I download the image manually from the same URL it works. Can anyone please help me solve this problem?
My code is
$arr = "http://www.aaaa.com/multimedia/dynamic/02077/saibaba_jpg_2077721g.jpg";
$file = basename($arr);
$date = date('Ymd');
$id = 2225;
if (!file_exists('../media/'.$date)) {
    $path = mkdir('../media/'.$date, 0777, true);
}
if (!file_exists('../media/'.$date.'/'.$id)) {
    $path = mkdir('../media/'.$date.'/'.$id, 0777, true);
}
// Get the file
$content = file_get_contents($arr);
// Store it in the filesystem.
$fp = fopen('../media/'.$date.'/'.$id.'/'.$file, "w");
fwrite($fp, $content);
fclose($fp);
If you execute your function from cron, you must provide the full path to your file, like this:
$arr = "http://www.aaaa.com/multimedia/dynamic/02077/saibaba_jpg_2077721g.jpg";
$file = basename($arr);
$date = date('Ymd');
$id = 2225;
if (!file_exists('/var/www/path_to_Your_folder/media/'.$date)) {
    $path = mkdir('/var/www/path_to_Your_folder/media/'.$date, 0777, true);
}
if (!file_exists('/var/www/path_to_Your_folder/media/'.$date.'/'.$id)) {
    $path = mkdir('/var/www/path_to_Your_folder/media/'.$date.'/'.$id, 0777, true);
}
// Get the file
$content = file_get_contents($arr);
// Store it in the filesystem.
$fp = fopen('/var/www/path_to_Your_folder/media/'.$date.'/'.$id.'/'.$file, "w");
fwrite($fp, $content);
fclose($fp);
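The full-path advice fixes cron's working-directory problem, but "HTTP request failed!" can also mean the remote server is rejecting the request itself, often because no User-Agent header is sent. A hedged sketch using cURL with an explicit User-Agent (the function name, header value, and timeout are my own choices, not from the original question):

```php
<?php
// Sketch: fetch an image with cURL, sending a User-Agent header.
// Some servers refuse requests without one, which can make a script
// fail under cron while a browser download of the same URL works.
function fetchImage($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body as a string
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);           // arbitrary timeout
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; image-fetcher)');
    $body = curl_exec($ch);
    if ($body === false) {
        curl_close($ch); // inspect curl_error($ch) before closing when debugging
        return null;
    }
    curl_close($ch);
    return $body;
}
```

Unlike file_get_contents, a failure here gives you curl_error() output, which makes the cron-only failures much easier to diagnose.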

fopen(): SSL: Connection reset by peer error in PHP

I'm trying to download a report from my Bing Ads account and I'm encountering the following errors:
Warning: fopen(): SSL: Connection reset by peer in xxxx...
Warning: fopen(): Failed to enable crypto in xxx...
function PollGenerateReport($proxy, $reportRequestId)
{
    // Set the request information.
    $request = new PollGenerateReportRequest();
    $request->ReportRequestId = $reportRequestId;
    return $proxy->GetService()->PollGenerateReport($request)->ReportRequestStatus;
}
// Using the URL that the PollGenerateReport operation returned,
// send an HTTP request to get the report and write it to the specified
// ZIP file.
function DownloadFile($reportDownloadUrl, $downloadPath)
{
    if (!$reader = fopen($reportDownloadUrl, 'rb'))
    {
        throw new Exception("Failed to open URL " . $reportDownloadUrl . ".");
    }
    if (!$writer = fopen($downloadPath, 'wb'))
    {
        fclose($reader);
        throw new Exception("Failed to create ZIP file " . $downloadPath . ".");
    }
    $bufferSize = 100 * 1024;
    while (!feof($reader))
    {
        if (false === ($buffer = fread($reader, $bufferSize)))
        {
            fclose($reader);
            fclose($writer);
            throw new Exception("Read operation from URL failed.");
        }
        if (fwrite($writer, $buffer) === false)
        {
            fclose($reader);
            fclose($writer);
            // Note: the original code only constructed this exception without throwing it
            throw new Exception("Write operation to ZIP file failed.");
        }
    }
    fclose($reader);
    fflush($writer);
    fclose($writer);
}
Since I'm a newbie to PHP, I'm asking for any assistance/tips on how to convert the fopen() call (which from my research seems to be the problem here) to cURL. I'm using the Bing API to download the report and running the script on a server.
Thanks.
My first thought is that the URL might be password protected.
If possible, it would be better to export the report and then import it on your server.
Alternatively, see if Bing has documentation on how to access its reports externally; is there an API (Application Programming Interface)?
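Since the asker specifically wants the fopen() call converted to cURL, here is a hedged sketch of such a conversion. The function name, the TLS version constant, and the timeout are my own assumptions; "Connection reset by peer" during the handshake frequently means the client offered a protocol version the server no longer accepts, so pinning TLS 1.2 is worth trying:

```php
<?php
// Sketch: cURL replacement for the stream-based DownloadFile.
// Forcing TLS 1.2 is an assumption; adjust if the endpoint differs.
function DownloadFileCurl($reportDownloadUrl, $downloadPath) {
    $fp = fopen($downloadPath, 'wb');
    if ($fp === false) {
        throw new Exception("Failed to create ZIP file " . $downloadPath . ".");
    }
    $ch = curl_init($reportDownloadUrl);
    curl_setopt($ch, CURLOPT_FILE, $fp);             // stream the body straight to disk
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // report URLs may redirect
    curl_setopt($ch, CURLOPT_TIMEOUT, 120);          // arbitrary timeout for large reports
    curl_setopt($ch, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);
    $ok = curl_exec($ch);
    $err = curl_error($ch);
    curl_close($ch);
    fclose($fp);
    if ($ok === false) {
        throw new Exception("Download failed: " . $err);
    }
}
```

Calling it mirrors the original: `DownloadFileCurl($reportDownloadUrl, $downloadPath);` and the curl_error() text in the exception narrows down whether the failure is DNS, TLS, or HTTP.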

Get file name after remote file grabbing

I'm using a PHP file-grabber script. I put the URL of the remote file in the field, and the file is then uploaded directly to my server. The code looks like this:
<?php
ini_set("memory_limit", "2000M");
ini_set('max_execution_time', "2500");
foreach ($_POST['store'] as $value) {
    if ($value != "") {
        echo("Attempting: ".$value."<br />");
        system("cd files && wget ".$value);
        echo("<b>Success: ".$value."</b><br />");
    }
}
echo("Finished all file uploading.");
?>
After uploading a file I would like to display the direct URL to it, for example:
Finished all file uploading, direct URL:
http://site.com/files/grabbedfile.zip
Could you help me determine the file name of the last uploaded file within this code?
Thanks in advance
You can use wget's log files; just add -o logfilename.
Here is a small function, get_filename( $wget_logfile ):
ini_set("memory_limit", "2000M");
ini_set('max_execution_time', "2500");

function get_filename( $wget_logfile )
{
    $log = explode("\n", file_get_contents( $wget_logfile ));
    foreach ( $log as $line )
    {
        preg_match("/^.*Saving to: .{1}(.*).{1}/", $line, $find);
        if ( count($find) )
            return $find[1];
    }
    return "";
}

$tmplog = tempnam("/tmp", "wgetlog");
$filename = "";
foreach ($_POST['store'] as $value) {
    if ($value != "") {
        echo("Attempting: ".$value."<br />");
        system("cd files && wget -o $tmplog ".$value); // -o logfile
        $filename = get_filename( $tmplog );           // current filename
        unlink( $tmplog );                             // remove logfile
        echo("<b>Success: ".$value."</b><br />");
    }
}
echo("Finished all file uploading.");
echo "Last file: ".$filename;
Instead of using wget like that, you could do it all with cURL, if it is available.
<?php
set_time_limit(0);

$lastDownloadFile = null;
foreach ($_POST['store'] as $value) {
    if ($value !== '' && downloadFile($value)) {
        $lastDownloadFile = $value;
    }
}

if ($lastDownloadFile !== null) {
    // Print out info
    $onlyfilename = pathinfo($lastDownloadFile, PATHINFO_BASENAME);
} else {
    // No file was successfully uploaded
}

function downloadFile($filetodownload) {
    $fp = fopen(pathinfo($filetodownload, PATHINFO_BASENAME), 'w+');
    $ch = curl_init($filetodownload);
    curl_setopt($ch, CURLOPT_TIMEOUT, 50);
    curl_setopt($ch, CURLOPT_FILE, $fp);            // write to the file pointer created earlier
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // in case the server redirects us
    $success = curl_exec($ch);
    // clean up
    curl_close($ch);
    fclose($fp);
    return $success;
}
A word of caution, however: letting people upload whatever they like to your server might not be the best idea. What are you trying to accomplish with this?
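On that note of caution: the wget version interpolates $_POST input straight into a shell command, which allows command injection. A hedged sketch of validating and escaping the URL before any shell use (the helper name and the http/https whitelist are my own choices):

```php
<?php
// Sketch: validate a user-supplied URL and escape it before shell use.
// The http/https whitelist is an assumption; extend it if you need ftp etc.
function safeDownloadCommand($userUrl) {
    // Reject anything that is not a syntactically valid absolute URL
    if (filter_var($userUrl, FILTER_VALIDATE_URL) === false) {
        return null;
    }
    // Reject unexpected schemes (file://, gopher://, ...)
    $scheme = parse_url($userUrl, PHP_URL_SCHEME);
    if (!in_array($scheme, array('http', 'https'), true)) {
        return null;
    }
    // escapeshellarg() wraps the value so shell metacharacters are inert
    return "cd files && wget " . escapeshellarg($userUrl);
}
```

With this in place, `system(safeDownloadCommand($value))` (after a null check) is far safer than concatenating the raw POST value.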

How to check if a list of domain names have a site?

I have a huge list of domain names in the form abcde.com.
What I have to do is check whether each domain serves a page; otherwise I get a "server not found" message.
What code will check this automatically and return something if there is a site? I am familiar with PHP.
Thank you.
Something simple would be:
foreach ($domains as $domain) {
    $html = file_get_contents('http://'.$domain);
    if ($html) {
        // do something with the data
    } else {
        // page not found
    }
}
If you have them in a txt file, with each line containing one domain name, you could do this:
$file_handle = fopen("mydomains.txt", "r");
while (!feof($file_handle)) {
    $domain = fgets($file_handle);
    // use the code above here
}
fclose($file_handle);
You can connect to each domain/hostname using cURL.
Example:
// I'm assuming one domain per line
$h = fopen("domains.txt", "r");
// Check fgets() itself for EOF; testing the preg_replace() result
// against false (as in the original) would never terminate the loop.
while (($line = fgets($h)) !== false) {
    $host = preg_replace("/[\n\r]/", "", $line);
    $ch = curl_init($host);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    if (curl_exec($ch) !== false) {
        // code for domain/host with website
    } else {
        // code for domain/host without website
    }
    curl_close($ch);
}
fclose($h);
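For a huge list, downloading every page body is wasteful. A hedged sketch using a body-less request (CURLOPT_NOBODY) with short timeouts; the function name and the timeout values are my own choices, not from the answers above:

```php
<?php
// Sketch: quick liveness probe for a host using a HEAD-style request.
// Timeouts are arbitrary; tune them for your network and list size.
function hostHasSite($domain) {
    $ch = curl_init("http://" . $domain);
    curl_setopt($ch, CURLOPT_NOBODY, true);         // skip the response body
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects to www. etc.
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);    // give up on dead hosts quickly
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    $ok = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    // Treat any HTTP response below 400 as "a site exists"
    return $ok !== false && $code < 400;
}
```

Skipping the body makes each probe cheap; for very large lists you could go further with curl_multi_init() to run probes in parallel.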
