I'm trying to copy a CSS file from a URL to my server (for example, from https://ielm.nl/static/version1658321538/frontend/Zitec/ielm/default/css/styles-m.min.css) using copy(). The file does get downloaded, but it contains only unreadable garbage...
[screenshot: result after copying]
The most interesting thing is that some CSS files download fine, but the one at the link above comes out garbled.
Here is the code:
$context = stream_context_create(array(
    'http' => array(
        'header' => array('User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; rv:2.2) Gecko/20110201'),
    ),
));
copy('https://ielm.nl/static/version1658321538/frontend/Zitec/ielm/default/css/styles-m.min.css', $local_path, $context);
It looks like the response body is gzip-compressed. I would recommend decoding it explicitly:
file_put_contents($local_path, gzdecode(file_get_contents('https://ielm.nl/static/version1658321538/frontend/Zitec/ielm/default/css/styles-m.min.css')));
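If you want something that handles both gzipped and plain responses, you can check the Content-Encoding response header before decoding. This is a sketch, not the original answer's code; the function name is mine, and $url / $local_path stand in for the values from the question:

```php
<?php
// Download a file and gunzip it only if the server actually sent gzip.
function downloadMaybeGzipped(string $url, string $local_path): void
{
    $body = file_get_contents($url);

    // $http_response_header is populated in this scope by the HTTP wrapper
    // after the file_get_contents() call (it stays unset for file:// URLs).
    $gzipped = false;
    foreach ($http_response_header ?? [] as $header) {
        if (stripos($header, 'Content-Encoding:') === 0
            && stripos($header, 'gzip') !== false) {
            $gzipped = true;
        }
    }

    file_put_contents($local_path, $gzipped ? gzdecode($body) : $body);
}
```

For a response that is not compressed, the body is written through unchanged; for a gzipped one, gzdecode() unpacks it first.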
I'm trying to upload a file from a '<form enctype="multipart/form-data"><input type="file">...' to AWS via a presigned URL using PHP cURL.
While the file appears to be uploaded successfully, when I download the recently uploaded file, opening it fails. I get either "this file is corrupted", "it looks like we don't support this file format", or "we can't open this file", depending on the file type. I get no response from AWS via curl_exec or curl_error on the upload.
I basically copied the PHP code from Postman, since Postman uploads the file successfully when the file is attached as "binary". My code attaches the file via CURLOPT_POSTFIELDS; could this be the problem? Every other example I've seen attaches the file via curl_file_create and CURLOPT_POSTFIELDS, so that's what I'm using.
The only instructions I have for uploading the file are:
curl -X PUT \
"https://PRESIGNED_PUT_URL_FROM_RESPONSE" \
--upload-file ~/Documents/Files/the_file.pdf
I've tried to change the content-type in the header to the actual uploaded file type, but neither works. I've tried both CURLFile and curl_file_create. The files after uploading to my test server and my website server are all still valid prior to upload to AWS.
$file_name = $post_files[$FILES_file]["name"];
$content_type = $post_files[$FILES_file]["type"];
$TheFileSize = $post_files[$FILES_file]["size"];
$tmp_name = $post_files[$FILES_file]["tmp_name"];

$curl = curl_init();
$cFile = curl_file_create($tmp_name, $content_type, $file_name);
$payload = array('upload-file' => $cFile);
$curlOptions = array(
    CURLOPT_URL => $TheAWSPresignedURL,
    CURLOPT_RETURNTRANSFER => 1,
    CURLOPT_CUSTOMREQUEST => "PUT",
    CURLOPT_ENCODING => "",
    CURLOPT_POST => 1,
    CURLOPT_MAXREDIRS => 10,
    CURLOPT_TIMEOUT => 300,
    CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
    CURLOPT_INFILESIZE => $TheFileSize,
    CURLOPT_HTTPHEADER => array(
        "Accept: */*",
        "Accept-Encoding: gzip, deflate",
        "Cache-Control: no-cache",
        "Connection: keep-alive",
        "Content-Length: ".$TheFileSize,
        "Content-Type: multipart/form-data",
        "Host: s3.amazonaws.com"
    ),
);
curl_setopt_array($curl, $curlOptions);
curl_setopt($curl, CURLOPT_POSTFIELDS, $payload);
$response = curl_exec($curl);
I'm looking for the file to be uploaded successfully to AWS via a presigned URL, downloaded successfully, and opened successfully.
[UPDATE]
Attempted to upload text files instead of images and PDFs, which were the main file types we were developing for. Downloading those files ended up being "successful", in that I was able to open them; however, text had been added to the beginning of the file.
If uploaded as Content-Type: multipart/form-data, the file was still downloadable, but when I opened it, this text had been added to the beginning of the file:
--------------------------9cd644e15677104b
Content-Disposition: form-data; name="upload-file"; filename="SimpleTextFile.txt"
Content-Type: text/plain
If uploaded as Content-Type: $content_type, the download link opened the file in a new browser window, with this text added to the file:
--------------------------3dca3db029b41ae2
Content-Disposition: attachment; name="upload-file"; filename="SimpleTextFile2.txt"
Content-Type: text/plain
Replacing $payload = array('upload-file' => $cFile); with $payload = file_get_contents($tmp_name); solved my issue. It looks like I don't need curl_file_create, or the array with the upload-file key, at all.
Also, https://stackoverflow.com/a/21888136 mentioned that loading the content directly should remove the header information, and it did.
Thanks all for the insight; you pointed me in the right direction!
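For reference, the working approach from the update boils down to a raw-body PUT with no multipart wrapping. The sketch below is my reduction of that fix, not the asker's final code; the helper names are mine, and $TheAWSPresignedURL, $tmp_name and $content_type mirror the question's variables:

```php
<?php
// Headers for a raw PUT: the file's real content type and length,
// no multipart/form-data boundary.
function buildPutHeaders(string $contentType, int $length): array
{
    return [
        'Content-Type: ' . $contentType,
        'Content-Length: ' . $length,
    ];
}

// Sketch of the fixed upload: send the raw bytes, not a CURLFile.
function putToPresignedUrl(string $url, string $path, string $contentType): int
{
    $body = file_get_contents($path);
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_CUSTOMREQUEST  => 'PUT',
        CURLOPT_POSTFIELDS     => $body,  // string payload = raw request body
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => buildPutHeaders($contentType, strlen($body)),
    ]);
    curl_exec($ch);
    $status = (int) curl_getinfo($ch, CURLINFO_RESPONSE_CODE);
    curl_close($ch);
    return $status;
}
```

When CURLOPT_POSTFIELDS receives a string rather than an array, cURL sends it verbatim as the request body, which is exactly what a presigned S3 PUT expects.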
I am trying to upload pictures to Facebook so I can use them in some sponsored posts.
The pictures are on my computer, so I need to upload them to Facebook first. I read that I need to upload them using multipart/form-data, but I know nothing about it. If I create this multipart/form-data, wouldn't I need to create a form and interact with it (click the upload button, choose the picture, and submit)?
Is there a way to do it automatically? How can I create this multipart/form-data and use it inside my PHP without having to click the submit button, just getting the result and using it to create my post, and also uploading more than one picture at a time?
Try this:
<?php
$filename = 'image.png';
$link = 'http://domain.com/somepage';
// The leading '@' tells (old) cURL to upload the file at that path.
$post = array('file' => '@' . $filename);
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $link);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    "User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17",
    "Accept-Language: ru-RU,ru;q=0.9,en;q=0.8",
    "Accept: text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1"));
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
$pagetext = curl_exec($ch);
curl_close($ch);
?>
image.png is an example image file name, placed in the root directory.
This script sends the file image.png to the page http://domain.com/somepage via a POST request.
Other POST variables may be added to the $post array. The leading '@' is what tells cURL to send the file (this syntax is deprecated as of PHP 5.5; on modern versions use CURLFile instead).
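On PHP 5.5+ the '@' upload syntax above is deprecated (and disabled by default since 5.6); the supported way is the CURLFile class. A sketch with the same placeholder endpoint; the helper names are mine:

```php
<?php
// Modern replacement for the '@' upload syntax: CURLFile (PHP 5.5+).
// buildUploadPost() and uploadImage() are made-up helper names;
// http://domain.com/somepage is the same placeholder endpoint as above.
function buildUploadPost($path, $mime, $postName)
{
    // An array containing a CURLFile makes cURL build multipart/form-data.
    return array('file' => new CURLFile($path, $mime, $postName));
}

function uploadImage($link, $path)
{
    $ch = curl_init($link);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS,
        buildUploadPost($path, 'image/png', basename($path)));
    $response = curl_exec($ch);
    curl_close($ch);
    return $response;
}
```

To upload several pictures in one request, add more CURLFile entries to the array under different keys.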
I am using fopen to check the existence of an image file (and as a precursor for extracting the image from the external url).
It is working perfectly fine for most images, for example,
http://ecx.images-amazon.com/images/I/51DbiFInDUL.SY300.jpg
But it is not working for images from a website like Victoria's Secret, for example:
http://dm.victoriassecret.com/product/428x571/V360249.jpg
Is this a permissions problem? And if so, is there any workaround?
$url = "http://dm.victoriassecret.com/product/428x571/V360249.jpg";
$handle = @fopen($url, 'r');
if ($handle !== false) {
    return true;
}
For a working link, $handle returns "Resource ID #11", but for a failing link like the Victoria's Secret one, $handle returns nothing.
Additionally, exif_imagetype is not returning anything for the images (we have the exif extension installed).
Is there any workaround for this? We are building a bookmarklet that allows users to extract pictures from sites. We noticed that other bookmarklets (e.g. Pinterest's) are able to get around this and retrieve the pictures from Victoria's Secret.
The server doesn't return the data because of hotlink protection defined in its .htaccess file. You need to fetch the data the way a regular client would. Using cURL with a browser User-Agent header set, I was able to read the contents and save them to a file.
Note the file type the remote server reports in its headers: for a GIF, for example, it sends image/gif, and you can substitute another type such as PNG.
Here is an example of a solution that works:
error_reporting(E_ALL);
ini_set('display_errors', '1');

$url = "http://dm.victoriassecret.com/product/428x571/V360249.jpg";

function getimg($url) {
    $headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg';
    $headers[] = 'Connection: Keep-Alive';
    $headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
    $user_agent = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';
    $process = curl_init($url);
    curl_setopt($process, CURLOPT_HTTPHEADER, $headers);
    curl_setopt($process, CURLOPT_HEADER, 0);
    curl_setopt($process, CURLOPT_USERAGENT, $user_agent);
    curl_setopt($process, CURLOPT_TIMEOUT, 30);
    curl_setopt($process, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($process, CURLOPT_FOLLOWLOCATION, 1);
    $return = curl_exec($process);
    curl_close($process);
    return $return;
}

$imgurl = $url;
$imagename = basename($imgurl);
if (file_exists($imagename)) {
    exit; // already downloaded
}
$image = getimg($imgurl);
file_put_contents($imagename, $image);
Note: if you are on a Linux filesystem, make sure the target folder is writable (check its permissions with chmod), otherwise the file will not be saved.
Since you mention EXIF data: to confirm that the image downloaded with cURL is identical to the original, I compared md5 checksums of the original image on the Victoria's Secret server and the cURL download. The results are identical, so you can safely grab and analyze the downloaded data, and delete it once you no longer need it.
On Linux you can test whether two files are identical by comparing their md5sum output:
md5sum V360249.jpg V360249_original.jpg
893a47cbf0b4fbe4d1e49d9d4480b31d V360249.jpg
893a47cbf0b4fbe4d1e49d9d4480b31d V360249_original.jpg
The results are the same, so you can be sure the exif_imagetype information is correct and identical.
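The same check can be done from PHP directly with md5_file(), which is handy if you want to verify the download inside the script itself. The helper name is mine; the filenames are the ones from the md5sum example:

```php
<?php
// Compare two files by checksum: the PHP equivalent of the md5sum check.
function filesIdentical($a, $b)
{
    return md5_file($a) === md5_file($b);
}
```

For example, filesIdentical('V360249.jpg', 'V360249_original.jpg') returns true exactly when the two files contain the same bytes.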
By removing the @ error-suppression operator, I was able to get a more meaningful error:
Warning: fopen(http://dm.victoriassecret.com/product/428x571/V360249.jpg) [function.fopen]: failed to open stream: HTTP request failed! in [removedSomedatahere]/test.php on line 5
It behaves the same in curl, wget, and fopen with no other options set. I would hypothesize that this has something to do with cookies or some other setting not being sent, but I don't have a direct answer for you. Hopefully that helps a little.
[Edited: solution based on comments]
So it appears that cURL may be the better option in this case, provided you also set the user agent. The site was blocking based on the user agent, so the solution is to present a commonly used browser as the agent.
Here is an example of setting the user agent:
curl_setopt($ch,CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
Please see this link to understand how to set the user agent in curl.
I have a PHP function that calls cURL to retrieve an image. When I write the result to a file and view it in the browser, the image looks great. When I "echo" the cURL result as the return value of my PHP script, the browser shows the broken-image icon (see an example of that icon: http://www.artifacting.com/blog/wp-content/uploads/2007/01/error_icon.gif).
$ch = curl_init();
$options = array(
    CURLOPT_CONNECTTIMEOUT => 120,
    CURLOPT_TIMEOUT => 120,
    CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
    CURLOPT_FRESH_CONNECT => 1,
    CURLOPT_HEADER => 0,
    CURLOPT_USERAGENT => "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)",
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_HTTPPROXYTUNNEL => 1,
    CURLOPT_POST => 1,
    CURLOPT_RETURNTRANSFER => 1,
    CURLOPT_BINARYTRANSFER => 1,
);
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt_array($ch, $options);
curl_setopt($ch, CURLOPT_POSTFIELDS, $param_list);

// The HTTP response code is 200, and the body length is 50 KB.
$body = curl_exec($ch);

// This produces a file containing an image that looks good when viewed in a browser.
$bodyFile = fopen("img.jpg", "w");
fprintf($bodyFile, "%s", $body . "\n");
fclose($bodyFile);

// This does not render in the browser. Instead I see the broken-image icon.
$contentType = "image/jpeg";
header('Content-type: ' . $contentType);
echo $body;
Any ideas? Help!
The answer depends on what you mean by "broken". If the top half of the image appears but the bottom half doesn't, then either you have a bad version in your cache from a dropped packet (empty the cache, refresh, try again), or the script is being cut off prematurely from using too many resources or running too long. If emptying the cache doesn't resolve the issue, check your php.ini settings and see if increasing the script's max execution time or memory limit resolves it.
If the image is a bunch of meaningless ASCII, you echoed something or sent a header before this point in your code. The most common unseen cause is a single empty line before the <?php at the top of the page. Make sure there isn't a single byte (even non-printable!) before the <?php if this is the case.
If the image is definitely an image file, but it's static, a gray box, random colors, etc., then this is a problem with the content type. Trying to parse a JPG image as a PNG will sometimes yield a gray box or other random "failure" images.
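One way to avoid the wrong-content-type case is to detect the actual image type from the bytes before sending the header. This sketch is my addition; imageMimeFromBytes() is a made-up helper name, and it relies on the standard fileinfo extension:

```php
<?php
// Derive the Content-Type from the actual image bytes instead of guessing.
function imageMimeFromBytes($bytes)
{
    $finfo = new finfo(FILEINFO_MIME_TYPE);
    $mime  = $finfo->buffer($bytes);
    return (is_string($mime) && strpos($mime, 'image/') === 0) ? $mime : null;
}

// Usage sketch ($body being the image data fetched with cURL):
// header('Content-Type: ' . (imageMimeFromBytes($body) ?: 'application/octet-stream'));
// echo $body;
```

This also guards against serving non-image data (an HTML error page, for instance) with an image content type.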
Please make sure you set the right cURL params.
This example works fine for me:
<?php
$ch = curl_init ("http://www.google.com/images/logos/ps_logo2.png");
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_BINARYTRANSFER,1);
$image=curl_exec($ch);
curl_close ($ch);
header("Content-Type: image/png");
echo $image;
?>
It turned out that I had another PHP file that was writing out an extra newline character: there was a newline at the end of that file, after the "?>" line.
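A defensive pattern against this class of bug (my suggestion, not part of the original answer) is to buffer and discard any stray output before emitting the binary body:

```php
<?php
// Any whitespace leaked by included files (e.g. a newline after '?>')
// ends up in the output buffer instead of corrupting the image.
// The echo below simulates such a leak from an include.
ob_start();
echo "\n";                // stands in for whitespace leaked by an include
$stray = ob_get_clean();  // captured here and thrown away

// Now it is safe to send the header and the raw body:
// header('Content-Type: image/jpeg');
// echo $body;
```

This is also one reason PHP style guides recommend omitting the closing "?>" in pure-PHP files: without it, trailing newlines cannot leak into the output.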
I'm trying to retrieve information from an online XML file, and it takes too long to get that information. Most of the time it even hits a timeout error.
The strange part is that when I open the link directly in the browser, it is fast.
$xmlobj = simplexml_load_file("http://apple.accuweather.com/adcbin/apple/Apple_Weather_Data.asp?zipcode=EUR;PT;PO019;REGUA");
header("Content-type: text/plain");
print_r($xmlobj);
That's because they're blocking requests depending on which browser (User-Agent) you appear to be using.
Try this:
$curl = curl_init();
curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.6) Gecko/2009012700 SUSE/3.0.6-1.4 Firefox/3.0.6');
curl_setopt($curl, CURLOPT_URL, 'http://apple.accuweather.com/adcbin/apple/Apple_Weather_Data.asp?zipcode=EUR;PT;PO019;REGUA');
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1); // return the body instead of printing it
$xmlstr = curl_exec($curl);
$xmlobj = simplexml_load_string($xmlstr);
header("Content-type: text/plain");
print_r($xmlobj);
By the way, the file says "Redistribution Prohibited", so you might want to look for a royalty-free source of weather data.
The above code works perfectly fine for me. Try reading another, smaller XML file from a different location.
It looks like a firewall issue to me!
Once you've sent the faux user-agent header with cURL, as vartec pointed out, it might be a good idea to cache the XML on your server. For weather, an hour is probably a good interval (experiment with this; if the feed updates more frequently, try 15 minutes).
Once it is saved locally on your server, reading and parsing the XML will be much quicker.
Keep in mind, too, that the feed does state "Redistribution Prohibited". IIRC there are a few free online weather RSS feeds, so maybe you should try another one.
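The caching advice above can be sketched as a small helper. This is my sketch, not code from the answers: the cache path and one-hour TTL are assumptions, and the fetcher is injected as a callable so the freshness logic works with any source:

```php
<?php
// Return cached content if it is fresh; otherwise fetch and rewrite the cache.
function cachedFetch(string $cacheFile, int $ttlSeconds, callable $fetch): string
{
    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttlSeconds) {
        return file_get_contents($cacheFile); // serve the local copy
    }
    $fresh = $fetch();                        // slow remote request
    file_put_contents($cacheFile, $fresh);    // refresh the cache
    return $fresh;
}
```

Usage might look like: $xml = cachedFetch('/tmp/weather.xml', 3600, fn() => file_get_contents($feedUrl)); so the remote server is hit at most once per hour.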