I am trying to upload pictures to Facebook, so I can use them in some sponsored posts.
The pictures are on my computer, so I need to upload them to Facebook first. I read that I need to upload them using multipart/form-data, but I know nothing about it. If I create this multipart/form-data request, wouldn't I need to create a form and interact with it (click the upload button, choose the picture, and submit)?
Is there a way to do it automatically? How can I build this multipart/form-data request and send it from my PHP script without having to click a submit button, just get the result and use it to create my post? I'd also like to upload more than one picture at a time.
Try this
<?php
$filename = 'image.png';
$link = 'http://domain.com/somepage';
// CURLFile is the modern (PHP 5.5+) way to attach a file; the old
// '@'.$filename prefix is deprecated and removed in PHP 7.
$post = array('file' => new CURLFile($filename));
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $link);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// Disabling SSL verification is insecure; only do this for testing.
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    "User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17",
    "Accept-Language: ru-RU,ru;q=0.9,en;q=0.8",
    "Accept: text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1"));
// Passing an array to CURLOPT_POSTFIELDS makes cURL build the
// multipart/form-data body automatically.
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
$pagetext = curl_exec($ch);
curl_close($ch);
?>
image.png is an example name of an image placed in the root directory.
This script sends the file image.png to the page http://domain.com/somepage in a POST request.
Other POST variables may be added to the $post array. The CURLFile object is what marks the entry as a file upload (older PHP used an '@' prefix on the filename, which is deprecated and removed in PHP 7).
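To cover the multiple-pictures part of the question, a minimal sketch: cURL accepts several CURLFile entries in the same array, one per form field. The field names file1, file2 and caption are illustrative, not from the original; the endpoint you post to defines the names it expects.
$post = array(
    'file1'   => new CURLFile('image1.png'),  // first upload
    'file2'   => new CURLFile('image2.png'),  // second upload
    'caption' => 'My sponsored post',         // an ordinary POST field
);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post);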
Related
I'm trying to copy a CSS file from a link to my server (for example from this link https://ielm.nl/static/version1658321538/frontend/Zitec/ielm/default/css/styles-m.min.css), using copy(). In principle the file is downloaded, but there is only gibberish inside it.
(screenshot: result after copying)
The most interesting thing is that some CSS files download fine, but the one from the link I provided above comes out as gibberish.
Here is the code:
$context = stream_context_create(array(
    'http' => array(
        'header' => array('User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; rv:2.2) Gecko/20110201'),
    ),
));
copy('https://ielm.nl/static/version1658321538/frontend/Zitec/ielm/default/css/styles-m.min.css', $local_path, $context);
Looks like the contents are encoded as gzip. I would recommend doing the following:
file_put_contents($local_path, gzdecode(file_get_contents('https://ielm.nl/static/version1658321538/frontend/Zitec/ielm/default/css/styles-m.min.css')));
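If that's the case, a sketch of an alternative using cURL: setting CURLOPT_ENCODING to an empty string makes cURL advertise the encodings it supports and decompress the response itself, so no manual gzdecode() is needed. This assumes $local_path is defined as in the question.
$ch = curl_init('https://ielm.nl/static/version1658321538/frontend/Zitec/ielm/default/css/styles-m.min.css');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_ENCODING, '');  // send Accept-Encoding and auto-decompress the response
$css = curl_exec($ch);
curl_close($ch);
file_put_contents($local_path, $css);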
I use a program called Nitro PDF to design PDFs; it has an option to create forms with submit buttons that submit to a URL. I tried to write a PHP script that would receive the PDF file and write it to disk, but I can't figure out how to make this happen, because normally you have to know the field name in $_FILES to receive it, like "fileToUpload". Here is what it sends to the server before it starts sending the actual file:
POST /pdf.php HTTP/1.1
Accept: */*
Content-Type: application/pdf
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0;Windows NT 5.1)
Host: 192.168.3.212
Content-Length: 481677
Connection: Keep-Alive
Cache-Control: no-cache
It's not being posted as a form; the PDF is placed directly into the POST body, which you can read via the php://input stream.
<?php
// php://input exposes the raw request body, so the PDF can be
// copied straight to a file on disk.
copy("php://input", "filename.pdf");
I am using fopen to check the existence of an image file (and as a precursor to extracting the image from the external URL).
It works perfectly fine for most images, for example:
http://ecx.images-amazon.com/images/I/51DbiFInDUL.SY300.jpg
But it does not work for images from a website like Victoria's Secret, for example:
http://dm.victoriassecret.com/product/428x571/V360249.jpg
Is this a permissions problem? And if so, is there any workaround?
$url = "http://dm.victoriassecret.com/product/428x571/V360249.jpg";
$handle = #fopen($url,'r');
if($handle !== false){
return true;
}
For a successful link, $handle is a stream resource (e.g. "Resource id #11"), but for an unsuccessful link like the Victoria's Secret one, fopen returns false and nothing comes back.
Additionally, exif_imagetype is not returning anything for these images (we have the exif extension installed).
Is there any workaround for this? We are building a bookmarklet that allows users to extract pictures from sites. We noticed that other bookmarklets (e.g. Pinterest's) are able to get around this and fetch the pictures from Victoria's Secret.
The data isn't returned because of hotlink protection defined in the site's .htaccess file. You need to fetch the data the way a browser client would. Using cURL with a browser User-Agent header, you can read the contents and save them to a file; that solved the problem for me.
Note: check the file type the remote server reports in its headers; for example a GIF is served as image/gif, so adjust accordingly for other types such as PNG.
Example of a solution that works:
error_reporting(E_ALL);
ini_set('display_errors', '1');

$url = "http://dm.victoriassecret.com/product/428x571/V360249.jpg";

function getimg($url) {
    $headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg';
    $headers[] = 'Connection: Keep-Alive';
    // A browser-like User-Agent is what gets past the hotlink protection.
    $user_agent = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';
    $process = curl_init($url);
    curl_setopt($process, CURLOPT_HTTPHEADER, $headers);
    curl_setopt($process, CURLOPT_HEADER, 0);
    curl_setopt($process, CURLOPT_USERAGENT, $user_agent);
    curl_setopt($process, CURLOPT_TIMEOUT, 30);
    curl_setopt($process, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($process, CURLOPT_FOLLOWLOCATION, 1);
    $return = curl_exec($process);
    curl_close($process);
    return $return;
}

$imgurl = $url;
$imagename = basename($imgurl);
// Only download if we don't already have the file.
// (The original used `continue` here, which is an error outside a loop.)
if (!file_exists($imagename)) {
    $image = getimg($imgurl);
    file_put_contents($imagename, $image);
}
Note: if you are on a Linux filesystem, make sure the target folder is writable (chmod), otherwise the file will not be saved.
Since you mention EXIF data: to check whether the image downloaded with cURL is identical to the original, I compared md5 checksums of the original image on the Victoria's Secret server and the cURL download. The results are the same, identical, so you can fetch and analyze the downloaded data for later use, and delete it when you no longer need it.
On Linux you can test whether two files are identical with md5sum:
md5sum V360249.jpg V360249_original.jpg
893a47cbf0b4fbe4d1e49d9d4480b31d V360249.jpg
893a47cbf0b4fbe4d1e49d9d4480b31d V360249_original.jpg
The results are the same, so you can be sure the exif_imagetype information is correct and identical.
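If you want to automate the same check from PHP, a small sketch using md5_file(); the filenames match the ones above:
// Compare the downloaded copy against a reference copy of the image.
if (md5_file('V360249.jpg') === md5_file('V360249_original.jpg')) {
    echo "Files are identical\n";
}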
By removing the @ symbol, I was able to get a more meaningful error:
Warning: fopen(http://dm.victoriassecret.com/product/428x571/V360249.jpg) [function.fopen]: failed to open stream: HTTP request failed! in [removedSomedatahere]/test.php on line 5
It fails similarly in curl, wget, and fopen with no other options set. I would hypothesize that this has something to do with cookies or some other setting not being sent, but I don't have a direct answer for you. Hopefully that helps a little.
[Edited - solution based on comments]
So it appears that using cURL may be a better option in this case if you also set the user agent. The site was blocking requests based on the user agent, so the solution is to present a commonly used browser as the agent.
Here is an example of setting the user agent:
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
See the cURL documentation for CURLOPT_USERAGENT to understand how to set the user agent in cURL.
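If you'd rather keep using fopen/file_get_contents, a minimal sketch: the user agent can also be set through a stream context, which makes the plain stream functions send it. The UA string is just an example:
// Send a browser-like user agent with the plain stream functions.
$context = stream_context_create(array(
    'http' => array(
        'user_agent' => 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13',
    ),
));
$data = file_get_contents('http://dm.victoriassecret.com/product/428x571/V360249.jpg', false, $context);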
Whenever I use cURL (PHP) to download a page, it seems to download everything on the page, like images, CSS files, or JavaScript files, but sometimes I don't want these. Can I control which resources are downloaded through cURL? I have gone through the manual but haven't found an option for this. Please don't suggest getting the whole page and then using some regex magic, because that would still download the page and increase load time.
Here is a demo where I download a page from mozilla.com:
<?php
$url = "http://www.mozilla.com/en-US/firefox/new/";
$userAgent = "Mozilla/5.0 (Windows NT 5.1; rv:2.0) Gecko/20100101 Firefox/4.0";
//$accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
$encoding = "gzip, deflate";
// CURLOPT_HTTPHEADER expects complete "Name: value" strings;
// array keys are ignored, so bare values would send invalid headers.
$header[] = "Accept-Language: en-us,en;q=0.5";
$header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
$header[] = "Connection: keep-alive";
$header[] = "Keep-Alive: 115";
$ch = curl_init();
curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_ENCODING, $encoding);
curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_AUTOREFERER, 1);
$content = curl_exec($ch);
curl_close($ch);
echo $content;
?>
When I echo the content it shows the images too; I saw in Firebug's network tab that images and external JS files are being downloaded.
PHP's cURL only fetches what you tell it to. It doesn't parse the HTML to look for <script>/<link>/<img> tags, and it doesn't fetch them automatically.
What Firebug is showing you is your own browser at work: when you echo the HTML, the browser renders it and requests the images and scripts itself. If your script is downloading those resources, it's your code telling cURL to do so, and it's up to you to decide what to fetch and what not to.
You can avoid those requests by escaping the HTML so the browser displays it as text instead of rendering it:
echo htmlentities($content);
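Alternatively, a minimal sketch: write the fetched HTML to disk instead of echoing it, so no browser ever renders the markup and nothing extra is requested. The filename is illustrative:
// Save the fetched HTML without rendering it; no further requests happen.
file_put_contents('mozilla_page.html', $content);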
I'm trying to retrieve information from an online XML file and it takes too long to get that information; most of the time I even get a timeout error.
The strange part is that when I open the link directly in the browser it is fast.
header("Content-type: text/plain");
$xmlobj = simplexml_load_file("http://apple.accuweather.com/adcbin/apple/Apple_Weather_Data.asp?zipcode=EUR;PT;PO019;REGUA");
print_r($xmlobj);
That's because they're blocking depending on what browser (user agent) you're using.
Try this:
$curl = curl_init();
curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.6) Gecko/2009012700 SUSE/3.0.6-1.4 Firefox/3.0.6');
curl_setopt($curl, CURLOPT_URL, 'http://apple.accuweather.com/adcbin/apple/Apple_Weather_Data.asp?zipcode=EUR;PT;PO019;REGUA');
// Without CURLOPT_RETURNTRANSFER, curl_exec() would echo the response
// and return true instead of the XML string.
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
$xmlstr = curl_exec($curl);
curl_close($curl);
header("Content-type: text/plain");
$xmlobj = simplexml_load_string($xmlstr);
print_r($xmlobj);
BTW, the file says "Redistribution Prohibited", so you might want to look for some royalty-free source of weather data.
The above code works perfectly fine for me. Try reading another (small) XML file from a different location; it looks like a firewall issue to me!
Once you've sent the faux user agent headers with cURL, as vartec pointed out, it might be a good idea to cache the XML on your server; a sketch follows below. For weather, maybe an hour would be a good interval (play with this; if the feed updates more frequently, you may want to try 15 minutes).
Once it is saved locally on your server, reading and parsing the XML will be much quicker.
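A minimal file-based cache sketch, assuming a writable path for the cache file; the filename and the one-hour TTL are illustrative:
$cacheFile = 'weather_cache.xml';
$ttl = 3600; // re-fetch at most once an hour

if (!file_exists($cacheFile) || time() - filemtime($cacheFile) > $ttl) {
    // Cache is missing or stale: fetch a fresh copy with cURL.
    $curl = curl_init('http://apple.accuweather.com/adcbin/apple/Apple_Weather_Data.asp?zipcode=EUR;PT;PO019;REGUA');
    curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.6) Gecko/2009012700 SUSE/3.0.6-1.4 Firefox/3.0.6');
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    $xmlstr = curl_exec($curl);
    curl_close($curl);
    if ($xmlstr !== false) {
        file_put_contents($cacheFile, $xmlstr);
    }
}
$xmlobj = simplexml_load_file($cacheFile);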
Keep in mind too that the feed does state "Redistribution Prohibited". IIRC there are a few free online weather RSS feeds, so maybe you should try another one.