Why am I sometimes getting this error?
**Bad Request**
Your browser sent a request that this server could not understand.
Apache Server at control.digitalcoding.com Port 80
When
$UserAgent = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11";
everything works fine, but not with
Opera/7.52 (Windows NT 5.1; U) [en]
Mozilla/5.0 (Windows; U; Windows NT 5.1; rv:1.7.3) Gecko/20041001 Firefox/0.10.1
Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1
for example. What is the problem?
HtmlReciever.php
<?php
if (empty($_GET["Link"])) {
    echo "empty";
    die;
}

$LinkToFetch = urldecode($_GET["Link"]);
$UserAgent = urldecode($_GET["UserAgent"]);

function iscurlinstalled()
{
    return in_array('curl', get_loaded_extensions());
}

// If curl is installed
if (iscurlinstalled()) {
    $ch = curl_init($LinkToFetch);
    curl_setopt($ch, CURLOPT_USERAGENT, $UserAgent);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
    $HtmlCode = curl_exec($ch);
    curl_close($ch);
} else {
    $HtmlCode = file_get_contents($LinkToFetch);
}

echo $HtmlCode;
?>
I should mention that I'm running RecieverHtml.php from another .php file via GET, like this:
http://127.0.0.1/reciever/RecieverHtml.php?Link=http%3A%2F%2Fwww.digitalcoding.com%2Ftools%2Fdetect-browser-settings.html&UserAgent=Mozilla%2F5.0+%28Windows+NT+6.1%3B+rv%3A10.0.1%29+Gecko%2F20100101+Firefox%2F10.0.1%0D%0A
This depends on the server your request is sent to. If the server checks the user agent and allows only requests that match a limited/incomplete/outdated list of common browser user agents, the server might return a generic 400 status code.
If you don't have control over the server and want your script to work, use the user agent that works and forget about the others. The user agent you provide with your request is "wrong" anyway, as it is not Chrome doing the actual request but your server running your PHP script.
EDIT:
You can also pass the user agent of the browser that requests your PHP script by using the following code:
curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
Just keep in mind that the value might be empty or exotic (like Lynx/2.8.8dev.3 libwww-FM/2.14 SSL-MM/1.4.1) and be rejected by the server.
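One more detail worth checking: the example URL above ends the UserAgent parameter with %0D%0A, an encoded CRLF. A user agent with a stray newline in it can produce a malformed request header, which Apache answers with exactly this kind of generic 400 Bad Request. Below is a minimal hardened sketch of the receiving script (same Link/UserAgent parameters as in the question; note that PHP already URL-decodes $_GET values, so the extra urldecode() calls are unnecessary):

<?php
// Hardened sketch of the receiver script from the question, not a drop-in replacement.
$LinkToFetch = isset($_GET["Link"]) ? $_GET["Link"] : "";                 // $_GET is already URL-decoded
$UserAgent   = isset($_GET["UserAgent"]) ? trim($_GET["UserAgent"]) : "";

if ($LinkToFetch === "") {
    die("empty");
}

// Fall back to the user agent that is known to work when the parameter is empty.
if ($UserAgent === "") {
    $UserAgent = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11";
}

$ch = curl_init($LinkToFetch);
curl_setopt($ch, CURLOPT_USERAGENT, $UserAgent); // trim() above keeps stray CR/LF out of the header
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
echo curl_exec($ch);
curl_close($ch);
?>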
I am trying to get an array from the AnimeCharactersDatabase in order to then produce a table with the results. I had it working once before but cannot remember how I got it to work.
Looking at the URL (http://www.animecharactersdatabase.com/api_series_characters.php?character_q=Usagi), "search_results" should itself be an array of characters, each of which is an array of info.
<?php
$url = "http://www.animecharactersdatabase.com/api_series_characters.php?character_q=Usagi";

/* Gets the data from a URL. */
function get_acdb($url)
{
    // ACDB requires certain agents for the query per their documentation.
    $agents = array(
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:7.0.1) Gecko/20100101 Firefox/7.0.1',
        'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100508 SeaMonkey/2.0.4',
        'Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)',
        'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_7; da-dk) AppleWebKit/533.21.1 (KHTML, like Gecko) Version/5.0.5 Safari/533.21.1',
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:56.0) Gecko/20100101 Firefox/56.0 Waterfox/56.2.14',
        'Lynx/2.8.7dev.4 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8d',
        'Lynx/2.8.9dev.8 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/3.4.9',
        'Lynx/2.8.3dev.9 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.6',
        'Opera/9.80 (Windows NT 5.3; U; x64; en-US) Presto/2.12.388 Version/12.18'
    );

    $ch = curl_init();
    $timeout = 5;
    curl_setopt($ch, CURLOPT_URL, $url);
    // I believe this should make it return the data, not just true.
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    curl_setopt($ch, CURLOPT_USERAGENT, $agents[array_rand($agents)]);
    $data = curl_exec($ch);
    curl_close($ch);
    return json_decode($data, true);
}

/* Parse the data into characters. */
$arr = get_acdb($url);
$arr = array($arr["search_results"]);

// Upon testing, this is always an array of 1, and $arr[0] shows no value.
// This is where I need help: making the foreach loop actually add characters
// to the new array SearchArray.
$i = -1;
foreach ($arr[0] as $xyz) {
    // I never get inside this loop because $arr[0] doesn't seem to be an array.
    $i = $i + 1;
    $CharName   = $arr[0][$i]["name"];
    $CharID     = $arr[0][$i]["id"];
    $SeriesID   = $arr[0][$i]["anime_id"];
    $SeriesName = $arr[0][$i]["anime_name"];
    $medialenA  = strlen($CharName) + 25;
    $medialenB  = strlen($SeriesName) + 2;
    $mediatype  = substr($arr[0][$i]["desc"], $medialenA);
    $mediatype  = substr($mediatype, 0, strlen($mediatype) - $medialenB);
    $CharSex    = $arr[0][$i]["gender"];

    // Add relevant matches to the array.
    $SearchArray[] = array(
        'CharID'     => $CharID,
        'Name'       => $CharName,
        'SeriesID'   => $SeriesID,
        'SeriesName' => $SeriesName,
        'Sex'        => $CharSex,
    );
}
?>
I have modified your code a bit, but it is fully functional:
<?php
$url = "https://www.animecharactersdatabase.com/api_series_characters.php?character_q=Usagi";

function getData($url)
{
    $ch = curl_init();
    $timeout = 5;
    curl_setopt($ch, CURLOPT_URL, $url);
    // This makes curl_exec() return the data, not just true.
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:7.0.1) Gecko/20100101 Firefox/7.0.1');
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

$results = getData($url);
$objectResults = json_decode($results);

foreach ($objectResults->search_results as $character) {
    echo $character->anime_id . "\n";
    echo $character->anime_name . "\n";
    echo $character->anime_image . "\n";
    echo $character->character_image . "\n";
    echo $character->id . "\n";
    echo $character->gender . "\n";
    echo $character->name . "\n";
    echo $character->desc . "\n";
    echo "\n\n";
}
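For completeness, a sketch of the same loop in the asker's original json_decode($data, true) style (associative arrays); the bug in the question was wrapping the decoded result in an extra array() before looping:

$arr = json_decode($results, true); // associative arrays instead of stdClass objects

$SearchArray = array();
foreach ($arr["search_results"] as $character) {
    $SearchArray[] = array(
        'CharID'     => $character["id"],
        'Name'       => $character["name"],
        'SeriesID'   => $character["anime_id"],
        'SeriesName' => $character["anime_name"],
        'Sex'        => $character["gender"],
    );
}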
I am trying to get the title of a few pages in PHP with this code. It works fine with almost every link except for a few, for example, with 9gag.
function download_page($url)
{
    $agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36';
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_VERBOSE, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERAGENT, $agent);
    curl_setopt($ch, CURLOPT_URL, $url);
    $data = curl_exec($ch);
    return $data;
}

function get_title_tag($str)
{
    $pattern = '/<title[^>]*>(.*?)<\/title>/is';
    if (preg_match_all($pattern, $str, $out)) {
        return $out[1][0];
    }
    return false;
}

$url = "https://9gag.com/gag/avPBX3b";
$data = download_page($url);
echo $extracted_title = get_title_tag($data);
It echoes
Attention Required! | Cloudflare
so the page seems to be protected by a Cloudflare bot-verification page. But when I try to post this link on any social network, they are able to get the title and all the metadata required. How is that possible?
Edit:
Even if I use the opengraph.io API, I get:
"root":{
"error":{
"code": 2005
"message": "Got 403 error from server."
}
}
Just replace the agent string and it should work OK. From:
$agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36';
to:
$agent = 'facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)';
It seems that Cloudflare enables captcha verification when standard browser agent strings are present, so this easily bypasses it. I'm puzzled by the security here, but that is out of the scope of this question.
You can make use of Facebook's Graph API.
https://graph.facebook.com/v7.0/?fields=og_object&id=https://9gag.com/gag/avPBX3b
JSON Output:
{
"og_object": {
"id": "994417753967326",
"description": "More memes, funny videos and pics on 9GAG",
"title": "32 Places People Have Mispronounced Their Entire Life",
"type": "article",
"updated_time": "2020-06-12T15:54:27+0000"
},
"id": "https://9gag.com/gag/avPBX3b"
}
You can read more about its usage here.
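A minimal PHP sketch of the same lookup (the endpoint and fields are taken from the URL above; depending on your app settings, the Graph API may also require an access token):

$id = rawurlencode('https://9gag.com/gag/avPBX3b');
$ch = curl_init("https://graph.facebook.com/v7.0/?fields=og_object&id=" . $id);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$json = curl_exec($ch);
curl_close($ch);

$data = json_decode($json, true);
if (isset($data['og_object']['title'])) {
    echo $data['og_object']['title']; // "32 Places People Have Mispronounced Their Entire Life"
}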
I am trying to make a site scraper. I made it on my local machine and it works fine there. When I execute the same on my server, it shows a 403 Forbidden error.
I am using the PHP Simple HTML DOM Parser. The error I get on the server is this:
Warning: file_get_contents(http://example.com/viewProperty.html?id=7715888) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.1 403 Forbidden in /home/scraping/simple_html_dom.php on line 40
The line of code triggering it is:
$url="http://www.example.com/viewProperty.html?id=".$id;
$html=file_get_html($url);
I have checked the php.ini on the server and allow_url_fopen is On. A possible solution could be using curl, but I need to know where I am going wrong.
I know it's quite an old thread, but I thought I'd share some ideas.

Most likely, if you don't get any content when accessing a webpage, the site doesn't want you to be able to get the content. So how does it identify that a script, not a human, is trying to access the webpage? Generally, by the User-Agent header in the HTTP request sent to the server.

So to make the website think that the script accessing it is also a human, you must change the User-Agent header for the request. Most web servers will likely allow your request if you set the User-Agent header to a value used by some common web browser.
Some common user agents used by browsers are listed below:
Chrome: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36
Firefox: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:75.0) Gecko/20100101 Firefox/75.0
etc.
$context = stream_context_create(
    array(
        "http" => array(
            "header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"
        )
    )
);
// file_get_contents() needs a full URL including the scheme.
echo file_get_contents("https://www.google.com", false, $context);

This piece of code fakes the user agent and sends the request to https://www.google.com.
References:
stream_context_create
Cheers!
This is not a problem with your script, but with the resource you are requesting. The web server is returning the "forbidden" status code.
It could be that it blocks PHP scripts to prevent scraping, or your IP if you have made too many requests.
You should probably talk to the administrator of the remote server.
Add this after you include the simple_html_dom.php
ini_set('user_agent', 'My-Application/2.5');
You can change it like this in the parser class, from line 35 onward.
function curl_get_contents($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

function file_get_html()
{
    $dom = new simple_html_dom;
    $args = func_get_args();
    $dom->load(call_user_func_array('curl_get_contents', $args), true);
    return $dom;
}
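A quick usage sketch with that override in place (the URL is the one from the question; find() and plaintext are the standard simple_html_dom API):

$html = file_get_html('http://www.example.com/viewProperty.html?id=7715888');
if ($html) {
    echo $html->find('title', 0)->plaintext; // page title, if the request got through
}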
Have you tried another site?

It seems that the remote server has some type of blocking. It may be by user agent; if that's the case, you can try using curl to simulate a web browser's user agent like this:
$url="http://www.example.com/viewProperty.html?id=".$id;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch,CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
$html = curl_exec($ch);
curl_close($ch);
Write this in simple_html_dom.php; for me it worked:

function curl_get_contents($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
    $data = curl_exec($ch); // execute once and keep the response
    curl_close($ch);
    return $data;
}
function file_get_html($url, $use_include_path = false, $context = null, $offset = -1, $maxLen = -1, $lowercase = true, $forceTagsClosed = true, $target_charset = DEFAULT_TARGET_CHARSET, $stripRN = true, $defaultBRText = DEFAULT_BR_TEXT, $defaultSpanText = DEFAULT_SPAN_TEXT)
{
    $dom = new simple_html_dom;
    $args = func_get_args();
    $dom->load(call_user_func_array('curl_get_contents', $args), true);
    return $dom;
    //$dom = new simple_html_dom(null, $lowercase, $forceTagsClosed, $target_charset, $stripRN, $defaultBRText, $defaultSpanText);
}
I realize this is an old question, but...

I was just setting up my local sandbox on Linux with PHP 7 and ran across this. When running scripts from the terminal, PHP uses the php.ini for the CLI. I found that the "user_agent" option there was commented out. I uncommented it, added a Mozilla user agent, and now it works.
Did you check the permissions on the file? I set 777 on my file (on localhost, obviously) and that fixed the problem.
You may also need some additional information in the context to make the website believe that the request comes from a human. What I did was open the website in the browser and copy any extra information that was sent in the HTTP request.
$context = stream_context_create(
    array(
        "http" => array(
            'method' => "GET",
            "header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36\r\n" .
                        "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3\r\n" .
                        "Accept-Language: es-ES,es;q=0.9,en;q=0.8,it;q=0.7\r\n" .
                        "Accept-Encoding: gzip, deflate, br\r\n"
        )
    )
);
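A usage line for the context above (the URL is the one from the question); keep in mind that advertising gzip in Accept-Encoding means the raw body may come back compressed:

echo file_get_contents("http://www.example.com/viewProperty.html?id=7715888", false, $context);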
In my case, the server was rejecting the HTTP 1.0 protocol via its .htaccess configuration. It seems file_get_contents uses HTTP 1.0 by default.
Use the code below.

If you use file_get_contents:

$context = stream_context_create(
    array(
        "http" => array(
            "header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"
        )
    )
);

If you use curl:

curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36');
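Related to the HTTP 1.0 point above: the stream wrapper's protocol version can also be raised explicitly. A minimal sketch, assuming a server that rejects HTTP/1.0 (the URL is the one from the question):

$context = stream_context_create(
    array(
        "http" => array(
            "protocol_version" => 1.1,          // file_get_contents() defaults to HTTP/1.0
            "header" => "Connection: close\r\n" // avoid hanging on keep-alive connections
        )
    )
);
echo file_get_contents("http://www.example.com/viewProperty.html?id=7715888", false, $context);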
I am working on a Roblox group payout API, and if it works I am planning to open it to the public.

Problem: it outputs {}, but it doesn't pay out anything.

Before I could start working on this, I first needed to do a manual payout, from which I captured all the POST parameters and headers. Here is what I got:
METHOD: POST
URL: https://web.roblox.com/groups/3182156/one-time-payout/false
REQUEST BODY: percentages=%7B%22457792390%22:%221%22%7D
HEADERS:
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
referer: https://web.roblox.com/my/groupadmin.aspx?gid=3182156&_=1528631875891
cookie: GuestData=UserID=-608861174; RBXMarketing=FirstHomePageVisit=1; RBXSource=rbx_acquisition_time=6/9/2018 6:18:42 AM&rbx_acquisition_referrer=https://v3rmillion.net/showthread.php?tid=583440&rbx_medium=Direct&rbx_source=v3rmillion.net&rbx_campaign=&rbx_adgroup=&rbx_keyword=&rbx_matchtype=&rbx_send_info=1; rbx-ip=; __utmc=200924205; __utmz=200924205.1528621282.6.4.utmcsr=robuxrewards.site|utmccn=(referral)|utmcmd=referral|utmcct=/; __utma=200924205.428322191.1519910430.1528621282.1528630905.7; RBXImageCache=timg=63313634633937632D393938342D346262642D613663612D333133653130363363373938253231372E3130332E32392E32303925362F31302F323031382031313A34333A303220414D3E2434B19B5881BB5B51486D88F43FC8F5D5787F; __utmt_b=1; gig_hasGmid=ver2; .ROBLOSECURITY=HERE_WAS_A_COOKIE; RBXEventTrackerV2=CreateDate=6/10/2018 6:52:37 AM&rbxid=455629576&browserid=15138233029; __RequestVerificationToken=w6L7tvgTk0c8TeMvuz8QnvVEoF7W7mMxk6UcefoCygoXk97mWkqQGKiLD6XLz5Bssx9FTqkFCzvclhqdrVyww9VcrNY1; RBXSessionTracker=sessionid=a45dce07-ff59-4590-8881-b4200425cf02; __utmb=200924205.11.10.1528630905
I deleted the .ROBLOSECURITY value because with it you can log into my account. But that is all the info I got. With the request body percentages=%7B%22457792390%22:%221%22%7D, when I decode it, I get percentages={"457792390":"1"}. That is good, because my user ID is 457792390 and the amount I paid out is 1. So I created a script that should make this work automatically. Here it is:
<?php
// Receive
$module = $_GET['module'];
$cookie = $_GET['cookie'];
$amount = $_GET['amount'];
$group_id = $_GET['group_id'];
$user_id = $_GET['user_id'];

/* https://freewebhost.fun/api.php?module=group_payout&cookie=YOUR_COOKIE_HERE&amount=YOUR_AMOUNT_HERE&group_id=YOUR_GROUP_ID_HERE&user_id=USERNAME_HERE */

// The function
function group_payout($cookie, $amount, $group_id, $user_id) {
    // Preset stuff
    $content_type = "application/x-www-form-urlencoded; charset=UTF-8";

    // Further
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "https://web.roblox.com/groups/" . $group_id . "/one-time-payout/false");
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS, "percentages=%7B%22" . $user_id . "%22:%22" . $amount . "%22%7D");
    curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36");
    curl_setopt($ch, CURLOPT_HTTPHEADER, array("Content-Type: " . $content_type, "Cookie: .ROBLOSECURITY=" . $cookie . "; RBXViralAsquisition=time=1/24/2018 11:50:50 AM&referrer=https://www.google.nl/&originatingsite=www.google.nl&viraltarget=945929481; RBXSource=rbx_acquisition_time=6/11/2018 1:47:00 AM&rbx_acquisition_referrer=&rbx_medium=Direct&rbx_source=&rbx_campaign=&rbx_adgroup=&rbx_keyword=&rbx_matchtype=&rbx_send_info=1; __utzm=200924205.1516985949.4.3.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided); "));
    curl_setopt($ch, CURLOPT_REFERER, 'https://web.roblox.com/my/groupadmin.aspx?gid=' . $group_id . '#nav-payouts');

    // Let's go
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $server_output = curl_exec($ch);
    curl_close($ch);

    echo $server_output;
}

if ($module == "group_payout") {
    group_payout($cookie, $amount, $group_id, $user_id);
}
?>
I really don't know what the problem can be.
Edit
So, in the comments somebody told me to try out PostMan. Here are the results:
https://pastebin.com/raw/iN4UQPBE (it's too big for the character limit here).
I don't know what to do with these results.
Your XSRF token is invalid. You should include it in the request headers.
To get your XSRF token, send a POST request to https://api.roblox.com/sign-out/v1 with your cookie in the headers. The XSRF token should be in the response headers.
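A sketch of that token fetch in PHP curl (the endpoint is the one named above; the x-csrf-token response header name is an assumption, as it is not documented here):

function get_xsrf_token($cookie)
{
    $ch = curl_init("https://api.roblox.com/sign-out/v1");
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HEADER, true); // include response headers in the output
    curl_setopt($ch, CURLOPT_HTTPHEADER, array("Cookie: .ROBLOSECURITY=" . $cookie));
    $response = curl_exec($ch);
    curl_close($ch);

    // Pull the token out of the response headers (header name assumed).
    if (preg_match('/^x-csrf-token:\s*(\S+)/mi', $response, $m)) {
        return $m[1];
    }
    return null;
}

// Then include it in the payout request headers, e.g.:
// curl_setopt($ch, CURLOPT_HTTPHEADER, array(..., "X-CSRF-TOKEN: " . $token));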
I'm getting this error when I try to access non-English (Unicode) URLs using PHP's file_get_contents() function. The URL was: http://ml.wikipedia.org/wiki/%E0%B4%B2%E0%B4%AF%E0%B4%A3%E0%B5%BD_%E0%B4%AE%E0%B5%86%E0%B4%B8%E0%B5%8D%E0%B4%B8%E0%B4%BF
I've got this error:
Warning: file_get_contents(http://ml.wikipedia.org/wiki/%E0%B4%B2%E0%B4%AF%E0%B4%A3%E0%B5%BD_%E0%B4%AE%E0%B5%86%E0%B4%B8%E0%B5%8D%E0%B4%B8%E0%B4%BF) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.0 403 Forbidden..
Fatal error: Call to a member function find() on a non-object in G:\xampp\htdocs\codes\htmlParse1.php on line 8
Is there any restriction for the file_get_contents() function? Does it only accept English URLs?
You are missing header information, like the user agent. I would advise you to just use curl:
$url = 'http://ml.wikipedia.org/wiki/%E0%B4%B2%E0%B4%AF%E0%B4%A3%E0%B5%BD_%E0%B4%AE%E0%B5%86%E0%B4%B8%E0%B5%8D%E0%B4%B8%E0%B4%BF';
$ch = curl_init($url); // initialize curl handle
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.52 Safari/537.17");
curl_setopt($ch, CURLOPT_REFERER, "http://ml.wikipedia.org");
curl_setopt($ch, CURLOPT_ENCODING, "UTF-8"); // note: CURLOPT_ENCODING sets the Accept-Encoding header, not a character set
$data = curl_exec($ch);
print($data);
If you must use file_get_contents:
$options = array(
    'http' => array(
        'method' => "GET",
        'header' => "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n" .
                    "Cookie: centralnotice_bucket=0-4.2; clicktracking-session=M7EcNiC2Zcuko7exVGUvLfdwxzSK3Boap; narayam-scheme=ml\r\n" .
                    "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.52 Safari/537.17"
    )
);
$url = 'http://ml.wikipedia.org/wiki/%E0%B4%B2%E0%B4%AF%E0%B4%A3%E0%B5%BD_%E0%B4%AE%E0%B5%86%E0%B4%B8%E0%B5%8D%E0%B4%B8%E0%B4%BF';
$context = stream_context_create($options);
$file = file_get_contents($url, false, $context);
echo $file;
If you get a 403 Forbidden, the connection itself works. It's just a warning that the web server responded with status code 403. Wikipedia denies downloading without a valid user agent:

Scripts should use an informative User-Agent string with contact information, or they may be IP-blocked without notice.

The second error comes from the next lines, which try to work with the result of the failed file_get_contents(...) call.

Edit: You should try setting your user agent with e.g. ini_set('user_agent', 'wikiPHP'); before doing the request. That should work fine.
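A minimal sketch of that last suggestion; per Wikipedia's policy quoted above, an informative agent with contact information is best (the name and contact URL here are placeholders):

// Informative user agent, as Wikipedia's user-agent policy asks for.
ini_set('user_agent', 'wikiPHP/1.0 (https://example.com/contact)');

$url = 'http://ml.wikipedia.org/wiki/%E0%B4%B2%E0%B4%AF%E0%B4%A3%E0%B5%BD_%E0%B4%AE%E0%B5%86%E0%B4%B8%E0%B5%8D%E0%B4%B8%E0%B4%BF';
echo file_get_contents($url);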