I've searched everywhere and cannot find how to post data using VB.NET.
So I was wondering if someone can convert this cURL code I made into VB.NET :)
$useragent="Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1";
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_USERAGENT, $useragent);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE ); // return into a variable
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_POST, TRUE);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
$result = curl_exec( $ch ); // run!
curl_close($ch);
$data is an array; I'm not sure how that will work in VB.NET, though.
cURL is an independent application, and a PHP extension lets you use it seamlessly from inside PHP code. So you can install cURL and drive it via shell commands from .NET ... while at the same time you might find this useful as well:
http://curl.haxx.se/libcurl/dotnet/
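As a side note on the $data array from the question: CURLOPT_POSTFIELDS accepts either an associative array or a URL-encoded string, and the string form is the exact body any .NET HTTP client would need to POST. A minimal PHP sketch with hypothetical field names:
$data = array('username' => 'foo', 'password' => 'bar'); // hypothetical fields, not from the question
$body = http_build_query($data);                         // "username=foo&password=bar"
curl_setopt($ch, CURLOPT_POSTFIELDS, $body);             // sent as application/x-www-form-urlencoded
One detail worth knowing: passing the array itself makes cURL send multipart/form-data, while passing the built string sends a plain URL-encoded body.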
Is it possible to write a PHP function that returns the HTML string of any possible link, the same way a browser does? Example links: "http://google.com", "", "mywebsite.com", "somesite.com/.page/nn/?s=b#85452", "lichess.org"
What I've tried:
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($curl, CURLOPT_SSLVERSION, 3);
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 20);
curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
$data = curl_exec($curl);
if(curl_errno($curl)){
    echo 'Curl error: ' . curl_error($curl);
}
echo $data;
curl_close($curl);
Sadly, for some links this code returns a blank page because of SSL or other issues, while for other links it works.
Is there any alternative to cURL? I just don't understand why PHP cannot retrieve HTML out of the box.
cURL may fail on SSL sites if you're running an older version of PHP, so make sure your OS and PHP version are up to date. Also note that your code forces SSLv3 via CURLOPT_SSLVERSION; many servers have SSLv3 disabled, so removing that line and letting cURL negotiate the protocol usually helps.
You may also opt to use file_get_contents(), which works with URLs and is generally a simpler alternative if you just want to make plain GET requests:
$html = file_get_contents('https://www.google.com/');
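If you need a user agent, a timeout, or more forgiving SSL handling with file_get_contents(), you can pass a stream context. A minimal sketch, assuming allow_url_fopen is enabled; the options mirror the cURL flags above (disabling peer verification should be avoided in production):
$context = stream_context_create(array(
    'http' => array(
        'user_agent' => 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13',
        'timeout'    => 20, // seconds
    ),
    'ssl' => array(
        'verify_peer'      => false, // mirrors CURLOPT_SSL_VERIFYPEER above
        'verify_peer_name' => false,
    ),
));
$html = file_get_contents('https://www.google.com/', false, $context);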
I'm trying to get information from the Spotify API. When I access this URL in my browser, it all works perfectly: https://api.spotify.com/v1/search?q=Led%20Zeppelin%20Kashmir&type=track
However, when I use the code below to fetch the data, I just get a white page. I've Googled and searched Stack Overflow, but still no cigar. Does anyone know why this code doesn't work?
I appreciate any help on this.
$artist = 'Led Zeppelin';
$title = 'Kashmir';
$spotifyURL = 'https://api.spotify.com/v1/search?q='.$artist.'%20'.$title.'&type=track';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $spotifyURL);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:x.x.x) Gecko/20041107 Firefox/x.x");
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
$json = curl_exec($ch);
$json = json_decode($json);
curl_close($ch);
echo '<pre>'.print_r($json, true).'</pre>';
Your URL contains spaces. Use the following line instead:
$spotifyURL = 'https://api.spotify.com/v1/search?q='.urlencode($artist.' '.$title).'&type=track';
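Equivalently, you can let http_build_query() handle all of the encoding in one step; a sketch using the same variables as above (spaces become '+', the standard query-string encoding):
$query = http_build_query(array(
    'q'    => $artist . ' ' . $title, // "q=Led+Zeppelin+Kashmir"
    'type' => 'track',
));
$spotifyURL = 'https://api.spotify.com/v1/search?' . $query;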
I'm creating a script that scrapes the site www.piratebay.se. The script was working fine two or three days ago, but now I'm having problems with it.
This is my code:
$URL = 'http://thepiratebay.se';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $URL);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1");
curl_setopt($ch, CURLOPT_COOKIE, "language=pt_BR; c[thepiratebay.se][/][language]=pt_BR");
$fonte = curl_exec ($ch);
curl_close ($ch);
echo $fonte;
The response from this code is not clean HTML; it looks like this instead:
��[s۸N>��k�9��-ىmI7��$�8�.v��͕���$h���y�G�Sg:ӷ>�5����ʱ�aor&���.v)���������) d�w��8w�l����c�u""1����F*G��ِ�2$�6�C�}��z(bw�� 4Ƒz6�S��t4�K��x�6u���~�T���ACJb��T^3�USPI:Mf��n�'��4��� ��XE�QQ&�c5�`'β�T Y]D�Q�nBfS�}a�%� ���R) �Zn��̙ ��8IB�a����L�
I already tried setting the user agent via .htaccess, PHP, and cURL, but with no success.
Add this:
curl_setopt($ch, CURLOPT_ENCODING, "gzip");
Tested in my local environment; it works fine with this option added.
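The reason this helps is that the server is sending a gzip-compressed body; CURLOPT_ENCODING makes cURL send an Accept-Encoding header and transparently decompress the response. An empty string also works and lets cURL offer every encoding it supports:
curl_setopt($ch, CURLOPT_ENCODING, ''); // accept any supported encoding and decode it automatically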
I'm using the following function, which is based on cURL:
$url = "http://www.web_site.com";
$string = @file_get_contents($url); // '@' suppresses the warning if the fetch fails
if(!$string){
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0');
    $string = curl_exec($ch);
    curl_close($ch);
}
But suddenly my website stopped working because of this function, and once I remove the cURL part it works fine.
So I thought my hosting provider had disabled cURL, so I checked it out, and it should be working. So what is wrong?
Any help appreciated; what should I say to my hosting provider?
file_get_contents() doesn't give you much control over redirects or request headers; try using cURL with CURLOPT_FOLLOWLOCATION enabled and CURLOPT_MAXREDIRS set to whatever limit you prefer.
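A sketch of the fallback branch from the question with those two options added (the redirect limit of 10 is an arbitrary choice):
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow 301/302 redirects
curl_setopt($ch, CURLOPT_MAXREDIRS, 10);        // cap the length of redirect chains
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0');
$string = curl_exec($ch);
curl_close($ch);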
I am trying to use cURL to automate a login with multiple steps involved. The problem I am running into is that I get the first page of the login fine, but on the next page I must select or click a link to continue. How the heck do I "keep going"? I've tried taking the next URL and putting it into my cURL code, but that does not work: it just goes directly to that page and errors out because I have not gone through the first page of the login process. Here is my code.
$ch = curl_init();
$data = array('fp_software' => '', 'fp_screen' => '', 'fp_browser' => '','txtUsername' => "$username", 'btnLogin' => 'Log In');
curl_setopt($ch, CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
curl_setopt($ch, CURLOPT_URL, 'https://www.website.com/Login.aspx');
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_exec($ch);
curl_close ($ch);
The next URL is www.website.com/PassMarkFrame.aspx. Basically I need to crawl through this login process.
I tried this... but it didn't work:
curl_setopt($ch, CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
curl_setopt($ch, CURLOPT_URL, 'https://www.website.com/Login.aspx'); // use the URL that shows up in your <form action="...url..."> tag
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_exec($ch);
curl_setopt($ch, CURLOPT_URL, 'https://www.website.com/PassMarkFrame.aspx'); // the next page in the login flow
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_exec($ch);
curl_close ($ch);
Is that the right syntax?
Don't close the cURL handle after each stage. If cookies are being set and you haven't configured the CURLOPT_COOKIEJAR/CURLOPT_COOKIEFILE options, then you start each stage with a brand new, sparkly fresh and clean cURL handle that has no "memory" of the previous requests.
Keep the same cURL handle going, and any cookies set by the site will be preserved.
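A minimal sketch of that idea applied to the question's code (the URLs and $data come from the question; the cookie-file path is a hypothetical placeholder):
$cookieFile = '/tmp/mycookies.txt'; // hypothetical path; must be writable by PHP
$ch = curl_init();
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieFile);  // write cookies here when the handle is closed
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieFile); // read cookies from here on each request

// Step 1: POST the login form
curl_setopt($ch, CURLOPT_URL, 'https://www.website.com/Login.aspx');
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
$step1 = curl_exec($ch);

// Step 2: GET the next page, reusing the same handle and its cookies
curl_setopt($ch, CURLOPT_URL, 'https://www.website.com/PassMarkFrame.aspx');
curl_setopt($ch, CURLOPT_HTTPGET, 1); // switch the handle back from POST to GET
$step2 = curl_exec($ch);

curl_close($ch); // cookies are flushed to $cookieFile here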