How to extract source code from https://twitter.com in PHP

I'm trying to download the source code of a Twitter web page with PHP:
$continut_pp = file_get_contents('https://twitter.com/');
echo $continut_pp;
The problem is that the result is null. I think the problem comes from the https, so how can I extract the source code of an https page in PHP?

Try using the file_get_contents function. Just give it the full web address and it should return the HTML source; note that for https:// URLs this only works if your PHP build has the openssl extension enabled and allow_url_fopen turned on. I hope this helps.
As the first function I suggested did not work, you could try this one: var markup = document.documentElement.innerHTML;. However, that is JavaScript running in the browser, not PHP, so it won't help on the server side.
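If file_get_contents() keeps returning nothing here, a common workaround is cURL, which handles HTTPS as long as PHP's curl extension is enabled. A minimal sketch under that assumption; the User-Agent value is arbitrary, since some sites simply reject requests that don't send one:
$ch = curl_init('https://twitter.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0'); // pretend to be a browser
$continut_pp = curl_exec($ch);
if ($continut_pp === false) {
    echo 'cURL error: ' . curl_error($ch);
}
curl_close($ch);
echo $continut_pp;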

Related

Blank results parsing JSON from an HTTPS API call using PHP and MAMP

I'm trying a very simple PHP script that fetches JSON data from an online HTTPS API while running under MAMP.
However, with the following code I get blank results:
<?php
$cnmkt = "https://api.coinmarketcap.com/v1/ticker/?limit=50";
$json = file_get_contents($cnmkt);
$fgc = json_decode($json,true);
echo $fgc[1]['percent_change_7d'];
?>
But if I copy/paste the content of the HTTPS link into a local test.json file and substitute the file path for the HTTPS link in the $cnmkt variable, the exact same script works properly.
I know I'm missing something very obvious; if someone could help me, that would be very much appreciated. Thanks,
Stefano
The script is working fine for me; I get the expected result of 4.63.
Disable your AV/firewall and check again.
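If the request really is being blocked (or MAMP's PHP can't verify the site's certificate), file_get_contents returns false silently. A small diagnostic sketch, assuming nothing beyond the code already shown; error_get_last() will usually name the underlying SSL or network error:
$json = file_get_contents($cnmkt);
if ($json === false) {
    $err = error_get_last(); // reason for the failure (SSL error, blocked connection, ...)
    die('Request failed: ' . $err['message']);
}
$fgc = json_decode($json, true);
echo json_last_error_msg(); // prints "No error" if the JSON parsed cleanly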

file_get_html() not working with airbnb

I have a problem with file_get_html(); I don't understand why it doesn't work. Can you help me? My code:
$html = file_get_html('https://www.airbnb.fr/');
if ($html) {
echo "good";
}
Have a good day!
I think the server just blocks your request; you will not be able to fetch data from it using simple HTTP requests.
You can try using cURL, proxies, or both (there are ready-to-use solutions for this, like AngryCurl or RollingCurl).
It doesn't work because you have to include the simple_html_dom class to make it work. You can find the code on the official page:
http://simplehtmldom.sourceforge.net/
Then you can simply get the HTML and output it like this:
// Dump the full HTML (including tags) of the page
echo file_get_html('http://www.google.com/')->outertext;
or, if you want to save the result in a variable:
// Dump the full HTML (including tags) into a variable
$html = file_get_html('http://www.google.com/')->outertext;
More info: http://simplehtmldom.sourceforge.net/
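If the site is blocking the default PHP request (as the other answer suggests), one sketch that may help is fetching the page yourself with a browser-like User-Agent and handing the markup to the library's str_get_html() instead of file_get_html(); the User-Agent string below is an arbitrary example:
include 'simple_html_dom.php';
// Send a browser-like User-Agent header; the default PHP one is often blocked
$context = stream_context_create([
    'http' => ['header' => "User-Agent: Mozilla/5.0\r\n"],
]);
$raw = file_get_contents('https://www.airbnb.fr/', false, $context);
if ($raw !== false) {
    $html = str_get_html($raw); // parse the fetched markup with Simple HTML DOM
    echo "good";
}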

How to load the generated HTML from a PHP file? loadHTMLFile not working

I am trying to get the generated HTML from my index.php page. I need to create a CSV file from an HTML table that I generate with PHP.
My problem is that I can't manage to get my HTML table into a PHP variable.
I have tried using loadHTMLFile, but it doesn't seem to work with PHP files.
I also tried using file_get_contents with a local path, but that returns the source before PHP is executed, so there is raw PHP in the middle of the table.
Does anyone know how I can manage to do that?
You can try this:
ob_start();              // start buffering output
include 'index.php';     // include the PHP file locally so it executes
$html = ob_get_clean();  // grab everything it printed
(Including a remote http:// URL would need allow_url_include and would skip the buffering anyway, so include the local file.)
Thanks
Thanks @CD001 for the answer. There are 2 solutions: file_get_contents or cURL.
file_get_contents with the full HTTP URL works well (the server executes the PHP and returns the finished HTML). I will use cURL because I also need to log in to the page.
Thank you guys.
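For the login-then-fetch part, the usual cURL pattern is a cookie jar: POST the credentials once, then request the generated page with the same handle. A rough sketch; the login URL and the username/password field names are placeholders for whatever your site actually expects:
$cookieFile = tempnam(sys_get_temp_dir(), 'cookies');
$ch = curl_init('http://example.com/login.php'); // placeholder login URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, ['username' => 'me', 'password' => 'secret']);
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieFile);  // store the session cookie
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieFile); // send it on later requests
curl_exec($ch);
// Same handle, same cookies: now fetch the generated page
curl_setopt($ch, CURLOPT_URL, 'http://example.com/index.php');
curl_setopt($ch, CURLOPT_HTTPGET, true);
$html = curl_exec($ch);
curl_close($ch);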

GET request to an API URL

I have signed up for a synonym API; see the details on this page.
I am having trouble implementing this in my PHP code.
If I copy and paste the link into the web browser, I can see the results no problem.
Instead of typing the word in manually, I want to have a variable in the link holding the relevant word, i.e. $variable_with_word_stored as shown below:
http://words.bighugelabs.com/api/2/xxxxxxxx/$variable_with_word_stored/php
// format could be php (I would unserialize it) or json (I could decode it)
Any ideas, guys? Thanks.
It sounds like you want to call that web page and store the result in a variable. What you should be doing is sending an HTTP GET request to that page from within the code.
Check out using cURL with PHP: you can send an HTTP request to your URL, capture the response, and parse it with json_decode.
http://php.net/manual/en/curl.examples-basic.php
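A minimal sketch of that approach, assuming the curl extension is available and using the API's json format so the response can go straight through json_decode (the key is still the placeholder from the question):
$key = 'xxxxxxxx';
$word = $variable_with_word_stored;
$ch = curl_init("http://words.bighugelabs.com/api/2/$key/$word/json");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture the response as a string
$response = curl_exec($ch);
curl_close($ch);
$synonyms = json_decode($response, true); // associative array of results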
Try it like this; maybe you don't need cURL:
$key = "xxxxxxxx";
$word = "love";
echo file_get_contents("http://words.bighugelabs.com/api/2/$key/$word/php");
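Since that URL ends in /php, the response is a serialized PHP array, so you would unserialize it instead of echoing it:
$data = unserialize(file_get_contents("http://words.bighugelabs.com/api/2/$key/$word/php"));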

Simple HTML DOM only returns partial html of website

I had a big PHP script written to scrape images from this site: "http://www.mcso.us/paid/", but when it didn't work I stripped my code down to simply echoing the whole page.
I found that the table with the image links I want doesn't show up. I believe it's because the remote site uses ASP to generate the table. Is there a way around this? Am I wrong? Please help.
<?php
include("simple_html_dom.php");
set_time_limit(0);
$baseURL = "http://www.mcso.us/paid/";
$html = file_get_html($baseURL);
echo $html;
?>
There's no obvious reason why the site using ASP would cause this. Have you tried navigating the page with JavaScript turned off? It's more likely that the tables are generated through JS.
Do note that the search results are retrieved through AJAX (page http://www.mcso.us/paid/default.aspx) by making a POST request. You can use cURL (http://php.net/manual/en/book.curl.php); in Chrome, right-click --> Inspect element --> Network, run a search, and you will see all the info there (POST variables, etc.).
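A rough outline of that POST with cURL; the field name below is made up, so copy the real ones (including any ASP.NET fields such as __VIEWSTATE) from the Network tab as described above:
$ch = curl_init('http://www.mcso.us/paid/default.aspx');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
// Hypothetical field name; use whatever the Network tab shows for a real search
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(['searchField' => 'smith']));
$html = curl_exec($ch);
curl_close($ch);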
