file get content from iframe - php

I would like to get the content of an iframe using the id or class of that iframe.
$links = "https://www.amazon.co.uk/Book-Secret-Wisdom-Prophetic-Evolution/dp/599054314X/ref=pd_sbs_14_2/257-3608675-7951114?_encoding=UTF8&pd_rd_i=599054314X&pd_rd_r=f103ccc1-7985-11e9-987c-178a1a538946&pd_rd_w=E5xsZ&pd_rd_wg=206kw&pf_rd_p=18edf98b-139a-41ee-bb40-d725dd59d1d3&pf_rd_r=MV7NZ41V278ECZM1135G&psc=1&refRID=MV7NZ41V278ECZM1135G";
$res = @file_get_contents($links);
$dom = new DomDocument();
@$dom->loadHTML($res);
$xpath = new DOMXpath($dom);
I would like to get the content by id:
$dom->getElementById('iframeContent');
However, it always returns the whole page, not the content of that iframe.
Has anyone run into this problem? Please help.

Iframes don't have content (well, they might, but it is alternative content for when iframes are not supported by the client).
They have a src attribute containing a URL pointing at an external document.
You need to read the src attribute value, resolve it to an absolute URL (if it isn't one already), then make an HTTP request to it, parse that, and extract the data from it.
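A minimal sketch of that approach, reusing the question's iframe id (assuming the src attribute is already an absolute URL; if it is relative, resolve it against $links first):
$dom = new DOMDocument();
@$dom->loadHTML(file_get_contents($links));
$iframe = $dom->getElementById('iframeContent');
if ($iframe !== null) {
    // The iframe only points at the embedded document
    $src = $iframe->getAttribute('src');
    // Fetch and parse the embedded document itself
    $inner = new DOMDocument();
    @$inner->loadHTML(file_get_contents($src));
    // Query $inner (e.g. via DOMXPath) for the data you want
    echo $inner->textContent;
}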

Related

How to get dynamically created html <audio> tag in php

I am trying to read the HTML <audio> tag in PHP, but it is created dynamically.
This is the URL I'm using to read:
$dom = new DOMDocument();
@$dom->loadHTML($html);
foreach (iterator_to_array($dom->getElementsByTagName('audio')) as $node) {
$this->printnode($node);
}
In the printnode() function it looks like no <audio> tag exists, because it is created dynamically.
After seeing the structure: yes, the URL for the actual audio is loaded dynamically via JS.
But the audio playlist data is still visible. Use that:
$xpath = new DOMXPath($dom);
$playlist_data = $xpath->evaluate('string(//script[@id="playlist-data"])');
$data = json_decode($playlist_data, true);
echo $data['audio'];
It's inside another script tag, in JSON string format. So basically, access that node and get its value as a string. That gives you the JSON string; as usual, pass it to json_decode and it will return an array, and you can then access the audio URL like any normal array element.
Sidenote: I used XPath purely as a personal preference; you can use:
$playlist_data = $dom->getElementById('playlist-data')->nodeValue;
if you choose to do so.

xpath getting data from iframe domXPath php

I am playing around with website-scraping techniques. For the example link, it always returns empty for the description.
The reason is that it's populated by JS with the following code. How do we handle these kinds of scenarios?
// Frontend JS
P.when('DynamicIframe').execute(function(DynamicIframe){
var BookDescriptionIframe = null,
bookDescEncodedData = "book desc data",
bookDescriptionAvailableHeight,
minBookDescriptionInitialHeight = 112,
options = {},
iframeId = "bookDesc_iframe";
I am using php domxpath as below
$file = 'sample.html';
$dom = new DOMDocument();
$dom->preserveWhiteSpace = false;
// I am saving the returned html to a file and reading the file.
@$dom->loadHTMLFile($file);
$xpath = new DOMXPath($dom);
// This xpath works on chrome console, but not here
// because the content is dynamically created via js
$desc = $xpath->query('//*[@id="bookDesc_iframe"]');
Every time you see this kind of JavaScript-generated content, especially from big players like Amazon or Google, you should immediately suspect that there is a graceful-degradation implementation.
Meaning a fallback is provided for environments where JavaScript doesn't work, such as the Links browser, for better browser coverage.
Look out for <noscript>; you may find one, and with that you can solve the problem, as in the sketch below.
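A minimal sketch of that idea, assuming the book description is shipped inside a <noscript> fallback (the exact markup is an assumption; inspect the real page first):
$dom = new DOMDocument();
$dom->preserveWhiteSpace = false;
@$dom->loadHTMLFile($file);
$xpath = new DOMXPath($dom);
// Each <noscript> holds the markup served to non-JS clients
foreach ($xpath->query('//noscript') as $noscript) {
    echo $dom->saveHTML($noscript);
}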

Pull content from one wordpress site to another wordpress site

I am trying to find a way of displaying the text from a website on a different site.
I own both sites, and they both run on WordPress (I know this may make it more difficult). I just need a page that mirrors the text from the original page, so that when the original page is updated, the mirror also updates.
I have some experience in PHP and HTML, and I would rather not use JS.
I have been looking at some posts that suggest cURL and file_get_contents but have had no luck editing it to work with my sites.
Is this even possible?
Look forward to your answers!
Both cURL and file_get_contents() are fine for getting the full HTML output from a URL. For example, with file_get_contents() you can do it like this:
<?php
$content = file_get_contents('http://elssolutions.co.uk/about-els');
echo $content;
However, in case you need just a portion of the page, DOMDocument and DOMXPath are far better options, as with the latter you can also query the DOM. Below is a working example.
<?php
// The `id` of the node in the target document to get the contents of
$url = 'http://elssolutions.co.uk/about-els';
$id = 'comp-iudvhnkb';
$dom = new DOMDocument();
// Silence `DOMDocument` errors/warnings on html5-tags
libxml_use_internal_errors(true);
// Loading content from external url
$dom->loadHTMLFile($url);
libxml_clear_errors();
$xpath = new DOMXPath($dom);
// Querying DOM for target `id`
$xpathResultset = $xpath->query("//*[@id='$id']")->item(0);
// Getting plain html
$content = $dom->saveHTML($xpathResultset);
echo $content;

How does OffLiberty.com parse links to get files?

Does anybody have any idea how they do it? I currently use OffLiberty.com to parse Mixcloud links to get the raw MP3 URL for use in a custom HTML5 player for iOS compatibility. I was just wondering if anyone knew how exactly their process works, so I could create something similar that would 'cut out the middleman', so to speak, and my end user wouldn't have to go to an external site to get a link to the MP3 for the mix they want to post. Just a thought really; not terribly important if it can't be done, but it would be a nice touch :)
Anybody any idea?
Note that I'm against content scraping, and you should ask that website for permission to scrape their MP3 URLs. Otherwise, if I were them, I'd block you right now and ad vitam æternam.
Anyway, you can parse its HTML using DOMDocument.
For example :
<?php
// just so you don't see parse errors
$internal_errors = libxml_use_internal_errors(true);
// initialize the document
$doc = new DomDocument();
// load a page
$doc->loadHTMLFile('http://www.mixcloud.com/LaidBackRadio/le-motel-on-the-road/');
// initialize XPATH for the document
$xpath = new DomXPath($doc);
// span with "data-preview-url" seems to contain MP3 url
// we request them inside a DomNodeList http://www.php.net/manual/en/class.domnodelist.php
$mp3 = $xpath->query('//span[@data-preview-url]');
foreach($mp3 as $m){
// we print the attribute value
echo $m->attributes->getNamedItem('data-preview-url')->nodeValue . '<br/>';
}
libxml_use_internal_errors($internal_errors);

Finding and Echoing out a Specific ID from HTML document with PHP

I am grabbing the contents from Google with PHP. How can I search $page for elements with the id "lga" and echo out another property? Say #lga is an image; how would I echo out its source?
No, I'm not going to do this with Google; Google is strictly an example and testing page.
<body><img id="lga" src="snail.png" /></body>
I want to find the element with the id "lga" and echo out its source; so for the above code I would want to echo out "snail.png".
This is what I'm using and how I'm storing what I found:
<?php
$url = "https://www.google.com/";
$page = file($url);
foreach($page as $part){
}
?>
You can achieve this using the built-in DOMDocument class. This class allows you to work with HTML in a structured manner rather than parsing plain text yourself, and it's quite versatile:
$dom = new DOMDocument();
$dom->loadHTML($html);
To get the src attribute of the element with the id lga, you could simply use:
$imageSrc = $dom->getElementById('lga')->getAttribute('src');
Note that DOMDocument::loadHTML will generate warnings when it encounters invalid HTML. The method's doc page has a few notes on how to suppress these warnings.
Also, if you have control over the website you are parsing the HTML from, it might be more appropriate to have a dedicated script to serve the information you are after. Unless you need to parse exactly what's on a page as it is served, extracting data from HTML like this could be quite wasteful.
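A minimal sketch of that dedicated-script idea, assuming you control the source site (the endpoint name and field name are illustrative, not an existing API):
<?php
// endpoint.php on the source site: serve just the data, no HTML parsing needed
header('Content-Type: application/json');
echo json_encode(['lga_src' => 'snail.png']);
The consuming site can then fetch and decode it directly:
$data = json_decode(file_get_contents('https://example.com/endpoint.php'), true);
echo $data['lga_src'];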
