I have an XML feed which I am extracting using PHP. I have the code written to find the values I need and display them correctly on the page.
The XML is:
<Agents>
<Agent>
<id></id>
<description></description>
<name></name>
</Agent>
</Agents>
PHP Code
<?php
$url = "urlgoeshere";
$xml = simplexml_load_file($url);
$html = "";
for ($i = 0; $i < 10; $i++)
{
    $id = $xml->Agent[$i]->id;
    $name = $xml->Agent[$i]->name;
    $description = $xml->Agent[$i]->description;
    $html .= "<h1>$id</h1><h2>$name</h2><p>$description</p>";
}
echo $html;
This is set to load the first 10 agents, which works fine, but I want to change it to load only one specific agent based on its id.
So, for example, if an agent has an id of 1200 in the XML feed, I want to find that agent and load only that one, but I can't seem to work out an easy way to do this.
Just use an if condition inside the loop:
$idToFind = 1200;
for ($i = 0; $i < 10; $i++) {
    $id = $xml->Agent[$i]->id;
    if ($id == $idToFind) {
        $name = $xml->Agent[$i]->name;
        $description = $xml->Agent[$i]->description;
        $html .= "<h1>$id</h1><h2>$name</h2><p>$description</p>";
    }
}
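Alternatively, SimpleXML's xpath() can select the matching agent with a single query instead of a counting loop. A minimal self-contained sketch; the inline XML string and the sample agents stand in for the real feed (with your code you would keep simplexml_load_file($url)):

```php
<?php
// Sketch: find one <Agent> by id with an XPath query.
// The inline XML below stands in for simplexml_load_file($url).
$xml = simplexml_load_string(
    '<Agents>
       <Agent><id>1100</id><description>First</description><name>Alice</name></Agent>
       <Agent><id>1200</id><description>Second</description><name>Bob</name></Agent>
     </Agents>'
);

$matches = $xml->xpath("//Agent[id='1200']"); // returns an array of matching elements
if ($matches) {
    $agent = $matches[0];
    echo "<h1>{$agent->id}</h1><h2>{$agent->name}</h2><p>{$agent->description}</p>";
}
```

Note that xpath() always returns an array, so check it is non-empty before using the first element.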
You have two options. Either you filter client-side (in your code) or you filter server-side.
Server side
If you request the XML file from e.g. a RESTful service, you might be able to pass a parameter directly in your request.
Instead of requesting example.com/agents.xml you could perhaps request example.com/agents/1.xml or something like that. In that case you have to check the API you request the XML file from. The advantage of this type of filtering is that you load a smaller XML file, with less data and traffic.
Client side
If you are unable to filter the data on the server side, you need to check it in your PHP code. The simplest option would be to add an if statement in your loop, and at this scale it is probably the easiest as well. In case you have to load many more entries, or speed and efficiency matter for your application, you might want to use another XML parser: the SimpleXML class loads the whole file into memory. I have written a relatively efficient way to parse an XML file with XMLReader, which is more efficient and requires less memory. Feel free to edit that example to fit your needs.
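A minimal, untested sketch of that streaming idea with XMLReader; the inline $feed string and its sample agents are stand-ins for the real data (with a real feed you would call $reader->open($url) instead of $reader->XML($feed)):

```php
<?php
// Sketch: stream the <Agents> document with XMLReader and stop as soon
// as the wanted id is found, without loading the whole file into memory.
$feed = <<<XML
<Agents>
  <Agent><id>1100</id><description>First</description><name>Alice</name></Agent>
  <Agent><id>1200</id><description>Second</description><name>Bob</name></Agent>
</Agents>
XML;

$idToFind = '1200';
$reader = new XMLReader();
$reader->XML($feed);               // for a real feed: $reader->open($url);

// move the cursor to the first <Agent> element
while ($reader->read() && $reader->name !== 'Agent');

$html = '';
while ($reader->name === 'Agent') {
    // expand only this one node into SimpleXML for convenient access
    $agent = simplexml_load_string($reader->readOuterXML());
    if ((string)$agent->id === $idToFind) {
        $html = "<h1>{$agent->id}</h1><h2>{$agent->name}</h2><p>{$agent->description}</p>";
        break;                     // stop as soon as the agent is found
    }
    $reader->next('Agent');        // jump to the next sibling, skipping children
}
$reader->close();
echo $html;
```

Only one <Agent> at a time is ever expanded into memory, which is what makes this cheaper than SimpleXML for large files.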
I'm a newbie with XML files and related stuff, and I've got stuck on an issue.
I have a MySQL query which fetches nearly 5000 rows of URL data (1 row contains 1 URL).
So I've implemented a cron which fetches 1000 rows at a time from MySQL with pagination. I need to do some validation on the URLs and append the valid ones to an XML file.
Here is my code
public function urlcheck()
{
$xFile = $this->base_path."sitemap/path/urls.xml";
$page = 0;
$cache_key = 'valid_urls';
$page = $this->cache->redis->get($cache_key);
if(!$page){
$page=0;
}
$xFile = simplexml_load_file($xFile);
$this->load->model('productnew/productnew_es6_m');
$urls= $this->db->query("SELECT url FROM product_data where `active` = 1 limit ".$page.",1000")->result();
$dom = new DOMDocument('1.0','UTF-8');
$dom->formatOutput = true;
$root = $dom->createElement('urlset');
$root->setAttribute('xsi:schemaLocation', 'http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd');
$root->setAttribute('xmlns:xsi', 'http://www.w3.org/2001/XMLSchema-instance');
$root->setAttribute('xmlns', 'http://www.sitemaps.org/schemas/sitemap/0.9');
$dom->appendChild($root);
foreach($urls as $val)
{
// validations here
$url = $dom->createElement('url');
$root->appendChild($url);
$lastmod = $dom->createElement('lastmod', date("Y-m-d"));
$url->appendChild($lastmod);
$page++;
}
$dom->saveXML();
$dom->save($xFile) or die('XML Create Error');
if(sizeof($urls) == 0){
$page = 0;
}
print_r($page);
$this->cache->redis->save($cache_key, $page, 432000);
// echo '<xmp>'. $dom->saveXML() .'</xmp>';
// $dom->saveXML();
// $dom->save($xFile) or die('XML Create Error');
}
After my first cron execution, 300 valid URLs out of 1000 are saved to the XML file.
Now let's say in my second cron execution I have 200 valid URLs out of 1000.
My expected result is to append these 200 to the existing XML file so that it contains 500 valid URLs in total, and the XML file should be reset after 5000 URLs, as mentioned above.
But every time the cron executes, the old URL data is replaced with the latest batch.
I was wondering how I can save the URL values without overwriting the XML.
Thanks in advance!
As per the comment above, you are opening the file with one API (SimpleXML) but saving a new document with DOMDocument, thus overwriting previous work. Without SimpleXML, perhaps you can try it like this (though it is untested):
public function urlcheck(){
$file=$this->base_path."sitemap/path/urls.xml";
$cache_key='valid_urls';
$page=$this->cache->redis->get($cache_key);
if(!$page)$page=0;
$dom=new DOMDocument('1.0','UTF-8');
$dom->formatOutput = true;
# load the existing file, if there is one, so new entries are appended to it
if( file_exists( $file ) )$dom->load( $file );
$col=$dom->getElementsByTagName('urlset');
if( $col->length > 0 )$root=$col->item(0);
else{
$root=$dom->createElement('urlset');
$dom->appendChild( $root );
$root->setAttribute('xsi:schemaLocation','http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd');
$root->setAttribute('xmlns:xsi','http://www.w3.org/2001/XMLSchema-instance');
$root->setAttribute('xmlns','http://www.sitemaps.org/schemas/sitemap/0.9');
}
# does a `page` node exist? if so, use its value as the $page variable
$col=$dom->getElementsByTagName('page');
if( $col->length > 0 )$page=intval( $col->item(0)->nodeValue );
$this->load->model('productnew/productnew_es6_m');
$urls=$this->db->query("SELECT `url` FROM `product_data` where `active` = 1 limit ".$page.",1000")->result();
foreach( $urls as $val ){
$url = $dom->createElement('url');
$root->appendChild($url);
$lastmod = $dom->createElement('lastmod', date("Y-m-d"));
$url->appendChild($lastmod);
$page++;
}
# replace any previous `page` node with the updated count
$old=$dom->getElementsByTagName('page');
while( $old->length > 0 )$root->removeChild( $old->item(0) );
$node=$dom->createElement( 'page', $page );
$root->insertBefore( $node, $root->firstChild );
if( empty( $urls ) )$page=0;
$dom->save( $file );
$this->cache->redis->save( $cache_key, $page, 432000 );
}
Appending to the document looks fine, but you don't open the file you want to append to from disk. Therefore on each run you start with 0 URLs in the XML and append to an empty root node.
But after executing the cron every time, old url data is being replaced with latest once.
This is exactly the behaviour you describe, and it sounds like you don't load the XML file in the first place, you just write it.
So the question perhaps is how to open an XML file; the appending already looks good from your description.
Let's review, by reversing the introduction sentences of your question:
I need to do some validations on the urls and should append the valid urls in an xml file.
so i've implemented a cron which fetches 1000 rows at time from mysql with pagination.
I have a mysql query which fetches url data nearly 5000 rows (1 row contains 1 url).
Assuming the file to append each 1000-URL set to is already on disk (runs 2-5), you would need to open it and append. If, however, the file were already on disk on run 1, you would be appending to the leftovers of a previous cycle.
So it looks like you have written the code only for the first run: creating a new document (and appending to it).
And despite your question, the appending itself does work; you write it yourself:
old url data is being replaced with latest once.
The only thing that does not work is opening the file on runs 2 - 5.
So let's rephrase the question: how do you open an XML file?
But first of all: the variable $page is not meant to stand for page as in runs 1 - 5 above. It is just a variable with a questionable name; $page holds the number of URLs processed so far in the cycle, not the page of the pagination.
Regardless of its name, I'll use its value for this answer.
So now lets open the existing document for appending when $page is not 0:
...
$dom = new DOMDocument('1.0','UTF-8');
$dom->formatOutput = true;
if ($page !== 0) {
    $dom->load(dom_import_simplexml($xFile)->ownerDocument->documentURI);
}
$col=$dom->getElementsByTagName('urlset');
...
Only on the first run will you get the described behaviour of the file being created anew, and in that case it's fine (on the first run $page === 0).
In any other case $page is not 0 and the file is opened from disk.
I've left the other parts of your code alone, so this example only introduces this 3-line if clause.
The documentation for the load($file) function is available in the PHP docs, just in case you missed it so far:
https://www.php.net/manual/en/domdocument.load.php
Try not to re-use the same variable names if you want to come up to speed quickly. Here I had to recycle a whole SimpleXMLElement and import it into DOM only to obtain the original XML file path to open the document, because that path was no longer available as a plain string, despite once being stored in $xFile. But that is just a remark in the margin.
And as you're already using Redis, you may want to queue the URLs into it and process them from there; then you'll likely not need the database paging. See Lists among the Redis data types.
You can then also put the good URLs into a second list.
With two lists you can even check the progress directly in Redis at any time.
And when finally done, you can write the whole file at once, in one transaction, out of the good URLs in Redis.
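A rough, hypothetical sketch of that two-list idea with the phpredis extension; the list names ('urls:pending', 'urls:valid') and the filter_var check are made up for illustration, and a Redis server must be running locally:

```php
<?php
// Hypothetical sketch: queue pending URLs in one Redis list, collect the
// valid ones in a second list, and only write the sitemap when done.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->del('urls:pending', 'urls:valid'); // start clean for this demo

// producer: queue all URLs once (here: two sample values)
foreach (['https://example.com/a', 'not a url'] as $url) {
    $redis->rPush('urls:pending', $url);
}

// worker (e.g. the cron job): pop, validate, collect
while (($url = $redis->lPop('urls:pending')) !== false) {
    if (filter_var($url, FILTER_VALIDATE_URL)) { // your real checks go here
        $redis->rPush('urls:valid', $url);
    }
}

// when the pending list is empty, write the whole file in one go
$validUrls = $redis->lRange('urls:valid', 0, -1);
```

Because the pending list survives between cron runs, each run can simply keep popping until the list is empty, with no LIMIT/OFFSET bookkeeping.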
If you want to throw some more (minimal) tech on it, take a look at Beanstalkd.
I need to scrape this HTML page using PHP ...
http://www.cittadellasalute.to.it/index.php?option=com_content&view=article&id=6786:situazione-pazienti-in-pronto-soccorso&catid=165:pronto-soccorso&Itemid=372
... I need to extract the numbers for the rows "Rosso", "Giallo", "Verde" and "Bianco" (note that these numbers are dynamic, so they can change when you refresh the page, but that doesn't matter...).
I've seen that these rows are inside some IFrames (for example http://listeps.cittadellasalute.to.it/?id=01090201 ), and the values are loaded using an AJAX request (for example http://listeps.cittadellasalute.to.it/gtotal.php?id=01090101).
Is there a way to scrape these values directly from the original HTML page using PHP and $xpath->query (I'd like to avoid parsing the individual JSON responses)?
Suggestions / examples?
I think the problem is that the values aren't in the original page; they are built once the page is loaded. So you would need to use something which honours all the JavaScript functionality (i.e. Selenium WebDriver), which is a bit overkill for what you want to do (I assume). It's much easier to process the IFrames directly.
You could extract the URLs of the IFrames from the original page...
$url = "http://www.cittadellasalute.to.it/index.php?option=com_content&view=article&id=6786:situazione-pazienti-in-pronto-soccorso&catid=165:pronto-soccorso&Itemid=372";
$pageContents = file_get_contents($url);
$page = simplexml_load_string($pageContents, "SimpleXMLElement", LIBXML_NOERROR | LIBXML_ERR_NONE);
$ns = $page->getDocNamespaces();
$page->registerXPathNamespace('def', array_values($ns)[0]);
$iframes = $page->xpath("//def:iframe");
foreach ( $iframes as $frame ) {
echo "iframe:".$frame['src'].PHP_EOL;
}
Which gives (just now)
iframe:http://listeps.cittadellasalute.to.it/?id=01090101
iframe:http://listeps.cittadellasalute.to.it/?id=01090201
iframe:http://listeps.cittadellasalute.to.it/?id=01090301
iframe:http://listeps.cittadellasalute.to.it/?id=01090302
You can then process these pages.
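Processing one of those pages could then look roughly like this. Untested sketch: the $html string below is invented markup standing in for a fetched IFrame page (file_get_contents($frame['src'])), and the //tr selector is a placeholder, since the real query depends on the page's actual structure, which may change:

```php
<?php
// Sketch: pull label/number pairs out of an (assumed) table in an IFrame page.
$html = '<table>
           <tr><td>Rosso</td><td>2</td></tr>
           <tr><td>Giallo</td><td>5</td></tr>
         </table>';

libxml_use_internal_errors(true);   // real pages are rarely valid XML
$dom = new DOMDocument();
$dom->loadHTML($html);
$xpath = new DOMXPath($dom);

$counts = [];
foreach ($xpath->query('//tr') as $row) {
    $cells = $row->getElementsByTagName('td');
    $counts[$cells->item(0)->textContent] = (int)$cells->item(1)->textContent;
}
print_r($counts);
```

Using DOMDocument::loadHTML with libxml_use_internal_errors(true) is the usual way to tolerate the sloppy markup real pages tend to have.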
For security reasons we need to disable PHP/MySQL for a non-profit site, as it has a lot of vulnerabilities. It's a small site, so we want to just rebuild it without a database and bypass the vulnerability of an admin page.
The website just needs to stay alive and remain dormant. We do not need to keep updating the site in future, so we're looking for a static-ish design.
Our current URL structure has query strings in the URL which fetch values from the database,
e.g. artist.php?id=2
I'm looking for an easy and quick way to change artist.php so that instead of fetching values from a database it just includes data from a flat HTML file, so:
artist.php?id=1 → fetch data from /artist/1.html
artist.php?id=2 → fetch data from /artist/2.html
artist.php?id=3 → fetch data from /artist/3.html
artist.php?id=4 → fetch data from /artist/4.html
artist.php?id=5 → fetch data from /artist/5.html
The reason for doing it this way is that we need to preserve the URL structure for SEO purposes, so I do not want to expose the HTML files to the public directly.
What basic PHP code would I need to achieve this?
To do it exactly as you ask would be like this:
$id = intval($_GET['id']); // non-numeric input becomes 0
$page = file_get_contents(__DIR__ . "/artist/$id.html");
If $id === 0, there was something other than a number in the query parameter. You could also keep the artist information in an array:
<?php
// datafile.php
return array(
    1 => "Artist 1 is this and that",
    2 => "Artist 2...",
);
And then in your artist.php
$data = include('datafile.php');
if (array_key_exists($_GET['id'], $data)) {
$page = $data[$_GET['id']];
} else {
// 404
}
HTML isn't your best option, but its cousin is THE BEST for static data files.
Let me introduce you to XML! (documentation to PHP parser)
XML is similar to HTML as structure, but it's made to store data rather than webpages.
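For example, all artists could live in one XML data file which artist.php queries by id. Untested sketch; the file contents are inlined here (and the ids and names invented) for illustration, where a real script would use simplexml_load_file('artists.xml'):

```php
<?php
// Sketch: one XML file holds all artists; artist.php selects one by id.
$xml = simplexml_load_string(
    '<artists>
       <artist id="1"><name>First Artist</name><bio>Bio text here</bio></artist>
       <artist id="2"><name>Second Artist</name><bio>More bio text</bio></artist>
     </artists>'
);

$id = intval($_GET['id'] ?? 2);  // the default is only for this demo
$match = $xml->xpath("/artists/artist[@id='$id']");
if ($match) {
    echo "<h1>{$match[0]->name}</h1><p>{$match[0]->bio}</p>";
} else {
    http_response_code(404);
    echo "missing artist!";
}
```

Because intval() reduces any malicious input to an integer, the id is safe to interpolate into the XPath expression here.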
If instead your HTML pages are already complete and you just need to serve them, you can use URL rewriting in your webserver (if you're using Apache, see mod_rewrite).
Finally, a pure PHP solution (which I don't recommend):
<?php
// protect from displaying unwanted webpages or other vulnerabilities:
// we NEVER trust user input, and we NEVER use it without checking it first.
$valid_ids = array(1,2,3,4,5 /*etc*/);
if (in_array($_REQUEST['id'], $valid_ids)) {
    $id = $_REQUEST['id'];
} else {
    echo "missing artist!"; die;
}
//read the html file
$html_page = file_get_contents("/artist/$id.html");
//display the html file
echo $html_page;
Currently I am using unset() to remove a parent node in SimpleXML and write the result back to the XML file.
I tried this code and it was working a while ago; after cleaning up my code I can't find why it suddenly stopped working.
The debugging approaches I took: the file can be accessed, I can enter the loop and the if statement, and the file gets saved (Notepad++ asks me to reload), but the <systemInfo></systemInfo> node does not get deleted.
Here is my sample Code:
$userorig = $_POST['user'];
$userinfos = simplexml_load_file('userInfo.xml'); // Opens the user XML file
foreach ($userinfos->userinfo->account as $account)
{
// Checks if the user in this iteration of the loop is the same as $userorig (the user i want to find)
if($account->user == $userorig)
{
echo "hello";
$rootSystem = $account->systemInfo;
unset($rootSystem);
}
}
$userinfos->saveXML('userInfo.xml');
My XML File:
<userinfos>
<userinfo>
<account>
<user>TIGERBOY-PC</user>
<toDump>2014-03-15 03:20:44</toDump>
<toDumpDone>0</toDumpDone>
<initialCheck>0</initialCheck>
<lastChecked>2014-03-16 07:12:17</lastChecked>
<alert>1</alert>
<systemInfo>
... (many nodes and sub nodes here) ...
</systemInfo>
</account>
</userinfo>
</userinfos>
Rather than iterating over the whole XML, use an XPath query to select the node:
$userorig = $_POST['user'];
$userinfos = simplexml_load_file('userInfo.xml'); // Opens the user XML file
$deletethisuser = $userinfos->xpath("/userinfos/userinfo/account[user = '$userorig']/systemInfo")[0];
unset($deletethisuser[0]);
Comments:
The [0] in the xpath... line requires PHP >= 5.4; in case you are running a lower version, either update or use:
$deletethisuser = $userinfos->xpath("/userinfos/userinfo/account[user = '$userorig']/systemInfo");
unset($deletethisuser[0][0]);
Recommended reading: hakre's answer in this thread: Remove a child with a specific attribute, in SimpleXML for PHP
It worked again; sorry, I do not know why. I kept running it on multiple instances and now it works. The program shows weird behaviour, but I tried it around 15 times and it did its job.
I'm making an interface-website to update a concert-list on a band-website.
The list is stored as an XML file and has this structure:
I already wrote a script that enables me to add a new gig to the list, this was relatively easy...
Now I want to write a script that enables me to edit a certain gig in the list.
Every gig is unique because of its first attribute, "id".
I want to use this reference to edit the other attributes in that node.
My PHP is very poor, so I hope someone can point me in the right direction here...
My PHP script :
Well, I don't know what your XML structure looks like, but assuming something like:
<gig id="someid">
<venue></venue>
<day></day>
<month></month>
<year></year>
</gig>
$xml = new SimpleXMLElement('gig.xml', 0, true);
$gigs = $xml->xpath('//gig[@id="'.$_POST['id'].'"]');
$gig = $gigs[0];
$gig->venue = $_POST['venue'];
$gig->month = $_POST['month'];
// etc..
$xml->asXml('gig.xml'); // save back to file
Now if instead all these data points are attributes, you can use $gig->attributes()->venue to access them.
There is really no need for the loop unless you are doing multiple updates with one post - you can get at any specific record via an XPath query. SimpleXML is also a lot lighter and a lot easier to use for this type of thing than DOMDocument, especially as you aren't using the extra features of DOMDocument.
You'll want to load the XML file into a DOMDocument with:
<?php
$xml = new DOMDocument();
$xml->load("xmlfile.xml");
// find the tags that you want to update
$tags = $xml->getElementsByTagName("GIG");
// find the tag with the id you want to update
foreach ($tags as $tag) {
    if ($tag->getAttribute("id") == $id) { // found the tag, now update the attribute
        $tag->setAttribute("[attributeName]", "[attributeValue]");
    }
}
// save the xml (DOMDocument::save() requires a filename)
$xml->save("xmlfile.xml");
?>
The code is untested, but it gives the general idea.