I'm using a remote XML feed, and I don't want to hit it every time. This is the code I have so far:
$feed = simplexml_load_file('http://remoteserviceurlhere');

if ($feed) {
    $feed->asXML('feed.xml');
} elseif (file_exists('feed.xml')) {
    $feed = simplexml_load_file('feed.xml');
} else {
    die('No available feed');
}
What I want to do is have my script hit the remote service every hour and cache that data into the feed.xml file.
Here is a simple solution:
Check the last time your local feed.xml file was modified. If the difference between the current timestamp and the filemtime timestamp is 3600 seconds or more, update the file:
$feed_updated = filemtime('feed.xml');
$current_time = time();

if ($current_time - $feed_updated >= 3600) {
    // Your sample code here...
} else {
    // use cached feed...
}
<?php
$cache = new JG_Cache();

if (!($feed = $cache->get('feed.xml', 3600))) {
    $feed = simplexml_load_file('http://remoteserviceurlhere');
    $cache->set('feed.xml', $feed);
}
Use any file-based caching mechanism, e.g. http://www.jongales.com/blog/2009/02/18/simple-file-based-php-cache-class/
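If you'd rather not pull in the linked class, a minimal sketch of the same idea follows. Note that a SimpleXMLElement cannot be serialized, so it's safer to cache the raw XML string and re-parse it on a cache hit; SimpleFileCache is a made-up name, and its get()/set() shape simply mirrors the snippet above.

class SimpleFileCache
{
    private $dir;

    public function __construct($dir = './cache')
    {
        $this->dir = rtrim($dir, '/');
        if (!is_dir($this->dir)) {
            mkdir($this->dir, 0775, true);
        }
    }

    // Return the cached string, or false when missing or older than $maxAge seconds.
    public function get($key, $maxAge)
    {
        $path = $this->dir . '/' . md5($key);
        if (!file_exists($path) || (time() - filemtime($path)) >= $maxAge) {
            return false;
        }
        return file_get_contents($path);
    }

    public function set($key, $data)
    {
        return file_put_contents($this->dir . '/' . md5($key), $data) !== false;
    }
}

// Usage: cache the raw XML string, not the SimpleXMLElement object.
$cache = new SimpleFileCache();
if (!($xml = $cache->get('feed.xml', 3600))) {
    $xml = @file_get_contents('http://remoteserviceurlhere');
    if ($xml !== false) {
        $cache->set('feed.xml', $xml);
    }
}
$feed = $xml ? simplexml_load_string($xml) : false;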
// Refresh the cache when feed.xml is missing or more than an hour old.
if (!file_exists('feed.xml') || (time() - filemtime('feed.xml') >= 3600)) {
    $feed = simplexml_load_file($url);
    $feed->asXML('feed.xml');
} else {
    $feed = simplexml_load_file('feed.xml');
}

return $feed;
Take a look at Simple PHP caching.
I created a simple PHP class to tackle this issue. Since I'm dealing with a variety of sources, it can handle whatever you throw at it (XML, JSON, etc). You give it a local filename (for storage purposes), the external feed, and an expires time. It begins by checking for the local file. If it exists and hasn't expired, it returns the contents. If it has expired, it attempts to grab the remote file. If there's an issue with the remote file, it will fall back to the cached file.
Blog post here: http://weedygarden.net/2012/04/simple-feed-caching-with-php/
Code here: https://github.com/erunyon/FeedCache
I'm a newbie with XML-related stuff and I'm stuck on an issue.
I have a MySQL query which fetches URL data, nearly 5000 rows (1 row contains 1 URL).
So I've implemented a cron which fetches 1000 rows at a time from MySQL with pagination. I need to do some validations on the URLs and append the valid ones to an XML file.
Here is my code
public function urlcheck()
{
    $xFile = $this->base_path."sitemap/path/urls.xml";
    $page = 0;
    $cache_key = 'valid_urls';
    $page = $this->cache->redis->get($cache_key);
    if (!$page) {
        $page = 0;
    }
    $xFile = simplexml_load_file($xFile);

    $this->load->model('productnew/productnew_es6_m');
    $urls = $this->db->query("SELECT url FROM product_data where `active` = 1 limit ".$page.",1000")->result();

    $dom = new DOMDocument('1.0','UTF-8');
    $dom->formatOutput = true;
    $root = $dom->createElement('urlset');
    $root->setAttribute('xsi:schemaLocation', 'http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd');
    $root->setAttribute('xmlns:xsi', 'http://www.w3.org/2001/XMLSchema-instance');
    $root->setAttribute('xmlns', 'http://www.sitemaps.org/schemas/sitemap/0.9');
    $dom->appendChild($root);

    foreach ($urls as $val) {
        // validations here
        $url = $dom->createElement('url');
        $root->appendChild($url);
        $lastmod = $dom->createElement('lastmod', date("Y-m-d"));
        $url->appendChild($lastmod);
        $page++;
    }

    $dom->saveXML();
    $dom->save($xFile) or die('XML Create Error');

    if (sizeof($urls) == 0) {
        $page = 0;
    }
    print_r($page);
    $this->cache->redis->save($cache_key, $page, 432000);

    // echo '<xmp>'. $dom->saveXML() .'</xmp>';
    // $dom->saveXML();
    // $dom->save($xFile) or die('XML Create Error');
}
After my first cron execution, 300 valid URLs out of 1000 are saved to the XML file.
Now let's say in my second cron execution I have 200 valid URLs out of 1000.
My expected result is to append these 200 to the existing XML file so that it contains 500 valid URLs in total, and the XML file should get refreshed after 5000 URLs, as I mentioned above.
But every time the cron executes, the old URL data is replaced with the latest ones.
I was wondering how I can save the URL values without overwriting the XML.
Thanks in advance!
As per the comment above, you are opening the file with one API (SimpleXML) but saving a new document with DOMDocument, thus overwriting previous work. Without SimpleXML, perhaps you can try something like this, though it is untested:
public function urlcheck()
{
    $file = $this->base_path."sitemap/path/urls.xml";
    $cache_key = 'valid_urls';
    $page = $this->cache->redis->get($cache_key);
    if (!$page) $page = 0;

    $dom = new DOMDocument('1.0', 'UTF-8');
    $dom->preserveWhiteSpace = false;
    $dom->formatOutput = true;

    // Load the existing sitemap (if any) so new nodes are appended instead of overwritten.
    if (file_exists($file)) {
        $dom->load($file);
    }

    $col = $dom->getElementsByTagName('urlset');
    if ($col->length > 0) {
        $root = $col->item(0);
    } else {
        $root = $dom->createElement('urlset');
        $dom->appendChild($root);
        $root->setAttribute('xsi:schemaLocation', 'http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd');
        $root->setAttribute('xmlns:xsi', 'http://www.w3.org/2001/XMLSchema-instance');
        $root->setAttribute('xmlns', 'http://www.sitemaps.org/schemas/sitemap/0.9');
    }

    # does a `page` node exist? If so, use its value as the $page variable and drop the old node
    $col = $dom->getElementsByTagName('page');
    if ($col->length > 0) {
        $page = intval($col->item(0)->nodeValue);
        $col->item(0)->parentNode->removeChild($col->item(0));
    }

    $this->load->model('productnew/productnew_es6_m');
    $urls = $this->db->query("SELECT `url` FROM `product_data` WHERE `active` = 1 LIMIT ".$page.",1000")->result();

    foreach ($urls as $val) {
        // validations here...
        $url = $dom->createElement('url');
        $root->appendChild($url);
        $lastmod = $dom->createElement('lastmod', date("Y-m-d"));
        $url->appendChild($lastmod);
        $page++;
    }

    // Keep the running total inside the document itself.
    $node = $dom->createElement('page', $page);
    $root->insertBefore($node, $root->firstChild);

    if (empty($urls)) $page = 0;

    $dom->save($file);
    $this->cache->redis->save($cache_key, $page, 432000);
}
Appending to the document looks fine, but you don't open the file you want to append to from disk. Therefore on each page you start with 0 URLs in the XML and append to an empty root node.
But every time the cron executes, the old URL data is replaced with the latest ones.
This is exactly the behaviour you describe, and it sounds like you don't load the XML file in the first place, you just write it.
So the question perhaps is how to open an XML file; appending looks good by your description already.
Let's review by reversing the introduction sentences of your question:
I need to do some validations on the URLs and append the valid ones to an XML file.
So I've implemented a cron which fetches 1000 rows at a time from MySQL with pagination.
I have a MySQL query which fetches URL data, nearly 5000 rows (1 row contains 1 URL).
Assuming the file to append each 1000-URL set to is already on disk (pages 2-5), you would need to append to it. If, however, the file were already on disk on page 1, you would be appending to data left over from some other run of pages 1-5.
So it looks like you have written the code only for when you're on the first page: create a new document (and append to it).
And despite your question, appending does work; you write it yourself:
the old URL data is replaced with the latest ones.
The only thing that does not work is opening the file on pages 2-5.
So let's rephrase the question: How to open an XML file?
But first of all, the variable $page is not meant to stand for page as in pages 1-5 above. It's just a variable with a questionable name: $page stands for the number of URLs processed so far in the cycle, not for the page in the pagination.
Regardless of its name, I'll use it for its value for this answer.
So now let's open the existing document for appending when $page is not 0:
...
$dom = new DOMDocument('1.0','UTF-8');
$dom->formatOutput = true;

if ($page !== 0) {
    // Re-open the document that the previous run saved to disk.
    $dom->load(dom_import_simplexml($xFile)->ownerDocument->documentURI);
}

$col = $dom->getElementsByTagName('urlset');
...
Only on the first run will you have the described behaviour of the file being created anew, and in that case it's fine (on the first run $page === 0).
In any other case $page is not 0 and the file is opened from disk.
I've left the other parts of your code alone so that this example is only introducing this 3-line if-clause.
The documentation for the load($file) function is available in the PHP docs, just in case you missed it so far:
https://www.php.net/manual/en/domdocument.load.php
Try not to re-use the same variable names if you want to come up to speed. Here I had to recycle a whole SimpleXMLElement and import it into DOM only to obtain the original XML file path to open the document, since that path was no longer available as a plain string even though it once was, under the variable $xFile. But that is just a comment in the margin.
And as you're already using Redis, you perhaps may want to queue the URLs into it and process from there, then you'll likely not need the database paging. See Lists of the Redis Data-Types.
You can then also put the good URLs in there in a second list.
With two lists you can even constantly check the progress in Redis directly.
And when finally done, you can write the whole file at once in one transaction out of the good URLs in Redis.
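Since your code goes through CodeIgniter's cache wrapper, the exact calls will differ, but a rough sketch of that idea with the phpredis extension directly (the list names urls:pending and urls:valid are made up) could look like this:

// Sketch only: seed a Redis list with the URLs once, then let each cron run
// pop a batch, validate, and push the good ones onto a second list.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// One-time seeding, e.g. from the MySQL query:
// foreach ($rows as $row) { $redis->rPush('urls:pending', $row->url); }

$batch = 1000;
for ($i = 0; $i < $batch; $i++) {
    $url = $redis->lPop('urls:pending');
    if ($url === false) {
        break; // queue drained
    }
    if (filter_var($url, FILTER_VALIDATE_URL)) { // your validations here
        $redis->rPush('urls:valid', $url);
    }
}

// Once urls:pending is empty, build the <urlset> document in one go from urls:valid.
if ($redis->lLen('urls:pending') == 0) {
    $validUrls = $redis->lRange('urls:valid', 0, -1);
    // ... create the DOMDocument once and append a <url> node per entry ...
}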
If you want to throw some more (minimal) tech on it, take a look at Beanstalkd.
I am writing a script that checks for an updated version from an external server. I use this code in the config.php file to check for the latest version:
$data = get_theme_data('http://externalhost.com/style.css');
$latest_version = $data['Version'];
define('LATEST_VERSION', $latest_version);
This is fine and I can fetch the latest version (get_theme_data is a WordPress function), but the problem is that it will be executed on every single page load, which I do not want. I also do not want to check only when a form is submitted, for example. Alternatively, I was looking into some sort of method to cache the result, or maybe check the version every set number of hours? Is such a thing possible, and how?
Here, gonna make it easy for you. Store the time you last checked for the update in a file.
function checkForUpdate() {
    $file = './check.cfg';

    // First run: no cache file yet, so create it with the next check time and update now.
    if (!file_exists($file)) {
        file_put_contents($file, time() + 86400);
        echo "Update";
        return;
    }

    $nextCheck = (int) file_get_contents($file);
    if ($nextCheck > time()) {
        echo "Do not update";
    } else {
        echo "Update";
        // Schedule the next check one day from now.
        file_put_contents($file, time() + 86400);
    }
}
You can obviously make this much more secure/efficient if you want to.
Edit: This function will check for update once every day.
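If you also want to cache the fetched version string itself, so LATEST_VERSION is always defined and the remote call happens at most once a day, a minimal sketch along the same lines (version.cfg and getLatestVersion() are made-up names):

function getLatestVersion()
{
    $cacheFile = './version.cfg'; // stores "expires|version"

    if (file_exists($cacheFile)) {
        $parts = explode('|', file_get_contents($cacheFile), 2);
        if (count($parts) === 2 && (int)$parts[0] > time()) {
            return $parts[1]; // cached value is still fresh, no remote call
        }
    }

    // Cache missing or stale: hit the remote stylesheet once and keep the result for a day.
    $data = get_theme_data('http://externalhost.com/style.css');
    $version = $data['Version'];
    file_put_contents($cacheFile, (time() + 86400) . '|' . $version);

    return $version;
}

define('LATEST_VERSION', getLatestVersion());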
A scheduled task like this should be set up as a separate cron or at job. You can still write everything in PHP; just make a script that runs from the command line and does the updating. Check out "man crontab" for details, and/or check which scheduling services your server is running.
OK, so I have these requirements and I really don't know where to start. Here is what I have:
What I need is some PHP code that will grab the latest article from the RSS feed of a WordPress blog. When the PHP grabs the RSS feed, it should cache it and only look for a newer version if the cache is empty or if 24 hours have passed. I need this code to be pretty foolproof and able to run without a DB behind it. Can you just cache the RSS results in memory?
I found this but I am not sure it will be useful in this situation... What I am looking for is some good direction on what/how I can do this, and whether there is already a tool out there that can help with this...
Thanks in advance
So if you want to cache the feed itself, it would be pretty simple to do this with a plain text file. Something like this should do the trick:
$validCache = false;
if (file_exists('rss_cache.txt')) {
$contents = file_get_contents('rss_cache.txt');
$data = unserialize($contents);
if (time() - $data['created'] < 24 * 60 * 60) {
$validCache = true;
$feed = $data['feed'];
}
}
if (!$validCache) {
$feed = file_get_contents('http://example.com/feed.rss');
$data = array('feed' => $feed, 'created' => time());
file_put_contents('rss_cache.txt', serialize($data));
}
You could then access the contents of the RSS feed with $feed. If you wanted to cache the article itself, the changes should be fairly obvious.
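To then pull the latest article out of $feed, assuming the standard RSS 2.0 structure WordPress produces (newest item first), a short sketch:

// Parse the cached feed string and grab the newest item.
$rss = simplexml_load_string($feed);
if ($rss !== false && isset($rss->channel->item[0])) {
    $latest = $rss->channel->item[0];
    echo $latest->title, "\n";
    echo $latest->link, "\n";
    echo $latest->pubDate, "\n";
}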
I'm working on a site with a simple PHP-generated Twitter box, with user timeline tweets pulled from the user_timeline RSS feed and cached to a local file to cut down on loads, and as a backup for when Twitter goes down. I based the caching on this: http://snipplr.com/view/8156/twitter-cache/. It all seemed to be working well yesterday, but today I discovered the cache file was blank. Deleting it and then loading the page again generated a fresh file.
The code I'm using is below. I've edited it to try to get it to work with what I was already using to display the feed and probably messed something crucial up.
The changes I made are the following (and I strongly believe that any of these could be the cause):
- Revised the time-difference code (the linked example seemed to use a custom function that wasn't included in the code)
- Removed the "serialize" function from the "fwrites". This is purely because I couldn't figure out how to unserialize once I loaded it in the display code. I truthfully don't understand the role that serialize plays or how it works, so I'm sure I should have kept it in. If that's the case I just need to understand where/how to deserialize in the second part of the code so that it can be parsed.
- Removed the $rss variable in favor of just loading up the cache file in my original tweet display code.
So, here are the relevant parts of the code I used:
<?php
$feedURL = "http://twitter.com/statuses/user_timeline/#######.rss";

// START CACHING
$cache_file = dirname(__FILE__).'/cache/twitter_cache.rss';

// Start with the cache
if (file_exists($cache_file)) {
    $mtime = (strtotime("now") - filemtime($cache_file));
    if ($mtime > 600) {
        $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss');
        $cache_static = fopen($cache_file, 'wb');
        fwrite($cache_static, $cache_rss);
        fclose($cache_static);
    }
    echo "<!-- twitter cache generated ".date('Y-m-d h:i:s', filemtime($cache_file))." -->";
}
else {
    $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/#######.rss');
    $cache_static = fopen($cache_file, 'wb');
    fwrite($cache_static, $cache_rss);
    fclose($cache_static);
}
//END CACHING

//START DISPLAY
$doc = new DOMDocument();
$doc->load($cache_file);
$arrFeeds = array();
foreach ($doc->getElementsByTagName('item') as $node) {
    $itemRSS = array (
        'title' => $node->getElementsByTagName('title')->item(0)->nodeValue,
        'date'  => $node->getElementsByTagName('pubDate')->item(0)->nodeValue
    );
    array_push($arrFeeds, $itemRSS);
}
// the rest of the formatting and display code....
}
?>
ETA 6/17 Nobody can help…?
I'm thinking it has something to do with writing a blank cache file over a good one when Twitter is down, because otherwise I imagine this would be happening every ten minutes when the cache file is overwritten again, but it doesn't happen that frequently.
I made the following change to the part where it checks how old the file is to overwrite it:
$cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss');
if ($mtime > 600 && $cache_rss != '') {
    $cache_static = fopen($cache_file, 'wb');
    fwrite($cache_static, $cache_rss);
    fclose($cache_static);
}
…so now, it will only write the file if it's over ten minutes old and there's actual content retrieved from the rss page. Do you think this will work?
Yes, your code is problematic, because you write whatever Twitter sends you.
You should test the content you get from Twitter, like this:
if (($mtime > 600) && ($cache_rss = file_get_contents($feedURL)))
{
    // file_put_contents() takes the target filename first, then the data to write.
    file_put_contents($cache_file, $cache_rss);
}
file_get_contents() returns false if there is an error; check it before caching new content.
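Putting that check together with the rest of your caching block, so Twitter is only contacted when the cache is stale and a failed or empty download never clobbers a good cache file, a sketch:

if (file_exists($cache_file)) {
    $mtime = time() - filemtime($cache_file);
    if ($mtime > 600) {
        $cache_rss = file_get_contents($feedURL);
        // Only overwrite the cache when the download actually returned content.
        if ($cache_rss !== false && $cache_rss !== '') {
            file_put_contents($cache_file, $cache_rss);
        }
        // Otherwise the old (still readable) cache file is left untouched.
    }
} else {
    $cache_rss = file_get_contents($feedURL);
    if ($cache_rss !== false && $cache_rss !== '') {
        file_put_contents($cache_file, $cache_rss);
    }
}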
A while ago I came across a script that fetched a list of countries/states from a web resource if it wasn't already in a database; the script would then populate the database with those contents and rely on them from then on.
Since I'm working on a localization class of my own, I'll be using the same locale data Zend is using, in the form of around 60 XML files which contain localised data such as countries and languages for locales.
Since the framework I'm working on will rely on these files from now on (where it doesn't yet), and none of the servers currently have this data, should I:
1. Set up my web application to download these files from a central server where all the content is stored in a .tar.gz, unpack them, store them on the server and then rely on them, or
2. Create a separate script to do this, and not actually do it within the application?
Pseudo code:
if ( !data ) {
    resource = getFile( 'http://central-server.com/tar.gz' );
    if ( resource ) {
        resource = unpack( directory, resource )
        return true
    }
    throw Exception('could not download files.')
}
I would go for the first option iff the data needs to be constantly updated; otherwise I would choose your second option.
Here is a method I developed some years ago, that was part of a GeoIP class:
function Update()
{
$result = false;
$databases = glob(HIVE_DIR . 'application/repository/GeoIP/GeoIP_*.dat');
foreach ($databases as $key => $value)
{
$databases[$key] = basename($value);
}
$databases[] = 'GeoIP.dat.gz';
$date = date('ym');
if ((!in_array('GeoIP_' . $date . '.dat', $databases)) && (date('j') >= 2))
{
if ($this->Hive->Filesystem->Write(HIVE_DIR . 'application/repository/GeoIP/GeoIP.dat.gz', file_get_contents('http://www.maxmind.com/download/geoip/database/GeoIP.dat.gz'), false) === true)
{
$handler = gzopen(HIVE_DIR . 'application/repository/GeoIP/GeoIP.dat.gz', 'rb');
$result = $this->Hive->Filesystem->Write(HIVE_DIR . 'application/repository/GeoIP/GeoIP_' . $date . '.dat', gzread($handler, 2 * 1024 * 1024), false);
gzclose($handler);
foreach ($databases as $database)
{
$this->Hive->Filesystem->Delete(HIVE_DIR . 'application/repository/GeoIP/' . $database);
}
}
}
return $result;
}
Basically, Update() was executed every single time; it would then check whether the day of the month was 2 or later (MaxMind releases GeoIP databases on the first day of the month) and whether a database for that month didn't already exist. Only if both these conditions were true would the method download, unpack and rename the database, and remove all the old databases from previous months.
In your case, since you're dealing with locales, doing a similar periodical check once in a while might not be a bad idea, since countries change stuff (names, currencies, calling codes, etc.) a lot.
If this is a library, I would probably make this part of the setup steps. An error can be printed if the data isn't there.
Have an install script do the downloading, or throw an error if it's not available. Downloading on request from the server could lead to timeouts and would likely turn away users. fsockopen is the easiest way to do this and deal with sockets by hand if you don't have cURL set up and can't fopen/fread remote files.
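For the download-and-unpack step such an install script would do, a minimal sketch using PharData (bundled with PHP), assuming allow_url_fopen is enabled; the URL, directory, and fetchLocaleData() name are placeholders taken from the question's pseudo code:

// Hedged sketch of the install-time step; not a real API, just one way to do it.
function fetchLocaleData($directory)
{
    if (glob($directory . '/*.xml')) {
        return true; // data already present, nothing to do
    }

    $archive = $directory . '/locale-data.tar.gz';
    $contents = file_get_contents('http://central-server.com/locale-data.tar.gz');
    if ($contents === false) {
        throw new Exception('Could not download locale files.');
    }
    file_put_contents($archive, $contents);

    // PharData unpacks .tar.gz without shelling out.
    $tarball = new PharData($archive);
    $tar = $tarball->decompress();           // locale-data.tar.gz -> locale-data.tar
    $tar->extractTo($directory, null, true); // overwrite existing files if any

    return true;
}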