Issues Fetching Property Listings' Data Using RETS - PHP

I am trying to download a live feed of property listings from CREA's DDF. I am making an API request via PHP to the DDF and pulling all the recent listings into my DB. This works fine, except that my client's listings, which are in the DDF and should come through with all the other listings, are not being pulled. I appear to get all the listings for the surrounding area, but perhaps not, since my client's listings, which should be part of the pull, never arrive. When I talked to the CREA people they said my client's listings are in the DDF, so I should be able to pull them along with everything else. I was hoping to get advice from people who have a better understanding of making requests like this, or better yet, of CREA's DDF specifically.
I will provide my code below; I have tried to include only the relevant parts and strip out unnecessary code to make this a little easier to read. If you want to see more of the code, I will add those parts on request.
If I understand correctly I need to add to my parameters array, but I really don't know why my request is behaving this way, so any help would be awesome!
Here is the code for my download.php file
$TimeBackPull = "-24 hours";

/* RETS Variables */
require("PHRets_CREA.php");
$RETS = new PHRets();
$RETSURL = "http://data.crea.ca/Login.svc/Login";
$RETSUsername = "**********************";
$RETSPassword = "**********************";
$RETS->Connect($RETSURL, $RETSUsername, $RETSPassword);
$RETS->AddHeader("RETS-Version", "RETS/1.7.2");
$RETS->AddHeader('Accept', '*/*'); // accept any content type
$RETS->SetParam('compression_enabled', true);
$RETS_PhotoSize = "LargePhoto";
$RETS_LimitPerQuery = 100;

if($debugMode /* DEBUG OUTPUT */)
{
    $RETS->SetParam("catch_last_response", true);
    $RETS->SetParam("debug_file", "CREA_Anthony.txt");
    $RETS->SetParam("debug_mode", true);
}

// First query: Limit 1, Count 1 -- we only want the total record count.
$DBML = "(LastUpdated=" . date('Y-m-d', strtotime($TimeBackPull)) . ")";
$params = array("Limit" => 1, "Format" => "STANDARD-XML", "Count" => 1);
$results = $RETS->SearchQuery("Property", "Property", $DBML, $params);
$totalAvailable = $results["Count"];

// Page through the full result set in batches of $RETS_LimitPerQuery.
for($i = 0; $i < ceil($totalAvailable / $RETS_LimitPerQuery); $i++)
{
    $startOffset = $i * $RETS_LimitPerQuery;
    $params = array("Limit" => $RETS_LimitPerQuery, "Format" => "STANDARD-XML", "Count" => 1, "Offset" => $startOffset);
    $results = $RETS->SearchQuery("Property", "Property", $DBML, $params);
    foreach($results["Properties"] as $listing)
    {
        //Do Some Stuff
    }
}
Here is what my current request looks like in the returned XML file:
http://data.crea.ca/Search.svc/Search?SearchType=Property&Class=Property&Query=%28LastUpdated%3D2015-09-22%29&QueryType=DMQL2&Count=1&Format=STANDARD-XML&Limit=1&StandardNames=0
Another thing: the CREA people said there should be around 1900 active listings to pull, but when I count the results I currently only get around 182.

Did you get this resolved?
CREA DDF is a weird animal. They support the RETS spec only partially. They've designed their system around a separation between "Destinations" (aka "Data Feeds") and "Tech Providers".
1) Each of your clients creates one or more data feeds, each of which is assigned a unique DestinationID. When setting up the feed, they select you as the Tech Provider, so their listings get included in your feed, too.
2) You, as the tech provider, have a single feed where you can pull all listings across all of your clients. Where this breaks down, though, is that each listing does not reference which feed/destination it belongs to. You need to pull data in the context of a particular destination, and then manually associate the current DestinationID with the listings that come through.
CREA thinks that they've made it simpler by having a single Tech Provider feed, but they've actually made it more difficult because they are providing incomplete data on the responses. You, as the developer, need to manually do the associations at your end.
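To make the association concrete, here is a rough sketch of a per-destination pull. To be clear, the destination list, the feed credentials, and the saveListing() helper are all hypothetical, and Disconnect() assumes the stock PHRETS API; adapt this to however PHRets_CREA and your feeds are actually configured:
require("PHRets_CREA.php");

// Hypothetical map of client feeds. Each destination is pulled
// separately so its listings can be tagged with the DestinationID,
// since the listings themselves carry no feed reference.
$destinations = array(
    "12345" => array("user" => "feed_user_1", "pass" => "feed_pass_1"),
    "67890" => array("user" => "feed_user_2", "pass" => "feed_pass_2"),
);
foreach($destinations as $destinationId => $creds)
{
    $RETS = new PHRets();
    $RETS->Connect("http://data.crea.ca/Login.svc/Login", $creds["user"], $creds["pass"]);
    $params = array("Limit" => 100, "Format" => "STANDARD-XML", "Count" => 1);
    $results = $RETS->SearchQuery("Property", "Property", "(LastUpdated=2015-09-22)", $params);
    foreach($results["Properties"] as $listing)
    {
        // Manually record which destination this listing came from.
        $listing["DestinationID"] = $destinationId;
        saveListing($listing); // hypothetical persistence helper
    }
    $RETS->Disconnect();
}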
Are your clients selecting you as their Technology Provider during the setup of their Data Feeds? Do you see their feeds show up in your Destinations table?

Related

Reviews in Google API

I am using the code below to fetch Google reviews for a property. I fetch the review count for a property and compare it with the old count for that property (stored in the DB). If the new count is greater than the one in the system, an email is sent.
This file is run every hour (as a cron job), and I have enabled billing in the Google API, so the maximum limit is 150,000 requests.
But for some reason the API does not return the exact count of reviews.
For example:
I run this file for one property which has 4 reviews, but the API returns 0 two or three times; then, after some time, it returns 4 reviews.
I don't know the reason behind it. I also noticed that the reviews are visible on the Google search page and in Google+. Likewise, you can write reviews in multiple places, such as Google+ and Google Maps.
To check reviews, I am using the Google+ URL. So is it possible that a review exists, but in another area (like on the Google search page but not in Google+)?
/* Call the API to get the Google review count */
$url = "https://maps.googleapis.com/maps/api/place/details/json?";
$params = array(
    "placeid" => $google_place_id,
    "key" => $google_api_key
);
$url .= http_build_query($params);
$resjson = file_get_contents($url);
$msg = $resjson;
Yii::log($msg, 'info', 'application');
$resjson = json_decode($resjson, true);
// Guard against a missing key so we don't trigger an undefined-index notice.
$review_count = empty($resjson['result']['user_ratings_total']) ? 0 : $resjson['result']['user_ratings_total'];
/* If the review count is greater than 0, check the old count and send an email if it changed */
if($review_count > 0)
{
    if(sizeof($ressql) > 0)
    {
        /* If the Google+ review count is greater than the system's, send an email */
        if($review_count > trim($ressql[0]['google_plus_review']))
        {
            $this->send_googleplusmail($prop_id);
            $msg = "Google+ Review for property id (Mail Sent):" . $prop_id . " , New Review: $review_count, Old Review: " . $ressql[0]['google_plus_review'];
            Yii::log($msg, 'info', 'application');
        }
    }
}
// Note: interpolating values directly into SQL like this is fragile;
// see the parameter-binding sketch below.
$sql = " INSERT INTO tbl_review_alert (propertyid, google_plus_review) VALUES ";
$sql .= "('{$prop_id}','{$review_count}')";
$sql .= " ON DUPLICATE KEY UPDATE propertyid= {$prop_id}, google_plus_review= {$review_count}";
$this->insert_review($sql);
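As an aside, binding the values instead of interpolating them into the SQL string protects against injection and malformed queries. In Yii 1.x that could look roughly like this (a sketch; it assumes the default DB connection is available via Yii::app()->db):
$sql = "INSERT INTO tbl_review_alert (propertyid, google_plus_review)
        VALUES (:pid, :reviews)
        ON DUPLICATE KEY UPDATE google_plus_review = VALUES(google_plus_review)";
// Bound parameters are escaped by the driver, unlike string interpolation.
Yii::app()->db->createCommand($sql)->execute(array(
    ':pid' => $prop_id,
    ':reviews' => $review_count,
));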
My questions are:
(1) Is it possible that the review exists, but in another area (like on the Google search page but not in Google+)? If yes, can I obtain the URL where the review is posted?
(2) Are all of the reviews in sync across Google?
(3) Or am I doing something wrong in my code?
I think I've spotted where the problem is.
The reason you can't see the existing reviews for that place is that there seem to be two Google+ accounts for the same place; the only difference (at least the first one I noticed) is in the zip code, MA 02116 vs. MA 02111.
Take a look at:
https://plus.google.com/101511264527346460079/about
and
https://plus.google.com/105917345931375838754/about
As you can see, the second one has the same reviews you see on the search page.
And by inserting the address "The Kensington, 665 Washington St, Boston, MA 02116, Stati Uniti" into the finder, I obtain a placeid different from the other one.
Now by using this last one in
$params = array(
    "placeid" => $google_place_id, // the new placeid here
    "key" => $google_api_key
);
I can then get the 5 reviews in the Place API json response.
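If you want to resolve the address to its place ID programmatically rather than through the finder, the Places "Find Place" endpoint should work; a minimal sketch (it assumes the endpoint is enabled for your API key, and reuses the question's $google_api_key):
$address = "The Kensington, 665 Washington St, Boston, MA 02116";
$url = "https://maps.googleapis.com/maps/api/place/findplacefromtext/json?" . http_build_query(array(
    "input" => $address,
    "inputtype" => "textquery",
    "fields" => "place_id,formatted_address",
    "key" => $google_api_key,
));
$res = json_decode(file_get_contents($url), true);
if (!empty($res["candidates"])) {
    // Use the first candidate's place ID for the Place Details request.
    $google_place_id = $res["candidates"][0]["place_id"];
}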

How to get 200+ pages in SilverStripe in a single query?

One of our SilverStripe sites is on shared hosting and is having major performance issues. The issues seem to be caused by the shared SQL server throttling the number of queries that can be made.
The pages that run the slowest fetch 200+ pages to place on a Google Map:
$DirectoryItems = DirectoryItem::get()->where("\"Latitude\" IS NOT NULL AND \"Longitude\" IS NOT NULL")->sort('Title ASC');
$MapItems = new ArrayList();
foreach ($DirectoryItems as $DirectoryItem) {
    $MapItems->push(new ArrayData(array(
        "Latitude" => $DirectoryItem->Latitude,
        "Longitude" => $DirectoryItem->Longitude,
        "MapMarkerURL" => $DirectoryItem->MapMarkerURL,
        "Title" => addslashes($DirectoryItem->Title),
        "Link" => $DirectoryItem->Link()
    )));
}
Each of the 200+ MapItems generates its own SQL query, which is overloading the shared SQL server.
I started off trying to get the same information with a single query:
$DirectoryItems = DB::query('SELECT `DirectoryItem`.`Latitude`, `DirectoryItem`.`Longitude`, `DirectoryItem`.`MapMarkerURL`, `SiteTree_Live`.`Title`
    FROM `DirectoryItem`, `SiteTree_Live`
    WHERE `DirectoryItem`.`ID` = `SiteTree_Live`.`ID`
    AND `DirectoryItem`.`Latitude` IS NOT NULL AND `DirectoryItem`.`Longitude` IS NOT NULL
    ORDER BY `SiteTree_Live`.`Title`');
$MapItems = new ArrayList();
foreach ($DirectoryItems as $DirectoryItem) {
    $MapItems->push(new ArrayData(array(
        "Latitude" => $DirectoryItem['Latitude'],
        "Longitude" => $DirectoryItem['Longitude'],
        "MapMarkerURL" => $DirectoryItem['MapMarkerURL'],
        "Title" => addslashes($DirectoryItem['Title']),
        "Link" => ??????
    )));
}
But this falls over when it comes to getting the link to the DirectoryItem.
I thought about adding the Link as a DB field on DirectoryItem, but that's beginning to feel needlessly complicated for what should be a straightforward operation.
What is the best way of getting the information for 200+ DirectoryItems in a single query?
Did you have a look into caching? If you show the same items on the map all the time, you don't need to hit the DB on every request.
See
Silverstripe Docs for caching
partial caching of elements of your site
Static publisher module for real fast, static pages managed with SilverStripe
Static publish queue module, another approach for generating static pages
It takes a huge load off your server if you cache properly.
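As a minimal sketch of what that could look like in PHP (SilverStripe 3.x's SS_Cache, which wraps Zend_Cache; the one-hour lifetime is an arbitrary choice):
// Cache the assembled marker data so the per-item queries run once
// per cache period instead of on every request.
SS_Cache::set_cache_lifetime('DirectoryMap', 3600);
$cache = SS_Cache::factory('DirectoryMap');

if (!($serialized = $cache->load('mapitems'))) {
    $items = array();
    foreach (DirectoryItem::get()->where('"Latitude" IS NOT NULL AND "Longitude" IS NOT NULL') as $item) {
        $items[] = array(
            'Latitude' => $item->Latitude,
            'Longitude' => $item->Longitude,
            'MapMarkerURL' => $item->MapMarkerURL,
            'Title' => addslashes($item->Title),
            'Link' => $item->Link(),
        );
    }
    $serialized = serialize($items); // the default backend stores strings
    $cache->save($serialized, 'mapitems');
}
$MapItems = unserialize($serialized);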
If you still have problems when caching, you should think about a better server.
The SiteTree class has a static function which is used in the CMS to get the link for a certain SiteTree ID. So you just need to extend your SQL query to also select the ID, and then you can get the link to any page by calling:
$link = SiteTree::link_shortcode_handler(array('id' => $id), false);
Edit: wmk suggested a different and probably more future-proof way:
$page = SiteTree::get()->byID($id);
if ($page instanceof SiteTree) $link = $page->Link();
Untested; src: http://api.silverstripe.org/master/class-SiteTree.html
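Combining the first suggestion with the original raw query, a sketch of the loop might look like this. Note that link_shortcode_handler still looks up each page internally, so this removes the hand-written join rather than the per-item lookups; pair it with caching if query count is the real bottleneck:
$rows = DB::query('SELECT `DirectoryItem`.`Latitude`, `DirectoryItem`.`Longitude`, `DirectoryItem`.`MapMarkerURL`, `SiteTree_Live`.`ID`, `SiteTree_Live`.`Title`
    FROM `DirectoryItem`, `SiteTree_Live`
    WHERE `DirectoryItem`.`ID` = `SiteTree_Live`.`ID`
    AND `DirectoryItem`.`Latitude` IS NOT NULL AND `DirectoryItem`.`Longitude` IS NOT NULL
    ORDER BY `SiteTree_Live`.`Title`');
$MapItems = new ArrayList();
foreach ($rows as $row) {
    $MapItems->push(new ArrayData(array(
        "Latitude" => $row['Latitude'],
        "Longitude" => $row['Longitude'],
        "MapMarkerURL" => $row['MapMarkerURL'],
        "Title" => addslashes($row['Title']),
        // Resolve the link from the ID we selected above.
        "Link" => SiteTree::link_shortcode_handler(array('id' => $row['ID']), false)
    )));
}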

TYPO3: getting data and converting it to a specific XML schema

I am new to TYPO3, so sorry if this is too obvious.
I do not expect a complete solution; just the topics I would need to read about in order to solve the task would be perfectly enough. :)
Here is the task:
I have a TYPO3 installation with job advertisements in it. Now the company wants to publish that data to a social website, which needs the job advertisement data placed on a server as an XML feed that looks like this: http://www.kununu.com/docs#jobfeed. Don't worry about what it says in there; it's just fields like job title, description, etc.
Like I said, I am completely new to this and just have a vague idea.
My thoughts so far were something like this:
I probably need to write a plugin which pulls the data out of TYPO3 at the push of a button.
That plugin needs to establish a database connection to pull the data (it's probably MySQL, but I am not entirely sure yet).
The data needs to be formatted, which is done either by some string operations or by some kind of XML handler, I assume.
Side note: I read something about TypoScript, but I'd like to avoid it, since this is a one-time project and probably not worth the time to learn. For me, at least.
Thank you loads for your help in advance.
Cheers
You can handle that (basically) with TypoScript. The other part has to come from PHP (e.g. an Extbase plugin)... The first part creates the XML output. The second part uses my demo plugin to include data (pages plus special fields) from the DB.
Within TypoScript we are able to create a new typeNum. With that you can call your XML (?type=1234). Within the backend you can select each page for output.
If you need help, just contact me and I will send you the plugin.
sitemap = PAGE
sitemap {
    typeNum = 1234
    config {
        # Set charset to utf-8
        metaCharset = utf-8
        # Deactivate TYPO3 header code
        disableAllHeaderCode = 1
        # Content-type settings
        additionalHeaders = Content-type:text/xml;charset=utf-8
        # Do not cache the page
        no_cache = 1
        # No XHTML cleaning through TYPO3
        xhtml_cleaning = 0
    }
    10 = USER_INT
    10 {
        userFunc = TYPO3\CMS\Extbase\Core\Bootstrap->run
        pluginName = Sitemap
        extensionName = Srcxmlprovider
        controller = Sitemap
        vendorName = Sourcecrew
        action = exportXml
        switchableControllerActions {
            Sitemap {
                1 = exportXml
            }
        }
    }
}
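For the PHP side, here is a hypothetical sketch of the controller action behind such a plugin; the repository and its getters are assumptions, so map them to your actual job advertisement records and the Kununu feed fields:
namespace Sourcecrew\Srcxmlprovider\Controller;

use TYPO3\CMS\Extbase\Mvc\Controller\ActionController;

class SitemapController extends ActionController
{
    /**
     * Hypothetical repository holding the job advertisement records.
     * @var \Sourcecrew\Srcxmlprovider\Domain\Repository\JobRepository
     * @inject
     */
    protected $jobRepository;

    /**
     * Renders all job advertisements as an XML feed.
     */
    public function exportXmlAction()
    {
        $xml = new \SimpleXMLElement('<jobs/>');
        foreach ($this->jobRepository->findAll() as $job) {
            $node = $xml->addChild('job');
            // Field names would follow the Kununu job feed schema linked above.
            $node->addChild('title', htmlspecialchars($job->getTitle()));
            $node->addChild('description', htmlspecialchars($job->getDescription()));
        }
        // Returning the string makes it the response body for ?type=1234.
        return $xml->asXML();
    }
}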
I have also quickly pushed the extension to the TER; a tutorial is included.
http://typo3.org/extensions/repository/view/srcxmlprovider

How can I get an id from a simplepie post which can be used to look it up later?

I've recently started developing a portfolio website which I would like to link to my wordpress blog using simplepie. It's been quite a smooth process so far - loading names and descriptions of posts, and linking them to the full post was quite easy. However, I would like the option to render the posts in my own website as well. Getting the full content of a given post is simple, but what I would like to do is provide a list of recent posts which link to a php page on my portfolio website that takes a GET variable of some sort to identify the post, so that I can render the full content there.
That's where I've run into problems - there doesn't seem to be any way to look up a post according to a specific id or name or similar. Is there any way I can pull some unique identifier from a post object on one page, then pass the identifier to another page and look up the specific post there? If that's impossible, is there any way for me to simply pass the entire post object, or temporarily store it somewhere so it can be used by the other page?
Thank you for your time.
I stumbled across your question while looking for something else about SimplePie. But I do work with an identifier while using SimplePie, so this seems to be the answer to your question:
My getFeedPosts function in PHP looks like this:
public function getFeedPosts($numberPosts = null) {
    $feed = new SimplePie(); // default options
    $feed->set_feed_url('http://yourname.blogspot.com'); // Set the feed
    $feed->enable_cache(true); /* Enable caching */
    $feed->set_cache_duration(1800); /* seconds to cache the feed */
    $feed->init(); // Run SimplePie.
    $feed->handle_content_type();

    $allFeeds = array();
    $number = $numberPosts > 0 ? $numberPosts : 0;
    foreach ($feed->get_items(0, $number) as $item) {
        $singleFeed = array(
            'author' => $item->get_author(),
            'categories' => $item->get_categories(),
            'copyright' => $item->get_copyright(),
            'content' => $item->get_content(),
            'date' => $item->get_date("d.m.Y H:i"),
            'description' => $item->get_description(),
            'id' => $item->get_id(),
            'latitude' => $item->get_latitude(),
            'longitude' => $item->get_longitude(),
            'permalink' => $item->get_permalink(),
            'title' => $item->get_title()
        );
        array_push($allFeeds, $singleFeed);
    }
    $feed = null;
    return json_encode($allFeeds);
}
As you can see, I build an associative array and return it as JSON, which makes it really easy to use with jQuery and Ajax (in my case) on the client side.
The 'id' is a unique identifier for every post in my blog, so it is the key to identifying the same post in another function/on another page. You just have to iterate over the posts and compare this id, as shown in the sketch below. As far as I can see, there is no get_item($ID) function. There is a get_item($key) function, but it just takes a specific post out of the list of all posts by array position (which is nearly the same approach I suggest).
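Building on that, the lookup on the detail page could be a small function that re-reads the (cached) feed and matches the ID passed via GET; a sketch, with the feed URL and parameter name as assumptions:
public function getFeedPostById($id) {
    $feed = new SimplePie();
    $feed->set_feed_url('http://yourname.blogspot.com'); // same feed as above
    $feed->enable_cache(true); // served from the same cache, so this stays cheap
    $feed->set_cache_duration(1800);
    $feed->init();

    // There is no get_item($id), so scan the items and compare
    // each unique identifier against the requested one.
    foreach ($feed->get_items() as $item) {
        if ($item->get_id() === $id) {
            return $item; // render $item->get_content() etc. on this page
        }
    }
    return null; // the post is no longer in the feed
}

// Usage on the detail page, e.g. post.php?id=...:
// $item = $this->getFeedPostById($_GET['id']);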

How can I create an RSS feed with a limit statement in WordPress?

I am basically creating an iPhone app that gets its data from WordPress. WordPress will serve audio and video links via an RSS feed to the iPhone app. I have the feed and audio player working great, but can't seem to find anything on how to create a custom feed where I can specify pagination like start=0&items=10. A plugin would be great, but I can code something up in PHP if anyone has any ideas.
I'm going to answer this question by changing the standard RSS feed of a WordPress installation to respond to limits passed by query parameters. As you say you've already got a working feed, this should hopefully give you everything else you need.
By default, the standard feeds in WordPress are limited by the setting "Syndication feeds show the most recent X items" on the Settings→Reading page, and are unpaginated, as that wouldn't generally make sense for an RSS feed. This is controlled by WordPress's WP_Query::get_posts() method, in query.php, if you're interested in taking a look at how things work internally.
However, although the feed query's limit is set to LIMIT 0, X (where X is the above setting, 10 by default), you can override the limit by filtering the query in the right place.
For example, the filter post_limits will filter the LIMIT clause of the query between the point it's set up by the default code for feeds and the time it's run. So, the following code in a plugin -- or even in your theme's functions.php -- will completely unlimit the items returned in your RSS feeds:
function custom_rss_limits($limits) {
    if (is_feed()) {
        // If this is a feed, drop the LIMIT clause completely.
        return "";
    } else {
        // It's not a feed; leave the normal LIMIT in place.
        return $limits;
    }
}
add_filter('post_limits', 'custom_rss_limits');
(At this point I should mention the obvious security implications -- if you've got 20,000 posts on your blog, you'll cause yourself a lot of server load and bandwidth if lots of people start grabbing your feed and you send out all 20,000 items to everyone. Therefore, bear in mind that whatever you end up doing, you may still want to enforce some hard limits, in case someone figures out your feed endpoint can be asked for everything, say by analysing traffic from your iPhone app; see the capping sketch at the end of this answer.)
Now all we've got to do is to respond to query parameters. First of all, we register your two query parameters with WordPress:
function rss_limit_queryvars( $qv ) {
$qv[] = 'start';
$qv[] = 'items';
return $qv;
}
add_filter('query_vars', 'rss_limit_queryvars' );
That allows us to pass in the start and items variables you're suggesting for your URL parameters.
All we have to do then is to adjust our original LIMIT changing function to respond to them:
function custom_rss_limits($limits) {
    if (is_feed()) {
        global $wp_query;
        if (isset($wp_query->query_vars['start']) &&
            isset($wp_query->query_vars['items'])) {
            // We're a feed, and we got pagination parameters. Override our
            // standard limit.
            // First convert to ints in case anyone's put something hinky
            // in the query string.
            $start = intval($wp_query->query_vars['start']);
            $items = intval($wp_query->query_vars['items']);
            $limits = "LIMIT $start, $items";
        } else {
            // We weren't passed pagination parameters, so just
            // leave the default limits alone.
        }
    }
    return $limits;
}
add_filter('post_limits', 'custom_rss_limits');
And there you go. Throw those last two blocks of code at WordPress, and you can now use a URL like this on any of your existing feeds:
http://example.com/feed/?start=30&items=25
For this example, you'll get the normal RSS feed, but with 25 items starting from item number 30.
...and if you don't pass the query parameters, everything will work like normal.
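Finally, following the earlier note about hard limits: clamping the requested page size inside the same filter keeps a rogue client from requesting the whole archive. A minimal sketch (the ceiling of 50 is an arbitrary choice):
function custom_rss_limits($limits) {
    if (is_feed()) {
        global $wp_query;
        if (isset($wp_query->query_vars['start']) &&
            isset($wp_query->query_vars['items'])) {
            $start = max(0, intval($wp_query->query_vars['start']));
            // Enforce a hard ceiling regardless of what was asked for.
            $items = min(50, max(1, intval($wp_query->query_vars['items'])));
            $limits = "LIMIT $start, $items";
        }
    }
    return $limits;
}
add_filter('post_limits', 'custom_rss_limits');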
