I'm trying to get the DISQUS comments and comment counts associated with a particular image ID for my PHP site.
1) Get comment count:
To get the comment count, I've followed DISQUS' guide, but it just gives me a link to where the comments are on the comic... and not the total number... They say:
Append #disqus_thread to the href attribute in your links. This will tell Disqus which links to look up and return the comment count. For example:
<a href="http://foo.com/bar.html#disqus_thread">Link</a>
But how would I get that count if my URL string was like this (a query-string URL, shown here with a placeholder domain):
<a href="http://www.example.com/comics/index.php?comicID=123">Link</a>
So my questions are:
Where would I append #disqus_thread?
How can I get the comment count from that one comic URL and display that total on another page?
Why does it just give me a link to the comments and not the comment number for that associated comic?
2) Get specific comments, like most recent or most popular
I haven't really found any documentation on this, with the exception of this post, where they say I'd probably need to write my own script...
Any thoughts?
Thanks!
The comment counting script will basically look up the thread and return the comment counts that match the URL - so it needs to be an absolute URL for that to work.
Assuming you were looking at this document, you'll also notice that there's an optional data-disqus-identifier attribute you can use in conjunction with a disqus_identifier variable in your commenting embed code. This will override the URL lookup and instead pull the comment count for the identifier. You will still need to append the #disqus_thread anchor to your URL, however.
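To make that concrete, here's a minimal sketch for a PHP page (the comic-123 identifier scheme and the URLs are placeholders of mine, and it assumes Disqus's count.js snippet is already loaded on the page):

<?php $comicId = 123; // hypothetical comic ID ?>
<!-- On the page where you want the count displayed: -->
<a href="http://www.example.com/comics/index.php?comicID=<?php echo $comicId; ?>#disqus_thread"
   data-disqus-identifier="comic-<?php echo $comicId; ?>">Comments</a>

<!-- In the embed code on the comic page itself, set the matching identifier: -->
<script type="text/javascript">
    var disqus_identifier = 'comic-<?php echo $comicId; ?>';
</script>

With the two identifiers matching, the count script replaces the link text with that comic's comment count, no matter which page the link sits on.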
For the second question, you would need to use the API to code a widget that displays comments outside of the embed. There are a couple of different approaches you can take:
Load comments directly from the API using either the posts/listPopular or posts/list endpoints, in conjunction with your disqus_identifier (a rough sketch follows below).
Load numerous threads' details using the threads/set endpoint, and use the RSS feed of the latest comments. You can also use this to get the comment counts directly from the API rather than via the comment counting script.
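As an illustration of the first approach, a minimal PHP sketch (the API key, forum shortname, and identifier are placeholders; the thread:ident lookup syntax pairs an identifier with a forum):

<?php
// Sketch: pull the five most recent comments for one thread via the Disqus API.
$apiKey = 'YOUR_PUBLIC_API_KEY';       // from the Disqus applications page
$forum  = 'your-forum-shortname';
$ident  = 'comic-123';                 // your disqus_identifier

$url = 'https://disqus.com/api/3.0/posts/list.json'
     . '?api_key=' . urlencode($apiKey)
     . '&forum=' . urlencode($forum)
     . '&thread:ident=' . urlencode($ident)
     . '&limit=5&order=desc';

$data = json_decode(file_get_contents($url), true);
foreach ($data['response'] as $post) {
    echo $post['author']['name'] . ': ' . $post['raw_message'] . "\n";
}

Swap posts/list.json for posts/listPopular.json to rank by popularity instead of recency.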
Related
We're using Facebook comments on a Wordpress blog and the comment count returned by the following tag does not match the actual number of comments on the page.
<fb:comments-count href="http://example.com/"></fb:comments-count>
You can see an example here where the comment count returned is 168 even though there are only 2 comments on the page.
The Facebook Graph API returns the correct number of comments for this URL, as seen here, but unfortunately using the count returned by the Graph API, as demonstrated in the SO post below, is disallowed by our host WPEngine, since the php.ini setting allow_url_include must be set to off.
<fb:comments-count> not working on my WordPress powered blog
Any ideas on what might be going wrong or another alternative for returning the correct comment count?
The example URL you mentioned is http://www.civilbeat.com/2014/02/21257-gene-park-the-debate-over-race-history-and-racism-in-hawaii/, whereas your code for the comment count points to
<fb:comments-count href='http://www.civilbeat.com/posts/2014/02/21/21257-gene-park-the-debate-over-race-history-and-racism-in-hawaii/'>
That is a different URL from the one of the page; notice the extra /posts/ part right after the domain name, which is not in the page URL you mentioned before. And if you check the second one via the API, you'll see that it indeed has a comment count of 168.
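So the fix should simply be to point the tag at the page's actual URL:

<fb:comments-count href='http://www.civilbeat.com/2014/02/21257-gene-park-the-debate-over-race-history-and-racism-in-hawaii/'></fb:comments-count>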
I'm trying to build a query with the Wikipedia API that will return all internal links from a specific article in ID format.
I have the pageId of an article. For example, for the article "Android (operating system)" the id is 12610483.
On my client side I need to work only with IDs, and later obtain all information only by ID.
My goal is to find all internal links (the IDs of the linked articles) for a given article ID.
Unfortunately, the only way I found is to obtain links represented by article titles:
http://en.wikipedia.org/w/api.php?action=parse&format=json&pageid=12610483&prop=links
Is there any other way to obtain ids of links as well and not only titles?
What you want to do is to use action=query&prop=links to get data from the pagelinks database table, instead of parsing the page text.
This will still give you only page titles (because a link can lead to a non-existent page, which implies no page id).
But you can fix that by using prop=links as a generator:
http://en.wikipedia.org/w/api.php?action=query&format=json&pageids=12610483&generator=links&gpllimit=max
If the article has many links (like the one you suggested), you will need to use paging (see the gplcontinue element).
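A rough paging loop in PHP might look like this (assuming the API's modern continue format; adjust if your endpoint still uses query-continue):

<?php
// Sketch: collect the page IDs of all links from article 12610483,
// following gplcontinue until the API stops returning a continue block.
$base = 'http://en.wikipedia.org/w/api.php?action=query&format=json'
      . '&pageids=12610483&generator=links&gpllimit=max';
$ids = array();
$continue = '';
do {
    $json = json_decode(file_get_contents($base . $continue), true);
    if (isset($json['query']['pages'])) {
        foreach ($json['query']['pages'] as $page) {
            if (!isset($page['missing'])) {   // red links have no page ID
                $ids[] = $page['pageid'];
            }
        }
    }
    $continue = isset($json['continue'])
        ? '&' . http_build_query($json['continue'])
        : null;
} while ($continue !== null);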
I think you need to use PHP Simple HTML DOM Parser.
You can find it here:
http://simplehtmldom.sourceforge.net/
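For example (a quick sketch of its API; the target URL is just illustrative):

<?php
// Sketch: load a page and list the href of every anchor on it.
include 'simple_html_dom.php';
$html = file_get_html('http://en.wikipedia.org/wiki/Android_(operating_system)');
foreach ($html->find('a') as $link) {
    echo $link->href . "\n";
}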
I'm trying to achieve a few things with DISQUS as demonstrated in the picture below (boxed in red) for my comics website...
Get comment count of a particular comic
Get specific comments, like most recent and most popular
Truncate the comment, but allow the user to click read more to either expand it or link them to the actual comic
3) Truncate comments
Parts 1 and 2 are the same questions asked above. If #2 (getting specific comments) is possible at all, then how would I be able to truncate them?
Thanks!
I have tried the Graph API and asked it for many things, but none of them return comments.
I want to get all the comments and put them on a separate page for search engines to scan and index.
The comments are very rich in content, and I want them.
You can retrieve comments (actually posts and their comments) from the Comments plugin from this API URL:
https://graph.facebook.com/comments/?ids={$url_of_your_page}
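For instance, from PHP you might fetch and decode it like this (a quick sketch; the page URL is a placeholder, and it's worth inspecting the exact response shape with print_r first):

<?php
// Sketch: fetch the comments left via the Comments plugin on one page.
$pageUrl = 'http://www.example.com/some-article/';   // placeholder
$api = 'https://graph.facebook.com/comments/?ids=' . urlencode($pageUrl);
$result = json_decode(file_get_contents($api), true);
print_r($result[$pageUrl]);   // the response is keyed by the URL you passed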
Please see the documentation here: http://developers.facebook.com/docs/reference/fql/
The table you need to query is this one:
http://developers.facebook.com/docs/reference/fql/comment/
(it is also one of the first examples)
The key is that you can specify an identifier for every comment plugin that you embed. Using this identifier you can then select comments using the Graph API.
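For illustration, an FQL query against that table might look like this in PHP (a hypothetical sketch: 'comic-123' stands in for the xid you set on your plugin instance, and Facebook may require an access token):

<?php
// Sketch: select comments for one plugin instance by its xid via FQL.
$fql = "SELECT fromid, time, text FROM comment WHERE xid = 'comic-123'";
$url = 'https://graph.facebook.com/fql?q=' . urlencode($fql);
$result = json_decode(file_get_contents($url), true);
foreach ($result['data'] as $comment) {
    echo date('Y-m-d H:i', $comment['time']) . ': ' . $comment['text'] . "\n";
}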
Regarding search engines, you should not give the impression that you are serving different content to the spiders than to the users, so it would probably be a good idea to load the comments over the API by default (please cache them) and then replace them with the JavaScript box if JavaScript is available, so users can write new comments.
Even better (in my opinion) would be to always display the comments in HTML on your website and only load the Facebook comments plugin when the user wants to write a new comment, but that probably requires one additional step for the user.
You can also read about the AJAX crawling scheme.
If you want to use the Graph API, what you do is grab the comment ID from the JSON (the very first number you see; it appears as {user_id}_{status_message_ID}, for example 1234_5678910) and then request https://graph.facebook.com/{the number you got}?access_token={access token}
To reduce strain, what you could do is run the system you use to put all the status messages and comments onto your website (for instance a 'for each' or 'while' statement, etc.), then add an 'if' statement that says: if count (under comments in the JSON) is more than 3, retrieve the JSON for that post using its ID and spit out your data.
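A rough sketch of that idea (the post ID and token are placeholders):

<?php
// Sketch: when a post reports more than 3 comments, fetch that post by its
// {user_id}_{status_message_ID} and print the full comment list.
$token  = 'YOUR_ACCESS_TOKEN';   // placeholder
$postId = '1234_5678910';        // placeholder: {user_id}_{status_message_ID}
$post = json_decode(file_get_contents(
    'https://graph.facebook.com/' . $postId . '?access_token=' . $token
), true);
if (isset($post['comments']['count']) && $post['comments']['count'] > 3) {
    foreach ($post['comments']['data'] as $c) {
        echo $c['from']['name'] . ': ' . $c['message'] . "\n";
    }
}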
I hope this was of some help. Please say if you want anything explained further.
Regards,
Jon
You can use this URL to get the comments of a URL and paginate them (appended to the Graph API endpoint, e.g. https://graph.facebook.com/v2.10/):
?fields=og_object{comments.order(reverse_chronological).limit(10).after(NgZDZD)}&id={YOUR_URL}
It even works with the latest (v2.10) Graph API version.
I want to write an app that parses particular threads on a phpBB forum (one that doesn't give you the ability to adjust the post count per page). So if a thread has 200 pages with 10 posts each, and has an address like this:
http://www.forum.com/viewtopic.php?t=10&postdays=0&postorder=asc&start=0
where the start parameter changes as you navigate to the next pages of the same thread, how do you get the full thread in one go?
I tried:
http://www.forum.com/viewtopic.php?t=10&postdays=0&postorder=asc&start=0&end=2000
but it didn't work.
Surely there must be a way to do this, I imagine.
If you're parsing, just parse each page, then add up the results at the end. If the forum doesn't have an open API, or any way to display all of the posts on one page, this is what you are going to have to do. Perhaps you could write a recursive function that checks for a 'next page' link or something similar, follows it, then returns all of the data from the pages compiled.
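For example, a simple loop over the start offsets (a rough sketch: the URL is the one from the question, and 'postbody' is just a guess at a string that appears once per post in stock phpBB markup):

<?php
// Sketch: fetch every page of the thread by stepping "start" 10 posts
// at a time until a page comes back without any posts on it.
$base  = 'http://www.forum.com/viewtopic.php?t=10&postdays=0&postorder=asc&start=';
$pages = array();
for ($start = 0; ; $start += 10) {            // 10 posts per page
    $html = file_get_contents($base . $start);
    if ($html === false || strpos($html, 'postbody') === false) {
        break;                                 // no more posts; we're done
    }
    $pages[] = $html;                          // parse each page, then merge
}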
EDIT: Looking at the example URL you gave, have you tried changing the t variable? You said it was 10 posts per page, and t was set to 10, so maybe that's what controls posts per page.
http://www.forum.com/viewtopic.php?t=2000&postdays=0&postorder=asc&start=0
Some super handsome fellow wrote a MOD for this if it is your forum:
http://www.phpbb.com/community/viewtopic.php?f=69&t=1101295