I am looking for a way to create functionality similar to what happens when you post a link to an existing website on Facebook. If that statement is rather ambiguous, I will try to elaborate.
When you paste your link and submit your post, Facebook displays, together with your link, a small preview of the page you are posting (some text and maybe a small image).
What are the ways to achieve this?
I read a similar post, but the thing is that I do not really need an image; text will be sufficient.
I am working in PHP, but the language is not important, because I am looking for a high-level idea.
Previously I was thinking about fetching the link's content with cURL and parsing it, but the thing is that in a lot of situations the text Facebook shows is not actually visible on the page.
Are there other ways?
From what I can tell, Facebook pulls from the meta name="description" tag's content attribute on the linked page.
If no meta description tag is available, it seems to pull from the beginning of the first paragraph <p> tag it can find on the page.
Images are pulled from available <img> tags on the page, with a carousel selection available to pick from when posting.
Finally, the link subtext is also user-editable (start a status update, include a link, and then click in the link subtext area that appears).
Personally, I would go this route: cURL the page, parse it for a meta description tag, and if there is none, grab some likely data using a basic algorithm or just the first paragraph tag; then allow the user to edit whatever was presented (it's friendlier to the user and also sidesteps issues with pages returning different content depending on user agent). Do the user-facing control as AJAX so that you don't have issues with however long it takes your site to access the link you want to preview.
I'd recommend using a DOM library (you could even use DOMDocument if you're comfortable with it and know how to handle possibly malformed HTML pages) instead of regex to parse the page for the <meta>, <p>, and potentially also <img> tags. Building a regex which will properly handle the myriad cases you will encounter "in the wild", as opposed to a known set of sites, can get very rough. QueryPath usually comes recommended, and there are stackoverflow threads covering many of the available options.
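Here is a minimal sketch of that route, assuming PHP with cURL and DOMDocument available. The function name, the timeout and the 300-character fallback cut-off are my own arbitrary choices, not anything Facebook does:

function fetchPreviewText($url)
{
    // fetch the raw HTML; some sites vary output by user agent, hence the UA string
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (preview-bot)');
    $html = curl_exec($ch);
    curl_close($ch);
    if ($html === false) {
        return null;
    }

    $doc = new DOMDocument();
    @$doc->loadHTML($html); // suppress warnings from malformed "in the wild" HTML

    // first choice: <meta name="description" content="...">
    foreach ($doc->getElementsByTagName('meta') as $meta) {
        if (strtolower($meta->getAttribute('name')) === 'description') {
            return trim($meta->getAttribute('content'));
        }
    }

    // fallback: the beginning of the first <p> on the page
    $paragraphs = $doc->getElementsByTagName('p');
    if ($paragraphs->length > 0) {
        return substr(trim($paragraphs->item(0)->textContent), 0, 300);
    }
    return null;
}

The AJAX endpoint would then simply echo the result of fetchPreviewText($_GET['url']) into the user-editable preview field.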
Most modern sites, especially larger ones, are good about populating the meta description tag, especially for dynamically generated pages.
You can scrape the page for <img> tags as well, but you'll then want to host the images locally: either host all of the images and delete all except the one chosen, or host thumbnails (assuming you have an image processing library installed and enabled). Which you choose depends on whether bandwidth and storage matter more to you than the one-time cost of running imagecopyresampled, imagecopyresized, Gmagick::thumbnailimage, etc. (pick whatever you have at hand / your favourite). You don't want to hotlink to the images on the original page, both because of the etiquette of consuming someone else's bandwidth and especially because of the likelihood of ending up with broken images when linking to any site with hotlink prevention (referrer checks and the like) or when the images expire. Personally I would probably go for storing thumbnails.
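For the thumbnail option, a rough GD-based sketch might look like this; the 200px target width, the JPEG quality of 85 and the function name are placeholder choices:

function makeThumbnail($srcPath, $destPath, $targetWidth = 200)
{
    // work out a proportional height from the source dimensions
    list($width, $height) = getimagesize($srcPath);
    $targetHeight = (int) round($height * $targetWidth / $width);

    $src  = imagecreatefromstring(file_get_contents($srcPath));
    $dest = imagecreatetruecolor($targetWidth, $targetHeight);
    imagecopyresampled($dest, $src, 0, 0, 0, 0,
                       $targetWidth, $targetHeight, $width, $height);

    $ok = imagejpeg($dest, $destPath, 85); // 85 = quality
    imagedestroy($src);
    imagedestroy($dest);
    return $ok;
}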
You can wrap the entire link entity up as an object for handling expiration/etc if you want to eventually delete the image/thumbnail files on your own server. I'll leave particular implementation up to you since you asked for a high level idea.
but the thing is that in a lot of situations the text Facebook shows is not actually visible on the page.
Have you looked at the page's meta tags? I've tested with a few pages so far and this is generally where content not otherwise visible on the rendered linked pages is coming from, and seems to be the first choice for Facebook's algorithm.
Full disclosure upfront, I'm a developer at ThumbnailApp.com.
It's a JSON API service with an optional JavaScript SDK which I think does exactly what you're after: it will parse a string to detect any URLs and return the title, description and thumbnail of the asset. If the page has OpenGraph tags, it will use those for the image thumbnail. It's currently in private beta, but we're adding more accounts each week.
If you feel that you really need a do-it-yourself solution:
Check out the Python-based webkit2png and the headless browser PhantomJS. They can render web pages to an image (the default size is 800x600); then you'll have to write some code to resize and crop the image, as taswyn mentioned. Ideally you would then upload the resized image to Amazon S3 and have it served from a CDN such as CloudFront.
To get the title and description, first fetch the URL content (with cURL or whatever); you will need to check the Content-Type header to make sure it's a web page. If it is, you can then use an HTML parser such as the SimpleHTMLDOM PHP library to grab the title and description metadata. If you want it exactly like Facebook, you will also need to check for any OpenGraph tags, specifically the og:image tag.
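A sketch of those two steps (the Content-Type check, then OpenGraph with a plain-meta fallback), using cURL and DOMDocument rather than SimpleHTMLDOM; the function name is made up:

function fetchMeta($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $body = curl_exec($ch);
    $type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE); // e.g. "text/html; charset=utf-8"
    curl_close($ch);

    if ($body === false || stripos((string) $type, 'text/html') === false) {
        return null; // not a web page (could be a PDF, an image, ...)
    }

    $doc = new DOMDocument();
    @$doc->loadHTML($body);

    $meta = array('title' => null, 'description' => null, 'image' => null);
    foreach ($doc->getElementsByTagName('meta') as $tag) {
        // OpenGraph tags use the property attribute, plain meta tags use name
        $key   = strtolower($tag->getAttribute('property') ?: $tag->getAttribute('name'));
        $value = $tag->getAttribute('content');
        if ($key === 'og:title')       $meta['title'] = $value;
        if ($key === 'og:image')       $meta['image'] = $value;
        if ($key === 'og:description') $meta['description'] = $value;
        if ($key === 'description' && $meta['description'] === null) $meta['description'] = $value;
    }
    // fall back to the <title> element if no og:title was found
    $titles = $doc->getElementsByTagName('title');
    if ($meta['title'] === null && $titles->length > 0) {
        $meta['title'] = trim($titles->item(0)->textContent);
    }
    return $meta;
}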
Also don't forget about caching. The first render and description parsing can take a long time. Even if your site is fast, the webpage you're rendering could be slow and the best approach is to render / parse it once, then just save and return the resized image and meta data for subsequent requests. Depending on what your requirements are you may need to refresh the cached data every hour or you could get away with refreshing it once a day.
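A naive file-based cache around that parsing step might look like this; the one-day lifetime and the temp-dir location are arbitrary, and you would pick whatever fits your requirements:

function cachedMeta($url, $ttl = 86400)
{
    $file = sys_get_temp_dir() . '/preview_' . md5($url) . '.json';
    if (is_file($file) && time() - filemtime($file) < $ttl) {
        // fresh enough: serve the saved copy, no render / parse needed
        return json_decode(file_get_contents($file), true);
    }
    $meta = fetchMeta($url); // the slow fetch-and-parse sketched above
    if ($meta !== null) {
        file_put_contents($file, json_encode($meta));
    }
    return $meta;
}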
To do it yourself takes quite a bit of work and lots of server configuration. I feel using a 3rd party service is a better way to go, but obviously I have a biased opinion :)
I am looking for a proper way to implement lazy loading of images without harming printability and accessibility, and without introducing layout shift (content jump), preferably using native loading=lazy and a fallback for older browsers. Answers to the question How lazy loading images using JavaScript works?
included various solutions, none of which completely satisfies all of these requirements.
An elegant solution should be based on valid and complete HTML markup, i.e. using the <img> element's src, srcset, sizes, width, height, and loading attributes instead of putting the data into data- attributes, as the popular JavaScript libraries lazysizes and vanilla-lazyload do. There should be no need to use <noscript> elements either.
Due to a bug in Chrome, the first browser to support native lazy loading, images that have not yet been loaded will be missing from the printed page.
Both JavaScript libraries mentioned above require either invalid markup without any src attribute at all, or an empty or low-quality image placeholder (LQIP), while the src data is put into data-src instead and the srcset data into data-srcset, all of which only works with JavaScript. Is this considered acceptable or even best practice in 2020, and does it harm the site's accessibility, cross-device compatibility, or search engine optimization?
Update:
I tried a workaround for the printing bug using only HTML and CSS (@media print background images) in this codepen. Even if this worked as intended, it would require a CSS directive for each and every image, which is neither elegant nor generic. Unfortunately, there is no way to use media queries inside the <picture> element either.
There is another workaround by Houssein Djirdeh at lazy-load-with-print-ctl1l4wu1.now.sh using JavaScript to change loading=lazy to loading=eager when a "print" button is clicked. The same function could also be used in an onbeforeprint handler.
I made a codepen using lazysizes.
I made another codepen using vanilla-lazyload.
I thought about forking a JavaScript solution to make it work using src and srcset, but this has probably been tried before; the trade-off would be that once the lazy-loading script starts to act on the image elements, the browser might already have started downloading the source files.
Just show me your hideous code, I don't want to read!
If you don't want to read my ramblings the final section "Demo" contains a fiddle you can investigate (commented reasonably well in the code) with instructions.
Or there is a link to the demo on a domain I control here that is easier to test against if you want to use that.
There is also a version that nearly works in IE here, for some reason the "preparing for print" screen doesn't disappear before printing but all other functionality works (surprisingly)!
Things to try:
Try it at different browser sizes to see the dynamic image requesting.
Try it on a slower connection and check the network tab to see the lazy loading in action and the dynamic change in how lazy loading works depending on connection speed.
Try pressing CTRL + P when the network connection is slow (without scrolling the page) to see how we load in images not yet in the DOM before printing.
Try loading the page with a slow network connection and then using FILE > PRINT to see how we handle images that have not yet loaded in that scenario.
Version 0.1, proof of concept
So there is still a long way to go, but I thought I would share my solution so far.
It is complex (and flawed) but it is about 90% of what you asked for and potentially a better solution than current image lazy loading.
Also, I am awful at writing clean JS when prototyping an idea, so I can only apologise to any of you brave enough to try and understand my code at this stage!
Only tested in Chrome, so as you can imagine it might not work in other browsers, especially as grabbing the content of a <noscript> tag is notoriously inconsistent. However, I eventually hope this will be a production-ready solution.
Finally, it was too much work to build an API at this stage, so for the image resizing I utilised https://placehold.it, which means there are a few lines of redundant code to be removed there.
Key features / Benefits
No wasted image bytes
This solution calculates the actual size of the image to be requested. So instead of adding breakpoints in something like a <picture> element we actually say we want an image that is 427px wide (for example).
This obviously requires a server-side image resizing solution (which is beyond the scope of a stack overflow answer) but the benefits are massive.
First of all if you change all of your breakpoints on the site it doesn't matter, so no updating picture elements everywhere.
Secondly the difference between a 320px and 400px wide image in terms of kb is over 40% so picking a "similarly sized" image is not ideal (which is basically what the <picture> element does).
Thirdly if people (like me) have massive 4K monitors and a decent connection speed then you can actually serve them a 4K image (although connection speed detection is an improvement I need to make in version 0.2).
Fourthly, what if an image is 50% of the width of its parent container at one screen size and 25% at another, but the container itself is 60% of the screen width at one screen size and 80% at another?
Trying to get this right in a <picture> element can be frustrating at best. It is even worse if you then decide to change the layout as you have to recalculate all of the width percentages etc.
Finally this saves time when crafting pages / would work well with a CMS as you don't need to teach someone how to set breakpoints on an image (as I have yet to see a CMS handle this better than just setting the breakpoints as if every image is full width on the screen).
Minimal Markup (and semantically correct markup)
Although you wanted to avoid <noscript> and data attributes, I needed to use both.
However, the markup you write / generate is literally an <img> element, written as you normally would, wrapped in a <noscript> tag.
Once an image has fully loaded all clutter is removed so your DOM is left with just an <img> element.
If you ever want to replace the solution (if browser technology improves, etc.), then a simple replace on the <noscript> elements would get you back to standard HTML markup, ready for improving.
WebP
Of course this solution requests WebP images if supported (it's all about performance!). On the server side you would need to process these accordingly (for example, if an image is a PNG with transparency you send that back even if a WebP image is requested).
Printing
Oh this was a fun one!
There is nothing we can do if we send a document to print and an image has not loaded yet. I tried all sorts of hacks (such as setting background images) but it just isn't possible (or I am not clever enough to work it out... more likely!).
So what I have done is think of real world scenarios and cover them as gracefully as possible.
If the user is on a fast connection we lazy load the images, but we don't wait for scroll to do this. This could mean a bit more load on our servers but I am acting like printing is highly important (second only to speed).
If the user is on a slow connection then we use traditional lazy loading.
If they press CTRL + P we intercept the print command and display a message while the images are loading. This concept is taken from the Houssein Djirdeh example the OP gave, but using our lazy loading mechanism.
If a user prints using FILE > PRINT then we instead display a placeholder for images that have not yet loaded explaining that they need to scroll the page to display the image. (the placeholders are approximately the same size as the image will be).
This is the best compromise I could think of for now.
No layout shifts (assuming content to be lazy loaded is off-screen on page load).
Not a 100% perfect solution for this but as "above the fold" content shouldn't be lazy loaded and 95% of page visits start at the top of the page it is a reasonable compromise.
We use a blank SVG (created at the correct proportions "on the fly") using a data URI as a placeholder for the image and then swap the src when we need to load an image. This avoids network requests and ensures that when the image loads there is no Layout Shift.
This also means the page is semantically correct at all times, with no empty src attributes, etc.
The layout shifts occur if a user has already scrolled the page and then reloads. This is because the <img> elements are created via JavaScript (unless JavaScript is disabled in which case the image displays from the <noscript> version of the image). So they don't exist in the DOM as it is parsed.
This is avoidable but requires compromises elsewhere so I have taken this as an acceptable hit for now.
Works without JavaScript and clean markup
The original markup is simply an image inside a <noscript> tag. No custom markup or data-attributes etc.
The markup I have gone with is:
<noscript class="lazy">
<img src="https://placehold.it/1500x500" alt="an image" width="1500px" height="500px"/>
</noscript>
It doesn't get much more standard and clean than that. It doesn't even need the class="lazy" if you don't use <noscript> tags elsewhere; that is purely to avoid collisions.
You could even omit the width and height attributes if you didn't care about Layout Shift but as Cumulative Layout Shift (CLS) is a Core Web Vital I wouldn't recommend it.
Accessibility
The images are just standard images and alt attributes are carried over.
I even added an additional check that if alt attributes are empty / missing a big red border is added to the image via a CSS class.
Issues / compromises
Layout Shift if page already scrolled
As mentioned previously if a page is already scrolled then there will be massive layout shifts similar to if a standard image was added to a page without width and height attributes.
Accessibility
Although the image solution itself is accessible the screen that appears when pressing CTRL + P is not. This is pure laziness on my part and easy to resolve once a more final solution exists.
The lack of Internet Explorer support (see below) however is a big accessibility issue.
IE
UPDATE
There is a version that nearly works in IE11 here. I am investigating if I can get this to work all the way back to IE9.
Also tested in Firefox, Edge and Safari (mobile); it seems to work there.
ORIGINAL
Although this isn't tested in Firefox, Safari etc. it is easy enough to get to work there if there are issues.
However accessing the content of <noscript> tags is notoriously difficult (and impossible in some versions) in IE and other older browsers and as such this solution will probably never work in IE.
This is important when it comes to accessibility as a lot of screen reader users rely on IE as it works well with JAWS.
The solution I have in mind is to use User Agent sniffing on the server and serve different markup and JavaScript, but that is complex and very niche so I am not going to do that within this answer.
Checking Latency
I am using a rather crude way of checking latency (to try and guess whether someone is on a 3G / 4G connection): downloading a tiny image twice and measuring the load time.
Two unneeded network requests are not ideal when trying to go for maximum performance (not because of the 100 bytes I download, but because of the delay on high-latency connections before initialising things).
This needs a complete rethink but it will do for now while I work on other bits.
Demo
Couldn't use an inline fiddle due to character count limitation of 30,000 characters!
So here is the current JS Fiddle - https://jsfiddle.net/9d5qs6ba/.
Alternatively as mentioned previously the demo can be viewed and tested more easily on a domain I control at https://inhu.co/so/image-concept.php.
I know it isn't the "done thing" linking to your own domains but it is difficult to test printing on a jsfiddle etc.
The proper solution for printable lazy loading in 2022 is using the native loading attribute.
<img loading=lazy>
The recommendation to use a custom print button has been obsoleted, as Chromium issue 875403 got fixed.
Prior recommendations included adding a custom print button (which did not fix the problem when using the native browser print functionality) or using JavaScript to load images onbeforeprint; the latter was not considered a good solution, as loading=lazy, being a "DOM-only" solution, must not rely on JavaScript.
Beware that, even after the bug fix, some of your users might still visit your site with a buggy browser version.
@Ingo Steinke Before one delves into answers to the concerns you have raised, one has to go back and think about why lazy loading came about and what problem it solved at its inception, as a framework of thought. Keyword: framework of thought. It is not a solution, and I would go out on a limb to say it has never been a solution, only a framework of thought.
Why we wanted it:
Minimise unnecessary file fetching from the server; this is bandwidth-critical if one is running a large user base. It was the internet version of just-in-time manufacturing in industrial production.
In legacy browser versions, before async and defer were popularised in JS/HTML, interactivity with the browser window remained hampered until all content was loaded.
Broadband as we know it has only been around, in any real sense of coverage and penetration, for the last six or seven years. We wanted lazy loading because we didn't want to run into no. 2 on low bandwidth. To be honest, there was and still is a growing concern and ideology around minifying and zipping JS and CSS files, all because each round trip to the server and back should be minimised so that the next item in the list can be fetched. Keep in mind that browsers tend to limit simultaneous download connections to around six at a time per window or active window. There are reasons why Google popularised the three-second rule; if the above were left to run unchecked, the three-second rule would fall on its head as if it had no legs.
So along came thought frameworks.
Image as CSS background: this came about because it did not mess up the visual layout of the page. Everything remained in its place and then suddenly became colourful. It was a time when web pages seemed to have an elastic fit, like a bag which, once filled with air, suddenly popped and transformed into a bouncy castle; this was increasingly seen as a bad idea by front-end developers. So fixing the height and width of the container and then loading images as backgrounds helped, and HTML5 background alignment properties upgraded themselves accordingly. There was, and still is, a variant in use with multiple backgrounds, one being a loading spinner or a blurred low-resolution version of the image, on top of which the actual intended image is fetched. Since the lower-level background is fetched once and populated everywhere in a single download, it creates a more pleasing visual, and the user knows what to expect. It worked in printing as well, even if the intended image did not download.
Then came the JS version of it: hijacking the DOM through data-src, invalid image tags with the src removed, and whatnot, only triggering the change when the content is scrolled to. Obviously there would be lag, but that was countered either through the CSS approach implemented in JS or by calculating scroll points and triggering the event a couple of pixels ahead. They all still work on the same premise.
There is one question that begs to be asked, and you have touched on it in your pretext: none of this controls or alters native browser functionality. The browser might well go and fetch the item before your script has had anything to do with anything.
This is the main issue here. The BOM does not care, and does not even want to care, about what your script is asking it to do; all it knows is that if there is a src property, it fetches the content. None of the solutions have changed that. If we could change that behaviour, the thought framework would become a solution.
I still believe browsers should not change that just for the sake of it, and so the idea never gained traction in debates. What browsers have done instead is pre-fetching, known as the speculative or look-ahead pre-parser; it is the single biggest improvement in browsers and deserves credit. Just as we type a URL in the address bar, on every change of the string the browser is pre-fetching content even though we have not typed the whole URL. I once developed a programme where I grabbed everything that arrived at the server from these look-ahead pre-parsers. It takes less than a second to get a response most of the time, and browsers begin to process it all, including images and JS. This countered the jerky, delayed, elastic-prone display discussed in nos. 1 and 2. It did not reduce the server hits, however, which is the reason we do lazy loading anyway. But some JS workarounds gained traction: since there was no src property, the pre-parser did not fetch the image, and it was only fetched when the user actually went to the page and the events were triggered. Some browsers toyed with the idea of lazy loading themselves, but let go of it as it did not promise universal consistency as a standard.
The universal standard is simple: if there is a src property, the browser will fetch the item, no ifs and buts. Imagine if that were not the case; hell would break loose on the poor front-end developer.
So deep down, what you are raising in this debate is a question about BOM functionality, as I have discussed above, and there is no workaround for it. In your case this applies to both the screen and the print version of the display. How do you make sure images are loaded when the print command is sent? The answer is simple: for the BOM, print comes after the fact, the fact being the screen display, and "before the fact" is everything that happens at the BOM/DOM propagation level. Again, you cannot change that.
So you have to make a trade-off, and the trade-off comes in the form of another thought framework: rather than assuming everything is print-ready, make it print-ready on user command. A div pops up showing the print version of the document, and printing happens from there. The UI could be anything; it would only take a second or so, as the majority of the content would already be loaded anyway and the rest would take a short amount of time. CSS rules for print can be mighty handy in this respect. You can see this in action in many places on the internet.
Conclusion: as we stand today with BOM functionality, bundling the screen display and the print display together with lazy loading is not what lazy loading was intended for, and thus it does not provide any better solution than mere hacks. You have to create your UI based on your context, separating the two, to make it work properly.
The question is: how do I get the source code produced by Ajax calls? It is not crawled; for example, how do I crawl the pictures on a link like this? http://www.tiendeo.nl/Catalogi/amsterdam/16558&subori=web_sliders&buscar=Boni&sw=1366
If you use Inspect Element, it shows you the right code in the middle where the pictures are. But how do I crawl this? If you click the next page, other images appear in the source. How do I get the source for all the images?
If I understand your question correctly (How do I crawl information loaded into the page by Ajax calls?), the answer is that you either need some sort of JavaScript-aware crawler, or you need to inspect the JavaScript to figure out which resources are being polled to load the content you're interested in. From PHP you should be able to send cURL GET requests to these URLs and receive the same responses the site's JavaScript is using to render the entries.
The latter option has some rewards--namely that you will most likely be able to get simple, easy-to-use JSON responses to your requests.
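As a sketch of that latter option in PHP: once you've found (via the browser's network tab) the URL the page's JavaScript polls, you can request it directly. The endpoint and the extra headers here are illustrative guesses; some sites only answer requests that look like they came from the page itself:

$endpoint = 'http://www.example.com/api/catalog?page=2'; // made-up endpoint

$ch = curl_init($endpoint);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('X-Requested-With: XMLHttpRequest'));
curl_setopt($ch, CURLOPT_REFERER, 'http://www.example.com/');
$json = curl_exec($ch);
curl_close($ch);

$data = json_decode($json, true); // the same payload the site's JS renders
print_r($data);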
As with most web scraping efforts, it tends to be the case that some content providers won't appreciate your interest in their data (especially if you're collecting it in ways that put undue strain on their systems or resources). Keep in mind that they may take steps (technical or legal) to stop you if they notice/mind.
Addendum:
If you're hoping to crawl a variety of similar sites without needing to look through the source to find the resources they're using (let's say, for the sake of argument, that you're just trying to naively scrape all images over a certain size from several sites selling the same sort of items), you'd need the former option: some sort of JavaScript-aware scraper. I don't know if such a thing exists, but it wouldn't surprise me.
I'm currently building a practice responsive website. What I am doing is taking an existing website and rebuilding it using Twitter Bootstrap JS and CSS, meaning it will be fully responsive for mobile.
The issue is that there are some large carousels and images on the site. Ideally I would like to just completely remove certain elements, like a carousel for instance, and instead have the options within the carousel as a standard list menu.
It seems the main option is display:none based on media queries, but I am starting to foresee that I will run into big problems with loading time if the entire desktop site is still going to be loaded on mobile, with elements merely hidden.
Are there ways to completely exclude HTML based on browser size? If anyone has any good links or articles, that would be great. Or even just opinions on whether there is actually a need to exclude HTML or not.
Thank you
First off, it is really good to see that although you're talking about display:none, you actually still want to display the content without the bells and whistles of the image. Well done you.
The next thing I would look at is this: if you don't want to load images for mobile, then why are you adding them for larger screens? If the image isn't providing a function or assisting in explaining the content better, then why not just drop it for the desktop size as well?
If it does in fact help tell a story, then you can include the images using one of the popular responsive-image services like Adaptive Images, hiSRC, or Picturefill, which serve the mobile version of the image first and replace it with a larger image at higher viewport sizes (but remember, there's no bandwidth test).
Finally, if you do want to serve some different content, then take the advice of fire about including more content with Ajax. The South Street toolbox from Filament Group can help you out; pay particular attention to the AjaxInclude pattern (it also has a link to Picturefill).
You could consider storing heavy data JSON-encoded, and then creating elements and loading them on demand, like so:
// create the image element in memory; assigning src starts the download
var heavyImage = new Image();
heavyImage.src = imageList[id];
Then you can append the image element to the desired block. From my experience with mobiles, this is more robust than requesting <img> markup via AJAX, since AJAX can be pretty slow at times.
You may also 'prefetch' images with this method (say, the two or three adjacent to those currently visible), thus improving UX.
You could pull in the heavy elements via AJAX so they wouldn't sit on the page initially, making it load faster. You could decide to do the AJAX call only if the screen size is larger than X.
If you want you can use visibility:hidden, or if you use jQuery you can use
$(element).remove() //to remove completely
$(element).hide() //to hide
$(element).fadeOut(1) //to fadeout
I'm building a web app where users can build custom web pages that pull content from other web pages. I know of a few options for doing this, and I'm not sure which is best, and if there are better solutions out there. Right now, I could:
Use iframes, which will (sort of) accomplish what I want, but will force the client to download and render all the web content, which seems slow. I've heard a lot of people say iframes are passé and should not be used, etc.
Use a library like wkhtmltopdf, which will render the html on the server side and generate a pdf image of it. This would work nicely, but the result is just an image, so text won't be selectable, links won't be clickable, etc. Also, I've heard that you can get in legal trouble for hosting other people's web content on your site without permission.
Use something like phpquery to literally scrape content off of other sites. This option could have the same legal issues as the above option.
Has anyone done anything like this, or does anyone have any thoughts?
The cleanest solution would be to send off an HTTP request server-side, then render the HTML into your page as you require. This will also require changing all the URLs of content and links to be absolute,
eg:
<img src="\images\banner.png">
will work on the remote server, but once inside your page, the image will not exist. The most workable solution would be to limit the functionality to images and links, then do a find / replace to match relative URLs and prepend the source address to them (see the sketch below).
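A rough sketch of that step, assuming PHP with DOMDocument; a DOM pass is sturdier than a pure regex for markup found in the wild, and the function name is made up:

function absolutize($html, $baseUrl)
{
    $doc = new DOMDocument();
    @$doc->loadHTML($html); // suppress warnings from malformed remote HTML

    foreach (array('img' => 'src', 'a' => 'href') as $tag => $attr) {
        foreach ($doc->getElementsByTagName($tag) as $el) {
            $value = $el->getAttribute($attr);
            // skip URLs that are already absolute (http:, https:, //cdn...)
            if ($value !== '' && !preg_match('#^(https?:)?//#i', $value)) {
                $el->setAttribute($attr, rtrim($baseUrl, '/') . '/' . ltrim($value, '/'));
            }
        }
    }
    return $doc->saveHTML();
}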
You will, however, run into legal issues if you are re-serving other people's content from your server, even just the HTML.
Using an iframe would be the quick and dirty solution, and probably has the least legal ramifications, as the browser sends a normal request to the site for the content.
I'd recommend DocRaptor for generating PDF files from HTML. It works in a similar fashion as wkhtmltopdf, but produces fully functional PDF files.
Here's a link to its homepage:
http://docraptor.com/
And a link to its API documentation:
http://docraptor.com/documentation
You see them everywhere. Like the twitter and facebook buttons that show up on blogs and websites that display a number of "tweets" or "likes".
All I need to be able to do is display a number from my MySQL database based on two variables (username and an ID). It would probably be useful to encrypt the variables somehow so that users can't just alter the badge's code and display another user's number.
But more importantly, I just need to know how to use the HTML code like you find in social network badges and have it talk to a PHP script on my server which will calculate the number from the database based on the variables held within the badge.
Any clue where to start?
Edit: I'm not talking about the kind of badges like you find on stackoverflow, I mean the kind other sites let you paste on your blog/site. Like Digg lets you show that your site has been dugg 7000 times, etc.
You may wish to look up the GD library for PHP and related tutorials. Basically, all those badges consist of is a static image used as a template with some dynamic text inserted on top, usually the username and a number (likes, tweets, etc.).
For the HTML code, you would do something similar to:
<img src="http://www.yourserver.com/yourscript.php?username=miki&id=1337" />
This will send a HTTP GET request to your script, causing it to execute. Your script can then communicate with the database, fetch the user's information, use GD to insert that text into a template and then return that to the browser with the correct mime type and content.
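A rough sketch of what yourscript.php might do, assuming GD; the template path, font sizes and coordinates are placeholders, and the database lookup is elided. In a real version you would also validate or sign the parameters, as discussed in the question:

// sanitise the incoming badge parameters (validate/sign these properly!)
$username = preg_replace('/[^a-z0-9_]/i', '', $_GET['username']);
$id       = (int) $_GET['id'];

// ... look up $count for ($username, $id) in MySQL here ...
$count = 7000; // placeholder value

$badge = imagecreatefrompng('badge-template.png'); // your static template image
$black = imagecolorallocate($badge, 0, 0, 0);
imagestring($badge, 4, 10, 5, $username, $black);              // built-in font 4 at x=10, y=5
imagestring($badge, 5, 10, 25, number_format($count), $black); // the number below it

header('Content-Type: image/png'); // the correct mime type mentioned above
imagepng($badge);
imagedestroy($badge);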
You're talking about calling a remote script, essentially.
I assume you mean something like this -
You are viewing your profile on your side. You have a widget that promises to display the total number of points you have, for example.
You offer a "code" button like youtube embed or facebook "like"
The user clicks this, gets a segment of code and is expected to be able to paste it anywhere on the internet where applicable and the code will generate an icon or something with presumably the username and their points.
First, you can do this several ways. The most cost-effective, in my opinion, is to generate the user's button on your server on update. Say your points mean "number of thumbs up your articles received", so it is an integer value: every time you get a thumbs up, you would re-cache the button and write it out to a flat file. If you're good, you would write it to an image and flatten it to a JPG or GIF. If you don't know how to do that, you can write it to HTML and save the file under a user-specific "slug" like md5(username).'.html'. That way, the server doesn't pile on bandwidth with redundant queries and account lookups every time it is called; you only serve the optimized image or HTML file. (There is a rough sketch of this at the end of this answer.)
Second, you can give the user an iframe that has the HTML in it. This is generally how Facebook "like" does it for people who don't use the FBML method. The problem is that many sites see iframes as a potential XSS attack and will strip them out. So, in order to make use of the iframe, you would need control of the domain, which may defeat the purpose of your request if the intention is to share your profile goodies.
Third, you can call a JS file on your server that makes an Ajax call to your database and serves the results. This is also most likely going to be seen as an XSS attack, and you should probably not give it much more thought.
I mentioned the iframe and JS methods in case you were looking to provide an option for other people who run their own sites, the way "like" is used by site owners to show how many times their domain has been "liked" and so on. These people have control of their domains, so the iframe and JS methods are logical.
So -
This answer may not have much in way of raw code for you, but it should help you start.
I would do the image method since it is safer. You would give the user an image tag with their slug in the src attribute. They can paste it anywhere, and there is no way to rewrite the number within the image. Most forums and places where you can post to other people's sites allow images. Just do a Google search on drawing images with PHP, as well as on using the ImageMagick library to merge text and images.
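To tie the above together, here is a rough sketch of the "re-cache on update" idea with the image method; every name here is made up, and the rendering is the same GD approach sketched earlier:

// call this from whatever code awards a thumbs up
function recacheBadge($username, $points)
{
    $badge = imagecreatefrompng('badge-template.png'); // static template
    $black = imagecolorallocate($badge, 0, 0, 0);
    imagestring($badge, 5, 10, 10, $username . ': ' . $points, $black);

    // user-specific slug: page hits now serve a static file, no DB work
    imagepng($badge, 'badges/' . md5($username) . '.png');
    imagedestroy($badge);
}

recacheBadge('miki', 1338);

The embed code you hand out then points at that static badges/ file, so the database is only touched when the score actually changes.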