I embed images from Facebook links. While it works perfectly in Firefox, IE refuses to display them. The div and the image tags are both present in the generated HTML, but the image is absent. I've looked up the most common IE bugs with images, but found nothing relevant.
Here's the link: http://spotbc.com/example.php
Check whether you have any system/network/proxy settings specific to IE; we can see your image fine here. Try emptying your browser cache and closing the browser (or try a different machine to see if that helps).
Though you can leave out the quotes around the src attribute of an img tag in an HTML5 document, doing so is not consistent with the rest of your code. Review your generated HTML structure for best results. </br> is also not valid HTML.
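For example, if the tag is generated from PHP, a quoted attribute and a valid line break might look like this (the variable name is just an illustration):

    <?php
    // quote the src attribute and escape the URL, in case it contains special characters
    echo '<img src="' . htmlspecialchars($facebookImageUrl) . '" alt="">';
    echo '<br>'; // valid HTML; </br> is not a real tag
    ?>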
I started using SVG recently and I love it. SVGs are so clean, cool, and easy to design.
Well, I stumbled across an issue when I tried to embed one of my SVGs into HTML (it works with every one of my SVGs except this one). It works totally fine if I call it as an image (<img src="hill.svg">), but when I try to use it
either with PHP (include 'hill.svg';) or by pasting it straight into the index file, it gets messed up.
Here's an image: the messed-up SVG is at the top left, and the working one fills the screen. It's the same file; the working one is just embedded with "background-image".
It kind of looks like the background has turned into one of those "missing image" icons.
Any help would be highly appreciated.
Svg: https://pastebin.com/RsSAGv8M
There are some raster images linked in your SVG:
947B2F3D9DDD76B8.png (twice),
947B2F3D9DDD76B9.png,
947B2F3D9DDD76BF.png
They are probably not available on your web server. If the SVG is linked as an <img>, the browser never even tries to retrieve them, since for security reasons an SVG used as an image must be self-contained. But when the SVG is embedded in an HTML page, the requests are made and fail, and some browsers show a "missing image" icon.
Either delete the <image> tags from your SVG file (it seems you wouldn't miss their content), or embed them as data URLs. (I don't know Adobe Illustrator well enough to say whether it has a utility for that.)
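If you want to keep the raster content, here is a rough PHP sketch that inlines the linked PNGs as data URLs, assuming the files sit next to the SVG on disk:

    <?php
    // inline each linked PNG as a base64 data URL (paths assumed relative to the SVG)
    $svg = file_get_contents('hill.svg');
    $svg = preg_replace_callback(
        '/xlink:href="([^"]+\.png)"/',
        function ($m) {
            $data = base64_encode(file_get_contents($m[1]));
            return 'xlink:href="data:image/png;base64,' . $data . '"';
        },
        $svg
    );
    file_put_contents('hill-inline.svg', $svg);
    ?>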
I bought a WordPress theme, and I am trying to edit the text inside a wrapper.
I can easily find the code on the page when I right-click and inspect, but how do I find it when going through the files in the WordPress editor?
I find HTML in PHP code by looking for identifiers around the place I want to pinpoint: CSS classes and IDs, images or icons, mainly unique names that will help narrow the search.
Then, in your favorite code editor, do a global search for that keyword/name.
Example:
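As a sketch (the class name and file are hypothetical): say inspecting the page shows <div class="intro-wrapper">. A global search for intro-wrapper across wp-content/themes/your-theme/ might land you in a template like:

    <?php /* wp-content/themes/your-theme/header.php -- hypothetical */ ?>
    <div class="intro-wrapper">
        <h1><?php bloginfo('name'); ?></h1>
        <p><?php bloginfo('description'); ?></p>
    </div>

The text you want is either hard-coded right there, or produced by a template function such as bloginfo(), in which case you change it under the WordPress admin settings instead.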
No, you can't get at the PHP code with the browser's inspect element, because the browser understands only HTML. PHP code is executed on the server, and only the resulting HTML is sent in the response for the browser to display.
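For instance (a minimal sketch), a theme line like the following reaches the browser only as its HTML output, e.g. <h1>Hello World</h1>; the PHP itself is never sent:

    <?php echo '<h1>' . get_the_title() . '</h1>'; ?>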
I am looking for a way to create functionality similar to what happens when you post a link to an existing website on Facebook. If that statement is ambiguous, I will try to elaborate.
When you paste your link and submit your post, Facebook shows, alongside your link, a small preview of the page you are posting (text and maybe a small image).
What are the ways to achieve this?
I read the similar post, but the thing is that I don't need the image so much; the text will be sufficient.
I am working in PHP, but the language is not important, because I am looking for a high-level idea.
Previously I was thinking about parsing the content of the link with cURL, but the problem is that in a lot of situations the text Facebook shows is not available on the page itself.
Are there other ways?
From what I can tell, Facebook pulls from the meta name="description" tag's content attribute on the linked page.
If no meta description tag is available, it seems to pull from the beginning of the first paragraph <p> tag it can find on the page.
Images are pulled from available <img> tags on the page, with a carousel selection available to pick from when posting.
Finally, the link subtext is also user-editable (start a status update, include a link, and then click in the link subtext area that appears).
Personally I would go this route: cURL the page; parse it for a meta description tag and, failing that, grab some likely text with a basic algorithm or just take the first paragraph tag; then let the user edit whatever was presented (it's friendlier to the user and also sidesteps pages that return different content per user-agent). Do the user-facing control as AJAX so that you don't have issues with however long it takes your site to access the link you want to preview.
I'd recommend using a DOM library (you could even use DOMDocument if you're comfortable with it and know how to handle possibly malformed HTML pages) instead of regex to parse the page for the <meta>, <p>, and potentially also <img> tags. Building a regex that properly handles all of the myriad cases you will encounter "in the wild", versus from a known set of sites, can get very rough. QueryPath usually comes recommended, and there are stackoverflow threads covering many of the available options.
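A minimal sketch of that route with cURL and DOMDocument (the fallback choice is an assumption, not anything Facebook documents):

    <?php
    // fetch the page
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $html = curl_exec($ch);
    curl_close($ch);

    // parse it, tolerating malformed markup
    $doc = new DOMDocument();
    libxml_use_internal_errors(true);
    $doc->loadHTML($html);
    libxml_clear_errors();

    // prefer the meta description...
    $description = '';
    foreach ($doc->getElementsByTagName('meta') as $meta) {
        if (strtolower($meta->getAttribute('name')) === 'description') {
            $description = trim($meta->getAttribute('content'));
            break;
        }
    }

    // ...and fall back to the first paragraph
    if ($description === '') {
        $p = $doc->getElementsByTagName('p')->item(0);
        $description = $p ? trim($p->textContent) : '';
    }
    ?>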
Most modern sites, especially larger ones, are good about populating the meta description tag, especially for dynamically generated pages.
You can scrape the page for <img> tags as well, but you'll then want to host the images locally: either host all of the images and delete all except the one chosen, or host thumbnails (assuming you have an image processing library installed and enabled). Which you choose depends on whether bandwidth and storage matter more than the one-time processing cost of running imagecopyresampled, imagecopyresized, Gmagick::thumbnailimage, etc. (pick whatever you have at hand/your favorite). You don't want to hotlink to the images on the original page, both because it leeches their bandwidth and because you're likely to end up with broken images on any site with hotlink protection (referrer checks and the like) or when the images expire. Personally I would probably go for storing thumbnails.
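If you go with thumbnails via GD, a sketch could look like this (the width and the PNG-only handling are assumptions):

    <?php
    // downscale a PNG to a fixed-width thumbnail with GD
    function makeThumbnail($src, $dest, $maxWidth = 120)
    {
        list($width, $height) = getimagesize($src);
        $thumbHeight = (int) round($height * $maxWidth / $width);

        $in  = imagecreatefrompng($src);
        $out = imagecreatetruecolor($maxWidth, $thumbHeight);
        imagecopyresampled($out, $in, 0, 0, 0, 0, $maxWidth, $thumbHeight, $width, $height);
        imagepng($out, $dest);

        imagedestroy($in);
        imagedestroy($out);
    }
    ?>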
You can wrap the entire link entity up as an object for handling expiration and the like if you want to eventually delete the image/thumbnail files on your own server. I'll leave the particular implementation up to you since you asked for a high-level idea.
but the thing is that in a lot of situations the text returned by facebook is not available on the page.
Have you looked at the page's meta tags? I've tested with a few pages so far, and this is generally where content not otherwise visible on the rendered page comes from; it seems to be the first choice of Facebook's algorithm.
Full disclosure upfront, I'm a developer at ThumbnailApp.com.
It's a JSON API service with an optional JavaScript SDK which I think does exactly what you're after: it will parse a string to detect any URLs and return the title, description, and thumbnail of the asset. If the page has OpenGraph tags, it will use those for the image thumbnail. It's currently in private beta but we're adding more accounts each week.
If you feel that you really need a do-it-yourself solution:
Check out the Python-based webkit2png and the headless browser PhantomJS. They can render webpages to an image (the default size is 800x600); then you'll have to write some code to resize and crop the image, as taswyn mentioned. Ideally you would then upload the resized image to Amazon S3 and get it hosted on a CDN such as CloudFront.
To get the title and description, first fetch the URL content (with cURL or whatever); you will need to check the Content-Type header to make sure it's a webpage. If it is, you can then use an HTML parser such as the SimpleHTMLDOM PHP library to grab the title and description metadata. If you want it exactly like Facebook, you will also need to check for any OpenGraph tags, specifically the og:image tag.
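Here's a rough sketch of the Content-Type check and the og:image lookup (the header and tag names are standard; the rest is illustrative):

    <?php
    // verify the URL really is a webpage before parsing it
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_NOBODY, true); // headers only
    curl_exec($ch);
    $type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
    curl_close($ch);

    if (strpos((string) $type, 'text/html') === 0) {
        $doc = new DOMDocument();
        libxml_use_internal_errors(true);
        $doc->loadHTML(file_get_contents($url));
        libxml_clear_errors();

        $xpath = new DOMXPath($doc);
        // og:image, the tag Facebook prefers for thumbnails
        $node  = $xpath->query('//meta[@property="og:image"]/@content')->item(0);
        $image = $node ? $node->nodeValue : null;
    }
    ?>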
Also don't forget about caching. The first render and description parse can take a long time; even if your site is fast, the webpage you're rendering could be slow. The best approach is to render/parse it once, then save and return the resized image and metadata for subsequent requests. Depending on your requirements, you may need to refresh the cached data every hour, or you could get away with refreshing it once a day.
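A simple file-based cache sketch (the path, the one-day TTL, and the buildPreview() helper are all hypothetical):

    <?php
    $cacheFile = __DIR__ . '/cache/preview_' . md5($url) . '.json';

    if (is_file($cacheFile) && time() - filemtime($cacheFile) < 86400) {
        // cached within the last day: reuse it
        $preview = json_decode(file_get_contents($cacheFile), true);
    } else {
        $preview = buildPreview($url); // hypothetical: does the fetch/parse/render
        file_put_contents($cacheFile, json_encode($preview));
    }
    ?>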
To do it yourself takes quite a bit of work and lots of server configuration. I feel using a 3rd party service is a better way to go, but obviously I have a biased opinion :)
I'm aware that this question has been asked a number of times on SO and I have looked over many of the posts, but none seem to be as restrictive as my case. I may not be able to do anything about this, but I want to ask.
I need an iframe to grow and shrink with its content. The main problem I see is that JavaScript cannot be used, because the site providing the framed content will only provide an iframe tag with a link to its content. There is of course no guarantee that the site where the iframe is placed will allow JavaScript, so I don't see that as an option.
Is there an alternative to an iframe that will let me import dynamically growing and shrinking content without relying on JavaScript?
You can use PHP on the server side to retrieve the content and then include it within the page (sketch below). You can strip out the metadata and adjust the CSS and HTML with XPath if you need to.
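A sketch of that server-side include (the URL is a placeholder and error handling is omitted):

    <?php
    $doc = new DOMDocument();
    libxml_use_internal_errors(true);
    $doc->loadHTML(file_get_contents('http://example.com/widget.html'));
    libxml_clear_errors();

    $xpath = new DOMXPath($doc);
    // strip the remote page's metadata and styling so it inherits yours
    foreach (iterator_to_array($xpath->query('//meta | //style | //link[@rel="stylesheet"]')) as $node) {
        $node->parentNode->removeChild($node);
    }

    // emit just the body content inline in your own page
    echo $doc->saveHTML($doc->getElementsByTagName('body')->item(0));
    ?>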
If it's a completely different bit of content for which adjusting the CSS and HTML won't work, then an iframe does remain about the only solution. If you have access to a server with a GUI, you could load the page and store the height in a database, and then set the height of the iframe on load.
Try this:
http://kinsey.no/blog/index.php/2010/02/19/resizing-iframes-using-easyxdm/
I believe it still works.
I'd say that with current technology and browsers, this just cannot be done. Not without JavaScript.
And it's not pretty even with JavaScript.
I am scraping a website with PHP: it retrieves a page and removes certain elements so that only a photo gallery is shown. It works flawlessly in every browser BUT any version of IE (typical ;)). We can fix the problem by rewriting the .css file, but we cannot implement it in the head of the PHP, as it will be overridden by the .css file from the website's server. How would we go about hosting our own version of the .css file so that our website is displayed using OUR version? Would we swap something out with a filter?
Cheers!
You do realize that it may not really be a scraping problem? It sounds like a straightforward page display problem.
Worrying about scraping might be a red herring. After you have scraped, you have some HTML (and possibly some CSS): does that validate at W3C? I realize that is no guarantee, but it is an indicator (I know that IE doesn't always display valid pages properly, but sometimes it's a "gotcha" when other browsers seem to display invalid HTML/CSS properly).
If it's valid, then maybe you should look back at your scraping. If you already remove certain elements to show only a photo gallery, then maybe you can also remove the CSS from the HTML header (or wherever) and replace it with your own?
If you're already scraping the website, why not just use PHP to omit their CSS file and write your own in its place? Alternatively, you could write your own CSS file just below theirs in the <head> so it overrides their styles.
This is just another thing to check, but if one of the elements you're removing is comments, you could unwittingly be pulling out IE-only stylesheets that sit between conditional comments. Another thing to look at is paths: maybe one of their stylesheets has a relative path that you can't call from your server, and you would need to make it absolute for it to work.
Really, you should probably take a close look at the source of the original page and your formatted source side by side. You could be pulling out something that should be left in.
You ask how you could remove their CSS: you do it the same way you remove the other elements you're pulling out. Just pull out the style tags and the tags that link to stylesheets, as in the sketch below.
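For example, something along these lines (the replacement stylesheet path is hypothetical):

    <?php
    // $doc is the DOMDocument you already built while scraping the page
    $xpath = new DOMXPath($doc);

    // drop their style tags and stylesheet links
    foreach (iterator_to_array($xpath->query('//style | //link[@rel="stylesheet"]')) as $node) {
        $node->parentNode->removeChild($node);
    }

    // link your own IE-friendly stylesheet instead
    $link = $doc->createElement('link');
    $link->setAttribute('rel', 'stylesheet');
    $link->setAttribute('href', '/css/gallery-ie-fix.css');
    $doc->getElementsByTagName('head')->item(0)->appendChild($link);
    ?>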
Aside from that, I would just write some styles to fix it and stick them anywhere after the existing CSS is called (like everyone else here mentioned).
Just add another stylesheet link in the head and mark your styles as !important to override the original ones?