I have built an in-browser engine that retrieves pages without executing their server-side scripting... seems ridiculous, I know, but I'm doing this as part of a school project.
The problem I'm having is that once it displays the page, clicking a link brings you to www.their-site.com instead of www.my-site.com?site=www.their-site.com.
Basically I need my PHP page to detect when a link is clicked and, if so, add "www.my-site.com?" before it so that all sites will still be rendered without their server-side scripting. Is there any way to do this?
--------------- EDIT ---------------
OK, I guess I wasn't clear enough the first time; sorry about that.
I have made a PHP page that displays the contents of any site without executing the server-side scripting that belongs to that page. This lets you get around those annoying news articles that give you a glimpse for two seconds before a login box appears. The problem is that once you've accessed a page, clicking any link connects you to their server and the scripts turn back on. I want MY PHP to execute, not THEIRS.
You need to know what you want first.
You say no server-side scripting, then you mention PHP.
I don't think you can do this with just JS.
You need to fetch the pages using PHP and, depending on what exactly they contain, modify them so that when a link is clicked, it sends an AJAX call to another page. This will require either regex replacement or the use of an HTML DOM parser.
When a link is clicked, it should send an AJAX request to your PHP page, which can then fetch that page, make the same modifications, and send it back to the browser. You can then use JS to replace the page contents.
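On the client side, a minimal sketch of that last step might look like this, assuming your proxy lives at something like proxy.php?site=... (a hypothetical name) and returns the already-modified body markup:

```javascript
// Minimal sketch: intercept every link click on the proxied page and route it
// back through your own PHP proxy instead of letting it hit their-site.com.
document.addEventListener('click', function (event) {
  var link = event.target.closest('a');        // was the click on (or inside) a link?
  if (!link || !link.href) return;

  event.preventDefault();                      // stop the browser going to their server

  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'proxy.php?site=' + encodeURIComponent(link.href));
  xhr.onload = function () {
    document.body.innerHTML = xhr.responseText; // swap in the proxied, script-stripped page
  };
  xhr.send();
});
```

The server-side pass (the regex or HTML DOM step mentioned above) still has to deal with relative URLs, e.g. by rewriting them or injecting a base tag, so images and links on the fetched page resolve against the original site rather than yours.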
I want to load more images on my website when the user reaches the bottom of the page. I'm using PHP, with PostgreSQL as my database.
For this post I simply load some text instead of images; I can write the equivalent code for images.
Currently I'm using a button at the bottom of my page which, when pressed, reloads the page and gives you more images (I'm displaying 50 images at a time).
But there are two problems with it. One is that the user has to press the button again and again, while I want it to happen automatically.
The second is that when new images are loaded, the previous ones are gone, which I don't want. For example, if images 1-50 are currently shown, the page later replaces them with 51-100, whereas I want it to show all of 1-100. I'm unable to solve this.
Please help. Thanks!
What you are looking for is commonly referred to as "infinite scroll" pagination. While what you're asking for is technically possible using only PHP, it would be a terrible user experience: each reload would take the user back to the top, and they would have to scroll further and further just to get back to where they were.
Instead, handle this with JavaScript. An example: https://infinite-scroll.com/demo/full-page.
A simple Google search reveals a plethora of JavaScript and jQuery plugins that achieve this.
Alternatively, without the need for a plugin, you can implement the answer to this: infinite-scroll jquery plugin
Simply make an AJAX request to your PHP code when the bottom of the page is reached and append the new results (this could easily be achieved with vanilla JavaScript as well).
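A minimal vanilla-JavaScript sketch of that, where load_images.php?offset=... stands in for whatever your existing button already calls and #image-container is the element holding the images (both names are assumptions):

```javascript
// Infinite-scroll sketch: when the user nears the bottom, fetch the next
// batch from the server and append it instead of replacing the page.
var offset = 50;          // the first 50 images are already on the page
var loading = false;      // guard against overlapping requests

window.addEventListener('scroll', function () {
  var nearBottom = window.innerHeight + window.scrollY
                   >= document.body.offsetHeight - 200;
  if (!nearBottom || loading) return;

  loading = true;
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'load_images.php?offset=' + offset);
  xhr.onload = function () {
    document.getElementById('image-container')
            .insertAdjacentHTML('beforeend', xhr.responseText);
    offset += 50;
    loading = false;
  };
  xhr.send();
});
```

Appending with insertAdjacentHTML rather than replacing the container is what keeps images 1-50 on the page while 51-100 arrive.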
Hope this helps.
How do you do a dynamic refresh of a single div using PHP/AJAX and have the content actually change the local HTML on the page (so that it is changed when you go to ‘view source’ in a browser), instead of just putting the change in a JavaScript object? I am trying to design a webpage that loads search results without refreshing the entire page. I use a simple hash followed by a GET/query string request to determine what content to load. This gets passed to a JavaScript XMLHttpRequest, then to some PHP which picks up the GET and passes it to a SOAP service, and finally echoes the SOAP results back to the XMLHttpRequest to be displayed via a document.getElementById div change. This works fine for display in conventional browsers. However, I am concerned that search bots and screen readers are not going to see the majority of the content that shows in browsers, because it is all contained within a client-side JavaScript object.
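To make this concrete, here is a simplified sketch of the flow described above (search.php and the "results" element id are placeholders, not the real names):

```javascript
// Simplified version of the setup: hash -> query string -> XHR -> PHP/SOAP -> div.
function loadResults() {
  var query = window.location.hash.slice(1);        // e.g. "#q=baroque" -> "q=baroque"
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'search.php?' + query);           // PHP passes this on to the SOAP service
  xhr.onload = function () {
    // The SOAP results echoed back by PHP only ever live in the DOM, never in the page source.
    document.getElementById('results').innerHTML = xhr.responseText;
  };
  xhr.send();
}

window.addEventListener('hashchange', loadResults);
loadResults();
```

Since that markup only ever exists in the DOM at runtime, 'view source' shows an empty div, and anything that doesn't execute JavaScript sees nothing.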
So, I guess my first question is: is this a valid concern? If it is, is there a workaround?
Thanks!
AJAX content is very hard to get indexed. Google has webmaster guidelines for AJAX. This should get you started in the right direction on getting your content indexed.
I'm inexperienced with search-engine behavior, but as far as I know the best option is to render the full content of your div with a PHP page: when the page loads, include that content inside the div, and then use JS/jQuery to refresh it every so often.
This way, when a search bot hits the site it will see the current content, and users will see it update.
Updating the div can be done quite easily with an AJAX call and jQuery.
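A minimal sketch of that refresh loop, assuming jQuery is already on the page; the fragment URL content_block.php and the #content-box id are placeholders for whatever PHP already renders into the div on the initial load:

```javascript
// Periodically re-fetch the server-rendered fragment and swap it into the div.
$(function () {
  setInterval(function () {
    $('#content-box').load('content_block.php');   // refresh every 30 seconds
  }, 30000);
});
```

Because PHP puts the same fragment into the div when the page is first served, bots that don't run JavaScript still see the current content.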
Using the following tutorial, I want my website to use AJAX to load content (but I also want the back button etc. to keep working):
http://www.queness.com/post/328/a-simple-ajax-driven-website-with-jqueryphp
Of course, if someone has JavaScript disabled the website should still work (without AJAX).
The problem, however, comes when a JavaScript-enabled user sends a link to a non-JavaScript user. Because JavaScript is disabled, the hash fragment is never handled and the browser just goes to the homepage (so linking directly to pages from a JavaScript user to a non-JavaScript user is impossible). Is there a way to resolve this issue (preferably with PHP or .htaccess)?
HTML5 gives us methods to alter the URL without refreshing the page: https://developer.mozilla.org/en/DOM/Manipulating_the_browser_history#Adding_and_modifying_history_entries
This means you can update something without a page refresh but still give the user a url they can bookmark or send to someone else. These urls will work without JavaScript, as long as you have pages at those locations or are catching them with mod_rewrite or similar.
https://github.com/browserstate/history.js is a great little polyfill which will use the HTML5 history API if the browser supports it, and otherwise (e.g. Internet Explorer) falls back to changing the hash of the URL.
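A minimal sketch of the native History API route, under the assumption that every pushed URL also exists as a real server-rendered page so non-JS visitors and shared links keep working (the #content element, the data-ajax marker and the URLs are placeholders; history.js wraps the same idea with the hash fallback):

```javascript
// Load a page fragment via AJAX and optionally push its URL onto the history.
function loadPage(url, push) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    document.getElementById('content').innerHTML = xhr.responseText;
    if (push) {
      history.pushState({ url: url }, '', url);   // change the address bar, no reload
    }
  };
  xhr.send();
}

// Intercept clicks on links marked for AJAX loading.
document.addEventListener('click', function (event) {
  var link = event.target.closest('a[data-ajax]');
  if (!link) return;
  event.preventDefault();
  loadPage(link.getAttribute('href'), true);
});

// Back/forward buttons: re-load the content that belongs to the restored URL.
window.addEventListener('popstate', function (event) {
  if (event.state && event.state.url) loadPage(event.state.url, false);
});
```

The popstate handler is what keeps the back and forward buttons working after AJAX navigation.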
Basically, three steps:
code your "a" tags just normal: <a href='about'>About us</a>
in your javascript code, intercept all click events on <a> tags and navigate to # + this.href. So when they click the above url, you navigate to site.com/#about instead of site.com/about
in your javascript code, have a timer function that reads the hash value form the current location and loads a corresponding url (with # removed) via ajax
Since you code your html just as usual, the site remains fully accessible for non-js users, and, more important, for search engines' bots.
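A rough sketch of those three steps (#content is a placeholder, and in practice your PHP should return just a fragment for these AJAX requests rather than the full page it serves to non-JS visitors):

```javascript
var currentHash = '';

// Step 2: turn clicks on normal <a href='about'> links into site.com/#about.
document.addEventListener('click', function (event) {
  var link = event.target.closest('a');
  if (!link) return;
  event.preventDefault();
  window.location.hash = '#' + link.getAttribute('href');
});

// Step 3: a timer that watches the hash and loads the matching URL via AJAX.
setInterval(function () {
  if (window.location.hash === currentHash) return;   // nothing changed since last tick
  currentHash = window.location.hash;

  var xhr = new XMLHttpRequest();
  xhr.open('GET', currentHash.slice(1));               // "#about" -> "about"
  xhr.onload = function () {
    document.getElementById('content').innerHTML = xhr.responseText;
  };
  xhr.send();
}, 250);
```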
In response to the comments I can suggest the following:
redirect your home page via JavaScript from plain site.com to site.com/js/
when <a href='about'> is clicked, navigate to site.com/js/#about
on the "js" page, have something like <a id=about href="/about">click here</a> for non-JS users
Why not just build your application normally and then add the AJAX on top, rather than going the other way round and causing more work for yourself?
Ask yourself, why do you need AJAX page transitions? Does your app actually need them, or is it just because you've seen it on another site, like Twitter?
We have an application written in PHP. Its main view is, for example, /pages/index.
Now, when the user clicks certain links, it pulls in other views via AJAX, i.e. a call may look like /pages/publish, and the PHP outputs the relevant HTML for the publish section back into the index view.
The problem is that we'd like to give the user the option of refreshing and seeing the same view as before. My initial thought is this: when we use .load() in jQuery, take the URL it's going to load and store it somewhere to be read by the PHP if the user refreshes. Is this the best way to do it, or can someone think of a better way to handle the whole thing?
Check out jQuery.address which should solve your problems! It allows AJAX loading of new pages, and will update the address bar accordingly. If a user saves this URL and reloads it, the script on the page can then load the correct page.
Alternatively, if you're HTML5-only, you can try history.pushState(), which will modify the URL without using the hash symbol, but support isn't 100% yet (I don't think... it certainly behaves oddly on iPad, in my experience).
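If you go the pushState route, a sketch along these lines would cover the refresh case, assuming jQuery, and assuming your PHP returns just the fragment for AJAX calls (jQuery marks them with an X-Requested-With header) but a full page when the URL is requested directly; the a.ajax-view class and #view id are placeholders for whatever your markup actually uses:

```javascript
// Load a view fragment into #view and put its real URL in the address bar.
$(document).on('click', 'a.ajax-view', function (event) {
  event.preventDefault();
  var url = this.href;                       // e.g. /pages/publish

  $('#view').load(url, function () {
    // With the real URL in the address bar, refresh and bookmarks hit it directly.
    history.pushState({ url: url }, '', url);
  });
});

// Back/forward buttons.
$(window).on('popstate', function (event) {
  var state = event.originalEvent.state;
  if (state && state.url) $('#view').load(state.url);
});
```

Because the address bar then holds the real /pages/... URL, a refresh simply asks PHP for that view directly, with nothing extra to store.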
I'm new to YQL, and just trying to learn how to do some fairly simple tasks.
Let's say I have a list of URLs and I want to get their HTML source as a string in JavaScript (so I can later insert it into a database via AJAX). How would I go about getting this info back in JavaScript? Or would I have to do it in PHP? I'm fine with either, really - whatever works.
Here's an example query I'd run on their console:
select * from html where url="http://en.wikipedia.org/wiki/Baroque_music"
The goal is essentially to save the HTML, or maybe just the text, as a string.
How would I go about doing this? I somewhat understand how the querying works, but not really how to integrate it with JavaScript and/or PHP (say I have a list of URLs and I want to loop through them, getting the HTML at each one and saving it somewhere).
Thanks.
You can't read other sites' pages with JavaScript, due to a built-in security feature in web browsers called the same-origin policy.
The usual method is to scrape the content of these sites from the server using PHP.
There is another option with JavaScript: a bookmarklet.
You add the bookmarklet to your bookmarks bar, and each time you want a site's content you click the bookmark.
A script is then loaded into the host page; it can read the content and post it back to your server.
Oddly enough, the same-origin policy does not prevent you from POSTing data from the host page to your domain. You POST a form to an iframe whose source is hosted on your domain.
You won't be able to read the response you get back from the POST.
But you can poll with setInterval, making a JSONP call to your domain, to find out whether the POST was successful.
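A rough sketch of that flow, as the script the bookmarklet would inject into the host page; your-domain.example, collect.php and status.php are placeholder names, and the status endpoint is assumed to reply with JSONP such as collectDone({"ok":true}):

```javascript
(function () {
  // 1. Hidden iframe that acts as the form's target, so the current page never navigates away.
  var iframe = document.createElement('iframe');
  iframe.name = 'collector';
  iframe.style.display = 'none';
  document.body.appendChild(iframe);

  // 2. POST the host page's HTML to your server. The same-origin policy lets
  //    the POST go out; you just can't read the response that comes back.
  var form = document.createElement('form');
  form.method = 'POST';
  form.action = 'https://your-domain.example/collect.php';
  form.target = 'collector';

  var field = document.createElement('input');
  field.type = 'hidden';
  field.name = 'html';
  field.value = document.documentElement.outerHTML;
  form.appendChild(field);

  document.body.appendChild(form);
  form.submit();

  // 3. Poll with JSONP (a plain <script> tag) to learn whether the POST was stored.
  window.collectDone = function (result) {
    if (result.ok) clearInterval(poll);
  };
  var poll = setInterval(function () {
    var s = document.createElement('script');
    s.src = 'https://your-domain.example/status.php?callback=collectDone'
            + '&url=' + encodeURIComponent(location.href);
    document.body.appendChild(s);
  }, 2000);
})();
```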