We can access all pages of a MediaWiki site at the URL ./wiki/Special:AllPages.
But there is no PHP file named Special:AllPages. How does MediaWiki implement this?
Thanks.
For an in-depth explanation see: http://www.mediawiki.org/wiki/Category:Wiki_page_URLs
However, here is a condensed version:
In MediaWiki the URL is not a link to a specific page the way it is on a simple web site. Instead, it is a key that the code uses to determine which page is displayed and to whom it is displayed.
Everything points to a single PHP page, and that page routes the request to the actual page being called. So a call to SomePage may actually go to MyPage.php instead of SomePage.php.
Depending on how one sets up MediaWiki (or other modern PHP sites), this can be accomplished in several ways.
For Apache users one can use ModRewrite: http://www.mediawiki.org/wiki/Manual:Short_URL/Apache_Rewrite_rules
Or one can use URL Local Settings: http://www.mediawiki.org/wiki/Manual:Short_URL/LocalSettings.php
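A minimal sketch of how such a "front controller" can work. The function and strings below are illustrative, not MediaWiki's actual code; the real thing passes the page name in a title parameter to a single index.php:

```php
<?php
// Minimal front-controller sketch: every request hits this one script,
// and the page name arrives as a parameter, not as a file path.

function route(string $title): string
{
    // Pages like Special:AllPages are handled by PHP classes,
    // not by files named after them on disk.
    if (strpos($title, 'Special:') === 0) {
        $name = substr($title, strlen('Special:'));
        return "run special-page handler: $name";
    }
    // Ordinary articles are loaded from the database by title.
    return "load article from database: $title";
}

echo route($_GET['title'] ?? 'Main_Page'), "\n";
```

With a setup like this, no Special:AllPages.php needs to exist; the URL is only data that the router inspects.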
I have a PHP page where I'm trying to load and then echo an external page (which sits on the same server but at a completely different path/domain, if that matters).
I've tried using both file_get_contents() and cURL. They both correctly load the HTML of the target page; the problem is that it doesn't display correctly, because the target page has relative links to several files (images, CSS, JavaScript).
Is there any way I can accomplish this with PHP? If not, what would be the next best way? The target site must look like it's being loaded from the initial page (URL-wise), I don't want to do a redirect.
So, the browser would show http://example.com/initial-page.php even though its contents come from http://example2.com/target-page.php
EDIT:
This is something that could easily be done with an iframe, but I want to avoid that too for several reasons, one of them being that an iframe breaks the responsiveness of the target site. I can't change the code of the target site to fix that either.
In the end, the solution was a combination of what I was trying to do (using curl) and what WebRookie suggested using the base html tag in the page being loaded via curl.
In my particular case, I pass the base URL as a parameter in curl and I echo it in the loaded page, allowing me to load that same page from different websites (which was another reason why I wanted to do this).
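A sketch of that combination: fetch the page with cURL, then inject a base tag so the browser resolves the page's relative links (images, CSS, JS) against the original site. The function name and URLs here are illustrative:

```php
<?php
// Inject a <base> tag into fetched HTML so relative links resolve
// against the original site instead of the proxy page.

function injectBase(string $html, string $baseUrl): string
{
    // Insert <base> right after the opening <head> tag, once.
    return preg_replace(
        '/<head([^>]*)>/i',
        '<head$1><base href="' . htmlspecialchars($baseUrl) . '">',
        $html,
        1
    );
}

// Typical use after a cURL fetch, e.g.:
// $ch = curl_init('http://example2.com/target-page.php');
// curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// echo injectBase(curl_exec($ch), 'http://example2.com/');
```

Passing the base URL in as a parameter, as described above, is what lets the same page be loaded from different websites.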
I am working on a website where I need to use the login form to log into a different site and then bring the member's area from that site over to the one I am building.
I have access to both sites and can make changes on either one. I would incorporate the code from the old one directly but it is in ASP and I'm working with PHP.
Any ideas? The purpose would be for someone to log into the site through site A (no problem), then get the information from site B (no problem) and present it on site A (no problem if I use cURL to get the page, break it up, and display it on the new one). The issue I run into is that the links gathered from the old site will still point to pages on the old site. Maybe there is a way to dynamically generate those pages on the new site somehow? Or am I going about it all the wrong way?
It's essentially a proxy. You need to parse and rebuild the links in the html source code received from site B. There are no functions available for this, but there are numerous open-source proxy scripts you can take code from.
Glype should be open-source (site appears to be broken now unfortunately).
https://www.glype.com/
You need to split the links and change them depending on where they point.
Links to CSS/JS should be rewritten to point to the real URL (note that AJAX content won't work, because you can't make an AJAX request to another website).
Absolute links to the real website should be changed to relative ones.
Relative links should point at your website (e.g. $new_url = '/viewpage/?page=' . urlencode($url);).
Absolute links to other domains (not the old website that you proxy) need to be handled somehow; think about what exactly you want to do with them.
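A rough sketch of these rules in PHP. The host name and the /viewpage/ proxy endpoint are illustrative (taken from the example above); real proxy scripts such as Glype handle many more edge cases (form actions, CSS url() references, cookies):

```php
<?php
// Rewrite href/src attributes in proxied HTML depending on where
// each link points.

function rewriteLinks(string $html, string $remoteHost, string $proxyPrefix): string
{
    return preg_replace_callback(
        '/(href|src)="([^"]*)"/i',
        function (array $m) use ($remoteHost, $proxyPrefix) {
            [$full, $attr, $url] = $m;
            $host = parse_url($url, PHP_URL_HOST);
            if ($host === $remoteHost) {
                // Absolute link to the proxied site: route it through the proxy.
                $path = parse_url($url, PHP_URL_PATH) ?? '/';
                return $attr . '="' . $proxyPrefix . urlencode($path) . '"';
            }
            if ($host === null) {
                // Relative link: also route it through the proxy.
                return $attr . '="' . $proxyPrefix . urlencode($url) . '"';
            }
            // Link to a third-party domain: left as-is here; decide
            // for yourself how these should be handled.
            return $full;
        },
        $html
    );
}
```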
I'm looking for a way to load a fully functional copy of a web site inside a PHP proxy page in order to be able to grab and change some of its elements and styles.
I decided to post this question to merge my previous two into a more relevant evolution:
live change any site visualization properties
load external site and change its visualization
I have found the cURL functions useful for loading the page (e.g. www.google.it; for google.com I received a 302 redirect, but I won't deal with that now).
Some of the page elements, like the image logo, were not loaded properly; this seems to be due to the relative paths to the site's resources. I had to manually prepend "//google.it" to them, and that fixed it.
Now I have another issue:
How is it possible to go further in the site navigation?
When I click any link the page is reloaded with its "real" destination. I suppose I have to reload my php and use the href link attribute as url to load (I can do that).
But what about the submit buttons? How can I redirect their destination?
Use an existing proxy for that.
Generally you'll just have to find all the strings matching the old domain name and change them into your URL, so every link on the page turns from www.bla.com/page.htm into proxy.com/page.htm.
This will also require some server setup because of possible AJAX requests and relative paths. Besides, it would be very hard to catch dynamically constructed URLs such as: var addr = 'b' + 'la.com';
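The basic string replacement can be sketched like this (www.bla.com and proxy.com are the illustrative domains from above; note it does nothing for dynamically constructed URLs):

```php
<?php
// Brute-force rewrite: replace every occurrence of the proxied
// domain with the proxy's own URL.

function rewriteDomain(string $html): string
{
    return str_replace(
        ['http://www.bla.com', 'www.bla.com'],  // old domain, with and without scheme
        'http://proxy.com',
        $html
    );
}
```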
How do I remove path information from a URL?
For example in this url, http://stackoverflow.com/questions/ask, I want the user to only see http://stackoverflow.com. Is this possible to do?
I do a redirect in PHP from my root directory to the path Foo. I don't want Foo to display in the URL. I also do a page reload of sorts using window.location.href = domain_name/foo. Similarly, I don't want foo to display in the URL.
Is this possible to implement in JavaScript or PHP, or do I have to configure Apache to do this?
You cannot manipulate URLs in the browser's address bar using PHP or JavaScript. But you have guessed correctly, this is something that can be configured in Apache. For a primer on URL rewriting, take a look at this article.
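As a sketch, an Apache rewrite can serve the Foo path internally while the address bar keeps showing only the domain (rule shown here is illustrative; adjust the path to your setup):

```apache
# .htaccess sketch: requests for the bare domain are served from /Foo
# internally, with no external redirect, so the URL shown never changes.
RewriteEngine On
RewriteRule ^$ /Foo/index.php [L]
```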
I have seen websites that keep the user on the homepage and use AJAX to change the page content.
You should sober up and then consider whether you really want to hide anything, and whether your web site would work at all.
However, I can answer you already - it wouldn't.
We use path information for a reason, and you'd better see it.
Read up on URL masking:
htaccess mask for my url
http://www.willmaster.com/library/web-development/URL-masking.php
etc... This cannot be handled in JS.
If you REALLY wanted to, you could do this in PHP: create an index.php page that handles the loading of other pages, and add a handler at the top of every page that detects the REQUEST_URI and redirects (header()) any direct request to the index page, with the file path stored in $_SESSION or another retrievable location. The index page would then render the requested page. However, this is ugly and wastes resources, and you're much better off with an Apache-level rewrite.
In MyBB forums you must have seen that all the threads are stored as forum.com/Thread-Name-of-the-thread
So this is static, right?
Now I have a site which has
blog.com/search.php?=SEARCHED+TEXT
How do I save this search so that Google can find this page on my site?
Indirectly, what I mean to say is: how can I make
blog.com/SEARCHED+TEXT.html
"So now this is static right?" No. Just because the URL doesn't end in .php or similar doesn't mean it's static. It's time for you to learn the wonders of mod_rewrite:
http://www.workingwith.me.uk/articles/scripting/mod_rewrite
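A sketch of the rule for this case (it assumes search.php reads a q parameter; the question's ?=SEARCHED+TEXT query string has no parameter name, so adjust to match your script):

```apache
# .htaccess sketch: map blog.com/SEARCHED+TEXT.html to the real
# search script internally.
RewriteEngine On
RewriteRule ^([^/.]+)\.html$ /search.php?q=$1 [L,QSA]
```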
Your first example isn't static at all. It's just using a tool to route the request based on the URL.
All you need to get the same functionality is to investigate URL Routing in PHP and implement it in your application as well.
If you want Google to index this search page, you have to tell Google it exists, either through a Sitemap or by putting a link on your site that Google can crawl. Google did fill in forms in the past, but I am not sure if they still do, and AFAIK they only did so on a select few sites.
To make the search static, you have to render the page once and store it in a file. Whether you do that manually by simply calling up the file in your browser and then saving it or by means of a Caching System is up to you.
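The "render once, store in a file" idea can be sketched like this (the cache location and the renderSearch callback are illustrative, not from the original answer):

```php
<?php
// File-based cache: render the search page the first time it is
// requested, store the HTML, and serve the stored copy afterwards.

function cachedSearch(string $query, callable $renderSearch): string
{
    $dir = sys_get_temp_dir() . '/search-cache';
    if (!is_dir($dir)) {
        mkdir($dir, 0777, true);
    }
    $cacheFile = $dir . '/' . urlencode($query) . '.html';

    if (is_file($cacheFile)) {
        // Serve the stored static copy.
        return file_get_contents($cacheFile);
    }

    $html = $renderSearch($query);        // render the page once...
    file_put_contents($cacheFile, $html); // ...then store it as a file
    return $html;
}
```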