How to mask a URL having many subdirectories and files? - php

I have a website with many directories and files. I just want to hide all the subdirectory and file names, so that a URL like https://example.com/folder_01/file.php appears as https://example.com. I was able to hide a single folder name using a rewrite rule in .htaccess on an Apache server. I also tried the frameset approach, but the browser flags an unsafe script when I try to run the website. Can anyone help me?
Thanks in advance.

This isn't possible.
A URL is how the browser asks the server for something.
If you want different things, then they need different URLs.
If what you desire were possible, then the server would somehow have to know that when my browser asks for / it means "the picture of the cat", while also knowing that when my browser asks for / it means "the picture of the dog".
It would be like stopping at a fast-food drive-through you had never visited before, where you didn't know anyone who worked there, asking for "my usual", and expecting them to know what that was.
You mentioned using frames, which is an old and very horrible hack that will keep a constant URL displayed in the browser's address bar but has no real effect beyond making life more difficult for the user.
They can still look at the Network tab in their browser's developer tools to see the real URL.
They can still right click a link and "Open in new tab" to escape the frames.
Links from search engines will skip right past the frames and index the URLs of the underlying pages which have actual content.
URLs are a fundamental part of the WWW. Don't try to break them. You'll only hurt your site.
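For reference, the legitimate part of what the question describes -- hiding a single folder name with a rewrite rule -- looks roughly like this in .htaccess (a sketch only; it assumes mod_rewrite is enabled and reuses the folder_01/file.php names from the question):

    RewriteEngine On
    # Serve /file.php from the folder_01 directory, so the folder name never appears in the URL
    RewriteRule ^file\.php$ folder_01/file.php [L]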

Adblocker blocks AJAX URLs containing the keyword "ad". Is there any way to handle this instead of changing the URL?

The adblocker blocks URLs like
http://localhost/project1/advertiser/users/get_user_listing/
because they contain the keyword "advertiser". I have many URLs and AJAX calls that contain this keyword. Is there any way to escape the blocker?
Note: changing my folder name is not possible, since it is used in so many places in my code.
This can be achieved easily by following these steps.
While Firefox is the active window, press Ctrl + Shift + V; this will open the "Blockable items" window.
In that window you will see a list of addresses and the type of file being received from each of them.
Find the link you want to unblock, right-click it, and choose "Disable filter". Reload and see if it works fine after that.
There are 2 things you can do about it, but neither is perfect:
1) Go to the EasyList forums, tell them that Adblock is effectively breaking your site, and request that a ## exception rule be added to EasyList. This is easier and faster than you may think. But keep in mind that there are multiple adblockers out there, and not all of them use the same filter lists (and some use no filter lists at all). So if you want to prevent this from happening in every adblocker out there, rather than just the top one or two, the task of asking for filter exceptions can be time-consuming.
2) Use .htaccess to create an alias or subdomain for the directory/folder that's being blocked, one that doesn't contain the word "advertiser" (a rough sketch follows below).
Still, I'm not sure why you can't do a find/replace across all your site files for the directory name. How hard can this be?
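For option 2, a minimal .htaccess sketch (assuming the file sits in the document root, that mod_rewrite is enabled, and that partners is a hypothetical neutral name; the front-end would then request the /partners/ URLs):

    RewriteEngine On
    # Serve /project1/partners/... from the existing /project1/advertiser/... directory,
    # so the URLs the browser requests no longer contain the word "advertiser"
    RewriteRule ^project1/partners/(.*)$ project1/advertiser/$1 [L]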

Masking subdirectories and pages for my website

I want to mask some of the subdirectories and pages for my website. For example:
I want to change
www.example.com/blog.php?post=post1
to
www.example.com/blog
or something similar to that. I have seen examples about redirecting and such, but that doesn't seem to do what I want, and I would like to stay away from iframes if possible. I just want the address bar not to reflect my internal directory structure and page names; it should keep showing /blog while visitors move from post to post. Thanks.
P.S. I am not using wordpress or any other CMS or blogging system.
You can use Apache mod_rewrite for that. There are online Mod Rewrite Generator tools that can help you build the rules.
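As a rough sketch (assuming an Apache server with mod_rewrite enabled, and that blog.php takes a post parameter as in the question), a rule for pretty URLs could look something like:

    RewriteEngine On
    # Internally map /blog/<slug> to blog.php?post=<slug>; the address bar keeps the pretty URL
    RewriteRule ^blog/([^/]+)/?$ blog.php?post=$1 [L,QSA]

Note that this still gives every post its own (prettier) URL, e.g. /blog/post1; collapsing every post to exactly /blog runs into the problems described below.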
And if you want the URL to stay the same when the blog post or article changes, while displaying different content, all I can think of at the moment is using either the POST method or browser cookies; but that would require a lot of page reloads, and it is simply not recommended for wide use. If you are building a one-person-only panel or similar, then the URL doesn't matter that much, but for a blog it does.
It is quite reasonable to hide the .php extension or a URL query index/key, but not to do what you would like to accomplish.
Like I said, it is possible, but Luke... don't do that.
A blog should be bookmarkable at each and every #stop, and that happens only because of unique URLs and hash values. Without those two, no hyperlinking is possible (not to mention SEO penalties and flaws, plus a dozen other unwanted obstacles, page caching for instance).

How to detect fake URLs with PHP

I'm working on a script for indexing and downloading a whole website from a user-submitted URL.
For example, when a user submits a domain like http://example.com, I copy all the links on the index page, download the pages they point to, and start over from the first one.
I do this part with cURL and regular expressions to download the pages and extract the links.
However, some shady websites generate fake URLs: for example, http://example.com?page=12 has links to http://example.com?page=12&id=10, http://example.com?page=13, and so on.
This creates a loop and the script can never finish downloading the site.
Is there any way to detect these kinds of pages?
P.S.: I think Google, Yahoo and some other search engines face this kind of problem too, but their databases are clean and their search results don't show this kind of data.
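For reference, the download-and-extract step described above might look roughly like this (a sketch only, using DOMDocument rather than a regular expression; the function names are hypothetical):

    <?php
    // Fetch a page over HTTP with cURL and return its HTML (empty string on failure).
    function fetch_page(string $url): string
    {
        $ch = curl_init($url);
        curl_setopt_array($ch, [
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_FOLLOWLOCATION => true,
            CURLOPT_TIMEOUT        => 10,
        ]);
        $html = curl_exec($ch);
        curl_close($ch);
        return $html === false ? '' : $html;
    }

    // Pull the href of every <a> tag out of the HTML.
    // (Resolving relative URLs against the page's own URL is left out here.)
    function extract_links(string $html): array
    {
        $dom = new DOMDocument();
        @$dom->loadHTML($html);   // suppress warnings from malformed HTML
        $links = [];
        foreach ($dom->getElementsByTagName('a') as $a) {
            $href = trim($a->getAttribute('href'));
            if ($href !== '') {
                $links[] = $href;
            }
        }
        return array_unique($links);
    }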
Some pages may use GET variables and be perfectly valid (as you've mentioned here, ?page=12 and ?page=13 may both be acceptable). So what I believe you're actually looking for is a way to detect unique pages.
It's not possible, however, to detect these straight from the URL. ?page=12 may point to exactly the same thing as ?page=12&id=1 does, or it may not. The only way to find out is to download the page, compare the download to the pages you've already got, and thereby determine whether it really is one you haven't seen yet. If you have seen it before, don't crawl its links.
Minor side note: make sure you skip links that point to a different domain, otherwise you may accidentally start crawling the whole web :)
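A minimal sketch of that idea, building on the hypothetical fetch_page() and extract_links() helpers above: keep a hash of every page already downloaded, skip a page's links when its content has been seen before, and stay on the starting domain as the side note suggests.

    <?php
    $startUrl  = 'http://example.com';
    $host      = parse_url($startUrl, PHP_URL_HOST);
    $queue     = [$startUrl];
    $seenUrls  = [];
    $seenPages = [];   // content hashes of pages already downloaded

    while ($url = array_shift($queue)) {
        if (isset($seenUrls[$url]) || parse_url($url, PHP_URL_HOST) !== $host) {
            continue;                 // already visited, or a different domain (includes relative links)
        }
        $seenUrls[$url] = true;

        $html = fetch_page($url);
        $hash = sha1($html);
        if ($html === '' || isset($seenPages[$hash])) {
            continue;                 // empty response or duplicate content: don't crawl its links
        }
        $seenPages[$hash] = true;

        foreach (extract_links($html) as $link) {
            $queue[] = $link;
        }
    }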

Identify a file that contains a particular string in PHP/SQL site

By using the inspect element feature of Chrome I have identified a string of text that needs to be altered to lower case.
Though the string appears on all the pages in the site, I am not sure which file to edit.
The website is a CMS based on PHP and SQL - I am not so familiar with these programs.
I have searched through the files manually and cannot find the string.
Is there a way to search for and identify the file I need using, for example, the inspect element feature in a browser or an FTP tool such as FileZilla?
Check if you have a layout page of any kind in your CMS. If you do, then most probably either in that file or in the footer include file you will find either the JavaScript for Google Analytics or a JS include file for it.
Try doing a site-wide search for 'UA-34035531-1' (which is your Google Analytics user key) and see if it returns anything. If you find it, what you need will be two lines under it.
People usually do not put analytics code in the database, so there is a bigger chance you will find it in one of the files, most probably one that is included/embedded in a layout file of some sort, as you need it across all pages of the site.
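If searching by hand is impractical, a small throwaway script can do that site-wide search. A minimal sketch, assuming the site's files live under a hypothetical /path/to/site and that you are looking for a literal string:

    <?php
    $root   = '/path/to/site';     // assumption: the site's document root
    $needle = 'UA-34035531-1';     // the string you are hunting for

    // Walk every file under $root and print the path of each one containing $needle.
    $it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($root));
    foreach ($it as $file) {
        if ($file->isFile() && strpos(file_get_contents($file->getPathname()), $needle) !== false) {
            echo $file->getPathname(), PHP_EOL;
        }
    }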

cURL: Check if a URL is a domain root

Hello, I am trying to make a little spider.
While I was building it, I came across a problem where I need to check whether a link points to the root of a domain or to an inner page.
For example:
http://www.domain.com or
http://domain.com
http://domain.com/index.php
http://domain.com/default.php
http://domain.com/index.html
http://domain.com/default.html
etc.
are all the same.
So I actually need a function that takes a URL string as input and checks whether it is the root (or homepage, whatever you like to call it) of a site.
As noted in comments, this is really a basic aspect of coding the spider. If you intend to code a general purpose spider, you'll need to add means to resolve URLs and detect if they point to the same content and in what way (through a redirect or simply through duplicate content), as well as what kind of content they point to.
You need at least to handle:
relative paths
GET variables that are in one way or another significant to the web page but do not produce differences in the content.
Malformed URLs.
JavaScript related information in the href attribute.
Links to non-HTML material -- direct download links to PDFs, images etc. (detecting this by extension isn't always enough, what with PHP scripts delivering images).
These are just some of the aspects, but it all comes down to the point that the kind of detection you're after has to be a fundamental part of the spider if you intend to use it in any kind of generic manner.
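As for the function the question asks for, here is a minimal sketch using parse_url, assuming that the paths listed in the question (/, /index.php, /default.php, /index.html, /default.html) are the only ones treated as the homepage:

    <?php
    // Does $url point at the root/homepage of its domain?
    function is_root_url(string $url): bool
    {
        $parts = parse_url($url);
        if ($parts === false || !empty($parts['query']) || !empty($parts['fragment'])) {
            return false;
        }
        $path = $parts['path'] ?? '/';
        // Paths the question treats as equivalent to the homepage.
        $homepages = ['', '/', '/index.php', '/default.php', '/index.html', '/default.html'];
        return in_array(strtolower($path), $homepages, true);
    }

    var_dump(is_root_url('http://www.domain.com'));        // bool(true)
    var_dump(is_root_url('http://domain.com/index.php'));  // bool(true)
    var_dump(is_root_url('http://domain.com/about.html')); // bool(false)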
