(PHP) Session login via URL

I once saw a friend's site which allowed you to see a private PHP file by browsing to a URL.
So, for example, http://example.com/accessthesite.php would show you the content that is hidden on http://example.com/index.php.
They basically used it for previewing changes to their site, as it hadn't fully launched yet.
So while /index.php had a coming-soon page up, if you browsed to /accessthesite.php you would be redirected to /index.php and shown the full website.
Any ideas how they did this?

It is likely that accessthesite.php sets a session variable or a cookie; index.php then contains code that recognizes this and displays an alternate view.
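A minimal sketch of that mechanism (the session key and the included file names are placeholders, not necessarily what your friend used):

    <?php
    // accessthesite.php - set a flag in the session, then send the visitor to the real site
    session_start();
    $_SESSION['preview_allowed'] = true;   // hypothetical flag name
    header('Location: /index.php');
    exit;

    <?php
    // index.php - show the full site only when the preview flag is present
    session_start();
    if (!empty($_SESSION['preview_allowed'])) {
        require __DIR__ . '/full-site.php';     // hypothetical file with the real content
    } else {
        require __DIR__ . '/coming-soon.php';   // what everyone else sees
    }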
I have used this method myself for various things, but I often find it easier to set up a second development and/or staging site. Pending changes can be viewed there before they are published to the public site. In general, it's not a good idea for developers to work directly on the public site, since there might be problems. You do not want to inconvenience users by displaying a site that isn't ready yet.

Related

WordPress site has a clone on a different URL

Well, I have a WordPress site. Today, when I checked the Google traffic reports, I spotted a backlink from a different URL. After I checked that URL, I was shocked: somebody has copied my website!
I think it used curl or file_get_contents(), and after fetching each page it replaced my URL with its own.
Google has already indexed this URL.
How can I block access from this bot?
Note: I checked database access and file permissions, and they are fine.
When I shut down my server, the clone site goes down too.
When I publish new content, the clone site shows it as well.
When I change the theme files' code, the clone site changes too.
I don't have any experience with WordPress, but it seems like this 'bot' copied your front end and is using your site as the back end so it can function in the same way.
What follows is guesswork about what the other site's owner is trying to do:
If you have a login page, then I think the copy site's owner is trying to phish the login data and then use it to get sensitive information from other sites that share the same credentials, since a lot of people reuse the same user/password pair on every site.
If this is the case, the first thing you should do is make sure that the clone does not show up in any search engine.
In the same situation, I would try to figure out the IP/domain of the copy site and then set up firewall rules to block all requests coming from there. This can't really prevent users from entering their login info on that site, but if you are lucky, your clients'/users' data won't be saved in the phishing database. That only happens if the phishing layer stores the username/password pair only when the login actually succeeds; it's more likely that the phishing database also has a success flag and an IP field, so failed attempts and spam can be filtered out.
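A minimal sketch of that kind of block at the PHP level, assuming you have already identified the clone's IP address (the address below is a placeholder):

    <?php
    // Put this near the top of the front controller (e.g. WordPress's index.php
    // or a must-use plugin) before any real work happens.
    $blockedIps = ['203.0.113.42'];     // placeholder: the clone server's real IP(s)

    if (in_array($_SERVER['REMOTE_ADDR'], $blockedIps, true)) {
        http_response_code(403);
        exit('Forbidden');
    }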
If the "phisher" is not saving IP field then the last thing you can do is to ruin the phishing database. For this you need to get the IP of the other site, and filter your incoming request. If the sender of the request has the other site's IP address, then instead of checking the login permission you should just send back a similar response, that you used to send, when one successfully logs in to your site. With this each successful and unsuccessful login will look the same in the phishing database. This still belongs to the defense category. Spamming the phishing database counts as attacking, so I don't recommend that.
By default the user's should be able to realize if their data is being phished, but as we know it's not used to happen, however you can't help much on the realization skills.
If you don't have any login page, then the situations is not that bad.
Copying a website is illegal, so you can do a lot more

Preventing anyone from accessing back-end server pages

I searched for the answer to my question but couldn't find exactly what I wanted.
If you find a duplicate of this, please send it to me!
I have a couple of files on my website that are used for background functions, and I don't want anyone to access them directly, not even the admin: for example, files like PHPMailer.php, login-inc.php, logout-inc.php and more.
I need a way to prevent anyone from accessing those pages directly without preventing them from working when triggered by buttons/forms.
I'm aware that a session check can redirect users who are not logged in, but here I need to prevent everyone from accessing the pages directly, by redirecting them or sending them to a 404 page.
What do I need to use to do that?
Thanks!
Update: I'm very new to web coding, so sorry for the confusing question. I wanted to block users from opening some pages directly by entering their location as a link, because I don't want users to be able to access tokens/passwords...
Using .htaccess solved my problem, thank you.
One way to protect your files from being served by the web server is to move them out of the site's webroot directory. That way there is no way for someone to access them with a web browser, yet you can still include them. It's a common solution.
Another way is to intercept web server requests and, for example, forbid some of them, redirect others, and so on. For the Apache web server you can do that in an .htaccess file (you have to allow overrides in the site configuration).
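If you can't move the files or change the server configuration, a common plain-PHP alternative (not the .htaccess approach itself) is a guard constant: real entry points define it, and include-only files refuse to run without it. A minimal sketch, with the constant name as an example:

    <?php
    // index.php (or any legitimate entry point)
    define('APP_RUNNING', true);        // example constant name
    require __DIR__ . '/login-inc.php';

    <?php
    // login-inc.php - refuses to run when opened directly in a browser
    if (!defined('APP_RUNNING')) {
        http_response_code(404);        // pretend the page does not exist
        exit;
    }
    // ...normal login handling continues here...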
For your specific case, with those buttons:
You'll have to use .htaccess (or an equivalent) to intercept all requests to those files, then redirect those requests to some PHP script while preserving the passed parameters.
That PHP script should then decide what to do with the request: reject it (redirect to a 404 page) or allow access.
For that, your buttons should pass some kind of pass code, so the PHP script can check, when it's called, whether a valid pass code was provided (allow access) or not (redirect to 404).
Making a pass code that can't be manipulated can be tricky, but generally you need to invent some formula to generate it (based, for example, on the current time) so the PHP script can use the same formula to check its validity; a sketch of this idea is shown below.
Another option is to perform some JS action when the button is pressed (e.g. write a cookie) and have the PHP script check for that action's result (cookie exists or not).
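A minimal sketch of such a time-based pass code, using an HMAC over the current time window (the secret, window length and file name are placeholders; this is just one possible formula):

    <?php
    // passcode.php - hypothetical shared helper
    const PASS_SECRET = 'change-me';   // keep this secret outside the webroot
    const PASS_WINDOW = 300;           // a code is valid for the current 5-minute window

    function make_pass_code(): string {
        $window = (int) floor(time() / PASS_WINDOW);
        return hash_hmac('sha256', (string) $window, PASS_SECRET);
    }

    function is_valid_pass_code(string $code): bool {
        foreach ([0, 1] as $offset) {  // also accept the previous window
            $window = (int) floor(time() / PASS_WINDOW) - $offset;
            $expected = hash_hmac('sha256', (string) $window, PASS_SECRET);
            if (hash_equals($expected, $code)) {
                return true;
            }
        }
        return false;
    }

The page with the buttons would embed make_pass_code() in a hidden field, and the script your .htaccess rule redirects to would call is_valid_pass_code() and send everything else to the 404 page.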

iPhone problems with PHP sessions

This is something that, in practice, I have not seen before.
I made a web app which (I thought) works beautifully on all devices. Last week I received a few complaints that one part of the application does not work. I have reviewed my code maybe a hundred times and have not found a mistake. The faulty behavior is that at some point the session expires or simply is not set, which should not be possible. The system was tested with a pile of users.
Today I received a response from a client who uses an iPhone 5, and it really does happen: the sessions are not working properly.
I use the session to force the user to open the pages in order, so there is no possibility of jumping from page to page. If the user tries to skip a page, they are sent back to the beginning and have to restart the process.
On the iPhone, the process returns the user to the start and stops there. It does not allow you to get past step 1, just keeps sending you back until you clear your cache.
This error happens randomly anywhere in the process.
To mention: the session is not deleted until the user reaches the end.
Is it possible that the iPhone has a problem with its browser, or is the error on my side?
Thanks!
This is what solved the same problem I was facing earlier. Maybe this will help.
Session problems on a login page can occur because the URLs you open in the browser are not consistent. For example, say you are creating a login page for your website and you have created sessions successfully. Now, if you log in from a URL such as http://geekzgarage.com, then your session is limited to that host only. If you then open the URL as http://www.geekzgarage.com (note the www. in the second URL), you will see that you are not logged in. So make sure your webpage always opens on a single form of the URL, either with www. or without it.
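If you do want the session to survive both forms of the URL, you can scope the session cookie to the whole domain before starting the session. A minimal sketch (the domain is a placeholder for your own):

    <?php
    // Make the session cookie valid for example.com and www.example.com alike.
    session_set_cookie_params(0, '/', '.example.com'); // lifetime, path, domain
    session_start();

Alternatively, redirect all traffic to one canonical host so the cookie is only ever set for a single hostname.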

Can web bots delete content from my website using links that are wired to GET requests?

OK, here is my problem: content is disappearing from my site. It's not the most secure site out there; it has a number of issues. Right now, every time I upload a page that can delete content from my site through simple links wired to GET requests, I find the corresponding content deleted en masse.
For example, I have functionality on my site for uploading images. Once a user uploads an image, the admin (the owner) can use another page to delete all (owned) images from the site. The delete functionality is implemented so that when a user clicks the link under a thumbnail of an uploaded image, a GET request is sent that deletes the image's record from the site's database and deletes the image from the server's file system.
The other day I uploaded that functionality, and the next morning I found all my images deleted. The pages are protected by user authentication when viewed in a browser. To my surprise, however, I could wget that page without any problem.
So I was wondering whether some evil web bot was deleting my content using those links. Is that possible? What do you advise for further securing my website?
It is absolutely possible. Even non-evil web bots could be doing it. The Googlebot doesn't know that the link it follows has any destructive side effects.
The easiest way to address this is to set up a proper robots.txt file telling bots not to visit specific pages. Start here: http://www.robotstxt.org/
RFC 2616 (HTTP protocol), section 9.1.1: Safe Methods:
The convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Basically, if your application allows deletion via GET requests, it's doing it wrong. Bots will follow public links, and they have no obligation to expect that doing so deletes things, and neither do browsers. If the links are protected, it could still be browser prefetching or acceleration of some kind. See the sketch at the end of this answer.
Edit: It might also be Bing. These days Internet Explorer sends data to Microsoft about every page you visit, to feed its search engine.
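A minimal sketch of moving the delete action onto an authenticated POST with a per-session token, so a crawler that merely follows links can no longer trigger it (file and field names are only examples):

    <?php
    // list-images.php - render a delete button (POST) instead of a plain GET link
    session_start();
    if (empty($_SESSION['csrf'])) {
        $_SESSION['csrf'] = bin2hex(random_bytes(32));
    }
    ?>
    <form method="post" action="delete-image.php">
        <input type="hidden" name="id" value="123">
        <input type="hidden" name="csrf" value="<?= htmlspecialchars($_SESSION['csrf']) ?>">
        <button type="submit">Delete</button>
    </form>

    <?php
    // delete-image.php - reject anything that isn't a POST carrying a valid token
    session_start();
    if ($_SERVER['REQUEST_METHOD'] !== 'POST'
        || !hash_equals($_SESSION['csrf'] ?? '', $_POST['csrf'] ?? '')) {
        http_response_code(403);
        exit;
    }
    // ...verify the logged-in user owns image $_POST['id'], then delete it...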
Typically, a search bot will scan a page for any links and peek down those links to see what pages are behind them. So yes, if a bot has access to that page, and the page contains links that delete items, and the bot opens those links to see what's behind them, the code simply gets triggered.
There are a couple of ways to block bots from scanning pages. Look into robots.txt implementations. Also, you might want to look into the mechanism/safety of your admin authentication system... ;-)
You can use the robots.txt file to block access for some web bots.
And for those that don't respect the robots.txt file, you can also use JavaScript (e.g. to build the links client-side); there shouldn't be many web bots interpreting it.

Make page public, then hide page from public based on settings. How?

I have user profiles on my site. Users can make a profile public by checking a checkbox (making it searchable via a search engine) and uncheck the box to block the page from being found through a search engine. The site is in PHP (CodeIgniter).
How is this accomplished? I am especially lost on how, when the user unchecks the box to make the page no longer public, that is handled, and how to do it in as close to real time as possible. Good examples are the profiles on Facebook or LinkedIn.
This isn't secure, but you could check the referring URL of visitors and allow/deny their requests by looking for a search engine's address. The results would still show up in Google, and there would be page caching (which you can somewhat stop with a <meta> tag).
Basically, warn users that once they make the page public, it's not that easy to make it private again. I'd make it a tedious and painful process, as people will complain that "your site is brokeded".
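One concrete piece of this is emitting a robots meta tag based on the profile's setting, so search engines drop the page the next time they crawl it. A plain-PHP sketch (the $profile array and its is_public flag are hypothetical names, not CodeIgniter API):

    <?php // inside the profile page's <head>
    if (empty($profile['is_public'])) {
        // Ask search engines not to index or cache this private profile.
        echo '<meta name="robots" content="noindex, noarchive">';
    }

Removal still only takes effect when the page is recrawled, so it is not instantaneous.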
The fastest way to get pages automatically removed from Google:
set a new last-modified date on that specific page
return an HTTP 410 for the page URL (you can still display content on it, or make the HTTP 410 specific to the Google user agent, to requests without cookies, or to requests without a language header); a sketch of this step follows below
put the URL into a sitemap.xml (with the new last-modified date)
ping Google with the sitemap.xml
With this method you can make pages disappear from the Google SERPs in 1 to 2 hours (if your site gets crawled regularly).
Note: if you want to kick out thousands and thousands of pages at once, this process will take longer.
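A minimal sketch of the HTTP 410 step in plain PHP, restricted to Google's crawler as suggested above (the user-agent check is a simple substring match, for illustration only):

    <?php
    // Top of the page that should be dropped from the index.
    $isGooglebot = stripos($_SERVER['HTTP_USER_AGENT'] ?? '', 'Googlebot') !== false;

    if ($isGooglebot) {
        http_response_code(410); // "Gone": signals the page has been permanently removed
    }
    // The page can still render its normal content below for human visitors.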
