I am implementing a classroom check-in system that is tied to specific desktops. Unfortunately, all I have to work with is a public-facing web server, and I don't want students to be able to copy the check-in URL and falsify check-ins, or to log in with staff credentials and gain access to other tools on the site. The computers are also on a network where DHCP regularly reassigns IPs, so pinning on IP is not a reliable method of client validation. So I was thinking of evercookies: a staff member could log into the check-in website from the computer, set an evercookie, then log out to prevent that login from being used to access other tools on the website. When the check-in site is loaded, it checks for the evercookie, and as long as a certain threshold is met, the check-in page is presented. This has the added benefit of bypassing PHP/Apache's session timeouts.
Or am I barking up the wrong tree, and is there a better way to fingerprint the authorized client?
Relying on an evercookie leaves you open to cookie hijacking.
In your case, someone could steal the evercookie id, and use it from another machine, making your application believe it's receiving requests from one of the specific desktops when it is not. The evercookie id could be stolen by a sophisticated user directly from the machine.
Using a strong cookie id, a strong hash, lots of entropy, etc., will not help in this case.
Changing the evercookie identifier often would invalidate any previously stolen cookies. However, this would require that someone manually intervene to regenerate the cookies periodically. This could be automated, and the updated cookie IDs could be pushed to your server with custom software over a secure connection, but that opens the possibility of the software being stolen and used on another machine.
As a rule of thumb, depending on the client to uniquely identify itself is not reliable.
If your IP addresses are assigned via DHCP, but come from some predictable set, you could implement IP checking based on a known range of IPs.
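If that predictable set is, say, one or two subnets, the check on the PHP side can be a few lines. A minimal sketch, assuming IPv4 and using 10.1.2.0/24 as a placeholder for your actual classroom subnet:

```php
<?php
// Sketch: allow check-ins only from a known IPv4 subnet.
// 10.1.2.0/24 is a placeholder; substitute the range your DHCP server hands out.
function ip_in_cidr(string $ip, string $cidr): bool
{
    [$subnet, $bits] = explode('/', $cidr);
    $mask = -1 << (32 - (int) $bits);
    return (ip2long($ip) & $mask) === (ip2long($subnet) & $mask);
}

if (!ip_in_cidr($_SERVER['REMOTE_ADDR'], '10.1.2.0/24')) {
    http_response_code(403);
    exit('Check-in is only available from classroom machines.');
}
// ...render the check-in page...
```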
You could deploy custom software on the machine and "handshake" to it from the server. It could generate a unique id based on the hard drive serial number, a MAC address, etc. However, your custom software could be stolen and installed elsewhere by a sophisticated user, or reverse-engineered.
Which technology is used to maintain browser or machine identification even after all browser history and cookies have been removed?
Real-time scenario:
I log in to Axis internet banking from my machine using the Firefox browser; during the first login it asks a security question. After that I remove all my Firefox history and cookies, but when I log in to Axis internet banking again from the same machine, it doesn't ask any security question.
When accessing it from a different machine for the first time, it asks the security question again, even though both systems are using the same internet connection.
Which technology is available to identify the user's machine even after all the client's browser history and cookies have been removed?
Thanks in advance
As far as I know, websites use two ways to check whether a user has already visited the site and, if so, perform the desired actions.
They use:
Cookies
They store cookies in your browser and then retrieve them when needed.
IP recognition
They store your IP, so even if you delete your cookies they still have in their database the information they had stored in the cookies, and they can re-add those cookies.
How to avoid this?
You can avoid being recognized by the website by turning your modem off for 10-15 minutes if you have a dynamic IP address. If you have a static one, or your dynamic address does not change very often (it depends on your ISP), you could use a proxy.
You can use local storage for this purpose; try this link.
I have to build a login on the website that will be bound to a specific system.
e.g. a user can log in only from the machine they have registered.
I will not be able to get the MAC address or any other unique identifier through PHP that I can bind to that login.
I have hosted the website on shared hosting, so I cannot install any new libraries. Standard libraries are available.
Let me know the best possible ways to do this.
There is no unique ID of the machine, at all. MAC addresses don't cross network boundaries, and IP addresses are neither unique nor stable. Beyond that you only have what you get in the HTTP headers, which is not unique or permanent either.
The best you can do is set a non-expiring cookie on the machine with a unique id and depend on that; i.e. make your own id. That'll create problems when the user switches browsers or machines or just cleans the cookie storage though, so make sure you have a procedure in place for this case. Cookies may also be hijacked, so prepare a procedure for this case too and transact everything over SSL secured connections to minimize the likelihood of that occurring.
In short: everything depending on the physical machine of the user is problematic on the web, since the web is explicitly designed to be client-agnostic. Machine ids cannot be depended upon for security. Registering a machine using a unique cookie can be a useful part in strengthening traditional username/password security, but it's not a replacement.
The best is typically to base your security on traditional usernames/passwords (+ second factor authentication if possible), and use the unique cookie as additional tool. For example, every time the user logs in from an unknown machine, you require an email/SMS feedback loop to "register" the machine.
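If you go the cookie route, a minimal sketch of the "register this machine" step might look like the following. It assumes PHP 7.3+, a MySQL-flavoured database, and a hypothetical known_devices table; storing only a hash of the token means a leaked database doesn't hand out valid device cookies.

```php
<?php
// Sketch only: issue a long-lived "known device" cookie after the user has
// completed the email/SMS verification step. Table and column names are
// hypothetical; adjust to your schema.

function register_device(PDO $db, int $userId): void
{
    $token = bin2hex(random_bytes(32));                // 256-bit random device id
    $stmt  = $db->prepare(
        'INSERT INTO known_devices (user_id, token_hash, created_at)
         VALUES (?, ?, NOW())'
    );
    $stmt->execute([$userId, hash('sha256', $token)]); // store only a hash
    setcookie('device_token', $token, [
        'expires'  => time() + 60 * 60 * 24 * 365,     // roughly one year
        'secure'   => true,                            // HTTPS only
        'httponly' => true,                            // not readable from JS
        'samesite' => 'Lax',
    ]);
}

function device_is_known(PDO $db, int $userId): bool
{
    if (empty($_COOKIE['device_token'])) {
        return false;
    }
    $stmt = $db->prepare(
        'SELECT 1 FROM known_devices WHERE user_id = ? AND token_hash = ?'
    );
    $stmt->execute([$userId, hash('sha256', $_COOKIE['device_token'])]);
    return (bool) $stmt->fetchColumn();
}
```

On each login you would call device_is_known(); if it returns false, trigger the email/SMS loop and then call register_device().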
I want to make a php webpage accessible from only one computer.
IP checking isn't suitable for that (Dynamic IP).
I could set a cookie (with no expiration date) with a token. Then I could check if the cookie has the correct token and display the page, else I could die(). I think that this isn't a secure solution, because a cookie can be stolen, can't it?
So, what to do?
P.S. Obviously I can't login every time.
So here are a couple of options:
Client side certificates
Create a client-side certificate and configure your web server to authenticate using client certificates. Problem solved. In the future, if you need more computers to connect to the server, give them client certificates as well.
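If the web server is Apache with mod_ssl, the PHP side of that check could look roughly like this sketch. The vhost directives in the comment and the certificate subjects are illustrative, not a drop-in configuration; dump $_SERVER once on your own setup to see the exact DN format your Apache version produces.

```php
<?php
// Sketch: application-side check of a client certificate that Apache/mod_ssl
// has already verified. Assumes a vhost configured roughly like:
//   SSLVerifyClient      require
//   SSLCACertificateFile /path/to/your-ca.crt        (placeholder path)
//   SSLOptions           +StdEnvVars
if (($_SERVER['SSL_CLIENT_VERIFY'] ?? '') !== 'SUCCESS') {
    http_response_code(403);
    exit('A valid client certificate is required.');
}

// Optionally pin access to particular certificate subjects (values are examples).
$allowedSubjects = ['CN=client-pc-01', 'CN=client-pc-02'];
if (!in_array($_SERVER['SSL_CLIENT_S_DN'] ?? '', $allowedSubjects, true)) {
    http_response_code(403);
    exit('This certificate is not authorized.');
}
// ...serve the page...
```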
IP-based: using Dynamic DNS
Give your computer a dynamic-DNS name (myclient.dyndns.com) and install a dyndns client on it. The dyndns client keeps checking its own IP and updates the nameserver entry whenever your computer's IP changes. On the server side, all you need to check is whether the IP the requester presents is the same as what myclient.dyndns.com resolves to, and allow access if it is.
A slight gotcha with this one is that there is a small (configurable) window of time between when the IP changes and when the dyndns client pushes the change to the nameserver. So whenever your IP changes, until the dyndns client on your computer detects it and updates the nameserver, your server will not allow any requests from your computer, because your computer will present the new IP while myclient.dyndns.com still resolves to your old IP. This window can be made as small as you want (even 1 second). The other small gotcha is that during this n-second window, any random computer that gets assigned your old IP by the ISP could access your server. The probability of this is very small, but it's worth mentioning as a possibility.
There are many free dynamic DNS services out there; you can Google them.
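On the PHP side, the comparison could be as small as this sketch. The hostname is the one from the example above; note that gethostbyname() does a fresh lookup on every request, so you may want to cache the result for a few seconds in a busier setup.

```php
<?php
// Sketch: allow the request only if the caller's IP matches what the
// dynamic-DNS name currently resolves to.
$expectedIp = gethostbyname('myclient.dyndns.com'); // returns the name unchanged on lookup failure
$clientIp   = $_SERVER['REMOTE_ADDR'];

if ($expectedIp === 'myclient.dyndns.com' || $expectedIp !== $clientIp) {
    http_response_code(404);   // plain 404, per the note at the end of this answer
    exit;
}
// ...serve the page...
```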
Cookie Based
You could use cookies. However as you correctly identified, cookies can be stolen. Now, there are two ways they can be stolen:
Copying the cookie off the computer: someone who has access to the computer can copy that specific cookie and impersonate your computer to your web server. If this is possible (if potential malicious users can remote-desktop into or physically access your computer), then a cookie-based solution is not for you.
Sniffing over the network: cookies can easily be sniffed over the network. An easy way to prevent sniffing is to enable SSL. Given that you are confident the cookie cannot be copied off the computer, the cookie+SSL option works in your case. At that point it is just like a shared secret key; whether you pass it via a cookie or a query string doesn't really matter, though cookies are preferred because they aren't normally logged in browser history or web server logs.
Also just a thought: For all the computers that aren't authenticated, send a standard 404 response rather than some custom "Access denied" page. This way anyone who is running a crawler/bot/scanner on your site will not be intrigued by this custom response and will not attempt to circumvent your security controls.
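Put together, the cookie-as-shared-secret check plus the plain-404 response could look something like this sketch. The token hash and the 404.html file are placeholders; generate your own long random secret and serve whatever your standard 404 body is.

```php
<?php
// Sketch: treat the cookie as a shared secret and answer with a plain 404
// for anyone who doesn't present it.
const EXPECTED_TOKEN_HASH = 'replace-with-sha256-of-your-secret'; // placeholder

$token = $_COOKIE['client_token'] ?? '';
if (!hash_equals(EXPECTED_TOKEN_HASH, hash('sha256', $token))) {
    http_response_code(404);           // look like any other missing page
    include __DIR__ . '/404.html';     // hypothetical standard 404 body
    exit;
}
// ...authorized content...
```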
Couldn't you just use a unique passphrase as a parameter in the uri?
e.g. http://www.example.com/index.php?passphrase=sfauh452h8243nf2489ht8924t48nf3984
and check to see if it matches the one stored in the server?
Well, I take it that whether it's you or someone else, you only need that specific client (computer) to be able to access the page.
Either way, the first time there must be some sort of registration. Maybe the example URI above works like this:
you request: http://www.example.com/index.php?passphrase=sfauh452h8243nf2489ht8924t48nf3984
the passphrase is checked for correctness, and a boolean flag is stored on the server so that it is never possible to "register" again.
If it is correct, a cookie is generated with a unique key.
This same key is also stored on the server (in a file, a database or similar).
Therefore, on subsequent requests, you just compare the key stored on the server with the key in the cookie, and you know which client it is.
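As a rough sketch of that flow in PHP (the storage file, cookie name, and the passphrase from the example URL are all placeholders; a database row would work just as well):

```php
<?php
// Sketch of the one-time registration flow described above.
$registeredKeyFile = __DIR__ . '/registered_key.txt';               // hypothetical storage
$passphrase        = 'sfauh452h8243nf2489ht8924t48nf3984';          // from the example URL

if (!file_exists($registeredKeyFile)) {
    // No machine registered yet: accept the passphrase exactly once.
    if (($_GET['passphrase'] ?? '') === $passphrase) {
        $key = bin2hex(random_bytes(32));
        file_put_contents($registeredKeyFile, hash('sha256', $key)); // store only a hash
        setcookie('machine_key', $key, time() + 60 * 60 * 24 * 365, '/', '', true, true);
        echo 'Machine registered.';
        exit;
    }
    http_response_code(403);
    exit;
}

// Already registered: only the machine holding the cookie may proceed.
$key = $_COOKIE['machine_key'] ?? '';
if (!hash_equals(file_get_contents($registeredKeyFile), hash('sha256', $key))) {
    http_response_code(403);
    exit;
}
// ...serve the page...
```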
I am currently working with two web servers, one ColdFusion and the other PHP.
Right now, the ColdFusion server is my main server, where users log in to access restricted data.
However, I have also begun using a PHP server and want to make it transparent for users to access a specific page on that server; that server requires login information as well.
I do not want the users to log in twice.
Is there a way to accomplish this?
Thanks
UPDATE: Working in an Intranet environment, so I can't use any public solution.
UPDATE: The reason I am asking is that we are moving from a MSQL/ColdFusion environment (the initial server) to a PHP/Oracle environment (the new server), so I have two user tables as well (although they contain mostly the same information).
I am trying to phase out the use of our initial server in favor of the new server, transparently to the users, so I have to run both in parallel for the time being.
Most single-sign-on solutions work a bit like this...
Main system authenticates the user
User initiates a move to system 2
Main system authenticates the user with system 2 in the background
System 2 supplies a random, long and disposable token to Main system
Main system redirects the user, with the token, to system 2
System 2 checks the token (and other factors such as IP address) to validate the session
System 2 disposes of the token to ensure it can't be replayed
You would want to ensure that the transmission channels have some security on them, especially where the main system and system 2 talk to each other; that link should be a secure transport. A rough sketch of the token handoff on the PHP side follows.
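Here is what the issue/consume steps might look like on the PHP ("system 2") side, assuming the two systems share a MySQL-flavoured database and a hypothetical sso_tokens table (all names and the 60-second lifetime are placeholders):

```php
<?php
// Sketch of the token handoff on the PHP ("system 2") side.

// Called server-to-server by the main system (over a secure channel):
// create a short-lived, single-use token bound to the user and their IP.
function issue_sso_token(PDO $db, string $username, string $clientIp): string
{
    $token = bin2hex(random_bytes(32));
    $stmt  = $db->prepare(
        'INSERT INTO sso_tokens (token, username, client_ip, expires_at)
         VALUES (?, ?, ?, DATE_ADD(NOW(), INTERVAL 60 SECOND))'
    );
    $stmt->execute([$token, $username, $clientIp]);
    return $token;
}

// Hit by the user's browser after the redirect: validate and dispose of the token.
function consume_sso_token(PDO $db, string $token, string $clientIp): ?string
{
    $stmt = $db->prepare(
        'SELECT username FROM sso_tokens
          WHERE token = ? AND client_ip = ? AND expires_at > NOW()'
    );
    $stmt->execute([$token, $clientIp]);
    $username = $stmt->fetchColumn();

    // Dispose of the token either way so it can never be replayed.
    $db->prepare('DELETE FROM sso_tokens WHERE token = ?')->execute([$token]);

    return $username === false ? null : $username;
}
```

If consume_sso_token() returns a username, system 2 starts its own session for that user; if it returns null, fall back to a normal login.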
Store sessions in a database, and share them between the two apps.
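One way to do the PHP half of that (the ColdFusion side would need a matching reader/writer) is a custom session save handler backed by a shared table. This is only a sketch: it assumes PHP 8+, MySQL, and a hypothetical shared_sessions table, and note that PHP serializes session data in its own format, which the other application has to be able to parse.

```php
<?php
// Sketch: store PHP sessions in a shared database table instead of local files.
class DbSessionHandler implements SessionHandlerInterface
{
    public function __construct(private PDO $db) {}

    public function open(string $path, string $name): bool { return true; }
    public function close(): bool { return true; }

    public function read(string $id): string
    {
        $stmt = $this->db->prepare('SELECT data FROM shared_sessions WHERE id = ?');
        $stmt->execute([$id]);
        return (string) $stmt->fetchColumn();          // '' when no row exists
    }

    public function write(string $id, string $data): bool
    {
        $stmt = $this->db->prepare(
            'REPLACE INTO shared_sessions (id, data, updated_at) VALUES (?, ?, NOW())'
        );
        return $stmt->execute([$id, $data]);
    }

    public function destroy(string $id): bool
    {
        return $this->db->prepare('DELETE FROM shared_sessions WHERE id = ?')
                        ->execute([$id]);
    }

    public function gc(int $max_lifetime): int|false
    {
        $stmt = $this->db->prepare('DELETE FROM shared_sessions WHERE updated_at < ?');
        $stmt->execute([date('Y-m-d H:i:s', time() - $max_lifetime)]);
        return $stmt->rowCount();
    }
}

$pdo = new PDO('mysql:host=localhost;dbname=shared', 'user', 'pass'); // placeholder credentials
session_set_save_handler(new DbSessionHandler($pdo), true);
session_start();
```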
You could use XML-RPC to get the user data and log the user into the other site when they have a login cookie for the first one, and vice versa.
PHP manual page for XML-RPC
Here is what I have done: running my own game server, I had users on SQL Server and on MySQL, and wanted to integrate them both.
I made sure that if a user was created on one system, it was also created on the other.
So you can modify the code in both applications to automatically create a user in the other system whenever one is created here.
Depending on whether both servers share a domain, you may be able to do cross-domain sessions or cookies... but my best guess is to store and retrieve the data.
Or..
As a person logs in or registers, record their current IP address on both servers, then check whether this person was on the other server within the last 2-5 minutes; if so, use the IP address to identify them.
This system is tricky because the timing matters, so that you're not leaving a huge hole in your security... but for the short term, going between servers, this is the simplest solution, in my opinion.
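A minimal sketch of that lookup on the second server (the table name, window length, and credentials are placeholders; both servers would need access to the same table or a copy kept in sync):

```php
<?php
// Sketch of the "recent login from the same IP" idea described above.
function recently_logged_in_elsewhere(PDO $db, string $clientIp, int $windowSeconds = 120): ?string
{
    $stmt = $db->prepare(
        'SELECT username FROM recent_logins
          WHERE ip_address = ? AND logged_in_at > ?
          ORDER BY logged_in_at DESC
          LIMIT 1'
    );
    $stmt->execute([$clientIp, date('Y-m-d H:i:s', time() - $windowSeconds)]);
    $username = $stmt->fetchColumn();
    return $username === false ? null : $username;
}

// On the second server, before showing a login prompt:
$pdo  = new PDO('mysql:host=localhost;dbname=shared', 'user', 'pass'); // placeholder credentials
$user = recently_logged_in_elsewhere($pdo, $_SERVER['REMOTE_ADDR']);
if ($user !== null) {
    // Treat the visitor as $user without asking for credentials again.
}
```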
Good Luck.
If you are on an intranet, you can actually pick up the user's network username from the PC they are logged into, using PHP. This assumes that:
You are using IIS to host your PHP application.
Your users are using Windows.
Check the section "2.2 Enabling Support for Detecting Usernames" here.
After that, all you need to do is investigate if the same is possible from Coldfusion, and you have the basis of an SSO solution based on the network usernames.
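Once that is set up (anonymous access disabled, Windows/Integrated authentication enabled in IIS), the PHP side is just reading a server variable. Which variable is populated depends on the IIS configuration, so the fallbacks in this sketch are defensive rather than definitive:

```php
<?php
// Sketch: read the Windows network account that IIS has already authenticated.
$networkUser = $_SERVER['REMOTE_USER']
    ?? $_SERVER['AUTH_USER']
    ?? $_SERVER['LOGON_USER']
    ?? null;

if ($networkUser === null) {
    exit('Windows authentication does not appear to be enabled.');
}

// Usually "DOMAIN\username"; strip the domain prefix if you only store usernames.
$username = preg_replace('/^.*\\\\/', '', $networkUser);
```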
How about implementing an OpenID solution, much like the one apparent on StackOverflow?
You may benefit from dropping a shared object on the client machine via Flash or Flex. This object could then be read from ColdFusion/PHP/Python on servers that otherwise had no connection to each other or access to a common database.
Here is a simple example from the Adobe Docs
Maintain local persistence. This is the simplest way to use a shared object, and does not require Flash Media Server. For example, you can call SharedObject.getLocal() to create a shared object in an application, such as a calculator with memory. When the user closes the calculator, Flash Player saves the last value in a shared object on the user's computer. The next time the calculator is run, it contains the values it had previously. Alternatively, if you set the shared object's properties to null before the calculator application is closed, the next time the application runs, it opens without any values.
Another example of maintaining local persistence is tracking user preferences or other data for a complex website, such as a record of which articles a user read on a news site. Tracking this information allows you to display articles that have already been read differently from new, unread articles. Storing this information on the user's computer reduces server load.
Full Information: http://livedocs.adobe.com/flex/3/langref/flash/net/SharedObject.html
I roamed the site for this question using the search engine, and I don't think it's out there. If it is, apologies in advance and feel free to point me to it.
Here is my scenario:
I am setting up a web application, Moodle if anyone is familiar with it, with Apache, MySQL, and PHP on Windows. Moodle supports enabling SSL for login, but then reverts to regular HTTP after the login is established. This is running on an internal network with no connection to the outside world, so there is no Internet access through this network. All users who use the network have logins; however, there are some generic guest-type logins with certain restricted privileges. Currently the MySQL database is not encrypted.
My question is this:
If my users do an SSL login, and the system then reverts back to http for the remainder of their session, how vulnerable is the data that is transferred back and forth between the browser interface and the database?
I would perhaps prefer to have all the data encrypted, but I am not sure how bad the performance hit would be, so any suggestions on that would be appreciated too. Although I will be extending the functionality in Moodle, I don't necessarily want to have to change it to encrypt everything if it already does.
I am new to the world of IT security, and my DBA skills are rusty, so if you give me an answer, type slowly so I can understand! ;)
Thanks in advance!
Carvell
A few things.
The fact that the data on the DB server is not encrypted is in no way a factor in the communication between the user and the web server. It is obviously a concern for the communication between the web server and the database server.
Your risk point between the user and the web server is that packets could be sniffed if someone were able to insert themselves into the middle of the communication chain. However, this risk is mitigated by the fact that you're on an internal network.
Therefore, unless you are VERY concerned about the other people in your organization, you are more than likely OK. However, if it is really sensitive data, you might do ALL communication via SSL to ensure that it is transmitted securely. If you are that concerned, then I would also look at the security of the DB and the communication from the DB to the web server.
My concern would be how your authenticated sessions are propagated.
Generally a session works by setting a cookie or appending a session id to any URLs presented by the web site. Once a log-in has been established, often the credentials aren't needed any more, as a session is then linked to the user and deemed to be authenticated, and the existence of the session itself is proof of a successful authentication.
However, as previous posters have mentioned, local network traffic can be available for sniffing. If someone sniffed a session ID, they could recreate the cookie or URLs using that session ID and simply access the site as that session's user, even changing the user's password if that option was available.
I would say that your decision here rests on the security of your sessions. If you have some mitigating factors in place to make sessions difficult to replicate even if a session id is compromised (ie. comparison to ip addresses, etc), or your user accounts are relatively secure from a compromised session (eg. require current password to change account settings), then perhaps SSL after login isn't required. However, if you have doubts and can afford the performance hit, then having SSL throughout the site will guarantee that your sessions can't be compromised (as far as you can guarantee SSL, anyway).
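At the plain-PHP level, the mitigations mentioned above (SSL everywhere, harder-to-replay session cookies, an IP comparison) look roughly like the sketch below. Moodle has its own configuration settings for HTTPS and sessions, so treat this as an illustration of the idea rather than something to paste into Moodle; the array form of session_set_cookie_params() needs PHP 7.3+.

```php
<?php
// Sketch of session-hardening measures, independent of any particular application.

// Refuse to run over plain HTTP at all.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}

// Make the session cookie harder to steal or replay.
session_set_cookie_params([
    'secure'   => true,   // never sent over plain HTTP
    'httponly' => true,   // not readable from JavaScript
    'samesite' => 'Lax',
]);
session_start();

// Cheap extra check: tie the session to the client's address.
if (isset($_SESSION['ip']) && $_SESSION['ip'] !== $_SERVER['REMOTE_ADDR']) {
    session_destroy();
    exit('Session/IP mismatch.');
}
$_SESSION['ip'] = $_SERVER['REMOTE_ADDR'];
```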
With no internet access to this network, the only thing that could potentially happen is someone else (who is already on the internal network) snooping on another user's HTTP traffic. If someone were to actually do that, and you aren't using SSL, they could read all the data that your website is sending/receiving from that user. But is that actually a concern?
Since you are on an internal network, turning on SSL for the whole site should not be that bad performance-wise, although it is probably unnecessary.
At the very least, you should encrypt the data in your database.
All sensitive data should be encrypted when transferred over an insecure wire. If you just transfer login details over SSL, all your data is still vulnerable to eavesdropping.
Since the data's not encrypted, anybody with sufficient network access (i.e. physical access) can read the data passing back and forth from server to browser and back. As long as everyone who has physical access to the network also has authorization to read the data, you're probably alright. If any of the information is sensitive, and should be restricted to being viewed by a subset of people who have physical access to the network, then you need to encrypt it.
Anyone on your network would be able to see everyone else's traffic with a network packet sniffer like Wireshark. The connection between your web server and MySQL is also in cleartext, although MySQL may not actually send passwords in cleartext; it may send a hash, for instance.
If you are really trying to be paranoid, you may not need to run your app over HTTPS. There are other lower-level possibilities like IPSec. Since this is an internal network, you can probably get away with implementing this on all workstations.
Not much to add to the correct responses above. But one thing you can do is use a threat modeling tool for your application. That will show you the types of threats you expose your data to by not using transport-level encryption (TLS/SSL). Once you understand the threats, you can decide on an appropriate risk mitigation plan.