How to allow only certain devices to access web site - php

We are developing an in-house, web-based application for viewing data reports, targeting smartphones and tablets. Our customer has asked whether it is possible to restrict access to the content to certain devices only. Since we use JavaScript/HTML5 on the client, we are not able to read a unique ID such as the IMEI or the device UUID. On the other hand, we could use server-side technologies like ASP or PHP to achieve this.
I have several ideas, but none of them leads to the desired result (one is discussed here: Persistent client-side web storage).
Do you have any idea how to allow only certain devices to access a web site?

Such access control would only be "secure" if a traditional login method is implemented on top of it, i.e. users (1) need to sign in with username and password, but (2) they can only do so on specific devices.
Step (1) is what makes access basically "secure"; step (2) on its own would only make breaking into your app a little harder for people who hardly have a clue what they're doing.
(Without the second step, people could attempt to brute force the login form when they know its URL, without sniffing any other network traffic.)
You could certainly fingerprint the user agent (UA) string and possibly other HTTP headers, and check them on the server side, assuming the mobile browser app isn't constantly updated and therefore doesn't constantly change its UA string (keeping up with that could be a hassle).
You could also create a very simple native mobile app for the target platform(s), consisting only of the platform's default web browser widget, with your app's URL built in as the default page.
You could then control the URLs and possibly HTTP headers, and add special, secret authentication headers or URL parameters (e.g. device's IMEI), for which you check on the server side.
If you target Android, you don't necessarily need to rely on Google Play; you can also distribute the APK files from one of your own servers, making the app available only to the intended audience.
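If you go the wrapper-app route, the server-side check itself can stay very small. Below is a minimal PHP sketch of that idea; the header name (X-Device-Id) and the whitelist are purely illustrative, since the real values depend on what you build into the wrapper app:
<?php
// Hypothetical check for a secret header sent by the wrapper app.
// Header name and device list are illustrative, not a fixed convention.
$allowedDevices = ['356938035643809', '490154203237518']; // e.g. IMEIs of approved devices

$deviceId = $_SERVER['HTTP_X_DEVICE_ID'] ?? '';

if (!in_array($deviceId, $allowedDevices, true)) {
    http_response_code(403);
    exit('This device is not authorised to view the reports.');
}
// ...continue serving the report for approved devices...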

AFAIK you only have the User-Agent to work with, plus maybe some JavaScript values of the kind used for fingerprinting.
The User-Agent should give you a lot to go on, but it can easily be spoofed. And so can the JavaScript values.
I don't think there is a secure way to do what you want. But then again, I don't know if you really want it that secure.
What you could also do is not make it 100% browser based, but create a mobile app (such as in the Apple App Store / Google Play Store). There, I think, you can request access to more variables that identify the device.

Try the lightweight php-mobile-detect here (server-side checking is always better): https://code.google.com/p/php-mobile-detect/
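For reference, basic usage of that library looks roughly like this (class and method names follow its documented basic usage); note that it only tells you the device class, not which individual device it is:
<?php
// Rough usage sketch of the Mobile_Detect library linked above.
require_once 'Mobile_Detect.php';

$detect = new Mobile_Detect();

if ($detect->isMobile() || $detect->isTablet()) {
    // serve the report view intended for handheld devices
} else {
    // optionally refuse desktop browsers entirely
    http_response_code(403);
    exit('Please open this page on an approved mobile device.');
}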

Related

Is it possible to locate a smartphone through its SIM number in use programmatically through PHP?

I have to develop a web-based system (HTML, JavaScript, jQuery, PHP) which, in case of a blood request from a hospital, will send an SMS to nearby donors whose information is already in the database (donors near the hospital where blood is needed).
In that context I would like to know whether a device can be located using its SIM number through PHP. If not, what other alternative/mechanism can be used to locate a device from a website?
No. A web browser on a smartphone, like its desktop counterpart, only has limited access to the device on which it runs. It would be an enormous privacy and legal nightmare to give websites unrestricted access to device-level hardware identifiers.
Some people have tried to implement "undeletable cookies" (and such enterprises are widely considered highly unethical, if not necessarily illegal in all jurisdictions). However, even they do not tie a device to a personal identity, or indeed to an ever-changing location.
If you want to track a user's location using GPS, then:
(a) write an Android/iOS app to do that in the background, or see if there is an existing OS-level service or app to do that for you,
(b) and ensure the user gives permission for that data to flow from their device on a regular basis,
(c) and allow the user to opt-out of this tracking at any time.
You will need to consider your users' battery consumption here, too, since usage of the GPS subsystem tends to be power-hungry.
Alternatively you could look at browser geolocation and send this information back to your server periodically, though note that this does not necessarily use GPS (and so can be inaccurate) and only runs while your website is open in the browser. While this approach is quick to implement, it is therefore not a very reliable way of getting geolocation information on a long-term basis.
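If you do go that route, the server half is simple; here is a hypothetical PHP endpoint (names and storage are illustrative) that the page could POST coordinates to every few minutes via navigator.geolocation and AJAX:
<?php
// save_location.php - illustrative endpoint receiving periodic coordinates.
session_start();

$lat = filter_input(INPUT_POST, 'lat', FILTER_VALIDATE_FLOAT);
$lng = filter_input(INPUT_POST, 'lng', FILTER_VALIDATE_FLOAT);

if ($lat === null || $lat === false || $lng === null || $lng === false) {
    http_response_code(400);
    exit;
}

// store alongside the logged-in donor's ID (assumed to be kept in the session)
$line = sprintf("%s,%s,%F,%F\n", date('c'), $_SESSION['donor_id'] ?? 'unknown', $lat, $lng);
file_put_contents(__DIR__ . '/locations.csv', $line, FILE_APPEND | LOCK_EX);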

Is there any reliable way to identify the user machine in a unique way? [duplicate]

I need to figure out a way uniquely identify each computer which visits the web site I am creating. Does anybody have any advice on how to achieve this?
Because I want the solution to work on all machines and all browsers (within reason), I am trying to create a solution using JavaScript.
Cookies will not do.
I need the ability to basically create a GUID which is unique to a computer and repeatable, assuming no hardware changes have happened to the computer. Directions I am thinking of are getting the MAC address of the network card and other information of this nature which would identify the machine visiting the web site.
Introduction
I don't know if there is or ever will be a way to uniquely identify machines using a browser alone. The main reasons are:
You will need to save data on the user's computer. This data can be deleted by the user at any time. Unless you have a way to recreate this data that is unique for each and every machine, you're stuck.
Validation. You need to guard against spoofing, session hijacking, etc.
Even if there are ways to track a computer without using cookies there will always be a way to bypass it and software that will do this automatically. If you really need to track something based on a computer you will have to write a native application (Apple Store / Android Store / Windows Program / etc).
I might not be able to give you an answer to the question you asked, but I can show you how to implement session tracking. With session tracking you try to track the browsing session instead of the computer visiting your site. When tracking sessions, your database schema would look like this:
session:
    sessionID: string
    // Global session data goes here
    computers: [{
        BrowserID: string
        ComputerID: string
        FingerprintID: string
        userID: string
        authToken: string
        ipAddresses: ["203.525....", "203.525...", ...]
        // Computer session data goes here
    }, ...]
Advantages of session based tracking:
For logged-in users, you can always generate the same session ID from the user's username / password / email.
You can still track guest users using the sessionID.
Even if several people use the same computer (e.g. in a cybercafé) you can track them separately if they log in.
Disadvantages of session based tracking:
Sessions are browser based and not computer based. If a user uses 2 different browsers it will result in 2 different sessions. If this is a problem you can stop reading here.
Sessions expire if the user is not logged in. A user who is not logged in gets a guest session, which will be invalidated if the user deletes their cookies and browser cache.
Implementation
There are many ways of implementing this. I don't think I can cover them all, so I'll just list my favourite, which makes this an opinionated answer. Bear that in mind.
Basics
I will track the session by using what is known as a forever cookie. This is data which will automagically recreate itself even if the user deletes his cookies or updates his browser. It will not however survive the user deleting both their cookies and their browsing cache.
To implement this I will use the browsers caching mechanism (RFC), WebStorage API (MDN) and browser cookies (RFC, Google Analytics).
Legal
In order to utilize tracking IDs, you need to add them to both your privacy policy and your terms of use, preferably under the sub-heading Tracking. We will use the following keys on both document.cookie and window.localStorage:
_ga: Google Analytics data
__utma: Google Analytics tracking cookie
sid: SessionID
Make sure you include links to your Privacy policy and terms of use on all pages that use tracking.
Where do I store my session data?
You can either store your session data in your website's database or on the user's computer. Since I normally work on smaller sites (fewer than 10 thousand concurrent connections) that use 3rd-party applications (Google Analytics / Clicky / etc.), it's best for me to store data on the client's computer. This has the following advantages:
No database lookup / overhead / load / latency / space / etc.
User can delete their data whenever they want without the need to write me annoying emails.
and disadvantages:
Data has to be encrypted / decrypted and signed / verified, which creates CPU overhead on the client (not so bad) and the server (bah!).
Data is deleted when user deletes their cookies and cache. (this is what I want really)
Data is unavailable for analytics when users go off-line. (analytics for currently browsing users only)
UUIDS
BrowserID: Unique ID generated from the browser's user agent string. Browser|BrowserVersion|OS|OSVersion|Processor|MozillaMajorVersion|GeckoMajorVersion (a rough PHP sketch of deriving these IDs follows this list)
ComputerID: Generated from users IP Address and HTTPS session key.
getISP(requestIP)|getHTTPSClientKey()
FingerPrintID: JavaScript based fingerprinting based on a modified fingerprint.js. FingerPrint.get()
SessionID: Random key generated when user 1st visits site. BrowserID|ComputerID|randombytes(256)
GoogleID: Generated from __utma cookie. getCookie(__utma).uniqueid
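As promised above, here is a rough PHP sketch of how IDs along these lines could be derived. The helpers are illustrative stand-ins: gethostbyaddr() substitutes for a real getISP() lookup, and SSL_SESSION_ID is only present if your web server exposes it:
<?php
// Illustrative derivations of the IDs described above (not a fixed scheme).
function browserId(): string {
    $ua = $_SERVER['HTTP_USER_AGENT'] ?? '';
    return hash('sha256', $ua); // BrowserID: hash of the user agent string
}

function computerId(): string {
    $ip     = $_SERVER['REMOTE_ADDR'] ?? '';
    $isp    = gethostbyaddr($ip) ?: $ip;         // crude stand-in for getISP()
    $tlsKey = $_SERVER['SSL_SESSION_ID'] ?? '';  // stand-in for the HTTPS session key
    return hash('sha256', $isp . '|' . $tlsKey);
}

function newSessionId(): string {
    return browserId() . '|' . computerId() . '|' . bin2hex(random_bytes(32));
}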
Mechanism
The other day I was watching the Wendy Williams show with my girlfriend and was completely horrified when the host advised her viewers to delete their browser history at least once a month. Deleting browser history normally has the following effects:
Deletes history of visited websites.
Deletes cookies and window.localStorage (aww man).
Most modern browsers make this option readily available, but fear not, friends, for there is a solution. The browser has a caching mechanism to store scripts / images and other things. Usually, even if we delete our history, this browser cache still remains. All we need is a way to store our data there. There are 2 methods of doing this. The better one is to use an SVG image and store our data inside its tags; that way the data can still be extracted, via Flash, even if JavaScript is disabled. However, since that is a bit complicated, I will demonstrate the other approach, which uses JSONP (Wikipedia).
example.com/assets/js/tracking.js (actually tracking.php)
var now = new Date();
window.__sid = "SessionID"; // Server generated
// setCookie() and getCookie() below are small helpers, not built-ins; expiry is ~1 year from now
setCookie("sid", window.__sid, now.setFullYear(now.getFullYear() + 1, now.getMonth(), now.getDate() - 1));
if( "localStorage" in window ) {
    window.localStorage.setItem("sid", window.__sid);
}
Now we can get our session key any time:
window.__sid || window.localStorage.getItem("sid") || getCookie("sid") || ""
How do I make tracking.js stick in the browser?
We can achieve this using the Cache-Control, Last-Modified and ETag HTTP headers. We can use the SessionID as the value of the ETag header:
setHeaders({
    "ETag": SessionID,
    "Last-Modified": new Date(0).toUTCString(),
    "Cache-Control": "private, max-age=31536000, s-maxage=31536000, must-revalidate"
})
The Last-Modified header tells the browser that this file is basically never modified. Cache-Control tells proxies and gateways not to cache the document, but tells the browser to cache it for 1 year.
The next time the browser requests the document, it will send If-Modified-Since and If-None-Match headers. We can use these to return a 304 Not Modified response.
example.com/assets/js/tracking.php
$sid = getHeader("If-None-Match") ?: getHeader("if-none-match") ?: getHeader("IF-NONE-MATCH") ?: "";
$ifModifiedSince = hasHeader("If-Modified-Since") ?: hasHeader("if-modified-since") ?: hasHeader("IF-MODIFIED-SINCE");
if( validateSession($sid) ) {
    if( sessionExists($sid) ) {
        continueSession($sid);
        send304();
    } else {
        startSession($sid);
        send304();
    }
} else if( $ifModifiedSince ) {
    send304();
} else {
    startSession();
    send200();
}
Now every time the browser requests tracking.js, our server will respond with a 304 Not Modified result and force execution of the local copy of tracking.js.
I still don't understand. Explain it to me
Let's suppose the user clears their browsing history and refreshes the page. The only thing left on the user's computer is a copy of tracking.js in the browser cache. When the browser requests tracking.js it receives a 304 Not Modified response, which causes it to execute the first version of tracking.js it received. tracking.js executes and restores the SessionID that was deleted.
Validation
Suppose Haxor X steals our customers' cookies while they are still logged in. How do we protect them? Cryptography and browser fingerprinting to the rescue. Remember that our original definition for SessionID was:
BrowserID|ComputerID|randomBytes(256)
We can change this to:
Timestamp|BrowserID|ComputerID|encrypt(randomBytes(256), hk)|sign(Timestamp|BrowserID|ComputerID|randomBytes(256), hk)
Where hk = sign(Timestamp|BrowserID|ComputerID, serverKey).
Now we can validate our SessionID using the following algorithm:
if( getTimestamp($sid) is older than 1 year ) return false;
if( getBrowserID($sid) !== createBrowserID($_Request, $_Server) ) return false;
if( getComputerID($sid) !== createComputerID($_Request, $_Server) ) return false;
$hk = sign(getTimestamp($sid) + getBrowserID($sid) + getComputerID($sid), $SERVER["key"]);
if( !verify(getTimestamp($sid) + getBrowserID($sid) + getComputerID($sid) + decrypt(getRandomBytes($sid), $hk), getSignature($sid), $hk) ) return false;
return true;
Now in order for Haxor's attack to work they must:
Have the same ComputerID. That means they have to have the same ISP as the victim (Tricky). This will give our victim the opportunity to take legal action in their own country. Haxor must also obtain the HTTPS session key from the victim (Hard).
Have same BrowserID. Anyone can spoof User-Agent string (Annoying).
Be able to create their own fake SessionID (Very Hard). Volume attacks won't work because we use a time-stamp to generate the encryption / signing key, so basically it's like generating a new key for each session. On top of that we encrypt random bytes, so a simple dictionary attack is also out of the question.
We can improve validation by forwarding GoogleID and FingerprintID (via ajax or hidden fields) and matching against those.
if( GoogleID != getStoredGoogleID($sid) ) return false;
if( byte_difference(FingerPrintID, getStoredFingerprint($sid)) > 10% ) return false;
These people have developed a fingerprinting method for recognising a user with a high level of accuracy:
https://panopticlick.eff.org/static/browser-uniqueness.pdf
We investigate the degree to which modern web browsers are subject to “device fingerprinting” via the version and configuration information that they will transmit to websites upon request. We implemented one possible fingerprinting algorithm, and collected these fingerprints from a large sample of browsers that visited our test site, panopticlick.eff.org. We observe that the distribution of our fingerprint contains at least 18.1 bits of entropy, meaning that if we pick a browser at random, at best we expect that only one in 286,777 other browsers will share its fingerprint. Among browsers that support Flash or Java, the situation is worse, with the average browser carrying at least 18.8 bits of identifying information. 94.2% of browsers with Flash or Java were unique in our sample.
By observing returning visitors, we estimate how rapidly browser fingerprints might change over time. In our sample, fingerprints changed quite rapidly, but even a simple heuristic was usually able to guess when a fingerprint was an “upgraded” version of a previously observed browser’s fingerprint, with 99.1% of guesses correct and a false positive rate of only 0.86%.
We discuss what privacy threat browser fingerprinting poses in practice, and what countermeasures may be appropriate to prevent it. There is a tradeoff between protection against fingerprintability and certain kinds of debuggability, which in current browsers is weighted heavily against privacy. Paradoxically, anti-fingerprinting privacy technologies can be self-defeating if they are not used by a sufficient number of people; we show that some privacy measures currently fall victim to this paradox, but others do not.
It's not possible to identify the computers accessing a web site without the cooperation of their owners. If they let you, however, you can store a cookie to identify the machine when it visits your site again. The key is, the visitor is in control; they can remove the cookie and appear as a new visitor any time they wish.
A possibility is using flash cookies:
Ubiquitous availability (95 percent of visitors will probably have flash)
You can store more data per cookie (up to 100 KB)
Shared across browsers, so more likely to uniquely identify a machine
Clearing the browser cookies does not remove the flash cookies.
You'll need to build a small (hidden) flash movie to read and write them.
Whatever route you pick, make sure your users opt IN to being tracked, otherwise you're invading their privacy and become one of the bad guys.
There is a popular method called canvas fingerprinting, described in this scientific article: The Web Never Forgets:
Persistent Tracking Mechanisms in the Wild. Once you start looking for it, you'll be surprised how frequently it is used. The method creates a unique fingerprint, which is consistent for each browser/hardware combination.
The article also reviews other persistent tracking methods, like evercookies, respawning http and Flash cookies, and cookie syncing.
More info about canvas fingerprinting here:
Pixel Perfect: Fingerprinting Canvas in HTML5
https://en.wikipedia.org/wiki/Canvas_fingerprinting
You may want to try setting a unique ID in an evercookie (it will work cross browser, see their FAQs):
http://samy.pl/evercookie/
There is also a company called ThreatMetrix that is used by a lot of big companies to solve this problem:
http://threatmetrix.com/our-solutions/solutions-by-product/trustdefender-id/
They are quite expensive and some of their other products aren't very good, but their device id works well.
Finally, there is this open source jquery implementation of the panopticlick idea:
https://github.com/carlo/jquery-browser-fingerprint
It looks pretty half baked right now but could be expanded upon.
Hope it helps!
There is only a small amount of information that you can get via an HTTP connection.
IP - But as others have said, this is not fixed for many, if not most Internet users due to their ISP's dynamic allocation policies.
Useragent String - Nearly all browsers send what kind of browser they are with every request. However, this can be set by the user in many browsers today.
Collection of request fields - There are other fields sent with each request, such as supported encodings, etc. These, if used in the aggregate can help to ID a user's machine, but again are browser dependent and can be changed.
Cookies - Setting a cookie is another way to identify a machine, or more specifically a browser on a machine, but as others have said, these can be deleted, or turned off by the users, and are only applicable on a browser, not a machine.
So, the correct response is that you cannot achieve what you would like via the HTTP-over-IP protocols alone. However, using a combination of cookies, the IP address, and the fields in the HTTP request, you have a good chance at guessing, sort of, which machine it is. Users tend to use only one browser, and often from one machine, so this may be fairly reliable, but it will vary depending on the audience: techies are more likely to mess with this stuff and use more machines/browsers. Additionally, this could even be coupled with some attempt to geo-locate the IP and use that data as well. But in any case, there is no solution that will be correct all of the time.
There are flaws with both cookie and non-cookie approaches. But if you can forgive the shortcomings of the cookie approach, here's an idea.
If you're already using Google Analytics on your site, then you don't need to write code to track unique users yourself. Google Analytics does that for you via the __utma cookie value, as described in Google's documentation. And by reusing this value you're not creating additional cookie payload, which has efficiency benefits with page requests.
And you could write some code easily enough to access that value, or use this script's getUniqueId() function.
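For example, assuming the classic ga.js __utma format (a dot-separated value whose second field is the per-visitor random ID; newer analytics.js / gtag.js setups use different cookies), a rough PHP sketch would be:
<?php
// Sketch: read the visitor ID out of the classic ga.js __utma cookie.
function gaUniqueId(): ?string {
    if (empty($_COOKIE['__utma'])) {
        return null;
    }
    $parts = explode('.', $_COOKIE['__utma']);
    return $parts[1] ?? null; // second field is the per-visitor random ID
}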
As with the previous solutions, cookies are a good method; be aware, though, that they identify browsers. If I visited a website in Firefox and then in Internet Explorer, cookies would be stored for both attempts separately. Some users also disable cookies (but more people disable JavaScript).
Another method to consider would be IP and hostname identification (be aware these can vary for dial-up/non-static-IP users; AOL also uses blanket IPs). However, since this only identifies networks, it might not work as well as cookies.
The suggestions to use cookies aside, the only comprehensive set of identifying attributes available to interrogate is contained in the HTTP request headers. So it is possible to use some subset of these to create a pseudo-unique identifier for a user agent (i.e., browser). Further, most of this information is possibly already being logged in the so-called "access log" of your web server software by default and, if not, the server can easily be configured to do so. A utility could then be developed that simply scans the contents of this log, creating fingerprints of each request comprised of, say, the IP address and User-Agent string, etc. The more data available, even including the contents of specific cookies, the better the quality of the uniqueness of this fingerprint. Though, as many others have stated already, the HTTP protocol doesn't make this 100% foolproof; at best it can only be a fairly good indicator.
When I use a machine which has never visited my online banking web site, I get asked for additional authentication. Then, if I go back a second time to the online banking site, I don't get asked for the additional authentication. I deleted all cookies in IE and re-logged onto my online banking site, fully expecting to be asked the authentication questions again. To my surprise I was not asked. Doesn't this lead one to believe the bank is doing some kind of PC tagging which doesn't involve cookies?
This is a pretty common type of authentication used by banks.
Say you're accessing your bank website via example-isp.com. The first time you're there, you'll be asked for your password, as well as additional authentication. Once you've passed, the bank knows that user "thatisvaliant" is authenticated to access the site via example-isp.com.
In the future, it won't ask for extra authentication (beyond your password) when you're accessing the site via example-isp.com. If you try to access the bank via another-isp.com, the bank will go through the same routine again.
So to summarize, what the bank's identifying is your ISP and/or netblock, based on your IP address. Obviously not every user at your ISP is you, which is why the bank still asks you for your password.
Have you ever had a credit card company call to verify that things are OK when you use a credit card in a different country? Same concept.
Really, what you want to do cannot be done because the protocols do not allow for this. If static IPs were universally used then you might be able to do it. They are not, so you cannot.
If you really want to identify people, have them log in.
Since they will probably be moving around to different pages on your web site, you need a way to keep track of them as they move about.
So long as they are logged in, and you are tracking their session within your site via cookies/link-parameters/beacons/whatever, you can be pretty sure that they are using the same computer during that time.
Ultimately, it is incorrect to say this tells you which computer they are using if your users are not using your own local network and do not have static IP addresses.
If what you want to do is being done with the cooperation of the users and there is only one user per cookie and they use a single web browser, just use a cookie.
You can use fingerprintjs2
new Fingerprint2().get(function(result, components) {
    console.log(result);     // a hash, representing your device fingerprint
    console.log(components); // an array of FP components
    // submit the hash and the JSON object to the server
});
After that you can compare each new visitor against your existing users and check the JSON similarity of the components, so even if a fingerprint mutates you can still track the user.
Because I want the solution to work on all machines and all browsers (within reason), I am trying to create a solution using JavaScript.
Isn't that a really good reason not to use javascript?
As others have said - cookies are probably your best option - just be aware of the limitations.
I guess the verdict is that I cannot programmatically and uniquely identify a computer which is visiting my web site.
I have the following question. When I use a machine which has never visited my online banking web site, I get asked for additional authentication. Then, if I go back a second time to the online banking site, I don't get asked for the additional authentication. Reading the answers to my question, I decided a cookie must be involved. Therefore, I deleted all cookies in IE and re-logged onto my online banking site, fully expecting to be asked the authentication questions again. To my surprise, I was not asked. Doesn't this lead one to believe the bank is doing some kind of PC tagging which doesn't involve cookies?
Further, after much googling today I found the following company, which claims to sell a solution that uniquely identifies machines which visit a web site: http://www.the41.com/products.asp.
I appreciate all the good information; if you could clarify the conflicting information I found, I would greatly appreciate it.
I would do this using a combination of cookies and flash cookies. Create a GUID and store it in a cookie. If the cookie doesn't exist, try to read it from the flash cookie. If it's still not found, create it and write it to the flash cookie. This way you can share the same GUID across browsers.
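The server-side half of that idea might look roughly like the sketch below; the cookie name is illustrative, and mirroring the GUID into a Flash LSO requires a small SWF that is not shown here:
<?php
// Create a GUID once and keep it in a normal cookie.
if (empty($_COOKIE['machine_guid'])) {
    $guid = bin2hex(random_bytes(16)); // 128-bit random ID
    setcookie('machine_guid', $guid, time() + 10 * 365 * 24 * 3600, '/');
    $_COOKIE['machine_guid'] = $guid;  // make it visible to the rest of this request
}
$machineGuid = $_COOKIE['machine_guid'];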
I think cookies might be what you are looking for; this is how most websites uniquely identify visitors.
Cookies won't be useful for determining unique visitors. A user could clear cookies and refresh the site - he then is classed as a new user again.
I think that the best way to go about doing this is to implement a server side solution (as you will need somewhere to store your data). Depending on the complexity of your needs for such data, you will need to determine what is classed as a unique visit. A sensible method would be to allow an IP address to return the following day and be given a unique visit. Several visits from one IP address in one day shouldn't be counted as uniques.
Using PHP, for example, it is trivial to get the IP address of a visitor, and store it in a text file (or a sql database).
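A minimal sketch of that:
<?php
// Log each visitor's IP address with a timestamp.
$ip = $_SERVER['REMOTE_ADDR'];
file_put_contents('visitors.log', date('c') . ' ' . $ip . PHP_EOL, FILE_APPEND | LOCK_EX);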
A server side solution will work on all machines, because you are going to track the user when he first loads up your site. Don't use javascript, as that is meant for client side scripting, plus the user may have disabled it in any case.
Hope that helps.
I will give my ideas, starting from the simpler and moving to the more complex.
In all of these approaches you create sessions, and the problem essentially translates to matching a session with a request.
a) (difficulty: easy) Use client-side storage to explicitly store a session ID/hash of some sort (there are quite a few privacy/security issues, so make sure you hash anything you store). Solutions include:
cookie storage
browser storage / WebDB (more exotic browser solutions)
extensions with permission to store things in files
The above suffer from the fact that the user can simply empty his cache whenever he doesn't want to be tracked.
b) (difficulty: medium) Login-based authentication.
Most modern web frameworks provide such a solution; the core idea is that you let the user voluntarily identify himself. Quite straightforward, but it adds complexity to the architecture.
The above suffers from that additional complexity and from making the content essentially non-public.
c) (difficulty: hard, R&D) Identification based on metadata (browser, IP, language and other privacy-invasive data, so make sure you let your users know or you may get sued).
A non-perfect solution that can get ever more complicated (a user typing with a specific frequency, or using the mouse in specific patterns? You could even apply ML solutions).
This is what the commercial solutions claim to do. It is the most powerful approach, since the user can be identified even without explicitly wanting to be. It is a straight invasion of privacy (see GDPR) and not perfect, e.g. an IP can change.
Assuming you don't want the user to be in control, you can't. The web doesn't work like that, the best you can hope for is some heuristics.
If it is an option to force your visitor to install some software and use TCPA you may be able to pull something off.
My post might not be a solution, but I can provide an example where this feature has been implemented.
If you visit the signup page of www.supertorrents.org for the first time from your computer, it's fine. But if you refresh the page or open it again, it identifies that you've previously visited the page. The real beauty comes here: it identifies you even if you re-install Windows or another OS.
I read somewhere that they store the CPU ID. Although I couldn't find out how they do it, I seriously doubt that, and they might use the MAC address instead.
I'll definitely share if I find out how to do it.
A trick:
Create 2 registration pages:
First registration page: without any email or security check (just a username and password)
Second registration page: with a high security level (email verification request, security image, etc.)
For customer satisfaction and easy registration, the default registration page should be the first registration page, but the first registration page contains a hidden restriction: IP restriction. If an IP has tried to register a second time (for example, within 1 hour), instead of showing a block page you can show the second registration page automatically.
On the first registration page you can, for example, block 2 attempts from one IP for just 1 hour or 24 hours, and after (for example) 1 hour you can re-open access from that IP automatically.
Please note: the first registration page and the second registration page should not be separate pages. You make just 1 page (for example register.php) and make it smart enough to switch between the first style and the second style.

How to secure a URL service?

I use on my server a Text-to-Speech Synthesis platform (probably written in Java).
While the above application is running on my server, users can get audio as a URL to a wav file using the embedded HTML <audio> tag, as follows:
<audio controls>
    <source src="http://myserver.com:59125/process?INPUT_TEXT=Hello%20world" type="audio/wav">
</audio>
In the above 'src' attribute, 'process' requests the synthesis of some text using local port 59125.
My concern is that I might start seeing performance issues and out-of-memory errors, causing the TTS synthesis platform (but not the website) to crash every few days, triggered by one or more entities abusing it as some sort of web service for their own applications.
I wish to secure the URL requests so that a third party couldn't use my text-to-speech server for audio clips not related to my website.
How to secure the URL service?
I take it this URL is embedded in a public website, so any random public user needs to be able to access this URL to download the file. This makes it virtually impossible to secure as is.
The biggest problem is that you're publicly exposing a useful service which is usable for anyone to do something useful. I.e., just by requesting a URL which I construct, I can get your server to do useful work for me (turn my text into speech). The core problem here is that the input text is fully configurable by the end user.
To take away any incentive for any random person to use your server, you need to take away the ability for anyone to convert any random text. If you are the only one who wants to be in charge of what input texts are allowed, you'll have to either whitelist and validate the input, or identify it using ids. E.g., instead of
http://myserver.com:59125/process?INPUT_TEXT=Hello%20world
your URLs look more like:
http://myserver.com:59125/process?input_id=42
42 is substituted with Hello world on the server. Unknown IDs won't be served.
Alternatively, again, validate and whitelist:
GET http://myserver.com:59125/process?INPUT_TEXT=Foo%20bar
404 Not Found
Speech for "Foo bar" does not exist.
For either approach, you'll need some sort of proxy in-between instead of directly exposing your TTS engine to the world. This proxy can also cache the resulting file to avoid repeatedly converting the same input again and again.
The end result would work like this:
GET http://myserver.com/tts?input=Hello%20world
1. myserver.com validates the input, returning 403 or 404 for invalid input
2. myserver.com proxies a request to localhost:59125?INPUT_TEXT=Hello%20World if the result is not already cached
3. myserver.com caches the result
4. myserver.com serves the result
This can be accomplished in any number of ways using any number of different web servers and/or CGI programs which do the necessary steps 2 and possibly 3.
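As a rough illustration of the whitelist-by-id variant, a small PHP proxy (the file name, the id-to-text map and the cache directory are all hypothetical) could look like this:
<?php
// tts.php - illustrative proxy/cache in front of the TTS engine.
$texts = [
    42 => 'Hello world',
    43 => 'Your report is ready',
];

$id = filter_input(INPUT_GET, 'input_id', FILTER_VALIDATE_INT);
if ($id === null || $id === false || !isset($texts[$id])) {
    http_response_code(404);
    exit('Unknown input_id');
}

@mkdir(__DIR__ . '/cache', 0775, true);
$cacheFile = __DIR__ . "/cache/tts_$id.wav";

if (!is_file($cacheFile)) {
    // Only this proxy talks to the engine; port 59125 is no longer exposed publicly.
    $wav = file_get_contents('http://localhost:59125/process?INPUT_TEXT=' . rawurlencode($texts[$id]));
    if ($wav === false) {
        http_response_code(502);
        exit('TTS engine unavailable');
    }
    file_put_contents($cacheFile, $wav);
}

header('Content-Type: audio/wav');
readfile($cacheFile);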
This depends on what server you are using. Possible methods are:
Authentication: use a username and password combination or ask for an SSL client certificate; this could be provided via cURL when one web service requests another.
IP whitelist: allow only specific IPs to access this server.
IP whitelist example in Apache:
Deny from all
# the server itself
Allow from 127.0.0.1
# maybe some additional internal network IP
Allow from 192.168.1.14
# or another machine in the local network
Allow from 192.168.1.36
# or some machine somewhere else on the web
Allow from 93.184.216.34
Your best bet is using the answer above from feeela to limit usage of the TTS platform to a single web server (this will be the server the users request the audio from, and where your security logic should be implemented).
After that you need to write a "proxy" script that receives a token generated on-the-fly by the page that hosts the audio tag, using a logic/method of your choice, and checks its validity (you can use the session/other user data and a salt). If valid, it should call the TTS engine and return the audio; otherwise generate an error/a redirect/whatever you want.
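One possible shape of that token scheme, as a hedged sketch (the secret, lifetime and parameter names are all up to you):
<?php
// Page that embeds the <audio> tag: generate a short-lived HMAC token.
$secret  = 'replace-with-a-long-random-server-side-secret';
$text    = 'Hello world';
$expires = time() + 300; // token valid for 5 minutes
$token   = hash_hmac('sha256', $text . '|' . $expires, $secret);
$src     = sprintf('/tts.php?text=%s&expires=%d&token=%s', rawurlencode($text), $expires, $token);

// The proxy script then verifies the token before calling the TTS engine.
function tokenIsValid(string $text, int $expires, string $token, string $secret): bool {
    if ($expires < time()) {
        return false; // expired
    }
    $expected = hash_hmac('sha256', $text . '|' . $expires, $secret);
    return hash_equals($expected, $token);
}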
It depends what you mean by "securing it".
Maybe you want it to only be accessible to certain users? In that case, you have an easy answer: issue each user with login credentials that they need to enter when they visit the site, and pass those credentials through to the API. Anyone without valid credentials will be unable to use the API. Job done.
Or maybe you want it to work for anyone, but only to be used from specific sites? This is more difficult, because any kind of authentication key you have would need to be within the site's Javascript code, and thus visible to someone wanting to copy it. There isn't a foolproof solution, but the best solution I can suggest is to link each API key to the URL of the site that owns it. Then use the HTTP referrer header to check that calls made using a given API key are being called from the correct site. HTTP requests can be spoofed, including the referrer header, so this isn't foolproof, but will prevent most unauthorised use -- someone would have to go a fair distance out of their way to get around it (they'd probably have to set up a proxy server that forwarded your API requests and spoofed the headers). This is unlikely to happen unless your API is an incredibly valuable asset, but if you are worried about that, then you could make it harder for them by having the API keys change frequently and randomly.
But whatever else you do, the very first thing you need to do to secure it is to switch to HTTPS rather than HTTP.

Securely serving up data via API to app and the residing site

I'm not quite sure if an API is the way to go with this, so a little background.
I have been building up a back end which has a very useful set of data and tools for someone to run a site. The front end also uses the same data to show to customers, as one would expect. A mobile app could probably be added in the near future to enable changes to be made to the site, via the app. But the back end can potentially go onto any website like a standard script (ie. it is not centrally stored nor does any data go back and forth between the client and us).
So I thought that the best way around this would be to make an API for the site. Naturally for an app to access the API, it would need a key to authenticate with the API (which the end user can set via their back end). However, I would like the back and front ends to use the API to access the same data so nothing needs to be written twice.
I'm sure it is clear that APIs are a new thing to me, which they are. But, I am trying to improve and adapt my coding to be more efficient.
I thought perhaps that the API could do some checks on the origin of the query to see whether it is a local request (back/front end) or comes from an app (which uses a key + user authentication). So how would one go about ensuring that the back and front ends can securely access the API while no one can access it via spoofing? I imagine the checks could be along the lines of the requesting URL, but I am worried that this, or other things that could be checked, could be spoofed. What is the best way to allow local access? Is there anything that can't be spoofed?
I know I could write in a key into the code, but since the code is distributed, I don't want this access key to be public - nor do I want to manually change the key for each site - and nor do I really want the end user to enter some random letters and numbers during setup.
You should use public/private keys. Your front/back ends, mobile versions, or even 3rd-party developers will then use their keys to authenticate with the API.
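A common concrete variant of this uses an API key ID plus a shared secret for HMAC request signing rather than true asymmetric keys; the sketch below is illustrative (header names and the 5-minute replay window are arbitrary choices):
<?php
// Client side (front end, back end or app) when calling the API.
function signRequest(string $keyId, string $secret, string $method, string $path, string $body): array {
    $timestamp = time();
    $payload   = $method . "\n" . $path . "\n" . $timestamp . "\n" . $body;
    return [
        'X-Api-Key'       => $keyId,
        'X-Api-Timestamp' => (string) $timestamp,
        'X-Api-Signature' => hash_hmac('sha256', $payload, $secret),
    ];
}

// API side: look up the secret for the given key ID and recompute the signature.
function verifyRequest(array $headers, string $method, string $path, string $body, callable $secretForKey): bool {
    $ts = (int) ($headers['X-Api-Timestamp'] ?? 0);
    if (abs(time() - $ts) > 300) {
        return false; // stale or replayed request
    }
    $secret   = (string) $secretForKey($headers['X-Api-Key'] ?? '');
    $payload  = $method . "\n" . $path . "\n" . $ts . "\n" . $body;
    $expected = hash_hmac('sha256', $payload, $secret);
    return hash_equals($expected, $headers['X-Api-Signature'] ?? '');
}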

Ways to create a unique user fingerprint in PHP

What is the best way to generate a 'fingerprint' of user unique-ness in PHP?
For example:
I could use a user's IP address as the 'fingerprint'; however, there could be multiple other users on the same IP
I could use the user's IP + user agent as the 'fingerprint'; however, a single user could simply swap from Safari to Firefox and again be seen as being unique
Ideally, the fingerprint would label the 'machine' rather than the browser or the IP, but I can't think of how this is achievable.
Open to ideas/suggestions of how you uniquely identify your users, and what advantages/disadvantages your method has.
Easiest and best way: use PHP's session management. Every client is given an ID, stored in a cookie (if enabled) or passed as a GET variable on every link and form (alternatively you could set a cookie on your own). But this only "fingerprints" the browser; if the user changes his browser, deletes his cookies or whatever, you can't identify it anymore.
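For completeness, the mechanism referred to above is just this:
<?php
// PHP hands the client a session ID (the PHPSESSID cookie by default)
// and gives you per-visitor server-side storage.
session_start();

if (!isset($_SESSION['first_seen'])) {
    $_SESSION['first_seen'] = time();
    $_SESSION['visits']     = 0;
}
$_SESSION['visits']++;

echo 'Session ID: ' . session_id() . ', visits this session: ' . $_SESSION['visits'];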
Identifying every client by IP address is usually a bad idea and won't work. Clients that use the same router will have the same IP addresses - clients connected through a proxy-pool could have another IP address with every page load.
If you need a solution that can't be manipulated by the client in an easy way, try to do a combination of the following, using all that are supported by the clients browser and compare them on each page-load:
"normal" HTTP Cookies
Local Shared Objects (Flash Cookies)
Storing cookies in RGB values of auto-generated, force-cached PNGs using HTML5 Canvas tag to read pixels (cookies) back out
Storing cookies in and reading out Web History
Storing cookies in HTTP ETags
Internet Explorer userData storage
HTML5 Session Storage
HTML5 Local Storage
HTML5 Global Storage
HTML5 Database Storage via SQLite
There's a solution called evercookie that implements all of this.
There's something else to take into account: the public IP address of a user can also change on every page load. There are multiple organizations that switch public IPs in their routers to balance traffic.
Achieving 100% reliability is not guaranteed, but combining some common methods can give you meaningful results.
Users generally don't switch browsers. Over-complication in your algorithm only to reach engineering perfection is not worth the effort.
You certainly belong to the top 100 websites if you can expect multiple users from the same IP. Don't take it personally, but you're just not that popular.
Take the simplest possible route that could work and adjust over time if it seems necessary.
I have three different computers and various handheld devices, many of which have different browsers installed. I use all of these interchangeably at home and take them with me to other places, so, basically, I appear on various IP addresses. What I'm trying to point out is that fingerprinting a browser, or a machine for that matter, is never going to be foolproof if your goal is to block a person.
I recommend you take a different approach. Judge, based on the inconclusive information you have available, what suggests the identity of your banned user (the same IP, or the same user agent if it's an uncommon one, or some of the JavaScript browser-fingerprinting signals such as available fonts, available plugins, non-standard window size, etc.) and require of those suspect visitors some higher form of identity verification, such as OAuth with Facebook, Google+, or Twitter. Then you can look to see whether that social media account is genuine or was created just to circumvent the ban. There are also phone verification APIs, in case your user base isn't social-media savvy, depending on how valuable it is to you that users don't circumvent banning.
