I am working with OpenID, just playing around making a class to interact with / authenticate OpenIDs on my site (in PHP). I know there are a few other libraries (like RPX), but I want to use my own (it helps me better understand the protocol and whether it's right for me).
The question I have relates to the OpenID discovery sequence. Basically I have reached the point where I am looking at using the XRDS document to get the local identity (openid.identity) from the claimed identity (openid.claimed_id).
My question is: do I have to make one cURL request to get the XRDS location (X-XRDS-Location) and then another cURL request to get the actual XRDS document?
It seems like with a dumb request I only make one cURL request and get the OpenID server, but I have to make two to use the XRDS smart method. That just doesn't seem right; can anyone give me some info?
To be complete: yes, your RP must do an HTTP GET on the URL the user gives you, then search for an XRDS document reference, and if one is found, do another HTTP GET from there. Keep in mind that the XRDS may be hosted on a different server, so don't write anything that requires the connection to be the same between the two requests; it might not be.
If in your initial HTTP GET request you include the HTTP header:
Accept: application/xrds+xml
Then the page MAY respond immediately with the XRDS document rather than an HTML document that you have to parse for an XRDS link. You can detect that this has occurred by checking for application/xrds+xml in the HTTP response's Content-Type header. This is an optimization so that RPs typically don't have to make that second HTTP GET -- but you can't rely on it happening.
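A minimal sketch of this discovery flow with cURL. The meta-tag regex is a simplification; a real implementation must also check the X-XRDS-Location HTTP response header, and the function name here is just illustrative:

```php
<?php
// Hedged sketch of Yadis/XRDS discovery: send the Accept header so the
// server may return the XRDS document directly, and fall back to a second
// GET if it only gives us a pointer.
function fetchXrds(string $claimedId): ?string
{
    $ch = curl_init($claimedId);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_HTTPHEADER     => ['Accept: application/xrds+xml'],
    ]);
    $body        = curl_exec($ch);
    $contentType = curl_getinfo($ch, CURLINFO_CONTENT_TYPE) ?: '';
    curl_close($ch);

    if ($body === false) {
        return null;
    }

    // Optimization hit: the server answered with the XRDS document itself.
    if (stripos($contentType, 'application/xrds+xml') !== false) {
        return $body;
    }

    // Otherwise look for the X-XRDS-Location pointer in an HTML <meta> tag
    // (simplified: headers should be checked too).
    if (preg_match('/http-equiv=["\']X-XRDS-Location["\']\s+content=["\']([^"\']+)/i', $body, $m)) {
        $ch = curl_init($m[1]);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $xrds = curl_exec($ch);
        curl_close($ch);
        return $xrds === false ? null : $xrds;
    }

    return null;
}
```

Note that the second request may go to a completely different server, which is why the two GETs are kept independent here.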
The best advice I can give you is to abstract your HTTP requesting a little, and then just go through the entire process of doing an HTTP request twice.
You can keep your cURL instances around if you want to speed things up using persistent connections, but that may or may not be what you want.
I hope this helps, and good luck. OpenID is one of the most bulky and convoluted web standards I've come across since WebDAV =)
Evert
I know I'm late to the game here, but I think you should also check out the WebFinger protocol. It takes the standard "email as user ID" pattern and lets you do a lookup from there to discover OpenID endpoints and so on.
We have certain action links which are one-time use only. Some of them do not require any action from the user other than viewing them. And here comes the problem: when you share one in, say, Viber, Slack, or anything else that generates a preview of the link (or "unfurls" the link, as Slack calls it), it gets counted as used, since it was requested.
Is there a reliable way to detect these preview-generating requests solely via PHP? And if it's possible, how does one do that?
Not possible with 100% accuracy in PHP alone, since it deals with HTTP requests, which are quite abstracted from the client. Strictly speaking, you cannot even guarantee that the user actually saw the response, even though it was legitimately requested by the user.
The options you have:
use checkboxes like "I've read this" (violates no-action requirement)
use javascript to send "I've read this" request without user interaction (violates PHP alone requirement)
rely on cookies: redirect the user with a Set-Cookie header, then redirect back to show the content and mark the URL as consumed (still not 100% guaranteed, and may result in infinite redirects for bots that follow 302 redirects but do not persist cookies)
rely on request headers (could work if you had a finite list of supported bots, and all of them provided a signature)
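The cookie option from the list above can be sketched roughly like this. The cookie name is arbitrary and markLinkAsUsed() is a hypothetical persistence helper, not a real API:

```php
<?php
// Sketch of the set-cookie-then-redirect idea. First visit: set a cookie
// and redirect back to the same URL. Most preview bots either don't persist
// cookies or don't follow the redirect, so only a request that returns
// *with* the cookie is counted as a real view.
if (!isset($_COOKIE['seen_challenge'])) {
    setcookie('seen_challenge', '1', 0, '/');
    header('Location: ' . $_SERVER['REQUEST_URI'], true, 302);
    exit;
}

// Cookie came back: reasonably likely a real browser, so consume the link.
// markLinkAsUsed() is hypothetical -- substitute your own storage logic.
markLinkAsUsed($_GET['token'] ?? '');
```

As noted above, a bot that follows redirects but drops cookies would loop forever, so a real implementation should also cap the redirect (e.g. with a query flag).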
I've looked all over the internet to solve this problem, and I've found some workarounds to verify whether a request was made to generate a link preview.
I then created a tool to solve it. It's on GitHub:
https://github.com/brunoinds/link-preview-detector
You only need to call a single method from the class:
<?php
require('path\to\LinkPreviewOrigin.php');
$response = LinkPreviewOrigin::isForLinkPreview();
//true or false
I hope this solves your question!
When I print $data = file_get_contents('php://input'), it prints lots of messy code. I want to know, from the PHP server side, which Android device has uploaded a certain file to the server. Thank you so much!
That isn't messy code; it's details about the request which you do not understand.
Go learn how the HTTP protocol works; you can't tackle this problem if you don't understand what you're working with.
You can only find out what a user agent provides. If it is not sending an HTTP request header with a device version number, then no, you will not get that.
Typically the OS is included in the User-agent string. This is likely to be the only place you'll find anything close to what you're looking for.
If you want to identify the specific user, the best you can do (by design of many of the systems between you and the device), is implement "sessions" using HTTP cookies. If the requests are coming from a browser, once you set a cookie, every subsequent request from that individual user will have the same cookie with it.
The general pattern we use is to look for an existing cookie, and if there isn't one (i.e. new user): create a unique random string, store it somewhere on the server associated with this "user", and then send it to the user as a cookie.
The PHP session features can greatly assist you in this.
Keep in mind you can only track a user in this anonymous way. You cannot get an actual device ID from the device unless you also wrote the app on the device that is sending the request. And this is a good thing.
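The pattern described above (check for a cookie, mint a random ID if there isn't one) can be sketched with PHP's built-in session handling. The variable names are illustrative:

```php
<?php
// Sketch of the anonymous "session cookie" pattern. session_start() sends
// the session cookie for us and restores $_SESSION on subsequent requests.
session_start();

if (!isset($_SESSION['client_id'])) {
    // New visitor: mint a unique random identifier and remember it
    // server-side, associated with this session.
    $_SESSION['client_id'] = bin2hex(random_bytes(16));
}

// The closest thing to device info is the User-Agent header, if sent at all.
$userAgent = $_SERVER['HTTP_USER_AGENT'] ?? 'unknown';

// You can now associate uploads with $_SESSION['client_id'], but this is
// only a per-browser session token, never a real device ID.
error_log(sprintf('upload from %s (UA: %s)', $_SESSION['client_id'], $userAgent));
```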
This may have a simple answer (and I hope it does), but looking online I only found examples of how to get the current URL/domain. Nowhere could I find how to get that of the incoming HTTP request.
My setup is a REST API that handles the typical GET/POST/DELETE/PUT requests. I have to return domain information to clients about the domain they're pulling from. Hence, if a client using my CMS clicks on "info", he must receive info about the domain he is logged into (and is thus sending the request from).
I chose not to add code here, seeing as my question pertains less to actual code than to methodology. Thanks in advance for any and all answers!
On the internet, every address can be faked (VPNs, proxies, etc.). That's one of the fundamental principles of the network.
You will never be able to detect it with a 100% guarantee, so the most you have is $_SERVER['HTTP_REFERER'] and $_SERVER['REMOTE_ADDR'].
You could make an additional verification of their existence before saving/processing them, but it will cost your server some additional performance.
If your aim is to provide additional access rules for some methods/data, you should use another verification mechanism (tokens, passwords, etc.).
print_r($_SERVER);
Maybe it'll be useful for you.
It sounds as though you're looking for the HTTP referer, accessible in PHP through $_SERVER['HTTP_REFERER'].
As far as I know, there is no reliable way to determine the domain a request comes from. Maybe you could check the client's IP address and/or the HTTP referer and match it against a set of domains, but that wouldn't be 100% safe in my opinion.
How about implementing an (optional) parameter for your API calls, which has to be the domain name?
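The explicit-parameter idea could look roughly like this. lookupKeyForDomain() is a hypothetical server-side lookup, added here only to show how such a parameter might be verified rather than trusted blindly:

```php
<?php
// Sketch: the client declares its domain explicitly, and the server checks
// it against a stored per-domain key instead of trusting request headers.
$domain = $_GET['domain'] ?? '';
$apiKey = $_GET['key'] ?? '';

// Hypothetical lookup of the key registered for this domain (e.g. from a DB).
$expectedKey = lookupKeyForDomain($domain);

// hash_equals() compares in constant time to avoid timing leaks.
if ($expectedKey === null || !hash_equals($expectedKey, $apiKey)) {
    http_response_code(403);
    exit('Unknown or unauthorized domain');
}

// From here the server knows which CMS installation is calling.
```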
I ended up defining a key constant in an external PHP file that I deliver to the client within the CMS (I already have a bunch of constants anyway).
On the server side I put the key in the database and compare the keys on every request. This is not foolproof, but I realized I could use the key for other functions as well, so I implemented it anyway.
Using this combined with various other security checks, I found it unnecessary to track the request domain. Thanks for the responses, guys!
Is there a way to make a PHP file so that it can only be loaded and executed by the JavaScript code that I write? I.e., can I make sure that someone can't read my JS, load up the PHP page in their browser with their own variables, and make unauthorized changes to my database?
Any help much appreciated.
No.
You can check if $_SERVER['HTTP_X_REQUESTED_WITH'] is set and equals "XMLHttpRequest", but this is just an HTTP header that can be faked.
Javascript just makes standard HTTP requests which can be reproduced in any number of ways. HTTP is a very simple protocol that does not offer the possibility to distinguish between clients in any reliable way. Identical requests are identical. You need to build your user identification and authorization scheme yourself on top of HTTP, it's not part of the protocol. The server needs to decide and enforce what is authorized and what isn't based on rules (that you establish), not on who asked.
Is there a way to make a PHP file so that it can only be loaded and executed by the Javascript code that I write?
Not reliably, no. Any request can be forged on client side. This method is not acceptable to establish security. You will have to use some kind of authentication on server side.
No. It is simple to write a 10-line program in e.g. Python to spoof any user agent. You can not ever trust anything that any user sends you, ever, under any circumstances.
Doing so will bring shame on your entire family and all of your ancestors, and cause your descendants to be forever stigmatized as the offspring of "that guy".
Maybe you can check the request header sent by JavaScript. AJAX calls should send this line:
X-Requested-With: XMLHttpRequest
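As the other answers stress, this header is trivially forged, so at best it filters out accidental direct loads. A minimal check might look like:

```php
<?php
// Sketch of the X-Requested-With check. This is a convenience filter only,
// NOT security: any client can set this header itself.
$isAjax = isset($_SERVER['HTTP_X_REQUESTED_WITH'])
    && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) === 'xmlhttprequest';

if (!$isAjax) {
    http_response_code(400);
    exit('Expected an XMLHttpRequest');
}
```

Real protection still requires server-side authentication and authorization, as the accepted answer explains.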
Is there a way to detect in my script whether a request is coming from a normal web browser or from some script executing cURL? I can see the headers and can distinguish with the User-Agent and a few other headers, but fake headers can be set in cURL, so I am not able to identify the request.
Please suggest ways of identifying cURL or other similar non-browser requests.
The only way to catch most "automated" requests is to code logic that spots activity that couldn't possibly be a human with a browser.
For example: hitting pages too fast, filling out a form too fast, or referencing an external resource in the HTML file (like a fake CSS file served through a PHP script) and checking whether the requesting IP downloaded it during the previous stage of your site (kind of like a reverse honeypot). You would need to exclude certain IPs/user agents from being blocked, though, otherwise you'll block Google's web spiders.
This is probably the only way of doing it if curl (or any other automated script) is faking its headers to look like a browser.
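The "reverse honeypot" from the answer above can be sketched as a tiny PHP script served in place of a stylesheet. The file names and the flat-file log are purely illustrative; a real site would use a proper store and expire entries:

```php
<?php
// fake-css.php -- linked from the HTML as:
//   <link rel="stylesheet" href="fake-css.php">
// Real browsers fetch linked stylesheets; most scripted clients don't.
// Record which IPs fetched it, so later pages can check membership.
header('Content-Type: text/css');

$ip = $_SERVER['REMOTE_ADDR'];
// Illustrative flat-file log; use a database or cache in practice.
file_put_contents('/tmp/css_hits.log', $ip . PHP_EOL, FILE_APPEND | LOCK_EX);

echo '/* intentionally empty */';
```

On a subsequent page you would check whether the requesting IP appears in that log, while whitelisting known crawlers so you don't block search-engine spiders.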
Strictly speaking, there is no way.
There are indirect techniques, but I would never discuss them in public, especially on a site like Stack Overflow, which encourages screen scraping, content swiping, autoposting, and all this dirty robot stuff.
In some cases you can use CAPTCHA test to tell a human from a bot.
As far as I know, you can't see the difference between a "real" call from your browser and one from cURL.
You can compare the headers (User-Agent), but that's all I know.