I implemented login in Laravel. On the login request there are two tokens: one generated in the body and another in a header cookie.
When I remove the value of the body token, I get a "page expired" error, but when I remove the value of the XSRF-TOKEN cookie, there is no error and the login succeeds.
POST /login HTTP/1.1
Host: <host>
Content-Length: 513
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
Origin: <Origin Address>
Content-Type: application/x-www-form-urlencoded
User-Agent:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: en-GB,en-US;q=0.9,en;q=0.8
Cookie: XSRF-TOKEN=<token>; laravel_session=<session token>
Connection: close
_token=<token>&userName=<userName>&password=<Password>
Can anyone explain both of these tokens, and why the page does not give the "expired" error when I remove the XSRF-TOKEN value using the Burp Suite tool?
As mentioned in the Laravel documentation:
Laravel makes it easy to protect your application from cross-site
request forgery (CSRF) attacks. Cross-site request forgeries are a
type of malicious exploit whereby unauthorized commands are performed
on behalf of an authenticated user.
also:
Laravel stores the current CSRF token in an encrypted XSRF-TOKEN
cookie that is included with each response generated by the framework.
You can use the cookie value to set the X-XSRF-TOKEN request header.
This cookie is primarily sent as a convenience since some JavaScript frameworks and libraries, like Angular and Axios, automatically place its value in the X-XSRF-TOKEN header on same-origin requests.
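The behaviour you are seeing follows from how Laravel's VerifyCsrfToken middleware resolves the token: it reads the _token form field (or the X-CSRF-TOKEN / X-XSRF-TOKEN request headers) and compares that value against the token stored in the session. The XSRF-TOKEN cookie itself is never validated; it only exists so JavaScript clients can copy its value into the X-XSRF-TOKEN header. Since your request already carries a valid _token in the body, deleting the cookie changes nothing. A simplified sketch of that logic (not the actual framework code):
<?php
// Simplified sketch of Laravel's CSRF check (see
// Illuminate\Foundation\Http\Middleware\VerifyCsrfToken). The real
// middleware also decrypts the X-XSRF-TOKEN header value and does
// the comparison inside tokensMatch() using hash_equals().
function tokenFromRequest(array $post, array $headers)
{
    if (isset($post['_token'])) {
        return $post['_token'];            // the body token you removed
    }
    if (isset($headers['X-CSRF-TOKEN'])) {
        return $headers['X-CSRF-TOKEN'];
    }
    if (isset($headers['X-XSRF-TOKEN'])) {
        return $headers['X-XSRF-TOKEN'];   // a header, not the raw cookie
    }
    return null;
}

function tokensMatch($sessionToken, $requestToken)
{
    // The XSRF-TOKEN *cookie* is never consulted here, which is why
    // deleting its value in Burp Suite does not trigger "page expired".
    return is_string($requestToken)
        && hash_equals($sessionToken, $requestToken);
}
In short: the body _token (or an X-XSRF-TOKEN header derived from the cookie) is what gets validated; the cookie on its own is only a delivery mechanism.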
Related
I noticed something earlier today while inspecting cookies on some of my subdomains that surprised me a little. Although I have set PHP to use only secure cookies, they are nonetheless available using HTTP.
My root domain and most of my subdomains are HTTPS. In fact, they are essentially HTTPS only. I haven't enabled HSTS because I have subdomains that are HTTP only and do not support HTTPS. (Essentially, my domains are either HTTPS only or HTTP only, not both.) While navigating to these subdomains, I noticed the cookies set by the root domain show up in the browser developer tools, and are seemingly sent on requests to the server.
I'd prefer this not happen because a) it's insecure and b) the server handling these subdomains does squat with these cookies so sending them at all is an unnecessary risk that could compromise the main applications.
The weird thing is my cookies are already secure. All cookies are set like so:
setCookie("my_cookie", $cookieValue, $expires, '/', $domain, true, true);
At the top of my session manager, I also have:
session_set_cookie_params(3600, '/', '.example.com', true, true);
The last two trues are secure and httpOnly. One would think this makes them HTTPS-only:
HTTPS: Cookie with "Secure" will be returned only on HTTPS connections
Reading cookies via HTTPS that were set using HTTP
To ensure that the session cookie is sent only on HTTPS connections,
you can use the function session_set_cookie_params() before starting
the session: https://stackoverflow.com/a/6531754/6110631
A cookie with the Secure attribute is sent to the server only with an
encrypted request over the HTTPS protocol, never with unsecured HTTP,
and therefore can't easily be accessed by a man-in-the-middle
attacker. Insecure sites (with http: in the URL) can't set cookies
with the Secure attribute. However, do not assume that Secure prevents
all access to sensitive information in cookies; for example, it can be
read by someone with access to the client's hard disk.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#Secure_and_HttpOnly_cookies
But lo and behold, all cookies that I set on the root domain continue to be available on my HTTP-only subdomains. Using the developer tools, any changes with cookies in the root domain continue to be reflected on HTTP-only subdomains!
I intentionally set the cookie as .domain to make it available on all subdomains, since they all share session information and enable SSO (the HTTPS domains, that is). However, I would think that with the secure flag, this would still prevent the cookies from being available on HTTP-only subdomains. Does one of these parameters take precedence over another? (I would think secure would).
Why is this not working as intended? It seems that because the cookies are available, even though I have secure and httpOnly, the cookies could be stolen from an unencrypted HTTP connection. Is it that the cookies are not actually sent, but the browser (erroneously) displays them as present anyways, or is there a real security risk here?
The browser developer tools seem to make a distinction between displaying cookies that may be available on a domain and those that are actually sent in a request - that is to say, the developer tools will show cookies for a subdomain even if they are never sent on requests to the server.
Here's an example of a request to the root domain:
:authority: example.com
:method: GET
:path: /account
:scheme: https
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
accept-encoding: gzip, deflate, br
accept-language: en-US,en;q=0.9
cache-control: max-age=0
cookie: [cookies redacted]
dnt: 1
referer: [redacted]
upgrade-insecure-requests: 1
user-agent: [redacted]
Here's an example of a request to an HTTP-only subdomain:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Cache-Control: max-age=0
Connection: keep-alive
DNT: 1
Host: subdomain.example.com
If-Modified-Since: Wed, 27 May 2020 20:51:35 GMT
If-None-Match: "6c5-5a6a760afe782-gzip"
Upgrade-Insecure-Requests: 1
User-Agent: [redacted]
As one can see, cookies are not sent with the second request. However, if you examine the cookies on both domains in the browser developer tools, you can see the browser displays them regardless.
This can be a source of confusion, but examining the request headers shows that the browser does indeed refrain from sending the secure cookies on insecure requests.
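If you want to confirm this from the server side rather than trusting the dev tools, a throwaway endpoint on the HTTP-only subdomain can show exactly which cookies arrive (a quick sketch; cookie-check.php is just a hypothetical filename):
<?php
// cookie-check.php - request this over plain HTTP on the subdomain.
// Secure cookies set by the HTTPS root domain should never show up
// in $_COOKIE here.
header('Content-Type: text/plain');

if (empty($_COOKIE)) {
    echo "No cookies were sent with this request.\n";
} else {
    foreach (array_keys($_COOKIE) as $name) {
        echo "Received cookie: {$name}\n"; // names only, no values
    }
}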
My organization is testing my Laravel app with IBM AppScan and it revealed the following issue. I'm not sure of the best way to verify the referer. Details of the scan:
The following changes were applied to the original request:
- Set header to 'http://bogus.referer.ibm.com'
Reasoning:
The test result seems to indicate a vulnerability because the Test
Response is identical to the Original Response, indicating that the
Cross-Site Request Forgery attempt was successful, even
though it included a fictive 'Referer' header.
Request/Response:
GET /data/1?page=3 HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 9.0; Win32)
Referer: http://bogus.referer.ibm.com
Host: xxxx.xxx.xxx.xxx
Accept: text/html,application/
Laravel relies on the CSRF token to protect the application from CSRF. Validating the Referer header only adds an extra layer of security, and that header can itself be forged.
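If you also want to satisfy the scanner by rejecting bogus Referer values, a small middleware can do it as defence in depth (a sketch only; the class name VerifyReferer is made up, and Laravel's built-in VerifyCsrfToken middleware remains the actual protection):
<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

// Hypothetical defence-in-depth check: reject requests whose
// Origin/Referer host does not match the application's own host.
class VerifyReferer
{
    public function handle(Request $request, Closure $next)
    {
        $source = $request->headers->get('Origin');
        if ($source === null) {
            $source = $request->headers->get('Referer');
        }

        if ($source !== null
            && parse_url($source, PHP_URL_HOST) !== $request->getHost()) {
            abort(403, 'Cross-origin request rejected.');
        }

        return $next($request);
    }
}
Register it in the middleware stack for the routes the scanner flags; it supplements, rather than replaces, the CSRF token check.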
Normally when a publicly-accessible directory requires basic HTTP authentication, the value of $_SERVER['HTTP_AUTHORIZATION'] and/or $_SERVER['REMOTE_USER'] (or $_SERVER['PHP_AUTH_USER'], etc) will be set and accessible to PHP once a valid username/password combination have been provided to the server.
For example, if http://www.example.com/members requires basic authentication, and a user successfully authenticates using the credentials myusername and mypassword by manually typing http://myusername:mypassword@www.example.com/members into their browser, the value of $_SERVER['HTTP_AUTHORIZATION'] would be something like:
Basic bXl1c2VybmFtZTpteXBhc3N3b3Jk
... and the value of $_SERVER['REMOTE_USER'] would simply be:
myusername
However if authentication is not required in the same directory, but the URL is still visited with the username/password inside of it, the values of the username/password don't seem to be set anywhere (running PHP 5.3.10 as CGI/FastCGI on Apache/2.2.22).
From within PHP (and/or .htaccess if necessary), when no authentication is required, is there a way to retrieve the values of the username (and/or password) that have been provided by a visitor who manually added them to the URL?
TL;DR: As far as I can see, that information is never sent to the server, so I claim it's not possible.
The way HTTP authentication works, when it is enabled, is that the server requests a username/password if they have not already been supplied, and the browser then adds that information in encoded form to an Authorization header and sends it to the server along with the request.
This is specified in RFC 2617, which describes the Basic and Digest authentication mechanisms: for Basic authentication, the server sends an HTTP 401 Unauthorized status and a WWW-Authenticate header field to request this information (RFC 2617, Access Authentication Framework).
Testing shows that if authentication is never configured as required on the server, the server won't request authentication information from the browser, and the browser won't add user/password information to the request. The RFC does not forbid the browser (user agent) from sending that information anyway, but says instead:
A user agent that wishes to authenticate itself with an origin
server--usually, but not necessarily, after receiving a 401
(Unauthorized)--MAY do so by including an Authorization header field
with the request.
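For reference, the server-initiated challenge described above looks roughly like this in PHP when it runs as an Apache module (a minimal sketch; the realm string is arbitrary):
<?php
// If no credentials were supplied, send the 401 challenge; the
// browser then prompts the user and resends the request with an
// Authorization header.
if (!isset($_SERVER['PHP_AUTH_USER'])) {
    header('WWW-Authenticate: Basic realm="Members area"');
    header('HTTP/1.0 401 Unauthorized');
    echo 'Authentication required.';
    exit;
}

echo 'Hello, ' . htmlspecialchars($_SERVER['PHP_AUTH_USER']);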
In practice, if you watch the headers being sent, you can see that when this information is requested by the server, it is sent in encoded form in the Authorization header, as specified by the RFC. However, if you're not using any authentication, the request you send just doesn't seem to contain that information in any form. I've confirmed this with IE, Firefox and Chrome myself.
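And when the Authorization header does reach PHP, decoding the Basic credentials takes only a few lines (a sketch; it assumes the server actually exposes the raw header, which CGI/FastCGI setups often do not by default):
<?php
// Decode a Basic Authorization header into username/password.
// Returns null when the header is missing or not the Basic scheme.
function basicAuthCredentials(array $server)
{
    $header = isset($server['HTTP_AUTHORIZATION']) ? $server['HTTP_AUTHORIZATION'] : '';

    if (stripos($header, 'Basic ') !== 0) {
        return null; // no Basic credentials were sent at all
    }

    $decoded = base64_decode(substr($header, 6), true);
    if ($decoded === false || strpos($decoded, ':') === false) {
        return null;
    }

    list($user, $pass) = explode(':', $decoded, 2);
    return array('user' => $user, 'pass' => $pass);
}

// "Basic bXl1c2VybmFtZTpteXBhc3N3b3Jk" decodes to
// array('user' => 'myusername', 'pass' => 'mypassword').
var_dump(basicAuthCredentials($_SERVER));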
If you want to test this yourself for your setup, this can be done for example using netcat like this:
First, run netcat on your server:
nc -l 8888
Then issue a request from your browser to http://testvalue:testvalue@yourdomain:8888/
As a result, observe from the netcat output all the information that gets sent to the server, something like this:
GET / HTTP/1.1
Host: yourdomain:8888
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
There is no information about user or password anywhere. I claim that unless the server requests it, it won't be there.
The addition of a user and password in a URL using http(s)://user:pass@site.com has been disabled by at least Internet Explorer for several years now, as far as I know.
https://support.microsoft.com/en-us/kb/834489
So I am not sure whether what you are trying to achieve is even useful. I think browsers don't even pass that part of the URL on anymore.
If a logged-in user navigates to a certain area of the site which uses WebSockets, how can I grab that session ID so I can identify him on the server?
My server is basically an endless while loop which holds information about all connected users and stuff, so in order to grab that id I figured the only suitable moment is at the handshake, but unfortunately the handshake's request headers contain no cookie data:
Request Headers
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.5
Cache-Control: no-cache
Connection: keep-alive, Upgrade
DNT: 1
Host: 192.168.1.2:9300
Origin: http://localhost
Pragma: no-cache
Sec-WebSocket-Key: 5C7zarsxeh1kdcAIdjQezg==
Sec-WebSocket-Version: 13
Upgrade: websocket
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0
So how can I really grab that id? I thought I could somehow force JavaScript to send cookie data along with that request, but any self-respecting website in 2014 will have httpOnly session cookies, so that won't work out. Any help is greatly appreciated!
Here's a link for the server I'm using: https://github.com/Flynsarmy/PHPWebSocket-Chat/blob/master/class.PHPWebSocket.php (thanks to accepted answer)
HttpOnly cookies, as well as secure cookies, work fine with WebSockets.
Some WebSocket modules have chosen to ignore cookies in the request, so you need to read the specs of the module.
Try WebSocket-Node: https://github.com/Worlize/WebSocket-Node.
Make sure to use the secure WebSocket protocol, e.g. wss://xyz.com
Update:
Also, Chrome will not show the cookies in the "inspect element" Network tab.
In node try dumping the request, something like:
wsServer.on('request', function(request) {
    console.log(request);
    console.log(request.cookies); // works in WebSocket-Node
});
If you see the cookies somewhere in the log...you've got it.
If you're using secure-only cookies, you need to be in secure web sockets: wss://
Update2:
The cookies are passed in the initial request. Chrome does not always show them, as it sometimes shows provisional headers, which omit cookie information.
It is up to the websocket server to do 'something' with the cookies and attach them to each request.
Looking at the code of your server, https://github.com/Flynsarmy/PHPWebSocket-Chat/blob/master/class.PHPWebSocket.php, I do not see the word "cookie" anywhere, so the cookies are not being nicely packaged and attached to each WebSocket connection. I could be wrong; you might want to contact the developer and ask whether the whole header is attached to each connection and how to access it.
This I can say for certain: If you're using secure cookies then cookies will not be transmitted unless you use the secure websocket wss://mysite.com. Plain ws://mysite.com will not work.
Also, cookies will only be transmitted in the request if the domain is the same as the webpage.
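If the PHP server can hand you the raw handshake headers, extracting a session cookie yourself is straightforward (a sketch under the assumption that the PHPWebSocket class can be made to expose the raw header block; PHPSESSID is simply PHP's default session cookie name):
<?php
// Pull a named cookie (e.g. the PHP session id) out of a raw
// WebSocket handshake header block.
function cookieFromHandshake($rawHeaders, $name = 'PHPSESSID')
{
    foreach (explode("\r\n", $rawHeaders) as $line) {
        if (stripos($line, 'Cookie:') !== 0) {
            continue;
        }
        // e.g.  Cookie: a=1; PHPSESSID=abc123; b=2
        foreach (explode(';', substr($line, strlen('Cookie:'))) as $pair) {
            list($key, $value) = array_pad(explode('=', trim($pair), 2), 2, null);
            if ($key === $name) {
                return $value;
            }
        }
    }
    return null; // no such cookie in the handshake
}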
I've been trying to receive HTTP requests with custom fields in the headers but it seems like my server removes them...
This is the request that I am sending to the server (I read that request with a HTTP Proxy) :
POST /oauth.php/request_token HTTP/1.1
Host: domain.com
User-Agent: DearStranger/1.0 CFNetwork/485.12.7 Darwin/10.6.0
Authorization: OAuth realm="", oauth_consumer_key="ebb942f0d260b06cb533c6133c28408004d343197", oauth_signature_method="HMAC-SHA1", oauth_signature="qPBFAa8XRRbor2%2F%2FQXv6kU3%2F7jU%3D", oauth_timestamp="1295278460", oauth_nonce="E7D6AC76-74CE-4951-8182-7EBF9B382E7E", oauth_version="1.0"
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Pragma: no-cache
Content-Length: 0
Connection: keep-alive
Proxy-Connection: keep-alive
I printed the headers of the request when it arrives on my page.php. I see this:
uri http://domain.com/oauth.php/request_token
parameters
headers Array
.... Accept : */*
.... Accept-Encoding : gzip, deflate
.... Accept-Language : en-us
.... Connection : keep-alive
.... Host : domain.com
.... User-Agent : DearStranger/1.0 CFNetwork/485.12.7 Darwin/10.6.0
method POST
when I should be seeing this (it works on a local version):
uri http://localhost:8888/oauth.php/request_token
parameters
headers Array
.... Accept : */*
.... Accept-Encoding : gzip, deflate
.... Accept-Language : en-us
.... Authorization : OAuth realm="", oauth_consumer_key="582d95bd45d455fa2e5819f88fc0c5a104d2c7ff3", oauth_signature_method="HMAC-SHA1", oauth_signature="agPSFdtlGxXv2sbrz3pRjHlROOE%3D", oauth_timestamp="1295272680", oauth_nonce="667A133C-5071-48AB-9F13-8146425E46B7", oauth_version="1.0"
.... Connection : keep-alive
.... Content-Length : 0
.... Host : localhost:8888
.... User-Agent : DearStranger/1.0 CFNetwork/485.12.7 Darwin/10.6.0
method POST
I am using PHP 5.2.17 on the server.
Do you have any idea how to fix this issue?
Thanks!
Actually, there is a pretty easy fix. The fault lies with FastCGI. You can just provide an .htaccess file with a rewrite rule to preserve the header.
<IfModule mod_rewrite.c>
...
# Pass Authorization headers to an environment variable
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
</IfModule>
Credit goes here: https://drupal.org/node/1365168
They also talk about an even simpler solution to let these headers pass through, if you are using a virtual host.
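On the PHP side, the header restored by that rewrite can then be read back; depending on the setup it arrives as HTTP_AUTHORIZATION or REDIRECT_HTTP_AUTHORIZATION, so a small helper like this (a sketch, not tied to any framework) covers both:
<?php
// Fetch the Authorization header whether Apache/FastCGI exposed it
// directly or via the rewrite-rule environment variable.
function authorizationHeader()
{
    if (!empty($_SERVER['HTTP_AUTHORIZATION'])) {
        return $_SERVER['HTTP_AUTHORIZATION'];
    }
    if (!empty($_SERVER['REDIRECT_HTTP_AUTHORIZATION'])) {
        return $_SERVER['REDIRECT_HTTP_AUTHORIZATION'];
    }
    return null; // the header was stripped before reaching PHP
}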
Apache strips the Authorization header because it's a security risk when used with CGI. Are you using PHP through CGI?
I think PHP also strips the Authorization header in some circumstances. Again, there's a risk that exposing it to scripts would allow one user's code to sniff other users' credentials on the same server (e.g., if Alice and Bob both have accounts).
Please include the actual names of the headers that are being cut. This question is useless in its present form, forcing us to guess...
Have you checked with Firebug/HTTPFox that your browser is actually sending those headers? Unless you've specifically configured the server to clean up the headers, it will pass through any custom ones as-is.
The Authorization header, which is where the OAuth data gets sent, would ONLY be sent by a client in response to a server-side 401 "authorization required" request. If you haven't added the server-side "must have password to get in" configuration, the client's not going to send auth info.