Submit a form as part of a cron job - php

I have a URL that I need to visit as part of a wider process on a project. I know it works when I am logged in, but as part of a cron job I obviously wouldn't be. If it were htaccess protection I would simply use curl or wget and pass the username and password parameters it accepts.
I have tried this already on this particular cron, but it didn't seem to perform the task the URL is associated with. See the example below:
curl -u username:password http://www.example.com (I would usually have > /dev/null 2>&1 as part of the cron, but I wish to see the output for now.)
The problem, however, is that this page sits behind a form login, and I am unsure how to pass parameters to that form from a cron job.
Any help or advice would be greatly appreciated.

Using curl:
You will need to pass the form login parameters, probably using the POST method. Check the form's HTML to be sure.
To do a POST request with curl, see https://superuser.com/questions/149329/what-is-the-curl-command-line-syntax-to-do-a-post-request.
This might not work for some forms that implement CSRF protection. To work around this, you would need to parse the HTML, find the CSRF token, and pass it as one of the POST request's data parameters.
Next, the login most likely returns a cookie. Your browser normally saves this, and gives the cookie back to the website on each page request. You will need to specify a cookie file. See Send cookies with curl.
It may take some investigation to work around more complicated login schemes, depending on the website.
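If you drive the login from a PHP script instead (cron can run it with the php CLI), a rough sketch might look like this. The URLs and the username/password field names are placeholders; inspect the real form's HTML for the correct action URL and input names:
<?php
// Hypothetical sketch: log in through the site's form, keep the session
// cookie, then visit the protected URL.
$cookieJar = '/tmp/cron-session-cookies.txt';

// 1. POST the login form, storing any cookies the site sets.
$ch = curl_init('http://www.example.com/login');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query([
    'username' => 'myuser',      // must match the form's input names
    'password' => 'mypassword',
]));
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieJar);    // write cookies here
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);     // follow the post-login redirect
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);

// 2. Request the protected page, sending the stored cookies back.
$ch = curl_init('http://www.example.com/protected-task');
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieJar);   // read cookies from here
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
echo curl_exec($ch);
curl_close($ch);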
Using an automated web-browser
The much easier alternative is to use an automated browser, such as Selenium WebDriver. There are scripting interfaces you can use, like Capybara (a Ruby gem). Using Capybara and Selenium to control a browser, you can avoid any techniques websites might use that make curl difficult (e.g. if they detect and block bots).
The disadvantage is that you need to install it. However, once you do, you can use simple commands to do things, e.g. visit('http://www.google.com'), click_link('Link Text'), ...
Also, see:
require 'capybara'
session = Capybara::Session.new(:webkit, my_rack_app)
session.within("//form[@id='session']") do
  session.fill_in 'Email', :with => 'user@example.com'
  session.fill_in 'Password', :with => 'password'
end
session.click_button 'Sign in'

Related

How to make sure AJAX is called by JavaScript?

I asked a similar question before, and the answer was simply:
if JavaScript can do it, then any client can do it.
But I still want to find a way to restrict AJAX calls to JavaScript.
The reason is:
I'm building a web application; when a user clicks on an image, tagged like this:
<img src='src.jpg' data-id='42'/>
JavaScript calls a PHP page like this:
$.ajax("action.php?action=click&id=42");
then action.php inserts rows into the database.
But I'm afraid that some users could automate entries that "click" all the IDs, simply by calling the necessary URLs, since they are visible in the source code.
How can I prevent such a thing and make sure it works only on a click, not by calling the URL from a browser tab?
P.S.
I think a possible solution would be to use encryption: generate a key on the user's visit and call the action page with that key, or a hash/md5sum/whatever of it. But I think it can be done without turning it into a security problem. Am I right? Moreover, I'm not sure this method is a solution, since I don't know anything about this kind of security or its implementation.
I'm not sure there is a 100% secure answer. A combination of a server-generated token inserted into a hidden form element and anti-automation techniques, like limiting the number of requests over a certain time period, is the best thing I can come up with.
[EDIT]
Actually, a good solution would be to use CAPTCHAs.
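Short of a CAPTCHA, a minimal sketch of the token-plus-rate-limiting idea in PHP, assuming sessions are available; the key names (click_token, clicks) and the thresholds are invented for illustration:
<?php
session_start();

// When rendering the page: issue a one-time token and embed it in a
// hidden form field (or hand it to the JavaScript making the call).
$_SESSION['click_token'] = bin2hex(random_bytes(16));
// echo '<input type="hidden" name="token" value="' . $_SESSION['click_token'] . '">';

// In action.php: reject requests whose token doesn't match, then apply
// a crude per-session rate limit.
if (($_REQUEST['token'] ?? '') !== ($_SESSION['click_token'] ?? null)) {
    http_response_code(403);
    exit('bad token');
}
$_SESSION['clicks'] = ($_SESSION['clicks'] ?? 0) + 1;
if ($_SESSION['clicks'] > 20) {   // arbitrary threshold for illustration
    http_response_code(429);
    exit('too many requests');
}
// ...record the click...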
Your question isn't really "How can I tell AJAX from non-AJAX?" It's "How do I stop someone inflating a score by repeated clicks and ballot stuffing?"
In answer to the question you asked, the answer you quoted was essentially right. There is no reliable way to determine whether a request is being made by AJAX, a particular browser, a cURL session, or a guy typing raw HTTP commands into a telnet session. We might see a browser or a script or an app, but all PHP sees is:
GET /resource.html HTTP/1.1
Host: www.example.com
If there's some convenience reason for wanting to know whether a request was AJAX, some JavaScript libraries such as jQuery add an additional HTTP header to AJAX requests that you can look for, or you could manually add a header or include a field in your payload such as AJAX=1. Then you can check for those server-side and take whatever action you think should be taken for an AJAX request.
Of course, there's nothing stopping me from using cURL to make the same request with the right headers set so the server thinks it's an AJAX request. You should therefore only use such tricks where knowing whether the request was AJAX merely helps you format the response properly (send an HTML page if it's not AJAX, or JSON if it is). The security of your application can't rely on such tricks, and if the design of your application requires telling AJAX from non-AJAX for security or reliability reasons, then you need to rethink the design of your application.
In answer to what you're actually trying to achieve, there are a couple of approaches. None are completely reliable, though. The first approach is to deposit a cookie on the user's machine on the first click, and to ignore any subsequent requests from that user agent that carry the cookie. It's a fairly simple, lightweight approach, but it's also easily defeated by simply deleting the cookie, or refusing to accept it in the first place.
Alternatively, when the user makes the AJAX request, you can record some information about the requesting user agent along with the fact that a click was submitted. For example, store a hash (computed with something stronger than MD5!) of the client's IP and user agent string, along with a timestamp for the click. If you see a lot of the same hash with closely grouped timestamps, then abuse is possibly being attempted. Again, this trick isn't 100% reliable, because user agents can send any string they want as their user agent string.
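A sketch of that logging idea; the $pdo connection, the clicks table, and the thresholds are hypothetical:
<?php
// Fingerprint the client and log each click with a timestamp so that
// closely grouped repeats from one fingerprint can be flagged.
$fingerprint = hash('sha256', $_SERVER['REMOTE_ADDR'] . '|' . ($_SERVER['HTTP_USER_AGENT'] ?? ''));

$stmt = $pdo->prepare('INSERT INTO clicks (fingerprint, image_id, clicked_at) VALUES (?, ?, NOW())');
$stmt->execute([$fingerprint, (int) $_GET['id']]);

// Flag possible automation: many clicks from one fingerprint in a short window.
$stmt = $pdo->prepare('SELECT COUNT(*) FROM clicks WHERE fingerprint = ? AND clicked_at > NOW() - INTERVAL 1 MINUTE');
$stmt->execute([$fingerprint]);
if ($stmt->fetchColumn() > 10) {   // arbitrary threshold
    // likely abuse -- ignore or log this request
}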
Use the POST method instead of GET. Read the documentation at http://api.jquery.com/jQuery.post/ to learn how to use the POST method in jQuery.
You could, for example, implement a check that the request was really made with AJAX, and not by just calling the URL:
if (!empty($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest') {
    // Yay, it is AJAX!
} else {
    // no AJAX, man..
}
This solution may need more reflection, but it might do the trick.
You could use tokens, as stated in Slicedpan's answer. When serving your page, you would generate a UUID for each image and store the UUIDs in your session / database.
Then serve your HTML as
<img src='src.jpg' data-id='42' data-uuid='uuidgenerated'/>
Your ajax request would become
$.ajax("action.php?action=click&uuid=uuidgenerated");
Then on the PHP side, check for the UUID in your memory/database, and allow or deny the transaction. (You can also check for the custom headers sent with AJAX, as stated in other responses.)
You would also need to purge UUIDs: on token lifetime, on window unload, on session expiry...
This method won't tell you whether the request came from an XHR, but it will let you limit the number of requests.
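A rough PHP sketch of that flow, using the session as the token store; the function and key names are invented:
<?php
session_start();

// When rendering the page: one token per image, remembered in the session.
function makeToken($imageId) {
    $uuid = bin2hex(random_bytes(16));           // stand-in for a real UUID
    $_SESSION['img_tokens'][$uuid] = $imageId;
    return $uuid;
}
// echo "<img src='src.jpg' data-id='42' data-uuid='" . makeToken(42) . "'/>";

// In action.php: honour the click only if the UUID was issued by us,
// then discard it so it cannot be replayed.
$uuid = $_GET['uuid'] ?? '';
if (isset($_SESSION['img_tokens'][$uuid])) {
    $imageId = $_SESSION['img_tokens'][$uuid];
    unset($_SESSION['img_tokens'][$uuid]);       // purge: one use only
    // ...insert the click for $imageId into the database...
} else {
    http_response_code(403);
}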

PHP cURL "CURLOPT_USERPWD" or best way to protect the API?

In PHP cURL usage, how does CURLOPT_USERPWD actually work? I can see it in many examples, like:
curl_setopt($ch, CURLOPT_USERPWD, "my_username:my_password");
...but what do I do on the server side? What actually are that username and password? Since I want to protect my PHP API page on the server side, is this the best way, or what is the best way to protect it?
Ideally, you would have a separate config class that contains your username and password. You would then instantiate that config class in the file with the cURL commands and pass the username and password as variables to them. The cURL information will not be visible anyway, as long as it is not echoed or printed; even if you put the credentials in directly they would be hidden, but keeping them in a config file lets you change the values without touching the main page that contains the cURL commands.
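A sketch of that arrangement (file and class names are invented); note that CURLOPT_USERPWD supplies credentials for HTTP authentication, Basic auth by default:
<?php
// config.php -- hypothetical file kept outside the public document root
class ApiConfig {
    public $username = 'my_username';
    public $password = 'my_password';
}

<?php
// api_call.php -- the file with the cURL commands
require '/path/outside/webroot/config.php';
$config = new ApiConfig();

$ch = curl_init('https://api.example.com/endpoint');
// CURLOPT_USERPWD sends "username:password" for HTTP authentication
// (Basic unless CURLOPT_HTTPAUTH says otherwise).
curl_setopt($ch, CURLOPT_USERPWD, $config->username . ':' . $config->password);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);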

AJAX security and user management

I am working on a web application that will be hosted on a server that is "on the internet", not a LAN.
The app uses quite a bit of AJAX calls and has about 12 ajax handler files for the functions.
My question: instead of asking anybody here to write a tutorial on AJAX security, does anybody know of any good resources (website, book, whatever) that can help me secure these files?
Right now, as long as you know the variable name it's looking for, you can freely get data from the database.
I was thinking maybe session validation, or something along those lines for the logged in user.
Anyways if you have any good resources I'll do the homework myself.
Thanks
AJAX calls are generally used to access web services, which is what it seems you are using them for here. If that is the case then what you need to be concerned about is the security layer that you have provided in the server-side scripting language you are using (looks like you are using PHP as per your question's tags).
The same way that you do authentication and protection for other pages on your site that aren't accessed via AJAX calls you can implement for your web services. For instance, if you require authentication for your application then you can store the user's ID in $_SESSION. From there you can check to make sure the user is logged in via $_SESSION whenever one of your web services is requested.
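As a minimal sketch (the user_id session key is a placeholder), each of the AJAX handler files could begin with a guard like this:
<?php
session_start();

// Reject anyone who hasn't logged in through the normal flow.
if (empty($_SESSION['user_id'])) {
    http_response_code(401);
    exit(json_encode(['error' => 'not authenticated']));
}
// ...the handler's real work continues here for logged-in users...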
I've often seen AJAX calls that check the X-REQUESTED-WITH HTTP header to "verify" that the request originated from AJAX. Depending on how you're sending your AJAX calls (with XmlHttpRequest or a JS library), you can either use the standard value for this header, or set it to a custom value. That way, you can do something similar to this in PHP to check if the page was requested with AJAX:
http://davidwalsh.name/detect-ajax
if (!empty($_SERVER['HTTP_X_REQUESTED_WITH']) &&
    strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest') {
    // request was (probably) made via AJAX
}
It is important to note that since it's an HTTP header, it can be spoofed, so it is by no means foolproof.
Here is a good resource: Securing Ajax Applications: Ensuring the Safety of the Dynamic Web.
However, a very simple method is to use an MD5 hash with a private key, e.g. USER_NAME+PRIVATE_KEY. If you know the user's name on the website/login, you can provide that key in an MD5 hash set to a JavaScript variable. Then simply pass the user's name in your AJAX request, and the REST service can take the same private key plus the user's name and compare the two hashes. You're only sending across a hash and the user name. It's simple and effective, and virtually impossible to reverse unless you have a simple private key.
So in your javascript you might have this set:
var user='username';
var hash='925c35bae29a5d18124ead6fd0771756';
Then, when you send your request you send something like this:
myService.php?user=username&hash=925c35bae29a5d18124ead6fd0771756&morerequests=goodthings
When you check it in the service, you would do something like this:
<?php
if (md5($_REQUEST['user'] . "_privatekey") == $_REQUEST['hash']) {
    echo 'passed validation';
} else {
    echo 'sorry charlie';
}
?>
Obviously you would need to use PHP or something else to generate the hash with the private key, but I think you get the general idea. _privatekey should be something complex in case you do get a troll who tries to hack it.
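For completeness, a sketch of the page-side PHP that would embed those JavaScript variables (on the service side, hash_equals() is a safer comparison than == where available):
<?php
// When rendering the page for a known, logged-in user:
$user = 'username';
$hash = md5($user . '_privatekey');   // same recipe the service checks against
echo "<script>var user='" . $user . "'; var hash='" . $hash . "';</script>";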

how to prevent 'manual execution' of external PHP script

In simplest terms, I utilize external PHP scripts throughout my client's website for various purposes such as getting search results, updating content, etc.
I keep these scripts in a directory:
www.domain.com/scripts/scriptname01.php
www.domain.com/scripts/scriptname02.php
www.domain.com/scripts/scriptname03.php
etc..
I usually execute them using jQuery AJAX calls.
What I'm trying to do is find a piece of code that will detect (from within) whether these scripts are being executed from a file via AJAX or MANUALLY via the URL by the user.
IS THIS POSSIBLE??
I have searched absolutely everywhere and tried various methods involving the $_SERVER[] array, but still no success.
What I'm trying to do is find a piece of code that will detect (from within) whether these scripts are being executed from a file via AJAX or MANUALLY via the URL by the user.
IS THIS POSSIBLE??
No, not with 100% reliability. There's nothing you can do to stop the client from simulating an Ajax call.
There are some headers you can test for, though, namely X-Requested-With. They would prevent an unsophisticated user from calling your Ajax URLs directly. See Detect Ajax calling URL
Most AJAX frameworks will send an X-Requested-With: header. Assuming you are running on Apache, you can use the apache_request_headers() function to retrieve the headers and check for it/parse it.
Even so, there is nothing preventing someone from manually setting this header - there is no real 100% foolproof way to detect this, but checking for this header is probably about as close as you will get.
Depending on what you need to protect and why, you might consider requiring some form of authentication, and/or using a unique hash/PHP sessions, but this can still be reverse engineered by anyone who knows a bit about Javascript.
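For instance, the header check mentioned above might be sketched as:
<?php
// Under Apache; on other SAPIs, $_SERVER['HTTP_X_REQUESTED_WITH'] carries
// the same information. Remember this header can be forged.
$headers = apache_request_headers();
if (isset($headers['X-Requested-With']) && strtolower($headers['X-Requested-With']) === 'xmlhttprequest') {
    // looks like an AJAX call
} else {
    exit('direct access not allowed');
}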
As an idea of things you can verify: if you check all of these before servicing a request, it will afford a degree of certainty (although not much, and none if someone is deliberately trying to circumvent your system); a sketch combining them appears after the list:
Store a unique hash in a session value, and require it to be sent back to you by the AJAX call (in a cookie or a request parameter) so you can compare them on the server side to verify that they match
Check the X-Requested-With: header is set and the value is sensible
Check that the User-Agent: header is the same as the one that started the session
The more things you check, the more chance an attacker will get bored and give up before getting it right. Equally, the more time and system resources it will take to service each request...
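A sketch combining those three checks; the session keys and parameter names are invented:
<?php
session_start();

// 1. A unique hash issued earlier in the session must be echoed back.
$okHash = isset($_SESSION['ajax_hash'])
    && ($_REQUEST['hash'] ?? '') === $_SESSION['ajax_hash'];

// 2. The X-Requested-With header should be present and sensible.
$okHeader = strtolower($_SERVER['HTTP_X_REQUESTED_WITH'] ?? '') === 'xmlhttprequest';

// 3. The User-Agent should match the one that started the session.
$okAgent = ($_SERVER['HTTP_USER_AGENT'] ?? '') === ($_SESSION['first_user_agent'] ?? '');

if (!($okHash && $okHeader && $okAgent)) {
    http_response_code(403);
    exit;
}
// ...service the request...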
There is no 100% reliable way to prevent a user, if he knows the address of your request, from invoking your script.
This is why you have to authenticate every request to your script. If your script is only meant to be called by authenticated users, check for that authentication again inside the script. Treat the request as you would treat incoming user input: validate and sanitize everything.
On Edit: The same could be said for any script which the user can access through the URL. For example, consider profile.php?userid=3

How to deny direct access to files in AJAX directory

I have several pages that pull in content via jQuery .ajax. I don't want the content visible on the page, so that's why I went with .ajax rather than showing/hiding the content. I want to protect the files inside the AJAX directory from being directly accessible through the browser URL. I know that PHP headers can be spoofed, and I don't know whether it is better to use an "access" key or to do it via htaccess.
My question is: which is the more reliable method? There is no logged-in/non-logged-in user status, and the main pages need to be able to pull in content from the pages in the AJAX directories.
thx
Make a temporary, time-coded session variable. Check the variable in the PHP output file before echoing the data.
OR, if you don't want to use sessions, do this:
$key = base64_encode(time() . 'abcd');
In the read file:
base64_decode the key, explode it by 'abcd', and read the time. Allow a 5-second buffer: if the time falls within 5 seconds of the stamped request, the request is legit.
To make it more secure, you can change your encrypting / decrypting mechanism.
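A sketch of that scheme (note that base64 is an encoding rather than encryption, so on its own it only deters casual poking; 'abcd' is the separator from the answer above):
<?php
// Issuing side: stamp the key with the current time.
$key = base64_encode(time() . 'abcd');

// Reading side: unpack the stamp and allow a 5-second window.
$parts = explode('abcd', base64_decode($_GET['key'] ?? ''));
$stamped = (int) $parts[0];
if (abs(time() - $stamped) <= 5) {
    // legit -- echo the data
} else {
    http_response_code(403);
    exit('stale or missing key');
}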
I would drop this idea because there is no secure way to do it.
Your server will never be able to tell a "real" AJAX request apart from a "faked" one, as every aspect of the request can be forged on the client side. An attacker just has to look at a packet filter to see what requests your page makes; it is trivial to replicate them.
Any solution you work out will do nothing but provide a false sense of security. If you have data you need to keep secret, you will need to employ more effective protection, such as authentication.
Why not keep the content outside the web server's document root, and have a PHP script that validates whether the person should see it and then sends it to them?
So you have getcontent.php, which can look at a cookie, or at a token that was given to the JavaScript on the page and is used to make the request; it then fetches the real content, sets the MIME type, and streams it to the user.
This way you can change your logic about who should have access without changing the rest of your application.
To the browser there is no real difference between http://someorg.net/myimage.gif and http://someorg.net/myscript.php?token=887799&img_id=ddtw88, but obviously it will need to work with GET, so a time-limited value is necessary, as the user can see and reuse it.
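A rough sketch of such a getcontent.php; the token check and the file map are placeholders:
<?php
session_start();

// Validate the short-lived token handed to the page's JavaScript.
if (($_GET['token'] ?? '') !== ($_SESSION['content_token'] ?? null)) {
    http_response_code(403);
    exit;
}

// Map the requested id to a file stored outside the web root.
$files = ['ddtw88' => '/var/private-content/myimage.gif'];  // hypothetical map
$id = $_GET['img_id'] ?? '';
$path = $files[$id] ?? null;
if ($path === null || !is_file($path)) {
    http_response_code(404);
    exit;
}

header('Content-Type: image/gif');            // set the real MIME type here
header('Content-Length: ' . filesize($path));
readfile($path);                              // stream it to the user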
