How to display a dialog if the user can't download a file - PHP

Dealing with PHP/HTML/JavaScript.
I'm trying to figure out a good/best approach to allowing a user to download a file. I can have a traditional href link that interfaces with the back-end PHP app to download the file.
However, I want the app to display some sort of dialog/alert if the user isn't able (based on ACL/permissions) to download the file. Does this have to be an AJAX thing, as I don't want to do a page refresh?
Thoughts/comments/pointers to code samples are appreciated.
Thanks.
-Tom
Hi... more data/information.
In my test, I send the userID/fileID via the query string to the back-end PHP.
The app then confirms the user is the user for the file and that the user has the rights to access it; the query data is matched against data in the DB for the user/file combination.
So the last/critical check occurs on the back end.
Hope this gives a bit more insight into what I'm looking to do/accomplish.
Thanks.
-Tom

AJAX could be a good fit if you're looking to avoid a page refresh, but it doesn't have to be your only option.
Another option that doesn't require AJAX, though it might be cumbersome depending on how your project is designed, is to enable or disable features depending on the user's authorization level.
As a simple example, enable Administrator-only features for Administrators and disable them for normal users.
You don't necessarily have to enable/disable whole features; you could also decide, before the user ever clicks a link, whether or not he/she has the rights to do so - see the sketch below.
With more information on how your project is laid out, we can provide more concise answers.
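For instance, a minimal sketch of that idea, assuming a hypothetical userCanDownload() check against your user/file table and a download.php endpoint (the function, variables, and file name are placeholders, not your actual code):

<?php
// $currentUserId and $fileId come from the session/page context; userCanDownload()
// is a placeholder for the DB lookup against the user/file table.
if (userCanDownload($currentUserId, $fileId)) {
    echo '<a href="download.php?fileID=' . urlencode($fileId) . '">Download</a>';
} else {
    // Render something inert instead, so an unauthorized user never hits the back end at all.
    echo '<span class="download-disabled" title="You do not have permission to download this file">Download</span>';
}

Either way, keep the authoritative check on the server - hiding the link is a usability nicety, not a security measure.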

The easiest method would be to return an HTTP response code of 401 ("Unauthorized") - or 403 ("Forbidden") if the user is logged in but simply lacks the rights. This will cause your web server to display the error page for that status, which you can modify to fit your design.
Or, if you are using AJAX, you can check for the 401/403 response code and pop up a nice alert without taking the user to a different page.
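On the back end, a minimal sketch of what such a download.php endpoint could look like - userCanDownload() and lookupFilePath() are hypothetical helpers standing in for your real DB checks:

<?php
// download.php (sketch) -- helper functions and session keys are placeholders.
session_start();
$userId = $_SESSION['userID'] ?? null;
$fileId = $_GET['fileID'] ?? null;

if ($userId === null) {
    http_response_code(401);            // not logged in at all
    exit('Please log in.');
}

if (!userCanDownload($userId, $fileId)) {
    http_response_code(403);            // logged in, but no rights to this file
    exit('You do not have permission to download this file.');
}

$path = lookupFilePath($fileId);        // map the file ID to a path on disk
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($path) . '"');
readfile($path);

An AJAX request can then inspect the status code and show the dialog, while a plain link click falls back to the server's 401/403 error page.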


Extracting Data from Another Domain, possible?

I'm terrible at keeping track of my bills, so I wanted to create something automated. I also wanted the challenge of making it myself.
My questions:
Is it possible to have a webpage connect to another domain (any utility website, e.g. timewarnercable.com) with the proper login credentials and retrieve the dollar amount I owe, then send me an email or even just display it on the webpage?
I've already got a webpage set up that has all my account info stored in it (don't worry, it's only a local site!) and I can click a button and the info I have stored sends a POST request to the utility's login page. This logs me in to my account page, and then I can view the bill. But I don't want it to open another page... I'd rather load the content of that page in the background, scan the code for where it says the amount I owe, capture that somehow, and return the dollar amount to my webpage.
If so, is it possible to design this with Ruby (Rails) or PHP, with JavaScript/AJAX?
Thanks!
What you're basically asking about is "page scraping", but your scenario is more complicated. You would have to fake the login post, capture and store any cookie/session info returned to you in the response and use that in subsequent requests to the site. You may also have to deal with redirects, depending on the site.
I have found Node.js quite useful for scraping pages, since it has packages that provide DOM selectors (there is a jQuery-like one), and you're using JavaScript for the server-side programming.
Check whether the site has an API; if it does, that will make your life a ton easier.
Some banks, like Bank of America, have applications that already do this - they aggregate your accounts and bills from other sites - so see if your bank offers something similar.
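In PHP, the fake-login-then-scrape flow described above might look roughly like this with cURL. The URLs, form field names, and the regex are placeholders that depend entirely on the target site:

<?php
// Sketch: log in, keep the cookies, then fetch and scrape the billing page.
$cookieJar = tempnam(sys_get_temp_dir(), 'cookies');

// 1. Fake the login POST and capture any session cookies from the response.
$ch = curl_init('https://example-utility.com/login');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query(['user' => 'me', 'pass' => 'secret']),
    CURLOPT_COOKIEJAR      => $cookieJar,   // store cookies returned by the site
    CURLOPT_FOLLOWLOCATION => true,         // follow any post-login redirects
    CURLOPT_RETURNTRANSFER => true,
]);
curl_exec($ch);
curl_close($ch);

// 2. Request the billing page with the stored cookies and scan for the amount.
$ch = curl_init('https://example-utility.com/billing');
curl_setopt_array($ch, [
    CURLOPT_COOKIEFILE     => $cookieJar,
    CURLOPT_RETURNTRANSFER => true,
]);
$html = curl_exec($ch);
curl_close($ch);

if (preg_match('/Amount due:\s*\$([\d.,]+)/', $html, $m)) {
    echo 'You owe $' . $m[1];
}

This is fragile by nature: any change to the site's login form or billing markup breaks it, which is why an official API is so much nicer when one exists.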

Can Webbots delete content from my website using links that are wired to GET requests?

OK, here's my problem: content is disappearing from my site. It's not the most secure site out there, and it has a number of issues. Right now, every time I upload a page that can delete content from my site using simple links wired to GET requests, I find the corresponding content deleted en masse.
For example, I have functionality on my site to upload images. Once a user uploads an image, the admin (the owner) can use another page to delete all (owned) images from the site. The delete functionality is implemented so that when a user clicks the link under a thumbnail, he sends a GET request that deletes the image's information from the site's database and deletes the image from the server file system.
The other day I uploaded that functionality, and the next morning I found all my images deleted. The pages are protected by user authentication when you view them in a browser. To my surprise, however, I could wget that page without any problem.
So I was wondering: could some evil web bot be deleting my content using those links? Is that possible? What do you advise for further securing my website?
It is absolutely possible. Even non-evil web bots could be doing it. The Google bot doesn't know the link it follows has any specific functionality.
The easiest way to possibly address this is to set up a proper robots.txt file to tell bots not to visit specific pages. Start here: http://www.robotstxt.org/
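For example, a robots.txt in the site root that keeps well-behaved crawlers out of the admin/delete pages might look like this (the /admin/ path is a placeholder for wherever your delete links live):

User-agent: *
Disallow: /admin/

Keep in mind robots.txt only stops polite bots; it is advice, not access control.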
RFC 2616 (HTTP protocol), section 9.1.1: Safe Methods:
The convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Basically, if your application allows deletion via GET requests, it's doing it wrong. Bots will follow public links, and they have no reason to expect that following them deletes things; neither do browsers. Even if the links are protected, it could still be browser prefetching or some kind of link-acceleration add-on.
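A rough sketch of what a safer delete endpoint could look like - isAdmin() and deleteImage() are placeholders for your own auth check and cleanup logic:

<?php
// delete_image.php (sketch)
session_start();

// Refuse anything that isn't a POST, so crawlers and prefetchers following links can't trigger it.
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    http_response_code(405);
    exit('Method not allowed');
}

// Enforce authentication on the endpoint itself, not just on the page that shows the links.
if (!isAdmin($_SESSION)) {
    http_response_code(403);
    exit('Forbidden');
}

deleteImage((int) $_POST['imageID']);   // remove the DB row and the file on disk
header('Location: images.php');         // back to the thumbnail page
exit;

Each delete link on the thumbnail page then becomes a small form with a submit button that POSTs the image ID; adding a CSRF token to that form hardens it further.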
Edit: It might also be Bing. Nowadays Internet Explorer sends data to Microsoft about everywhere you go to gather data for its search engine.
Typically, a search bot will scan a page for any links and peek down those links to see what pages are behind them. So yeah, if a bot has access to that page, the page contains links that delete items, and the bot opens those links to see what's behind them, the delete code simply gets triggered.
There are a couple of ways to block bots from scanning pages. Look into robots.txt. Also, you might want to look into the mechanism/safety of your admin authentication system... ;-)
You can use the robots.txt file to block access for some web bots.
And for those that don't look at the robots.txt file, you can also require JavaScript to trigger the action; there shouldn't be many web bots interpreting it.

PHP Web Application Design Basics

I am making a Twitter application using PHP. Please excuse me if this question is elementary. In my application, the initial landing page (index.php) contains the code for login/OAuth. If the user logs in successfully, should I load a new page altogether or simply echo HTML that renders the user's profile page? For example:
if ($loginSuccess) {
    // option 1: load a file that renders the selected user's profile page
    include 'profile.php';
}
or something like
if ($loginSuccess) {
    // option 2: echo HTML that renders a profile page
    echo $profileHtml;
}
If I understand correctly, you're trying to decide what to show the user once they log in. Rather than think what you should show them, what does the user want to do right away? Why do they use your site? If users want to see their profile right off the bat, then do that. If they want to see feed activity, show them that. To start off, you may want to create a simple page that acknowledges they are logged in and give them their major options. Track what users click and see what that tells you. If the vast majority use feature X immediately, then consider loading feature X first. If the users are all over the map, let them pick what they want to do, record it as a preference in their profile, and load that automatically.
In the end, the best thing to show a user when they log in is the first thing they most want to see. :)
I'd recommend looking into the use of some sort of PHP MVC framework.
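Whichever you choose, a common pattern without a framework is to redirect after a successful login rather than echoing the whole profile inline. A minimal sketch, where $loginSuccess, $authenticatedUserId, and profile.php are placeholders for your OAuth result and profile page:

<?php
// index.php (sketch) -- after the login/OAuth step has completed.
session_start();

if ($loginSuccess) {
    $_SESSION['userID'] = $authenticatedUserId;   // whatever your OAuth callback returned
    header('Location: profile.php');              // hand off to a separate page for rendering
    exit;
}

// On failure, fall through and render the login form again.

The redirect keeps index.php focused on authentication and avoids re-running the login logic if the user refreshes the profile page.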

Website screenshot on my server depending on the user

I came upon many similar questions but could not find a simple answer. My goal is to create a thumbnail of my web page on my server for a particular user (depending on the SESSION). The website is dynamic, meaning the content changes for every user, like users' content on Facebook.
What I need to do here is generate a screenshot when a user experiences a problem with the application and clicks the capture button.
I found many options, like:
libwkhtmltox
wkhtmltopdf
but I'm not sure which I should use; please suggest others if they're better.
I have a Linux server, am using core PHP, and have shell access to it.
Please don't refer me to external screenshot services, as they are unable to get the snapshot in my case (as I said, a SESSION variable is maintained for every user).
Please point me to a tutorial if you can.
Thanks in advance.
libwkhtmltox and wkhtmltopdf are both great technologies for capturing images of web pages. However, the problem is that it's really hard, if not impossible, to get these tools to share the same session as your user. Additionally, many errors users experience aren't reproducible on a second request (errors caused by DB connection failures, caching, etc.), so something like this will have limited value. One alternative would be to show a popup, when they click your capture button, that explains how to take a screenshot themselves.
If you absolutely want to go down the path of automating the screenshot, here's a crazy, probably stupidly insecure idea: as wkhtmltopdf is built on WebKit, there are options to set cookies. As long as your PHP session is cookie based, you could pass the user's session_id to wkhtmltopdf and hijack your own user's session, thereby recreating the page when wkhtmltopdf makes the request. I'm so getting downvoted for suggesting this...
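A rough sketch of that idea, assuming wkhtmltoimage is installed on the server and your build supports the --cookie option; the URL and output path below are placeholders:

<?php
// capture.php (sketch) -- called when the user clicks the capture button.
session_start();

$url    = 'https://example.com/page-the-user-had-trouble-with.php';   // placeholder
$output = '/var/screenshots/' . session_id() . '-' . time() . '.png'; // placeholder

// Pass this user's session cookie so wkhtmltoimage sees the same page the user sees.
$cmd = sprintf(
    'wkhtmltoimage --cookie %s %s %s %s',
    escapeshellarg(session_name()),
    escapeshellarg(session_id()),
    escapeshellarg($url),
    escapeshellarg($output)
);

// Release the session lock first, or PHP will block wkhtmltoimage's request on the same session.
session_write_close();
shell_exec($cmd);

This inherits all the caveats above: it effectively hands your own session to another process, and it captures what the page looks like on a fresh request, not the exact state the user saw.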

PHP Redirector Counter for a link

I need a way to count how many times a link is clicked, and I was thinking of creating a PHP script to redirect through and do the counting. Is there a better way to do this? How would I count each time a user visits the link, and would it be best to save the count in a database somewhere? Any suggestions?
Yes, a PHP script is the reliable way - JavaScript, for example, won't fire for every visitor.
So - instead of a link to
http://some.site.com/page2.php
You would link to
http://some.site.com/redirect.php?page2.php
And in redirect.php you would track the hit (for example, in a database) and at the end send this header:
header("Location: http://some.site.com/".$_SERVER["QUERY_STRING"]);
This redirects to the path after the ?.
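A slightly fuller sketch of that redirect.php, assuming a PDO/MySQL connection and a clicks table with a unique key on its page column (all names and credentials below are placeholders); the whitelist check keeps the script from being used as an open redirect:

<?php
// redirect.php (sketch)
$page = $_SERVER['QUERY_STRING'];

// Only redirect to pages we know about.
$allowed = ['page2.php', 'page3.php'];
if (!in_array($page, $allowed, true)) {
    http_response_code(404);
    exit;
}

// Count the click (assumes MySQL and a unique key on clicks.page).
$pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');
$stmt = $pdo->prepare(
    'INSERT INTO clicks (page, hits) VALUES (?, 1)
     ON DUPLICATE KEY UPDATE hits = hits + 1'
);
$stmt->execute([$page]);

header('Location: http://some.site.com/' . $page);
exit;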
Yeah, logs might work... a little bit more work, though, and it is also very server-specific.
I would analyze your web log files as this will work whether it's a static page or a script.
If the page you need to count is a script, you could insert code that updates a table.
Website statistics is a big industry, and there are many free and paid solutions out there to explore and get ideas from.
If you need to track clicks on a specific link, then you'll probably need JavaScript to capture the click and send a notification to a tracking server. If you need to track page views, you're best off looking at your server logs. Remember that a page can have many links pointing to it, so you have to differentiate between link-click events and page impressions. Another possibility, depending on your application, is to use Google Analytics or a similar third-party tracking tool.
