I have come across many similar questions, but I could not find a simple answer. My goal is to save a thumbnail of my web page onto my server for a particular user (depending on the SESSION). The website is dynamic, meaning the content changes for every user, much like each user's feed on Facebook.
What I need to do is generate a screenshot when a user experiences a problem with the application and clicks the capture button.
I found several options, such as
libwkhtmltox
wkhtmltopdf
but I am not sure which one I should use; please also suggest another tool if there is a better option.
I have a Linux server with shell access, and I am using core PHP.
Please don't refer me to external screenshot services, as they cannot capture the page in my case (as I said, a SESSION is maintained for every user).
Please point me to a relevant tutorial.
Thanks in advance
libwkhtmltox and wkhtmltopdf are both great technologies for capturing images of web pages. However, the problem is that it's really hard, if not impossible, to get these tools to share the same session as your user. Additionally, many errors users experience aren't reproducible on a second request (errors caused by DB connection failures, caching, etc.), so doing something like this will have limited value. One alternative would be to show a popup when they click your "send error page snapshot" button that explains how to take a screenshot themselves.
If you absolutely want to go down this path of automating the screenshot, here's a crazy, probably stupidly insecure idea. As wkhtmltopdf is built on WebKit, there are options to set cookies. As long as your PHP session is cookie-based, you could pass the user's session_id to wkhtmltopdf and hijack your own user's session, thereby recreating the page when wkhtmltopdf makes the request. I'm so getting downvoted for suggesting this...
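If you do try it, a minimal sketch might look like the one below. It uses wkhtmltoimage (the image-producing sibling from the same package) and its --cookie page option; the target URL and output path are placeholders, not anything from your setup.

    // capture.php - a sketch only; runs when the logged-in user clicks the capture button.
    session_start();

    $url    = 'http://example.com/page-the-user-was-on';   // hypothetical page to capture
    $output = '/var/www/thumbs/' . session_id() . '.png';  // hypothetical output location

    $command = sprintf(
        'wkhtmltoimage --cookie %s %s %s %s',
        escapeshellarg(session_name()),   // usually PHPSESSID
        escapeshellarg(session_id()),     // reuse this user's own session
        escapeshellarg($url),
        escapeshellarg($output)
    );

    // Release the session lock first, or the wkhtmltoimage request will hang
    // waiting for the session file this script is still holding.
    session_write_close();
    shell_exec($command);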
Related
At the moment, I'm working on a website that could use some extra usability, so I want to show a couple of modal windows to guide users the first time they visit certain pages.
I want to check whether it is a user's first time viewing a specific page. I've read about how you can run into problems when using cookies for this: they can be deleted, the user can use a different PC or device, etc.
Also, I want to check for multiple pages whether it's their first time viewing them, not only directly after login.
I'm guessing a good approach would be to make a separate table listing the pages I care about and set a boolean flag for whether each one has been viewed.
Would this be the best way to go about doing this?
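Roughly what I have in mind (table and column names are placeholders, and $pdo is a PDO connection I already have):

    // Check the per-user, per-page flag; insert it the first time the page is shown.
    $stmt = $pdo->prepare('SELECT viewed FROM page_views WHERE user_id = ? AND page = ?');
    $stmt->execute([$userId, 'dashboard']);
    $alreadyViewed = (bool) $stmt->fetchColumn();

    if (!$alreadyViewed) {
        // First visit: show the modal, then remember it.
        $mark = $pdo->prepare('INSERT INTO page_views (user_id, page, viewed) VALUES (?, ?, 1)');
        $mark->execute([$userId, 'dashboard']);
    }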
There isn't a highly reliable way of doing that:
You can use cookies, but as you said, they are not reliable: a user can change PCs, delete cookies, switch browsers, etc.
You can try using the IP address, but that's also not reliable. If a user switches addresses (which today can happen just by walking down the street with a mobile phone), he'll see the page over and over again. Moreover, if another user happens to end up with the IP address the first user had, he won't see your tour/tutorial at all.
What I can suggest is to use cookies to detect whether the user is new, but don't automatically throw the help modals at him; instead, prompt him with a non-obstructive toolbar at the top or bottom (never a popup window or lightbox).
That way, you cover most users (many people use the same browser and computer and rarely delete all their cookies), and even a user who has deleted his cookies won't be disturbed that much.
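A rough sketch of the cookie part, just to illustrate (the cookie name and page identifier are arbitrary, and setcookie() has to run before any output):

    // Remember which pages this browser has already "seen" in a single cookie.
    $seen = isset($_COOKIE['seen_pages']) ? explode(',', $_COOKIE['seen_pages']) : array();
    $page = 'dashboard';   // identifier of the current page

    if (!in_array($page, $seen, true)) {
        $seen[] = $page;
        setcookie('seen_pages', implode(',', $seen), time() + 365 * 24 * 60 * 60, '/');
        // Render the non-obstructive "need a tour?" toolbar here instead of a modal.
    }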
There is no reliable approach if the user is not registered and logged in with a username and password.
As mentioned before, there is no reliable way of detecting users (or of detecting whether a user is visiting the site for the first time). I also recommend Madara Uchiha's approach. You could additionally use HTML5 local storage alongside cookies, though neither is 100% reliable.
You can, however, try user recognition without relying on cookies or HTML5 storage, but it is extremely complicated; you don't want to do this.
Just to satisfy your curiosity about how to do this, check this epic answer on a related question:
User recognition without cookies or local storage
I don't think this is a problem with no solution at all. A possible approach consists of a few signals that first need to be defined; by considering those, we can talk about what is and isn't possible.
My parameters are below:
the features of a web page that can serve as "user detection" signals, spelled out in detail
the user's reactions on the page (for example, how quickly they click on elements, or whether they click at all)
whether they inspect elements
URL injection
other interactions, such as clicking on specific spots placed on the page
how long they stay on the page, compared against a time threshold defined for the check
and so on, building solutions from signals like the ones above.
For some reason I need to process PHP behind the scenes, but without using AJAX (I know that might sound silly to you). I need this because I am getting the content dynamically through another page load.
Using PHP's cURL functions, I can load the login page of a website inside my 1.php file. But then I use JavaScript to set the form values and hit login, and it takes me to the site's URL (no longer localhost/1.php). So the question is: I need to somehow store the content of the page I am redirected to and retrieve it.
The impression I got was that you have a resource-intensive process which should perform some action in the background while the user still interacts with the page.
It actually would make more sense to do this with some sort of service (a shell script or standalone application), but it is possible to do with PHP: you would need to fork [1] [2] the process. Just don't forget to check whether such a process is already running on the system.
This works pretty well in combination with XHR (also known as AJAX by the marketing department), because you can kick off the process with one request, then repeatedly check its status, and collect the data once the status is "done".
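A rough sketch of that pattern, with made-up file names and paths, and a hypothetical worker.php that overwrites the status file with "done" when it finishes:

    // start.php - kicks off the long-running worker and returns immediately.
    $statusFile = '/tmp/long_job.status';
    if (!file_exists($statusFile) || trim(file_get_contents($statusFile)) !== 'running') {
        file_put_contents($statusFile, 'running');
        shell_exec('php /path/to/worker.php > /dev/null 2>&1 &');   // detach from this request
    }
    echo 'started';

    // status.php - polled repeatedly via XHR until it returns "done".
    header('Content-Type: text/plain');
    echo file_exists('/tmp/long_job.status') ? file_get_contents('/tmp/long_job.status') : 'not started';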
Since we're all taking stabs in the dark, here's what I think you're trying to do (let me know if I'm way off):
You have a site (let's call it userfriendly.org) and you are trying to add an interface of some kind to another site (we'll call this site mean-corp.com). Essentially, when you load the page, you use cURL to fetch some of the data from mean-corp.com so that your users can log in and get some info without having to deal with their site (maybe it's ugly, maybe it just fits really well into your site, whatever).
You are able to get to the site okay to get whatever initial data you need, but when you try to pass in the user login and password to actually get their info, it's redirecting back to the login URL for the site.
Long story short, you are trying to make a front-to-back web service for another site, but you're running into hiccups with redirects and whatnot?
Am I totally off? If not, I've made similar attempts in the past for my own noble reasons, and I could pass along some tips, as I'm sure others can.
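For example, the usual shape of those attempts is to submit the login form with cURL as well (keeping cookies in a jar and following the redirect) rather than letting the browser post it, and to hold on to the response yourself. A rough sketch, where the URL and field names are guesses and $user/$pass come from wherever you collect them:

    // Post the login form server-side and keep the resulting page as a string.
    $ch = curl_init('http://mean-corp.com/login');             // hypothetical login action URL
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => http_build_query(array(
            'username' => $user,                               // field names are guesses
            'password' => $pass,
        )),
        CURLOPT_RETURNTRANSFER => true,                        // return the body instead of printing it
        CURLOPT_FOLLOWLOCATION => true,                        // follow the post-login redirect
        CURLOPT_COOKIEJAR      => '/tmp/curl_cookies.txt',     // keep the remote session cookie
        CURLOPT_COOKIEFILE     => '/tmp/curl_cookies.txt',
    ));
    $pageAfterLogin = curl_exec($ch);
    curl_close($ch);
    // Store $pageAfterLogin wherever suits you (session, file, DB) and retrieve it later.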
But if I'm totally off, sorry for the distraction.
I want to create a private URL such as
http://domain.com/content.php?secret_token=XXXXX
Then, only visitors who have the exact URL (e.g. received by email) can see the page. We check the $_GET['secret_token'] before displaying the content.
My problem is that if by any chance search bots find the URL, they will simply index it and the URL will be public. Is there a practical method to avoid bot visits and subsequent index?
Possible But Unfavorable Methods:
A login system (e.g. using PHP sessions): but I do not want to offer user login.
A password-protected folder: same problem as above.
Using robots.txt: many search engine bots do not respect it.
What you are talking about is security through obscurity. It's never a good idea. If you must, I would offer these thoughts:
Make the link expire (a rough sketch of this follows the list)
Lock the link to the class C or class D range of the IP address it was first accessed from
Have the page challenge the user with something like a logic question before forwarding to the real page with a time-sensitive token (a two-step process), and if the challenge fails, send a 404 back so the crawler stops.
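A rough sketch of the expiring link (the secret is a placeholder, and hash_equals() needs PHP 5.6+): the expiry time is baked into the token and signed, so no extra storage is needed.

    // When generating the link to send out:
    $secret  = 'change-me';                                    // hypothetical server-side secret
    $expires = time() + 86400;                                 // link valid for 24 hours
    $token   = $expires . '.' . hash_hmac('sha256', (string) $expires, $secret);
    $url     = 'http://domain.com/content.php?secret_token=' . urlencode($token);

    // In content.php, before displaying anything:
    $raw = isset($_GET['secret_token']) ? $_GET['secret_token'] : '';
    list($expires, $sig) = array_pad(explode('.', $raw, 2), 2, '');
    $expected = hash_hmac('sha256', (string) $expires, $secret);
    if (!hash_equals($expected, $sig) || time() > (int) $expires) {
        header('HTTP/1.0 404 Not Found');                      // fail as if the page doesn't exist
        exit;
    }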
Try generating a 5-6 character alphanumeric password and attaching it to the email, so even if robots spider the URL, they still need the password to access the page. (Just an extra safety measure.)
If there is no link to it (and the folder has no index view), the robot won't find it.
You could return a 404 if the token is wrong: this way, a robot (and anyone else who doesn't have the token) will think there is no such page.
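A minimal sketch of that, assuming you keep the issued tokens somewhere server-side:

    // Pretend the page does not exist unless the token matches an issued one.
    $validTokens = array('XXXXX');                             // hypothetical list of issued tokens
    $token = isset($_GET['secret_token']) ? $_GET['secret_token'] : '';

    if (!in_array($token, $validTokens, true)) {
        header('HTTP/1.0 404 Not Found');
        exit;                                                  // bots (and guessers) see nothing here
    }
    // ...private content below...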
As long as you don't link to it, no spider will pick it up. And, since you don't want any password protection, the link is going to work for everyone. Consider disabling the secret key after it is used.
You only need to tell the search engines not to index /content.php; search engines that honor robots.txt won't index any URLs that start with /content.php.
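For reference, the robots.txt entry (at the web root) would just be:

    User-agent: *
    Disallow: /content.php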
Leaving the link unpublished will be ok in most circumstances...
...However, I will warn you that the prevalence of browser toolbars (Google and Yahoo come to mind) changes the game. One company I worked for had pages from their intranet indexed in Google. You could search for the pages, and a few results came up, but you couldn't access them unless you were inside our firewall or VPN'd in.
We figured the only way those links got propagated to Google had to be through the toolbar. (If anyone else has a better explanation, I'd love to hear it...) I've been out of that company a while now, so I don't know if they ever figured out definitively what happened there.
I know, strange but true...
Is it possible to get the remote username when I receive a referral link, without involving any server-side code on the referring site?
Do you mean like if I clicked a link to your site on Stack Overflow, you would want to be able to see that my username is "Agent Conundrum"? No, you can't do that without the help of the referring site. The only information you should be able to get is the (permanently misspelled) HTTP_REFERER in the $_SERVER superglobal array, which tells you the page the user came from. Even then, there are ways to block or change this so you shouldn't count on it being set (especially since it wouldn't be set if the user navigated directly to your page via the address bar).
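Just to illustrate how little you get, a minimal sketch of reading it (the header may simply be missing):

    // The referring page is the most you can read, and it may be absent or spoofed.
    $referrer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : null;
    if ($referrer !== null) {
        // e.g. "https://stackoverflow.com/questions/1234567" - no username in sight
        error_log('Visitor arrived from: ' . $referrer);
    }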
Frankly, I wouldn't want to use a site that leaked personal information (and for some sites, even the username qualifies as personal information), and I wouldn't want to use a site that tries to harvest such leaked information without my knowledge.
Generally, any site where you have a legitimate reason to broadcast this information would have some sort of API built in, like FacebookConnect. Even then, it should be strictly opt-in for the user.
As a general thing: no. The HTTP protocol does not involve the transmission of a remote user name.
Hey, it would help us answer if you were a little more specific about which kind of service you are trying to fetch the data from.
Large/public services tend to have some kind of accessible API you can hook into for your referrer, but beyond that you mostly need to scrape the site with regular expressions and know the structure of its HTML pretty well.
Dealing with PHP/HTML/JavaScript.
I'm trying to figure out a good/best approach to allowing a user to download a file. I can have the traditional href link that interfaces with the back-end PHP app to download the file.
However, I want the app to display some sort of dialog/alert if the user isn't able (based on ACLs/permissions) to download the file... does this have to be an AJAX thing, since I don't want to do a page refresh?
Thoughts/comments/pointers to code samples are appreciated.
Thanks.
-tom
Hi... some more data/information:
In my test, I send the userID/fileID via the query string to the back-end PHP.
The app then confirms the user is the owner of the file and has the rights to access it; the query data is matched against data in the DB for that user/file combination.
So the last/critical check occurs on the back end.
Hope this gives a bit more insight into what I'm looking to do/accomplish.
Thanks.
-tom
AJAX could be a good technology to use if you're looking for a workaround that avoids refreshing the page, but it doesn't have to be your only option.
Another option that doesn't require AJAX, though it might be cumbersome depending on how your project is designed, is to enable or disable features depending on the user's authentication level.
As a simple example, enable features only related to Administrators and disable Administrator features for normal users.
You don't necessarily have to enable/disable features; you could also decide, before the user clicks a link, whether or not he/she has the rights to do so.
With more information on how your project is laid out, we can provide more concise answers.
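As a sketch of the "decide before the click" idea, where userCanDownload() is a hypothetical helper that checks the user/file combination against your DB, the session is already started, and $fileID identifies the file being listed:

    if (userCanDownload($_SESSION['userID'], $fileID)) {
        echo '<a href="download.php?fileID=' . urlencode($fileID) . '">Download</a>';
    } else {
        // Render the link disabled with an explanation instead of a dead end.
        echo '<span class="disabled" title="You do not have permission to download this file">Download</span>';
    }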
The easiest method would be to return an HTTP response code of 401 ("Unauthorized"). This will cause your web server to display the 401 error page, which you can modify to fit your design.
Or, if you are using AJAX, then you can check for a 401 response code and pop up a nice alert for them without taking them to a different page.
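A minimal sketch of the download endpoint along those lines, where hasAccess() and lookupFilePath() are hypothetical helpers backed by the DB checks described in the question:

    // download.php - send the file, or a 401 that an AJAX caller can detect.
    session_start();

    $fileID = isset($_GET['fileID']) ? $_GET['fileID'] : null;

    if (!hasAccess($_SESSION['userID'], $fileID)) {
        header('HTTP/1.1 401 Unauthorized');
        exit('You are not allowed to download this file.');
    }

    $path = lookupFilePath($fileID);                           // resolve the real path from the DB
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . basename($path) . '"');
    readfile($path);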