I have a form; let's say it's on index.php. On submit, it goes through process.php, then finishes on thanks.php. I want to trigger an Analytics Event so I can track my form (no, I don't want to use a URL Destination goal). If I put the Event code on thanks.php, I'm concerned that if people refresh the page, it will re-trigger and give me bad results. So, I want it to trigger from process.php somehow. At the end of process.php, I set the header like this:
header('Location: /thanks.php');
This is the code for triggering the Event:
_gaq.push(['_trackEvent', 'ClaimDomain', 'ConfirmationPage']);
Is there some way of sending the JavaScript call along with the header, so that it only gets processed as we leave process.php, and not when we refresh thanks.php?
Thanks for your help :)
Not really; PHP is server-side, JavaScript is client-side.
From reading about this though, it sounds like it may be a non-issue:
Unique events are incremented by unique actions
Any time a user interacts with an object tagged with a particular
action name, the initial interaction is logged as one unique event for
that action name. Any additional interaction with the same action
trigger for that user's session will not contribute to the unique
event calculation for that particular action. This is true even if the
user leaves that object and begins to interact with another object
tagged via the same action name.
Source: Event Tracker Guide - Actions.
So GA will know if they happen to land on the page twice and it will not count as 2 unique events.
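If you'd still like a belt-and-braces guard against refreshes on thanks.php, here is a minimal client-side sketch. It assumes the classic _gaq async snippet is already loaded on the page; the sessionStorage key name is my own invention.

```javascript
// Fire the event at most once per browser tab session. Outside a browser
// (where sessionStorage doesn't exist) we fall back to a plain object so
// the sketch still runs.
var store = (typeof sessionStorage !== 'undefined')
  ? sessionStorage
  : (function () {
      var data = {};
      return {
        getItem: function (k) { return (k in data) ? data[k] : null; },
        setItem: function (k, v) { data[k] = String(v); }
      };
    })();

var _gaq = _gaq || [];
if (store.getItem('claimDomainTracked') === null) {
  _gaq.push(['_trackEvent', 'ClaimDomain', 'ConfirmationPage']);
  store.setItem('claimDomainTracked', '1');
}
```

A refresh of thanks.php in the same tab then skips the push entirely, on top of GA's own unique-event de-duplication.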
Based on my comment above and the research by @drew, it's a non-issue.
If you also want to track the goal, use a "head match" to do so.
http://support.google.com/googleanalytics/bin/answer.py?hl=en&answer=72285
I'm in the process of writing a RESTful API. For the most part, everything works out great but there are a few cases when I'm not dealing with a resource that things start to break down. While there are a million ways to solve the problem I'm facing, I'm looking for some feedback as to which would be the most ideal.
For simplicity, we'll say that the API is a timer.
A user can only have 1 active timer at a time.
The API has 2 functional endpoints, start and stop.
When the user starts the timer, they POST some data related to the timer, which creates a new timer as long as they don't already have one running.
Calling stop on the timer updates the timer to mark it inactive.
I currently have this setup as follows:
Start Timer:
POST /api/v1/timer
Body: [
    'thing1' => 'something',
    'thing2' => 'somethingelse'
]
Response: 204
Stop Timer:
PUT /api/v1/timer/stop
Body:
Response: 204
Since a user can only have 1 timer active, it didn't seem to make sense to return the timer id as you would in a more traditional CRUD call.
I've read some posts that suggest using the POST method on the stop call instead of a PUT. I suppose that makes sense too... this just really breaks down when you're not dealing with a traditional resource.
Of course, I could also rewrite it to return a timer resource, but to me that adds the overhead of the client then having to track the timer id when they want to stop (or delete) the active timer.
Any input here would be greatly appreciated.
Think about how you would implement this requirement on a website.
You would be looking at some web page, specific to the current user. There would be a link labeled start. You would get that link, and it would bring up a form that gives you the ability to override the default parameters associated with starting the timer.
When you submit the form, the browser would construct a request based on the HTML form-processing rules. Since this isn't a safe operation, the method would probably be POST, and the form data would be application/x-www-form-urlencoded into the message body.
Since changing the state of the timer would probably change the representation of the original page, that's likely where the form would tell you to submit the POST. A successful response to the POST request would tell the browser to invalidate its cached representation of the original page.
When you reload that page, the "start" link would be gone, and there would instead be a "stop" link. The operation of that link would be much the same: clicking the link takes you to a form, submitting the form posts back to the original page and invalidates the previous representation. When you reload the page, the timer is off and the start link is available again.
GET /DarthVaderYellowMerrigold
GET /DarthVaderYellowMerrigold/start
POST /DarthVaderYellowMerrigold
GET /DarthVaderYellowMerrigold
GET /DarthVaderYellowMerrigold/stop
POST /DarthVaderYellowMerrigold
GET /DarthVaderYellowMerrigold
There are various things you might do to clean this up (returning the new representation in response to a successful POST, for example, with the appropriate Content-Location headers, so that the client doesn't need to fetch the data), but the basic idea is sound.
Do that in a machine readable way, and you have yourself a REST API.
Doing that mostly means documenting how the machine is supposed to understand what each link is for. "To go to the start timer form, look for this link; to go to the stop timer form, look for that link".
You'll probably leave HTTP and URI alone, but it's reasonable to replace HTML with, for example, one of the hypermedia JSON types. Or to put the links into the HTML headers, rather than in the representation.
Of course, HTML has the immediate advantage that you can just walk the API "by hand" and make sure that everything works using your favorite desktop browser. Trade-offs abound.
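To make that flow concrete, here is a sketch of what a machine-readable representation could look like. The JSON shape, rel names, and URLs are all illustrative, not any standard.

```javascript
// Hypothetical representation returned by GET /DarthVaderYellowMerrigold
// while no timer is running; only the "start" transition is advertised.
var timerPage = {
  user: 'DarthVaderYellowMerrigold',
  timer: { active: false },
  links: [
    { rel: 'start-timer', href: '/DarthVaderYellowMerrigold/start' }
  ]
};

// The client follows links by rel name instead of hard-coding URLs.
function findLink(page, rel) {
  for (var i = 0; i < page.links.length; i++) {
    if (page.links[i].rel === rel) return page.links[i];
  }
  return null;
}
```

When the timer is running, the server would advertise a stop-timer link instead, so the client's navigation logic never changes.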
I've been designing a site that is used to collect data, but the person I'm designing for wants some form of redundancy just in case the window is closed or the system shuts down. Is there any way to take data that's been collected and write it to a MYSQL database if the user is disconnected for a certain amount of time, or if they shut the browser window/shut the system down without submitting the data?
The web is stateless and disconnected, so any data has to be (or rather, should be) persisted between page requests.
I assume you have a web page generated by PHP that contains a lengthy data-entry form, and you want to save the data in that form in the event the user closes their browser window. The solution is to use a client script that polls the server with the current data in the form, or at the very least hooks into the window close event.
Actual implementation is a task left up to the OP.
This can't be done just with a pure HTML page - if the user doesn't submit the form, your server doesn't know what they've typed.
However, you could put some JavaScript on the page that made an AJAX call every few seconds (or every few keystrokes or clicks). The idea would be for the JavaScript to invisibly submit the whole form to a PHP page which saved it into a sort of "holding area".
If the user then submitted the form, the holding area could be cleared out, but if they never did, then the data in the holding area would show you where they got to.
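The "holding area" idea can be sketched as a small change-detecting auto-saver. The transport is injected here so the idea is testable outside a browser; in a browser, `send` might POST to a hypothetical save-draft.php endpoint.

```javascript
// Returns a tick function meant to be called on an interval (e.g. from
// setInterval every few seconds). It forwards the serialized form data to
// `send` only when something has changed since the last save.
function makeAutoSaver(send) {
  var lastSent = null;
  return function tick(payload) {
    if (payload === lastSent) return false; // nothing new to save
    send(payload);
    lastSent = payload;
    return true;
  };
}
```

In a browser you would wire it up with something like `setInterval(function () { tick(serializeForm()); }, 5000);`, where `serializeForm` collects the current field values.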
The most common technique to partially prevent this scenario is a heartbeat function which fires via JavaScript at a constant interval and sends a request to the server, e.g. to show that the user is still logged on - or, in your case, maybe to submit the data already typed into the form fields, too.
Think of it as an AJAX-powered auto-save function!
You have to add some JavaScript to your code for this, but the commonly used JavaScript libraries, like jQuery or MooTools, are well documented and offer a lot of examples of how to do something like this.
I am working on a PHP project where the client wants multipage form data submitted. For instance, here is the process the form follows:
Create new entity.
Determine entity type.
Fill in entity-specific fields.
On each page, the form is POSTed to the current page. Validation is performed server-side, and if validation is successful the user is redirected to the next step.
I've determined that, in order to keep track of the user's progress, session data should be used. However, my concern is that, if the user opens two tabs and goes through the form in parallel, how can I keep track of what entity is being processed in each tab? Is this a scenario that can even be handled at all?
There are 2 different approaches.
If a user is allowed to fill in 2 forms at once, just add a unique identifier to each and keep track of them in the session.
If not, things become simpler: just keep track of the steps passed and show a warning if a previously passed step is submitted again.
This is a page flow issue. Do NOT use session to deal with it. You are correct that tabs or new windows would cause you trouble.
I don't really understand the point of this bit:
On each page, the form is POSTed to the current page. Validation is
performed server-side, and if validation is successful the user is
redirected to the next step.
Sounds like a "post-back", but why? POST the data from the page to a controller (or script) that does the necessary server-side validation. After validation, deal with the data how you need and determine what the next step should contain. Then send the input form for that next step down in the response. Repeat until you are finished.
I want to track the exit link of a logged-in user. I want to study their page navigation according to their demographics. I have all the details needed in a profile table.
I just need to know the exit link. I don't know how to do that. Maybe with PHP/MySQL/JavaScript.
And the exit links can be from AdSense.
Can you guys point me in the right direction?
FYI: Google Analytics and other sites like that can't help, because I need stats for an individual user by their user id, not for all of them combined.
Add an onclick handler for all your links that you consider "exit links".
In this onclick handler, send an AJAX request to log the exit link. Depending on the network speed etc. the request might or might not be sent. If you always want it to be sent, prevent the default handler of the link and redirect to the link's target manually after the AJAX request has finished. That's not a good user experience though if the network connection is slow as the link will not react immediately.
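A rough sketch of that handler follows. The /log-exit.php endpoint and the payload shape are assumptions; navigator.sendBeacon is used because, unlike a plain AJAX call, it is designed to survive the page being unloaded.

```javascript
// Build the log entry; kept as a separate function so it is easy to test.
function exitPayload(userId, href) {
  return JSON.stringify({ user: userId, href: href, ts: Date.now() });
}

// Browser wiring (skipped outside a browser): log any click on a link
// marked with the class "exit-link".
if (typeof document !== 'undefined') {
  document.addEventListener('click', function (e) {
    var a = e.target.closest ? e.target.closest('a.exit-link') : null;
    if (a && navigator.sendBeacon) {
      navigator.sendBeacon('/log-exit.php', exitPayload(window.currentUserId, a.href));
    }
  });
}
```

On the server, /log-exit.php would simply INSERT the user id, URL, and timestamp into a MySQL table keyed by user id, which gives you the per-user stats you're after.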
Can the unload Event be Used to Reliably fire ajax Request? might also be interesting for you.
A while back, online apps used to say, "do not click submit more than once." That's gone now, right? How do you guard against that in, say, PHP?
One solution I'm using involves putting a variable in the Session, so you cannot submit to a page more than once every 10 seconds. That way the database work will have completed so the normal checks can take place. Obviously, this feels like a hack and probably is.
Edit: Thanks everybody for the Javascript solution. That's fine, but it is a bit of work. 1) It's an input type=image and 2) The submit has to keep firing until the Spry stuff says it's okay. This edit is just me complaining, basically, since I imagine that after looking at the Spry stuff I'll be able to figure it out.
Edit: Not that anyone will be integrating with the Spry stuff, but here's my final code using Prototype for the document.getElementByid. Comments welcome!
function onSubmitClick() {
    var allValid = true;
    var queue = Spry.Widget.Form.onSubmitWidgetQueue;
    for (var i = 0; i < queue.length; i++) {
        if (!queue[i].validate()) {
            allValid = false;
            break;
        }
    }
    if (allValid) {
        $("theSubmitButton").disabled = true;
        $("form").submit();
    }
}
For some reason, the second form submit was necessary...
You should do both client- and server-side protections.
Client side - disable button e.g. by jquery as cletus has described.
Server side - put a token in the form. If there are two submissions with the same token, ignore the latter. Using this approach, you are protected against CSRF.
This is an excellent example of what jQuery is useful for (you can do it in any Javascript though). Add this code:
$("form").submit(function() {
$(":submit",this).attr("disabled", "disabled");
});
And it disables submit buttons once clicked once.
As others have noted, you can disable the button. I like server-side checks better, though - JS may be disabled, user might hit refresh (although if you're properly using POST that will generate a warning), etc.
You can add a timestamp to the form and track it in the session - require the POSTed timestamp be greater than the tracked one. That will prevent most double-posts without noticeably affecting the UI.
It's also important to note that PHP's default behaviour if it detects the user has "cancelled" the request (by closing the browser, pressing "stop", or perhaps pressing Submit a second time) is to stop executing the script. This is undesirable if you're doing some sort of lengthy transaction. More details here.
I would think the best option is to let the PHP script first set a flag in the session array to indicate that it is processing a form. That way a second request can be set to wait until the original request has completed (use a sleep call server side waiting for the flag to clear).
It is important not to interfere too much with the submit button and process, because the nature of the web is one of uncertainty; the user may want to click a second time if no answer has arrived, and you don't want to lose their data. That could occur if the first request is lost and you have disabled their only way to submit (and therefore store) the data.
One cleaner solution is to submit the request, and use javascript to display a message saying 'Processing', so that the user can see that something is happening, but they are not prevented from re-submitting the data.
One solution is to immediately disable the button on click using JavaScript. This obviously relies on JavaScript being enabled in the client browser.
The server side trick is better since it will catch race conditions between multiple browser windows if the user is editing the same record twice.
I've done a simple version of this with javascript when I was working with ASP.NET AJAX but it should work in any case where your button has an actual ID.
I take the following steps in the button onclick event:
Disable the button that triggered the onclick event
Store the button id in a magic variable closureId that I can reference later via closure
Use the setTimeout function to execute a dynamically defined callback after the specified number of milliseconds (5000ms = 5 seconds)
In the callback function I can reference the magic closureId and re-enable the button after the timeout
Below is a simple HTML button you can throw into a test.html file by itself to play with:
<input id="btnTest" type="button" value="test" onclick="var closureId=this.id; this.disabled = true; setTimeout(function(){document.getElementById(closureId).disabled = false;}, 5000);">
You should still be doing server-side checks as others have suggested but in my own instance the submit also did a server side validation of business logic. The user would submit their form and then have to wait to see if the business logic on the server-side would actually allow that entry or not. It also helped out during development when I could take my server down and still have the form submission fail decently.
Note I call that closureId variable 'magic' because I didn't have a firm grasp of how it works--
I just realized that calling this.id inside the callback didn't work because, by the time setTimeout runs it, this no longer refers to the button (it points at the global window object) and so has no useful DOM id.
I couldn't find any other way to get a reference to the original element from inside the callback, but lexical scoping (a closure) lets me still access the closureId variable as it was defined at the time of the original button click.
Feel free to fix/comment if you know of a better way!
I usually disable the submit button after it's pressed. Session throttling is good against direct attacks, but I do not want to mix interface logic where it doesn't belong.