I'm using Laravel 4 to build my one-page app, and I need to implement a session timeout so the user is redirected as soon as the session expires. I've been trying to check the $_SESSION/Session::exists() data through polling requests, but every time I hit a route the session is refreshed.
How can I poll for session info in Laravel effectively? Do I need to do something more complicated, like keeping an open connection (WebSockets/long polling)?
I feel like this should be an out-of-the-box feature, but strangely no one seems to implement it. Is that because most implementations are page-to-page instead of one-page + AJAX?
That's a fun problem, and you should use middleware for it. If you're on Laravel 4.1 or above, Laravel uses StackPHP.
Check this link from fideloper; it might be useful.
Just set/update a session variable (defined by you) in the middleware, and create a route in your API that doesn't use the middleware to query that variable.
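A minimal sketch of that idea in Laravel 4 route-filter syntax (which plays the middleware role here; the filter name, session key, and routes are my own, not anything Laravel ships with):

Route::filter('touch-activity', function()
{
    // every real page hit refreshes our own timestamp
    Session::put('lastActivity', time());
});

Route::group(array('before' => 'touch-activity'), function()
{
    // ... your normal application routes ...
});

// The polling route deliberately skips the filter, so checking
// the status does not count as activity:
Route::get('session-status', function()
{
    $timeout = 15 * 60; // assumed 15-minute timeout
    $last = Session::get('lastActivity', 0);
    return Response::json(array(
        'expired' => (time() - $last) > $timeout,
    ));
});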
As far as I know it's not in Laravel out of the box, but it's actually easy to implement. Just an example: you could store the time the user logged in in a session variable with Session::put('logintime', time()); and then check whether a timeout has occurred.
Example (with a 15 min timeout):
function isTimeout() {
    // timed out if we never recorded a login time, or if
    // more than 15 minutes have passed since we did
    return !Session::has('logintime') || Session::get('logintime') + (15 * 60) <= time();
}
You can then use it in the response to an AJAX request, as you need.
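A sketch of such an endpoint (the route name is my own):

// polled by the client; redirect in JavaScript when 'timeout' is true
Route::get('check-timeout', function()
{
    return Response::json(array('timeout' => isTimeout()));
});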
This may be a long shot given knowledge boundaries for some, but I do the following for real-time data displays in my applications, and it's worth the effort of getting started with Node.js (it's easier than people think, since full-stack PHP developers are already familiar with JS; I highly recommend going into the MEAN stack).
I write the core functionality in a PHP framework. For anything I need to display or have the user interact with in real time, instead of polling or using PHP with WebSockets, I introduce an extra Node.js/nginx server and serve the data using socket.io. This keeps connections to your DB to a minimum (hence avoiding any max-connection problems in MySQL) and is super scalable: instead of polling, it uses the observer pattern, keeping all client connections in an array and pushing the new data whenever the observer sees changes in your data persistence layer, rather than keeping your server busy with gazillions of naggy clients polling your DB all the time.
If you haven't done so, I also recommend dropping Apache for your PHP application servers and looking into nginx with PHP-FPM.
I'd very much like second thoughts on this approach I'm implementing to handle very long processes in a web application.
The problem
I have a web application, all written in javascript, which communicates with the server via an API. This application has got some "bulk actions" that take a lot of time to execute. I want to execute them in a safe way, making sure the server won't time out, and with a rich feedback to the user, so he/she knows what is going on.
The usual approach
From what I can see in my research, the recommended method is to fire a background process on the server and have it write its progress somewhere, so you can make requests to check on it and give feedback to the user. Since I'm using PHP on the back-end, the approach would be more or less what is described here: http://humblecontributions.blogspot.com.br/2012/12/how-to-run-php-process-in-background.html
Adding a few requisites
Since I'm developing an open-source project (a WordPress plugin), I want it to work in a variety of situations and environments. I don't want to add server-side requirements, and as far as I know the background-process approach may not work on several shared hosting solutions.
I want it to work out of the box on (almost) any server with typical WordPress support, even if it ends up being a bit slower.
My approach
The idea is to break the process up so that it runs incrementally across many small requests.
So the first time the browser sends a request to run the process, only a small step of it runs, and useful information is returned to give the user some feedback. Then the browser makes another request, and this repeats until the server reports that the process is done.
To do this, I would store this object in the session, so the first request gives me an id, and the following requests send that id to the server so it manipulates the same object.
Here is a conceptual example:
class LongProcess {
    public $id;
    public $step;
    public $total;

    function __construct() {
        // register this object in the session so follow-up
        // requests can pick it up again by its id
        $this->id = uniqid();
        $_SESSION[$this->id] = $this;
        $this->step = 1;
        $this->total = 100;
    }

    function run() {
        // do stuff based on the step you are in
        $this->step = $this->step + 10;
        if ($this->step >= $this->total)
            return -1; // signal that the process is finished
        return $this->step;
    }
}
function ajax_callback() {
    session_start();
    if (empty($_POST['id'])) {
        // first request: create the process and register it
        $object = new LongProcess();
    } else {
        // follow-up request: fetch the same object back
        $object = $_SESSION[$_POST['id']];
    }
    $step = $object->run();
    echo json_encode([
        'id' => $object->id,
        'step' => $step,
        'total' => $object->total
    ]);
}
With this, my client can send requests recursively and update the user's feedback as the responses are received.
function recursively_ajax(session_id)
{
    $.ajax({
        type: "POST",
        url: "xxx-ajax.php",
        dataType: "json",
        data: {
            action: 'bulk_edit',
            id: session_id
        },
        success: function(data)
        {
            updateFeedback(data);
            // each request is fired only from the previous one's success
            // callback, so they never overlap and no blocking is needed
            if (data.step != -1) {
                recursively_ajax(data.id);
            } else {
                updateFeedback('finish');
            }
        }
    });
}

$('#button').click(function() {
    recursively_ajax();
});
Of course this is just a proof of concept; I'm not even using jQuery in the actual code. It's just to express the idea.
Note that the object stored in the session should be very lightweight. Any actual data being processed should be stored in the database or filesystem, with the object holding only a reference to it so it knows where to look for stuff.
One typical case would be processing a large CSV file. The file would be stored in the filesystem, and the object would hold a pointer to the last processed line, so it knows where to resume on the next request.
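A sketch of that CSV case (the function is illustrative, not from the question): the session object would keep only the byte offset, while the data itself stays on disk.

function process_csv_chunk($path, $offset, $rows = 100)
{
    $handle = fopen($path, 'r');
    fseek($handle, $offset); // resume where the previous request stopped

    for ($i = 0; $i < $rows; $i++) {
        $line = fgetcsv($handle);
        if ($line === false) { // end of file: the whole process is done
            fclose($handle);
            return -1;
        }
        // ... process one row ...
    }

    $offset = ftell($handle); // remember the new position for the next request
    fclose($handle);
    return $offset;
}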
The object may also return a more verbose log, describing everything that was done and reporting errors, so the user has complete knowledge of what has happened.
The interface I have in mind is a progress bar with a "see details" button that opens a textarea with this detailed log.
Does it make sense?
So now I ask: how does it look? Is it a viable approach?
Is there a better way to do this and assure it will work in very limited servers?
Your approach has several disadvantages:
Your heavy requests may block other requests. There is usually a limit on how many concurrent PHP processes can handle web requests. If the limit is 10 and all slots are taken by your heavy requests, your website will not work until some of those requests complete, releasing slots for other, lightweight requests.
You (probably) will not be able to estimate how long one step will take. Depending on server load it could take 5 or 50 seconds, and 50 seconds will exceed the execution time limit on most shared hostings.
The task is controlled by the client: any interruption on the client side (network problems, closing the browser tab) will interrupt the task.
Depending on the session backend, using the session to store the current state may cause race-condition bugs: a concurrent request from the same client may overwrite changes the background task made to the session. By default PHP locks sessions, so this should not be the case, but if someone uses an alternative session backend (DB, Redis) without locking, it will result in serious and hard-to-debug bugs.
There is an obvious trade-off here. For small websites where simple installation and configuration are a priority, your approach is OK. In any other case I would stick to a simple cron-based queue for running tasks in the background, and use AJAX requests only to retrieve the task's current status. So far I have not seen hosting without cron support, and adding a task to cron should not be that hard for the end user (with proper documentation).
In both cases I would not use the session as storage. Save the task and its status in a database and use some locking scheme to ensure that only one process can modify a given task's data. This will be much more robust and flexible than using the session.
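A rough sketch of that cron-based alternative (the tasks table, its columns, and the connection details are assumptions for illustration). The atomic UPDATE acts as the lock: two overlapping cron runs can never claim the same task.

// run from cron, e.g. every minute
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// claim at most one pending task, atomically
$token = uniqid('worker-', true);
$pdo->prepare("UPDATE tasks SET status = 'running', claimed_by = ?
               WHERE status = 'pending' ORDER BY id LIMIT 1")
    ->execute(array($token));

$stmt = $pdo->prepare("SELECT * FROM tasks WHERE claimed_by = ?");
$stmt->execute(array($token));

if ($task = $stmt->fetch(PDO::FETCH_ASSOC)) {
    // ... do the work, updating a progress column as you go so the
    // AJAX status endpoint has something to report ...
    $pdo->prepare("UPDATE tasks SET status = 'done' WHERE id = ?")
        ->execute(array($task['id']));
}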
Thanks for all the input. I just want to document here some very good answers I got.
Some WordPress plugins, notably WooCommerce, have incorporated code from the "WP Background Processing" library, which is no longer maintained but implements the cron approach with some important improvements. See this blog post:
https://deliciousbrains.com/background-processing-wordpress/
The actual library lives here: https://github.com/A5hleyRich/wp-background-processing
Although this is a WordPress specific library, I think the approach is valid for any situation.
For WordPress there is also a library called Action Scheduler, which not only runs processes in the background but also lets you schedule them. It's worth a look:
https://github.com/Prospress/action-scheduler
I've done a fair bit of PHP over the years but I'm currently learning ColdFusion and have come across the Application.cfc file.
Basically this is a class that's created once (with an expiry date). The class handles incoming users and can set session variables and static in-memory objects, such as queries. For example, I can load site-wide statistical data once, in another thread, from Application.cfc. Something that would usually take a few seconds on every page makes the whole site quick and responsive.
Another example (just for clarification):
If I initialize an incremental variable to 0 in OnApplicationStart, it can be incremented on each user request (across multiple users), or in OnSessionStart, without any need to contact the SQL database, since it sits permanently in the server's memory under this application.
I was wondering: does PHP have a similar file or object, something that can be created once and used to store temporary variables?
The PHP runtime itself initializes the environment from scratch on every HTTP request, so it has no built-in mechanism to do this. Of course you can serialize anything into common storage and then read it back and deserialize on each request, but this is not the same as keeping it in-memory.
This type of functionality in PHP is achieved by outsourcing to other programs; memcached and APC are two of the most commonly used programs that offer such services, and both come with PHP extensions that simplify working with them.
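For instance, the request counter from the ColdFusion example could look roughly like this with APCu (assuming the APCu extension, the successor to APC's user cache; values live in the server's memory across requests but are lost on restart):

apcu_add('request_count', 0);       // create the slot only if it's missing (atomic)
$count = apcu_inc('request_count'); // increment without touching the database
echo "Requests served since startup: $count";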
This will be a newbie question, but I'm learning PHP for one sole purpose (at the moment): to implement a solution. Everything I've learned about PHP was learned in the last 18 hours.
The goal is to add indirection to my JavaScript GET requests to allow cross-domain access to another website. I also don't want to hammer said website, so I want safeguards in place. I can't rely on them being in JavaScript, because that can't account for other peers sending their requests.
So right now I have the following makeshift code, without any throttling measures:
<?php
$expires = 15; // cache lifetime, in seconds

if (empty($_GET["target"]))
    exit();

$file = "cache/" . md5($_GET["target"]);

if (empty($_GET["cache"])) {
    // read mode: serve the cached copy if it exists and is still fresh;
    // otherwise the client is expected to fetch the target itself and
    // post the result back with cache=1&data=...
    if (file_exists($file) && time() - filemtime($file) <= $expires)
        echo file_get_contents($file);
} else if (!empty($_GET["data"])) {
    // write mode: store the data the client fetched from the target
    file_put_contents($file, $_GET["data"]);
}
?>
It works perfectly, as far as I can tell (it doesn't account for the improbable checksum clash). Now what I want to know, and what my Google searches refuse to procure for me, is how PHP actually launches and when it ends.
Obviously if I were running my own web server I'd have a bit more insight into this: I'm not, and I have no shell access either.
Basically I'm trying to figure out whether I can control in code when the script ends, and whether every GET request to the PHP file launches a new instance of the script, or whether it can 'wake up' the same script. The reason is that I want to track whether it already sent a request to 'target' within the last n milliseconds, and it seems a bit wasteful to dump the value to a save file and then recover it, over and over, for something that doesn't need to be kept in memory for very long.
Every HTTP request starts a new instance of the interpreter; it's basically an implementation detail whether this is a whole new process, or a reuse of an existing one.
This generally pushes you towards good, simple, scalable designs: you can run multiple server processes and threads, and you won't get varying behaviour depending on whether the request goes back to the same instance or not.
Loading a recently-touched file will be very fast on Linux, since it will come right from the cache. Don't worry about it.
Do worry about the fact that building a file path directly from request parameters is a serious security hole: people can send data=../../../etc/passwd and so on. Read http://www.php.net/manual/en/security.variables.php and so on. (In this particular example you're hashing the input before putting it in the path, so it's not a practical problem, but it is something to watch for.)
More generally, if you want to hold a cache across multiple requests the typical thing these days is to use memcached.
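For the throttling part of the question, a sketch with the Memcached extension (assumes a memcached server on localhost; note that expirations are whole seconds, so sub-second throttling would need a stored timestamp instead):

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$key = 'throttle:' . md5($_GET['target']);
if (!$mc->add($key, 1, 2)) { // add() fails if the key already exists
    header('HTTP/1.1 429 Too Many Requests'); // hit this target within the last 2s
    exit();
}
// ... safe to contact the target ...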
PHP works on a per-request basis, i.e. each request for a PHP file is seen as a new instance, and each instance ends, generally, when the connection is closed. You can, however, use sessions to save data between connections for a specific user.
For basic use of sessions look into:
session_start()
$_SESSION
session_destroy()
When I first met PHP, I was amazed by its shared-nothing architecture. I was once on a project whose scalability suffered from sharing data among different HTTP requests.
However, as I proceeded with my PHP learning, I found that PHP has sessions. This seems to conflict with the idea of sharing nothing.
So, was the PHP session just invented as a counterpart to ASP/ASP.NET/J2EE technology? Should highly scalable web sites use PHP sessions?
The default PHP model locks sessions on a per-user basis. That means that if user A is loading pages 1 and 2, and user B is loading page 3, the only delay that will occur is that page 2 will have to wait until page 1 is done - page 3 will still load independently of pages 1 and 2 because there is nothing shared for separate users; only within a given session.
So it's basically a half-and-half solution that works out okay in the end - most users aren't loading multiple pages simultaneously; thus session lock delays are typically low. As far as requests from different users are concerned, there's still nothing shared.
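One practical consequence: if a page does slow work after it has finished with the session, it can release the per-user lock early so the same user's other requests stop queueing. A sketch (the session key is illustrative):

session_start();
$userId = $_SESSION['user_id']; // read whatever you need first

session_write_close(); // release the per-user lock now

// ... long-running work; this user's parallel requests
// are no longer blocked behind this one ...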
PHP allows you to write your own session handler, so you can build in your own semantics using the standard hooks. Or, if you prefer, you can use the built-in functionality to generate the session id and deal with the browser side of things, then write your own code to store and fetch the session data (e.g. if you only wanted the login page, and not other pages, to lock the session data during processing; that's a bit tricky, though not impossible, using the standard hooks).
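A minimal sketch of such a handler (PHP 5.4+ SessionHandlerInterface; the file-based storage is just a placeholder for whatever backend and semantics you want):

class MySessionHandler implements SessionHandlerInterface
{
    private $path = '/tmp/sessions';

    public function open($savePath, $name) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        $file = $this->path . '/' . $id;
        return file_exists($file) ? file_get_contents($file) : '';
    }

    public function write($id, $data)
    {
        return file_put_contents($this->path . '/' . $id, $data) !== false;
    }

    public function destroy($id)
    {
        @unlink($this->path . '/' . $id);
        return true;
    }

    public function gc($maxlifetime)
    {
        foreach (glob($this->path . '/*') as $file) {
            if (filemtime($file) + $maxlifetime < time())
                @unlink($file);
        }
        return true;
    }
}

session_set_save_handler(new MySessionHandler(), true);
session_start();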
I don't know enough about the Microsoft architecture for session handling to comment on it, but there's a huge difference in the way PHP handles sessions, and in what actually gets stored in the session, compared with J2EE.
Not using sessions in most of your pages will make the application tend to perform a lot faster and potentially scale more easily - but you could say that about any data used by the application.
C.
I have been poking around with OOP in PHP and I noticed something: objects are re-instantiated each time the page is refreshed. The problem is that I want the object to keep certain information in class variables for the whole time someone is on the website.
Is there some sort of way to keep an object alive the whole time that someone is surfing on the website? What alternatives are there to my problem?
It would be really helpful to have an example too!
You can use sessions to keep data associated with one user across different pages (quoting the manual):
Session support in PHP consists of a way to preserve certain data across subsequent accesses.
See the Session Handling section of the manual for more information about sessions.
PHP isn't stateful. Every page load is a one time event. You can persist data with sessions, or by storing information in a database.
A PHP script has to exit before Apache can serve the page, so if you really want to do that, one thing you can do is serialize and store all the objects you want to persist, and use session cookies to keep track of the users.
PHP isn't stateful; every request is a new process on the server.
Your best bet is to use session data, handing it to your objects when you instantiate them. Have the constructors pull the data they need out of the session, and you'll essentially have the statefulness you need.
You can access the session using
$_SESSION['stuff'] = $data;
and then use your objects like
$x = new DataStore($_SESSION['stuff']);
If there's data in the session, the object populates itself from that data; otherwise it falls back to its standard initialization.
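A sketch of what that DataStore might look like (the class is hypothetical, so this is just one possible reading of it):

class DataStore
{
    public $data;

    public function __construct($saved = null)
    {
        // restore saved state from the session if we were given any,
        // otherwise fall back to the standard initialization
        $this->data = ($saved !== null) ? $saved : array();
    }

    public function save()
    {
        $_SESSION['stuff'] = $this->data; // written out when the script ends
    }
}

session_start();
$x = new DataStore(isset($_SESSION['stuff']) ? $_SESSION['stuff'] : null);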
Even though approaches like serializing objects and then deserializing them are useful, you first have to understand why your objects "disappear".
HTTP, the protocol used to retrieve pages and other resources from web servers, is stateless. That basically means one request knows nothing about another, even when both come from the same user. Think of it this way: when you request your PHP page, the script runs, and after it finishes Apache sends the result to you. When you request the page again, the same thing happens as if it were the very first time. It's stateless.
There are techniques to keep state between requests (to make it not forget your objects), and they involve things like cookies or URL rewriting. But you have to keep the stateless nature of HTTP (and thus of your PHP script) in mind when developing web applications.
Sessions are good; I use them to hold object state in some of my PHP programming.
Or a better solution would be to use Flex so you don't have to worry about the stateless HTTP protocol...