I have two servers which host two identical Laravel apps; let's say Server One and Server Two. And there is a load balancer; let's call this the LB server.
I set this up on Laravel Forge, but when I point the domain to the LB, I get random 404s for the CSS. I use Laravel Mix and compile the assets during deployment. Since the two servers get separate deployments, the CSS and JS version hashes differ between them.
What happens is that if I call the domain and keep refreshing, I get a 404 for the CSS, since the LB is doing round-robin load balancing.
The problem is that when I call the domain name, the LB serves the page from Server One. After I keep refreshing, the LB serves the page from Server Two, but at that point the HTML is still referencing Server One's versioned CSS.
How can I fix this?
Notes: I know I should put my CSS/JS/images on S3 or a CDN, but I can't use those options for now. I also don't want to put my compiled CSS under git versioning.
You should change your deployment: generate the files only once and sync them to the servers, instead of generating them on each production server separately (with rsync, for example).
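A minimal sketch of that as a post-build step, assuming two app servers reachable over SSH and the default Forge document root (all IPs and paths here are illustrative):

$servers = array('10.0.0.1', '10.0.0.2');    // app servers behind the LB (assumed)
$docroot = '/home/forge/example.com/public'; // adjust to your deploy path

// Build once (e.g. npm run production), then push the versioned assets
// and mix-manifest.json so every node serves identical file names/hashes.
foreach ($servers as $ip) {
    passthru("rsync -az public/css public/js public/mix-manifest.json forge@{$ip}:{$docroot}/");
}

Once both servers share the same mix-manifest.json, it no longer matters which backend the LB picks for the asset requests.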
Another (not elegant) way would be to use sticky sessions: the LB will set a cookie and always route a user to the same backend afterwards (see your LB's docs).
Related
I am trying to configure a load-balanced environment using Yii 1.1.14 applications, but I seem to be having the problem where Yii does not keep a user logged in when the load balancer switches to another node. Most of the time, when logging in, it will ask the user to log in twice, because it only logs them in on one node and then loads the page from another. Otherwise, it will ask the user to log in again half-way through browsing.
The application is using DB sessions, and I can see that the expiry time is being updated in the database. Even when it asks them to log in again straight after they have already logged in, the session expiry time is updated in the database. Does Yii do anything server-dependent with the sessions?
I've searched around for hours but have been unable to find much on this topic, and I wonder if anyone else has come across this problem.
On the server side, I am using Nginx with PHP-FPM and Amazon's ELB as the load balancer. The workaround (as a last resort) is to use sticky sessions on the load balancer, but that is not ideal if a node goes offline and forces the user onto an alternative node.
Please let me know if I need to clarify anything better.
The issue was that the base path which was used to generate the application ID, prefixed to the authentication information in the session, did not match on each server. Amazon OpsWorks was deploying the code to the servers using the identical symlinked path, but the real path returned by PHP differed due to versioning and symlinking.
For example, the symlink path on both servers was '/app/current'. However, the actual path on one server was '/app/releases/2014010700' and the other was '/app/releases/2014010701', which was generating a different hash and therefore not working with the session.
Changing the base path in my configuration file to use the symlink path fixed the problem; before, it was using dirname(), which returned the real path of the symlinked contents. I also had to remove the realpath() call in setBasePath in the Yii framework.
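For anyone wanting the shape of that change, a sketch of the relevant piece of the Yii 1.x main config (paths are illustrative; Yii derives the default application ID from a CRC of the base path plus the app name, which is why per-release real paths broke the sessions):

return array(
    // Before: resolved through the symlink to the per-release real path
    // (/app/releases/2014010700), which differed on each node.
    // 'basePath' => dirname(__FILE__) . DIRECTORY_SEPARATOR . '..',

    // After: hard-code the stable symlinked path so every node computes
    // the same application ID.
    'basePath' => '/app/current/protected',
    'name'     => 'My Application',
    // ...
);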
The modifications I made to the Yii framework are quite specific for my issue, but for anyone else experiencing a similar issue with multiple nodes, I would double check to ensure each node contains the application in the exact same path.
Thank you to the following article: http://www.yiiframework.com/forum/index.php/topic/19574-multi-server-authentication-failure-with-db-sessions
Thought I'd answered this before, but took a bit to find my answer:
Yii session do not work in multi server
Short version: if you have Suhosin enabled, it's quite painful. Turn it off and things work much better. But yes, the answer is you can do ELB load balancing with Yii sessions without needing sticky sessions.
I have been working on a Laravel 4 site for a while now, and the company just put it behind a load balancer. Now when I try to log in, it basically just refreshes the page. I tried using fideloper's proxy package at https://github.com/fideloper/proxy but see no change. I even opened it up to allow all IP addresses by setting proxies => '*'. I need some help understanding what needs to be done to get Laravel to work behind a load balancer, especially with sessions. Please note that I am using the database Laravel session driver.
The load balancer is a KEMP LM-3600.
Thank you to everyone for the useful information you provided. After further testing, I found that the reason this wasn't working is that we force https through the load balancer but allow http when not going through it. The login form was actually posting to http instead of https. The form still posted, but the session data never made it back to the client. Changing the form to post to https fixed the issue.
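In Laravel 4, the easiest way to pin the form to https is to generate a secure URL for its action. A sketch, assuming the login form posts to a 'login' route (Blade; adjust names to your app):

{{ Form::open(array('url' => URL::secure('login'))) }}
    {{ Form::text('email') }}
    {{ Form::password('password') }}
    {{ Form::submit('Log in') }}
{{ Form::close() }}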
We use a load balancer where I work, and I ran into similar problems accessing cPanel dashboards, where the page would just reload every time I tried accessing a section and log me off, because my IP address appeared to change from their side. The solution was to find which port cPanel was using and configure the load balancer to bind that port to one WAN. Sorry, I am not familiar with Laravel; if it is just using port 80, this might not be a solution.
Note that the session handling in Laravel 4 uses Symfony 2 code, which lacks proper session locking in all self-coded handlers that do not use PHP's built-in session save handlers like "files", "memcached", etc.
This will create errors when used in a web application with parallel requests (like Ajax), but that would occur regardless of any load balancer.
You really should do some more investigation. HTTP load balancers do have some impact on the information flow, but the only effect on a PHP application would be that a single user surfing the site will randomly send requests to any one of the connected servers, and not always to the same one.
Do you also use any fancy database setup, like master-slave replication? That would more likely affect sessions: if writing is done only on the master, reading only on a slave, and the slave lags behind the master in applying the last write, things break. Such a configuration is not recommended as session storage. I'd rather use Memcached instead; the PHP-provided memcached session save handler implements proper locking as well...
Using fideloper's proxy does not make sense. A load balancer should be transparent to the web server, i.e. it should not act as a reverse proxy unless configured to do so.
Use a shared resource to store the session data. Local files and a per-server memcached will surely not work; a shared DB should be OK. That's what I'm using on a load-balanced setup with a common database.
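In Laravel 4, that is a one-line driver change; a sketch of the relevant config (create the sessions table with php artisan session:table and then migrate):

// app/config/session.php
'driver' => 'database',
'table'  => 'sessions',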
I have been using TrustedProxy for a while now and it's working fine.
The main issue with load balancers is proxy routing. The following is from the README file, and it's what I was looking for:
If your site sits behind a load balancer, gateway cache or other "reverse proxy", each web request has the potential to appear to always come from that proxy, rather than the client actually making requests on your site.
To fix that, this package allows you to take advantage of Symfony's knowledge of proxies. See below for more explanation on the topic of "trusted proxies".
I have a CodeIgniter app in a git repo. Currently I deploy a new installation on my server for each new client I sign up.
Each new client has its own database and its own files in a folder such as: /var/www/vhosts/example.com/client/client1/
Each client gets a subdomain that I map through Plesk: client1.example.com.
My question:
Is it better, performance-wise, to have a single app installation manage all of these client installations with different database.php config files,
i.e. /var/www/vhosts/example.com/httpdocs/*
and use an htaccess redirect for the subdomains to remap the URI to different configs (sketched below)?
Or is it better to have a separate installation for each client, as listed above?
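For context, the single-install variant I have in mind would look roughly like this (a sketch; credentials and naming scheme are illustrative):

// application/config/database.php -- pick the per-client database
// from the requesting subdomain.
$parts  = explode('.', $_SERVER['HTTP_HOST']);
$client = $parts[0];                        // e.g. "client1"

$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'app_user';    // placeholder
$db['default']['password'] = 'secret';      // placeholder
$db['default']['database'] = 'app_' . $client;
$db['default']['dbdriver'] = 'mysql';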
Server Information:
PHP 5.3
MySQL 5.x
CodeIgniter 2.1
WireDesignz HMVC
Sparks (various)
CentOS DV4 from MediaTemple
I'd say keep them apart.
Each client will have their own set of requirements. While the server won't change that much, your code base will. It will become hard over time NOT to break something for one client while building something for another.
As they will be separate projects, you'll be able to move larger sites away from the smaller ones. But this depends on what type of traffic your clients receive.
And having each application in its own repository (you are using version control, right?) would make your world that much more organized.
Performance-wise, a smaller application designed for one client, running only what they want, will probably outperform one monolithic site serving all your clients any day.
Hope I understood that correctly; it's my personal opinion.
I am interested in a scenario where web servers serving a PHP application are set up behind a load balancer.
There will be multiple web servers with APC behind the load balancer. All requests have to go through the load balancer, which then sends them to one of the web servers to process.
I understand that memcached should be used for distributed caching, but I think having each machine's APC cache hold things like application configuration and other objects that will NOT differ across any of the servers would yield even better performance.
There is also an administrator area for this application. It is also accessed via the load balancer (for example, site.com/admin). In a case like this, how can I call apc_clear_cache to clear the APC object cache on ALL servers?
Externally, your network has a public IP that routes all requests to your load balancer, which distributes load round-robin. So from outside you cannot clear the cache on each server one at a time, because you don't know which one will serve any given request. Within your network, however, each machine has its own internal IP and can be called directly. Knowing this, you can do some funny/weird things that do work externally.
A solution I like is to be able to hit a single URL and get everything done, such as http://www.mywebsite/clearcache.php or something like that. If you like that as well, read on. Remember, you can have this authenticated so only your admin can hit it, or protect it however you like.
You could create logic where one external request clears the cache on all servers: whichever server receives the request will have the same logic to talk to all servers and clear their caches. This sounds weird and a bit Frankenstein, but here goes the logic, assuming we have three servers with internal IPs 10.232.12.1, 10.232.12.2 and 10.232.12.3:
1) All servers have two files called "initiate_clear_cache.php" and "clear_cache.php", identical copies on every server.
2) "initiate_clear_cache.php" does a file_get_contents for each machine in the network, calling "clear_cache.php" on every one of them, itself included,
for example:
file_get_contents('http://10.232.12.1/clear_cache.php');
file_get_contents('http://10.232.12.2/clear_cache.php');
file_get_contents('http://10.232.12.3/clear_cache.php');
3) The file called "clear_cache.php" does the actual cache clearing for its respective machine.
4) You only need to make a single request now, such as http://www.mywebsite/initiate_clear_cache.php, and you are done.
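A minimal sketch of the two files, assuming classic APC (remember to protect both endpoints):

// clear_cache.php -- clears the APC cache on this machine only
apc_clear_cache();        // opcode cache
apc_clear_cache('user');  // user/object cache (data stored via apc_store)
echo 'cleared ' . php_uname('n');

// initiate_clear_cache.php -- fans the request out to every node,
// including whichever one happens to be running it.
$nodes = array('10.232.12.1', '10.232.12.2', '10.232.12.3');
foreach ($nodes as $ip) {
    echo file_get_contents("http://{$ip}/clear_cache.php"), "\n";
}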
Let me know if this works for you. I've done something similar in .NET and Node.js but haven't tried it in PHP yet; I'm sure the concept is the same. :)
My webserver filesystem is equivalent to the following:
/master-domain.tld
/master-domain.tld/assets/js/foo.js
/master-domain.tld/assets/css/bar.css
/sub1.master-domain.tld
/sub1.master-domain.tld/assets/js/foo.js
/sub2.master-domain.tld
/sub2.master-domain.tld/assets/css/bar.css
...
What I'd like to know is how to serve my common static assets (hosted on the master domain) to my subdomains-- reason being that I then only have to maintain a single set of 'master' assets, rather than update/copy any files to all other subdomains each time I edit them. Obviously this can be achieved using absolute URLs, but my aim is to avoid these so that my local/dev system needn't make any remote calls while I'm designing/debugging.
Currently, I have a mod_rewrite + symbolic link + php script combo set up for each subdomain, linking any calls to non-existent local assets to the master assets. e.g. calling "/sub1.master-domain.tld/assets/css/bar.css" will retrieve the bar.css file hosted on the master domain since a local version of bar.css does not exist. Furthermore, calling "/sub1.master-domain.tld/assets/js/foo.js" would serve the local version of foo.js, since it does exist.
But my current method seems to be hindering the performance of my page loads, and I wonder if there is a better (still modular) approach to solving this problem.
Any tips? Am I going about this in completely the wrong way?
Symbolic links should be all that's required if they're all on the same server. This should cause no performance hits at all. E.g.
/sub1.master-domain.tld/assets -> /master-domain.tld/assets
If your subdomains are served from multiple servers, I would set up a mod_rewrite rule with 302 redirects to the complete URL on the master domain. On that master domain server I would set mod_expires so the assets are cached, avoiding most subsequent requests to the same "missing" assets on the subdomains.
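If you'd rather keep the PHP-script fallback, a minimal sketch of the idea (mod_rewrite routes requests for missing local assets to this script; the domain and paths are illustrative):

// asset_fallback.php -- serve a local override if it exists,
// otherwise 302 to the master domain's copy.
$path = isset($_GET['path']) ? $_GET['path'] : '';   // e.g. "css/bar.css"
if (strpos($path, '..') !== false) {
    header('HTTP/1.1 400 Bad Request');              // block path traversal
    exit;
}
$local = dirname(__FILE__) . '/assets/' . $path;
if (is_file($local)) {
    $types = array('css' => 'text/css', 'js' => 'application/javascript');
    $ext   = strtolower(pathinfo($path, PATHINFO_EXTENSION));
    header('Content-Type: ' . (isset($types[$ext]) ? $types[$ext] : 'application/octet-stream'));
    readfile($local);
} else {
    header('Location: http://master-domain.tld/assets/' . $path, true, 302);
}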
If they are on the same machine, why not point different virtual host aliases to the same document root?
I'd use a reserved static. subdomain for this purpose and point all asset requests at that one.