I am running a Symfony 2.8.6 application on nginx/php-fpm.
Multiple domains resolve to this server, and what I want to do is change the database (RDB) configuration according to which domain was used to access the application.
So my nginx.conf has lines like fastcgi_param SYMFONY__SOME__PARAM $host, but I have a problem.
The injected parameter is cached, so it does not work as intended.
For example, there are two domains a.example.com and b.example.com, and they point to my server.
I want the application to connect to a different MySQL server depending on which domain it is accessed through, but it ignores the domain and always connects to the same server.
What I've confirmed:
Nginx passes the variable correctly.
The output of var_dump($_SERVER['SYMFONY__SOME__PARAM']) changes as expected.
The parameter is stored in app/cache/prod/appProdProjectContainer.php
As far as I can see there are two options: disabling the configuration cache entirely, or disabling the caching of environment variables.
I think the latter option is preferable, but I don't even know how to disable the cache, whether entirely or partially.
Using dynamic environment variables in service definitions is not possible in Symfony (see symfony/symfony#16403 (comment) for why). You can try Incenteev/DynamicParametersBundle, but I have no experience with it.
How about changing the cache directory for each environment?
fastcgi_param SYMFONY__CACHE_DIR /path/to/cache
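For example, a rough nginx sketch of that idea, with one server block per domain (server names, paths, and the PHP-FPM socket are placeholders). Note this only takes effect if AppKernel::getCacheDir() is overridden to read the variable, since Symfony does not use a SYMFONY__CACHE_DIR parameter for the cache path on its own:

server {
    server_name a.example.com;
    root /var/www/app/web;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SYMFONY__SOME__PARAM $host;
        # separate cache dir per host, so each host compiles its own container
        fastcgi_param SYMFONY__CACHE_DIR /var/www/app/app/cache/a.example.com;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}
# repeat the server block for b.example.com with its own SYMFONY__CACHE_DIR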
I have noticed that multiple users per day are being assigned the same session_id. I am using PHP 7.2 now, but looking back through the history of user sessions, this has been happening since I was using PHP 5.4.
I am just using the PHP defaults with session_start(), no custom session handler.
I have read that the session_id is derived from a combination of the client IP and the time, so given that I am using a load balancer, could that be limiting the randomness of the IP addresses?
What is the proper way to increase the uniqueness of session_ids to prevent collisions when using a load balancer?
If you are using nginx, you may want to check whether FastCGI micro-caching is enabled and disable it. This has caused session ID collisions before, as noted in a PHP.net bug report against PHP 7.1 running on nginx:
Bug #75496 Session ID Collision happened few times
After the first case [of collision] we changed the hash entropy PHP settings in php.ini so the session_id is now 48 chars, but it didn't help to prevent the second case.
Solution:
FastCGI micro-caching in nginx was caching 200 responses together with their session cookies.
Of course it may have been misconfigured on our server, but it definitely has nothing to do with PHP.
Please see:
https://bugs.php.net/bug.php?id=75496
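If you do want to keep micro-caching enabled, one common safeguard is to bypass the cache for any request that carries cookies and to leave nginx's default Set-Cookie handling alone. A rough sketch (cache zone name, socket, and timings are placeholders):

location ~ \.php$ {
    fastcgi_cache microcache;
    fastcgi_cache_valid 200 1s;
    # skip the cache, and never store responses, for requests carrying cookies
    fastcgi_cache_bypass $http_cookie;
    fastcgi_no_cache $http_cookie;
    # do not add Set-Cookie to fastcgi_ignore_headers; by default nginx
    # refuses to cache responses that set cookies
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm.sock;
}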
I'm assuming you're running PHP-FPM on each web node, but I'd guess options 2 and 3 would probably work running PHP as an Apache module as well.
You've got a few options here:
1. Keep your load-balanced web nodes with Apache and have each one point to the same upstream PHP-FPM server. Obviously the third box running the single PHP-FPM process might need to be beefier, since it's handling PHP parsing for both web nodes.
2. Change the session storage to point to a file location (probably an NFS or SMB share) that both servers can access. Not sure if I've ever done this honestly, but it seems like it would work. Really, your web files should probably be on an NFS/SMB share already so you can deploy changes to only one location.
3. Spin up a Redis server and have both web nodes' PHP-FPM processes use that for sessions.
Number three is probably the best option in most cases.
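If you go the Redis route with the phpredis extension, the php.ini side is roughly this (host and port are placeholders); point both web nodes at the same Redis instance and they will share session data:

session.save_handler = redis
session.save_path = "tcp://redis.internal:6379"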
I am developing an application that has two kinds of deployments. In the first, we host everything: one application, one database, no issues. The second (the one this question will center around) will have one application but different database connections per client.
Access to each domain will be whitelisted, so we don't have to worry about one client entering another client's domain, and there is other authentication on top of that.
What I need is two-fold:
a way to determine the client that is accessing the application (request headers? domain driven?)
grab the configuration for said client so they only access their own database information (use a certain .env file based on the client?).
One idea I have thought of is using Apache to set environment variables based on the requested domain. However, I would prefer to keep this handling inside Laravel. Any ideas?
We do this very thing. As you noted, you need some authority that can map domains to clients, and then a way to fetch the client-specific settings.
Here is how we set it up:
We have a different Apache vhost per client, with all the domains/aliases the client has set up. In the vhost file, Apache gives us a single env variable, CLIENT_ID. That way we know who the client is, no matter which domain alias is being used at any one time.
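A stripped-down sketch of such a vhost (domain names, paths, and the client key are placeholders):

<VirtualHost *:80>
    ServerName app.client-a.com
    ServerAlias www.client-a.com portal.client-a.com
    DocumentRoot /var/www/app/public

    # the only client-specific piece: tell the application who this vhost belongs to
    SetEnv CLIENT_ID client-a
</VirtualHost>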
We then have a Laravel service provider that looks at env('CLIENT_ID') and sets up a number of config options. One of the things that gets set is a client-specific path, where we store a number of client-specific resources and files. So we're doing something like this:
Config::set("paths.client", "/var/www/clients/" . env("CLIENT_ID"));
Now that we have the client-specific paths setup, we can use Dotenv to go load a client-specific .env file for this client. Note we still have a root .env file in our app directory (Laravel expects it) which sets up common config. The client-specific .env file sets up client-specific stuff like SMTP settings, and of course database connection settings.
Dotenv::load(Config::get("paths.client"));
Now that the client-specific .env is loaded, we can easily write to the database.connections config array. And from there on out, eloquent and everything else just work.
Config::set('database.connections.mysql', [
    'database' => env('DB_DATABASE'),
    'username' => env('DB_USERNAME'),
    'password' => env('DB_PASSWORD'),
]);
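Putting those pieces together, the service provider's register() method might look roughly like this; the class name, paths, and the array_merge with the existing connection defaults are illustrative, not necessarily how ours is written:

use Illuminate\Support\Facades\Config;
use Illuminate\Support\ServiceProvider;

class ClientConfigServiceProvider extends ServiceProvider
{
    public function register()
    {
        // The Apache vhost tells us which client this request belongs to
        $clientId = env('CLIENT_ID');

        // Client-specific resources and the client-specific .env live here
        Config::set('paths.client', '/var/www/clients/' . $clientId);

        // Load the client-specific .env on top of the shared root .env
        Dotenv::load(Config::get('paths.client'));

        // Overwrite only the connection details that differ per client
        Config::set('database.connections.mysql', array_merge(
            Config::get('database.connections.mysql'),
            [
                'database' => env('DB_DATABASE'),
                'username' => env('DB_USERNAME'),
                'password' => env('DB_PASSWORD'),
            ]
        ));
    }
}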
You wouldn't have to do it this way of course. We chose Apache vhosts as our authority for mapping domains to clients, but you could have a master database or just a config file. Then in your service provider, you just need to ask the master db (or the config file) what client needs to be set up.
From there your client settings could even be in the master database or config file, if you didn't want to store separate .env files. Whole lot of ways you can skin this cat.
Ultimately, however you get there, once you set the database connection config array, you're good to go.
Update:
For anyone reading this in the future, there is another very simple way to load alternate .env files as mentioned in the comments below:
If you set APP_ENV somehow before you get to Laravel (SetEnv in the virtual host, etc.), it will by default look for a .env.APP_ENV file (e.g. .env.demo). I am using SetEnv to set APP_ENV and then creating .env files for each client.
For our purposes, we didn't want client-specific .env files sitting in the root of our app. But it could work quite well for other situations.
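A minimal sketch of that alternative, assuming a client key of client1: in the client's vhost set

SetEnv APP_ENV client1

and keep a .env.client1 file in the project root with that client's DB_* and mail settings; Laravel will pick it up instead of the plain .env.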
My webserver filesystem is equivalent to the following:
/master-domain.tld
/master-domain.tld/assets/js/foo.js
/master-domain.tld/assets/css/bar.css
/sub1.master-domain.tld
/sub1.master-domain.tld/assets/js/foo.js
/sub2.master-domain.tld
/sub2.master-domain.tld/assets/css/bar.css
...
What I'd like to know is how to serve my common static assets (hosted on the master domain) to my subdomains-- reason being that I then only have to maintain a single set of 'master' assets, rather than update/copy any files to all other subdomains each time I edit them. Obviously this can be achieved using absolute URLs, but my aim is to avoid these so that my local/dev system needn't make any remote calls while I'm designing/debugging.
Currently, I have a mod_rewrite + symbolic link + php script combo set up for each subdomain, linking any calls to non-existent local assets to the master assets. e.g. calling "/sub1.master-domain.tld/assets/css/bar.css" will retrieve the bar.css file hosted on the master domain since a local version of bar.css does not exist. Furthermore, calling "/sub1.master-domain.tld/assets/js/foo.js" would serve the local version of foo.js, since it does exist.
But my current method seems to be hindering the performance of my page loads, and I wonder if there is a better (still modular) approach to solving this problem.
Any tips? Am I going about this in completely the wrong way?
Symbolic links should be all that's required if they're all on the same server. This should cause no performance hits at all. E.g.
/sub1.master-domain.tld/assets -> /master-domain.tld/assets
If your subdomains are served from multiple servers, I would set up a mod_rewrite rule with 302 redirects to the complete URL on the master domain. On that master domain server I would set mod_expires so the assets are cached, avoiding most subsequent requests to the same "missing" assets on the subdomains.
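A rough sketch of both pieces, assuming assets live under /assets/; the rewrite goes in the subdomain's vhost or .htaccess, the expires directives on the master domain (exact patterns depend on your layout):

# subdomain: 302 requests for assets that don't exist locally
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^assets/(.*)$ http://master-domain.tld/assets/$1 [R=302,L]

# master domain: let browsers cache the shared assets
ExpiresActive On
ExpiresByType text/css "access plus 1 week"
ExpiresByType application/javascript "access plus 1 week"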
If they are on the same machine, why not point different virtual host aliases to the same document root?
I'd use a reserved static. subdomain for this purpose and point all asset requests at that one.
I'm starting a new project where users can register and the system will automatically install a CMS for the user's registered domain, and I need to solve dynamic configuration of the server (Apache).
Registration information and the associations between domains and the actual paths to the CMS installations on the server will be stored in a MySQL database.
Is there an easy way to configure Apache to route all unknown domains to a specific PHP script, which will look in the database and provide the actual path to the relevant CMS, so that Apache can then use this info to correctly handle the request?
I think the "easier" solution might be to use PHP to write the domains/paths/config to a file and force Apache to use this file to handle requests. However, since I expect the number of domains to grow, and deleted domains will not be rare, the file could soon fill up with unwanted rules and become hard to maintain, and an Apache restart would be needed every time the file changes. Hence the question about a dynamic solution, which might be much easier to manage (for me and for the admin system itself).
Yes: use a wildcard vhost in Apache, and mod_rewrite to direct all URLs to your front controller (or use the 404 document handler).
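A minimal sketch of that setup; the paths, the catch-all ServerName, and the front controller are placeholders, and the front controller would look up the requested host in your MySQL table and include the matching CMS entry point:

<VirtualHost *:80>
    # the last (or only) vhost acts as the catch-all for every unknown domain
    ServerName catchall.example.com
    ServerAlias *
    DocumentRoot /var/www/router/public

    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    # send everything to the front controller, which maps $_SERVER['HTTP_HOST']
    # to the client's CMS path via the database
    RewriteRule ^ /index.php [QSA,L]
</VirtualHost>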
C.
I am looking for a way to maintain PHP sessions across multiple domains on the same server. I am going to be integrating my sites with a Simple Machines Forum, so I will need to use MySQL-based sessions. Thanks!
Depending on your preferred method of modifying PHP settings (Apache's config, .htaccess), change the session.cookie_domain value so it is consistent across your domains.
I have multiple sub-domains, and each VirtualHost section in the Apache config file contains the following line:
php_value session.cookie_domain mydomain.com
The syntax should be similar if you make the changes in a .htaccess file.
Updated for bobert5064's comment:
For multiple domains (e.g. domain1.com, domain2.org), I think it is only necessary to choose a common domain name (e.g. domain1.com). I have never tried this, so I cannot verify that it works, but the logic seems accurate.
There is also a method to set the values directly in PHP, described at http://us.php.net/manual/en/function.session-set-cookie-params.php. The documentation makes no reference to the ability or inability to set cookies on a different domain.
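A small sketch of that in-PHP approach (the shared parent domain is a placeholder); it has to run before session_start():

// share the session cookie across subdomains of one parent domain
session_set_cookie_params(0, '/', '.mydomain.com');
session_start();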
If one site is going to forward or link to a second, it can include the session ID in the href of the link or as an input in a form. Similar to George's img tag method, but the session would only move over if and when it was needed.
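A rough sketch of that hand-off; the parameter name and target URL are placeholders, and it should only be done between domains you fully trust, since accepting an ID from the URL opens the door to session fixation:

// on the linking site: append the current session ID to the outgoing link
$url = 'https://domain2.org/landing.php?sid=' . urlencode(session_id());

// on the receiving site, before session_start(): adopt the passed ID
if (isset($_GET['sid'])) {
    session_id($_GET['sid']);
}
session_start();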
Which is best depends on the usage pattern of your sites really.