I have a high-traffic website, and the nature of the site means it can receive a very large number of requests in a short period of time.
I use Amazon Elastic Beanstalk to manage the load balancer and instances.
I can have up to 20 instances running, and because FOSUserBundle uses sessions to hold its data, I am losing users' logins and so on.
I know Elastic Beanstalk supports sticky sessions, but due to the nature of the site it gets overwhelmed and sometimes doesn't route a user back to the correct instance, so I am losing users again. Amazon has been no help at all.
Is there a way to override this and use secure cookies instead? (I know cookies aren't secure, but I could create my own encrypt/decrypt method.)
Any suggestions would be helpful :)
I found a way to essentially stop sessions being tied to one server. I remember doing this with a custom PHP system I built a few years ago (using this php.net session handler), but I did not think it would work with Symfony. Since posting this question I have found PdoSessionStorage, which basically stores your sessions in a database instead of in files on the server or instances.
Make sure you select your Symfony version in the documentation, as namespaces sometimes change from version to version.
Link to PdoSessionStorage on Symfony
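For illustration, registering a PDO-backed session handler by hand looks roughly like this (class names and namespaces vary between Symfony versions, as noted above, and the DSN and credentials are placeholders); in a full Symfony app you would normally wire this up through the framework configuration instead:

```php
<?php
// Minimal sketch: share sessions across instances by storing them in a
// database instead of on local disk. The namespace shown is the one used by
// recent Symfony versions; older versions use different class names.
use Symfony\Component\HttpFoundation\Session\Storage\Handler\PdoSessionHandler;

$handler = new PdoSessionHandler(
    'mysql:host=your-db-host;dbname=yourdb',            // placeholder DSN
    ['db_username' => 'user', 'db_password' => 'pass']  // placeholder credentials
);
// The sessions table must exist; the handler provides createTable() to set it up.

// Register it as PHP's session handler; every instance behind the load
// balancer now reads and writes the same session table.
session_set_save_handler($handler, true);
session_start();
```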
It's been a very long time since I've been on here so I hope I'm doing this properly.
I'm working on a web project that is basically a Twitch streamer's profile site. It is designed to display only that streamer's information about their stream, so no other users should be logging in, via Twitch or otherwise.
The problem I'm having is that recent changes to the Twitch API require OAuth, and the token expires after a period of time, as it should.
The question really is this:
How would I go about privately storing a variable on the site? This variable would need to last around 30-60 days, not be stored anywhere other than the server, be inaccessible to anyone else, and be easy to change after the time period is up.
I was looking at APC and realized that, since I'm using PHP 7.2, it has been replaced by APCu. Reviewing the details of APCu, the information would not reliably be stored for the time frame I need and could simply get cleared at some point, so that ruled it out.
I thought about local file storage, but I need to keep the information I'm gathering secret, so plain local files are a nope.
I intend to release the source code once the project is finished, so I don't want to use a database, as that just makes things more complicated for non-technical users.
Sessions rely on a cookie, which puts a string in the hands of whoever might want to be malicious, so that's a nope.
Long story short, I'm trying to avoid the following:
local file storage
databases
sessions
any kind of caching that would be unreliable
I just need a step in the right direction.
Sorry about the lengthy post.
If you want minimal setup and infrastructure, SQLite can be a good option. It's still a database, but it works within PHP directly and only requires a single file to store the data in. This solution is very often used in mobile apps as well, so developers can benefit from the power of SQL while keeping things simple for the user.
sqlitetutorial.net has good tutorials to get you started.
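For example, a minimal sketch of storing a token with an expiry in SQLite via PDO (the file path, table, and column names are just examples):

```php
<?php
// Minimal sketch: a single-file SQLite store for a token, using PDO.
// The file lives on the server; keep it outside the web root.
$pdo = new PDO('sqlite:' . __DIR__ . '/../storage/app.sqlite');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('CREATE TABLE IF NOT EXISTS settings (
    name TEXT PRIMARY KEY,
    value TEXT NOT NULL,
    expires_at INTEGER NOT NULL
)');

// Save (or replace) the token with a 60-day expiry
$stmt = $pdo->prepare('REPLACE INTO settings (name, value, expires_at) VALUES (?, ?, ?)');
$stmt->execute(['twitch_token', 'example-token-value', time() + 60 * 24 * 3600]);

// Read it back, treating an expired row as missing
$stmt = $pdo->prepare('SELECT value FROM settings WHERE name = ? AND expires_at > ?');
$stmt->execute(['twitch_token', time()]);
$token = $stmt->fetchColumn(); // false if absent or expired
```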
I believe I found the solution to the problem. I can still use files for storage: if I add a .htaccess file with deny from all to the directory I wish to restrict, the server can still access and read the files as needed while outside access is blocked.
With a little more .htaccess trickery I can also send a 404 response, so it looks like there's nothing special there instead of alerting anyone to the files' or directory's presence.
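For reference, a rough sketch of what that .htaccess could contain (the exact directives depend on your Apache version, so verify them for your setup):

```apacheconf
# Block all HTTP access to this directory; PHP can still read the files
# from disk. "Deny from all" is the Apache 2.2 syntax used above;
# Apache 2.4 uses "Require all denied" instead.
Deny from all

# Optional: answer with 404 instead of 403 so the directory's existence
# isn't advertised (requires mod_alias).
# RedirectMatch 404 ^.*$
```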
Thanks to SystemGlitch and imvain2 for making me think more on that problem so I could realize the solution possibilities.
I'm currently working through the process of scaling out our server setup, and we're at the point where we need to reconfigure sessions to be stored in a high-availability solution (having people logged out if a server goes down is not an option). We're trying to use Redis because we're already using it for other parts of the site. The trouble is that there doesn't appear to be any existing support for this. Before I write my own session handler class, I thought I would ask whether anyone knows of a project for this use case.
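To illustrate the kind of setup I mean, a rough sketch, assuming something like the phpredis extension's session save handler, would be:

```php
<?php
// Rough sketch, assuming the phpredis extension is installed: it registers a
// "redis" session save handler, so no custom handler class would be required.
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://127.0.0.1:6379'); // placeholder host/port

session_start();
$_SESSION['user_id'] = 42; // now stored in Redis and visible to every web node
```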
I am trying to configure a load-balanced environment for Yii 1.1.14 applications, but I'm having a problem where Yii does not keep a user logged in when the load balancer sends them to another node. Most of the time when logging in it will ask the user to log in twice, because the login happens on one node and the next page loads on another. Otherwise, it will ask the user to log in again halfway through browsing.
The application is using DB sessions, and I can see that the expiry time is being updated in the database. Even when it asks users to log in again straight after they have already logged in, the session expiry time is updated in the database. Does Yii do anything server-dependent with its sessions?
I've searched around for hours but have been unable to find much on this topic, and I'm wondering if anyone else has come across such a problem.
On the server side, I am using Nginx with PHP-FPM and Amazon's ELB as the load balancer. The workaround (as a last resort) is to use sticky sessions on the load balancer, but that does not work well if a node goes offline and forces users onto another node.
Please let me know if I need to clarify anything better.
The issue was that the base path, which is used to generate the application ID that is prefixed to the authentication information in the session, did not match on each server. Amazon OpsWorks was deploying the code to the servers under an identical symlinked path, but the real path returned by PHP differed because of versioning and symlinking.
For example, the symlink path on both servers was '/app/current'. However, the actual path on one server was '/app/releases/2014010700' and the other was '/app/releases/2014010701', which was generating a different hash and therefore not working with the session.
Changing the base path in my configuration file to use the symlink path fixed the problem; before, it was using dirname(), which returned the real path of the symlinked contents. I also had to remove the realpath() call in setBasePath() in the Yii framework.
The modifications I made to the Yii framework are quite specific to my issue, but for anyone else experiencing a similar problem with multiple nodes, I would double-check that each node contains the application at exactly the same path.
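For illustration, a sketch of the relevant part of the config (the '/app/current' path is the deploy symlink from the example above):

```php
<?php
// protected/config/main.php (sketch). Hard-coding the deploy symlink means
// every node derives the same application ID, so the authentication prefix
// stored in the session matches across servers.
return array(
    'basePath' => '/app/current/protected',
    'name'     => 'My Application',
    // ... the rest of the configuration stays unchanged
);
```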
Thank you to the following article: http://www.yiiframework.com/forum/index.php/topic/19574-multi-server-authentication-failure-with-db-sessions
Thought I'd answered this before, but it took a bit to find my answer:
Yii session do not work in multi server
Short version: if you have Suhosin enabled, it's quite painful; turn it off and things work much better. But yes, you can do ELB load balancing with Yii sessions without needing sticky sessions.
Is there a way to store/manage PHP sessions in a similar way to the IIS Session State Service?
I want to have multiple front-end web servers for a multi-domain e-commerce platform and manage the sessions centrally. The idea is that if a server goes down, users with cart contents will not have to start a new session when they are shifted to another web server.
I know cookies and URL parameters could do it to a point but that's not answering the question.
You can register a custom SessionHandlerInterface implementation that is backed by a shared database (e.g. MySQL Cluster).
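A minimal sketch of that idea (PHP 8 syntax; the table layout, DSN, and credentials are placeholders, and a real handler would also need locking to avoid races between concurrent requests):

```php
<?php
// Sketch of a session handler backed by a shared MySQL table.
class SharedDbSessionHandler implements SessionHandlerInterface
{
    public function __construct(private PDO $pdo) {}

    public function open(string $path, string $name): bool { return true; }
    public function close(): bool { return true; }

    public function read(string $id): string
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);
        return (string) $stmt->fetchColumn(); // empty string if no row yet
    }

    public function write(string $id, string $data): bool
    {
        $stmt = $this->pdo->prepare(
            'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())'
        );
        return $stmt->execute([$id, $data]);
    }

    public function destroy(string $id): bool
    {
        return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')->execute([$id]);
    }

    public function gc(int $max_lifetime): int|false
    {
        $cutoff = date('Y-m-d H:i:s', time() - $max_lifetime);
        $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE updated_at < ?');
        $stmt->execute([$cutoff]);
        return $stmt->rowCount();
    }
}

// Every web server pointing at the same database now shares session state.
$pdo = new PDO('mysql:host=db.internal;dbname=app', 'user', 'pass'); // placeholders
session_set_save_handler(new SharedDbSessionHandler($pdo), true);
session_start();
```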
For anyone looking for this because they are moving to Amazon Web Services, there are two options/alternatives:
Use the DynamoDB session handler from the AWS SDK for PHP. This essentially has the same effect as session replication. However, there are monetary costs from DynamoDB, especially if you need locking.
Use session stickiness in the load balancer. This is simpler to set up, and free, but is probably not quite as scalable, as requests from old sessions can't just be sent on to newly started servers.
The most scalable option is of course to get rid of server-side sessions, but that is not always easy without huge changes in backends and frontends, and in some cases not even desirable because of other considerations.
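For the DynamoDB option, registering the handler with the AWS SDK for PHP v3 looks roughly like this (region and table name are placeholders; check the SDK documentation for the options your version supports):

```php
<?php
// Rough sketch using the DynamoDB session handler from the AWS SDK for PHP v3.
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\SessionHandler;

$client = new DynamoDbClient([
    'region'  => 'us-east-1',   // placeholder region
    'version' => 'latest',
]);

$handler = SessionHandler::fromClient($client, [
    'table_name' => 'php-sessions',  // placeholder table; must already exist
]);

// Registers the handler with PHP, replacing file-based sessions.
$handler->register();
session_start();
```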
I am building a web-application and have a couple of quick questions. From what I learnt, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web-application, I am not quite sure if I should take an approach where I design things in an ad-hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation but I can't help but wonder, how do these people do it?
Currently, I bought a shared hosting account on Lunarpages and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable-manner using the cloud, for instance, Amazon's EC2. From my understanding, I can see a couple of components:
There is a load balancer that first receives requests and then decides where to route each request
This request is then handled by a server replica that then processes the request and updates (if required) the database and sends back the response to the client
If a similar request comes in, a caching mechanism like memcached kicks in and returns objects from the cache (a minimal sketch follows this list)
A black box that handles database replication
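For the caching component mentioned above, a minimal read-through sketch using PHP's Memcached extension could look like this (the key scheme and loadProductFromDb() are made-up placeholders):

```php
<?php
// Read-through cache sketch: try memcached first, fall back to the database
// on a miss, then populate the cache for subsequent requests.
function loadProductFromDb(int $id): array
{
    // Placeholder for a real database query.
    return ['id' => $id, 'name' => 'Example product'];
}

$memcached = new Memcached();
$memcached->addServer('cache.internal', 11211); // placeholder host

$productId = 42;
$key = 'product:' . $productId;

$product = $memcached->get($key);
if ($memcached->getResultCode() === Memcached::RES_NOTFOUND) {
    $product = loadProductFromDb($productId);   // only hit the DB on a miss
    $memcached->set($key, $product, 300);       // cache for 5 minutes
}
```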
Specifically, I am trying to do the following:
Setting up a load balancer (my homework revealed that HAProxy is one such load balancer)
Setting up replication so that databases can be synchronized
Using memcached
Configuring Apache to work with multiple web servers
Partitioning the application to use Amazon EC2 and Amazon S3 (my application will need a great deal of storage)
Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably make do with 2-3 servers, a simple load balancer, and replication, but I want to avoid accidentally paying loads of money in the meantime.
I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?
Personally, I think you should consider how your app will scale from the start; otherwise you'll run into problems down the line.
I'm not saying you need to build it initially as a multi-server system, but if you think you'll need to do it later, be mindful of the concerns now.
In my experience, this includes things like:
Sessions. Unless you use 'sticky' load balancing, you will have to have some way of sharing session state between servers. This probably means storing session data either on shared storage or in a DB.
File uploads and replication. If you allow users to upload files, or you have a CMS that allows you to upload images/documents, it needs to cater for the fact that these files will also need to find their way onto other nodes in your cluster. However, if you've gone down the shared storage route mentioned above, this should cover it.
DB scalability. If you're using traditional DB servers, you might want to think about how you'll implement scalability at that level. This may mean coding your app so that you use one connection string for reads and another for writes (see the sketch after this list). Then you are free to implement replication, with one master node handling the inserts/updates and cascading the changes to read-only nodes that handle the bulk of the work.
Middleware. You might even want to go down the route of implementing some kind of message-oriented middleware solution to completely hand off business logic functions. This will give you a great deal of flexibility in how you scale the business logic layer in the future, although initially it will be a lot of complication and work for not much payoff.
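On the read/write split mentioned above, a bare-bones sketch of the idea (hostnames and credentials are placeholders; most apps would hide this behind a small DB layer or an ORM that supports it):

```php
<?php
// Bare-bones read/write split: writes go to the master, reads to a replica.
$write = new PDO('mysql:host=db-master.internal;dbname=app', 'user', 'pass');
$read  = new PDO('mysql:host=db-replica.internal;dbname=app', 'user', 'pass');

// Writes hit the master...
$stmt = $write->prepare('INSERT INTO orders (user_id, total) VALUES (?, ?)');
$stmt->execute([42, 19.99]);

// ...while read-heavy queries go to a replica.
$orders = $read
    ->query('SELECT id, total FROM orders WHERE user_id = 42')
    ->fetchAll(PDO::FETCH_ASSOC);
```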
Have you considered playing around with VMs first? You can run 2-3 VMs on your local machine and set them up like you would actual servers, they just won't be able to handle real traffic levels. If all you're looking for is the learning experience, it might be an ideal way to go about it.