Symfony not serving concurrent requests - php

My problem is that my Symfony application, running on a remote machine with Apache/2.4.6 (CentOS), PHP/5.6.31 and MySQL 5.7.19, does not handle concurrent requests: when I ask for two different pages at the same time, the first one has to finish before the second one can be rendered.
I have another site on the same server written in plain PHP which has no problem rendering as many pages as possible at the same time (it uses the deprecated mysql extension rather than PDO, which Doctrine uses).
That said, I ran the following test:
I inserted a sleep(3); in my DefaultController, requested that page, and simultaneously requested a different one. See the two profilers below:
Page with sleep (called 1st):
Page without sleep (called 2nd):
Page 1 normal load time is 782ms
Page 2 normal load time is 108ms
As you can see, Symfony's HTTP firewall accounts for almost all of the second page's load time.
My guess (which might be naive) is that the first action holds the database connection and only releases it for other requests once it has finished, and that this has something to do with Doctrine's use of a PDO connection.
By the way, I have already read help threads and articles like:
- What is the Symfony firewall doing that takes so long?
- Why is constructing PDO connection slow?
- https://www.drupal.org/node/1064342
P.S. I have tried using both app.php and app_dev.php in the Apache configs; nothing changed. I stuck with app_dev.php so I can have the profiler. Local development using Symfony's built-in server gives the same result.
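For reference, here is a sketch of the test (the action and template names are illustrative, not the actual project code):

// src/AppBundle/Controller/DefaultController.php (sketch)
public function slowAction()
{
    sleep(3); // simulate a slow page
    return $this->render('default/slow.html.twig');
}

public function fastAction()
{
    // requested from a second tab while slowAction is still running
    return $this->render('default/fast.html.twig');
}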

You cannot have two concurrent requests for the same open session in PHP. When you use a firewall, Symfony locks the user session and keeps it locked until you release it manually or the request is served.
To release the session lock use this:
$session->save();
Note that there are some drawbacks and implications. After you save the session, you will not be able to update it (change attributes) until the next request arrives.
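A minimal controller sketch of releasing the lock early (the action and template names are illustrative):

// use Symfony\Component\HttpFoundation\Request;
public function slowAction(Request $request)
{
    $session = $request->getSession();

    // read any session state this action needs first...
    $userId = $session->get('user_id');

    // ...then release the session lock so parallel requests can proceed
    $session->save();

    // long-running work; attribute changes after save() are not persisted
    sleep(3);

    return $this->render('default/slow.html.twig');
}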
Session management: https://symfony.com/doc/current/components/http_foundation/sessions.html
Session interface: http://api.symfony.com/4.0/Symfony/Component/HttpFoundation/Session/SessionInterface.html#method_save
Note 2: if you have multiple concurrent users with different sessions, PHP will serve those requests concurrently.

Related

How do database locks and the web server collaborate with respect to database sessions?

This question covers a wide area, e.g. web servers, database servers, the PHP application, etc., so I doubted whether it belongs on Stack Overflow; however, since the answer will help us decide how to write the application code, I have decided to ask it here.
I am confused about how database sessions and web servers work together. If I am right, when a connection is made for a client, ONLY one session will be created for that connection, and it will last until either the connection is disconnected or it is reconnected after long inactivity.
Now if we consider a web server, Apache 2.4 in particular running a PHP 7.2 application (in Virtual Host) with a database backed by MariaDB 10.3.10 (on Fedora 28 if that matters at all), I assume the following scenario (please correct me if I am wrong):
For each web application (right now we use Laravel), only one database connection will be made, as soon as the first query hits one of the URLs it serves.
Subsequently, it will have only ONE database session for that connection. When the query is served, the connection is kept alive to be reused by other queries the application receives. That means that if the application receives web requests 24x7, the connection will most likely be kept alive too, and will only be closed when we restart either mysqld or httpd, which might not happen for months.
Since all the users of the application, say 20 users at a time, use the same Apache server and Laravel application files (I assume I can call that an application instance), all 20 users will be served through the same database connection and database session.
If all the assumptions above are right, then the concept of database locking seems very confusing. Say we issue an exclusive lock, e.g. LOCK TABLES t1 WRITE;. It will block the reads and writes of other sessions, to avoid dirty read and write operations across concurrent sessions. However, since all 20 users use the same session and connection concurrently, we will not get the required concurrency safety out of the database locking mechanism.
Questions:
How does database locking, specifically an exclusive lock, work in terms of web applications?
Will each web request received by the Laravel application create a new connection and session, or is ONLY one connection and session reused?
Will each database connection have one and only ONE session at a time?
Does SHOW STATUS WHERE variable_name = 'Threads_connected' show the current active sessions or the current active connections? If it shows connections, how can we get the current active database sessions?
Apache has nothing to do with sessions in this scenario (mostly). Database connections and sessions are handled by PHP itself.
Unless you have connection pooling enabled, database sessions will not be reused; each request will open its own connection and close it at the end.
With connection pooling enabled, the thread serving the request will ask the process manager (be it FPM or mod_php) for a connection from the pool, and it will return an available connection, but there will still be at least as many sessions as concurrent requests (unless you hit one of the max_* limits). The general reference goes into more detail, but as a highlight:
Persistent connections do not give you an ability to open 'user sessions' on the same link, they do not give you an ability to build up a transaction efficiently, and they don't do a whole lot of other things. In fact, to be extremely clear about the subject, persistent connections don't give you any functionality that wasn't possible with their non-persistent brothers.
Even having a connection pool available, the manager must run some cleanup operations before returning the connection to the client. One of those operations is table unlocking.
You can refer to the connections reference and persistent connections reference of the mysqli extension for more information.
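For illustration, mysqli requests a persistent connection by prefixing the hostname with p: (credentials are placeholders):

<?php
// Regular connection: opened for this request, closed when the script ends
$db = new mysqli('localhost', 'app_user', 'secret', 'appdb');

// Persistent connection: this PHP worker may reuse an existing link
// instead of opening a new one (note the p: prefix)
$dbPersistent = new mysqli('p:localhost', 'app_user', 'secret', 'appdb');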
However, the mode of operation you are describing, where multiple client sessions share a connection, is possible (though experimental) and has more drawbacks. It's known as session multiplexing.

symfony 3.4 FirewallListener Slow / Blocking

When I'm making a request to a "huge" page with a lot of data to load and then make a second request to a "normal" content page, the normal page is blocked until the "huge" one has loaded.
I activated the profiler and saw that the FirewallListener was the blocking element.
Profiler screenshots (loaded the huge page, switched tab, loaded the normal page): one labeled "Huge", one labeled "Normal".
While the "huge" page was loaded, i did a mysql php request on cli with some time measurements:
Connection took 9.9890232086182 ms
Query took 3.3938884735107 ms
So that is not blocking.
Any ideas on how to solve that?
Setup:
php-fpm7.2
nginx
symfony3.4
It is being blocked by the PHP session.
You can't serve two pages that require access to the same session ID at the same time.
However, once you close/save/release the session on the slow page, another page can be served for the same session. On the slow page, just call Session::save() as early as possible in your controller. This will release the session. Take into consideration that anything you do with the session after saving it will not be stored.
The reason the firewall takes so long is that debug is enabled.
In debug, the listeners are all wrapped with debugging listeners. All the information in the Firewall is being profiled and logged.
Try to run the same request with Symfony debug disabled.
We had a similar problem. When sending a couple of consecutive requests to the server in a short period, the server became very slow. I enabled the profiler bar, and a lot of time was being spent in the ContextListener.
The problem was that file server access on our server is very slow, and session information was stored on the file system, as is the default for Symfony.
I configured my app to use the PdoSessionHandler, and the problem was gone.
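A sketch of wiring up PdoSessionHandler by hand (DSN and credentials are placeholders; in a full Symfony app you would register it as the session handler service via configuration instead):

<?php
use Symfony\Component\HttpFoundation\Session\Storage\Handler\PdoSessionHandler;

// Store sessions in MySQL instead of the slow file system
$handler = new PdoSessionHandler('mysql:host=localhost;dbname=app', array(
    'db_username' => 'app_user',
    'db_password' => 'secret',
));
session_set_save_handler($handler, true);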

Using Laravel behind a load balancer

I have been working on a Laravel 4 site for a while now, and the company just put it behind a load balancer. Now when I try to log in, it basically just refreshes the page. I tried using fideloper's proxy package at https://github.com/fideloper/proxy but saw no change. I even opened it up to allow all IP addresses with proxies => '*'. I need some help figuring out what needs to be done to get Laravel to work behind a load balancer, especially with sessions. Please note that I am using Laravel's database session driver.
The load balancer is a KEMP LM-3600.
Thank you to everyone for the useful information you provided. After further testing, I found that the reason this wasn't working is that we force https through the load balancer but allow http when not going through it. The login form was actually posting to http instead of https. This allowed the form to post, but the session data never made it back to the client. Changing the form to post to https fixed the issue.
We use a load balancer where I work, and I ran into similar problems accessing cPanel dashboards: the page would just reload every time I tried accessing a section and log me off, because my IP address appeared to change. The solution was to find which port cPanel was using and configure the load balancer to bind that port to one WAN. Sorry, I am not familiar with Laravel; if it is just using port 80, this might not be a solution.
Note that the session handling in Laravel 4 uses Symfony 2 code, which lacks proper session locking in all self-coded handlers that do not use the PHP-provided session save handlers like "files", "memcached", etc.
This will create errors when used in a web application with parallel requests like Ajax, but that occurs regardless of any load balancer.
You really should do some more investigation. HTTP load balancers do have some impact on the information flow, but the only effect on a PHP application would be that a single user surfing the site will randomly send requests to any one of the connected servers, rather than always to the same one.
Do you also use any fancy database setup, like master-slave replication? That would more likely affect sessions: if writing is done only on the master, reading is done only on a slave, and that slave lags behind the master in applying the last write. Such a configuration is not recommended as session storage. I'd rather use Memcached instead; its PHP session save handler does implement proper locking as well...
Using fideloper's proxy does not make sense. A load balancer should be transparent to the web server, i.e. it should not act as a reverse proxy unless configured to do so.
Use a shared resource to store the session data. Files and Memcached will surely not work. A DB should be OK. That's what I'm using on a load-balanced setup with a common database.
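For Laravel 4, a sketch of pointing sessions at the shared database (values are illustrative; the remaining options are omitted here):

// app/config/session.php (Laravel 4)
return array(
    // store sessions in the shared database so every balanced node sees them
    'driver' => 'database',
    'table'  => 'sessions', // create it with: php artisan session:table, then migrate
    // ...other options unchanged
);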
I have been using TrustedProxy for a while now and it's working fine.
The main issue with load balancers is proxy routing. The following excerpt from the readme file is what I was looking for:
If your site sits behind a load balancer, gateway cache or other "reverse proxy", each web request has the potential to appear to always come from that proxy, rather than the client actually making requests on your site.
To fix that, this package allows you to take advantage of Symfony's knowledge of proxies. See below for more explanation on the topic of "trusted proxies".

FastCGI on IIS7... multiple concurrent requests from same user session?

Caveat: I realize this is potentially a server configuration question, but I thought there might be a programmatic answer, which is why I am posting here...
Running PHP on Apache, our users were able to issue multiple concurrent requests (from different tabs in the same browser, for example).
Since moving to FastCGI under IIS, this is no longer the default behavior. Now, when a user starts a request to the server and the browser is waiting for a response, if they open a new tab and start another request, the new request is not processed by IIS until the previous request is completed by IIS.
If the user opens a different browser and logs in (which starts a new session for that user), concurrent requests are possible.
My question is: is there a way to configure FastCGI/IIS7 that will allow multiple concurrent requests from the same user session? If not, is there an alternative that would allow this?
The problem is most likely the session mechanism. PHP sessions, by default, use the file system, and PHP has to wait for the session file to be closed before it can open it again. Therefore, subsequent requests for the same session wait on prior requests. To give another example in addition to yours: if you had a frameset page (shudder) with three frames, each referencing the session, they would load one at a time, because each page would have to wait for the session mechanism.
Possible Solutions:
As soon as you're done with the session, call session_write_close() (see the sketch after this list)
Implement a custom DB handler which uses the database instead of the file system.
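A minimal sketch of the first option in plain PHP (the long-running function is hypothetical):

<?php
session_start();

// read or write whatever session state this request needs up front
$_SESSION['last_report'] = 'quarterly';

// release the session lock; other requests for the same session
// can now be processed concurrently
session_write_close();

// long-running work continues without holding the lock; note that
// $_SESSION changes made after this point are not saved
generate_expensive_report(); // hypothetical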
It looks like I'm out of luck, at least running PHP under FastCGI on Windows: PHP FastCGI Concurrent Requests

Persistent memcached connection with Apache and CodeIgniter

I have a CodeIgniter project. I want to use Memcache, but I don't want to create a new connection every time index.php is loaded (which is on every page load). How can I set up Apache / CodeIgniter so that I always have access to a memcache connection, without having to re-establish it all the time?
Sorry, the thing about PHP/Apache is that it sets up and tears down the entire environment on every request. There is no application-level persistence other than resources external to the PHP/Apache environment (i.e. a file, a database, or memcache). You have to set up the new connection every time you want to use it. Of course, PHP makes up for this by doing it all blisteringly fast; that is the trade-off the developers of the PHP runtime chose to make.
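For what it's worth, a sketch of that per-request pattern with the Memcached extension (host, port and key are placeholders):

<?php
// index.php: runs on every page load; the connection lives only
// as long as this request
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$fragment = $cache->get('homepage_fragment');
if ($fragment === false) { // Memcached::get returns false on a miss
    $fragment = build_homepage_fragment(); // hypothetical
    $cache->set('homepage_fragment', $fragment, 300); // cache for 5 minutes
}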
