Symfony2 long polling on SSL server - php

I have a Symfony2 application that implements a long-polling mechanism. The user logs in to the application, and at a certain point a long-polling request is started to notify the user about changes while they continue working inside the application.
The PHP session is saved in the database, so no session-locking problems occur when other AJAX requests are opened during the long poll.
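For context, the long-poll endpoint releases the session before it starts waiting, roughly like this (a simplified sketch, not the actual application code; the session key name is made up):

session_start();
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null; // read what we need first
session_write_close(); // release the session lock before the long wait

$newNotifications = false;
while (!$newNotifications) {
    // ... query the database for changes for $userId, set $newNotifications ...
    sleep(1);
}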
After installing an SSL certificate the problems appeared: the long-polling request seems to lock other requests while it is running, behaving like the default PHP session handler. Although the PHP session is still saved to and read from the database, the application behaves as if a locking mechanism were present and doesn't allow two requests at the same time.
Is this a problem with the SSL module configuration, or am I missing something about Symfony's behavior? If I disable SSL, everything works fine and multiple simultaneous requests are not a problem.
Late edit:
Apparently the problem was with HTTP/2. With HTTP/2 enabled, concurrent requests are queued and executed one after the other; with HTTP/1.1 everything is fine. This is really strange, because I checked the server config against the Apache documentation and it should work with my SSL module. Has anyone experienced something like this?
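For anyone testing the same thing: assuming Apache 2.4 with mod_http2, restricting the protocols in the SSL vhost is a quick way to check whether HTTP/2 is responsible (a minimal sketch, not my full vhost config):

<VirtualHost *:443>
    # Omit "h2" so only HTTP/1.1 is negotiated over TLS
    Protocols http/1.1
    # ... the rest of the SSL configuration ...
</VirtualHost>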

Are you doing it with jQuery or Angular on the client? If so, check the JS console and debug the network tab. Also, can you get hold of your server's Apache SSL configuration? Some parameters may override your server's defaults and conflict with the working non-SSL config.

Related

PHP stream wrappers and Windows Certificate Store with Proxy

Setup/Environment:
In our PHP application, we sometimes need to make HTTPS requests from PHP to other servers. The setup in question is as follows:
We are using PHP stream wrappers for doing the HTTP requests (via Guzzle HTTP). We are doing this because stream wrappers support using the Windows Certificate Store for certificate verification.
The server runs on Windows.
We use a proxy for the HTTPS requests.
The firewalls are configured to allow:
Access to the servers we are doing our requests to.
Access to all certificate revocation lists relevant for the certificates used.
Our problem:
Sometimes, out of the blue, our HTTPS requests fail with certificate validation errors. The problem persists until someone opens a remote desktop session to the server and requests the very same URL in the server's Internet Explorer. After that, our PHP application can do its requests as it should.
Question:
What is the problem here? And what can we do to analyse this further?
If it were a Guzzle problem, it would happen every time.
However, do try to issue the same HTTPS call using cURL, both to verify this is the case and to see whether the cURL request also temporarily clears the issue, just as Internet Explorer does.
But this rather looks like a caching problem: the PHP request is not able to properly access the Certificate Store (or prime its certificates); it can only use the store's services after someone else has gained access, and only as long as that cache does not expire. To verify this, issue calls periodically and note the time elapsed between the user logging in and using IE and the Guzzle calls starting to fail again. If I am right, that time will always be the same.
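For instance, a trivial probe left running from the CLI could measure that interval (a sketch: it assumes Guzzle is installed via Composer, and the URL is a placeholder for your real endpoint):

require 'vendor/autoload.php';

$client = new \GuzzleHttp\Client();
while (true) {
    try {
        $client->request('GET', 'https://example.com/endpoint');
        echo date('H:i:s') . " OK\n";
    } catch (\Exception $e) {
        echo date('H:i:s') . ' FAIL: ' . $e->getMessage() . "\n";
    }
    sleep(60); // probe once a minute and watch when failures start
}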
It could be a permission problem (I think it probably is, but what permissions to grant to what, I'm at a loss to guess). Maybe calls aren't allowed unless fresh CRLs for that URL are available, and PHP doesn't get them. This situation could also either be worked around temporarily by running an IE connection attempt to the same URL from a PowerShell script launched by PHP when an error occurs, or (more likely, and hopefully) attempting to run said script will elicit a more informative error message.
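A sketch of that PHP-launches-PowerShell fallback (the URL is a placeholder, and whether Invoke-WebRequest primes the store the way IE does is exactly the hypothesis to test):

// On a certificate error, fetch the same URL through PowerShell, which goes
// through .NET/Schannel much like Internet Explorer does.
$output = shell_exec(
    'powershell -NoProfile -Command '
    . '"Invoke-WebRequest -Uri https://example.com/endpoint -UseBasicParsing | Out-Null"'
);
// inspect $output, then retry the Guzzle request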
Update
I have looked into how PHP on Windows handles TLS through Guzzle, and nothing obvious came out. But I found an interesting page about TLS/SSL quirks.
More interestingly, I also found out several references on how PHP ends up using Schannel for TLS connections, and how Windows and specifically Internet Explorer have a, let us say, cavalier attitude about interoperability. So I would suggest you try activating the Schannel log on Windows and see whether anything comes out of it.
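If memory serves, that log is controlled by a registry value; something like the following (run as administrator, then reboot) should enable verbose Schannel events in the System event log:

reg add HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL /v EventLogging /t REG_DWORD /d 7 /f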
Additionally, on the linked page there is a reference to a client cache being used, and the related page ends up here ("ClientCacheTime").
It's not an application problem.
I am 99% sure this is a routing problem and that in some circumstances packets are dropped in the router. I would look at the network, change the environment or, if possible, do some network sniffing or monitoring.
If you have a decent network infrastructure, you can use SNMP traps to collect request counts and timeout data (from routers and switches) and ingest them into Elastic APM. That would give you quite a detailed time-series analysis.
As you can see in https://github.com/guzzle/guzzle/issues/394, the verify option is the problem. Note that setting verify to false leaves your system exposed to security attacks.
// Use the system's CA bundle (this is the default setting)
$client->request('GET', '/', ['verify' => true]);
// Use a custom SSL certificate on disk.
$client->request('GET', '/', ['verify' => '/path/to/cert.pem']);
// Disable validation entirely (don't do this!).
$client->request('GET', '/', ['verify' => false]);
These are the Guzzle request options, which show how to handle SSL certificate verification. The documentation describes the issue as follows:
Not all systems have a known CA bundle on disk. For example, Windows
and OS X do not have a single common location for CA bundles. When
setting "verify" to true, Guzzle will do its best to find the most
appropriate CA bundle on your system. When using cURL or the PHP
stream wrapper on PHP versions >= 5.6, this happens by default. When
using the PHP stream wrapper on versions < 5.6, Guzzle tries to find
your CA bundle in the following order:
Check if openssl.cafile is set in your php.ini file.
Check if curl.cainfo is set in your php.ini file.
Check if /etc/pki/tls/certs/ca-bundle.crt exists (Red Hat, CentOS, Fedora;
provided by the ca-certificates package)
Check if /etc/ssl/certs/ca-certificates.crt exists (Ubuntu, Debian; provided by
the ca-certificates package)
Check if /usr/local/share/certs/ca-root-nss.crt exists (FreeBSD; provided by
the ca_root_nss package)
Check if /usr/local/etc/openssl/cert.pem exists (OS X; provided by Homebrew)
Check if C:\windows\system32\curl-ca-bundle.crt exists (Windows)
Check if C:\windows\curl-ca-bundle.crt exists (Windows)
The result of this lookup is cached in memory so that subsequent calls in the same
process will return very quickly. However, when sending only a single
request per-process in something like Apache, you should consider
setting the openssl.cafile PHP ini setting to the path on disk to
the file so that this entire process is skipped.
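In other words, on Windows a php.ini entry such as the following should short-circuit the whole lookup (the path is a placeholder for wherever your CA bundle actually lives):

; php.ini
openssl.cafile = "C:\path\to\cacert.pem"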
See also: how to ignore invalid SSL certificate errors in Guzzle 5, and guzzle-request-fails.

Apache localhost not responding to clients until reset

I have set up a local server on a regular desktop (not server hardware), and 3-4 client machines access the local web application I developed from the server via a Wi-Fi router (the server is connected to the router via cable, all clients via Wi-Fi).
When two of the clients are connected to the application all is well, but when a third (or more) machine joins in, there are periods where no machine gets any service from the server (the application webpage keeps loading until I manually reset Apache on the server via Services). At times the server responds when one of the clients refreshes their page, but most of the time I have to restart the Apache service.
This occurs roughly once an hour on average (over 6 hours of continuous usage) as the clients use the application.
Server is running Windows 7 and Apache v2.4 with PHP v5.4
Server and all client machines are running AVG internet security
Firewall is handled by AVG Internet Security
Is this issue due to the code in my application, desktop not being able to manage requests like a server machine, antivirus or a mix of the three?
If so, how can I set up the server to reset automatically?
Thanks
UPDATE
It is an application where users write reports on files after reviewing information:
- Frequent SQL requests for file data
- No images
- Some pages contain multiple SQL queries that build up the page content
- The network has no internet connection
- The code does not request external information from the internet
- All client machines run the application in the Google Chrome web browser
This rarely happens, but sometimes the number of connections is restricted by a third-party interface used by the application. We are unable to figure out the reason without more details, such as what your app does and what error code Apache or HTTP returns.
This kind of situation is difficult to track, especially on Windows, where diagnostic tools aren't as readily available as on other platforms.
I suppose you can try and check the antivirus by either running server and clients with no antivirus at all for some hours, or disabling and re-enabling the antivirus when the hangup occurs.
Apart from that, you would need to pinpoint where the error occurs:
in the connection stage (Windows OS is the problem)
in the response stage (Apache is the problem - try fiddling with the child spawning parameters)
in the management stage (PHP is the problem - you can probably check this by switching the setup between PHP-as-a-module and PHP-as-CGI)
in the data stage (that is, the connection to the SQL server). You can check this by setting up some pages that use different combinations of session, database, and output buffering, and seeing whether those pages remain reachable even when the application is hung up.
For an example of the last, if a page such as
<?php print date("H:i:s"); phpinfo(); ?>
remains reachable and correctly refreshes (that's why the date() call is there) even when the app does not respond, that demonstrates that Windows, Apache and PHP are "innocent", and either the SQL server has issues or you are not interfacing with it correctly. It might, for example, be the case (though unlikely in this instance) that the resident PHP module is accumulating connections to the SQL server and not releasing them, so that after a while you need to stop Apache (thereby freeing the module) and restart.
If this were the case, even if it's not a "real" fix, you can set up Apache so that all children die and are replaced after a small number of requests (the default was once 150, but as memory leaks all but disappeared, I believe the default became 0 -- Apache children no longer die; check it out, I might well misremember).
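For reference, the directive is MaxConnectionsPerChild in Apache 2.4 (MaxRequestsPerChild in 2.2 and earlier), and 0 does indeed mean children are never recycled. A sketch of the old-style setting:

# httpd.conf - recycle each child process after 150 requests
MaxConnectionsPerChild 150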

Adding PHP extension to IIS Without Restarting

I have a working PHP website at a client where I work, running on IIS. As we are switching to MS SQL, I need to enable php_pdo_sqlsrv_53_nts.dll. However, once I enable the extension I start receiving a 500 error. My guess is that I need to restart the webserver, but for certain reasons we would like to avoid that at this time.
Can you please tell me whether a restart of the web server is necessary on IIS to correctly enable a PHP DLL?
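For reference, the only change being made is the extension line in php.ini (assuming extension_dir already points at the folder containing the DLL):

; php.ini
extension=php_pdo_sqlsrv_53_nts.dll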
A restart is required even if you work on your localhost!
yes - see Microsoft.com
Mind you, restarting any of my webservers takes only a few seconds, so I'm not sure whether that's a big issue for your client. Do they have more than one server behind a load balancer or something? In that case you could do them one by one. Or maybe there's another smart way of temporarily rerouting traffic elsewhere by changing DNS?
Contrary to popular opinion, I'm going to say No, and here's why:
Since you are using IIS, you could try recycling the App Pool, if the restart is not necessarily urgent.
It might take a little while to cycle, but "recycle" uses an overlapping method: the old process stays up until its active requests are finished, while a new process handles any newly arriving requests. This continues until all existing requests are done, and then the old pool exits gracefully. This ensures that service is not disrupted for end users. On the downside, if you have users who sit on the site for long periods of time, it may take a while before your PHP extension becomes available.
I've had success with this method in the past and have been able to install PHP extensions without restarting IIS outright.
To Recycle in IIS 7:
Open Internet Information Services (IIS) Manager
Navigate to SERVERNAME > Application Pools
Select the pool you wish to recycle (the one attached to the site where you need the extension)
In the Action pane, click "Recycle..."
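The same recycle can also be triggered from the command line with appcmd (the pool name is a placeholder):

%windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"MyAppPool"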

HTML5 - WebSocket in shared hosting

I used to have a small chat app (which was almost working) that used PHP, jQuery and MySQL. The volume of users is very small (only my friends use it). I used the long-polling method for this.
Now I am thinking about using HTML5 WebSockets instead, because they are a lot more efficient, and most of my friends use Google Chrome (which already supports HTML5). I have gone through some tutorials on HTML5 WebSockets and downloaded phpWebSocket from GitHub. I have gone through the code, but the readme file says that the PHP script that listens for incoming connections should be run with "php -q" from the command line. I searched for what this "-q" flag does and found that it runs the script in quiet mode. So what happens when I run it in quiet mode? Will it run endlessly? Will this running process affect system resources?
This PHP script has to run the whole time; only then can connections be accepted. Isn't that right?
I have a shared hosting package with HostGator, and they allow cron jobs too. My present chat app (using the long-polling method) inserts all messages into the database; when a user polls, it searches the database for new messages and outputs them (if any).
So I am a bit stuck here. :(
It should be run from the command line because, as you suspected, it is intended to run endlessly. It binds to a socket on the server and listens for incoming connections. It can't reliably be run from the browser.
The "-q" option tells PHP not to output any HTTP headers, such as X-Powered-By: PHP or Content-Type: text/html.
It will consume as much memory as PHP requires for as long as it's running. Your memory footprint on startup with no clients will vary between configurations. The more connected clients, the more CPU, memory and socket descriptors you will use. It uses select(), so its socket handling is efficient.
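To give an idea, such a select-based loop boils down to something like this stripped-down sketch (a plain TCP echo server, not phpWebSocket's actual code; port 12345 just matches the demo URL mentioned below):

<?php
// Stripped-down select() loop: accept TCP connections and echo data back.
$server = stream_socket_server('tcp://0.0.0.0:12345', $errno, $errstr);
if (!$server) {
    die("$errstr ($errno)\n");
}
$clients = array();
while (true) {
    $read = array_merge(array($server), $clients);
    $write = $except = null;
    if (stream_select($read, $write, $except, null) < 1) {
        continue;
    }
    foreach ($read as $sock) {
        if ($sock === $server) {
            $clients[] = stream_socket_accept($server); // new client
        } else {
            $data = fread($sock, 4096);
            if ($data === false || $data === '') {      // client disconnected
                unset($clients[array_search($sock, $clients, true)]);
                fclose($sock);
            } else {
                fwrite($sock, $data);                    // echo back
            }
        }
    }
}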
Also, since you're on shared hosting, you probably won't be able to use it because your user will most likely not have the ability to bind to a port and listen for connections.
As you can see in the demo, the URL to connect the WebSocket to is ws://localhost:12345/websocket/server.php. Unless you have a webserver capable of using WebSockets, you will have to run something like phpWebSocket that acts as a server and listens on a port other than 80.
Hope that helps.
HostGator's shared hosting package does not allow clients to bind to local ports for incoming connections. This might be part of the problem.
http://support.hostgator.com/articles/pre-sales-policies/socket-connections

AJAX requests hang when served in quick succession

On my laptop I have an app that makes 7 AJAX GET requests to a single PHP script at about the same time (millisecond difference). They all return successfully with the result I want.
Then I moved this script to a server (Windows Server) running Apache and PHP. However, the process hangs when I make the same 7 AJAX requests, yet if I make each request individually they all come back successfully! Something doesn't want me to do all 7.
Why is this happening? What configuration variables in php.ini and httpd.conf can I look at to determine the cause?
Thanks
I think the problem might be on the browser side.
Most browsers have a limit of 2 concurrent connections to the same server.
When you moved your application to the server, the extra latency may have caused your AJAX requests to overlap, whereas on localhost they were served in quick succession.
You may want to check out these related articles:
The Dreaded 2 Connection Limit
The Two HTTP Connection Limit Issue
Circumventing browser connection limits for fun and profit
The server may have a throttler in place to keep excessive requests from coming in too quickly.
Maybe your Apache configuration limits the number of concurrent connections from the same IP, or even Windows does. Which version of Windows is it? What kind of Apache installation is it: standalone, or part of XAMPP?
