Long-running program times out with proxy error over DMZ - PHP

I have set up a DMZ to run a web site. Most of the code is on an application server running Debian Release 7.0 (wheezy) 64-bit. I also have a web server running CentOS 6.5, which acts as a proxy for the application server. I have set up LAMP on both, and my web pages are written in PHP. A PHP script on the web server calls a PHP script on the application server. The application server script calls a long-running (> 1 minute) executable written in C++. After 60 seconds (timed by my watch), the script fails with the following message.
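For context, the chain that times out presumably looks something like this on the application server (a minimal sketch; the binary path is a placeholder, not the actual one):

// scriptName.php on the application server (sketch)
set_time_limit(0); // lift PHP's own execution limit for the long-running call
$output = shell_exec('/usr/local/bin/long_job 2>&1'); // hypothetical C++ binary; blocks > 60 s
echo $output;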
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request POST /appServer/scriptName.php.
Reason: Error reading from remote server
Apache/2.2.15 (CentOS) Server at sitename.com Port 80
I commented out code in the application server PHP script and narrowed the problem down to the long-running C++ executable. I also ran the executable in the shell without any problems, so there is clearly a time-out issue, and it seems to be associated with the web server. I only recently replaced an old version of Ubuntu with CentOS 6.5 on the web server, and I did not have this problem before I did that. The PHP code is also the same as before the switch, and it did not give me this problem prior to that. So I am convinced that the problem lies with the web server and one of the PHP or Apache settings on the new system.
I edited /etc/php.ini on the web server and changed all of the uncommented 60-second time limits (max_input_time, default_socket_timeout, mysql.connect_timeout) from 60 seconds to 600 seconds. I still get the above proxy error after 60 seconds.
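For reference, the changed lines looked like this (these are real php.ini directives, but as the solution below shows, none of them was the culprit, because the 60-second limit was imposed by mod_proxy rather than PHP):

; /etc/php.ini on the web server
max_input_time = 600
default_socket_timeout = 600
mysql.connect_timeout = 600

Note that max_execution_time, PHP's own script time limit, is a separate directive; raising PHP limits alone cannot help when the proxy layer gives up first.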

Solution.
On the (CentOS) web server, edit /etc/httpd/conf/httpd.conf and include the line
ProxyPass /appServer/ http://[private IP address]/ timeout=600 Keepalive=On
Specifically, add the
timeout=600 Keepalive=On
part.
I also restarted Apache to be on the safe side.
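As an alternative (a sketch, not part of the original fix), mod_proxy also honors a global ProxyTimeout directive, which raises the read timeout for all proxied requests rather than just one path:

ProxyTimeout 600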

Related

Websocket handshakes failing on PHP

I'm trying to set up a WebSocket server on a development PC. I've tried a few examples from GitHub, all of them with the same result: Error 500 after a few minutes.
I have already enabled the WebSocket protocol in the Windows optional features, and the extension is included in the php.ini file. I tried running the scripts from CMD and nothing seems to work. I also tried inserting a few breakpoints using Xdebug, and the script is being reached; it just keeps refusing to perform the handshake. Is there anything that I may be missing?
The PC is running Windows 10, with PHP 5.6.3.
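One thing worth checking in a hand-rolled server is the handshake response itself. A minimal sketch of the Sec-WebSocket-Accept computation from RFC 6455 (assuming $clientKey already holds the client's Sec-WebSocket-Key header and $socket the accepted connection):

// The GUID is fixed by RFC 6455; a wrong or missing accept key makes every
// browser abort the handshake.
$guid = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';
$accept = base64_encode(sha1($clientKey . $guid, true)); // raw SHA-1, then base64
$response = "HTTP/1.1 101 Switching Protocols\r\n"
          . "Upgrade: websocket\r\n"
          . "Connection: Upgrade\r\n"
          . "Sec-WebSocket-Accept: $accept\r\n\r\n";
fwrite($socket, $response);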

Apache / WSGI and PHP suddenly unable to connect to MSSQL server -- site down

Between yesterday and today, something happened that prevents processes running under Apache from accessing an MSSQL server that is essential for the functioning of the site.
This is what I find in the Apache error logs for PHP scripts:
PHP Warning: mssql_connect(): Unable to connect to server
Flask/SQLAlchemy applications are a bit more informative:
OperationalError: (OperationalError) (20009, 'DB-Lib error message 20009,
severity 9:\\nUnable to connect: Adaptive Server is unavailable or does
not exist (####:1234)\\nNet-Lib error during Permission
denied(13)\\n') None None
When I start the same WSGI app in test mode from the console on the same machine that Apache is running on, everything works. To summarize:
Both WSGI and PHP fail to connect to an MSSQL server literally overnight if run under Apache
When run w/o Apache, the WSGI scripts work fine (can't tell about PHP because that's not my domain)
Nothing was changed on the server that runs the web applications (can't say about the MSSQL server)
I need a clue quick. This stuff is running in a company intranet and people are getting impatient. I have control only over the RHEL server running Apache, not the MSSQL server.
The troubleshooting tips using tsql on the FreeTDS page all work fine.
My /etc/freetds.conf is just out of the box and essentially empty (everything commented out).
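For anyone reproducing that check, the FreeTDS-level connectivity test is something like this (a sketch; host, port, and user are placeholders):

tsql -H dbhost -p 1433 -U dbuser -P dbpassword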
Turns out it had nothing to do with Apache et al. This was an SELinux permission issue that started after the VM was rebooted during the night, probably initiated by a sysop in India. Apparently there was an updated security policy for Apache. I found the issue in /var/log/messages, which thankfully even included instructions on how to fix it.
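For anyone hitting the same thing, the usual commands for inspecting and fixing this class of problem are the SELinux booleans for Apache network access (a sketch; which boolean applies depends on the policy version):

getsebool -a | grep httpd
setsebool -P httpd_can_network_connect 1
setsebool -P httpd_can_network_connect_db 1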

Curl too slow with big files, known solutions don't work

I upload files with curl in PHP (via POST) running on Windows Server 2008, but it always uses only about 10% of the possible speed. I've tried every solution that I could find:
- Using HTTP 1.0
- Using another cURL version
- Using another PHP version
- Using another Apache, as well as XAMPP, WAMP, and IIS
- Changing the buffer size
- Editing the Windows registry (Windows socket stack problem)
- Using another server
- Using other files
- Using different scripts
As said, I've tried different versions of everything I can change. Windows Server 2008 seems to be the problem, but I can't change it because the server is rented and paid for a year. I will now try it with a virtual Apache system on the server, but this isn't a good solution. Is there any solution for this problem?
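For reference, the kind of tweaks listed above look like this in PHP (a sketch with a placeholder URL and file path; CURLFile requires PHP >= 5.5, and none of these is a guaranteed fix):

$ch = curl_init('http://example.com/upload'); // placeholder endpoint
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('file' => new CURLFile('C:\\data\\big.bin')));
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0); // the "HTTP 1.0" attempt
curl_setopt($ch, CURLOPT_BUFFERSIZE, 512 * 1024);              // the "buffer size" attempt
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Expect:'));        // suppress the 100-continue round trip
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
curl_close($ch);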

fwrite() hangs when running process through mod_php as upstream, but fine solo

I seem to be having issues with running wkhtmltopdf through a proc_open() on Ubuntu Server 12.04 with PHP 5.3.10.
What seems to happen (on several servers) when running with Apache alone is that the process is opened successfully, the data is written to it, and the PDF comes out of the other end of the process.
However, when running the same code through a setup with Nginx as a proxy and Apache as the upstream server, the fwrite() to stdin seems to hang/become unresponsive with anything more than approximately 1200 bytes.
The static binary version 0.10.0-rc2 seems to be working fine on its own, and can render any page it can access, so I'm not sure what's causing the issue here.
Edit: It doesn't seem to be Nginx, as I've put that in front of Apache on an AWS box and it still works.
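For context, the pipeline in question is essentially this (a sketch; the '-' arguments are wkhtmltopdf's convention for reading HTML from stdin and writing the PDF to stdout):

$descriptors = array(
    0 => array('pipe', 'r'), // child's stdin
    1 => array('pipe', 'w'), // child's stdout
    2 => array('pipe', 'w'), // child's stderr
);
$proc = proc_open('wkhtmltopdf - -', $descriptors, $pipes);
fwrite($pipes[0], $html); // can block once the OS pipe buffer fills if the
                          // child's stdout is not being drained at the same
                          // time; for large inputs, interleave reads and
                          // writes with stream_select() to avoid the deadlock
fclose($pipes[0]);
$pdf = stream_get_contents($pipes[1]);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);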
You need to run "tail -f " and then run the PHP script. You will hopefully see the error message appear, and that will guide you in the right direction.
This was the result of the Nginx server not having a specific hosts entry for the domain name it was using in the request. The request entered a loop, continually hitting the external address and redirecting to it, rather than resolving locally.

Why Is Apache Giving 403?

I am getting 403 errors from Apache when I send too many (12) synchronous HTTP POSTs via a desktop app I am building in Xcode / Objective-C. The 12 POST requests are just a few kB each and go out instantly, one after the other, and the Apache error log shows...
client denied by server configuration: /the-path/the-file.php
Apache 2.0, PHP 5, and I have this same setup working fine on my local machine. The error is coming from a VPS with my host, which runs very fast and smooth and has plenty of resources. To debug, I threw a sleep(1); call (which stalls script execution by one second) into the PHP file, and that fixed it. This makes me think that I am breaking some limit on requests from a single IP in a certain amount of time. I have googled and combed the PHP ini and Apache configs, but I cannot find what that directive/setting might be.
I should mention that, although it varies, the first 4 or 5 POSTs usually work, and then it starts returning the 403 error intermittently after that. It really acts like it's bogging down.
Any ideas?
The error tells you everything: most likely your VPS has flood control on its web server, which kicks in after 4 or 5 quickly-sequential hits. This has nothing to do with PHP itself, but rather entirely with Apache. In other words, your home setup is not the same as the VPS's setup.
Try disabling or configuring mod_evasive. It is an Apache module that provides evasive action in the event of an HTTP DoS or DDoS attack or a brute-force attack. Use these commands to disable mod_evasive:
a2dismod mod-evasive
service apache2 restart
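If disabling it outright is too blunt, the thresholds can be raised instead (a sketch using mod_evasive's standard directives; the config file location varies by distro):

DOSPageCount 10
DOSPageInterval 1
DOSSiteCount 100
DOSSiteInterval 1
DOSBlockingPeriod 10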
