I have created a CodeIgniter app on AWS Lightsail that queries a large amount of data from an old Magento database, converts it into a new format, and pushes it to my new database.
The app works fine on my local machine under localhost, but when deploying to AWS, I encounter a Gateway Timeout error. I believe this is because my local server is willing to wait longer for a response from another server than my AWS instance is.
Is there any way to solve this error? Or rather, is there any way I can increase the amount of time my AWS instance is willing to wait for a response from the database server?
I tried this, but no dice:
set_time_limit(0);        // remove PHP's execution time limit
error_reporting(E_ALL);   // surface every error and warning
ob_implicit_flush(TRUE);  // flush output after each output call
ob_end_flush();           // flush and turn off the top-level output buffer
I also tried this to no avail:
ini_set('max_execution_time', 0);  // equivalent to set_time_limit(0)
Both were placed in the constructor for my model. If either of these solutions should work, was that the wrong place to put the code?
EDIT: I should also mention that this is a Bitnami server running on Ubuntu.
For future generations: you need to edit the timeout in php-fpm-apache.conf. It is set on this line:
<Proxy "unix:/opt/bitnami/php/var/run/www.sock|fcgi://www-fpm" timeout=900>
If you don't know where that file is, just run
sudo find / -iname php-fpm-apache.conf
in the console. Mine happened to be located at /opt/bitnami/apache2/conf/
Be sure to restart Apache and PHP-FPM with
sudo /opt/bitnami/ctlscript.sh restart php-fpm
sudo /opt/bitnami/ctlscript.sh restart apache
And you'll be good to go!
In my case, I was seeing my "Remaining CPU burst capacity" graph reach zero.
The solution (found here) was to create a bigger instance. To do this, create a snapshot, then on the snapshot select "create new instance", and choose a bigger size than the old one.
Then, as quoted from this blog:
Once done, go to your static IP and edit it to point to a new instance rather than the old instance.
That fixed it for me.
Hope everyone is well
I have set up a LEMP environment on Ubuntu 18.04 WSL. I am using PHP7.1-FPM, Composer and NPM for an application running from a webserver.
I have set up the virtual host file and can browse to the webpage, open the application, and connect to the database over localhost:80. But my issue is that there seems to be some kind of timeout in the LEMP setup. The application has a 60-second timer after which it automatically refreshes and pulls updated information from the MySQL database. If I press the built-in button to refresh it, that works fine, but the minute it tries to do an automatic refresh, it throws up an error saying:
net::ERR_INCOMPLETE_CHUNKED_ENCODING
Uncaught (in promise) Error: Network Error
    at e.exports
    at XMLHttpRequest.f.onerror
I did try adding proxy buffers:
proxy_buffers 8 1024k;
proxy_buffer_size 1024k;
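For what it's worth, proxy_buffers only takes effect where nginx forwards via proxy_pass; with PHP-FPM behind nginx (a standard LEMP stack), the fastcgi_* counterparts are what actually apply. A minimal sketch with assumed paths:

location ~ \.php$ {
    include fastcgi_params;
    # assumed PHP-FPM socket path for PHP 7.1 on Ubuntu
    fastcgi_pass unix:/run/php/php7.1-fpm.sock;
    # FastCGI counterparts of the proxy buffer directives above
    fastcgi_buffers 8 1024k;
    fastcgi_buffer_size 1024k;
    # raise this if long-running responses are being cut off
    fastcgi_read_timeout 300s;
}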
I feel as though Windows 10 is playing a part. From my knowledge, port 80 on Windows 10 is used by the PID 4 HTTP service, which is used by the print spooler, but when I set the site to a different port (1111), it does not even connect.
I am just starting out really with this. I have (basic, compared to most of you guys) knowledge of troubleshooting, but WSL and LEMP are a new frontier to me, so any help would be much appreciated.
Many thanks in advance.
I think I know what the issue is: we use AllowOverride in our vhost, but this is not available in Nginx. It would mean me having to convert our vhosts file, along the lines of the sketch below.
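If it helps, the usual Nginx stand-in for an .htaccess front-controller rewrite (the most common reason for AllowOverride) is a try_files rule. This is only a rough sketch; the document root and PHP-FPM socket path are assumptions:

server {
    listen 80;
    # assumed document root and front controller
    root /var/www/app/public;
    index index.php;

    location / {
        # replaces the typical .htaccess RewriteRule front controller
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # stock Ubuntu FastCGI snippet; assumed PHP-FPM socket
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.1-fpm.sock;
    }
}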
If anyone can assist with this, I would be most grateful. In the meantime, I went with Apache2 in WSL.
I have a chat socket app in PHP, and I'd like to run it not just on my local computer, but also on a webserver. How can I do that?
php -f library/filename.php
That's it.
What your answer is missing is how to keep it going, how to get it started on system reboot, how to restart it in case it stops for some reason, etc.
Depending on your distribution, you may be able to use screen or a similar utility to keep it alive. Or, you can write a systemd service file and let systemd manage keeping it going.
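For illustration, a minimal systemd service file for the command above might look like this; the unit name, script path, and service user are assumptions:

[Unit]
Description=PHP chat socket server (example unit; paths and user are assumptions)
After=network.target

[Service]
# keep the socket server running; restart it if it ever dies
ExecStart=/usr/bin/php -f /var/www/app/library/filename.php
Restart=always
RestartSec=3
User=www-data

[Install]
WantedBy=multi-user.target

Save it as, say, /etc/systemd/system/chat-socket.service, then run sudo systemctl daemon-reload and sudo systemctl enable --now chat-socket to start it immediately and on every boot.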
I'm a LAMP guy, and have now started learning WebSockets via Ratchet. So far so good following the startup docs here, and hence I'm able to run the Ratchet server, like this:
$ php server.php
And then my Javascript Clients can connect to it, etc.
But..
As a LAMP guy, I'm very used to having Apache (or Nginx) as the "server" for any PHP files served to the public. Now... should I just run the above command in my terminal, and that's going to be the Ratchet server?
Is there a way NOT to run the server like that? Or is there a way to let Apache (as an example) manage the Ratchet server? Which means, let Apache start/stop Ratchet whenever I type:
$ service httpd start
$ service httpd stop
I'm more confident this way. Plus, the SSL handling, etc. would then also be done by Apache more easily. Am I right?
Please kindly suggest, as I'm very new to this area. Thanks all :)
You are indeed right that running it from the command line is not a production-ready solution.
On the last page of the tutorial (Deployment) there are some ways to do it. For example, setting up Supervisor is explained in full there.
If you don't like the Supervisor route, you could instead write a shell script, executed on startup, that starts server.php (a less robust solution, yet easier). For the Supervisor option, see the sketch below.
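A minimal Supervisor program entry might look like the following; the program name, script path, and user are assumptions:

; minimal example; program name, paths, and user are assumptions
[program:ratchet]
command=/usr/bin/php /var/www/chat/server.php
autostart=true
autorestart=true
user=www-data
stdout_logfile=/var/log/ratchet.out.log

Place it in /etc/supervisor/conf.d/ (the exact location varies by distro), then run supervisorctl reread and supervisorctl update to pick it up.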
The SSL part you want is possible using Apache as a proxy.
If you are using the Apache web server (2.4 or above), enable these modules in the httpd.conf file:
mod_proxy.so
mod_proxy_wstunnel.so
Add this setting to your httpd.conf file
ProxyPass /wss2/ ws://ratchet.mydomain.org:8888/
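Put together, the relevant httpd.conf lines would look roughly like this (the module paths assume the stock modules/ layout; the ProxyPass target reuses the example host and port from above):

# enable the proxy core and the WebSocket tunnel module
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

# hand /wss2/ traffic over to the Ratchet server
ProxyPass /wss2/ ws://ratchet.mydomain.org:8888/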
If you have any more questions please let me know.
I've been attempting to install the PHP APM plugin on my web servers; however, I've hit a wall and require some assistance.
We are able to install the plugin without issue, update the config without issue, and start the service without issue. However, shortly afterwards the php_agent.log starts showing that it cannot connect to the daemon, and it continues to fail.
I've checked the daemon and it shows that it is running, however I discovered that the process has actually zombie'd out and is dead. Restarting PHP-FPM removes the zombie and the service works again for a few minutes, but goes back into a zombie state soon after.
I'm able to replicate this problem across all of my web servers. I even spun up a brand new box and deployed it, adding the same configurations as the others, and it too started to zombie shortly after starting.
My configuration is as follows:
CentOS 7 (kernel 3.10.0-229.11.1.el7.x86_64)
PHP-FPM (5.5.30-1.el7.remi)
Nginx (1:1.6.3-6.el7)
New Relic Daemon (4.23.4.113-1)
New Relic PHP5 (4.23.4.113-1)
New Relic PHP5 Common (4.23.4.113-1)
To add insult to injury, it appears that if we leave the zombie for too long, it eventually crashes the website across all the servers. Truly a pain in the rear.
I would appreciate any help or thoughts anyone might have, as this is driving me insane.
Thanks!
Do you have a process that clears out files in /tmp older than some set age? The agent and daemon communicate via a socket file called /tmp/.newrelic.sock. If it goes away, you should see "ENOENT" errors in the logs. You might also have a permission issue on some locations/files.
If the socket file is the problem, consider switching to a TCP port instead of the socket file by setting newrelic.daemon.port in your configuration file (newrelic.ini)
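For example, the newrelic.ini change might look like the following (the port number is arbitrary; any free port works):

; use a TCP port instead of /tmp/.newrelic.sock for agent-daemon traffic
newrelic.daemon.port = 8880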
I had the same issue before. The only thing I did was reinstall it against a new application created on New Relic. Good luck.
rpm.newrelic.com/accounts/{yourid}/applications
Per New Relic:
[For CentOS], the default systemd unit file for httpd has the PrivateTmp directive set to true, which means that httpd expects a private temporary directory for use by the process(es). As a result, our PHP agent and daemon can't communicate on a fresh install, as our RPM package installs a socket file when installing the package via yum. This default socket file is outside any private temp directory, which means the agent and daemon can't use it to communicate (as a result of the agent being activated via the httpd process), and the correct socket file won't get created during restarts, as the agent and daemon read the location as already existing.
So, to summarize, the problem is two-fold:
Private temp directories for the default httpd install prevent the default install of our PHP agent from communicating with the daemon.
The default socket file installed by our RPM package prevents a new one from being created in the correct location.
The current work-around we have implemented is to delete the default socket file at /tmp/.newrelic.sock, and then to issue a service newrelic-daemon stop, then service httpd stop, and finally a service httpd start. (I've seen a plain restart of httpd not work at times.) This problem will hinder all fresh installations of the PHP agent on CentOS 7. Another thing to note: the default unit files for nginx and php-fpm also use private temp directories and are therefore subject to the same potential issues.
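Spelled out as shell commands, that work-around is (a direct transcription of the steps above):

sudo rm /tmp/.newrelic.sock    # remove the stale default socket file
sudo service newrelic-daemon stop
sudo service httpd stop
sudo service httpd start      # a plain restart sometimes doesn't work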
I have a working PHP website at a client where I work, which runs on IIS. As we are switching to MSSQL, I need to enable the php_pdo_sqlsrv_53_nts.dll extension. However, once I enable the extension, I start receiving a 500 error. My guess is that I need to restart the webserver, but for certain reasons we would like to avoid that at this time.
Can you please tell me whether a restart of the web server is necessary on IIS to correctly enable a PHP DLL?
A restart is required even if you work on your localhost!
Yes - see Microsoft.com.
Mind you, restarting any of my webservers takes only a few seconds, so I'm not sure if that's a big issue for your client. Does he have more than one server behind a load balancer or something? In that case you could do them one by one. Or maybe there's another smart way of temporarily rerouting traffic elsewhere by changing the DNS?
Contrary to popular opinion, I'm going to say No, and here's why:
Since you are using IIS, you could try recycling the App Pool, if the restart is not necessarily urgent.
It might take a little while to cycle, but "recycle" uses an overlapping method: the old worker process stays up until its active requests are finished, while a new process handles any newly arriving requests. Once the old process's requests are done, it exits gracefully. This ensures that service is not disrupted for end users. On the downside, if you have users that sit on the site for long periods of time, it may take a while before your PHP extension becomes available.
I've had success with this method in the past, was able to install PHP extensions without restarting IIS outright.
To Recycle in IIS 7:
Open Internet Information Services (IIS) Manager
Navigate to SERVERNAME > Application Pools
Select the pool you wish to recycle (the one attached to the site where you need the extension)
In the Action pane, click "Recycle..."
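If you prefer to script it, the same recycle can be triggered from an elevated command prompt with appcmd (the pool name here is a placeholder):

%windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"MySitePool"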