Docker container sometimes gives Unknown database - php

I am currently running a project on Docker, and I am trying to communicate with the database via an API. They are both deployed with Docker.
The problem is, when I make a call to the API (via Postman or the front-end) I SOMETIMES get a 500 error.
When I look at the logs (docker-compose logs -f) in the container folder, I get the following message:
test-api_php | [2021-03-18 10:09:04] application.ERROR: SQLSTATE[HY000] [1049] Unknown database 'test-api'-#0 /app/config/....
The thing is, it happens about 50% of the time; the other times it works as intended. I've noticed that if I make a request and wait a few seconds, the next request almost always goes through, but if I make multiple requests consecutively, it's very unlikely to go through. I've given Docker as many resources as possible: 8 CPUs, 12 GB RAM, 2 GB swap, 60 GB disk image size.
My colleagues also have the same issue when they try it on their machines, so it's not machine-specific.
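One way to narrow this down is to log, per request, which MySQL host the PHP container actually resolves and connects to: if the compose stack (or a leftover container/volume) exposes more than one MySQL instance, an intermittent "Unknown database" error can mean that some requests land on a backend that never had the schema created. Below is a minimal diagnostic sketch, assuming the API connects via PDO; the environment variable names, credentials, and retry count are placeholders, not the project's actual configuration.

<?php
// Diagnostic sketch (not the project's actual bootstrap code):
// log which MySQL server the container actually reaches, and retry briefly,
// so intermittent "Unknown database" errors can be correlated with a
// specific backend. Host, database name, and credentials are placeholders.

$host = getenv('DB_HOST') ?: 'mysql';
$db   = getenv('DB_NAME') ?: 'test-api';
$dsn  = "mysql:host=$host;dbname=$db;charset=utf8mb4";

error_log(sprintf('Connecting to %s (%s) for database "%s"',
    $host, gethostbyname($host), $db));

$attempts = 3;
for ($i = 1; $i <= $attempts; $i++) {
    try {
        $pdo = new PDO($dsn, getenv('DB_USER') ?: 'root', getenv('DB_PASS') ?: '');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        // Ask the server for its own hostname - useful for telling apart
        // two MySQL containers that answer under the same service name.
        $serverHost = $pdo->query('SELECT @@hostname')->fetchColumn();
        error_log(sprintf('Connected to MySQL host %s on attempt %d', $serverHost, $i));
        break;
    } catch (PDOException $e) {
        error_log(sprintf('Attempt %d failed: %s', $i, $e->getMessage()));
        if ($i === $attempts) {
            throw $e;
        }
        usleep(250000); // wait 250 ms before retrying
    }
}

If the logged address or hostname flips between two values across requests, the fix belongs in the compose file (a duplicate service, a stale volume, or a service name resolving to multiple containers), not in PHP.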

Related

webpage request with 10 pictures and some text freezes on nginx and php-fpm and disconnects other services?

Is it possible that a request for a page, where the server or PHP might have an issue, freezes and even disconnects other, unrelated SSH sessions?
I am running a simple webpage (10 pictures and some text) in a dockerized environment with a separate reverse proxy, web server, and database (nginx, php-fpm, and PostgreSQL).
The whole system was up without a restart for a year or so, without problems. Now I have a newly occurring issue (for about a month) with page/system freezes. When I visit my webpage it locks up from time to time (sometimes 1 visit is enough, other times I need to open it up to 20x) and needs about 30 seconds to start reacting again. The strange thing is that if I am connected in parallel with SSH to the server, it sometimes (not always) also disconnects my terminal. Which is why I believe it has to do something with the system (but I can't find anything there, so I'm trying a different perspective here).
server (only remote access available):
Debian GNU/Linux 9.4 (stretch)
Kernel: 4.9.0-6-amd64 #1 SMP Debian 4.9.82-1+deb9u3 (2018-03-02) x86_64 GNU/Linux
68 GB RAM, 8 cores, 2x 4 TB HDD and 1 TB SSD
1 GBit uplink
I have monitoring installed and there does not seem to be any high workload on the IOs, network, CPU, or other during the lock up (I am not monitoring php stats though). I also have the same setup running on a local test server (different hardware and Kernel 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux) and that server has no freezing issues, so again an argument against the issue being with the dockerized environment or my page code.
What I have done so far on the hardware side:
1.) SMART diagnostics - without any obvious issues (the "backup disk" (not the one the servers are saved on) has shown for some time: 191 G-Sense_Error_Rate 0x0032 001 001 000, but the provider ran a separate test some time ago and said that the disk has no issue, and that the G-Sense_Error_Rate has little informational value anyhow)
2.) atop (htop and iotop are live and SSH disconnects, thus I can't watch them as the problem occurs) over a 1 s interval with 300 samples (thus 5 min), where I was able to produce multiple freezes, but there were no obvious load issues (granted, this is the first time I am looking at those things! - but there was also no high-level line coloring that atop does automatically)
3.) I also have a dockerized monitoring stack running (the freeze occurs both with it running and with it disabled, so it should not come from here either) where I can view the containers separately, and they also do not show anything alarming
4.) restarted the whole server - issue continues
5.) ran memtester on 55 of the 65 GB of RAM without issues
6.) no problems in syslog
7.) pinged the server while producing the error: the ping is quick at 27 ms, but when the server hangs I lose about 1 ping in 10 (during those 30-40 s; afterwards the ping is perfect again). I cannot figure out why that is.
Where else could I look?
Any suggestions are highly appreciated!
Thanks!
Strange that this has only started happening within the last few months and was fine previously.
Are you pulling down the latest images for nginx, postgres, etc.? Maybe it's a problem with the image versions, and you could try pinning a specific release.
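One blind spot mentioned above is that PHP itself isn't being monitored. If the container setup allows it, a cheap way to see whether the 30-second freeze is actually spent inside php-fpm is to log slow requests from PHP. This is only a sketch, assuming you can set auto_prepend_file in the pool's php.ini; the threshold, file path, and message format are arbitrary. php-fpm's own slowlog/request_slowlog_timeout directives are the more standard alternative and additionally dump a backtrace of stuck requests.

<?php
// slow-request-logger.php - a sketch, loaded via php.ini:
//   auto_prepend_file = /var/www/slow-request-logger.php
// Logs any request that takes longer than a threshold, so freezes can be
// attributed to (or ruled out of) the PHP side. Threshold/paths are examples.

$GLOBALS['__req_start'] = microtime(true);

register_shutdown_function(function () {
    $duration = microtime(true) - $GLOBALS['__req_start'];
    if ($duration > 5.0) { // only log unusually slow requests
        $method = isset($_SERVER['REQUEST_METHOD']) ? $_SERVER['REQUEST_METHOD'] : 'CLI';
        $uri    = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '-';
        error_log(sprintf(
            '[slow-request] %s %s took %.2fs, peak memory %.1f MB',
            $method, $uri, $duration, memory_get_peak_usage(true) / 1048576
        ));
    }
});

If the freezes never show up in this log while the page still hangs in the browser, the time is being lost in front of php-fpm (nginx, the network, or the host itself), which fits the SSH disconnects better than a PHP problem would.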

AWS RDS goes 100% with number of DB connections

I have deployed a Laravel 4.2 based project on AWS.
I am facing an issue: when there is load on my site, RDS hits 100% CPU utilization.
For example, the last time we had load there were 422 DB connections.
The strange part is that at the same time the Write IOPS and Read IOPS are just normal, and the slow query logs are fine.
I need a direction on how to debug this issue. I want to know which thing is causing the high CPU utilization.
Thanks
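Since the slow query log looks clean, it can help to log query timings from inside the application instead: a few moderately fast but expensive queries multiplied by 422 connections can saturate CPU without ever crossing the slow-query threshold. Below is a sketch for Laravel 4.x, where the DB::listen callback receives the SQL, bindings, and execution time in milliseconds; the 100 ms threshold is just an example.

<?php
// app/start/global.php (or a service provider) - a diagnostic sketch.
// In Laravel 4.x the DB::listen callback receives ($sql, $bindings, $time);
// $time is in milliseconds. Logs queries slower than a chosen threshold so
// the statements responsible for RDS CPU spikes can be identified.

DB::listen(function ($sql, $bindings, $time) {
    if ($time > 100) { // threshold in ms - adjust as needed
        Log::warning('Slow query', [
            'sql'      => $sql,
            'bindings' => $bindings,
            'ms'       => $time,
        ]);
    }
});

Running SHOW FULL PROCESSLIST on the RDS instance during a spike is a complementary way to see which statements are piling up at that moment.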

Server overload due to multiple xhr requests

Recently I started experiencing performance issues with my online application hosted on Bluehost.
I have an online form that takes a company name, with an "onKeyUp" event handler tied to that field. Every time you type a character into the field it sends a request to the server, which runs multiple MySQL queries to get the data. The MySQL queries all together take about 1-2 seconds. But since requests are sent after every character that is typed, it easily overloads the server.
The solution to this problem was to cancel the previous XHR request before sending a new one. And it seemed to work fine for me (for about a year) until today. I am not sure if Bluehost changed any configuration on the server (I have a VPS), or any PHP/Apache settings, but right now my application is very slow due to the number of users I have.
I would understand a gradual decrease in performance caused by database growth, but it suddenly happened over the weekend and speeds went down by a factor of about 10: a usual request that took about 1-2 seconds before now takes 10-16 seconds.
I connected to the server via SSH and ran a stress test, sending lots of queries to see what the process monitor (top) would show. As I expected, for every new request a PHP process was created and put in a queue for processing. This queue waiting, apparently, took most of the wait time.
Now I'm confused: is it possible that before (hypothetical changes on the server) every XHR abort was actually causing the PHP process to quit, reducing the additional load on the server and therefore making it work faster? And now, for some reason, this doesn't work anymore?
I have WAMP installed on Windows 7 as my test environment, and when I export the same database and run the stress test locally, it works fast. Just like it used to be on the server before. But on Windows I don't have such a handy process monitor as top, so I cannot see whether PHP processes are actually created and killed respectively.
Not sure how to do the troubleshooting at this point.
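On the PHP side, aborting an XHR does not necessarily kill the PHP process: PHP only notices the dropped connection the next time it tries to send output, and only stops then if ignore_user_abort is false (the default). Below is a sketch of what an abort-aware endpoint could look like; the file name, credentials, and query list are placeholders, not the actual application code.

<?php
// search.php - diagnostic sketch, not the actual endpoint.
// PHP does not notice an aborted XHR until it tries to write output; with
// ignore_user_abort(false) the script is terminated at that point instead
// of finishing all remaining queries for a request nobody is waiting for.

ignore_user_abort(false);          // the default, stated explicitly here

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // placeholder credentials
$queries = ['...', '...', '...'];  // the multiple lookups mentioned above (placeholders)

foreach ($queries as $sql) {
    // Touch the connection so an abort is detected between queries.
    echo ' ';
    flush();
    if (connection_aborted()) {
        exit; // the client gave up (e.g. the XHR was cancelled) - stop working
    }
    // $pdo->query($sql); // run the next lookup only if the client is still there
}

Independently of this, debouncing the onKeyUp handler on the client (waiting roughly 300 ms after the last keystroke before sending a request) reduces the number of requests far more reliably than cancelling them after they have already reached the server.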

Script working fine on development server but not remote server

I am having problems with my PHP/MySQL website. It runs fine on my development machine, but on GoDaddy it has started giving me problems. After running it multiple times I get either error 500 (Internal Server Error) or a connection timeout. I am now convinced that it's not the web host, as files like sitemap.xml are loading very fast.
I attempted profiling it with the NuSphere profiler, and the total time it takes to load the scripts is 143.0 ms. Using the Zend Controller benchmark tool (without any performance-related components) I can make an average of 12 requests per second against my local script.
Using the PHP function memory_get_usage I get 1340648.
My questions are
What is the allowable amount of time that a script should take to load?
How can I know the CPU utilization of my scripts?
How can I know the memory utilization of my scripts?
I use Windows with Zend CE. I have checked the error logs and nothing shows. I have googled but none of the solutions seem to work.
If it's timing out on the server, it's more than likely because a link to a resource in your script is not pointing to the correct location, leading to a function call not working or another resource not being found.
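On the CPU and memory questions above: there is no single allowable load time (shared hosts typically just enforce max_execution_time, commonly 30 seconds), but PHP can report its own wall time, CPU time, and peak memory. A rough sketch follows; getrusage() may not be available on older Windows PHP builds such as the Zend CE setup described, in which case only the wall-time and memory lines apply.

<?php
// Rough per-request resource report - a sketch to append at the end of a page.
// Reports wall time, user/system CPU time, and peak memory for the script.

$wallStart = microtime(true);

// ... the page's normal work would run here ...

$wall  = microtime(true) - $wallStart;
$usage = function_exists('getrusage') ? getrusage() : null; // may be unavailable on old Windows builds

printf("wall: %.1f ms\n", $wall * 1000);
if ($usage !== null) {
    printf("cpu (user): %.1f ms\n",
        $usage['ru_utime.tv_sec'] * 1000 + $usage['ru_utime.tv_usec'] / 1000);
    printf("cpu (sys):  %.1f ms\n",
        $usage['ru_stime.tv_sec'] * 1000 + $usage['ru_stime.tv_usec'] / 1000);
}
printf("peak memory: %.2f MB\n", memory_get_peak_usage(true) / 1048576);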

Debugging potential network bottleneck on AJAX calls

I've written some JS scripts on my school's VLE.
It uses the UWA Widget Format and, to communicate with a locally-hosted PHP script, it uses a proxy and AJAX requests.
Recently we've moved the aforementioned locally-hosted server from a horrible XP-based WAMP server to a virtual Server 2008 distribution running IIS and FastCGI PHP.
Since then - or maybe it was before and I just didn't notice - my AJAX calls are starting to take in excess of 1 second to run.
I've run the associated PHP script's queries on PHPMyAdmin and, for example, the associated getCategories SQL takes 0.00023s to run so I don't think the problem lies there.
I've pinged the server and it consistently returns <1ms as it should for a local network server on a relatively small scale network. The VLE is on this same network.
My question is this: what steps can I take to determine where the "bottleneck" might be?
First of all, test how long your script is actually running:
Simplest way to profile a PHP script
Secondly, you should check the disk activity on the server. If it is running too many FastCGI processes for the amount of available RAM, it will swap and it will be very slow. If the disk activity is very high, then you know you've found your culprit. Solve it by reducing the maximum number of FastCGI processes or by increasing the amount of server RAM.
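To make the first suggestion concrete, here is a checkpoint-timing sketch along the lines of the linked answer; the checkpoint labels and the commented-out steps are stand-ins for the actual endpoint code.

<?php
// Checkpoint profiler sketch - wrap the suspect AJAX endpoint with this to
// see whether the >1 s is spent in PHP, in MySQL, or outside the script
// entirely (proxy, network, FastCGI queueing).

$marks = ['start' => microtime(true)];

// ... connect to the database ...
$marks['connected'] = microtime(true);

// ... run the getCategories query ...
$marks['queried'] = microtime(true);

// ... build and echo the response ...
$marks['done'] = microtime(true);

$prev = null;
foreach ($marks as $label => $t) {
    if ($prev !== null) {
        error_log(sprintf('%-10s %.1f ms', $label, ($t - $prev) * 1000));
    }
    $prev = $t;
}
error_log(sprintf('total      %.1f ms', ($marks['done'] - $marks['start']) * 1000));

If the script reports only a few milliseconds end-to-end while the browser still sees more than a second, the time is being lost outside PHP (the proxy, FastCGI process startup, DNS, or the VLE side of the request), which narrows where to look next.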
