I have a problem. When I split my MariaDB off the main server onto another server (the database server runs the MariaDB Docker image from the latest tag), I get this error:
Got an error writing communication packets
I have two servers: one is a web server (no DB), the other runs Ubuntu 20.04 with 4 GB RAM and 4 cores (2 GHz per core).
The port is open and ping between them is less than 1 ms.
I tested with a basic WordPress site database and the connection was fine, but my real database is about 1 GB, which I suspect is causing the problem.
I also tried connecting over the private network (192.168.100.25) instead of the public IP, but the problem is the same.
Here is the MariaDB log:
Aborted connection 3 to db: 'wpdb' user: 'root' host: 'myip' (Got an error reading communication packets)
Aborted connection 5 to db: 'wpdb' user: 'root' host: 'myip' (Got an error writing communication packets)
I also edited the MariaDB config:
increased max_allowed_packet to 1GB
increased net_buffer_length to 1000000
but nothing changed!
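For reference, those settings would typically live in a my.cnf override along these lines; this is only a minimal sketch, and the file path plus the extra timeout variables shown are illustrative assumptions, not part of my actual config:

# hypothetical override file mounted into the MariaDB container, e.g. /etc/mysql/conf.d/network.cnf
[mysqld]
max_allowed_packet = 1G        # largest single packet/query the server will accept
net_buffer_length  = 1000000   # initial size of the per-connection buffer
# timeouts that commonly sit behind "aborted connection ... communication packets" errors
wait_timeout       = 28800     # seconds an idle connection may stay open
net_read_timeout   = 60        # seconds to wait while reading from a connection
net_write_timeout  = 60        # seconds to wait while writing to a connection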
Here are the MariaDB variables:
https://pastebin.ubuntu.com/p/yHFRh7CnVC/
SHOW GLOBAL STATUS:
https://pastebin.pl/view/b3db2b91
SHOW FULL PROCESSLIST output:
8,root,31.56.66.249:60612,,Query,0,starting,SHOW FULL PROCESSLIST,0
ulimit on the host server:
ubuntu@rangoabzar:~$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15608
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15608
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
ulimit in docker container:
root@63aa95764534:/# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15608
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
(iostat and htop output were attached as screenshots, not reproduced here)
I got this same error by doing a query that returned multiple rows, then processing those rows using rows.Next() (in Golang), and exiting early because of an unrelated error without calling rows.Close(). The confusing part was that it worked the first few times, but eventually failed, indicating some resource (connections?) was being used up.
I was able to take advantage of Golang's defer statement, and just do
defer rows.Close()
before ever calling rows.Next(), but calling rows.Close() before every early exit from the rows.Next() loop works as well.
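A minimal sketch of that pattern with database/sql (the DSN, query, and column names are placeholders, not from the original code):

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // driver registration; any database/sql driver works the same way
)

func listNames(db *sql.DB) error {
	rows, err := db.Query("SELECT name FROM users") // placeholder query
	if err != nil {
		return err
	}
	// Guarantees the rows (and the underlying connection) are released
	// even if we return early from the loop below.
	defer rows.Close()

	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			return err // early exit is safe: defer still closes rows
		}
		fmt.Println(name)
	}
	return rows.Err()
}

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/wpdb") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := listNames(db); err != nil {
		log.Fatal(err)
	}
}

Placing defer rows.Close() right after the error check means every early return, including the one inside the Scan loop, still hands the connection back to the pool.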
Related:
FYI, this question may look like a duplicate and many people have asked it before, but I have collected the answers from those questions into mine and tried them, and the problem remains (or the suggestions don't work). I explain it in more detail here and I want to understand why it happens.
It is similar to:
Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 71 bytes)
and to other questions with many responses that just mark it as a duplicate.
The situation:
I have 3 servers (engines) running the same application (a Laravel web app).
A. My engine (works, no error): Apache, Ubuntu 18.04.6 LTS
❯ free -m
total used free shared buff/cache available
Mem: 7976 865 497 18 6613 6797
Swap: 0 0 0
B. My engine (doesn't work, memory allocation error): Nginx, Ubuntu 18.04.4 LTS
➜ ~ free -m
total used free shared buff/cache available
Mem: 7975 658 4312 22 3005 7006
Swap: 4095 1 4094
C. My engine (doesn't work, memory allocation error): Nginx, Ubuntu 20.04.1 LTS
~ » free -m
total used free shared buff/cache available
Mem: 1987 1096 112 131 778 568
Swap: 0 0 0
What I do is run a query that exports an Excel file containing all the data the application has built up until now, so it needs a lot of memory and processing time. As of yesterday there were about 9000 rows.
What I have tried (on all three servers):
increased the memory-related configuration in nginx/apache
tried overriding the settings in .htaccess / a .php file / index.php to raise the memory limit
tuned MySQL (Oracle MySQL / MariaDB)
Here is my server (B), one of the ones that doesn't work:
(my configuration)
➜ ~ locate php.ini
/etc/php/7.4/apache2/php.ini
/etc/php/7.4/cli/php.ini
/etc/php/7.4/fpm/php.ini
/usr/lib/php/7.4/php.ini-development
/usr/lib/php/7.4/php.ini-production
/usr/lib/php/7.4/php.ini-production.cli
/usr/lib/php/8.1/php.ini-development
/usr/lib/php/8.1/php.ini-production
/usr/lib/php/8.1/php.ini-production.cli
➜ ~ php -i | grep php.ini
Configuration File (php.ini) Path => /etc/php/7.4/cli
Loaded Configuration File => /etc/php/7.4/cli/php.ini
Then I modified the PHP memory settings:
; [ 1 ]
; /etc/php/7.4/cli/php.ini
max_execution_time = 3600
max_input_time = -1
memory_limit = 512M   ; I have also tried 2048M, -1, and 99999999999M
Then I restarted PHP and nginx to make sure the change was picked up:
➜ ~ sudo service nginx restart && sudo service php7.4-fpm restart
➜ ~ nginx -s reload
No errors on restart, but the problem remains: the allocation failure is stuck at 16384 bytes.
So I tried overriding the settings in index.php, both laravel/index.php and laravel/public/index.php.
I pasted this code into the .php file:
<?php
set_time_limit(0);
ini_set('memory_limit', '-1');
ini_set('max_execution_time', 360);
Still the same problem.
What I noticed while debugging: when I set the memory limit to only 512M, the error asks for more memory (Allowed memory size of 268435456 bytes exhausted (tried to allocate 20480 bytes)), but when I increase it to 2048M the error is slightly different (Allowed memory size of 268435456 bytes exhausted (tried to allocate 16384 bytes)).
So roughly the messages say:
first: please allow 20480 more bytes
second: please allow 16384 more bytes.
Can anyone explain why it is stuck at 16384 bytes, no matter how high I raise the limit? Note that the exhausted limit is 268435456 bytes (256M) in both cases, so the value I set never seems to take effect.
On my first server with Apache it does work, with the same configuration plus the override trick, except that instead of index.php Apache uses .htaccess.
As I understand it, you edited /etc/php/7.4/cli/php.ini. That is only the config file for the PHP CLI.
Nginx is using /etc/php/7.4/fpm/php.ini.
You need to change your memory limit there.
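A minimal sketch of the change, reusing the paths already shown in the question (the value itself is just an example):

; /etc/php/7.4/fpm/php.ini  -- the file PHP-FPM (which serves nginx) actually loads
memory_limit = 2048M   ; example value; -1 removes the limit entirely

Then restart FPM as in the question (sudo service php7.4-fpm restart) so the new limit takes effect.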
I'm managing a server on AWS, a t2.micro instance (1 GiB memory) with Debian 9.
Main services installed are:
Nginx (active)
MySQL (active)
Supervisor (stopped)
Redis (active)
These services are there for the 10 Laravel (PHP) projects enabled on the server.
The problem is that free memory is always between 60 MB and 75 MB, and I can't even start the supervisor service or install new project dependencies via composer without crashing everything (including the SSH session):
$ free -m
total used free shared buff/cache available
Mem: 994 477 71 140 444 233
Swap: 0 0 0
The processes consuming memory are:
$ ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n
...
10.9492 MB php-fpm:
104.473 MB php-fpm:
120.109 MB php-fpm:
144.262 MB php-fpm:
380.344 MB /usr/sbin/mysqld
I actually have only 2 MySQL databases (not large). Why is MySQL consuming 380 MB? Is there a way to optimise it?
And what about PHP-FPM: is it necessary to run 4 different processes at ~100 MB each? How can I reduce this?
Default MySQL settings are optimized for general situations. If it consumes 380 MB (a small amount of memory these days), that is probably normal. Still, there are a few things you could do with MySQL:
use MyISAM instead of InnoDB (you could turn off the InnoDB engine entirely; refer to the MySQL docs)
change some memory cache parameters (see http://www.tocker.ca/2014/03/10/configuring-mysql-to-use-minimal-memory.html and the MySQL documentation), but in that case you might see performance degradation on your MySQL server; a sketch follows below
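For example, a sketch of low-memory settings along those lines; the variable names are standard MySQL options, but the values are illustrative only and not tuned for any particular workload:

[mysqld]
innodb_buffer_pool_size = 64M   # main InnoDB cache; usually the single largest consumer
key_buffer_size         = 8M    # MyISAM index cache
max_connections         = 30    # each open connection reserves per-thread buffers
performance_schema      = OFF   # the performance schema alone can use tens of MB
tmp_table_size          = 16M
max_heap_table_size     = 16M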
Best of all would be to use cheaper hosting, because AWS is overpriced; you can buy a more powerful server for the same money.
I have a PHP daemon script that downloads remote images and stores them locally as temporary files before uploading them to object storage.
PHP internal memory usage remains stable but the memory usage reported by Docker/Kubernetes keeps increasing.
I'm not sure if this is related to PHP, Docker or expected Linux behavior.
Example to reproduce the issue:
Docker image: php:7.2.2-apache
<?php
// Create, close, and immediately delete 100,000 temporary files.
for ($i = 0; $i < 100000; $i++) {
    $fp = fopen('/tmp/' . $i, 'w+');
    fclose($fp);
    unlink('/tmp/' . $i);
    unset($fp);
}
Calling free -m inside the container before executing the above script:
total used free shared buff/cache available
Mem: 3929 2276 139 38 1513 1311
Swap: 1023 167 856
And after executing the script:
total used free shared buff/cache available
Mem: 3929 2277 155 38 1496 1310
Swap: 1023 167 856
Apparently the memory is released, but calling docker stats php-apache from the host indicates otherwise:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ccc19719078f php-apache 0.00% 222.1MiB / 3.837GiB 5.65% 1.21kB / 0B 1.02MB / 4.1kB 7
The initial memory usage reported by docker stats php-apache was 16.04MiB.
What is the explanation? How do I free the memory?
Having this container running in a Kubernetes cluster with resource limits causes the pod to fail and restart repeatedly.
Yes, a similar issue has been reported here.
Here's the answer from coolljt0725, one of the contributors, explaining why the RES column in top output shows something different than docker stats (I'm just going to quote him as is):
If I understand correctly, the memory usage in docker stats is exactly read from containers's memory cgroup, you can see the value is the same with 490270720 which you read from cat /sys/fs/cgroup/memory/docker/665e99f8b760c0300f10d3d9b35b1a5e5fdcf1b7e4a0e27c1b6ff100981d9a69/memory.usage_in_bytes, and the limit is also the memory cgroup limit which is set by -m when you create container. The statistics of RES and memory cgroup are different, the RES does not take caches into account, but the memory cgroup does, that's why MEM USAGE in docker stats is much more than RES in top
What a user suggested here might actually help you to see the real memory consumption:
Try setting the docker run --memory param, then check your
/sys/fs/cgroup/memory/docker/<container_id>/memory.usage_in_bytes
It should be right.
--memory or -m is described here:
-m, --memory="" - Memory limit (format: <number>[<unit>]). Number is a positive integer. Unit can be one of b, k, m, or g. Minimum is 4M.
And now, how to avoid the unnecessary memory consumption. Just as you posted, unlinking a file in PHP does not necessarily drop the memory cache immediately. Instead, running the Docker container in privileged mode (with the --privileged flag), it is possible to call echo 3 > /proc/sys/vm/drop_caches or sync && sysctl -w vm.drop_caches=3 periodically to clear the page cache.
And as a bonus, using fopen('php://temp', 'w+') and keeping the temporary file in memory avoids the entire issue.
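A minimal sketch of that approach, with a placeholder URL and a hypothetical upload function standing in for the real object-storage call:

<?php
// Copy a remote image into a php://temp stream instead of a file under /tmp,
// so nothing is written through the page cache that later has to be dropped.
$src = fopen('https://example.com/image.jpg', 'rb');   // placeholder URL
$tmp = fopen('php://temp', 'w+');                      // stays in memory, spills to a temp file only past ~2 MB
stream_copy_to_stream($src, $tmp);
fclose($src);

rewind($tmp);
uploadToObjectStorage('bucket/image.jpg', stream_get_contents($tmp));   // hypothetical SDK call
fclose($tmp);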
The issue referenced by Alex explains the memory usage difference between free -m inside the container and docker stats from the host: buffer/cache is included in the latter.
Unlinking a file in PHP does not necessarily drop the memory cache immediately.
Instead, running the Docker container in privileged mode, I was able to call echo 3 > /proc/sys/vm/drop_caches periodically to clear the memory page cache.
On a non-production system I export data from a Magento shop using a PHP script on the CLI. Even if I use these settings
php -i | grep memory_limit
memory_limit => -1 => -1
or (for testing, if "-1" is a problem)
php -i | grep memory_limit
memory_limit => 9000M => 9000M
in my /etc/php/7.0/cli/php.ini, I get the following error:
Fatal error: Allowed memory size of 6290000000 bytes exhausted (tried to allocate 232422 bytes)
System memory, from the top command:
KiB Mem : 20561176 total, 8667804 free, 7096968 used, 4796404 buff/cache
KiB Swap: 4190204 total, 4067004 free, 123200 used. 13050308 avail Mem
How can I increase the memory limit to be truly unlimited?
Are there other settings I don't know about?
There was a hard-coded
ini_set('memory_limit', '6000M');
in the Magento code. Sorry for my question.
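For anyone hitting the same thing, a quick way to find such overrides is a recursive grep (the directories listed are just the usual Magento code locations; adjust as needed):

grep -Rn "ini_set('memory_limit'" app/ lib/ vendor/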
Here is a report from mysqltuner; can you help me? How can I disable InnoDB on the latest stable Debian MySQL version, and why are more than 30% of connections aborted?
[!!] InnoDB is enabled but isn't being used
[OK] Total fragmented tables: 0
-------- Performance Metrics -------------------------------------------------
[--] Up for: 13h 49m 59s (18K q [0.368 qps], 2K conn, TX: 5M, RX: 1M)
[--] Reads / Writes: 63% / 37%
[--] Total buffers: 176.0M global + 2.7M per thread (25 max threads)
[OK] Maximum possible memory usage: 243.2M (23% of installed RAM)
[!!] Slow queries: 7% (1K/18K)
[OK] Highest usage of available connections: 20% (5/25)
[OK] Key buffer size / total MyISAM indexes: 8.0M/185.0K
[OK] Key buffer hit rate: 96.0% (27K cached / 1K reads)
[OK] Query cache efficiency: 41.0% (3K cached / 8K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 291 sorts)
[OK] Temporary tables created on disk: 25% (204 on disk / 794 total)
[OK] Thread cache hit rate: 99% (5 created / 2K connections)
[!!] Table cache hit rate: 2% (64 open / 2K opened)
[OK] Open file limit used: 9% (94/1K)
[OK] Table locks acquired immediately: 99% (9K immediate / 9K locks)
[!!] Connections aborted: 32%
-------- Recommendations -----------------------------------------------------
General recommendations:
Add skip-innodb to MySQL configuration to disable InnoDB
MySQL started within last 24 hours - recommendations may be inaccurate
Increase table_open_cache gradually to avoid file descriptor limits
Read this before increasing table_open_cache over 64: http://bit.ly/1mi7c4C
Your applications are not closing MySQL connections properly
Variables to adjust:
table_open_cache (> 64)
InnoDB is needed: in the latest MySQL version (5.6) it is the default engine.
Aborted connections > 0 means that connections are not being closed correctly.
https://dev.mysql.com/doc/refman/5.6/en/common-errors.html
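A minimal sketch of what closing connections properly looks like in PHP with mysqli (host, credentials, and the query are placeholders):

<?php
// Placeholder host, credentials, and query; the point is the explicit free()/close()
// once the work is done, so MySQL does not count the disconnect as an aborted connection.
$db = new mysqli('127.0.0.1', 'app_user', 'secret', 'appdb');
if ($db->connect_errno) {
    die('Connect failed: ' . $db->connect_error);
}

$result = $db->query('SELECT COUNT(*) AS n FROM some_table');
$row = $result->fetch_assoc();
echo $row['n'], PHP_EOL;

$result->free();
$db->close();   // clean disconnect instead of letting the script be killed or time out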