Oracle: reproducing IDLE_TIME/CONNECT_TIME timeouts - PHP

I need some help with Oracle settings to reproduce some issues we're having. To clarify, I'm not an Oracle expert at all - no experience with it.
I've managed to install Oracle XE (because it's the easiest and smallest) and got our software running on it.
Now, reports say that in some production setups the connections time out (long-running scripts/programs) and are not reconnected (without even throwing exceptions).
So that's what I'm trying to reproduce.
After some browsing around on the internet I found that running these queries should limit my connection & idle time to 1 minute:
ALTER PROFILE DEFAULT LIMIT IDLE_TIME 1;
ALTER PROFILE DEFAULT LIMIT CONNECT_TIME 1;
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
Results in:
SQL> select * from user_resource_limits where resource_name in ('IDLE_TIME','CONNECT_TIME');
RESOURCE_NAME                    LIMIT
-------------------------------- ----------------------------------------
IDLE_TIME                        1
CONNECT_TIME                     1
After that I made a simple PHP script 'test.php'; it runs a query, sleeps, and runs a new query.
require_once('our software');
$account1 = findAccount('email1@example.com');
sleep(100);
$account2 = findAccount('email2@example.com');
Isn't this supposed to time out?
Some extra details about what software I'm running:
CentOS 5
Oracle XE
PHP 5.3.5
oci8 (not PDO)
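For reference, here is a minimal raw-oci8 version of the same test that surfaces any ORA- error after the sleep (a sketch; the credentials and connect string are illustrative, not our real ones):
<?php
// Run a query, idle past the 1-minute IDLE_TIME, then query again
// and print whatever error the second execute raises.
$conn = oci_connect('hr', 'password', '//localhost/XE');

$stid = oci_parse($conn, 'SELECT 1 FROM dual');
oci_execute($stid);

sleep(100); // longer than the 1-minute IDLE_TIME profile limit

$stid = oci_parse($conn, 'SELECT 1 FROM dual');
if (!@oci_execute($stid)) {
    $e = oci_error($stid);
    // If the limit was enforced (PMON only checks periodically), this should
    // print something like ORA-02396: exceeded maximum idle time.
    print $e['message'] . "\n";
}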

Related

SQLSTATE[HY000] [2006] MySQL server has gone away (SQL: select * from `listings` where `is_active` = 1 order by `created_at` desc) [duplicate]

I'm running a server at my office to process some files and report the results to a remote MySQL server.
The file processing takes some time and the process dies halfway through with the following error:
2006, MySQL server has gone away
I've heard about the MySQL setting wait_timeout, but do I need to change that on the server at my office or on the remote MySQL server?
I have encountered this a number of times and I've normally found the answer to be a very low default setting of max_allowed_packet.
Raising it in /etc/my.cnf (under [mysqld]) to 8M or 16M usually fixes it. (The default in MySQL 5.7 is 4194304 bytes, which is 4MB.)
[mysqld]
max_allowed_packet=16M
Note: Just create the line if it does not exist
Note: This can be set on your server as it's running.
Note: On Windows you may need to save your my.ini or my.cnf file with ANSI not UTF-8 encoding.
Use set global max_allowed_packet=104857600. This sets it to 100MB.
I had the same problem, but changing max_allowed_packet in the my.ini/my.cnf file under [mysqld] did the trick.
Add a line
max_allowed_packet=500M
and restart the MySQL service once you are done.
I used the following command in the MySQL command line to restore a MySQL database whose size was more than 7GB, and it worked:
set global max_allowed_packet=268435456;
It may be easier to check if the connection exists and re-establish it if needed.
See PHP:mysqli_ping for info on that.
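If you go the ping route, a minimal sketch looks like this (the connection credentials are illustrative):
<?php
// Reconnect if the server has gone away since the last query.
$mysqli = mysqli_connect('localhost', 'user', 'pass', 'db');

// ... long-running processing here ...

if (!mysqli_ping($mysqli)) {
    // Connection was dropped (e.g. wait_timeout expired); open a fresh one.
    $mysqli = mysqli_connect('localhost', 'user', 'pass', 'db');
}
$result = mysqli_query($mysqli, 'SELECT 1');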
There are several causes for this error.
MySQL/MariaDB related:
wait_timeout - Time in seconds that the server waits for a connection to become active before closing it.
interactive_timeout - Time in seconds that the server waits for an interactive connection.
max_allowed_packet - Maximum size in bytes of a packet or a generated/intermediate string. Set as large as the largest BLOB, in multiples of 1024.
Example of my.cnf:
[mysqld]
# 8 hours
wait_timeout = 28800
# 8 hours
interactive_timeout = 28800
max_allowed_packet = 256M
Server related:
Your server is out of memory - check the available RAM with free -h
Framework related:
Check the settings of your framework. Django, for example, uses CONN_MAX_AGE (see docs)
How to debug it:
Check the values of the MySQL/MariaDB variables:
with SQL: SHOW VARIABLES LIKE '%time%';
from the command line: mysqladmin variables
Turn on verbosity for errors:
MariaDB: log_warnings = 4
MySQL: log_error_verbosity = 3
Check docs for more info about the error
Error: 2006 (CR_SERVER_GONE_ERROR)
Message: MySQL server has gone away
Generally you can retry connecting and then run the query again to solve this problem - try 3-4 times before completely giving up.
I'll assume you are using PDO. If so, catch the PDOException, increment a counter, and try again while the counter is under a threshold.
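A sketch of that retry loop (the DSN and credentials are placeholders; adjust the threshold to taste):
<?php
$attempts = 0;
while (true) {
    try {
        $rows = $pdo->query('SELECT * FROM `listings`')->fetchAll();
        break; // success, stop retrying
    } catch (PDOException $e) {
        if (++$attempts >= 4) {
            throw $e; // give up after 3-4 tries
        }
        // Re-establish the connection before the next attempt.
        $pdo = new PDO($dsn, $user, $password);
    }
}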
If you have a query that is causing a timeout you can set this variable by executing:
SET @@GLOBAL.wait_timeout=300;
SET @@LOCAL.wait_timeout=300; -- or current session only
Where 300 is the maximum number of seconds you think the query could take.
Further information on how to deal with MySQL connection issues.
EDIT: Two other settings you may want to use are net_write_timeout and net_read_timeout.
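Both can be raised at runtime like wait_timeout, for example (values in seconds, purely illustrative):
SET GLOBAL net_write_timeout = 120;
SET GLOBAL net_read_timeout = 120;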
In MAMP (non-pro version) I added
--max_allowed_packet=268435456
to ...\MAMP\bin\startMysql.sh
Credits and more details here
If you are using the XAMPP server:
Go to xampp -> mysql -> bin -> my.ini
Change the parameters below:
max_allowed_packet = 500M
innodb_log_file_size = 128M
This helped me a lot :)
This error occurs when wait_timeout expires.
Just go to the MySQL server and check its wait_timeout:
mysql> SHOW VARIABLES LIKE 'wait_timeout';
mysql> SET GLOBAL wait_timeout = 600; -- 10 minutes, or the maximum wait timeout you need
http://sggoyal.blogspot.in/2015/01/2006-mysql-server-has-gone-away.html
I was getting this same error on my DigitalOcean Ubuntu server.
I tried changing the max_allowed_packet and the wait_timeout settings but neither of them fixed it.
It turns out that my server was out of RAM. I added a 1GB swap file and that fixed my problem.
Check your memory with free -h to see if that's what's causing it.
On Windows, XAMPP users should edit xampp/mysql/bin/my.ini and change max_allowed_packet (under the [mysqld] section) to a size of your choice,
e.g.
max_allowed_packet=8M
Likewise, in php.ini (xampp/php/php.ini), change upload_max_filesize to a size of your choice,
e.g.
upload_max_filesize=8M
This gave me a headache for some time till I discovered it. Hope it helps.
It was a RAM problem for me.
I was having the same problem even on a server with 12 CPU cores and 32 GB RAM. I researched more and tried to free up RAM. Here is the command I used on Ubuntu 14.04 to free up RAM:
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
And it fixed everything. I have set it up under cron to run every hour.
crontab -e
0 * * * * bash /root/ram.sh;
And you can use this command to check how much free RAM is available:
free -h
You will get something like this:
                     total   used   free  shared  buffers  cached
Mem:                   31G    12G    18G     59M     1.9G    973M
-/+ buffers/cache:             9.9G    21G
Swap:                 8.0G   368M   7.6G
In my case it was a low value of the open_files_limit variable, which blocked mysqld's access to the data files.
I checked it with:
mysql> SHOW VARIABLES LIKE 'open%';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| open_files_limit | 1185 |
+------------------+-------+
1 row in set (0.00 sec)
After I changed the variable to a bigger value, our server was alive again:
[mysqld]
open_files_limit = 100000
This generally indicates MySQL server connectivity issues or timeouts.
It can generally be solved by changing wait_timeout and max_allowed_packet in my.cnf or similar.
I would suggest these values:
wait_timeout = 28800
max_allowed_packet = 8M
If you are using the 64-bit WAMPSERVER, search for multiple occurrences of max_allowed_packet: WAMP uses the value set under [wampmysqld64], not the value set under [mysqldump]. That was the issue for me - I was updating the wrong one. Set it to something like max_allowed_packet = 64M.
Hopefully this helps other Wampserver users out there.
There is an easier way if you are using XAMPP.
Open the XAMPP control panel and click the Config button in the MySQL section.
Now click on my.ini and it will open in the editor. Update max_allowed_packet to your required size.
Then restart the MySQL service: click Stop on the MySQL service, then click Start again. Wait for a few minutes.
Then try to run your MySQL query again. Hopefully it will work.
It's always a good idea to check the logs of the MySQL server for the reason why it went away.
It will tell you.
In MAMP 5.3 you will not find my.cnf, and adding one does not work, as max_allowed_packet is stored in variables.
One solution can be:
Go to http://localhost/phpmyadmin
Go to the SQL tab
Run SHOW VARIABLES and check the values; if one is small, re-run with bigger values
Run the following query; it sets max_allowed_packet to 256MB:
set global max_allowed_packet=268435456;
For some, you may need to increase the following values as well:
set global wait_timeout = 600;
set innodb_log_file_size = 268435456; -- note: this variable is read-only at runtime and normally has to be changed in the config file
For a Vagrant box, make sure you allocate enough memory to the box:
config.vm.provider "virtualbox" do |vb|
  vb.memory = "4096"
end
This might be a problem with your .sql file size.
If you are using XAMPP, go to the XAMPP control panel -> click MySQL config -> open my.ini.
Increase the packet size.
max_allowed_packet = 2M -> 10M
An unlikely scenario is that you have a firewall between the client and the server that forces a TCP reset into the connection.
I had that issue, and I found that our corporate F5 firewall was configured to terminate sessions that are idle for more than 5 minutes.
Once again, this is an unlikely scenario.
Uncomment the line below in your my.ini/my.cnf; this will split your large file into smaller portions. Change
# binary logging format - mixed recommended
# binlog_format=mixed
to
# binary logging format - mixed recommended
binlog_format=mixed
I found the solution to the "#2006 - MySQL server has gone away" error.
The solution is that you have to check two files:
config.inc.php
config.sample.inc.php
The path of these files on Windows is:
C:\wamp64\apps\phpmyadmin4.6.4
In these two files, the value of
$cfg['Servers'][$i]['host'] must be 'localhost'.
In my case it was:
$cfg['Servers'][$i]['host'] = '127.0.0.1';
Change it to:
$cfg['Servers'][$i]['host'] = 'localhost';
Make sure that in both
config.inc.php
config.sample.inc.php
it is 'localhost'.
And last, set:
$cfg['Servers'][$i]['AllowNoPassword'] = true;
Then restart Wampserver.
To change the phpMyAdmin user name and password:
You can directly change the user name and password of phpMyAdmin through the config.inc.php file, in these two lines:
$cfg['Servers'][$i]['user'] = 'root';
$cfg['Servers'][$i]['password'] = '';
Here you can set a new user name and password.
After the changes, save the file and restart the WAMP server.
I got the Error 2006 message in different MySQL client programs on my Ubuntu desktop. It turned out that my JDBC driver version was too old.
I had the same problem in Docker; adding the settings below in docker-compose.yml fixed it:
db:
  image: mysql:8.0
  command: --wait_timeout=800 --max_allowed_packet=256M --character-set-server=utf8 --collation-server=utf8_general_ci --default-authentication-plugin=mysql_native_password
  volumes:
    - ./docker/mysql/data:/var/lib/mysql
    - ./docker/mysql/dump:/docker-entrypoint-initdb.d
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    MYSQL_DATABASE: ${MYSQL_DATABASE}
    MYSQL_USER: ${MYSQL_USER}
    MYSQL_PASSWORD: ${MYSQL_PASSWORD}
I also encountered this error. But even with an increased max_allowed_packet, or any other increased value in my.cnf, the error still persisted.
What I did was troubleshoot my database:
I checked the tables where the error occurred
Then I checked each row
Some rows were okay to fetch, and on some rows the error showed up
It seemed that there were values in those rows causing the error
But even when selecting only the primary column, the error still showed up (SELECT primary_id FROM table)
The solution I came up with was to reimport the database. The good thing is I had a backup of this database. In fact, I only dropped the problematic table and then imported my backup of that table. That solved my problem.
My takeaways from this problem:
Always have a backup of your database, either manually or through a cron job (see the example after this list)
I noticed that there were special characters in the affected rows, so when I recovered the table I immediately changed its collation from latin1_swedish_ci to utf8_general_ci
My database was working fine before my system suddenly encountered this problem. Maybe it also has something to do with our hosting provider upgrading the MySQL database. So frequent backups are a must!
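For example, a nightly dump via cron might look like this (schedule, credentials and path are illustrative):
0 2 * * * mysqldump -u backupuser -pSECRET mydatabase > /backups/mydatabase.sql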
Just in case this helps anyone:
I got this error when I opened and closed connections in a function which would be called from several parts of the application.
We had too many connections, so we thought it might be a good idea to reuse the existing connection or throw it away and make a new one, like so:
public static function getConnection($database, $host, $user, $password){
    if (!self::$instance) {
        return self::newConnection($database, $host, $user, $password);
    } elseif ($database . $host . $user != self::$connectionDetails) {
        self::$instance->query('KILL CONNECTION_ID()');
        self::$instance = null;
        return self::newConnection($database, $host, $user, $password);
    }
    return self::$instance;
}
Well, it turns out we had been a little too thorough with the killing, so processes doing important things on the old connection could never finish their business.
So we dropped these lines:
self::$instance->query('KILL CONNECTION_ID()');
self::$instance = null;
and, as the hardware and setup of the machine allowed it, we increased the number of allowed connections on the server by adding
max_connections = 500
to our configuration file. This fixed our problem for now, and we learned something about killing MySQL connections.
For users using XAMPP, there are 2 max_allowed_packet parameters in C:\xampp\mysql\bin\my.ini.
This error happens basically for two reasons.
You have too little RAM.
The database connection was closed when you tried to connect.
You can try this code below.
# Simplified helper: execute an SQL string and fetch the resulting rows,
# reconnecting once if the connection has gone away.
def get(self, sql_string, sql_vars=(), debug_sql=0):
    try:
        self.cursor.execute(sql_string, sql_vars)
        return self.cursor.fetchall()
    except (AttributeError, MySQLdb.OperationalError):
        self.__init__()  # re-open the connection and cursor
        self.cursor.execute(sql_string, sql_vars)
        return self.cursor.fetchall()
It mitigates the error whatever the reason behind it, especially for the second reason.
If it's caused by low RAM, you either have to make the database connections more efficient in the code, tune the database configuration, or simply add more RAM.
For me, the fix was repairing an InnoDB table's corrupted index tree. I located the offending table with this command:
mysqlcheck -uroot --databases databaseName
Result:
mysqlcheck: Got error: 2013: Lost connection to MySQL server during query when executing 'CHECK TABLE ...
After that, only the mysqld log /var/log/mysqld.log showed which table was causing trouble:
FIL_PAGE_PREV links 2021-08-25T14:05:22.182328Z 2 [ERROR] InnoDB: Corruption of an index tree: table `database`.`tableName` index `PRIMARY`, father ptr page no 1592, child page no 1234'
The mysqlcheck command did not fix it, but it helped to unveil the problem.
Ultimately I fixed it with a regular command from the mysql CLI:
OPTIMIZE TABLE theCorruptedTableNameMentionedAboveInTheMysqld.log

Same MariaDB inserts are much slower in PHP 7.4 than in PHP 7.1 with Fat-Free

I'm trying to migrate a legacy PHP/Fat-Free project from PHP 7.1 to 7.4, and I found that some queries take much longer (like 10x more time) to finish - particularly some inserts. I'm running the same project on my localhost with XAMPP (7.1.32 and 7.4.6), always using the exact same MariaDB server (v10.4.8) with the exact same database.
The code is something like this:
foreach ($ridiculouslyLongArray as $row) { // I'm talking about some millions of rows
    $this->db->exec("INSERT INTO a_table (field1, field2, fieldn) VALUES ('" . $row['field1'] . "', '" . $row['field2'] . "', '" . $row['fieldn'] . "')");
    // Yes, it's open to SQL injection, I will fix that too
}
The definition of $this->db is as follows:
$this->db = new DB\SQL('mysql:host=localhost;port=3306;dbname=something', 'dbuser', 'dbpassword', array(\PDO::ATTR_ERRMODE=>\PDO::ERRMODE_EXCEPTION));
and it is a wrapper around PDO as far as I know.
I've tried to change the code to insert multiple rows per query (see the sketch below), but the queries still take much more time than in PHP 7.1.
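For reference, the multi-row variant looked roughly like this (a sketch; the chunk size of 1000 is arbitrary):
foreach (array_chunk($ridiculouslyLongArray, 1000) as $chunk) {
    $values = array();
    foreach ($chunk as $row) {
        $values[] = "('" . $row['field1'] . "', '" . $row['field2'] . "', '" . $row['fieldn'] . "')";
    }
    // One INSERT per 1000 rows instead of one INSERT per row
    $this->db->exec('INSERT INTO a_table (field1, field2, fieldn) VALUES ' . implode(', ', $values));
}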
This is my setup
->Original Project (in which the queries run fine)
PHP 7.1.32 (memory limit 2048mb)
Fat Free 3.6.4
MariaDB 10.4.8
->New Project (in which the queries run slow)
PHP 7.4.6 (memory limit 2048mb)
Fat Free 3.7.2
MariaDB 10.4.8 (same server and db that in the previous one)
Thanks for your time.
EDIT: I just noticed that the PDO drivers for MySQL differ between versions:
for PHP 7.1:
mysqlnd 5.0.12-dev - 20150407 - $Id: 38fea24f2847fa7519001be390c98ae0acafe387 $
for PHP 7.4:
mysqlnd 7.4.6
Edit 2: The query is in a transaction, and it uses the same indexes and the same DB engine, because it is the same insert over the same table in the same database on the same server. Nothing changed in the code, only the PHP version.
This wasn't explicitly mentioned in the comments, but something else that may be causing some slowness is query logging.
By default, Fat-Free will log all DB queries. If you are running a gazillion inserts, all those inserts are being logged. If it isn't already disabled, I would recommend disabling query logging in production. Wherever your bootstrap/services file creates the DB connection, I would add this after it:
$f3->set('db', new DB\SQL(/* config stuff */));
if (ENVIRONMENT === 'PRODUCTION') { // or whatever you use to signal it's production
    $f3->db->log(false);
}

MySQL is very slow with 19,500 inserts?

I have a Prestashop store online, and for the last month we have been seeing some overly long response times.
MySQL: 5.5
All tables(Engine): MyIsam
PHP: 7.0 / 5.6
I import a csv file with 20,000 lines; in the dev environment it takes 40 seconds, and in production (a far more powerful machine) 10 minutes.
We have ruled out hardware, cache, and load as causes -> [MySQL handled 5 billion requests in 45 seconds on the prod machine].
Currently the exported SQL file is 1.3 GB in prod.
We use the Prestashop Db class to do the requests.
If you have any suggestions!!
Thanks!!
Since you're using MyISAM, you may disable indexes before the insert.
See MySQL disable & enable keys
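With PrestaShop's Db wrapper that could look roughly like this (a sketch, assuming Db::getInstance()->execute() as exposed by recent PrestaShop versions; the table name is illustrative, and DISABLE KEYS only defers MyISAM's non-unique indexes):
$db = Db::getInstance();
$db->execute('ALTER TABLE ps_import_target DISABLE KEYS');
// ... run the 20,000 INSERTs here ...
$db->execute('ALTER TABLE ps_import_target ENABLE KEYS'); // rebuilds the indexes in one pass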

unixODBC PHP Update statement error

I'm using Ubuntu + PHP + unixODBC + mdbtools for working with a .mdb file.
Everything (connection + select) works fine, except Insert and Update statements.
My code is something like this:
$mdbConnection = new \PDO("odbc:mdbdriver", $user, $password, array('dbname' => $FileName));
$SelectResult = $mdbConnection->query("Select * from Zone");
$UpdateResult = $mdbConnection->query("Update Zone Set ShahrCode = 99");
$SelectResult returns the correct result, but the second one throws an error that causes Apache to segfault.
I tested it with the isql command. Running a Select statement is successful but an Update is not.
#isql mdbdriver
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
SQL>Update Zone Set ShahrCode = 99
Error at Line : syntax error near Update
syntax error near Update
Got no result for 'Update Zone Set ShahrCode = 99' command
[08001][unixODBC]Couldn't parse SQL
[ISQL]ERROR: Could not SQLExecute
Or
SQL> Update [Zone] Set ShahrCode = 99
Error at Line : syntax error near Update
syntax error near Update
Got no result for 'Update [Zone] Set ShahrCode = 99' command
[ISQL]ERROR: Could not SQLExecute
How should I fix this error?
Thanks all
Personally, I wouldn't spend a lot of time trying to get PHP + mdb_tools + unixODBC to work together reliably. I have tried on several occasions and have been quite unsuccessful despite my best efforts.
My recommendations would be:
1. If maintaining your data in an Access .mdb file is a firm requirement then one must assume that Windows machines are involved in the project. In that case I would suggest that you run your PHP code on a Windows machine and use COM_DOTNET to manipulate the Access database (via Windows ODBC using ADODB.Connection and related objects).
2. If running your PHP code on Linux is a firm requirement then there is a strong case for moving your data from the Access .mdb into some other database that works better with PHP. (MySQL would be one of the more common choices.)
If both 1. and 2. are firm requirements then perhaps the best option might be to move the .mdb file to a Windows machine and use ODBTP to manipulate the .mdb file from PHP code running on the Linux machine.
At last I found a solution:
mdbtools cannot write to mdb files yet.
MDB Tools currently has read-only support for Access 97 (Jet 3) and
Access 2000/2002 (Jet 4) formats. Write support is currently being
worked on and the first cut is expected to be included in the 0.6
release.
Using a simple compiled Java application is our solution:
Create a simple Java project with the Jackcess library.
Enable CLI params for the Java application and do what you want with the mdb file.
You can even pass the mdb file path via CLI params.
Compile the Java project.
In PHP you can then use:
exec('cd path/to/javaproject; java -cp . YourJavaProject "mdbfilepath" "insert|update|or select"', $output);

mod_perl and oracle vs php and oracle performance

I have a large Perl app that I need to make faster; on the basis that it spends most of its running time talking to the DB, I wanted to know how many well-written SQL statements I could run and still meet the performance targets. To do this I wrote a very simple handler that does a SELECT and an INSERT; when I benchmarked it with 300 concurrent requests (10,000 in total) the results were quite poor (1900ms average).
The performance target we've been given by the client is based on another app they use written in PHP, so I wrote a quick PHP script that does functionally the same thing as my simple mod_perl test handler, and it gave a 400ms average!
The PHP code is:
$cs = "//oracle.ourdomain.com:1521/XE";
$oc = oci_pconnect("hr","password",$cs);
if(!$oc) { print oci_error(); }
$stid = oci_parse($oc, 'SELECT id FROM zz_system_options WHERE id = 1');
oci_execute($stid);
$stmt = oci_parse($oc, "INSERT INTO zz_system_options (id,option_name) VALUES (zz_system_optionsids.nextval,'load testing')");
oci_execute($stmt);
echo "hello world";
The Perl code is:
use strict;
use Apache2::RequestRec ();
use Apache2::RequestIO ();
use Apache2::Const -compile => qw(:common);
use DBI;
our $dbh;
sub handler
{
    my $r = shift;
    my $err = '';    # declared so the handler compiles under strict
    # Connect to DB (reuse the cached handle if this child already has one)
    $dbh = DBI->connect( "DBI:Oracle:host=oracle.ourdomain.com;port=1521;sid=XE", "hr", "password" ) unless $dbh;
    my $dbi_query_object = $dbh->prepare("SELECT id FROM zz_system_options");
    $dbi_query_object->execute();
    $dbi_query_object =
        $dbh->prepare("INSERT INTO zz_system_options (id,option_name) VALUES (zz_system_optionsids.nextval,?)");
    $dbi_query_object->execute("load testing");
    # Print out some info about this...
    $r->content_type('text/plain');
    $r->print("Errors: $err\n");
    return Apache2::Const::OK;
}
The mod_perl setup has a startup.pl script, called with a PerlRequire in the Apache config, that loads all the 'use'd modules. If all is working correctly, and I've no reason to think it isn't, then each request should only run the lines in 'sub handler' - meaning the Perl and PHP should be doing pretty much the same thing.
Server details: the hardware node is a Quad Core Xeon L5630 @ 2.13GHz with 24GB RAM; the OS for the Apache virtual machine is Gentoo, and the OS for Oracle is CentOS 5.
Versions: both OSes updated within the last 2 weeks, Apache 2.2.22, mod_perl 2.0.4, DBI 1.622, DBD::Oracle 1.50, Oracle Instant Client 10.2.0.3, Oracle Database 10g Express Edition Release 10.2.0.1.0, PHP 5.3.
Apache MPM config is ServerLimit 2000, MaxClients 2000 and MaxRequestsPerChild 300
Things I checked: during the testing the only load was from the test app/Oracle; neither virtual machine hit any of its resource limits (e.g., memory); Oracle showed 1 session per Apache child at all times; the inserts had been done after each run.
So, my question is: can I make the mod_perl version faster, and if so, how?
If you changed the PHP code and the timing didn't change then clearly you're not measuring the code time, are you?
The important question is - why are you repeatedly connecting in the Perl script and not in the PHP script?
Finally, this test probably isn't going to tell you anything useful unless all your queries are simple single-table single-row selects and inserts.
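On that second point: note that the PHP script uses oci_pconnect, which keeps a persistent connection per Apache child, while oci_connect pays the connection setup cost anew. A sketch of the contrast (credentials are illustrative):
// Persistent: reused across requests within the same Apache child.
$persistent = oci_pconnect('hr', 'password', '//oracle.ourdomain.com:1521/XE');
// Non-persistent: released when the script ends.
$fresh = oci_connect('hr', 'password', '//oracle.ourdomain.com:1521/XE');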
