Difference between PHP SQL Server Driver and SQLCMD when running queries

Why is it that the SQL Server PHP Driver has problems with long-running queries?
Every time I have a query that takes a while to run, I get the following errors from sqlsrv_errors(), in this order: Shared Memory failure, Communication Link failure, Timeout failure.
But if I try the same query with SQLCMD.exe, it comes back fine. Is there somewhere in the PHP SQL Server Driver where a no-timeout can be set?
What's the difference between running queries via SQLCMD and via the PHP driver?
Thanks for any help.
Typical usage of the PHP driver to run a query:
function already_exists(){
    $model_name = trim($_GET['name']);
    include('../includes/db-connect.php'); // defines $serverName and $monitor_name
    $connectionInfo = array('Database' => $monitor_name);
    $conn = sqlsrv_connect($serverName, $connectionInfo);
    if ($conn === false){
        return false;
    }
    // Parameterized query: avoids the SQL injection risk of concatenating $_GET['name']
    $tsql = "SELECT model_name FROM slr WHERE model_name = ?";
    $queryResult = sqlsrv_query($conn, $tsql, array($model_name));
    $exists = ($queryResult !== false) && (sqlsrv_has_rows($queryResult) === true);
    // Close before returning; the original called sqlsrv_close() after the
    // return statements, so it was never reached
    sqlsrv_close($conn);
    return $exists;
}

SQLCMD has no query execution timeout by default. PHP does. I assume you're using mssql_query? If so, the default timeout for queries through this API is 60 seconds. You can override it by modifying the configuration property mssql.timeout.
See more on the configuration of the MSSQL driver in the PHP manual.
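For example, a minimal sketch of overriding it at runtime (this assumes the legacy mssql extension; the server name, credentials and $long_running_sql are placeholders):
// mssql.timeout is changeable at runtime; raise the 60-second default to 10 minutes
ini_set('mssql.timeout', 600);
$link = mssql_connect('server', 'user', 'pass');
$result = mssql_query($long_running_sql, $link); // $long_running_sql: your slow query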
If you're not using mssql_query, can you give more details on exactly how you're querying SQL Server?
Edit [based on comment]
Are you using sqlsrv_query, then? Looking at the documentation, this should wait indefinitely by default; however, you can override it. How long does it wait before it seems to time out? You might want to time it and see if it's consistent. If not, can you provide a code snippet (edit your question) to show how you're using the driver?
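For instance, sqlsrv_query() accepts a QueryTimeout option (in seconds) in its options array; a minimal sketch, with $conn and $tsql assumed from your own code:
// Give the query up to 10 minutes before the driver aborts it
$options = array('QueryTimeout' => 600);
$stmt = sqlsrv_query($conn, $tsql, array(), $options);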
If MSDTC is getting involved (and I don't know how you can ascertain this), then there's a 60-second timeout on that by default. This is configured in the Component Services administration tool and lives in a different place depending on the version of Windows.

SQL Server 2005 limits the maximum number of TDS packets to 65,536 per connection (a limit that was removed in SQL Server 2008). As the default PacketSize for the SQL Server Native Client (ODBC layer) is 4K, the PHP driver has a de facto transfer limit of 256MB per connection. When attempting to transfer more than 65,536 packets, the connection is reset at the TDS protocol level. Therefore, you should make sure that the BULK INSERT is not going to push through more than 256MB of data; otherwise the only alternative is to migrate your application to SQL Server 2008.
From the MSDN Forums:
http://social.msdn.microsoft.com/Forums/en-US/sqldriverforphp/thread/4a8d822f-83b5-4eac-a38c-6c963b386343

PHP itself has several different timeout settings that you can control via php.ini. The one that often causes problems like you're seeing is max_execution_time (see also set_time_limit()). If these limits are exceeded, php will simply kill the process without regard for ongoing activities (like a running db query).
There is also a setting, memory_limit, that does as its name suggests. If the memory limit is exceeded, php just kills the process without warning.
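A minimal sketch of loosening both limits at runtime (the values here are illustrative, not recommendations):
set_time_limit(0);               // no script execution time limit
ini_set('memory_limit', '512M'); // raise the memory ceiling for this script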
Good luck.

Related

PHP mysql persistent connection not reused ( opens more than one connection per fpm-worker )

I'm facing a really weird behaviour while testing persistent connections from PHP to MySQL. I have a small script that looks like this:
<?php
$db = new mysqli('p:db-host','user','pass','schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
My setup is :
OS: CentOS 7.3
PHP/7.1.18
php-fpm
nginx/1.10.2
MySQL-5.6.30
I tried to do some requests with ab:
$ ab -c 100 -n 500 http://mysite/my_test_script.php
PHP-FPM was configured to have 150 workers ready, and I saw what I was expecting: 150 established connections to MySQL, which stayed open after ab finished. I launched ab once again, and the behaviour was still the same: 150 connections, no new connections were opened. All fine. Then I created a script which made the exact same requests, same IP, same HTTP headers, but used curl to make the request, and BOOM, I had 300 connections on MySQL instead of 150. I launched the script again, and I still had 300 connections. Subsequent runs of the same script didn't increase the number of connections. Has anyone ever faced anything like this? Does anyone know what could make PHP open more connections than needed? Am I missing something obvious?
If it's not clear what I'm asking, please comment below and I will try to explain my problem better.
P.S. I tried this with PDO too; same behaviour.
EDIT: My tests were not accurate
After further testing I noticed that my first tests were not accurate. I was in a multi-tenant environment, and connections to different schemas were initialized when I launched ab. In my case the PHP documentation was a bit misleading; it says:
PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link. An 'identical' connection is a connection that was opened to the same host, with the same username and the same password (where applicable).
http://php.net/manual/en/features.persistent-connections.php
Maybe it's obvious to everyone else, I don't know; it was not to me. Passing the 4th parameter to mysqli made PHP consider the connections not identical. Once I changed my code to something like this:
<?php
$db = new mysqli('p:db-host','user','pass');
$db->select_db('schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
The application started to behave as I expected: one connection per worker.

MySQL CSV Import Issue

I'm trying to upload a CSV into a MySQL database using phpMyAdmin.
When I try with a shortened version of the database, the process works fine, but when I try with the full database, I get the error:
#2006 - MySQL server has gone away
The section of my CSV that is working is:
trans_id,price_paid,date,postcode,property_type,poperty_type_2,hold,add_num,add_flat,add_road,add_area,add_city,add_borough,add_county,add_rand
{33C588EE-BB09-4F6F-BA8C-000312C72B3B},159950,23/05/2014 00:00,SL6 9LX,F,N,L,2,,THE SHAW,COOKHAM,MAIDENHEAD,WINDSOR AND MAIDENHEAD,WINDSOR AND MAIDENHEAD,A
{2C650B8C-57C0-421C-A4A9-00037BDFDCFB},158000,30/05/2014 00:00,NN14 1RJ,T,N,F,4,,MIDLAND COTTAGES,RUSHTON,KETTERING,KETTERING,NORTHAMPTONSHIRE,A
{74FA45D0-CB64-40E1-94C4-00055AEBF72C},470000,30/05/2014 00:00,KT20 5SF,D,N,F,11,,CHAPEL ROAD,,TADWORTH,REIGATE AND BANSTEAD,SURREY,A
{054AB14B-0EED-48FD-B3CD-0005B154A5C3},135000,23/05/2014 00:00,NR27 9AZ,F,N,L,48,,ALBANY COURT,,CROMER,NORTH NORFOLK,NORFOLK,A
{86896E40-68BA-4BA2-8468-0006258B9C41},124995,09/05/2014 00:00,L24 9NA,S,Y,L,131,,ADDENBROOKE DRIVE,SPEKE,LIVERPOOL,LIVERPOOL,MERSEYSIDE,A
{A948BD6F-DD91-4DE9-82D1-0008226FC360},95000,13/06/2014 00:00,HU6 7XE,S,N,F,51,,DOWNFIELD AVENUE,,HULL,CITY OF KINGSTON UPON HULL,CITY OF KINGSTON UPON HULL,A
{7191F69F-7648-4603-9CE7-000882808E16},174000,19/05/2014 00:00,DT5 1HX,T,N,F,2,,LONG ACRE,,PORTLAND,WEYMOUTH AND PORTLAND,DORSET,A
{525BE511-1351-475F-9765-0009645D0B60},328000,11/06/2014 00:00,TW18 2EP,T,N,F,1,,EDGELL ROAD,,STAINES-UPON-THAMES,SPELTHORNE,SURREY,A
I've tried:
increasing max_packet to 64M in /etc/my.cnf, and wait_timeout to 1000, but no luck.
I've also made the same change to the packet size limit in php.ini, but no luck.
Any help would be appreciated
Thanks,
Mo
Check out the documentation on MySQL error #2006. I've included some of the more likely possibilities below:
You have encountered a timeout on the server side and the automatic reconnection in the client is disabled (the reconnect flag in the MYSQL structure is equal to 0).
You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section B.5.2.10, “Packet Too Large”. This likely isn't the issue, as you've already done the suggested solution, but see the sketch after this list for checking the value actually in force.
You also get a lost connection if you are sending a packet 16MB or larger if your client is older than 4.0.8 and your server is 4.0.8 and above, or the other way around.
There's also an error specific to Windows applications, so if you're using Windows, check that out:
You are using a Windows client and the server had dropped the connection (probably because wait_timeout expired) before the command was issued. The problem on Windows is that in some cases MySQL does not get an error from the OS when writing to the TCP/IP connection to the server, but instead gets the error when trying to read the answer from the connection. Prior to MySQL 5.0.19, even if the reconnect flag in the MYSQL structure is equal to 1, MySQL does not automatically reconnect and re-issue the query as it doesn't know if the server did get the original query or not. The solution to this is to either do a mysql_ping() on the connection if there has been a long time since the last query (this is what Connector/ODBC does) or set wait_timeout on the mysqld server so high that it in practice never times out.
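If you want to confirm which packet limit the server is actually enforcing, here is a quick mysqli sketch (host and credentials are placeholders; SET GLOBAL requires the SUPER privilege and only lasts until the next server restart):
<?php
$db = new mysqli('localhost', 'user', 'pass');
// Show the limit currently in force
print_r($db->query("SHOW VARIABLES LIKE 'max_allowed_packet'")->fetch_assoc());
// Raise it to 64MB for all new connections
$db->query("SET GLOBAL max_allowed_packet = 64 * 1024 * 1024");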

postgresql pdo very slow connect

We are facing a performance issue with our web server. We are using an Apache server (2.4.4) with PHP 5.4.14 (it's a UniServer package) and a PostgreSQL 9.2 database. It's on a Windows system (can be XP, 7 or Server…).
The problem is that responses from the web server are too slow; we have done some profiling and found that the database connection takes around 20 ms (milliseconds).
We are using PDO like this:
$this->mConnexion = new \PDO("pgsql:host=127.0.0.1;dbname=", $pUsername, $pPassword, array(\PDO::ATTR_PERSISTENT => false));
We timed the connection like this:
echo "Connecting to db <br>";$time_start = microtime();
$this->mConnexion = new \PDO(…
$time_end = microtime();$time = $time_end - $time_start;
echo "Connecting to db done in $time sec<br>";
We made a test with ATTR_PERSISTENT set to true, and the connection was much faster: the code reports a connection time of 2e-5 seconds (whereas it's 0.020 s with persistence off).
Is 20 ms a normal value (meaning we have to move to persistent connections)?
We also made a test with MySQL; connection time for a non-persistent connection is around 2 ms.
We have these options set in the PostgreSQL configuration file:
listen_addresses = '*'
port = 5432
max_connections = 100
SSL = off
shared_buffers = 32MB
EDIT
We do not use persistent connections (yet) because there are some drawbacks: if the script fails, the connection can be left in a bad state (so we will have to handle those cases, and it's what we will have to do…). I would like more points of view on this database connection time before switching straight to persistent connections.
To answer Daniel Vérité's question: SSL is off (I had already checked that option during my earlier research on the subject).
@Daniel: I tested on an Intel Core 2 Extreme CPU X9100 @ 3.06GHz with 4GB RAM.
Try using a unix domain socket by leaving the host empty. It's a little bit faster.
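A minimal sketch (the database name and credentials are placeholders; this applies to unix-like systems, where omitting host= makes the client use the local socket):
// No host= in the DSN, so the connection goes over the local unix domain socket
$db = new \PDO('pgsql:dbname=mydb', 'user', 'pass');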

How to manage PHP odbc_exec timeout?

I've got a command-line PHP script that connects to SQL Server 2005 using an ODBC connection. It uses odbc_exec() to execute a stored procedure which may take a while to process, and I don't want it to timeout.
I've been unable to find anything in the PHP documentation regarding how to set the timeout for odbc_exec(). Does it default to an infinite wait?
There's only so much you can do here. PHP's default script execution time limit is 30 seconds. There are numerous ways to change this, including the set_time_limit() function (found in the manual), extending the execution time using ini_set(), or using odbc_setoption().
//extend execution time
$num_minutes = 5;
ini_set('max_execution_time', (60*$num_minutes)); //set timeout to 5 minutes
You can also change this setting in your php.ini file. However, all of these only control what your Apache/PHP side can control. If the database itself has a timeout, it will still cut off your call and return an error, even if the execution times you set are higher.
According to php.net you can use odbc_setoption():
$result = odbc_prepare($conn, $sql);
odbc_setoption($result, 2, 0, 30); // 2 = statement handle, option 0 = query timeout, value 30 seconds
odbc_execute($result);
So you can increase the timeout to a larger value, depending on your needs.

How to keep a php script from timing out because of a long mysql query

I have an update query being run by a cron task that's timing out. The query takes, on average, five minutes to execute when run in Navicat.
The code looks roughly like this. It's quite simple:
// $db is a mysqli link
set_time_limit (0); // should keep the script from timing out
$query = "SLOW QUERY";
$result = $db->query($query);
if (!$result)
echo "error";
Even though the script shouldn't time out, the time spent waiting on the SQL call still seems to be subject to a timeout.
Is there an asynchronous call that can be used? Or adjust the timeout?
Is the timeout different because it's being called from the command line rather than through Apache?
Thanks
I had the same problem somewhere, and "solved" it with the following code (first two lines of my file):
set_time_limit(0);
ignore_user_abort(1);
According to the manual:
Note: The set_time_limit() function and the configuration directive max_execution_time only affect the execution time of the script itself. Any time spent on activity that happens outside the execution of the script such as system calls using system(), stream operations, database queries, etc. is not included when determining the maximum time that the script has been running.
So it's unlikely to have anything to do with PHP's time limit. What message are you getting when it times out? Perhaps there's a MySQL setting involved.
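If you suspect the MySQL side, here is a quick sketch for inspecting the server timeouts that can sever a long-running connection ($db is the mysqli link from the question):
// Low values on any of these can drop the connection mid-query
$res = $db->query("SHOW VARIABLES WHERE Variable_name IN
    ('wait_timeout', 'net_read_timeout', 'net_write_timeout')");
while ($row = $res->fetch_assoc()) {
    echo "{$row['Variable_name']} = {$row['Value']}\n";
}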
Is your PHP running in safe mode? Quote from the PHP manual for set_time_limit():
This function has no effect when PHP is running in safe mode. There is no workaround other than turning off safe mode or changing the time limit in php.ini.
I came across something similar in one of my PHP scripts; I added this inline right before executing the slow query:
$timeout_seconds = 31536000; // 1 year
// Make sure the PHP script doesn't time out
set_time_limit(0);
ignore_user_abort(1);
// Make sure the PHP socket doesn't time out
ini_set('default_socket_timeout', $timeout_seconds);
ini_set('mysqlnd.net_read_timeout', $timeout_seconds);
// Make sure the MySQL server doesn't time out
// Assuming your $link is a MySQLi object:
$link->query("SET SESSION connect_timeout=" . $timeout_seconds);
$link->query("SET SESSION delayed_insert_timeout=" . $timeout_seconds);
$link->query("SET SESSION have_statement_timeout='NO'");
$link->query("SET SESSION net_read_timeout=" . $timeout_seconds);
$link->query("SET SESSION net_write_timeout=" . $timeout_seconds);
Obviously, you'll want to set the seconds appropriately, but I didn't want my script to time out at all.
PHP ini directives: https://www.php.net/manual/en/ini.list.php
MySQL Server variables: https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html
Note: Check the documentation for more information, and verify that the variables match the PHP and MySQL versions you are using. I'm using PHP 7.3 and MySQL 5.7.
Edit: Setting the PHP timeouts wasn't enough for my script; I had to add the MySQL SESSION variables too.
Assuming you are on Linux: Debian-based systems have separate configurations for mod_php/php-cgi and php-cli. This shouldn't be too difficult to set up on a different Linux system that doesn't separate the cgi/cli configuration.
Once you have separate configs, I would adjust your PHP CLI configuration: disable safe mode and any time limits and RAM limits.
Check out some of the resource limit variables in php.ini:
max_execution_time, max_input_time, memory_limit
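To verify which php.ini the CLI is actually reading, a quick sketch (save it to a file and run it with the php command):
<?php
echo php_ini_loaded_file(), "\n";        // path of the php.ini that was loaded
var_dump(ini_get('max_execution_time')); // "0" means no limit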
You could also set a time limit for the script in PHP itself:
http://ca3.php.net/set_time_limit
