PHP MySQL vs MySQLi on Windows - php

I am using PHP 5.3.10 on Windows (Windows 7, 64-bit) with Apache and mod_php.
I'm in the process of deciding which library I should use, so I am testing both MySQL and MySQLi.
I created two pages for the test:
$mysqli = new mysqli("127.0.0.1", "root2", "secretsquirrel", "test");
for ($i = 0; $i < 100; $i++) {
    $result = $mysqli->query("SELECT * FROM test");
    // var_dump($result);
    echo "ok $i";
    $result->close();
}
And
$dbh = mysql_connect("127.0.0.1", "root2", "secretsquirrel", true);
mysql_select_db("test", $dbh);
for ($i = 0; $i < 100; $i++) {
    $result = @mysql_query("SELECT * FROM test", $dbh); // @ suppresses warnings
    echo "ok";
    mysql_free_result($result);
}
In both tests, I can connect without any problem and can fetch information.
However, if I do a concurrent test (5 concurrent users), MySQLi is painfully slow.
Worse, if I do a concurrent test (10 concurrent users), MySQLi crashes Apache:
Faulting application name: httpd.exe, version: 2.2.22.0, time stamp: 0x4f4a84ad
Faulting module name: php5ts.dll, version: 5.3.10.0, time stamp: 0x4f2ae5d1
Exception code: 0xc0000005
Fault offset: 0x0000c7d7
Faulting process id: 0x1250
Faulting application start time: 0x01cd037de1e2092d
Faulting application path: C:\apache2\bin\httpd.exe
Faulting module path: C:\php53\php5ts.dll
Report Id: 1fb70b72-6f71-11e1-a64d-005056c00008
With MySQL, everything works perfectly, even with 1000 concurrent users.
Question: am I doing something wrong?
This is my php.ini configuration:
[MySQLi]
mysqli.max_persistent = -1
;mysqli.allow_local_infile = On
mysqli.allow_persistent = On
mysqli.max_links = -1
mysqli.cache_size = 2000
mysqli.default_port = 3306
mysqli.default_socket =
mysqli.default_host =
mysqli.default_user =
mysqli.default_pw =
mysqli.reconnect = Off
PS: As expected, PDO is even worse. The code (maybe the test is wrong?):
$dbo = new PDO("mysql:host=127.0.0.1;dbname=test", "root2", "secretsquirrel");
for ($i = 0; $i < 100; $i++) {
    $dbo->query("SELECT * FROM test");
    echo "ok $i";
}
The result is worse than MySQLi.
Update:
I ran the same test on Linux (Red Hat) and MySQLi/PDO are more stable there
(1000 concurrent calls; with fewer there is no noticeable difference).

Module    Sum (ms)   Min (ms)   Max (ms)
MySQLi    66986      265        1762
MySQL     64521      234        1388
PDO       75426      249        1809

(lower is better)
Well, apparently there is no single answer. Under Windows, MySQLi and PDO are a big no (except during development). On Linux the three are comparable, but for a busy server (many concurrent users) MySQL is the best, MySQLi is close (about 3% slower) and PDO is a big no (over 10% slower).
However, this is not a rigorous test, so mileage may vary. Still, the results are consistent with the popular belief: MySQL > MySQLi > PDO.

The single feature in mysqli that should make all the difference is parameterized queries. The mysql interface doesn't have these, which means you will be interpolating values into queries yourself, and this in turn means you have a big potential for SQL injection vulnerabilities; it turns out that securing your query concatenation isn't as trivial as it sounds.
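For example, a minimal sketch of a parameterized query with mysqli (the connection details and the test table come from the question; the name column and the $_GET parameter are hypothetical):

$mysqli = new mysqli("127.0.0.1", "root2", "secretsquirrel", "test");
// The value travels separately from the SQL text, so it can never be
// parsed as SQL - no quoting or escaping to get wrong.
$stmt = $mysqli->prepare("SELECT * FROM test WHERE name = ?");
$stmt->bind_param("s", $_GET['name']); // "s" = bind as string
$stmt->execute();
$result = $stmt->get_result(); // requires the mysqlnd driver
while ($row = $result->fetch_assoc()) {
    // ... use $row
}
$stmt->close();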
BTW, have you considered PDO? It offers the same features mysqli does, but it can connect to any of the supported (and configured) databases, so if at any point you decide to migrate to, say, PostgreSQL, SQLite or SQL Server, you only have the SQL dialect differences to worry about, instead of porting everything to a different API.
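The same query through PDO, for comparison (a sketch under the same assumptions as above):

$dbo = new PDO("mysql:host=127.0.0.1;dbname=test", "root2", "secretsquirrel");
// Named placeholders work identically against any PDO driver.
$stmt = $dbo->prepare("SELECT * FROM test WHERE name = :name");
$stmt->execute(array(':name' => $_GET['name']));
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);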

Apache may not support everything; I've faced many problems with password encryption and decryption under Apache, but found a solution by hosting on IIS.
Try running your application in IIS. http://learn.iis.net/page.aspx/246/using-fastcgi-to-host-php-applications-on-iis/ will help you host PHP on IIS 7, and http://www.websitehosting.com/apache-vs-iis-web-server/ explains the differences between Apache and IIS with respect to PHP.

Related

Same MariaDB inserts much slower in PHP 7.4 than PHP 7.1 with Fat-Free

I'm trying to migrate a legacy PHP/Fat-Free project from PHP 7.1 to 7.4 and I found that some queries take much longer (around 10x more time) to finish, particularly some inserts. I'm running the same project on my localhost with XAMPP (7.1.32 and 7.4.6), using the exact same MariaDB server (v10.4.8) and always the exact same database.
The code is something like this:

foreach ($ridiculouslyLongArray as $row) { // I'm talking about some millions of rows
    $this->db->exec("INSERT INTO a_table (field1, field2, fieldn) VALUES ('".$row['field1']."', '".$row['field2']."', '".$row['fieldn']."')");
    // Yes, it's open to SQL injection, I will fix that too
}
The definition of $this->db is the following:

$this->db = new DB\SQL('mysql:host=localhost;port=3306;dbname=something', 'dbuser', 'dbpassword', array(\PDO::ATTR_ERRMODE => \PDO::ERRMODE_EXCEPTION));

and it is a wrapper around PDO, as far as I know.
I've tried changing the code to insert multiple rows per query, but the query still takes much more time than in PHP 7.1.
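A sketch of what "insert multiple rows per query" looks like here (the table and field names are the placeholder ones from the snippet above; DB\SQL extends PDO in F3, so prepare()/execute() are available directly):

$chunk = array_slice($ridiculouslyLongArray, 0, 1000); // e.g. 1000 rows per statement
$placeholders = rtrim(str_repeat('(?,?,?),', count($chunk)), ',');
$params = array();
foreach ($chunk as $row) {
    array_push($params, $row['field1'], $row['field2'], $row['fieldn']);
}
// One statement inserts the whole batch, and bound parameters remove
// the string concatenation (and the SQL injection hole) as a side effect.
$stmt = $this->db->prepare("INSERT INTO a_table (field1, field2, fieldn) VALUES $placeholders");
$stmt->execute($params);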
This is my setup
->Original Project (in which the queries run fine)
PHP 7.1.32 (memory limit 2048mb)
Fat Free 3.6.4
MariaDB 10.4.8
->New Project (in which the queries run slow)
PHP 7.4.6 (memory limit 2048mb)
Fat Free 3.7.2
MariaDB 10.4.8 (same server and DB as in the previous one)
Thanks for your time.
EDIT: I just noticed that the PDO drivers for MySQL are different between versions:
for PHP 7.1:
mysqlnd 5.0.12-dev - 20150407 - $Id: 38fea24f2847fa7519001be390c98ae0acafe387 $
for PHP 7.4:
mysqlnd 7.4.6
Edit 2: The query is in a transaction, and it is using the same indexes and the same DB engine, because it is the same insert over the same table in the same database on the same server. Nothing changed in the code, only the PHP version.
This wasn't explicitly mentioned in the comments, but something else that may be causing some slowness is query logging.
By default, Fat-Free will log all DB queries. If you are running a gazillion inserts, all those inserts are being logged. If it isn't already disabled, I would recommend turning query logging off in production. Wherever your bootstrap/services file creates the DB connection, I would add this right after it:
$f3->set('db', new DB\SQL(/* config stuff */));
if (ENVIRONMENT === 'PRODUCTION') { // or whatever you use to signal it's production
    $f3->db->log(false);
}

postgresql pdo very slow connect

We are facing a performance issue with our web server. We are using an Apache server (2.4.4) with PHP 5.4.14 (it's a UniServer package) and a PostgreSQL 9.2 database, on a Windows system (it can be XP, 7 or Server).
The problem is that responses from the web server are too slow; we have done some profiling and found that the database connection alone takes around 20 ms (milliseconds).
We are using PDO like this:

$this->mConnexion = new \PDO("pgsql:host=127.0.0.1;dbname=", $pUsername, $pPassword, array(\PDO::ATTR_PERSISTENT => false));
We have made some time profiling like this:

echo "Connecting to db <br>";
$time_start = microtime(true); // microtime(true) returns a float; bare microtime() returns a string and cannot be subtracted reliably
$this->mConnexion = new \PDO(…
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "Connecting to db done in $time sec<br>";
We made a test with ATTR_PERSISTENT set to true and the connection came up much faster: the code reports a connection time of 2e-5 seconds (whereas it's 0.020 s with persistent set to false).
Is 20 ms a normal value (and do we have to move to persistent connections)?
We also made a test with MySQL; there, the connection time for a non-persistent connection is around 2 ms.
We have these options set in the PostgreSQL configuration file:
listen_addresses = '*'
port = 5432
max_connections = 100
SSL = off
shared_buffers = 32MB
EDIT
We do not use persistent connections (yet) because there are some drawbacks: if the script fails, the connection can be left in a bad state (so we will have to manage those cases, and that is probably what we will end up doing). I would like more points of view on this database connection time before switching straight to persistent connections.
To answer Daniel Vérité's question: SSL is off (I had already checked this option during my earlier research on the subject).
@Daniel: I have tested on an Intel Core 2 Extreme CPU X9100 @ 3.06GHz with 4 GB RAM.
Try using a Unix domain socket by leaving the host empty. It's a little bit faster.
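That is, something along these lines (a sketch; "mydb" is a placeholder database name, and the credential variables come from the question). With no host= in the DSN, libpq connects through the local Unix-domain socket and skips the TCP handshake; note this applies on the Linux/Unix side, not on Windows, which has no Unix sockets:

$this->mConnexion = new \PDO("pgsql:dbname=mydb", $pUsername, $pPassword,
    array(\PDO::ATTR_PERSISTENT => false)); // same options as before, only the host is omitted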

Oracle reproducing timeouts IDLE_TIME/CONNECT_TIME

I need some help with Oracle settings to reproduce some issues we're having. To clarify, I'm not an Oracle expert at all; I have no experience with it.
I've managed to install Oracle XE (because it's the easiest and smallest) and got our software running on it.
Now, reports say that the connections in some production setups time out (long-running scripts/programs) and are not reconnected (not even throwing exceptions).
So that's what I'm trying to reproduce.
After some browsing around on the internet, I found that running these statements should limit my connect and idle time to 1 minute:
ALTER PROFILE DEFAULT LIMIT IDLE_TIME 1;    -- value is in minutes
ALTER PROFILE DEFAULT LIMIT CONNECT_TIME 1; -- value is in minutes
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;     -- required for profile limits to be enforced
Results in:
SQL> select * from user_resource_limits where resource_name in ('IDLE_TIME','CONNECT_TIME');

RESOURCE_NAME                    LIMIT
-------------------------------- ----------------------------------------
IDLE_TIME                        1
CONNECT_TIME                     1
After that I made a simple PHP script, test.php; it runs a query, sleeps, and runs a new query:

require_once('our software');
$account1 = findAccount('email1@example.com');
sleep(100); // 100 s, longer than the 1-minute IDLE_TIME
$account2 = findAccount('email2@example.com');
Isn't this supposed to time out?
Some extra details about what software I'm running:
CentOS 5
Oracle XE
PHP 5.3.5
using oci8 (not PDO)
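One way to make any timeout visible is to check the oci8 error state directly after the sleep (a sketch that bypasses findAccount, our own wrapper, and assumes a raw oci_connect handle $conn):

$stid = oci_parse($conn, 'SELECT 1 FROM dual');
oci_execute($stid); // first call succeeds
sleep(100);         // exceed the 1-minute IDLE_TIME
$stid2 = oci_parse($conn, 'SELECT 1 FROM dual');
if (!@oci_execute($stid2)) {
    // After IDLE_TIME expires, the next round-trip should fail with
    // ORA-02396 "exceeded maximum idle time, please connect again"
    $e = oci_error($stid2);
    echo $e['message'], "\n";
}

Note that idle sessions are only cleaned up periodically in the background, so the error may show up somewhat later than the configured limit.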

mod_perl and oracle vs php and oracle performance

I have a large Perl app that I need to make faster. On the basis that it spends most of its running time talking to the DB, I wanted to know how many well-written SQL statements I could run and still meet the performance targets. To do this I wrote a very simple handler that does a SELECT and an INSERT; when I benchmarked it with 300 concurrent requests (10,000 in total), the results were quite poor (1900 ms average).
The performance target we've been given by the client is based on another app they use written in PHP, so I wrote a quick PHP script that does functionally the same thing as my simple mod_perl test handler, and it gave a 400 ms average!
The PHP code is:

$cs = "//oracle.ourdomain.com:1521/XE";
$oc = oci_pconnect("hr", "password", $cs); // persistent connection
if (!$oc) { print oci_error(); }
$stid = oci_parse($oc, 'SELECT id FROM zz_system_options WHERE id = 1');
oci_execute($stid);
$stmt = oci_parse($oc, "INSERT INTO zz_system_options (id,option_name) VALUES (zz_system_optionsids.nextval,'load testing')");
oci_execute($stmt);
echo "hello world";
The Perl code is:

use strict;
use Apache2::RequestRec ();
use Apache2::RequestIO ();
use Apache2::Const -compile => qw(:common);
use DBI;

our $dbh;

sub handler
{
    my $r = shift;
    # Connect to DB (cached per child process via the 'unless')
    $dbh = DBI->connect( "DBI:Oracle:host=oracle.ourdomain.com;port=1521;sid=XE", "hr", "password" ) unless $dbh;
    my $dbi_query_object = $dbh->prepare("SELECT id FROM zz_system_options");
    $dbi_query_object->execute();
    $dbi_query_object =
        $dbh->prepare("INSERT INTO zz_system_options (id,option_name) VALUES (zz_system_optionsids.nextval,?)");
    $dbi_query_object->execute("load testing");
    # Print out some info about this...
    $r->content_type('text/plain');
    $r->print("Errors: " . ($DBI::errstr || '') . "\n"); # the original bare $err would not compile under 'use strict'
    return Apache2::Const::OK;
}
The mod_perl setup has a startup.pl script, called with a PerlRequire in the Apache config, that loads all the 'use'd modules. If all is working correctly, and I've no reason to think it isn't, then each request should only run the lines in 'sub handler', meaning the Perl and PHP should be doing pretty much the same thing.
Server details: the hardware node is a Quad Core Xeon L5630 @ 2.13GHz with 24 GB RAM; the OS for the Apache virtual machine is Gentoo, and the OS for Oracle is CentOS 5.
Versions: both OSes updated within the last 2 weeks, Apache 2.2.22, mod_perl 2.0.4, DBI 1.622, DBD::Oracle 1.50, Oracle Instant Client 10.2.0.3, Oracle Database 10g Express Edition Release 10.2.0.1.0, PHP 5.3.
Apache MPM config is ServerLimit 2000, MaxClients 2000 and MaxRequestsPerChild 300.
Things I checked: during the testing the only load was from the test app/Oracle; neither virtual machine hit any of its bean-counter limits (e.g. memory); Oracle showed 1 session per Apache child at all times; inserts had been done after each run.
So, my question is: can I make the mod_perl version faster, and if so, how?
If you changed the PHP code and the timing didn't change, then clearly you're not measuring the code time, are you?
The important question is: why are you repeatedly connecting in the Perl script and not in the PHP script?
Finally, this test probably isn't going to tell you anything useful unless all your queries are simple single-table, single-row selects and inserts.
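To illustrate the connection point: the PHP test above uses oci_pconnect, which caches the Oracle session in the Apache child across requests, while a plain oci_connect pays the full logon cost on every request. A minimal sketch of the two (connection string taken from the question):

$cs = "//oracle.ourdomain.com:1521/XE";
$slow = oci_connect("hr", "password", $cs);  // fresh logon, released when the script ends
$fast = oci_pconnect("hr", "password", $cs); // cached in this Apache child and reused

The Perl handler's 'unless $dbh' is meant to give a similar per-child cache, so timing both variants on each side is a quick way to confirm whether connection setup explains the gap.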

Difference between PHP SQL Server Driver and SQLCMD when running queries

Why is it that the SQL Server PHP driver has problems with long-running queries?
Every time I have a query that takes a while to run, I get the following errors from sqlsrv_errors(), in this order: shared memory failure, communication link failure, timeout failure.
But if I try the same query with SQLCMD.exe it comes back fine. Does the PHP SQL Server driver have somewhere that a no-timeout can be set?
What's the difference between running queries via SQLCMD and via the PHP driver?
Thanks all for any help.
Typical usage of the PHP driver to run a query:

function already_exists() {
    $model_name = trim($_GET['name']);
    include('../includes/db-connect.php');
    $connectionInfo = array('Database' => $monitor_name);
    $conn = sqlsrv_connect($serverName, $connectionInfo);
    $tsql = "SELECT model_name FROM slr WHERE model_name = '".$model_name."'";
    $queryResult = sqlsrv_query($conn, $tsql);
    $exists = ($queryResult !== false) && (sqlsrv_has_rows($queryResult) === true);
    sqlsrv_close($conn); // in the original this line sat after the returns, so it never ran
    return $exists;
}
SQLCMD has no query execution timeout by default; PHP does. I assume you're using mssql_query? If so, the default timeout for queries through this API is 60 seconds. You can override it by modifying the configuration property mssql.timeout.
See more on the configuration of the MSSQL driver in the PHP manual.
If you're not using mssql_query, can you give more details on exactly how you're querying SQL Server?
Edit (based on comment):
Are you using sqlsrv_query then? Looking at the documentation, it should wait indefinitely by default; however, you can override that. How long does it wait before it seems to time out? You might want to time it and see if it's consistent. If not, can you provide a code snippet (edit your question) to show how you're using the driver?
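For reference, sqlsrv_query accepts an options array whose QueryTimeout key sets the query timeout in seconds; a sketch using the connection and query from the snippet above:

$queryResult = sqlsrv_query($conn, $tsql, array(), array('QueryTimeout' => 600)); // allow up to 10 minutes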
If MSDTC is getting involved (and I don't know how you can ascertain this), then there's a 60-second timeout on that by default. This is configured in the Component Services administration tool and lives in a different place depending on the version of Windows.
SQL Server 2005 limits the maximum number of TDS packets to 65,536 per connection (a limit that was removed in SQL Server 2008). As the default PacketSize for the SQL Server Native Client (ODBC layer) is 4K, the PHP driver has a de facto transfer limit of 256 MB per connection. When attempting to transfer more than 65,536 packets, the connection is reset at the TDS protocol level. Therefore, you should make sure that the BULK INSERT is not going to push through more than 256 MB of data; otherwise the only alternative is to migrate your application to SQL Server 2008.
From MSDN Forums
http://social.msdn.microsoft.com/Forums/en-US/sqldriverforphp/thread/4a8d822f-83b5-4eac-a38c-6c963b386343
PHP itself has several different timeout settings that you can control via php.ini. The one that often causes problems like you're seeing is max_execution_time (see also set_time_limit()). If this limit is exceeded, PHP will simply kill the process without regard for ongoing activities (like a running DB query).
There is also a setting, memory_limit, that does as its name suggests: if the memory limit is exceeded, PHP just kills the process without warning.
Good luck.
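A sketch of loosening both limits for one long-running script (the values are illustrative, not recommendations):

set_time_limit(0);               // 0 = no execution-time limit for this request
ini_set('memory_limit', '512M'); // raise the per-script memory ceiling

The same can be set globally in php.ini via the max_execution_time and memory_limit directives.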
