When you run a query like so:
$query = "SELECT * FROM table";
$result = odbc_exec($dbh, $query);
while ($row = odbc_fetch_array($result)) {
print_r($row);
}
Does the resource stored in $result point to data that exists on the server running PHP? Or is it pointing to data in the database? Put another way: as the while loop does its thing, is PHP talking to the DB on every iteration, or is it pulling each $row from some source on the application side?
Here's why this matters to me: I have a database I'm talking to over VPN, using ODBC with PHP. This past weekend something strange started happening: huge pauses during the while loop. Between iterations, the script will stop executing for seconds, and sometimes minutes, and it seems completely random where this happens. I'm wondering whether I have to talk to the server over the VPN on each iteration (and maybe the connection is flaky), or whether something has gone wrong with my ODBC driver (FreeTDS).
mysql_query and odbc_exec both return a resource, which (to quote php.net) "is a special variable, holding a reference to an external resource." This suggests the PHP server is talking with the database server on every iteration, though I am not sure.
However, there are two connections in play here: the first is your connection to the PHP server, and the second is the connection between the PHP server and the database server. If those two servers have a fast connection to each other, the strange behaviour you are experiencing might not have anything to do with your VPN.
The resource identifies the internal data structure used by PHP for interacting with the external resource.
In the case of the resource returned by mysql_query(), this data structure includes all the rows returned by the query (the call won't return until every row has been transferred or the connection fails). However, this behaviour is specific to MySQL; there is no requirement that the DBMS return the data before it is explicitly requested by the client.
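For contrast, ext/mysql also offers mysql_unbuffered_query(), which leaves the rows on the server and pulls them one at a time as you fetch. A sketch of the difference:

```php
<?php
// Buffered: every row is transferred before mysql_query() returns,
// so the while loop below never touches the network.
$res = mysql_query("SELECT * FROM big_table");

// Unbuffered: rows stay with the server, and each fetch pulls the
// next one over the connection, so a slow link is felt per row.
$res = mysql_unbuffered_query("SELECT * FROM big_table");
while ($row = mysql_fetch_assoc($res)) {
    // ...
}
?>
```

The table name here is a placeholder; the point is only where the network cost is paid.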
If some strange problem is causing lots of latency in your setup, then the only obvious solution would be to compile the results of the query on the database side and then deliver them to your PHP code, either batched or as a whole (think web service).
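If it turns out the driver is fetching row by row over the link, one low-tech mitigation on the PHP side is to drain the result set into a local array first and iterate over that, so the slow link is only involved during the initial drain. A sketch (untested), reusing $dbh and $query from the question:

```php
<?php
// Drain every row into local memory in one burst...
$result = odbc_exec($dbh, $query);
$rows = array();
while ($row = odbc_fetch_array($result)) {
    $rows[] = $row;
}
odbc_free_result($result);

// ...then iterate locally, with no further round trips to the database.
foreach ($rows as $row) {
    print_r($row);
}
?>
```

This trades memory for predictability: the whole result set must fit in PHP's memory, but random mid-loop stalls can no longer come from the connection.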
C.
Related
I've just set up my first remote connection with FileMaker Server using the PHP API and something a bit strange is happening.
The first connection and response takes around 5 seconds, if I hit reload immediately afterwards, I get a response within 0.5 second.
I can get a quick response for around 60 seconds or so (I haven't timed it yet, but it seems like at least a minute and less than 5 minutes), and then it goes back to taking 5 seconds to get a response (after which it's quick again).
Is there any way of ensuring that it's always a quick response?
I can't give you an exact answer on where the speed difference may be coming from, but I'd agree with NATH's notion on caching. It's likely due to how FileMaker Server handles caching the results on the server side and when it clears that cache out.
In addition to that, a couple of things that are helpful to know when using custom web publishing with FileMaker when it comes to speed:
The fields on your layout will determine how much data is pulled
When you perform a find in the PHP api on a specific layout, e.g.:
$request = $fm->newFindCommand('myLayout');
$request->addFindCriterion('name', $myname);
$result = $request->execute();
What's being returned is data from all of the fields available on the myLayout layout.
In sql terms, the above query is equivalent to:
SELECT * FROM myLayout WHERE `name` = ?; // and the $myname variable is bound to ?
The FileMaker find will return every field/column available. You designate the returned columns by placing the fields you want on the layout. To get a true * select all from your table, you would include every field from the table on your layout.
All of that said, you can speed up your requests by only including fields on the layout that you want returned in the queries. If you only need data from 3 fields returned to your php to get the job done, only include those 3 fields on the layout the requests use.
Once you have the records, hold on to them so you can edit them
Taking the example from above, if you know you need to make changes to those records somewhere down the line in your php, store the records in a variable and use the setField and commit methods to edit them. e.g.:
$request = $fm->newFindCommand('myLayout');
$request->addFindCriterion('name', $myname);
$result = $request->execute();
$records = $result->getRecords();
...
// say we want to update a flag on each of the records down the line in our php code
foreach($records as $record){
$record->setField('active', true);
$record->commit();
}
Since you have the records already, you can act on them and commit them when needed.
I say this as opposed to grabbing them once for one purpose and then grabbing them again from the database later to make updates to the records.
It's not really an answer to your original question, but since FileMaker's API is a bit different from others and it doesn't have the greatest documentation, I thought I'd mention it.
There are some delays that you can remove.
Ensure that the layouts you are accessing via PHP are very simple, no unnecessary or slow calculations, few layout objects etc. When the PHP engine first accesses that layout it needs to load it up.
Also check for layout and file script triggers that may be run, IIRC the OnFirstWindowOpen script trigger is called when a connection is made.
I don't think that it's related to caching. Also, it's the same when accessing via XML. Haven't tested ODBC, but am assuming that it is an issue with this too.
Once the connection is established with FileMaker Server and your machine, FileMaker Server keeps this connection alive for about 3 minutes. You can see the connection in the client list in the FM Server Admin Console. The initial connection takes a few seconds to set up (depending on how many others are connected), and then ANY further queries are lightning fast. If you run your app again, it'll reuse that connection and give results in very little time.
You can do completely different queries (on different tables) in a different application, but as long as you execute the second one on the same machine and use the same credentials, FileMaker Server will reuse the existing connection and provide results instantly. This means that it is not due to caching, but it's just the time that it takes FMServer to initially establish a connection.
In our case, we're using a web server to make FileMaker PHP API calls. We have set up a cron every 2 minutes to keep that connection alive, which has pretty much eliminated all delays.
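For illustration, such a keep-alive cron entry might look like this (the script path and name are assumptions, not our actual setup):

```
*/2 * * * * /usr/bin/curl -s http://localhost/fm_keepalive.php > /dev/null 2>&1
```

where fm_keepalive.php just performs a trivial find via the FileMaker PHP API, so the connection stays listed in the Admin Console's client list and never has to be re-established.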
This is probably way late to answer this, but I'm posting here in case anyone else sees this.
I've seen this happen when using external authentication with FileMaker Server. The first query establishes a connection to Active Directory, which takes some time, and then subsequent queries are fast as FMS has got the authentication figured out. If you can, use local authentication in your FileMaker file for your PHP access and make sure it sits above any external authentication in your accounts list. FileMaker runs through the auth list from top to bottom, so this will make sure that FMS successfully authenticates your web query before it gets to attempt an external authentication request, making the authentication process very fast.
I am working on a web application which returns large tables of statistics from a database, using SQL Server (not my call). In my code I have several functions with while loops of the type :
while($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC))
However, since the tables are so large, I would like to make as few calls to the database as possible to minimize loading time. That means I would like to somehow reset the internal pointer of the database statement after these while loops. My experience with SQL Server is limited, to say the least, and googling suggests that in MySQL I should be able to use the mysql_data_seek function. Is there an equivalent in SQL Server? Is it even possible?
Use sqlsrv_fetch($stmt, SQLSRV_SCROLL_FIRST); to move the pointer back to the first row. See http://www.php.net/manual/en/function.sqlsrv-fetch.php for the other scroll constants.
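Note that the SQLSRV_SCROLL_* constants only work if the statement was created with a scrollable cursor; by default, sqlsrv statements are forward-only. A sketch (untested; the connection details and query are placeholders):

```php
<?php
// Request a scrollable (static) cursor so the row pointer can be rewound.
$conn = sqlsrv_connect($serverName, $connectionInfo); // assumed to succeed

$stmt = sqlsrv_query(
    $conn,
    "SELECT id, total FROM stats",              // placeholder query
    array(),
    array("Scrollable" => SQLSRV_CURSOR_STATIC) // not the forward-only default
);

// First pass over the rows.
while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
    // ...
}

// Rewind to the first row and iterate again without re-querying.
$row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC, SQLSRV_SCROLL_FIRST);
while ($row) {
    // ...
    $row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC, SQLSRV_SCROLL_NEXT);
}
?>
```

A static cursor holds the result set open server-side (or client-buffered), so rewinding costs no extra query.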
I'm loading 9 images from a database and my syntax looks roughly like this:
<img src="image_loader.php?id=4">
My PHP for image_loader.php looks like:
<?php
/* set up the connection using mysql_connect and mysql_select_db */

// Cast the id to an integer so the request can't inject SQL
// through $_GET["id"].
$id = (int) $_GET["id"];
$query = sprintf("SELECT ... FROM ... WHERE id='%d'", $id);
$result = mysql_query($query, $con);
if (!$result) {
// no result
}
else {
$row = mysql_fetch_row($result);
header('Content-type: image/png');
echo $row[0];
mysql_free_result($result);
}
mysql_close($con); // close only after the result has been used
?>
Each image is about 10-13k but the bunch seems to be loading very slow. I realize that there is some bottle-necking in the number of requests a browser can execute at a time but the wait times seem to be gratuitous.
Any suggestions on how to get images loaded from a database fast?
Also, and this is almost a separate question, but is it possible to instruct a browser (or server) to cache images with no .gif/.png/.jpg srcs? It seems that Firefox does and Chrome doesn't, but I'm not certain of this.
I'd first consider whether storing images in a database makes the most sense. It may, but it should be seriously considered, as giving each image a unique filename in the filesystem and storing that in the database will often be faster.
You would gain additional speed if you could request that file directly from the client as opposed to requesting a generic PHP script that does some sort of fopen()-style abstraction.
To narrow down the source of the delay, it first might be helpful to check whether your database is hosted on the same server as your webserver. Look at the host string you're providing in the mysql_connect() call: localhost suggests it's local; anything else suggests it's a remote database server. As a note, many shared hosting services (e.g. GoDaddy) split their database server from the webserver.
For a better idea of the source of the delay, I'd suggest instrumenting your image_loader.php code with timestamps to locate it. My guess is that it'll be in the query.
If the delay is in your query, you will want to limit the number of queries you make. A strategy that lets you make one query instead of 9 would limit the impact of any webserver-to-database-server delay.
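One sketch of that idea (untested; the "images" table and its columns are placeholders, not from the question): fetch all nine images in a single query and emit them as data: URIs, trading nine requests and nine queries for one of each:

```php
<?php
// Sketch: one query for all requested images, embedded as data: URIs.
// Table/column names ("images", "id", "data") are placeholders.
$ids = array_map('intval', $_GET['ids']); // e.g. ?ids[]=1&ids[]=2&...
$query = "SELECT id, data FROM images WHERE id IN (" . implode(',', $ids) . ")";
$result = mysql_query($query, $con);

while ($row = mysql_fetch_assoc($result)) {
    printf('<img src="data:image/png;base64,%s">',
           base64_encode($row['data']));
}
?>
```

The trade-off is that data: URIs are not cached by the browser as separate resources, which also bears on the caching question above.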
I am attempting to build a progress bar for a long-running page (yes, indexes are the obvious solution, but currently the dataset/schema prohibits partitioning/proper indexing), so I plan to put a GUID/unique ID in a comment within each query and track progress with SHOW FULL PROCESSLIST via ajax. The key to this rests on whether PHP executes the sequential queries in order; does anyone know?
MySQL as a database server uses multiple threads to handle multiple running queries. It allocates a thread as and when it received a query from a client i.e. a PHP connection or ODBC connection etc.
However, since you mentioned mysql_query I think there can be 2 things in your mind:
If you call mysql_query() multiple times to run various commands, each query is sent to MySQL only after the previous query has completely executed and its result has been returned to PHP. So MySQL will seem to work sequentially, although it is actually PHP that waits to send the next query until the current one is finished.
In MySQL 5 and PHP 5 there is a function, mysqli_multi_query(), with which you can send multiple queries to MySQL at once, without PHP waiting for one query to end before sending the next; you then read back the result sets one by one. Note that the queries still execute sequentially on the single connection; what you save is the per-query round trip and PHP-side waiting, not the execution time of the queries themselves.
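For illustration, the usual pattern for reading the result sets back (credentials and table names are placeholders):

```php
<?php
// Sketch: send two statements in one round trip, then walk each result set.
$mysqli = new mysqli('localhost', 'user', 'pass', 'db'); // placeholder credentials

$sql = "SELECT COUNT(*) FROM orders; SELECT COUNT(*) FROM customers";

if ($mysqli->multi_query($sql)) {
    do {
        if ($res = $mysqli->store_result()) {
            print_r($res->fetch_row()); // one result set at a time
            $res->free();
        }
    } while ($mysqli->more_results() && $mysqli->next_result());
}
?>
```

Every result set must be consumed (or discarded) with next_result() before the connection can accept another query.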
As Babbage once said, "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." Calls to mysql_query, like calls to any other function, execute in the order that they are reached in program flow.
Order in the file is mostly irrelevant. It's order of execution that matters.
<?php
// The four calls below run before the function definitions appear,
// which is fine: PHP registers top-level function declarations before
// executing the file. Execution order is simply the order the calls
// are reached: Four, Two, Three, Two, then Four, Three, Two, Two.
myCmdFour();
myCmdTwo();
myCmdThree();
myCmdTwo();

function myCmdTwo() {
mysql_query(...);
}

function myCmdThree() {
mysql_query(...);
}

function myCmdFour() {
mysql_query(...);
}

myCmdFour();
myCmdThree();
myCmdTwo();
myCmdTwo();
?>
Although anyone that has PHP files that look like that, needs to seriously rethink things.
Now I must start by saying, I can't copy the string. This is a general question.
I've got a query with several joins that takes 0.9 seconds when run using the mysql CLI. I'm now trying to run the same query on a PHP site and it's taking 8 seconds. There are some other big joins on the site that are obviously slower, but this query is taking much too long. Is there a PHP cache for database connections that I need to increase? Or is this just to be expected?
PHP doesn't really do much with MySQL; it sends a query string to the server, and processes the results. The only bottleneck here is if it's an absolutely vast query string, or if you're getting a lot of results back - PHP has to cache and parse them into an object (or array, depending on which mysql_fetch_* you use). Without knowing what your query or results are like, I can only guess.
(From comments:) If we have 30 columns and around, say, a million rows, this will take an age to parse (we later find that it's only 10k rows, so that's OK). You need to rethink how you do things:
See if you can reduce the result set. If you're paginating things, you can use LIMIT clauses.
Do more in the MySQL query; instead of getting PHP to sort the results, let MySQL do it by using ORDER BY clauses.
Reduce the number of columns you fetch by explicitly specifying each column name instead of SELECT * FROM ....
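A sketch combining all three (the table and column names are made up for illustration):

```sql
-- Only the needed columns, sorted by MySQL, one page of rows.
SELECT id, name, total
FROM stats
ORDER BY total DESC
LIMIT 0, 50;  -- first page, 50 rows per page
```

Each of these shrinks the result set PHP has to receive and parse, which is where the 8 seconds is most likely going.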
Some wild guesses:
The PHP version uses different parameters and variables for each query, so MySQL cannot serve it from its query cache, while the version you type at the MySQL CLI uses the same parameters every time, so MySQL can. Try adding SQL_NO_CACHE to your query on the CLI to see whether the result then takes more time.
You are not testing against the same machine? Is the MySQL database you run the PHP query against the same one the CLI uses? I mean: you are not testing one on your laptop and the other on some production server, are you?
You are testing over a network: when the MySQL server is not installed on the same host as your PHP app, the connection will use "someserver.tld" instead of "localhost" as the database host. In that case PHP needs to connect over the network, while your CLI session either already holds that connection or connects locally.
The actual connection is taking a long time. Try to run and time the query from your PHP system a thousand times in a row: instead of "connect to server, query database, disconnect", measure "connect to server, query database a thousand times, disconnect". Some PHP applications do the former, connecting and disconnecting for each and every query, and if your MySQL server is not configured correctly, connecting can take a gigantic share of the total time.
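A sketch of that measurement (untested; the credentials, host, and query are placeholders for your own):

```php
<?php
// Time the connect separately from the queries to see which dominates.
$t0 = microtime(true);
$con = mysql_connect('someserver.tld', 'user', 'pass'); // placeholder credentials
mysql_select_db('db', $con);
$t1 = microtime(true);

for ($i = 0; $i < 1000; $i++) {
    $result = mysql_query("SELECT ...", $con); // the slow query goes here
    mysql_free_result($result);
}
$t2 = microtime(true);
mysql_close($con);

printf("connect: %.4fs, 1000 queries: %.4fs (%.4fs per query)\n",
       $t1 - $t0, $t2 - $t1, ($t2 - $t1) / 1000);
?>
```

If the connect time dwarfs the per-query time, the fix is connection reuse (e.g. persistent connections), not query tuning.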
How are you timing it?
If the 'long' version is accessed through a php page on a website, could the additional 7.1 seconds not just be the time it takes to send the request and then process and render the results?
How are you connecting? Does the account you're using have a hostname in the grant tables? If you're connecting via TCP, MySQL will have to do a reverse DNS lookup on your IP to figure out if you're allowed in.
If it's the connection causing this, then do a simple test:
select version();
If that takes 8 seconds, then it's connection overhead. If it returns instantly, then it's PHP overhead in processing the data you've fetched.
The mysql_query call itself should take about the same time as the mysql client. But each extra mysql_fetch_* call will add up.