In PHP, is there equivalent functionality to sqlsrv_has_rows?
I don't want to know how many rows, just has it got any at all.
I don't really want to fetch a row, as that puts the row pointer out.
It seems there is no equivalent. You have to fetch the first row. If you want to start at the first row again, you would have to call oci_execute again, which is not a great idea if the query takes a long time to run.
So some logic to store the first row would be a likely route to go down.
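That buffering can be sketched roughly like this (a hypothetical example using the OCI8 functions mentioned above; the connection, query, and column names are made up):

```php
<?php
// Emulate a "has rows" check: fetch the first row, remember it, and
// replay it so the loop below still sees the complete result set.
$stid = oci_parse($conn, 'SELECT id, name FROM employees');
oci_execute($stid);

$first = oci_fetch_assoc($stid);   // false when the result set is empty
$hasRows = ($first !== false);

if ($hasRows) {
    $row = $first;                 // start from the buffered row
    do {
        print_r($row);             // handle each row exactly once
    } while ($row = oci_fetch_assoc($stid));
}
?>
```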
I am building a report using PHP and MySQL. I have multiple queries running in one go on one page, and as you can imagine this is putting a lot of stress on the server. What I wish to do is start the first query and, before launching the second query, check whether the first has finished, and so on until it reaches the last query. And just to be clear, one query at a time does not put that much stress on the server, but several in one go do. If anybody has any idea or an alternative, please let me know.
By default, PHP will not execute the next MySQL query, or any other code at all, before the previous query has finished.
Is there a way using PHP to take a MySQL query result, get a value from a column in the first record fetched, and then put that record back into the query result so that when looping over the result set later, that record can be used again? If this is possible, can I put the record back in the first position again?
Or should I return the value that I need from that first row as a separate variable in my routine in MYSQL. If this would be the better route to take, can someone give me some insight on how to return both a query result set and a separate variable as well? I cannot seem to get this to work either.
Or would the best way to do this be to create my own array from the query result set and then manipulate the mysql result set as need? I'm trying to stay away from this just to cut out the step of creating that array if one of the above two options is possible, otherwise I will just go with this.
Thanks in advance.
Just before you need to loop through again, you could use mysql_data_seek($query, 0);
Then the next call to fetch will be the first row again.
I haven't personally used it, but that's what I understood from the php manual:
http://us.php.net/manual/en/function.mysql-data-seek.php
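As a rough sketch (assuming $link is an open connection and a hypothetical contacts table; note the mysql_* functions were removed in PHP 7, where mysqli_data_seek() is the equivalent):

```php
<?php
// Rewind a result set with mysql_data_seek() so it can be read twice.
$result = mysql_query('SELECT id, name FROM contacts', $link);

while ($row = mysql_fetch_assoc($result)) {
    // first pass over the rows
}

mysql_data_seek($result, 0);       // move the row pointer back to row 0

while ($row = mysql_fetch_assoc($result)) {
    // second pass starts at the first row again
}
?>
```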
I agree with the answers above but just to state it a slightly different way:
There is no point rewinding a pointer inside the query result object. You have to keep track of where it is, it's easy to mess up, and it isn't worth the tiny speed increase.
It's much better to make a copy of the entire array and access the records using keys. Much easier to keep track of what is going on.
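Something like this, for instance (a sketch that assumes $result is a result resource with a name column):

```php
<?php
// Copy the result set into a plain PHP array once, then reuse it freely.
$rows = array();
while ($row = mysql_fetch_assoc($result)) {
    $rows[] = $row;
}

$firstName = $rows[0]['name'];     // the value needed from the first record

foreach ($rows as $row) {          // later loops reuse the same copy
    echo $row['name'];
}
?>
```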
I think I'm probably looking at this the complete wrong way. I have a stored procedure that returns a (potentially large, but usually not) result set. That set gets put into a table on the web via PHP. I'm going to implement some AJAX for stuff like dynamic reordering and things. The stored procedure takes one to two seconds to run, so it would be nice if I could store that final table somewhere that I can access it faster once it's been run. More specifically, the SP is a search function; so I want the user to be able to do the search, but then run an ORDER BY on the returned data without having to redo the whole search to get that data again.
What comes to mind is if there is a way to get results from the stored procedure without it terminating, so I can use a temp table. I know I could use a permanent table, but then I'd run into trouble if two people were trying to use it at the same time.
A short and simple answer to the question 'is there a way to get results from the stored procedure without it terminating?': No, there isn't. How else would the SP return the result set?
2 seconds does sound like an awfully long time; perhaps you could post the SP code so we can look at ways to speed up the queries you use. It might also prove useful to give some more info on your tables (indexes, primary keys...).
If all else fails, you might consider looking into JavaScript table sorters... but again, some code might help here.
I never really considered this (pagination) an issue until lately. When I sat down and zeroed in on it, I found myself facing plenty of problems.
What I am into is a basic contacts management system wherein a user can add/update/delete/search contacts. The search part is where I need the pagination to be implemented effectively.
What I have in mind (with +ve and -ve points)
I can specify pageNo and offset while POSTing to my search.php page. This page will fire a simple MySQL query to retrieve the results. Since the number of rows can pretty much run into the thousands, I need to paginate. Quite simple, but I need to fire the same query again and again for every different page. Meaning, when a user goes from page 1 to page 2, the same MySQL query will be fired (of course with a different offset), which is something I feel is redundant, and am trying to avoid.
Then I thought of capturing the entire result set and storing it in $_SESSION, but in that case, what if the results are just huge? Will it affect performance in any way?
On similar lines like the second point, I thought of writing out the results on to a file, which is plain crap! (I just put it here, as a point. I know this is REAL bad way of doing things.)
My Questions:
A. Which of the above methods do I implement? Which one is better? Are there any other methods? I have googled it, but I find that most of the examples follow point 1 above.
B. My question for point 1: how can we rely on the order of the MySQL results? Suppose the user navigates to page 2 after some time; how can we be sure that, the second time, the records from the first page aren't repeated? (Because we are doing a fresh query.)
C. What exactly is a MySQL resource? I understand that mysql_query(..) returns a resource. Is it global in the sense that it maintains its state between different calls to the PHP script? (Can I keep the resource in $_SESSION?)
Thanks a million! :-)
PS: I know this is a pretty long question. I just tried to put across, in a concise way, what's going around in my head.
Use your first suggestion. The one with offsets. It's the "standard" way of doing pagination. Putting the whole result set into session would be a bad idea, since every user would have his own private copy of the data. If you hit performance problems you can always add caching (memcache) which will benefit all users accessing the data.
MySQL will return your data in the same order each time, provided your query has an ORDER BY clause (without one, the order is not guaranteed). The only way that a record from page 1 would appear on page 2 is if a new record was inserted between the time the user navigates from page 1 to page 2. In other words: you have little to worry about.
A resource, in MySQL's case, is a pointer of sorts that points to the result set. You can then manipulate it (fetching data row by row, counting the number of rows returned, etc.). It is not global.
A. The first one, of course. There are other methods, as for everything on Earth, but you should reach for the most usual and generic way first, both to get familiar with it and because it will suit you for sure, as it suits other webmasters.
Also note that your other proposed methods are not among the sensible ones.
B. Yes, records do move across pages. There is nothing bad in that.
C. Nothing in PHP maintains its state between calls. No resource can be saved in a session. Go for offset pagination.
From my experience (which is not much), I usually used the first method, because each time you go to another page you always get up-to-date data from MySQL. Yes, if you're using ORDER BY last_updated_time then results will move across pages.
But I think that's not what you have in mind. As you mention in your third question, perhaps you want some kind of buffer for your results, but that means you'd have to create a buffer for every result (which is why you mention using a file to store the MySQL result).
Probably this isn't the answer you're looking for (if it can be considered an answer at all, lol), but my purpose was just to give some perspective.
When constructing your SQL you can do something like the following (0 is the offset, 10 is how many rows to return):
SELECT * FROM `your_table` LIMIT 0, 10
This will display the first 10 results from the database.
Alternative syntax, 3 queries, showing the first 30 results: 1-10, 11-20, 21-30.
SELECT * FROM `your_table` LIMIT 10 OFFSET 0
SELECT * FROM `your_table` LIMIT 10 OFFSET 10
SELECT * FROM `your_table` LIMIT 10 OFFSET 20
Edit:
Okay, to clarify: option 1 is your best bet. Pass in the page number. The limit is the same for each query, and $offset = ($pageNum - 1) * 10;.
You will need an ORDER BY clause. However, if the contents of the database change between page loads, a user might notice discrepancies. It really depends on how frequently your data changes.
I've not tried to store the result of a mysql_query() in the session, but I suspect it won't work the way you are thinking of using it. When the script ends you can consider mysql_close() to be called implicitly, and resources destroyed.
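Put together, option 1 can be sketched like this (a hypothetical example using mysqli with made-up table and column names; $stmt->get_result() requires the mysqlnd driver):

```php
<?php
// Offset pagination: turn a page number into LIMIT/OFFSET.
$perPage = 10;
$pageNum = max(1, (int) $_GET['page']);
$offset  = ($pageNum - 1) * $perPage;

// A stable ORDER BY keeps rows from drifting between pages.
$stmt = $mysqli->prepare(
    'SELECT id, name FROM contacts ORDER BY id LIMIT ? OFFSET ?'
);
$stmt->bind_param('ii', $perPage, $offset);
$stmt->execute();
$result = $stmt->get_result();

while ($row = $result->fetch_assoc()) {
    echo $row['name'];
}
?>
```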
I've been looking at some database (MySQL) wrapper classes. A lot of them work like this:
1) Run the SQL query
2) While fetching the associative MySQL array, they cycle through the results and add them to their own array
3) Then you would run the class like this below and cycle through its array
<?php
$database = new Database();
$result = $database->query('SELECT * FROM user;');
foreach ($result as $user) {
    echo $user->username;
}
?>
So my question here: is this not good on a high-traffic site? I ask because, as far as I can tell, MySQL is returning an array, which eats memory; then you are building a new array from that array and then cycling through the new array. Is this bad, or pretty much normal?
The short answer is: it's bad, very bad.
The trouble is you pay a nasty performance hit (cycles and memory!) by iterating over the results twice. (What if you have 1000 rows returned? You'd fetch all of the data, loop 1000 times, keep it all in memory, and then loop over it again.)
If you refactor your class a bit you can still wrap the query and the fetch, but you'll want to do the fetch_array outside the query. That way you can discard each row from memory as soon as you're done with it, so you don't need to store the entire result set, and you loop just once.
IIRC, I thought PHP wouldn't load the entire MySQL result set into memory: when you call mysql_fetch_array you're asking for the next row in the set, which is loaded only upon asking for it. In fact the whole result does get loaded into memory when you use mysql_query (thanks VolkerK), but either way you're still paying that CPU cost twice, and that can be a substantial penalty.
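A refactor along those lines might look like this (a hypothetical sketch, not the actual wrapper from the question; the wrapper hands back the raw result so rows are fetched one at a time):

```php
<?php
// The wrapper returns the result resource instead of pre-building an
// array, so each row can be discarded as soon as it has been used.
class Database
{
    private $link;

    public function __construct($link)
    {
        $this->link = $link;
    }

    public function query($sql)
    {
        return mysql_query($sql, $this->link);   // no buffering array
    }

    public function fetch($result)
    {
        return mysql_fetch_object($result);      // one row per call
    }
}

$database = new Database($link);
$result   = $database->query('SELECT * FROM user');

while ($user = $database->fetch($result)) {
    echo $user->username;                        // handled, then freed
}
?>
```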
The code is fine.
foreach() just moves the array pointer on each pass.
You can read all about it here:
http://php.net/foreach
For a deeper understanding, look at how pointers work in C:
http://home.netcom.com/~tjensen/ptr/ch2x.htm
Nothing is copied, iteration is almost always performed by incrementing pointers.
This kind of query is pretty much normal. It's better to fetch only a row at a time if you can, but for normal small datasets of the kind you'd get for small and paged queries, the extra memory utilisation isn't going to matter a jot.
SELECT * FROM user, however, could certainly generate an unhelpfully large dataset if you have a lot of users and a lot of information in each user row. Try to keep the columns and number of rows selected down to the minimum: the information you're actually going to put on the page.