Hello, I'm trying to make a tag script for my website, so that each time a search engine comes to my website, 10 different tags will show.
These tags will be grabbed from the database.
At the minute I have coded it so it grabs only one (because I don't know how to do a while loop).
Like so:
$sql = "SELECT tagname FROM tags ORDER BY rand() LIMIT 10";
$result = mysql_query($sql);
$row = mysql_fetch_object($result);
echo "<a href='index.php'>" .$row->tagname. " </a>";
Is there any way I can add a while loop to that so it runs 10 times? E.g. use the same echo but print out 10 results instead of 1. I have changed the LIMIT from 1 to 10, but that did not work... it is still showing one.
Please, stop using ORDER BY RAND(). Just stop. This operation has a complexity of n·log2(n), which means the time spent on the query grows roughly like this:
  entries | time units
----------+-----------------------------------------------
       10 | 1        /* if this takes 0.001s */
    1'000 | 300
1'000'000 | 600'000  /* then this will need 10 minutes */
If you want to generate random results, create a stored procedure which generates them. Something like this (code taken from this article, which you should read):
DELIMITER $$

DROP PROCEDURE IF EXISTS get_rands$$
CREATE PROCEDURE get_rands(IN cnt INT)
BEGIN
    DROP TEMPORARY TABLE IF EXISTS rands;
    CREATE TEMPORARY TABLE rands ( tagname VARCHAR(63) );

    loop_me: LOOP
        IF cnt < 1 THEN
            LEAVE loop_me;
        END IF;

        SET cnt = cnt - 1;

        INSERT INTO rands
        SELECT tags.tagname
          FROM tags
          JOIN (SELECT (RAND()*(SELECT MAX(tags.id) FROM tags)) AS id) AS choices
         WHERE tags.id >= choices.id
         LIMIT 1;
    END LOOP loop_me;
END$$

DELIMITER ;
And to use it, you would write:
CALL get_rands(10);
SELECT * FROM rands;
As for executing it all on the PHP side: you should stop using the ancient mysql_* API. It is more than 10 years old and no longer maintained. The community has even begun the process of deprecating it. There should not be any new code written with mysql_* in 2012. Instead you should use PDO or MySQLi. As for how to write it (with PDO):
// creates the DB connection (note: MySQL's charset name is "utf8", not "UTF-8")
$connection = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8',
                      'username', 'password');
$connection->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);

// executes the procedure, then selects from the temporary table it filled
$connection->exec('CALL get_rands(10)');
$statement = $connection->query('SELECT * FROM rands');

// collects all the rows; query() has already executed the statement,
// and the fetch style constant is PDO::FETCH_ASSOC
if ($statement !== false)
{
    $tags = $statement->fetchAll(PDO::FETCH_ASSOC);
}
Update
If the requirement is to get not just 10 random results, but 10 UNIQUE random results, then it requires two changes to the procedure:
The temporary table should enforce the uniqueness of entries:
CREATE TEMPORARY TABLE rands ( tagname VARCHAR(63) UNIQUE);
It also might make sense to collect just the IDs and not the values, especially if what you are looking for are 10 unique articles, not just tags.
When an insert hits a duplicate value, the cnt counter should not decrease. This can be ensured by adding a HANDLER (before the definition of the LOOP), which "catches" the raised error and adjusts the counter:
DECLARE CONTINUE HANDLER FOR SQLSTATE '23000' SET cnt = cnt + 1;
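Put together, a sketch of the adjusted procedure (same table and column names as above; note that in MySQL a handler must be declared before any non-DECLARE statement in the block):
DELIMITER $$

DROP PROCEDURE IF EXISTS get_rands$$
CREATE PROCEDURE get_rands(IN cnt INT)
BEGIN
    -- undoes the decrement whenever a duplicate insert is rejected;
    -- beware: if the table has fewer unique tags than cnt, this loops forever
    DECLARE CONTINUE HANDLER FOR SQLSTATE '23000' SET cnt = cnt + 1;

    DROP TEMPORARY TABLE IF EXISTS rands;
    CREATE TEMPORARY TABLE rands ( tagname VARCHAR(63) UNIQUE );

    loop_me: LOOP
        IF cnt < 1 THEN
            LEAVE loop_me;
        END IF;

        SET cnt = cnt - 1;

        INSERT INTO rands
        SELECT tags.tagname
          FROM tags
          JOIN (SELECT (RAND()*(SELECT MAX(tags.id) FROM tags)) AS id) AS choices
         WHERE tags.id >= choices.id
         LIMIT 1;
    END LOOP loop_me;
END$$

DELIMITER ;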
Note, read before the real answer, for the ones that keep downvoting this answer: read the title (it starts with "Doing a while") and the final part of the question ("Is there anyway i can add a while to that so it does it 10 times ?"). This answer is about iterating over the result set, not about the usage of the RAND function! The query doesn't even appear in my answer, and I am also suggesting a different approach at the end:
You just need to wrap your call to mysql_fetch_object in a loop:
$result = mysql_query($sql);
while ($row = mysql_fetch_object($result))
{
echo "<a href='index.php'>" .$row->tagname. " </a>";
}
Later edit
Other considerations would be:
if the table holds a very large amount of data (though it doesn't seem that it will), ORDER BY RAND() can have a bad effect on performance
consider using PDO (or at least MySQLi)
you should have some error handling even if the query seems to be perfect; at a minimum:
if (!$result)
{
echo mysql_error();
die;
}
You are fetching only one row.
You need to fetch them all, one by one, in a while loop:
$sql = "SELECT tagname FROM tags ORDER BY rand() LIMIT 10";
$result = mysql_query($sql);
while($row = mysql_fetch_object($result)) {
echo "<a href='index.php'>" .$row->tagname. " </a>";
}
Related
I am using PHP and MySQL to create a page that displays all of the jobs we have in the database. The data is shown in a table, and when a row is clicked a modal window triggers with the information of the clicked job inside. At the top of the page I want a simple counter that shows the amount of paid jobs, invoiced jobs, etc. I am using the code below but having no luck...
<?php
$con = mysql_connect("localhost","databaseusername","password");
if (!$con) {
die('Could not connect: ' . mysql_error());
}
mysql_select_db("databasename", $con);
$result = mysql_query("select count(1) FROM jobslist");
$row = mysql_fetch_array($result);
$total = $row[0];
mysql_close($con);
?>
This code, as far as I am aware, is counting the amount of INT columns set to 1 rather than 0. No matter what I try, I can't seem to get it to count the amount of 'paid' items in the database, or 'invoiced', etc.
Once the count is complete, I currently echo out the outcome as below:
<?php echo "" . $total;?>
I am sure I am overlooking something simple, but any help is appreciated.
EDIT: TABLE STRUCTURE INCLUDED
http://i.stack.imgur.com/hcMJV.png
Assuming a column called paid, you could restructure the query similar to the following. If you needed to sum the amounts involved, that would require additional tweaking.
$result = mysql_query("select
( select count(*) from `jobslist` where `paid`=1 ) as 'paid',
( select count(*) from `jobslist` where `paid`=0 ) as 'unpaid'
from jobslist");
$rows = mysql_num_rows( $result );
while( $rs=mysql_fetch_object( $result ) ){
$paid=$rs->paid;
$unpaid=$rs->unpaid;
echo 'Total: '.$rows.'Paid: '. $paid.' Unpaid: '.$unpaid;
}
When I do this I usually name the COUNT result. Try this out:
$result = mysql_query("SELECT COUNT(*) AS total_rows FROM jobslist;");
$row = mysql_fetch_array($result);
$total = $row['total_rows'];
If you do not want to name the COUNT result, then give the following a go:
$result = mysql_query("SELECT COUNT(*) FROM jobslist;");
$row = mysql_fetch_array($result);
$total = $row['COUNT(*)'];
select count(1) FROM jobslist
This code as far as I am aware is counting the amount of INT columns set to 1 rather than 0.
No, this is just counting rows in your table and not filtering. If you want to count something with a specific filter you have to add that filter condition:
SELECT COUNT(*) AS `MyCount`
FROM `joblist`
WHERE `MyColumn` = 1; -- assuming MyColumn contains the INT you're looking for
You should stop using mysql_* functions. These extensions have been removed in PHP 7. Learn about prepared statements for PDO and MySQLi and consider using PDO, it's really pretty easy.
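For illustration, a minimal PDO sketch of the paid count (host, database name and credentials are placeholders; the paid column is the assumption from the earlier answer):
<?php
// placeholders: substitute your own host, database and credentials
$pdo = new PDO('mysql:host=localhost;dbname=databasename;charset=utf8',
               'databaseusername', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// prepared statement: count only the rows where paid = 1
$stmt = $pdo->prepare('SELECT COUNT(*) FROM jobslist WHERE paid = ?');
$stmt->execute(array(1));
$total = (int)$stmt->fetchColumn();

echo $total;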
First you should change the deprecated mysql_... functions to mysqli_... (look here for how to). But that's not the reason you fail.
Contrary to what you seem to suppose, COUNT(1) will not look for an INT column having value 1.
Instead you must use COUNT(*) or COUNT(a_column_name) (same result), adding a WHERE clause stating which condition is involved.
Here you seem to want to count records where a given column (say the_column) has value 1. So you should use:
SELECT COUNT(*)
FROM jobslist
WHERE the_column = 1
Last point: you don't need echo "" . in <?php echo "" . $total;?>.
Merely write <?php echo $total;?>.
Imagine I've got a database with lots of data, from which users can search.
The result of a typical search is generally around 20-100 rows, which are then paginated (20 rows per page).
I've thought of two approaches to handle the navigation for these pages and would like to know if there are any pros and/or cons to these and if there are any better alternatives.
Query once, store the results in a $_SESSION variable, and filter rows according to the current page. The reason I came up with this was to retrieve the data once, without having to connect to the database for every page the user navigates to. I don't know if it's better or worse than the other alternative I've come up with.
session_start();
$search = rawurldecode($_GET['search']); //search word
$interval = rawurldecode($_GET['interval']); //rows per page
$page = rawurldecode($_GET['page']); //page
$min_row = $interval * ($page-1)+1;
$max_row = $interval * $page;
//query if (no results stored or first page) && the current search is not the previous search
if((empty($_SESSION['SEARCH_RESULTS']) || $page == 1) && $_SESSION['SEARCH_RESULTS']['TERM'] != $search){
$_SESSION['SEARCH_RESULTS'] = array();
$_SESSION['SEARCH_RESULTS']['TERM'] = $search;
$query = "exec usp_Search '$search'";
$dbh = new DBH;
$dbh->Connect()->Query($query);
while($row = $dbh->Fetch_Array()){
$_SESSION['SEARCH_RESULTS']['ROWS'][] = $row;
}
}
for($j = 0; $j < count($_SESSION['SEARCH_RESULTS']['ROWS']); $j++){
$row = $_SESSION['SEARCH_RESULTS']['ROWS'][$j];
//ignore all other rows not on the page
if($j < ($min_row-1) || $j > ($max_row-1)) continue; // 0-based index: page rows are min_row-1 .. max_row-1
//print stuff
}
Query page by page. The query and the pagination are pretty straightforward.
//Query
$search = rawurldecode($_GET['search']);
$interval = rawurldecode($_GET['interval']);
$page = rawurldecode($_GET['page']);
$min_row = $interval * ($page-1)+1;
$max_row = $interval * $page;
$query = "exec usp_Search '$search', $min_row, $max_row";
$dbh = new DBH;
$dbh->Connect()->Query($query);
while($row = $dbh->Fetch_Array()){
//print stuff
}
SQL procedures from the two alternatives.
The first is just a procedure with a SELECT query:
SELECT
COL1,
COL2,
COL...
FROM TABLE1
WHERE (
COL1 LIKE '%'+#search+'%' OR
COL2 LIKE '%'+#search+'%' OR
COL... LIKE '%'+#search+'%'
)
The second is a procedure that creates a temp table and then selects the rows from variable start to end:
SELECT
COL1,
COL2,
COL...,
ROW_NUMBER() OVER (ORDER BY COL1) AS [ROW_NUMBER]
INTO #result
FROM TABLE1
WHERE (
COL1 LIKE '%'+#search+'%' OR
COL2 LIKE '%'+#search+'%' OR
COL... LIKE '%'+#search+'%'
)
SELECT
COL1,
COL2,
COL...
FROM #result
WHERE ROW_NUMBER BETWEEN #row_start AND #row_end
You really can't store all of the results in $_SESSION, for at least a couple of reasons:
Users may make multiple searches simultaneously
Search results may change between a user's page loads.
The second point depends on how frequently you update your DB, but it is something to consider. The first is major, but you may be able to get around it if you store the results in the session in a clever way (though you don't want $_SESSION getting too large either); see the sketch below. This is irrespective of performance.
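For illustration, one way around the first problem is to key the cached results by the search term (fetch_results() is a hypothetical helper standing in for the usp_Search call from the question):
$key = md5($search);
if (!isset($_SESSION['SEARCH_RESULTS'][$key])) {
    // fetch_results() is a hypothetical helper wrapping the actual query
    $_SESSION['SEARCH_RESULTS'][$key] = fetch_results($search);
}
$rows = $_SESSION['SEARCH_RESULTS'][$key];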
Another consideration about getting all results at once and storing them in $_SESSION is that the majority of your users may only make one search request per visit. I know you would like to think they will always look at all 100 results, but if a large chunk of those results is never even used, you're wasting quite a lot just to save a query or two. It's up to you to figure out how your users navigate.
After reading that this is only going to be used by 20-30 people and only 70 rows a day, I'm satisfied to say you're wasting time trying to improve performance at this point. Go for the code that's easier to update later in case of major changes.
Consider this scenario:
User searches a term with 100 results stored in database.
You query the database once getting all 100 results and you store them in session.
User finds what he was looking for in the first 5 results and leaves the search page.
In the end, you "overheated" the database, fetching 95 rows for nothing. And what if those 100 results were 1,000, or 10,000?
In my opinion, getting all the results in a single query and storing them in the session is a "reliable method" to reduce performance.
I have a table as below,
ID Name Age
----------------------
100 A 10
203 B 20
Now how do I select only row 1 using the MySQL SELECT command, and then add +1 to it to select row 2? In short, I'll be using a for loop to do certain operations.
Thanks.
Sounds like you've got a mix-up. You want to select all the rows you want to iterate through in your for loop with your query, and then iterate through them one by one using PHP's mysql functions, like mysql_fetch_row.
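A minimal sketch of that idea, using the columns from the question (the table name is a placeholder):
$result = mysql_query('SELECT ID, Name, Age FROM `table` ORDER BY ID');
while ($row = mysql_fetch_row($result)) {
    // $row[0] = ID, $row[1] = Name, $row[2] = Age
}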
You should not try to use tables in a linear fashion like this. Set your criteria, sorting as appropriate, and then to select the next row, use your existing criteria limited to one row (note the WHERE clause must come before ORDER BY):
SELECT * FROM `table` ORDER BY `ID` LIMIT 1
SELECT * FROM `table` WHERE ID > 100 ORDER BY `ID` LIMIT 1
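If you really do need to step through the table one row at a time from PHP, a sketch of that pattern (same placeholder table, old mysql_* API as in the question):
$lastId = 0;
do {
    // fetch the next row after the last ID we saw
    $result = mysql_query("SELECT * FROM `table` WHERE ID > $lastId ORDER BY `ID` LIMIT 1");
    $row = mysql_fetch_object($result);
    if ($row) {
        $lastId = $row->ID;
        // do certain operations with $row
    }
} while ($row);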
You'd probably be better off retrieving all the rows that you need, then using this. Note the LIMIT is entirely optional.
// condition and max_number_you_want are placeholders for your own filter and limit
$query = mysql_query('SELECT ID, Name, Age FROM table_name WHERE condition LIMIT max_number_you_want');
while ($row = mysql_fetch_assoc($query))
{
    // Do stuff
    // $row['ID'], $row['Name'], $row['Age']
}
Lots of small queries to the database will execute much more slowly than one decent-sized one.
You should fetch the results into an array (php.net: mysql_fetch_*).
Afterwards you can loop over the array to "do certain operations".
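A minimal sketch of that approach (table name is a placeholder):
$rows = array();
$result = mysql_query('SELECT ID, Name, Age FROM `table`');
while ($row = mysql_fetch_assoc($result)) {
    $rows[] = $row; // collect everything first
}
foreach ($rows as $row) {
    // do certain operations on $row
}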
Yep, this is a pretty common thing to do in PHP. Like the others who have posted, here is my version (using objects instead of arrays):
$result = mysql_query("SELECT * FROM table_name");
while ($row = mysql_fetch_object($result)) {
// Results are now in the $row variable.
// ex: $row->ID, $row->Name, $row->Age
}
I have a script that has a GET variable: $_GET['percentage']
I have a MySQL table of data.
Now let's say that there are 100 rows of data in this table.
In pseudo-code:
SELECT data FROM table
Now, would it be possible to select $_GET['percentage'] percent of random data from the table?
For example (again in pseudo-code):
$_GET['percentage'] = 10;
SELECT 10% of data from table order by rand()
If this IS possible, how could I do it?
In MySQL, it's probably easiest to do this in two queries. First, get the count of rows in the table:
SELECT COUNT(*) FROM MyTable;
Then prepare the query to get random rows:
SELECT ... FROM MyTable ORDER BY RAND() LIMIT ?;
Then execute the prepared query and send the value of the count divided by 10.
Not every problem needs to be solved by a single query.
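The same two steps with PDO prepared statements might look like this sketch (DSN and credentials are placeholders; MyTable is the table name from the queries above):
// placeholders: substitute your own DSN and credentials
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'username', 'password');

// first query: how many rows are there?
$count = (int)$pdo->query('SELECT COUNT(*) FROM MyTable')->fetchColumn();

// second query: fetch 10% of them in random order;
// the LIMIT value must be bound as an integer
$stmt = $pdo->prepare('SELECT * FROM MyTable ORDER BY RAND() LIMIT ?');
$stmt->bindValue(1, (int)round($count / 10), PDO::PARAM_INT);
$stmt->execute();
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);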
Here's an example PHP script, edited to use the old mysql extension.
<?php
// Get the total number of rows in the table.
$sql = "SELECT COUNT(*) FROM Kingdoms";
$result = mysql_query($sql);
$row = mysql_fetch_array($result);
$rows_in_table = $row[0];
// We only want a portion of the rows, specified by the user
// choice of percentage. The count we want is therefore equal
// to the total number of rows in the table multiplied by the
// desired percentage.
$percentage = intval($_GET["percentage"]) / 100.0;
$count = intval(round($rows_in_table * $percentage));
// LIMIT makes the query return at most the number of rows specified.
// Sort randomly first (if the table has too many rows this will be slow),
// then return the first $count rows.
$sql = "SELECT * FROM Kingdoms ORDER BY RAND() LIMIT {$count}";
$result = mysql_query($sql);
while ($row = mysql_fetch_array($result)) {
print_r($row);
}
PS: Always be careful when interpolating a variable into an SQL expression. You should force the variable to a known format -- an integer value in this case. Otherwise you risk creating an SQL Injection vulnerability.
If you have an auto-incremented ID field, you may use:
HAVING ID_FIELD <= CEIL(COUNT(*) * 10 / 100);
Otherwise a stored procedure can help with this.
A query such as
SELECT columnvalue FROM mytable WHERE RAND() <= 0.5
will directly return very nearly 50% of the records.
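To tie that to the percentage from the question, a sketch (intval() forces the user input to a number, which keeps the interpolation safe):
$fraction = intval($_GET['percentage']) / 100.0;
// %F always formats the float with a dot, regardless of locale
$sql = sprintf('SELECT columnvalue FROM mytable WHERE RAND() <= %F', $fraction);
$result = mysql_query($sql);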
Maybe this event can provide a solution:
drop event OEAuditEvent;
DELIMITER $$
CREATE EVENT OEAuditEvent
ON SCHEDULE EVERY 1 SECOND
STARTS '2012-09-05 09:00:00'
DO
BEGIN
    DECLARE a CHAR(20);
    DECLARE b, c, d INT;
    DECLARE done INT DEFAULT FALSE;

    IF CURRENT_TIME() = '23:40:00' THEN
        begin
            DECLARE cur CURSOR FOR
                SELECT OE_User, COUNT(OE_User)
                  FROM RNCM_Status
                 WHERE DATE(OE_Date) = CURDATE()
                 GROUP BY OE_User;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

            OPEN cur;
            read_loop: LOOP
                FETCH cur INTO a, b;
                SET c = CEIL((b * 5) / 100);
                IF done THEN
                    LEAVE read_loop;
                ELSE
                    INSERT INTO OE_Audit(MDN, CAF, UploadedDate, OEUser, OEDate, UserCount, QCCount, intime)
                    SELECT MDN, CAF, UploadedDate, OE_User, OE_Date, b, c, NOW()
                      FROM RNCM_Status
                     WHERE OE_User = a AND DATE(OE_Date) = CURDATE()
                     ORDER BY RAND()
                     LIMIT c;
                END IF;
            END LOOP;
            CLOSE cur;
        end;
    END IF;
END $$
DELIMITER ;
I have a table with roughly 1 million rows. I'm writing a simple program that prints out one field from each row. However, when I started using mysql_pconnect and mysql_query, the query would take a long time; I am assuming the query needs to finish before I can print out even the first row. Is there a way to process the data a bit at a time?
--Edited--
I am not looking to retrieve a small set of the data; I'm looking for a way to process the data a chunk at a time (say, fetch 10 rows, print 10 rows, fetch 10 rows, print 10 rows, etc.) rather than wait for the query to retrieve 1 million rows (who knows how long that takes) and then start the printing.
Printing one million fields will take some time. Retrieving one million records will take some time. Time adds up.
Have you profiled your code? I'm not sure using limit would make such a drastic difference in this case.
Doing something like this
while ($row = mysql_fetch_object($res)) {
echo $row->field."\n";
}
outputs one record at a time as you fetch. (Note that with the default buffered mysql_query, the full result set is still transferred from the server first; see the unbuffered approach further down.)
If you are dealing with a browser you will need something more.
Such as this
ob_start();
$i = 0;
while ($row = mysql_fetch_object($res)) {
    echo $row->field."\n";
    // push the output buffer to the browser every 1000 rows
    if (($i++ % 1000) == 0) {
        ob_flush();
        flush();
    }
}
ob_end_flush();
Do you really want to print one million fields?
The customary solution is to use some kind of output pagination in your web application, showing only part of the result. On SELECT queries you can use the LIMIT keyword to return only part of the data. This is basic SQL stuff, really. Example:
SELECT * FROM table WHERE (some conditions) LIMIT 40,20
shows 20 entries, starting from the 40th (off-by-one mistakes on my part may be possible).
It may be necessary to use ORDER BY along with LIMIT to prevent the ordering from randomly changing under your feet between requests.
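For example (the id column to sort on is an assumption):
SELECT * FROM table WHERE (some conditions) ORDER BY id LIMIT 40,20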
This is commonly needed for pagination. You can use the LIMIT keyword in your SELECT query. Search for LIMIT here:
The LIMIT clause can be used to constrain the number of rows returned by the SELECT statement. LIMIT takes one or two numeric arguments, which must both be nonnegative integer constants (except when using prepared statements).
With two arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1):
SELECT * FROM tbl LIMIT 5,10; # Retrieve rows 6-15
To retrieve all rows from a certain offset up to the end of the result set, you can use some large number for the second parameter. This statement retrieves all rows from the 96th row to the last:
SELECT * FROM tbl LIMIT 95,18446744073709551615;
With one argument, the value specifies the number of rows to return from the beginning of the result set:
SELECT * FROM tbl LIMIT 5; # Retrieve first 5 rows
In other words, LIMIT row_count is equivalent to LIMIT 0, row_count.
You might be able to use
mysqli::use_result
combined with a flush to output the data set to the browser. I know flush can be used to output data to the browser incrementally, as I have used it before to do just that; however, I am not sure whether mysqli::use_result is the correct function to retrieve incomplete result sets.
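If it is, a sketch might look like this (assuming an open mysqli connection in $mysqli; MYSQLI_USE_RESULT makes query() stream rows from the server instead of buffering the whole set):
$result = $mysqli->query('SELECT `field` FROM `table`', MYSQLI_USE_RESULT);
while ($row = $result->fetch_object()) {
    echo $row->field, "\n";
    flush(); // push output to the browser incrementally
}
$result->close();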
This is how I do something like that in Oracle. I'm not sure how it would cross over:
declare
my_counter integer := 0;
begin
for cur in (
select id from table
) loop
begin
-- do whatever your trying to do
update table set name = 'steve' where id = cur.id;
my_counter := my_counter + 1;
if my_counter > 500 then
my_counter := 0;
commit;
end if;
end;
end loop;
commit;
end;
An example using the basic mysql driver.
define( 'CHUNK_SIZE', 500 );
$result = mysql_query( 'select count(*) as num from `table`' );
$row = mysql_fetch_assoc( $result );
$totalRecords = (int)$row['num'];
$offsets = ceil( $totalRecords / CHUNK_SIZE );
for ( $i = 0; $i < $offsets; $i++ )
{
$result = mysql_query( "select * from `table` limit " . CHUNK_SIZE . " offset " . ( $i * CHUNK_SIZE ) );
while ( $row = mysql_fetch_assoc( $result ) )
{
// your per-row operations here
}
unset( $result, $row );
}
This will iterate over your entire row volume, but do so only 500 rows at a time to keep memory usage down.
It sounds like you're hitting the limits of various buffer sizes within the MySQL server... One thing you could do is specify only the field you want in the SQL statement to reduce the buffer size; another is to play around with the various admin settings.
OR, you can use a pagination-like method, but have it output everything on one page...
(pseudocode)
function q($part) {
    $off  = $part * SIZE_OF_PARTITIONS;
    $size = SIZE_OF_PARTITIONS;
    // double quotes so $off and $size actually interpolate
    return execute_and_return_sql("SELECT `field` FROM `table` LIMIT $off, $size");
}
$ii = 0;
while ($elements = q($ii)) {
print_fields($elements);
$ii++;
}
Use mysql_unbuffered_query() or if using PDO make sure PDO::MYSQL_ATTR_USE_BUFFERED_QUERY is false.
Also see this similar question.
Edit: and as others have said, you may wish to combine this with flushing your output buffer after each batch of processing, depending on your circumstances.
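Combining the two, a sketch (same field/table placeholders as in the other answers):
// unbuffered: rows stream from the server as you fetch them
$result = mysql_unbuffered_query('SELECT `field` FROM `table`');
$i = 0;
while ($row = mysql_fetch_object($result)) {
    echo $row->field."\n";
    if ((++$i % 1000) == 0) {
        flush(); // push output to the browser every 1000 rows
    }
}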