In PHP, what is a way to display only strings longer than 50 characters, truncate them so no more than 130 characters are shown, and limit the output to one result?
So, for example, say I have 30 rows in a result set but I only want to show the newest row that meets these criteria. If the newest row has 25 characters it should not display; it should only display the newest one that has a string length of 50 or more characters.
Use an SQL query. For finding the newest, you want MAX on either an auto-increment primary key (I'll call it id) or a date/time column recording when the row was created (say, time_created).
So I am assuming a table with: id (int), stringVal (string: char(), varchar(), whatever)
SELECT id, SUBSTRING(stringVal, 1, 130) AS stringVal
FROM yourTable
WHERE LENGTH(stringVal) > 50
ORDER BY id DESC
LIMIT 1
Replace id with a time field if you have to. You're going to have a hard time finding the newest without one of them, but you can always arbitrarily pick one row.
--Edit-- a sample of using the mysql_* functions in PHP to run the above query and fetch the desired output:
$sql = "SELECT MAX(id), SUBSTRING(stringVal, 1, 130) FROM yourTable WHERE LENGTH(stringVal) > 30";
$r = mysql_query($sql, $conn); //im hoping $conn or something like it is already set up
$row = mysql_fetch_assoc($r);
$desiredString = $row['stringVal'];
Something like this should do; just make sure that you grab your data sorted newest first. The break statement ensures the loop terminates after the first result matching your criteria is found:
foreach ($array_returned_from_query as $row)
{
    if (strlen($row['stringVal']) > 50)
    {
        echo substr($row['stringVal'], 0, 130);
        break;
    }
}
Without using PHP loops, I'm trying to find an efficient way to generate a unique number that's different from all existing values in a MySQL database. I tried to do that in PHP, but it's not efficient because it will have to deal with lots of loops in the future. Recently, I tried to do this with MySQL and found this sqlfiddle solution, but it doesn't work: sometimes it generates the same number as in the table, and it doesn't check every value in the table. I tried this too, but it didn't help. They generally give this query:
SELECT *, FLOOR(RAND() * 9) AS random_number FROM Table1
WHERE "random_number" NOT IN (SELECT tracker FROM Table1)
I will work with 6-digit numbers in the future, so I need this to be efficient and fast. I could use different methods, such as pre-generating the numbers, to be more efficient, but I don't know how to handle that. I would be glad if you could help.
EDIT: Based on @Wiimm's solution, I can fill the table with 999,999 different 'random' unique numbers without using PHP loop functions. Then I developed a method where the IDs of deleted rows can be reused. This is how I managed it:
Duplicate your original table and name it "table_deleted". (All columns must be the same.)
Create a trigger in MySQL. To do that, open the SQL tab in MySQL and run this code (it simply moves the deleted row to "table_deleted"):
MySQL code:
DELIMITER $$
CREATE TRIGGER `table_before_delete` BEFORE DELETE
ON `your_table` FOR EACH ROW
BEGIN
    INSERT INTO table_deleted
    SELECT * FROM your_table WHERE id = old.id;
END$$
DELIMITER ;
Create another trigger. This one moves the row back to the original table when it's updated.
MySQL code:
DELIMITER $$
CREATE TRIGGER `table_after_update` AFTER UPDATE
ON `table_deleted` FOR EACH ROW
BEGIN
    INSERT INTO your_table
    SELECT * FROM table_deleted WHERE id = old.id;
END$$
DELIMITER ;
The PHP code that I use (the number column must be nullable for this code to work):
PHP code:
//CHECK IF TABLE_DELETED HAS ROWS
$deleted = $db->query('SELECT COUNT(*) AS num_rows FROM table_deleted');
$deletedcount = $deleted->fetchColumn();
//IF TABLE_DELETED HAS ROWS, RUN THIS
if ($deletedcount > 0) {
    //UPDATE THE ROW THAT HAS THE MINIMUM ID
    $update = $db->prepare("UPDATE table_deleted SET value1 = ?, value2 = ?, value3 = ?, value4 = ? ORDER BY id LIMIT 1");
    $update->execute(array($value1, $value2, $value3, $value4));
    //AFTER THE UPDATE, DELETE THAT ROW
    $delete = $db->prepare("DELETE FROM table_deleted ORDER BY id LIMIT 1");
    $delete->execute();
}
else {
    //IF TABLE_DELETED IS EMPTY, ADD YOUR VALUES (EXCEPT THE RANDOM NUMBER)
    $query = $db->prepare("INSERT INTO your_table SET value1 = ?, value2 = ?, value3 = ?, value4 = ?");
    $query->execute(array($value1, $value2, $value3, $value4));
    //USING @Wiimm's SOLUTION FOR GENERATING A RANDOM-LOOKING UNIQUE NUMBER
    $last_id = $db->lastInsertId();
    $number = str_pad($last_id * 683567 % 1000000, 6, '0', STR_PAD_LEFT);
    //INSERT THAT RANDOM NUMBER INTO THE CURRENT ROW
    $insertnumber = $db->prepare("UPDATE your_table SET number = :number WHERE id = :id");
    $insertnumber->execute(array("number" => $number, "id" => $last_id));
}
MySQL triggers do the rest for you.
Random numbers and non-repeating numbers are two fundamentally different, mutually exclusive things. Would a sequence of numbers that only looks random be enough for you?
If yes, then I have a solution for it:
Use auto increment of your database.
Multiply the Id by a prime number. Other manipulations like bit rotations are possible too.
About the prime number: it is important that the value range (in your case 1000000) and the multiplier have no common prime divisors; otherwise the sequence of numbers is much shorter.
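To see why the coprimality matters, here is a quick sketch with a toy range of 10 (the multipliers 3 and 4 are illustrative only):
// Multiplier 3 is coprime to 10 and visits all 10 residues;
// multiplier 4 shares the factor 2 with 10 and cycles through only 5.
foreach ([3, 4] as $m) {
    $seen = [];
    for ($id = 1; $id <= 10; $id++) {
        $seen[$id * $m % 10] = true;
    }
    echo "$m: " . count($seen) . " distinct values\n"; // 3: 10, 4: 5
}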
Here is an example for 6 digits:
MYSQL_INSERT_INSTRUCTION;
$id = $mysql_conn->insert_id;
$random_id = $id * 683567 % 1000000;
With this you get:
1: 683567
2: 367134
3: 50701
4: 734268
5: 417835
6: 101402
7: 784969
8: 468536
9: 152103
10: 835670
11: 519237
12: 202804
13: 886371
14: 569938
15: 253505
16: 937072
17: 620639
18: 304206
19: 987773
20: 671340
After 1,000,000 records the whole sequence repeats. I recommend using the full range of 32 bits, so the sequence has 4,294,967,296 different numbers. In that case use a much larger prime, e.g. about 2.8e9; I always use a prime ~ 0.86*RANGE for this.
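For illustration, a minimal sketch of that 32-bit variant, assuming the GMP extension is available ($id is the auto-increment id from the example above; in practice you would compute the prime once and hard-code it):
$range = 2 ** 32;
// Find a prime near 0.86 * range; gmp_nextprime returns the next prime above its argument.
$prime = gmp_intval(gmp_nextprime((int)(0.86 * $range)));
// GMP multiplication avoids 64-bit integer overflow for very large ids.
$random_id = gmp_intval(gmp_mod(gmp_mul($id, $prime), $range));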
Alternatives
Instead of $random_id = $id * 683567 % 1000000; you can use other calculations to disguise your algorithm. Some examples:
# add a value
$random_id = ( $id * 683567 + 12345 ) % 1000000;
# add a value and swap higher and lower part
$temp = ( $id * 683567 + 12345 ) % 1000000;
$random_id = intdiv($temp, 54321) + ($temp % 54321) * 54321;
If you can move away from the 6-digit numeric requirement, I would, as it would allow you to create truly random strings with some sort of uuid() function.
However, if this needs to be done outside of PHP and has to be 6 digit numbers, I would use an auto-increment column in MySQL.
If there needs to be some randomness, you can adjust the auto-increment column by a random increase:
alter table tableName auto_increment = [insert new starting number here];
This may of course land you in 7-digit numbers rather quickly.
Alternatively, I'd see the solution being PHP picking a random number and checking it against the DB (or pulling in the rows of the DB first to check against, avoiding a DB query every time).
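For completeness, a hedged sketch of that pick-and-check approach ($db is an assumed PDO connection and the number column is assumed to have a UNIQUE index):
do {
    $candidate = random_int(100000, 999999); // 6-digit candidate
    $stmt = $db->prepare('SELECT 1 FROM your_table WHERE number = ?');
    $stmt->execute([$candidate]);
} while ($stmt->fetchColumn() !== false);
// Insert $candidate promptly; the UNIQUE index catches any race
// with a concurrent request that picked the same number.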
I'm making a mess of cursor pagination, based on an id (in my case a ULID). I want to return an array with the results, next_cursor and prev_cursor.
Obtaining the NextCursor is very easy: I only have to add one more to the limit, that is to say, if I have a limit of 10, I request 11 records, and if I get 11 records then the NextCursor is result 11. But for the PrevCursor the only thing I can think of is to do an additional query on top of the one I am already doing. Example:
$limit = 10;
$result = query("SELECT * FROM Table WHERE id <= '$cursor' ORDER BY id DESC LIMIT " . ($limit + 1)); // pseudocode
$results = array_slice($result, 0, $limit);
$nextCursor = array_slice($result, $limit, 1);
And now to get the PrevCursor, I do, as I said before, an additional query:
$prevCursor = query("SELECT * FROM Table WHERE id > '$cursor' ORDER BY id ASC LIMIT 1"); // pseudocode
That way my API can return the following array to the frontend
return [
'data' => $results,
'next_cursor' => $nextCursor,
'prev_cursor' => $prevCursor
];
Now I rephrase the same question again: is there any way to do this without having to do an additional MySQL query to get the PrevCursor, i.e. in the same query or in some other way? It's the first time I've done this, and I'm a bit lost.
Thanks very much!
Indexing a column allows you to quickly find a specific row by its ULID and scan nearby rows forward and backward. But scanning forward and scanning backward are internally two different operations, so if you insist on having both results, you are doomed to perform two different operations.
There's some SQL syntactic sugar to help you hide those two internally executed operations inside one query, but first let me clarify your objective a little.
What you are trying to build here is actually a window, not a page.
Pagination is when you have an ordered list of rows split into pages of some size, and the user references the index of the page they want to browse, e.g. page #0, page #1, etc. The last page might have fewer items than the page size; also, if the total number of rows is less than a page, the first and last page are the same page, and that's OK.
The LIMIT and OFFSET operators are there to support that use case. A link to the previous page is simply max(0, current_page - 1) and a link to the next page is min(max_pages, current_page + 1).
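For illustration, classic page-indexed pagination could look like this (table and column names are assumptions):
$pageSize = 10;
$page = max(0, (int)($_GET['page'] ?? 0)); // clamp the page index at 0
$sql = sprintf('SELECT * FROM Table ORDER BY id LIMIT %d OFFSET %d',
               $pageSize, $page * $pageSize);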
On the opposite side, windowing is when you have an ordered list of rows, and when the user references some specific row by its ULID, you fetch a few rows before and/or after the queried row. It's like grep -C 10 in bash.
You can emulate a window using sub-selects and UNION.
$limit = 10;
// Fetch limit+1 results after (and including) the cursor id AND
// limit results before it.
$result = query("SELECT * FROM (
    (
        SELECT * FROM Table
        WHERE id >= '$cursor' ORDER BY id ASC LIMIT " . ($limit + 1) . "
    ) UNION (
        SELECT * FROM Table
        WHERE id < '$cursor' ORDER BY id DESC LIMIT $limit
    )
) TableAlias ORDER BY id"); // pseudocode: fetch all rows as associative arrays
// As we are actually fetching some rows before the cursor,
// we have to find its position in the result set...
$cursor_index = array_search($cursor, array_column($result, 'id'));
// ...and throw away the rest of the rows.
$results = array_slice($result, $cursor_index, $limit);
// Here the cursors are always the first and last items of the result set.
$prevCursor = $result[0]['id'];
$nextCursor = $result[count($result) - 1]['id'];
return [
'data' => $results,
'prev_cursor' => $prevCursor,
'next_cursor' => $nextCursor
];
Since MySQL 8.0 you have a whole set of window functions; for your case, LEAD() and LAG() can move all the cursor calculations and slicing away to your MySQL server.
$limit = 10;
// Wrap the same query as above in sub-selects; LAG/LEAD run after
// WHERE, so we still need the UNION to fetch the previous cursor.
$result = query("
WITH
tab1 AS (SELECT * FROM Table WHERE id >= '$cursor' ORDER BY id ASC LIMIT " . ($limit + 1) . "),
tab2 AS (SELECT * FROM Table WHERE id < '$cursor' ORDER BY id DESC LIMIT $limit),
tab3 AS (SELECT * FROM tab1
         UNION ALL
         SELECT * FROM tab2 ORDER BY id),
tab4 AS (SELECT
             *,
             LAG(id, $limit) OVER (ORDER BY id) AS prevcursor,
             LEAD(id, $limit) OVER (ORDER BY id) AS nextcursor
         FROM tab3)
SELECT * FROM tab4
WHERE id >= '$cursor' LIMIT $limit"); // pseudocode: fetch all rows
// The calculated cursor ids are always on the first row of the result set.
$prevCursor = $result[0]['prevcursor'];
$nextCursor = $result[0]['nextcursor'];
// (optionally) strip unwanted columns from result
$results = array_map(function ($a) { return array_diff_key($a, array_flip(array('prevcursor', 'nextcursor'))); }, $result);
return [
'data' => $results,
'prev_cursor' => $prevCursor,
'next_cursor' => $nextCursor
];
That should work well if you don't ever delete rows.
Now, consider the following. Both pagination and windowing do not work well with unstable lists.
E.g. if new rows are continually appended to the end of the set, the last page is constantly moving forward. So when one user opens the 'last' page and sees, say, three items there, another user might add another bunch of items, and their view of the 'last' page will be different.
What's worse, if your table allows deleting rows, the whole set of pages from the one containing the deleted row onward is rearranged. This leads to a very nasty user experience where a user clicking 'next' accidentally skips some items, or clicking 'previous' sees items they have already seen before.
To overcome those deficiencies you might want to redesign your API such that querying 'previous' page and 'next' page be semantically clearly different from querying 'current' page.
That is, you would need three API endpoints (sketched after this list):
query a row by ULID (and a set of up to N rows after it) - the initial user entry point from, e.g., search or a catalog tree.
query a set of N rows before a specific ULID. You pass the ULID of the first row in the window the user is currently looking at. If there are no rows before the given one, the result set is empty; you might either display a notification message to the user or silently redirect them to the first endpoint.
query a set of N rows after a specific ULID. You pass the ULID of the last row in the window the user is currently looking at. If there are no rows after the given one, the result set is empty; you might either display a notification message to the user or silently redirect them back.
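A minimal sketch of the three queries behind those endpoints (table and column names are assumptions):
// 1. Row by ULID plus up to N-1 following rows (the entry point):
$sqlEntry  = 'SELECT * FROM Table WHERE id >= :ulid ORDER BY id ASC LIMIT :n';
// 2. N rows before a given ULID (comes back newest-first; reverse it in PHP):
$sqlBefore = 'SELECT * FROM Table WHERE id < :ulid ORDER BY id DESC LIMIT :n';
// 3. N rows after a given ULID:
$sqlAfter  = 'SELECT * FROM Table WHERE id > :ulid ORDER BY id ASC LIMIT :n';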
If you design your API that way, you get the following benefits:
all three endpoints are implemented with only a simple ORDER BY and LIMIT
no need for a second query, either explicit or implicit
your next/previous window results will never have misses or duplicates compared to previously seen windows.
The only drawback here is that the original row the user refers to can be deleted as well. To overcome this, you might want to add a boolean deleted flag to your table schema and set it to true instead of actually deleting the row.
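A sketch of that soft-delete idea, assuming a deleted TINYINT(1) column that defaults to 0:
// Mark the row instead of deleting it, so ULIDs held by clients stay resolvable:
$db->prepare('UPDATE Table SET deleted = 1 WHERE id = ?')->execute([$id]);
// Every window query then filters on the flag:
$sql = 'SELECT * FROM Table WHERE deleted = 0 AND id >= :ulid ORDER BY id ASC LIMIT :n';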
After reading @shomeax's comment and thinking a little more, I can suggest encoding the cursor in base64 and making it additionally contain the previous cursor. For example:
[$prevCursor, $curCursor] = explode(':', base64_decode($request['cursor']));
$limit = 10;
$prevPlusCurPagesLimit = $limit * 2;
$ulids = query("SELECT ulid FROM Table WHERE ulid <= '$prevCursor' ORDER BY ulid DESC LIMIT " . ($prevPlusCurPagesLimit + 1)); // pseudocode: fetch the ulid column as a flat array
$resultUlids = array_slice($ulids, $limit, $limit);
$nextCursor = array_slice($ulids, $prevPlusCurPagesLimit, 1)[0] ?? null;
$prevPrevCursor = reset($ulids);
$response = [
    'data' => $resultUlids,
    'prevCursor' => base64_encode("$prevPrevCursor:$prevCursor"),
    'nextCursor' => base64_encode("$curCursor:$nextCursor"),
];
I didn't try this approach myself, but it is partially based on this article https://slack.engineering/evolving-api-pagination-at-slack/ and looks workable.
You don't need to request more; only use < and > rather than <= and >=.
Then you can use the last id in $results for next and the first id for previous.
Assuming that
the "cursor" is the ID from which the next portion starts
and IDs are sequential and without gaps,
and limit is not changed from one query to another
you could get the prev cursor by subtracting/adding (depending on sorting) $limit from/to the current cursor: $prevCursor = $results[0]['id'] - $limit
And if the ID column has gaps, I suppose there is no reasonable way to get the prev cursor without an additional query. You could only turn it into a sub-query or use UNION, but that does not make a big difference.
Consider this...
You fetch the 5 rows for the current page. Then the "previous" page ends before the first id on this page and the "next" page starts after the last id on the current page.
Example: The current page contains ids 65, 67, 71, 82, 91. This finds the 5 rows for the previous page:
SELECT ...
WHERE id < 65
ORDER BY id DESC
LIMIT 5;
(They will be in reverse order, but that is easy to fix.) For the "next" page (in proper order):
SELECT ...
WHERE id > 91
ORDER BY id ASC
LIMIT 5;
As another tip: fetching an extra row (6 instead of 5) lets you cheaply discover whether you are at the "end", so you can suppress the [Next] or [Prev] button.
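In PDO that tip might look like this ($pdo, the table, and the column names are assumptions):
$pageSize = 5;
$stmt = $pdo->prepare('SELECT * FROM t WHERE id > ? ORDER BY id ASC LIMIT ' . ($pageSize + 1));
$stmt->execute([$lastId]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
$hasNext = count($rows) > $pageSize;      // suppress [Next] when false
$rows = array_slice($rows, 0, $pageSize); // display only the page itself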
More: Pagination
Granted, this technique does not deliver the 5 rows for the next/previous page, but do you really need that? Since the SELECTs should be quite efficient, I don't necessarily see a drawback in doing more than one SELECT, or in combining SELECTs with UNION.
I am going to delete my previous answer, despite its upvote, because it is clearly the wrong approach.
Update
Given that you are retrieving on a column, id, that is presumably unique and indexed: when a "page" of N rows is returned (where N is 10), you need to pass up either the id of the first row (the one with the greater value, since we are sorting by descending id) or of the last row as query parameter lastId, along with a direction flag parameter directionFlag, either F for forward or B for backward, giving the direction of "paging." It should then be possible to seek directly to the correct rows as follows (I am assuming we are using PDO for MySQL access):
define('PAGE_SIZE', 10);
$limit = PAGE_SIZE;
// lastId parameter specified? It will not be present on the initial request:
if (isset($_REQUEST['lastId'])) {
    $lastId = $_REQUEST['lastId'];
    $direction = $_REQUEST['direction']; // 'F'orward or 'B'ackward
    if ($direction == 'F') {
        $sql = "SELECT * FROM Table WHERE id < :id ORDER BY id DESC LIMIT $limit";
    }
    else {
        // paging backward:
        $sql = "SELECT * FROM (SELECT * FROM Table WHERE id > :id ORDER BY id ASC LIMIT $limit) sq ORDER BY id DESC";
    }
    $params = [':id' => $lastId];
}
else {
    // this is the initial request:
    $sql = "SELECT * FROM Table ORDER BY id DESC LIMIT $limit";
    $params = [];
}
$stmt = $pdo->prepare($sql);
$stmt->execute($params);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
// The id from $rows[0] will be passed back as lastId with direction flag 'B' for paging backward,
// and the id from $rows[PAGE_SIZE-1] will be passed back as lastId with direction flag 'F' for paging forward.
I'm running into some weird behaviour when fetching values with SQL.
$result = mysqli_query($conn,"SELECT DISTINCT value FROM table ORDER BY value ASC");
while($row[] = mysqli_fetch_array($result));
$maxcat = count($row);
There are 8 unique values in my table, yet $maxcat returns "9". Am I missing something? I'm using the information to populate a list of categories. I knew I'd have to subtract one from maxcat to allow for $row[0], but I'm having to subtract two to make up for this empty value that won't get lost.
for ($i=0; $i<=($maxcat-2); $i++)
Cheers!
Thanks @scrowler & @faintsignal! Using mysqli_num_rows() rather than count() is exactly what I needed. Functioning code below.
$result = mysqli_query($conn,"SELECT DISTINCT value FROM table ORDER BY value ASC");
$maxcat = mysqli_num_rows($result);
while($row[] = mysqli_fetch_array($result));
The while condition gets evaluated once for every row in your query result, plus one additional time to detect that there are no more rows left. So if you have 8 rows in your result, the while condition is evaluated 8 + 1 = 9 times. Hence you add 9 entries to your array $row, the final entry having the value NULL.
Note that if your while loop had a body, that body would only be executed 8 times, because on the 9th pass the condition evaluates to false.
As @scrowler pointed out, if you just want to count the number of rows in the result, you can simply use mysqli_num_rows.
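For reference, the usual loop idiom avoids the trailing NULL entry entirely:
$rows = array();
while ($row = mysqli_fetch_array($result)) { // body runs once per actual row
    $rows[] = $row;
}
$maxcat = count($rows); // 8, matching mysqli_num_rows($result)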
I'm wondering why my MySQL COUNT(*) query always results in ->num_rows being equal to 1.
$result = $db->query("SELECT COUNT( * ) FROM u11_users");
print $result->num_rows; // prints 1
Whereas fetching "real data" from the database works fine.
$result = $db->query("SELECT * FROM u11_users");
print $result->num_rows; // prints the correct number of elements in the table
What could be the reason for this?
Because COUNT(*) returns just one row, containing the number of rows.
Example:
Using COUNT(*), the result is something like the following:
array('COUNT(*)' => 20);
echo $result['COUNT(*)']; // 20
Reference
It should return one row*. To get the count you need to:
$result = $db->query("SELECT COUNT(*) AS C FROM u11_users");
$row = $result->fetch_assoc();
print $row["C"];
* since you are using an aggregate function and not using GROUP BY
That's why COUNT exists: it always returns one row, with the number of selected rows.
http://dev.mysql.com/doc/refman/5.1/en/counting-rows.html
Count() is an aggregate function which means it returns just one row that contains the actual answer. You'd see the same type of thing if you used a function like max(id); if the maximum value in a column was 142, then you wouldn't expect to see 142 records but rather a single record with the value 142. Likewise, if the number of rows is 400 and you ask for the count(*), you will not get 400 rows but rather a single row with the answer: 400.
So, to get the count, you'd run your first query, and just access the value in the first (and only) row.
By the way, you should go with this count(*) approach rather than querying for all the data and using $result->num_rows, because querying for all rows takes far longer: you're pulling back a bunch of data you do not need.
I have a table with roughly 1 million rows. I'm writing a simple program that prints out one field from each row. However, when I started using mysql_pconnect and mysql_query, the query would take a long time; I am assuming the query needs to finish before I can print even the first row. Is there a way to process the data a bit at a time?
--Edited--
I am not looking to retrieve a small set of the data, I'm looking for a way to process the data a chunk at a time (say fetch 10 rows, print 10 rows, fetch 10 rows, print 10 rows etc etc) rather than wait for the query to retrieve 1 million rows (who knows how long) and then start the printing.
Printing one million fields will take some time. Retrieving one million records will take some time. Time adds up.
Have you profiled your code? I'm not sure using limit would make such a drastic difference in this case.
Doing something like this
while ($row = mysql_fetch_object($res)) {
echo $row->field."\n";
}
outputs one record at a time. It does not wait for the whole resultset to be returned.
If you are dealing with a browser you will need something more.
Such as this
ob_start();
$i = 0;
while ($row = mysql_fetch_object($res)) {
echo $row->field."\n";
if (($i++ % 1000) == 0) {
        ob_flush();
        flush(); // also push the web server's output buffer to the client
}
}
ob_end_flush();
Do you really want to print one million fields?
The customary solution is to use some kind of output pagination in your web application, showing only part of the result. On SELECT queries you can use the LIMIT keyword to return only part of the data. This is basic SQL stuff, really. Example:
SELECT * FROM table WHERE (some conditions) LIMIT 40,20
shows 20 entries, starting from the 41st (the offset counts from 0, so offset 40 skips the first 40 rows).
It may be necessary to use ORDER BY along with LIMIT to prevent the ordering from randomly changing under your feet between requests.
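For example, anchoring the order on a unique column (here an assumed primary key id):
SELECT * FROM table WHERE (some conditions) ORDER BY id LIMIT 40,20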
This is commonly needed for pagination. You can use the LIMIT keyword in your SELECT query. From the MySQL manual:
The LIMIT clause can be used to constrain the number of rows returned by the SELECT statement. LIMIT takes one or two numeric arguments, which must both be nonnegative integer constants (except when using prepared statements).
With two arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1):
SELECT * FROM tbl LIMIT 5,10; # Retrieve rows 6-15
To retrieve all rows from a certain offset up to the end of the result set, you can use some large number for the second parameter. This statement retrieves all rows from the 96th row to the last:
SELECT * FROM tbl LIMIT 95,18446744073709551615;
With one argument, the value specifies the number of rows to return from the beginning of the result set:
SELECT * FROM tbl LIMIT 5; # Retrieve first 5 rows
In other words, LIMIT row_count is equivalent to LIMIT 0, row_count.
You might be able to use
mysqli::use_result
combined with a flush to output the data set to the browser. I know flush can be used to output data to the browser incrementally, as I have used it before to do just that; however, I am not sure if mysqli::use_result is the correct function to retrieve incomplete result sets.
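A hedged sketch of what that could look like (connection parameters are placeholders; MYSQLI_USE_RESULT streams rows instead of buffering the whole set on the client):
$mysqli = new mysqli('localhost', 'user', 'pass', 'db');
$result = $mysqli->query('SELECT `field` FROM `table`', MYSQLI_USE_RESULT);
while ($row = $result->fetch_object()) {
    echo $row->field, "\n";
    flush(); // push output to the browser incrementally
}
$result->free();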
This is how I do something like that in Oracle. I'm not sure how it would cross over:
declare
my_counter integer := 0;
begin
for cur in (
select id from table
) loop
begin
-- do whatever your trying to do
update table set name = 'steve' where id = cur.id;
my_counter := my_counter + 1;
if my_counter > 500 then
my_counter := 0;
commit;
end if;
end;
end loop;
commit;
end;
An example using the basic mysql driver.
define( 'CHUNK_SIZE', 500 );
$result = mysql_query( 'select count(*) as num from `table`' );
$row = mysql_fetch_assoc( $result );
$totalRecords = (int)$row['num'];
$offsets = ceil( $totalRecords / CHUNK_SIZE );
for ( $i = 0; $i < $offsets; $i++ )
{
$result = mysql_query( "select * from `table` limit " . CHUNK_SIZE . " offset " . ( $i * CHUNK_SIZE ) );
while ( $row = mysql_fetch_assoc( $result ) )
{
// your per-row operations here
}
unset( $result, $row );
}
This will iterate over your entire row volume, but do so only 500 rows at a time to keep memory usage down.
It sounds like you're hitting the limits of various buffer sizes within the MySQL server. Some things you could do: specify only the field you want in the SQL statement to reduce the buffer size, or play around with the various admin settings.
OR, you can use a pagination-like method, but have it output everything on one page...
(pseudocode)
function q($part) {
$off = $part*SIZE_OF_PARTITIONS;
$size = SIZE_OF_PARTITIONS;
    return execute_and_return_sql("SELECT `field` FROM `table` LIMIT $off, $size");
}
$ii = 0;
while ($elements = q($ii)) {
print_fields($elements);
$ii++;
}
Use mysql_unbuffered_query() or, if using PDO, make sure PDO::MYSQL_ATTR_USE_BUFFERED_QUERY is false.
Also see this similar question.
Edit: and as others have said, you may wish to combine this with flushing your output buffer after each batch of processing, depending on your circumstances.
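For instance, a sketch using the old mysql_* API from the question, plus the PDO attribute mentioned above:
// mysql_unbuffered_query() streams rows as MySQL produces them:
$res = mysql_unbuffered_query('SELECT field FROM `table`', $conn);
while ($row = mysql_fetch_object($res)) {
    echo $row->field."\n";
}
// PDO equivalent: disable buffered queries before running the SELECT.
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);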