PHP MySQL timeout problem

I have an SQL query like this. When I execute it through my PHP file it gives a timeout error, but when I run the same query through phpMyAdmin it returns results within seconds.
SELECT
cake_apartments.id,
cake_apartments.Headline,
cake_apartments.Description,
cake_apartments.photoset_id,
cake_apartments.Rental_Price,
cake_apartments.Bedrooms,
cake_apartments.Bathrooms
FROM
cake_apartments,cake_neighborhoods
WHERE
(cake_apartments.Rented = 0)
AND
(cake_apartments.status = 'Active')
ORDER BY
cake_neighborhoods.Name DESC
I know that increasing the timeout may solve the problem, but I don't want this query to take more than 30 seconds.

The problem is that you haven't specified the relationship between your two tables. It returns quickly in phpMyAdmin because phpMyAdmin adds a LIMIT clause, which lets the MySQL server stop sending rows early, never getting near the timeout.
You think the query is just retrieving the rows where the apartments are not rented and are active, but what you're really getting is that number of rows * the number of neighborhoods in your database, a cartesian product: e.g. 1,000 matching apartments * 500 neighborhoods would produce 500,000 result rows.
Rewrite your query like this:
SELECT
cake_apartments.id,
cake_apartments.Headline,
cake_apartments.Description,
cake_apartments.photoset_id,
cake_apartments.Rental_Price,
cake_apartments.Bedrooms,
cake_apartments.Bathrooms
FROM
cake_apartments
JOIN
cake_neighborhoods
ON
cake_neighborhoods.id = cake_apartments.neighborhood_id
WHERE
(cake_apartments.Rented = 0)
AND
(cake_apartments.status = 'Active')
ORDER BY
cake_neighborhoods.Name DESC
Note that I only guessed at how the two tables are related in the ON clause, so if I got it wrong you'll have to adjust it.
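For completeness, a minimal PHP sketch of running the corrected query, assuming $link is an open mysqli connection (table and column names come from the question, the column list is abbreviated, and neighborhood_id is still only the guessed foreign key):
$sql = "SELECT cake_apartments.id, cake_apartments.Headline, cake_apartments.Rental_Price
        FROM cake_apartments
        JOIN cake_neighborhoods ON cake_neighborhoods.id = cake_apartments.neighborhood_id
        WHERE cake_apartments.Rented = 0 AND cake_apartments.status = 'Active'
        ORDER BY cake_neighborhoods.Name DESC";
$result = mysqli_query($link, $sql);
if ($result === false) {
    die(mysqli_error($link)); // surface the real error instead of a silent timeout
}
while ($row = mysqli_fetch_assoc($result)) {
    echo $row['Headline'] . "<br>";
}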

If you ONLY want rows where a match exists, your SQL needs to perform an INNER JOIN. If cake_neighborhoods.cake_apartments_id is the name of the foreign key in your cake_neighborhoods table, I suggest rewriting the query as follows (the comma-join with a WHERE condition below is equivalent to an INNER JOIN):
SELECT
cake_apartments.id,
cake_apartments.Headline,
cake_apartments.Description,
cake_apartments.photoset_id,
cake_apartments.Rental_Price,
cake_apartments.Bedrooms,
cake_apartments.Bathrooms,
cake_neighborhoods.Name
FROM
cake_apartments,cake_neighborhoods
WHERE
(cake_apartments.Rented = 0)
AND
(cake_apartments.status = 'Active')
AND
cake_neighborhoods.cake_apartments_id = cake_apartments.id
ORDER BY
cake_neighborhoods.Name DESC

Related

Missing rows when duplicating data from another table using MySQL

INSERT cash_transaction2017 SELECT * FROM cash_transaction WHERE created_at < "2018-01-01 00:00:00"
The above is the script I executed; it reported a total of 336,090 rows.
However, when I browse the table from phpMyAdmin, I can only see 334,473 rows.
After ordering the rows in the cash_transaction2017 table in ascending order, I found that some rows seem to be missing, because the last created_at is different from that of cash_transaction.
Why is this happening? The result is the same whether I execute the script from the MySQL console or from PHP code.
I also tried to use mysqldump with --skip-tz-utc and it's also missing some rows.
UPDATE
SELECT count(*) FROM `cash_transaction2017`
SELECT count(*) FROM `cash_transaction` WHERE created_at < "2018-01-01 00:00:00"
Apparently executing these 2 queries gives me the same number of rows; however, the last rows returned by the 2 queries are different.
UPDATE 2
Since both tables are transaction tables, if they have the same total amount, that should mean they have the same number of rows without any data loss.
So I ran SELECT SUM(amount) on both tables, and it turns out both tables have the same total amount.
So the question now is: are there actually any missing rows? Does this problem occur because I'm using InnoDB?
You may try adding this line to config.inc.php in your phpMyAdmin directory:
$cfg['MaxExactCount'] = 1000000;
(Make sure $cfg['MaxExactCount'] is large enough.)
This problem probably occurs only with InnoDB tables: for InnoDB, phpMyAdmin displays an approximate row count taken from the table statistics when the estimated size exceeds $cfg['MaxExactCount'], and only runs an exact COUNT(*) below that threshold. Hope this is useful.
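You can see the discrepancy for yourself; a quick sketch using the table name from the question:
SHOW TABLE STATUS LIKE 'cash_transaction2017'; -- the Rows column is only an estimate for InnoDB
SELECT COUNT(*) FROM cash_transaction2017;     -- the exact count phpMyAdmin runs when the estimate is below MaxExactCount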

Check if row exists: the most efficient way?

SQL Queries /P1/
SELECT EXISTS(SELECT /p2/ FROM table WHERE id = 1)
SELECT /p2/ FROM table WHERE id = 1 LIMIT 1
SQL SELECT /P2/
COUNT(id)
id
PHP PDO Function /P3/
fetchColumn()
rowCount()
From the following 3 parts, what is the best method to check whether a row exists, with and without the ability to retrieve data, like the following:
Retrievable:
/Query/ SELECT id FROM table WHERE id = 1 LIMIT 1
/Function/ rowCount()
Irretrievable:
/Query/ SELECT EXISTS(SELECT COUNT(id) FROM table WHERE id = 1)
/Function/ fetchColumn()
In your opinion, what is the best way to do that?
By best I guess you mean consuming the least resources on both MySQL server and client.
That is this:
SELECT COUNT(*) count FROM table WHERE id=1
You get a one-row, one-column result set. If that column is zero, the row was not found. If the column is one, a row was found. If the column is greater than one, multiple rows were found.
This is a good solution for a few reasons.
COUNT(*) is decently efficient, especially if id is indexed.
It has a simple code path in your client software, because it always returns just one row. You don't have to sweat edge cases like no rows or multiple rows.
The SQL is as clear as it can be about what you're trying to do. That's helpful to the next person to work on your code.
Adding LIMIT 1 to this query does nothing. It already returns a one-row result set, inherently. You can add it, but then you'll make the next person looking at your code wonder what you were trying to do, and whether you made some kind of mistake.
COUNT(*) counts all rows that match the WHERE clause. COUNT(id) is slightly slower because it counts only rows whose id values are not null; it has to make that check for each row. For that reason, people usually use COUNT(*) unless there's some chance they want to ignore null values. If you put COUNT(id) in your code, the next person to work on it will have to spend time figuring out whether you meant anything special by counting id rather than *.
If id can never be null, you can use either; they give the same result.
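In PHP, a minimal PDO sketch of this check (assuming $pdo is an open PDO connection; the table and id value come from the question):
$stmt = $pdo->prepare('SELECT COUNT(*) FROM `table` WHERE id = ?');
$stmt->execute([1]);
$exists = ((int) $stmt->fetchColumn()) > 0; // fetchColumn() reads the single COUNT(*) value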

SELECT * FROM table HUNDREDS MILLIONS ROWS

Okay, so I have a table which currently has 40,000 rows and I need to SELECT them all. I have indexes on the id and url columns, and if I SELECT a value by id or url it's instant, but a SELECT * is very slow. What I'm trying to do is search my database and output the matches, and I did this with:
$query = mysqli_query($link, "SELECT * FROM table");
while ($arr = mysqli_fetch_array($query)) {
    // code...
    echo $arr['whatever_i_need'] . "<br>";
}
In the future I will have hundreds of millions of rows in the database, so I would like the search results returned fast, within a second or so. If you can give me solutions I would really appreciate it! Thanks!
EDIT:
I don't want to display all of the data but I want to loop through it quickly to find all the matches
If you want speed then you definitely don't want the query to return every row from the table, and then "loop through" every row returned by the query to identify the ones you are interested in returning. That approach might give acceptable performance with small tables, but it definitely doesn't scale.
For performance, you want the database to locate just the rows you want to return, filter out the ones you don't want, and return just the subset.
And that comes down to writing an appropriate SQL query; executing an appropriate SELECT statement.
SELECT t.col1
, t.col2
, t.col3
FROM mytable t
WHERE t.col3 LIKE '%foo%'
AND t.col2 >= '2016-03-15'
AND t.col2 < '2016-06-15'
ORDER BY t.col2 DESC, t.col1 DESC
LIMIT 200
Performance is about making sure appropriate indexes are available and that the query execution is making effective use of the available indexes.
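As a sketch, here is the same shape of query issued from PHP with a mysqli prepared statement (the column names and the 'foo' search term are just the placeholders from the example above):
$term = 'foo';
$stmt = mysqli_prepare($link, "SELECT col1, col2, col3 FROM mytable
                               WHERE col3 LIKE CONCAT('%', ?, '%')
                               ORDER BY col2 DESC, col1 DESC LIMIT 200");
mysqli_stmt_bind_param($stmt, 's', $term);
mysqli_stmt_execute($stmt);
$result = mysqli_stmt_get_result($stmt); // requires the mysqlnd driver
while ($row = mysqli_fetch_assoc($result)) {
    echo $row['col1'] . "<br>";
}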

Paginating results with MySQL LIMIT

I want to paginate some results using a
LIMIT row_offset, no_of_rows
query in MySQL.
I run a script that does this via AJAX; the problem is when I fetch the last rows.
How can I fetch the last rows via MySQL LIMIT without getting any errors?
The first time you query the table (assuming it does not change every second), do it like this:
SELECT SQL_CALC_FOUND_ROWS *
FROM table_name LIMIT 0, page_size;
SELECT FOUND_ROWS();
That way you know how many rows to expect, and you never request an offset that does not exist.
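Putting it together in PHP, a minimal sketch (assuming $link is an open mysqli connection; table_name and the page size of 20 are placeholders):
$pageSize = 20;
$rows = mysqli_query($link, "SELECT SQL_CALC_FOUND_ROWS * FROM table_name LIMIT 0, $pageSize");
// iterate $rows with mysqli_fetch_assoc() as usual, then read the total:
$total = (int) mysqli_fetch_row(mysqli_query($link, "SELECT FOUND_ROWS()"))[0];
$lastPage = max(0, (int) ceil($total / $pageSize) - 1);
// clamp each AJAX request's page number to $lastPage, then fetch that page with:
// LIMIT ($page * $pageSize), $pageSize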

Handling huge MySQL query: GROUP BY sensitive

I have to run this MySQL query on my website to fetch a huge amount of data (3 tables, each with 100,000+ records):
SELECT on_resume.*, on_users.subscribed, on_users.user_avatar, on_resume_page.*
FROM on_resume
LEFT JOIN on_users ON (on_resume.resume_userid = on_users.user_id )
LEFT JOIN on_resume_page ON ( on_resume.resume_userid = on_resume_page.resume_userid)
WHERE on_resume.active= '1'
GROUP BY on_resume.rid
ORDER BY on_resume.rid DESC
LIMIT 0,18
Whenever I run this in phpMyAdmin's SQL section, the whole mysqld service goes down and needs to be restarted.
Now, while testing this query, I found that if I don't use the GROUP BY and ORDER BY clauses the query is fine.
SELECT on_resume.*, on_users.subscribed, on_users.user_avatar, on_resume_page.*
FROM on_resume
LEFT JOIN on_users ON (on_resume.resume_userid = on_users.user_id )
LEFT JOIN on_resume_page ON ( on_resume.resume_userid = on_resume_page.resume_userid)
WHERE on_resume.active= '1'
LIMIT 0,18
Showing rows 0 - 17 ( 18 total, Query took 0.4248 sec)
Why is it like this, and how can I fix it?
NOTE: I have tested the SQL query with GROUP BY or ORDER BY alone; even with just one of them, the query still fails and hangs the server.
EDIT: This problem was solved by adding an index on the on_resume_page.resume_userid column.
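For reference, that fix is a one-line DDL statement (the index name is arbitrary):
ALTER TABLE on_resume_page ADD INDEX idx_resume_userid (resume_userid);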
This is what I was told; it took a while to figure out (see the comment by jer in Chicago):
Remember, when there is a GROUP BY clause, there are certain rules that apply for grouping columns. One of those rules is "The Single-Value Rule": every column named in the SELECT list must also be a grouping column unless it is an argument of one of the set functions. MySQL extends standard SQL by allowing you to use columns or calculations in a SELECT list that don't appear in a GROUP BY clause. However, we are warned not to use this feature unless all values of the columns you omit from the GROUP BY clause are the same within each group; otherwise you will get unpredictable results.
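A tiny illustration with hypothetical data: suppose on_resume_page had two rows for resume_userid = 7 with different values in some column page_title (a made-up name). Then, under MySQL's traditional (non-ONLY_FULL_GROUP_BY) mode,
SELECT resume_userid, page_title FROM on_resume_page GROUP BY resume_userid;
is free to return either page_title for that user; the value is not determined by the group, which is exactly the unpredictability the rule warns about.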
