Three simultaneous SQL queries in one table - php

I have the following table structure:
---------------------------------
ID | Acces | Group | User | Gate
---------------------------------
1  | 1     | TR    | tsv  | TL-23
---------------------------------
And I have a page with 3 functions:
Select group to see all gates where selected group has access.
Select gate to see all groups which have access to selected gate.
Select group to see all users that belong to selected group.
So basically:
SELECT Gate WHERE Group = TR
SELECT Group WHERE Gate = TL-23
SELECT User WHERE Group = TR
What I am trying to achieve is: the user should be able to run all three queries in any order without the results of the earlier queries disappearing.
Now, I know multi-threading is no longer possible in PHP, but there must be a way to temporarily save the results of a specific query until the same query is made again.
Any help and suggestions would be appreciated.

Firstly, PHP has never done MT out of the box, but can (still) with the use of pcntl extensions.
That said, that isn't the sauce you seek.
If you simply want the user on the front-end to interact three separate times, without having to hit the DB once the first time and twice the second (to redo both the first and the new query), you may benefit from caching the results of each call in the user's session.
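A minimal sketch of that session-caching idea, assuming a PDO connection in `$pdo` and the column names from the question (the table name `access` is invented for illustration):

```php
<?php
session_start();

// $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass'); // assumed

function cachedQuery(PDO $pdo, string $sql, array $params): array
{
    // Key the cache on the exact query text plus its parameters, so the DB
    // is only hit again when "the same query is made again".
    $key = md5($sql . serialize($params));

    if (!isset($_SESSION['query_cache'][$key])) {
        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        $_SESSION['query_cache'][$key] = $stmt->fetchAll(PDO::FETCH_ASSOC);
    }

    return $_SESSION['query_cache'][$key];
}

// Each call hits the DB only the first time per session; later calls reuse
// the cached result, so earlier results never "disappear".
// $gates  = cachedQuery($pdo, 'SELECT Gate FROM access WHERE `Group` = ?', ['TR']);
// $groups = cachedQuery($pdo, 'SELECT `Group` FROM access WHERE Gate = ?', ['TL-23']);
// $users  = cachedQuery($pdo, 'SELECT `User` FROM access WHERE `Group` = ?', ['TR']);
```

Note the backticks around `Group` and `User`: both are reserved or problematic words in MySQL.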
If you actually want to make the 3 queries in a relational way at one exact time, try JOINs.
If you simply want to make all 3 separate queries at (what is essentially) the same time, look into TRANSACTIONs.
Hope that helps.

Related

PHP & MySQL performance - One big query vs. multiple small

For my MySQL tables I am using the InnoDB engine, and the structure looks like this:
Table user
id | username | etc...
---|----------|--------
1  | bruce    | ...
2  | clark    | ...
3  | tony     | ...
Table user-emails
id | person_id | email
---|-----------|--------------------------
1  | 1         | bruce@wayne-ent.com
2  | 1         | ceo@wayne-ent.com
3  | 2         | clark.k@daily-planet.com
To fetch data from the database I've written a tiny framework. E.g. on __construct($id) it checks whether a person with the given id exists; if so, it creates the corresponding model and saves only the id field to an array. During runtime, if I need another field from the model, it fetches only that value from the database, saves it to the array, and returns it. The same goes for the emails field: my code accesses the user-emails table and gets all the emails for the corresponding user.
For small models this works alright, but now I am working on another project where I have to fetch a lot of data at once for a list and that takes some time. Also I know that many connections to MySQL and many queries are quite stressful for the server, so..
My question now is: Should I fetch all data at once (with left joins etc.) while constructing the model and save the fields as an array or should I use some other method?
Why do people insist on referring to entities and domain objects as "models"?
Unless your entities are extremely large, I would populate the entire entity when you need it. And if "email list" is part of that entity, I would populate that too.
As I see it, the question is more related to "what to do with tables that are related by foreign keys".
Let's say you have Users and Articles tables, where each article has a specific owner associated by a user_id foreign key. In this case, when populating the Article entity, I would only retrieve the user_id value instead of pulling in all the information about the user.
But in your example with Users and UserEmails, the emails seem to be a part of the User entity, and something that you would often call via $user->getEmailList().
TL;DR
I would do this in two queries, when populating User entity:
select all you need from the Users table and apply it to the User entity
select all the user's emails from the UserEmails table and apply them to the User entity.
P.S
You might want to look at data mapper pattern for "how" part.
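The two-query approach above could be sketched with PDO like this; the User class and exact column names are assumptions for illustration:

```php
<?php
// $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass'); // assumed

class User
{
    public int $id;
    public string $username;
    /** @var string[] */
    public array $emails = [];
}

function loadUser(PDO $pdo, int $id): ?User
{
    // Query 1: the entity's own fields.
    $stmt = $pdo->prepare('SELECT id, username FROM user WHERE id = ?');
    $stmt->execute([$id]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    if ($row === false) {
        return null; // no such user
    }

    $user = new User();
    $user->id = (int) $row['id'];
    $user->username = $row['username'];

    // Query 2: the related email list, applied to the same entity.
    $stmt = $pdo->prepare('SELECT email FROM `user-emails` WHERE person_id = ?');
    $stmt->execute([$id]);
    $user->emails = $stmt->fetchAll(PDO::FETCH_COLUMN);

    return $user;
}
```

Two fixed queries per entity is a constant cost, unlike the one-query-per-field pattern in the question, which grows with every field accessed.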
In my opinion you should fetch all your fields at once, and divide queries in a way that makes your code easier to read/manage.
When we're talking about one query or two, the difference is usually negligible unless the combined query (with JOINs or whatever) is overly complex. Usually an index or two is the solution to a very slow query.
If we're talking about one vs hundreds or thousands of queries, that's when the connection/transmission overhead becomes more significant, and reducing the number of queries can make an impact.
It seems that your framework suffers from premature optimization. You are hyper-concerned about fetching too many fields from a row, but why? Do you have thousands of columns or something?
The time consuming part of your query is almost always the lookup, not the transmission of data. You are causing the database to do the "hard" part over and over again as you pull one field at a time.

PHP and MySQL. Data management for a classifieds ads website

Visitor opens url, for example
/transport/cars/audi/a6
or
/real-estate/flats/some-city/city-district
I plan separate table for cars and real-estate (separate table for each top level category).
Based on the URL (split into an array with PHP's explode):
$array[0] - from this value I know which table to SELECT from
and so on for $array[1], $array[2] ...
For example, the RealEstate table may look like:
IdOfAd | RealEstateType | Location1 | Location2    | TextOfAd | and so on
-------|----------------|-----------|--------------|----------|----------
1      | flat           | City1     | CityDistric1 | text..   |
2      | land           | City2     | CityDistric2 | text..   |
And mysql query to display ads would be like:
SELECT `TextOfAd`, `and so on...`
FROM RealEstate
WHERE RealEstateType = ? AND Location1 = ? AND Location2 = ?
-- and possibly additional AND .. AND
LIMIT $start, $limit
Thinking about performance: hopefully, after some time, the number of active ads will be high. (I also plan not to delete expired ads, just change a column value to 0 so they are not displayed by the SELECT, but they are still shown when visited directly from a search engine.)
What do I need to do (change the database design, or SELECT in some other way) if, for example, the number of rows in the table reaches 100,000 or millions?
I am thinking about moving expired ads to another table (for which performance is not important). For example, a user comes from a search engine to a URL with an expired ad. First select from the main table; if the ad is not found there, select from the expired-ads table. Is this some kind of solution?
Two hints:
Use ENGINE=InnoDB when creating your table. InnoDB uses row-level locking, which is MUCH better for bigger tables, as this allows rows to be read much faster, even when you're updating some of them.
ADD INDEX on suitable columns. Indexing big tables can reduce search times by several orders of magnitude. They're easy to forget and a pain to debug! More than once I've been investigating a slow query, realised I forgot a suitable INDEX, added it, and had immediate results on a query that used to take 15 seconds to run.
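Applied to the RealEstate table from the question, the two hints might look like this (one-off statements run through PDO; the index name and `$pdo` connection are assumptions):

```php
<?php
// $pdo = new PDO('mysql:host=localhost;dbname=classifieds', 'user', 'pass'); // assumed

// Hint 1: switch the table to InnoDB for row-level locking.
$pdo->exec('ALTER TABLE RealEstate ENGINE = InnoDB');

// Hint 2: a composite index matching the WHERE clause
// (RealEstateType, Location1, Location2) lets MySQL narrow the scan
// instead of reading every row.
$pdo->exec(
    'ALTER TABLE RealEstate
       ADD INDEX idx_type_loc (RealEstateType, Location1, Location2)'
);
```

The column order in the composite index mirrors the equality conditions in the query, which is what allows MySQL to use all three columns of the index at once.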

Multiple rows with the same content, make them count as "one"

About
I have this table in my database holding some information saved with a user id and time.
| CONTENT | USER ID | TIME       |
|---------|---------|------------|
| text    | 1       | 1405085592 |
| hello   | 2       | 1405085683 |
| hey     | 1       | 1405086953 |
This example could be a data dump from my database; as you can count, there are three rows. However, I only need to know how many users have some information in my database. Therefore the result I'm really looking for is "two", because only two users have information in the database: user ID 1 owns both "text" (1) and "hey" (3), while user ID 2 has "hello" (2).
In short
I want to count how many users (regardless of how many rows of information they have) there are in my database.
What I tried
I tried to fetch every single row into an array and then use array_unique to collapse the duplicates; it works, but I do not see it as the cleanest or best way to do this.
Then what?
I could use array_unique and then count to see how many rows remain, but I'm looking for something cleaner. I tried to search for this, but I'm not actually sure what to search for to find what I'm looking for. After being stuck, and since I wanted to learn something new, I decided to post the problem here.
Note
I hope you guys can help me; I have tried to make clear what I'm looking for and what I tried. If not, please let me know. Sorry if some of the above contains misspelled words, incorrect grammar, or is badly explained. I do not speak English daily, but I try my best.
You are looking for the DISTINCT keyword. Combined with COUNT, it returns the number of unique values in a column:
SELECT COUNT(DISTINCT user_id)
FROM your_table
This query:
SELECT DISTINCT user_id FROM table
will return just one row for every user in the table.
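From PHP, fetching that count might look like this (table and column names as in the answer; `$pdo` is assumed to be an open PDO connection):

```php
<?php
// $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass'); // assumed

// COUNT(DISTINCT ...) collapses repeated user_ids on the database side,
// so PHP only receives the final number.
$stmt = $pdo->query('SELECT COUNT(DISTINCT user_id) FROM your_table');
$userCount = (int) $stmt->fetchColumn();

// With the sample data from the question, $userCount is 2.
```

This replaces the fetch-everything-then-array_unique approach entirely: the deduplication happens in MySQL, and only one small value crosses the wire.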

Restructured database, using SQL in phpmyadmin to move data around

I've recently been working on normalizing and restructuring my database to make it more effective in the long run. Currently I have around 500 records, and obviously I don't want to lose the users' data.
I assume SQL through phpMyAdmin is the easiest way to do this?
So let me give you guys an example
In my old table I would have something like this
records //this table has misc fields, but they are unimportant right now
id | unit |
1 | g |
With my new one, I have it split apart 3 different tables.
records
id
1
units
id | unit
1 | g
record_units
id | record_id | unit_id
1 | 1 | 1
Just to be clear, I am not adding anything into the units table. The table is there as a reference for which id to store in the record_units table
As you can see, it's pretty simple. What changed in the second version is that I started using a lookup table to hold my units, since they would be repeated quite often. I then store that unit id, together with the matching record id, in the record_units table, so I can retrieve the fields later.
I am not incredibly experienced with SQL, though I'd say my knowledge is average. I know this operation would be quite simple to do with my CakePHP setup, because all my associations are already set up, but I can't do that.
If I understand correctly, you want to copy related records from your old table to the new tables, in which case you can use something like this:
UPDATE units u
INNER JOIN records r
ON u.id=r.id
SET u.unit = r.unit
This will copy the unit type from your old table to the matching id in the new units table, and then you can do something similar for your third table.
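If the new tables start out empty, an alternative sketch is to fill units and record_units straight from the old table with INSERT ... SELECT. The old table's name here (records_old) is an assumption; rename to match your schema:

```php
<?php
// $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass'); // assumed

// Step 1: populate the units lookup table with each distinct unit value
// found in the old records table.
$pdo->exec(
    'INSERT INTO units (unit)
     SELECT DISTINCT unit FROM records_old'
);

// Step 2: link every old record to its unit id by joining the old table
// against the freshly filled lookup table.
$pdo->exec(
    'INSERT INTO record_units (record_id, unit_id)
     SELECT r.id, u.id
     FROM records_old r
     JOIN units u ON u.unit = r.unit'
);
```

Running both statements inside a transaction would let you roll back cleanly if anything goes wrong, which matters when you don't want to lose 500 records of user data.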

Best way to enable blacklists for users

I want to allow my users to create blacklists of content sources (users/categories/words/?). They shouldn't be able to see any content from these sources.
For example: If user A blacklisted user B and then user B uploaded a picture, then user A request to see the gallery he won't see picture from B, but he'll be able to see pictures from user C, D, ...
The problem occurs when a user builds a big blacklist (e.g. 100 sources). The SQL queries then become really long and complex ("...AND author != 'B' AND category != 'C'..."), which will eventually kill my server.
What are the other ways to handle this problem?
It sounds to me like you're using dynamic SQL to build this query. You should have the blacklist stored in a table related by UserId, then you can write a stored procedure that uses NOT IN or NOT EXISTS to build the final resultset.
First of all, create indexes on every column that can be blacklisted against. This way, your match criteria will be found a little faster.
Then you could create an in-between table that collects relations between a user and the content he has blacklisted.
Maybe something like this:
userId | blacklistType | blacklistId
-------|---------------|------------
1      | user          | 2
1      | category      | 12
1      | word          | 4
Now if you want all categories for user 1, you could make the query
SELECT *
FROM categories
WHERE NOT EXISTS (
    SELECT userId
    FROM blacklist
    WHERE blacklist.userId = 1
      AND blacklist.blacklistType = 'category'
      AND categories.id = blacklist.blacklistId
)
(I'm not quite sure about the syntax here, but you should get the idea, I hope)
The query can look complex but that doesn't really matter. Make sure you index these columns and preferably use numbers for the categories, authors, etc.
So then you'll have a query like
SELECT * FROM .... WHERE author_id NOT IN (1,12,567,6788) AND category_id NOT IN (6654,23245,89795)
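Building those NOT IN lists safely for a blacklist of arbitrary length can be done with placeholders. A sketch, assuming a `content` table and a `$pdo` connection (names invented for illustration):

```php
<?php
// $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass'); // assumed

// These id lists would come from the user's blacklist table.
$authorIds   = [1, 12, 567, 6788];
$categoryIds = [6654, 23245, 89795];

// Generate one '?' placeholder per blacklisted id, e.g. "?,?,?,?".
$authorPh   = implode(',', array_fill(0, count($authorIds), '?'));
$categoryPh = implode(',', array_fill(0, count($categoryIds), '?'));

$sql = "SELECT *
        FROM content
        WHERE author_id   NOT IN ($authorPh)
          AND category_id NOT IN ($categoryPh)";

// Bind all ids positionally: authors first, then categories,
// matching the order of the placeholders in $sql.
$stmt = $pdo->prepare($sql);
$stmt->execute(array_merge($authorIds, $categoryIds));
$visible = $stmt->fetchAll(PDO::FETCH_ASSOC);
```

With numeric ids and indexes on author_id and category_id, even a list of a hundred blacklisted sources stays cheap for MySQL to evaluate.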
