I cleaned the question a little bit because it was getting very big and unreadable.
I'm running this on my localhost.
As you can see in the image below, the query takes 755.15 ms when selecting from the job table, which contains 15000 rows (the WHERE conditions match 6650 of them).
The company table contains 1000 rows.
The geo__name table contains approximately 84300 rows and is not giving me any problems, so I believe the issue is the database structure or something similar.
The structure of these 2 tables is the following:
Table Job is:
CREATE TABLE `job` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `created_at` datetime NOT NULL,
  `updated_at` datetime NOT NULL,
  `company_id` int(11) NOT NULL,
  `activity_sector_id` int(11) DEFAULT NULL,
  `status` int(11) NOT NULL,
  `active` datetime NOT NULL,
  `contract_type_id` int(11) NOT NULL,
  `salary_type_id` int(11) NOT NULL,
  `workday_id` int(11) NOT NULL,
  `geoname_id` int(11) NOT NULL,
  `title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
  `minimum_experience` int(11) DEFAULT NULL,
  `min_salary` decimal(7,2) DEFAULT NULL,
  `max_salary` decimal(7,2) DEFAULT NULL,
  `zip_code` int(11) DEFAULT NULL,
  `vacancies` int(11) DEFAULT NULL,
  `show_salary` tinyint(1) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `created_at` (`created_at`,`active`,`status`) USING BTREE,
  CONSTRAINT `FK_FBD8E0F823F5422B` FOREIGN KEY (`geoname_id`) REFERENCES `geo__name` (`id`),
  CONSTRAINT `FK_FBD8E0F8398DEFD0` FOREIGN KEY (`activity_sector_id`) REFERENCES `activity_sector` (`id`),
  CONSTRAINT `FK_FBD8E0F85248165F` FOREIGN KEY (`salary_type_id`) REFERENCES `job_salary_type` (`id`),
  CONSTRAINT `FK_FBD8E0F8979B1AD6` FOREIGN KEY (`company_id`) REFERENCES `company` (`id`),
  CONSTRAINT `FK_FBD8E0F8AB01D695` FOREIGN KEY (`workday_id`) REFERENCES `workday` (`id`),
  CONSTRAINT `FK_FBD8E0F8CD1DF15B` FOREIGN KEY (`contract_type_id`) REFERENCES `job_contract_type` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=15001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
The table company is:
CREATE TABLE `company` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
  `logo` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `created_at` datetime NOT NULL,
  `updated_at` datetime NOT NULL,
  `website` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `user_id` int(11) NOT NULL,
  `phone` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
  `cifnif` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
  `type` int(11) NOT NULL,
  `subscription_id` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `UNIQ_4FBF094FA76ED395` (`user_id`),
  KEY `IDX_4FBF094F9A1887DC` (`subscription_id`),
  KEY `name` (`name`(191)),
  CONSTRAINT `FK_4FBF094F9A1887DC` FOREIGN KEY (`subscription_id`) REFERENCES `subscription` (`id`),
  CONSTRAINT `FK_4FBF094FA76ED395` FOREIGN KEY (`user_id`) REFERENCES `user` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
The query is the following:
SELECT
    j0_.id AS id_0,
    j0_.status AS status_1,
    j0_.title AS title_2,
    j0_.min_salary AS min_salary_3,
    j0_.max_salary AS max_salary_4,
    c1_.id AS id_5,
    c1_.name AS name_6,
    c1_.logo AS logo_7,
    a2_.id AS id_8,
    a2_.name AS name_9,
    g3_.id AS id_10,
    g3_.name AS name_11,
    j4_.id AS id_12,
    j4_.name AS name_13,
    j5_.id AS id_14,
    j5_.name AS name_15,
    w6_.id AS id_16,
    w6_.name AS name_17
FROM
    job j0_
    INNER JOIN company c1_ ON j0_.company_id = c1_.id
    INNER JOIN activity_sector a2_ ON j0_.activity_sector_id = a2_.id
    INNER JOIN geo__name g3_ ON j0_.geoname_id = g3_.id
    INNER JOIN job_salary_type j4_ ON j0_.salary_type_id = j4_.id
    INNER JOIN job_contract_type j5_ ON j0_.contract_type_id = j5_.id
    INNER JOIN workday w6_ ON j0_.workday_id = w6_.id
WHERE
    j0_.active >= CURRENT_TIMESTAMP
    AND j0_.status = 1
ORDER BY
    j0_.created_at DESC
When executing the above query I get these results:
In MySQL Workbench: 0.578 sec / 0.016 sec
In the Symfony profiler: 755.15 ms
The question is: is this duration normal for such a query? If not, how can I improve its speed? It seems like too much.
The Symfony debug toolbar, in case it helps:
As you can see in the image below, I'm only fetching the data I really need:
The EXPLAIN output:
The timeline:
The MySQL server can't handle the load being placed on it. This could be due to resource contention, because it has not been appropriately tuned, or even a problem with your hard drive.
First, I would start by adding the MySQL keyword STRAIGHT_JOIN, which tells MySQL to join the tables in the order you have written them rather than working out the relationships itself. With your dataset being so small, and the query already at about half a second, I don't know if that will help as much, but on larger datasets I have known it to SIGNIFICANTLY improve performance.
Next, you appear to be fetching lookup descriptions via the PK/FK relationships. Without seeing the indexes on those tables, I would suggest covering indexes that contain both the key and the description, so the join can pull the data straight from the index pages it uses for the JOIN, instead of using the index page, then finding the actual data pages to get the description, and continuing.
Last, the index on your job table, currently (created_at, active, status), might perform better reordered as (status, active, created_at).
With your existing index, think of it this way: each day of data is put into a single box. Within each day's box, the rows are sorted by the active timestamp (simplify it to the active date), and only THEN by status.
So, for each CREATED day, you open a box. Inside it you look at secondary boxes, one for each active timestamp (say, one per day). Only within each of those active boxes can you finally see which records have status = 1. So you open each active-date box, check for status = 1, then close the created-day box, move on to the next created-day box, and repeat. Consider how labor-intensive it is to open each box per day, and each active box within that day.
Now, under the suggested index starting with status, you have a very finite number of boxes, one for each status. You open only the one box for status = 1; those are the only records you want to consider, and you don't care about all the others. Inside that box, the records are sub-sorted by the ACTIVE timestamp, so you can jump directly to those at or after the current timestamp. From the first such record through the rest of the box, you have all the records that qualify. Done. And since this index ALSO has created_at as a component, it can optimize the descending sort as well.
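A minimal sketch of that change (the new index name is just an example; pick whatever fits your naming convention):
ALTER TABLE job
  DROP INDEX `created_at`,
  ADD INDEX `idx_status_active_created` (`status`, `active`, `created_at`);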
To ensure "covering indexes" on the other lookup tables, if such indexes do not yet exist, I suggest the following:
table               index
company             (id, name, logo)
activity_sector     (id, name)
geo__name           (id, name)
job_salary_type     (id, name)
job_contract_type   (id, name)
workday             (id, name)
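As a sketch, assuming each lookup table has the id and name columns your query selects (the index names here are made up), that could look like this:
-- With InnoDB, the primary key is implicitly appended to every secondary
-- index, so indexing just the description column(s) would cover these
-- lookups too. On older MySQL versions the full-length utf8mb4 varchar(255)
-- key parts may exceed the index size limit, forcing prefix lengths as in
-- the existing name(191) index (but a prefix index no longer fully covers).
ALTER TABLE company           ADD INDEX `idx_company_cover` (`id`, `name`, `logo`);
ALTER TABLE activity_sector   ADD INDEX `idx_sector_cover` (`id`, `name`);
ALTER TABLE geo__name         ADD INDEX `idx_geoname_cover` (`id`, `name`);
ALTER TABLE job_salary_type   ADD INDEX `idx_salary_cover` (`id`, `name`);
ALTER TABLE job_contract_type ADD INDEX `idx_contract_cover` (`id`, `name`);
ALTER TABLE workday           ADD INDEX `idx_workday_cover` (`id`, `name`);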
And the MySQL keyword:
SELECT STRAIGHT_JOIN (rest of query...)
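Applied to your query, only the first keyword changes; sketched here with the SELECT list shortened:
SELECT STRAIGHT_JOIN
    j0_.id AS id_0,
    j0_.status AS status_1,
    j0_.title AS title_2,
    -- ...same remaining columns as in your original SELECT list...
    w6_.name AS name_17
FROM
    job j0_
    INNER JOIN company c1_ ON j0_.company_id = c1_.id
    INNER JOIN activity_sector a2_ ON j0_.activity_sector_id = a2_.id
    INNER JOIN geo__name g3_ ON j0_.geoname_id = g3_.id
    INNER JOIN job_salary_type j4_ ON j0_.salary_type_id = j4_.id
    INNER JOIN job_contract_type j5_ ON j0_.contract_type_id = j5_.id
    INNER JOIN workday w6_ ON j0_.workday_id = w6_.id
WHERE
    j0_.active >= CURRENT_TIMESTAMP
    AND j0_.status = 1
ORDER BY
    j0_.created_at DESC;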
There are several reasons why Symfony can be slow.
1. Server fault
First, it could be the server's fault. Poor server performance will inflate your query time.
2. Data size and deferred rendering
Then comes the data size. As you can see in the image below, the query on one of my projects has a 50 MB data size (currently about 20k rows).
Parsing 50 MB into HTML can take some time, mostly because of loops.
Still, there are solutions for this, like deferred rendering.
Deferred rendering is quite simple: instead of parsing the data in your Twig template, you send all the data to a JavaScript variable, and use JavaScript to parse/render the data once the DOM is loaded.
3. Query optimisation
As I wrote in a comment, you can check the following question, in which I explained why custom queries are important:
Are Doctrine relations affecting application performance?
In that question, you will read that order matters... It's in fact the most important thing.
While static data in your database is often inserted in the right order, that's rarely the case for dynamic data (data provided by users during the website's life).
Which is why using ORDER BY in your query will often speed up page rendering, as Doctrine won't be doing extra queries on its own.
As an example, one of my sites has about 700 entries displayed on the index.
First, here is the query count while using findAll():
It shows 254 queries (253 duplicates) in 144 ms, plus 39 ms of render time.
Next, using the second parameter of findBy(), the ORDER BY, I get this result:
You can see the full query here (the screenshot is big).
Much better: only 1 query, in 8 ms, with about the same render time.
But here I don't use any fields from associations.
The moment I do, Doctrine will run some extra queries, and the query count and time will skyrocket.
In the end, it will turn back into something like findAll().
And last, this is the custom query:
With this custom query, the query time went from 8 ms to 38 ms.
But, unlike the previous query, I get far more data in my result, which prevents Doctrine from doing extra queries.
Again, ORDER BY matters in this query. Without it, I skyrocket back to 84 queries.
4. Partials
When you write a custom query, you can load partial objects instead of full data.
As you said in your question, the description field seems to slow down your loading speed; with partials, you can avoid loading some fields from the table, which will speed up the query.
First, instead of your regular syntax, this is how you create the query builder:
$em = $this->getEntityManager();
$qb = $em->createQueryBuilder();
Just in case, I prefer to keep $em in a separate variable (in case I want to fetch some class repository, for example).
Then you can start your partial select. Careful: the first select can't include any association fields:
$qb->select("partial job.{id, status, title, minimum_experience, min_salary, max_salary, zip_code, vacancies}")
    ->from(Job::class, "job");
Then you can add your associations:
$qb->addSelect("company")
->join("job.company", "company");
Or even add a partial association, in case you don't need all the data of the association:
$qb->addSelect("partial activitySector.{id}")
->join("job.activitySector", "activitySector");
And if you need more fields from job, the partial can list them all:
$qb->addSelect("partial job.{id, company_id, activity_sector_id, status, active, contract_type_id, salary_type_id, workday_id, geoname_id, title, minimum_experience, min_salary, max_salary, zip_code, vacancies, show_salary}");
5. Caches
You could also use various caches, like Zend OPcache for PHP, for which you will find some advice in this question: Why Symfony3 so slow?
There is also Varnish, an HTTP cache.
That rounds up about everything I can share to lower your loading time.
Hope it proves useful and that you are able to solve your problem.
That's a lot of keys; try to minimize the number of keys.
I am using phpMyAdmin to manage my database. In one of my tables, when I click to see the last page (30 records) of 60000 records, I get this alert:
"This operation could take a long time. Proceed anyway?", which in fact does not happen; it shows the records in a very short time.
By the way, my table structure is as follows:
CREATE TABLE `documents` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` int(11) DEFAULT NULL,
  `type` char(50) NOT NULL,
  `comment` varchar(512) DEFAULT NULL,
  `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `user_id` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
So why am I getting this alert?
phpMyAdmin, as you know, is a PHP-based database management panel. To reach the last 30 rows, it has to scan through the table, which means it is processing every record between 0 and 60000 to retrieve records 59970-60000.
Depending on how fast your web and SQL servers are, this can take a long time. The warning message is simply there to say that fetching these records could take a while because of the size of your table.
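To illustrate, the last page is fetched with a large LIMIT offset, something along these lines (a sketch, not the literal query phpMyAdmin builds):
-- MySQL still reads and discards the first 59970 rows before
-- returning the 30 requested, which is why the warning exists.
SELECT *
FROM documents
ORDER BY id
LIMIT 59970, 30;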
I am developing a system that includes a log history. The log history is a record of the actions taken by every user. Someone suggested that I compress the data annually, since the database is expected to grow, so that I can save space. I have this table:
delimiter $$
CREATE TABLE `actionhistory` (
  `historyID` int(11) NOT NULL AUTO_INCREMENT,
  `empID` int(11) NOT NULL,
  `item` varchar(50) NOT NULL,
  `actionTaken` varchar(15) NOT NULL,
  `type` varchar(15) NOT NULL,
  `dateActTaken` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`historyID`),
  KEY `emp` (`empID`)
) ENGINE=InnoDB AUTO_INCREMENT=429 DEFAULT CHARSET=latin1$$
What is the syntax for doing this? I want to do it annually, regardless of how many rows there are per year.
EDIT: I have read about using ROW_FORMAT and KEY_BLOCK_SIZE, but since the row count per year is not constant, I didn't use them.
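For reference, that syntax looks roughly like this (it requires innodb_file_per_table to be enabled):
-- Per-table InnoDB compression; KEY_BLOCK_SIZE is in KB (1, 2, 4, 8 or 16).
ALTER TABLE actionhistory ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;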
You have to extract the year (Y) from the dateActTaken field to identify which year a record belongs to. Once you have the year, you can fetch all the records belonging to that year.
As far as compression goes, you can put the data in a zip file, or you can save the yearly data in another database that you use for mining.
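A minimal sketch of that yearly extraction, using a hypothetical archive table name:
-- Create an archive table with the same structure, copy one year's
-- rows into it, then remove them from the live table.
CREATE TABLE actionhistory_2013 LIKE actionhistory;

INSERT INTO actionhistory_2013
SELECT * FROM actionhistory
WHERE YEAR(dateActTaken) = 2013;

DELETE FROM actionhistory
WHERE YEAR(dateActTaken) = 2013;
You can then dump actionhistory_2013 (for example with mysqldump), zip the dump, and drop the table.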
I have been going over this issue for the last year or so, changing what I am doing and trying different things. The issue is finding a schema that still lets me order nicely in the player/clan ladders, but where, if we want to add a stat later, it won't lock our table while changing every row, as happens with one stat per column.
I see two options for how to do this, but neither seems right. One is one stat per column. There would be 4 tables: user_stat_summary (for basic stats shown on ladders), user_stat_beast (teams are human vs beast), user_stat_human, and user_stat_overall. Stats shown everywhere cover the last 30 days. A cron job would take any dated stats, by querying matches that happened more than 30 days ago, subtract those stats from the 3 main tables, and add them to the overall one. Matches would have blobs for the stats each player got in that match. The issue I see here is that once we have a lot of rows, we can't easily add more stats when, say, the game changes a little. What I was thinking of was an extra_stats blob column on each table; if we add new stats, they simply won't be sortable on the ladders.
The other option is an EAV model, which is what I have been playing around with, but I can't seem to get it right. I would be getting many more rows per query and then grouping them by user, and the ordering would work for the most part, but I couldn't get limits right for pagination, since there was generally an unknown number of rows selected.
What I am thinking of now is the EAV model, plus a table that stores a rank per stat which could be used for ordering. So the EAV tables are currently as follows...
CREATE TABLE `user_stat` (
  `user_id` int(10) unsigned NOT NULL,
  `stat_id` varchar(50) NOT NULL,
  `value` int(10) unsigned NOT NULL,
  PRIMARY KEY (`user_id`,`stat_id`),
  CONSTRAINT `user` FOREIGN KEY (`user_id`) REFERENCES `xf_user` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `user_human_stat` (
  `user_id` int(10) unsigned NOT NULL,
  `stat_id` varchar(50) NOT NULL,
  `value` int(10) unsigned NOT NULL,
  PRIMARY KEY (`user_id`,`stat_id`),
  CONSTRAINT `human_user` FOREIGN KEY (`user_id`) REFERENCES `xf_user` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `user_beast_stat` (
  `user_id` int(10) unsigned NOT NULL,
  `stat_id` varchar(50) NOT NULL,
  `value` int(10) unsigned NOT NULL,
  PRIMARY KEY (`user_id`,`stat_id`),
  CONSTRAINT `beast_user` FOREIGN KEY (`user_id`) REFERENCES `xf_user` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `user_stat_overall` (
  `user_id` int(10) unsigned NOT NULL,
  `human` blob NOT NULL,
  `beast` blob NOT NULL,
  `total` blob NOT NULL,
  PRIMARY KEY (`user_id`),
  CONSTRAINT `user_overall` FOREIGN KEY (`user_id`) REFERENCES `xf_user` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
So I was thinking I could add a user_stat_rank table consisting of user_id, stat_id, and rank. Then, say I want the first page of the ladder ordered by the 'kills' stat: I could get the user_ids ordered by rank where stat_id is 'kills', then make a second query to populate all of those users' stats, as sketched below.
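Something like this, with illustrative names (`rank` is a reserved word in recent MySQL versions, hence the backticks):
CREATE TABLE `user_stat_rank` (
  `user_id` int(10) unsigned NOT NULL,
  `stat_id` varchar(50) NOT NULL,
  `rank` int(10) unsigned NOT NULL,
  PRIMARY KEY (`stat_id`,`rank`),
  KEY `rank_user` (`user_id`),
  CONSTRAINT `rank_user_fk` FOREIGN KEY (`user_id`) REFERENCES `xf_user` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- Page 1 of the ladder ordered by the 'kills' stat, 25 per page:
SELECT user_id
FROM user_stat_rank
WHERE stat_id = 'kills'
ORDER BY `rank`
LIMIT 0, 25;

-- Second query: populate the stats for those users.
SELECT user_id, stat_id, value
FROM user_stat
WHERE user_id IN (/* user_ids from the first query */);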
After writing all this out, it seems like it would work fine, but I might not be seeing something. I also understand this question is all over the place, so if you would like me to edit in more detail anywhere, just say so.
For the sake of manageability, I would stick to adding a column for every stat. In the long run, this will probably be the easiest way to manage it without painting yourself into a corner due to the limitations that, for instance, the EAV model would impose on you.
If you're worried about the stats table growing too large, you could consider implementing some form of table partitioning where you regularly move the data older than 4 weeks to (a) historic table(s). The historic table(s) can be indexed to the extreme, as they won't require constant updating.
I have imported 2 .csv files that I wanted to compare into MySQL tables. Now I want to compare both of them using a join.
However, whenever I include both tables in my queries, I get no response from phpMyAdmin (sometimes it shows 'max execution time exceeded').
The record count in both db tables is 73k max. I don't think that's a huge amount of data. Even a simple query like
SELECT *
FROM abc456, xyz456
seems to hang. I ran an EXPLAIN and got the output below. I don't know what to take from this.
id  select_type  table   type  possible_keys  key   key_len  ref   rows   Extra
1   SIMPLE       abc456  ALL   NULL           NULL  NULL     NULL  73017
1   SIMPLE       xyz456  ALL   NULL           NULL  NULL     NULL  73403  Using join buffer
Can someone please help?
UPDATE: I've added the structure of the table, with its composite key. Around 100000+ records will be inserted into this table.
CREATE TABLE IF NOT EXISTS `abc456` (
  `Col1` varchar(4) DEFAULT NULL,
  `Col2` varchar(12) DEFAULT NULL,
  `Col3` varchar(9) DEFAULT NULL,
  `Col4` varchar(3) DEFAULT NULL,
  `Col5` varchar(3) DEFAULT NULL,
  `Col6` varchar(40) DEFAULT NULL,
  `Col7` varchar(200) DEFAULT NULL,
  `Col8` varchar(40) DEFAULT NULL,
  `Col9` varchar(40) DEFAULT NULL,
  `Col10` varchar(40) DEFAULT NULL,
  `Col11` varchar(40) DEFAULT NULL,
  `Col12` varchar(40) DEFAULT NULL,
  `Col13` varchar(40) DEFAULT NULL,
  `Col14` varchar(20) DEFAULT NULL,
  KEY `Col1` (`Col1`,`Col2`,`Col3`,`Col4`,`Col5`,`Col6`,`Col7`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
It looks like you are doing a pure Cartesian join in your query.
Shouldn't you be joining the tables on certain fields? If you do that and the query still takes a long time to execute, you should add appropriate indexes to speed it up.
The reason that it is taking so long is that it is trying to join every single row of the first table to every single row of the second table.
You need a join condition, some way of identifying which rows should be matched up:
SELECT * FROM abc456, xyz456 WHERE abc456.id = xyz456.id
Add indexes on joining columns. That should help with performance.
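For example, assuming the tables should match on Col1 and Col2 (the structure of xyz456 wasn't posted, so treat these column and index names as placeholders):
-- Index the join columns in both tables:
ALTER TABLE abc456 ADD INDEX `idx_abc_join` (`Col1`, `Col2`);
ALTER TABLE xyz456 ADD INDEX `idx_xyz_join` (`Col1`, `Col2`);

-- Then join on those columns instead of producing a Cartesian product:
SELECT *
FROM abc456 a
JOIN xyz456 x ON a.Col1 = x.Col1 AND a.Col2 = x.Col2;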
Use MySQL Workbench or the MySQL client (console) for long queries. phpMyAdmin is not designed to display queries that return 100k rows :)
If you REALLY have to use phpMyAdmin and you need to run long queries, you can use a Firefox extension that prevents the phpMyAdmin timeout: phpMyAdmin Timeout Preventer (direct link!)
It's a direct link because I couldn't find an English description.