I have a database containing information on over 1,000 items, and I am now developing a system that checks the API source via a regular cron job, adding new entries as they come. Usually, though not always, when a new item is released it has limited information, e.g. image and name only; more information, like the description, is sometimes initially withheld.
With this system, I am creating a bulletin to let everyone know new items have been released. Like most announcements, these get submitted to a database. However, instead of submitting static content for the bulletin, is it possible to submit something that will be executed when a person loads the page, so that the bulletin data is fetched first and the code within it runs afterwards?
For example, the database entry could read something like the following:
<p>Today new items were released!</p>
<?php $item_ids = "545, 546, 547, 548"; ?>
Then, on the page, it would fetch the latest known information from the other database table for items "545, 546, 547, 548".
There would therefore be no need to go back and edit any past entries; the page would stay somewhat up to date dynamically.
Typically you would do something like have a date field on your items, so you can show which items were released on a given date. Or if you need to have the items associated with some sort of announcement record, create a lookup table that joins your items and announcements. Do not insert executable code in the DB and then pull it out and execute it.
CREATE TABLE `announcements` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`publish_date` DATETIME NOT NULL,
`content` text,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
CREATE TABLE `items` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`title` VARCHAR(128) NOT NULL,
`description` text,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
CREATE TABLE `announcement_item_lkp` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`announcement_id` int(11) unsigned NOT NULL,
`item_id` int(11) unsigned NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `announcement_item_lkp_uk1` (`announcement_id`,`item_id`),
KEY `announcement_item_lkp_fk_1` (`announcement_id`),
KEY `announcement_item_lkp_fk_2` (`item_id`),
CONSTRAINT `announcement_item_lkp_fk_1` FOREIGN KEY (`announcement_id`) REFERENCES `announcements` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
CONSTRAINT `announcement_item_lkp_fk_2` FOREIGN KEY (`item_id`) REFERENCES `items` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
With the announcement_item_lkp table, you can associate as many items with your announcement as you like. And since you have cascading deletes, if an item gets deleted, its lookup records are deleted as well, so you don't have to worry about orphaned references in your announcements, like you would if you just stuffed a string of IDs somewhere.
You're already using a relational database, let it do its job.
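To render a bulletin, you then join through the lookup table, so item details are always read live from items; a minimal sketch against the schema above (the announcement id is a placeholder):
-- Fetch an announcement plus the current data for each linked item.
-- Because item details come from `items` at read time, the bulletin
-- stays up to date without editing past announcements.
SELECT a.publish_date,
       a.content,
       i.id AS item_id,
       i.title,
       i.description
FROM announcements a
JOIN announcement_item_lkp l ON l.announcement_id = a.id
JOIN items i ON i.id = l.item_id
WHERE a.id = 42  -- hypothetical announcement id
ORDER BY i.id;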
Running on my localhost.
As you can see in the image below, the query takes 755.15 ms when selecting from the table job, which contains 15,000 rows (the WHERE conditions match 6,650 of them).
The table Company contains 1000 rows.
The table geo__name contains approximately 84,300 rows and is not giving me any problems, so I believe the issue is the database structure or something similar.
The structure of these 2 tables is the following:
Table Job is:
CREATE TABLE `job` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
`company_id` int(11) NOT NULL,
`activity_sector_id` int(11) DEFAULT NULL,
`status` int(11) NOT NULL,
`active` datetime NOT NULL,
`contract_type_id` int(11) NOT NULL,
`salary_type_id` int(11) NOT NULL,
`workday_id` int(11) NOT NULL,
`geoname_id` int(11) NOT NULL,
`title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`minimum_experience` int(11) DEFAULT NULL,
`min_salary` decimal(7,2) DEFAULT NULL,
`max_salary` decimal(7,2) DEFAULT NULL,
`zip_code` int(11) DEFAULT NULL,
`vacancies` int(11) DEFAULT NULL,
`show_salary` tinyint(1) NOT NULL,
PRIMARY KEY (`id`),
KEY `created_at` (`created_at`,`active`,`status`) USING BTREE,
CONSTRAINT `FK_FBD8E0F823F5422B` FOREIGN KEY (`geoname_id`) REFERENCES `geo__name` (`id`),
CONSTRAINT `FK_FBD8E0F8398DEFD0` FOREIGN KEY (`activity_sector_id`) REFERENCES `activity_sector` (`id`),
CONSTRAINT `FK_FBD8E0F85248165F` FOREIGN KEY (`salary_type_id`) REFERENCES `job_salary_type` (`id`),
CONSTRAINT `FK_FBD8E0F8979B1AD6` FOREIGN KEY (`company_id`) REFERENCES `company` (`id`),
CONSTRAINT `FK_FBD8E0F8AB01D695` FOREIGN KEY (`workday_id`) REFERENCES `workday` (`id`),
CONSTRAINT `FK_FBD8E0F8CD1DF15B` FOREIGN KEY (`contract_type_id`) REFERENCES `job_contract_type` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=15001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
The table company is:
CREATE TABLE `company` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`logo` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
`website` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`user_id` int(11) NOT NULL,
`phone` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`cifnif` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`type` int(11) NOT NULL,
`subscription_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `UNIQ_4FBF094FA76ED395` (`user_id`),
KEY `IDX_4FBF094F9A1887DC` (`subscription_id`),
KEY `name` (`name`(191)),
CONSTRAINT `FK_4FBF094F9A1887DC` FOREIGN KEY (`subscription_id`) REFERENCES `subscription` (`id`),
CONSTRAINT `FK_4FBF094FA76ED395` FOREIGN KEY (`user_id`) REFERENCES `user` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
The query is the following:
SELECT
j0_.id AS id_0,
j0_.status AS status_1,
j0_.title AS title_2,
j0_.min_salary AS min_salary_3,
j0_.max_salary AS max_salary_4,
c1_.id AS id_5,
c1_.name AS name_6,
c1_.logo AS logo_7,
a2_.id AS id_8,
a2_.name AS name_9,
g3_.id AS id_10,
g3_.name AS name_11,
j4_.id AS id_12,
j4_.name AS name_13,
j5_.id AS id_14,
j5_.name AS name_15,
w6_.id AS id_16,
w6_.name AS name_17
FROM
job j0_
INNER JOIN company c1_ ON j0_.company_id = c1_.id
INNER JOIN activity_sector a2_ ON j0_.activity_sector_id = a2_.id
INNER JOIN geo__name g3_ ON j0_.geoname_id = g3_.id
INNER JOIN job_salary_type j4_ ON j0_.salary_type_id = j4_.id
INNER JOIN job_contract_type j5_ ON j0_.contract_type_id = j5_.id
INNER JOIN workday w6_ ON j0_.workday_id = w6_.id
WHERE
j0_.active >= CURRENT_TIMESTAMP
AND j0_.status = 1
ORDER BY
j0_.created_at DESC
When executing the above query I have these results:
In MySQL Workbench: 0.578 sec / 0.016 sec
In Symfony profiler: 755.15 ms
The question is: is this query duration normal? If not, how can I improve its speed? It seems like too much.
(Screenshots were attached for reference: the Symfony debug toolbar, showing that only the needed data is fetched; the EXPLAIN output; and the timeline.)
The MySQL server can't handle the load being placed on it. This could be due to resource contention or inadequate tuning, and it could also be a problem with your hard drive.
First, I would start by adding the MySQL keyword STRAIGHT_JOIN, which tells MySQL to query the data in the order I have provided rather than trying to work out the join order itself. With your dataset being so small, and the query already at half a second, I don't know if that will help as much, but on larger datasets I have known it to SIGNIFICANTLY improve performance.
Next, you appear to be fetching lookup descriptions via the PK/FK relationships. Not seeing the indexes on those tables, I would suggest covering indexes that contain both the key and the description, so the join can read the data directly from the index pages it uses for the JOIN, instead of using the index page, then visiting the actual data pages to fetch the description, and continuing.
Last, your job table's index on (created_at, active, status) might perform better if it were reordered as (status, active, created_at).
With your existing index, think of it this way: each day of data is put into a single box keyed by created date. Within each day's box, rows are sorted by the active timestamp (simplify it to active date), and only then by status.
So, for each created day, you open a box. Inside are secondary boxes, one for each active timestamp (say, one per day). Only within each of those can you finally check for the status = 1 records. So you open each active box, assess status = 1, close it, close the created-day box, move on to the next created day, and repeat. Consider how labor-intensive that is: every created-day box, and every active box within it.
Now, under the suggested index starting with status, you have a very finite number of boxes, one per status. You open only the single box for status = 1; those are the only records you want to consider, and you can ignore all the others. Inside that box, the records are sub-sorted by active timestamp, so you can jump directly to those at or after the current timestamp; from that first record through the rest of the box, every record qualifies. Done. And since the index also includes created_at, it can help optimize the descending sort order.
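As SQL, that reordering might look like the following sketch (the new index name is an assumption; the dropped name matches the dump above, and you would verify with EXPLAIN before dropping anything):
ALTER TABLE job
  DROP INDEX created_at,
  ADD INDEX status_active_created (status, active, created_at);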
To ensure "covering indexes" exist on the other lookup tables, if they do not yet exist, I suggest the following (a SQL version follows the list).
table → index
company ( id, name, logo )
activity_sector (id, name )
geo__name ( id, name )
job_salary_type ( id, name )
job_contract_type ( id, name )
workday ( id, name )
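Expressed as SQL, those suggestions might look like this sketch (index names are assumptions; since id is each table's primary key, the point of the extra index is simply to make the join covering):
ALTER TABLE company ADD INDEX company_cover (id, name, logo);
ALTER TABLE activity_sector ADD INDEX activity_sector_cover (id, name);
ALTER TABLE geo__name ADD INDEX geo_name_cover (id, name);
ALTER TABLE job_salary_type ADD INDEX job_salary_type_cover (id, name);
ALTER TABLE job_contract_type ADD INDEX job_contract_type_cover (id, name);
ALTER TABLE workday ADD INDEX workday_cover (id, name);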
And the MySQL Keyword...
SELECT STRAIGHT_JOIN (rest of query...)
There are several reasons as to why Symfony is slow.
1. Server fault
First, it could be the server's fault: server performance may hinder your query time.
2. Data size and deferred rendering
Then comes the data size. As you can see in the image below, the query on one of my projects has a 50 MB data size (currently about 20k rows).
Parsing 50 MB into HTML can take some time, mostly because of loops.
Still, there are solutions for this, like deferred rendering.
Deferred rendering is quite simple: instead of parsing the data in your Twig template, you send all the data to a JavaScript variable and use JavaScript to parse/render the data once the DOM is loaded.
3. Query optimisation
As I wrote in a comment, you can check the following question, in which I explained why custom queries are important:
Are Doctrine relations affecting application performance?
In that question, you will read that order matters... It's in fact the most important thing.
While static data in your database is often inserted in the right order, that's rarely the case for dynamic data (data provided by users during the website's life).
This is why using ORDER BY in your query will often speed up the page rendering, as Doctrine won't be doing extra queries on its own.
As an example, one of my sites has about 700 entries displayed on the index.
First, here is the query count while using findAll():
It shows 254 queries (253 duplicates) in 144 ms, plus 39 ms of render time.
Next, using the second parameter of findBy(), the ORDER BY, I get this result:
You can see the full query here (the screenshot is big).
Much better: only 1 query, in 8 ms, with about the same render time.
But here, I don't use any fields from associations.
The moment I do, Doctrine will run extra queries of its own, and the query count and time will skyrocket.
In the end, it turns back into something like findAll().
And last, here is the custom query:
In this custom query, the query time went from 8 ms to 38 ms.
But, unlike the previous query, I get far more data in the result, which prevents Doctrine from running extra queries.
Again, ORDER BY matters in this query. Without it, I skyrocket back to 84 queries.
4. Partials
When you do a custom query, you can load partial objects instead of full entities.
As you said in your question, the description field seems to slow down your loading speed;
with partials, you can avoid loading some fields of the table, which will speed up the query.
First, instead of your regular syntax, this is how you will create the query builder :
$em=$this->getEntityManager();
$qb=$em->createQueryBuilder();
Just in case, I prefer to keep $em as a separate variable (if I want to fetch some class repository for example).
Then you can start your partial select. Careful: the first select can't include any association fields:
$qb->select("partial job.{id, status, title, minimum_experience, min_salary, max_salary, zip_code, vacancies")
->from(Job::class, "job");
Then you can add your associations :
$qb->addSelect("company")
->join("job.company", "company");
Or even add partial association in case you don't need all the data of the association :
$qb->addSelect("partial activitySector.{id}")
->join("job.activitySector", "activitySector");
$qb->addSelect("partial job.{id, company_id, activity_sector_id, status, active, contract_type_id, salary_type_id, workday_id, geoname_id, title, minimum_experience, min_salary, max_salary, zip_code, vacancies, show_salary");
5. Caches
You could also use various caches, like Zend OPcache for PHP; you will find some advice in this question: Why Symfony3 so slow?
There is also Varnish, an HTTP cache.
That rounds up about everything I can share to lower your loading time.
I hope it proves useful and that you will be able to solve your problem.
So many keys; try to minimize the number of keys.
I have developed an online auction system in which users can sell or buy goods. My problem is with retrieving auction-related information that lives in two separate tables: one holds information such as (auction_id, owner, title, description, base_price, ...), and the other holds the bids for each auction: (bid_id, auction_id, bidder, price, date). Each user may post several auctions, or none. For each auction I want to show the highest price, the bidder who offered that price, and the number of bids, in addition to the information stored in the auction table.
But when I join the two tables, if there are no bids for an auction the result is zero rows, and the user sees the message "there is no information to show" even though they have just posted a new auction. What should I do? Should I check whether each auction has bids and, if so, fetch that information separately? Doesn't that duplicate code? That way I would have to connect to the DB twice in a single request for the profile page.
Here are my tables and my current query:
create table `auction`(
`auction_id` INT UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,
`owner` VARCHAR(32) NOT NULL,
`group_id` TINYINT UNSIGNED NOT NULL ,
`title` VARCHAR(100) NOT NULL,
`sale_type` VARCHAR(1) NOT NULL,
`base_price` INT NOT NULL,
`min_increase` INT NULL,
`photo` VARCHAR(200) NULL,
`description` VARCHAR(500) NOT NULL,
`start_date` DATETIME NOT NULL,
`termination_date` DATETIME NULL,
`sold` VARCHAR(1) NOT NULL DEFAULT 0,
`purchaser` VARCHAR(32) NULL,
`deleted` VARCHAR(1) NOT NULL DEFAULT 0,
FOREIGN KEY(owner) REFERENCES users(user_name) on delete cascade on update cascade,
FOREIGN KEY(purchaser) REFERENCES users(user_name) on delete cascade on update cascade,
FOREIGN KEY(group_id) REFERENCES commodity_groups(group_id) on delete cascade on update cascade)
ENGINE=InnoDB default charset=utf8;
create table `bid`(
`bid_id` INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
`auction_id` INT UNSIGNED NOT NULL,
`bidder` VARCHAR(32) NOT NULL,
`price` INT NOT NULL,
`date` DATETIME NOT NULL,
`deleted` VARCHAR(1) NOT NULL DEFAULT 0,
FOREIGN KEY(auction_id) REFERENCES auction(auction_id) ON DELETE CASCADE ON UPDATE CASCADE,
FOREIGN KEY(bidder) REFERENCES users(user_name) ON DELETE CASCADE ON UPDATE CASCADE)
ENGINE=InnoDB default charset=utf8;
And here is my query (I use prepared statements):
SELECT `auction`.`auction_id` , `title` , `base_price` , `min_increase` , `photo` , `description` , `start_date` , `termination_date` , `max_bidder` , `bids_count` , `max_bid`
FROM `auction` , (
SELECT `bid`.`auction_id` , `bidder` AS max_bidder, `bids_count` , `max_bid`
FROM `bid` , (
SELECT `auction_id` , count( bid_id ) AS bids_count, max( price ) AS max_bid
FROM `bid`
WHERE `auction_id`
IN (
SELECT `auction_id`
FROM `auction`
WHERE `owner` = ?
)
GROUP BY (
auction_id
)
) AS temp
WHERE `bid`.`auction_id` = `temp`.`auction_id`
AND `price` = `max_bid`
) AS temp2
WHERE `auction`.`auction_id` = `temp2`.`auction_id`
It is clear that if there are no bids for an auction, the result will be zero rows and the auction will not be shown to the user in their profile, even though they have just posted it. I would be thankful if anybody could help me.
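For reference, the usual fix here is a LEFT JOIN, which keeps auctions that have no bids and returns NULL for the bid columns; a minimal sketch against the tables above:
SELECT a.auction_id, a.title, a.base_price, a.min_increase, a.photo,
       a.description, a.start_date, a.termination_date,
       b.bidder AS max_bidder, t.bids_count, t.max_bid
FROM auction a
LEFT JOIN (
    SELECT auction_id, COUNT(bid_id) AS bids_count, MAX(price) AS max_bid
    FROM bid
    GROUP BY auction_id
) t ON t.auction_id = a.auction_id
-- join back to bid to find who placed the highest price;
-- a tie on the top price would return one row per tying bid
LEFT JOIN bid b ON b.auction_id = a.auction_id AND b.price = t.max_bid
WHERE a.owner = ?;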
What you have is more of a database design problem and a future scalability problem than an actual query problem. You know you can make two requests if you want to.
If you care about scaling things up, you're going to have to think very carefully about what user information you want to replicate across multiple servers, and how you're going to synchronize that. The basic answer is: Yes, you use joins to include the user information you want. But a more complicated answer is that you might want to create mini tables with just a little bit of user information (duplicated and synchronized) that you can join very quickly, which no user would ever write to -- in other words they are written only by the master table either through a slave setup or with some cron job.
A lot depends on how large you expect your site to be and how many people might be writing to the users table. It's assumed that many people will be writing to the auction table, so ideally you don't want ANY foreign key dependencies on that table, or you will get deadlocks. It should probably be a MyISAM or FEDERATED table.
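As a sketch of that mini-table idea (the table and its display_name column are assumptions, not part of the schema above): a narrow, read-only projection of users that web requests only ever read, refreshed by the master via replication or a cron job:
-- Narrow, read-only copy of the user columns needed for display.
CREATE TABLE users_mini (
  user_name VARCHAR(32) NOT NULL PRIMARY KEY,
  display_name VARCHAR(64) NOT NULL
) ENGINE=InnoDB;
-- Joins against it stay cheap and never contend with user writes:
SELECT a.auction_id, a.title, u.display_name
FROM auction a
JOIN users_mini u ON u.user_name = a.owner;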
I am in the process of writing a web-based quiz application using PHP and MySQL. I don't want to bore you with the details of it particularly, so here's what (I think) you need to know.
Questions are all multiple choice, and can be stored in a simple table with a few columns:
ID: The question number (primary index)
Category: The category this question falls under (e.g. animals,
vegetables, minerals)
Text: The question stem (e.g. What is 1+1?)
Answer1: A possible answer (e.g. 2)
Answer2: A possible answer (e.g. 3)
Answer3: A possible answer (e.g. 4)
CorrectAnswer: The correct answer to the question (either 1, 2 or 3 (in this case 1))
Users can sign up by creating a username and password, and then attempt questions from categories.
The problem is that the questions I'm writing are designed to be attempted more than once. However, users need to be given detailed feedback on their progress. The FIRST attempt at a question matters, and contributes to a user's overall 'questions answered first time' score. I therefore need to keep track of how many times a question has been attempted.
Since the application is designed to be flexible, I would like to have support for many hundreds of users attempting many thousands of questions. Thus, trying to integrate this information into the user table or questions table seems to be impossible. The way I would like to approach this problem is to create a new table for each user when they have signed up, with various columns.
Table Name: A user's individual table (e.g. TableForUser51204)
QuestionID: The ID of a question that the user has attempted.
CorrectFirstTime: A boolean value stating whether or not the
question was answered correctly first time.
Correct: The number of times the question has been answered
correctly.
Incorrect: The number of times the question has been answered
incorrectly.
So I guess what I would like to ask is whether or not organising the database in this manner is a wise thing to do. Is there a better approach than creating a new table for each user? How much would this hinder performance if there are, say, 500 users and 2,000 questions?
Thanks.
You don't want to be creating a new table per user. Instead, modify your database structure.
Normally, you'd have a table for questions, a table for options (with maybe a boolean column to indicate if it's the correct answer), a users table, and a join table on users and options to store users' responses. A sample schema:
CREATE TABLE `options` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`question_id` int(10) unsigned NOT NULL,
`text` varchar(255) NOT NULL,
`correct` tinyint(1) NOT NULL,
PRIMARY KEY (`id`),
KEY `question_id` (`question_id`)
) ENGINE=InnoDB;
CREATE TABLE `options_users` (
`option_id` int(10) unsigned NOT NULL,
`user_id` int(10) unsigned NOT NULL,
`created` timestamp NOT NULL,
KEY `option_id` (`option_id`),
KEY `user_id` (`user_id`)
) ENGINE=InnoDB;
CREATE TABLE `questions` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`question` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id` (`id`,`question`)
) ENGINE=InnoDB;
CREATE TABLE `users` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`username` varchar(60) NOT NULL,
`password` char(40) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `username` (`username`)
) ENGINE=InnoDB;
ALTER TABLE `options`
ADD CONSTRAINT `options_ibfk_1` FOREIGN KEY (`question_id`) REFERENCES `questions` (`id`) ON DELETE CASCADE ON UPDATE CASCADE;
ALTER TABLE `options_users`
ADD CONSTRAINT `options_users_ibfk_2` FOREIGN KEY (`option_id`) REFERENCES `options` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
ADD CONSTRAINT `options_users_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE CASCADE;
This links options to questions, and users' responses to options. I've also added a created column to the options_users table so you can see when a user answered the question and track their progress over time.
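With this layout, the "correct first time" figure the asker needs can be derived instead of stored; one possible sketch (it assumes the earliest created row per user and question is the first attempt, and that timestamps don't tie):
-- Per user and question: total attempts, correct attempts, and
-- whether the earliest attempt picked the correct option.
SELECT ou.user_id,
       o.question_id,
       COUNT(*) AS attempts,
       SUM(o.correct) AS correct_attempts,
       SUM(o.correct AND ou.created =
           (SELECT MIN(ou2.created)
            FROM options_users ou2
            JOIN options o2 ON o2.id = ou2.option_id
            WHERE ou2.user_id = ou.user_id
              AND o2.question_id = o.question_id)) AS correct_first_time
FROM options_users ou
JOIN options o ON o.id = ou.option_id
GROUP BY ou.user_id, o.question_id;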
I have inherited a PHP project, and the client wants to add some functionality to their CMS. Basically, the CMS lets them create news items. All the news starts with the same base content, which is saved in one table; the actual news headlines and articles are saved in another table, and the images for the news are saved in a third. If the base row for a news item is deleted, I need all the related rows to be deleted too. The database is not set up with foreign keys, so I cannot use cascading deletes. How can I delete all the related content when all I know is the ID of the base news row?
Any help would be very welcome; I am sorry I cannot give you much more detail. Here is the original SQL of the table schema, if that helps:
--
-- Table structure for table `mailers`
--
CREATE TABLE IF NOT EXISTS `mailers` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`mailer_title` varchar(150) NOT NULL,
`mailer_header` varchar(60) NOT NULL,
`mailer_type` enum('single','multi') NOT NULL,
`introduction` varchar(80) NOT NULL,
`status` enum('live','dead','draft') NOT NULL,
`flag` enum('sent','unsent') NOT NULL,
`date_mailer_created` int(11) NOT NULL,
`date_mailer_updated` int(10) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=13 ;
-- --------------------------------------------------------
--
-- Table structure for table `mailer_content`
--
CREATE TABLE IF NOT EXISTS `mailer_content` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`headline` varchar(60) NOT NULL,
`content` text NOT NULL,
`mailer_id` int(11) NOT NULL,
`position` enum('left','right','centre') DEFAULT NULL,
`created_at` int(10) NOT NULL,
`updated_at` int(10) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=18 ;
-- --------------------------------------------------------
--
-- Table structure for table `mailer_images`
--
CREATE TABLE IF NOT EXISTS `mailer_images` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` varchar(150) NOT NULL,
`filename` varchar(150) NOT NULL,
`mailer_id` int(11) NOT NULL,
`content_id` int(11) DEFAULT NULL,
`date_created` int(10) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=15 ;
It is worth noting that the schema cannot be changed, nor can I change the DB to MyISAM so that I can use foreign keys.
Add a foreign key to table mailer_content:
ALTER TABLE mailer_content
  ADD FOREIGN KEY (mailer_id)
  REFERENCES mailers (id)
  ON DELETE CASCADE;
Add a foreign key to table mailer_images:
ALTER TABLE mailer_images
  ADD FOREIGN KEY (content_id)
  REFERENCES mailer_content (id)
  ON DELETE CASCADE;
http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-constraints.html
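With both constraints in place, deleting the base row sweeps the children in one statement; a sketch (the id is a placeholder):
-- Removes the mailer, its mailer_content rows (via mailer_id), and
-- any mailer_images tied to that content (via content_id).
DELETE FROM mailers WHERE id = 12;
Note that images linked only through mailer_images.mailer_id, with a NULL content_id, would need a third constraint on that column to be swept up as well.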
It is worth noting that the schema cannot be changed, nor can I change the DB to MyISAM so that I can use foreign keys.
Why can't the schema be changed? You designed the app, didn't you? Even if you didn't, adding the proper keys is just a matter of adding the right indexes and then altering the right columns. @Michael Pakhantosv's answer has what looks to be the right bits of SQL.
Further, it's InnoDB that does foreign keys, not MyISAM. You're fine there already.
If you could change the schema, making the appropriate IDs actual, real foreign keys and using ON DELETE CASCADE would work. Or maybe triggers, but those are just asking for it.
Now, for some reason, ON DELETE CASCADE isn't liked very much around here. I disagree with other people's reasons for not liking it, but I don't disagree with their sentiment. Unless your application was designed to grok ON DELETE CASCADE, you're in for a world of trouble.
But, given your requirement...
basically if the base row for the news is deleted I need all the related rows to be deleted
... that's asking for ON DELETE CASCADE.
So, this might come as a shock, but if you can't modify the database, you'll just have to do your work in the code. I'd imagine that deleting a news article happens in only one place in your code, right? If not, it'd better. Fix that first. Then just make sure you delete all the proper rows in an appropriate order. And then document it!
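In SQL terms, that routine might look like the following sketch, wrapped in one transaction so a failure can't leave orphans behind (the id is a placeholder):
START TRANSACTION;
-- children first, parent last
DELETE FROM mailer_images WHERE mailer_id = 12;
DELETE FROM mailer_content WHERE mailer_id = 12;
DELETE FROM mailers WHERE id = 12;
COMMIT;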
If you cannot change the schema, then triggers are not an option.
InnoDB supports transactions, so deleting from two tables should not be an issue. What exactly is your problem?
P.S. It would be worth noting which version of the server you are using.