Create secure anchor references to database entries - php

I am working on a workbank in CodeIgniter where people add, edit, and review documents. Document information is stored in a MySQL table somewhat like this:
CREATE TABLE `document` (
`owner` varchar(15) NOT NULL,
`id` smallint(6) NOT NULL AUTO_INCREMENT,
`pdf` varchar(250) NOT NULL,
`title` varchar(250) NOT NULL,
`permission` tinyint(1) NOT NULL DEFAULT '1',
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`,`pdf`)
) ENGINE=InnoDB AUTO_INCREMENT=35 DEFAULT CHARSET=utf8
The pdf field is the name of the file. Combined with the id, the two columns make up the primary key of the table.
On the frontend part of the application, users often see and use lists of these documents and I want to attach links to them. Imagine that such a link can be created like this:
<a href='some_controller/some_method?pdf=<?php echo $document->handle; ?>'>Link</a>
Where the handle attribute of a supposed $document object is a unique identifier that method some_method uses to load the document that the user clicked.
What is the most secure database field for this? I considered the id field but its auto_increment seems insecure and would allow people to fiddle with the HTML to get forbidden documents. Should I add another unique column with a random n-digit or string hash?

You can do a lot of things here.
Adding a new column to your table and saving a hashed id of the document there, as you suggest, will increase 'security' a little, since the probability of randomly typing an existing id becomes lower.
But I don't think that's the proper way to do it. My recommendation here is to create a new table that relates the available documents to every user, or a table that relates the methods to the users, depending on your application's needs.
For example:
//documents_for_users
document | user
0803161 | 1
0803162 | 1
Once this is in place, your method has to check this relation between the tables before doing anything with the document.
Controller
function some_method()
{
    $id   = $this->input->get('pdf');
    $user = $this->session->userdata('userid');

    // refuse to touch the document unless this user may see it
    if ($this->my_model->check_permission($id, $user)) {
        // do your stuff with the document then
    }
}
Model
function check_permission($id, $user)
{
    $this->db->select('*');
    $this->db->from('documents_for_users');
    $this->db->where('document', $id);
    $this->db->where('user', $user);
    $query = $this->db->get();

    return $query->num_rows() > 0;
}
This way your security would be significantly increased.
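If, on top of the permission check, you still want the extra unique column the question proposes, here is a minimal sketch (an addition, not part of the original answer) of generating a non-guessable handle in plain PHP; random_bytes() requires PHP 7+:

// Hypothetical helper: returns a random handle to store in a UNIQUE
// column alongside the auto-increment id. 16 random bytes give 2^128
// possible values, so guessing a valid handle is infeasible.
function makeHandle($bytes = 16)
{
    return bin2hex(random_bytes($bytes)); // e.g. 32 hex characters
}

Even with such a handle, keep the check_permission() call: an unguessable link only hides documents, it does not authorize access.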


Doctrine / MySQL Slow query even when using indexes

I cleaned the question a little bit because it was getting very big and unreadable.
Running on my localhost.
The query takes 755.15 ms when selecting from the job table, which contains 15,000 rows (the WHERE conditions match 6,650 of them).
The table Company contains 1000 rows.
The table geo__name contains approximately 84,300 rows and is not giving me any problems, so I believe the issue lies in the database structure or something similar.
The structure of these 2 tables is the following:
Table Job is:
CREATE TABLE `job` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
`company_id` int(11) NOT NULL,
`activity_sector_id` int(11) DEFAULT NULL,
`status` int(11) NOT NULL,
`active` datetime NOT NULL,
`contract_type_id` int(11) NOT NULL,
`salary_type_id` int(11) NOT NULL,
`workday_id` int(11) NOT NULL,
`geoname_id` int(11) NOT NULL,
`title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`minimum_experience` int(11) DEFAULT NULL,
`min_salary` decimal(7,2) DEFAULT NULL,
`max_salary` decimal(7,2) DEFAULT NULL,
`zip_code` int(11) DEFAULT NULL,
`vacancies` int(11) DEFAULT NULL,
`show_salary` tinyint(1) NOT NULL,
PRIMARY KEY (`id`),
KEY `created_at` (`created_at`,`active`,`status`) USING BTREE,
CONSTRAINT `FK_FBD8E0F823F5422B` FOREIGN KEY (`geoname_id`) REFERENCES `geo__name` (`id`),
CONSTRAINT `FK_FBD8E0F8398DEFD0` FOREIGN KEY (`activity_sector_id`) REFERENCES `activity_sector` (`id`),
CONSTRAINT `FK_FBD8E0F85248165F` FOREIGN KEY (`salary_type_id`) REFERENCES `job_salary_type` (`id`),
CONSTRAINT `FK_FBD8E0F8979B1AD6` FOREIGN KEY (`company_id`) REFERENCES `company` (`id`),
CONSTRAINT `FK_FBD8E0F8AB01D695` FOREIGN KEY (`workday_id`) REFERENCES `workday` (`id`),
CONSTRAINT `FK_FBD8E0F8CD1DF15B` FOREIGN KEY (`contract_type_id`) REFERENCES `job_contract_type` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=15001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
The table company is:
CREATE TABLE `company` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`logo` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
`website` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`user_id` int(11) NOT NULL,
`phone` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`cifnif` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`type` int(11) NOT NULL,
`subscription_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `UNIQ_4FBF094FA76ED395` (`user_id`),
KEY `IDX_4FBF094F9A1887DC` (`subscription_id`),
KEY `name` (`name`(191)),
CONSTRAINT `FK_4FBF094F9A1887DC` FOREIGN KEY (`subscription_id`) REFERENCES `subscription` (`id`),
CONSTRAINT `FK_4FBF094FA76ED395` FOREIGN KEY (`user_id`) REFERENCES `user` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
The query is the following:
SELECT
j0_.id AS id_0,
j0_.status AS status_1,
j0_.title AS title_2,
j0_.min_salary AS min_salary_3,
j0_.max_salary AS max_salary_4,
c1_.id AS id_5,
c1_.name AS name_6,
c1_.logo AS logo_7,
a2_.id AS id_8,
a2_.name AS name_9,
g3_.id AS id_10,
g3_.name AS name_11,
j4_.id AS id_12,
j4_.name AS name_13,
j5_.id AS id_14,
j5_.name AS name_15,
w6_.id AS id_16,
w6_.name AS name_17
FROM
job j0_
INNER JOIN company c1_ ON j0_.company_id = c1_.id
INNER JOIN activity_sector a2_ ON j0_.activity_sector_id = a2_.id
INNER JOIN geo__name g3_ ON j0_.geoname_id = g3_.id
INNER JOIN job_salary_type j4_ ON j0_.salary_type_id = j4_.id
INNER JOIN job_contract_type j5_ ON j0_.contract_type_id = j5_.id
INNER JOIN workday w6_ ON j0_.workday_id = w6_.id
WHERE
j0_.active >= CURRENT_TIMESTAMP
AND j0_.status = 1
ORDER BY
j0_.created_at DESC
When executing the above query I have these results:
In MySQL Workbench: 0.578 sec / 0.016 sec
In the Symfony profiler: 755.15 ms
The question is: is this query duration normal? If not, how can I improve its speed? It seems like too much.
(The original question attached screenshots of the Symfony debug toolbar, the retrieved data — showing only the fields I really need — the EXPLAIN output, and the query timeline.)
The MySQL server can't handle the load being placed on it. This could be due to resource contention, to the server not being appropriately tuned, or to a problem with your hard drive.
First, I would start improving performance by adding the MySQL keyword STRAIGHT_JOIN, which tells MySQL to join the tables in the order you wrote them instead of letting the optimizer pick the order itself. With your dataset being so small, and the query already at about half a second, I don't know if that will help as much, but on larger datasets I have known it to SIGNIFICANTLY improve performance.
Next, you appear to be pulling lookup descriptions via the PK/FK relationships. Not seeing the indexes on those tables, I would suggest covering indexes that contain both the key and the description, so the join can read the description straight from the index pages it already uses for the JOIN, instead of using the index page, then visiting the actual data pages to fetch the description, and continuing.
Last, your job table's index on (created_at, active, status) might perform better reordered as (status, active, created_at).
With your existing index, think of it this way: each day of data goes into a single box, keyed by created_at. Within each day's box, entries are sorted by the active timestamp, and only then by status.
So, for each CREATED day, you open a box. Inside it are secondary boxes, one per active timestamp (say, per day). Only inside each of those can you see which records have status = 1. You open each active-timestamp box, pick out the status = 1 records, close the created-day box, move on to the next created day, and repeat. Consider how labor-intensive that is: every day box, and every active box within it, must be opened.
Now, under the suggested index starting with status, you have a very finite number of boxes, one per status. You open only the single box for status = 1; those are the only records you want to consider, and all the others are ignored. Inside that box, the records are sub-sorted by the ACTIVE timestamp, so you can jump directly to those at or after the current timestamp; from the first matching record onward, everything in the box qualifies. Done. And since the index ALSO carries created_at, it can satisfy the descending sort as well.
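As a sketch, that reordering might be applied like this (the existing index is named created_at in the schema above; the new index name is illustrative):

ALTER TABLE job
    DROP INDEX created_at,
    ADD INDEX status_active_created (status, active, created_at);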
For ensuring "covering indexes" on the other lookup tables, if they do not yet exist, I suggest the following:
table index
company ( id, name, logo )
activity_sector (id, name )
geo__name ( id, name )
job_salary_type ( id, name )
job_contract_type ( id, name )
workday ( id, name )
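A sketch of how those might be declared, if they do not exist yet (index names are illustrative):

ALTER TABLE company ADD INDEX cover_company (id, name, logo);
ALTER TABLE activity_sector ADD INDEX cover_activity_sector (id, name);
ALTER TABLE geo__name ADD INDEX cover_geo_name (id, name);
ALTER TABLE job_salary_type ADD INDEX cover_job_salary_type (id, name);
ALTER TABLE job_contract_type ADD INDEX cover_job_contract_type (id, name);
ALTER TABLE workday ADD INDEX cover_workday (id, name);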
And the MySQL Keyword...
SELECT STRAIGHT_JOIN (rest of query...)
There are several possible reasons why Symfony is slow.
1. Server fault
First, it could be the server's fault. Server performance may hinder your query time.
2. Data size and deferred rendering
Then comes the data size. As an example, the query on one of my projects has a 50 MB data size (currently about 20k rows).
Parsing 50 MB into HTML can take some time, mostly because of loops.
Still, there are solutions for this, like deferred rendering.
Deferred rendering is quite simple: instead of parsing the data in your Twig template, you send all the data to a JavaScript variable and use JavaScript to parse/render it once the DOM is loaded.
3. Query optimisation
As I wrote in a comment, you can check the following question, in which I explained why custom queries are important:
Are Doctrine relations affecting application performance?
In that question, you will read that order matters... It's in fact the most important thing.
While static data in your database is often inserted in the right order, that's rarely the case for dynamic data (data provided by users during the website's life).
That is why using ORDER BY in your query will often speed up the page rendering: Doctrine won't be doing extra queries on its own.
As an example, one of my sites has about 700 entries displayed on the index.
First, here is the query count while using findAll():
It shows 254 queries (253 duplicates) in 144 ms, plus 39 ms of render time.
Next, using the second parameter of findBy(), ORDER BY, I get this result:
You can see the full query here (the screenshot is big).
Much better: only 1 query, in 8 ms, and about the same render time.
But here I don't use any fields from the associations.
The moment I do, Doctrine will run some extra queries, and the query count and time will skyrocket.
In the end, it turns back into something like findAll().
And last, this is the custom query:
In this custom query, the query time went from 8 ms to 38 ms.
But, unlike the previous query, I get far more data in my result, which prevents Doctrine from doing extra queries.
Again, ORDER BY matters in this query. Without it, I skyrocket back to 84 queries.
4. Partials
When you do a custom query, you can load partial objects instead of full data.
As you said in your question, the description field seems to slow down your loading speed; with partials, you can avoid loading some fields of the table, which will speed up the query.
First, instead of your regular syntax, this is how you will create the query builder :
$em=$this->getEntityManager();
$qb=$em->createQueryBuilder();
Just in case, I prefer to keep $em as a separate variable (if I want to fetch some class repository for example).
Then you can start your partial select. Careful: the first select can't include any association fields:
$qb->select("partial job.{id, status, title, minimum_experience, min_salary, max_salary, zip_code, vacancies}")
->from(Job::class, "job");
Then you can add your associations :
$qb->addSelect("company")
->join("job.company", "company");
Or even add a partial association in case you don't need all the data of the association:
$qb->addSelect("partial activitySector.{id}")
->join("job.activitySector", "activitySector");
A fuller partial select on the job entity itself might look like this:
$qb->addSelect("partial job.{id, company_id, activity_sector_id, status, active, contract_type_id, salary_type_id, workday_id, geoname_id, title, minimum_experience, min_salary, max_salary, zip_code, vacancies, show_salary}");
5. Caches
You could also use various caches, like Zend OPcache for PHP; you will find some advice in this question: Why Symfony3 so slow?
There is also Varnish, an HTTP cache that can sit in front of the whole application.
This rounds up about everything I can share to lower your loading time.
I hope it proves useful and that you will be able to solve your problem.
So many keys! Try to minimize the number of keys.

Do I need a transaction here?

I have a web application in which users create objects that get stored in a MySQL database. Each object has a globally unique identifier that is stored in a table.
DROP TABLE IF EXISTS `uidlist`;
CREATE TABLE IF NOT EXISTS `uidlist` (
`uid` varchar(9) CHARACTER SET ascii COLLATE ascii_bin DEFAULT NULL,
`chcs` varchar(16) DEFAULT NULL,
UNIQUE KEY `uid` (`uid`)
) ENGINE=InnoDB DEFAULT CHARSET=ascii;
When a new object is to be created and stored I generate a new uid and ensure that it does not already exist in the uidlist table. (I should mention that collisions are rare since the potential range of UIDs I have is very large).
No issues here - it works just fine. However, with an increasing number of users wanting to simultaneously create and store objects, the need to check uidlist is liable to become a bottleneck.
To circumvent the problem here is what I have done:
I have a secondary table
DROP TABLE IF EXISTS `uidbank`;
CREATE TABLE IF NOT EXISTS `uidbank` (
`uid` varchar(9) CHARACTER SET ascii COLLATE ascii_bin DEFAULT NULL,
`used` tinyint(1) NOT NULL DEFAULT '0'
) ENGINE=InnoDB DEFAULT CHARSET=ascii;
I pre-populate this table at short intervals via a CRON job, which ensures that it always has 1000 uid values that have been tested for uniqueness.
When a live user requires a new UID I do the following:
function makeUID()
{
    global $dbh;

    // discard all "used" uids from previous hits on the bank
    $sql  = "DELETE FROM `uidbank` WHERE used = '1';";
    // set up a new hit
    $sql .= "UPDATE `uidbank` SET used = '1' WHERE used = '0' LIMIT 1;";
    $dbh->exec($sql);

    // now pick up the uid hit we just set up
    $sql = "SELECT uid FROM `uidbank` WHERE used = '1'";
    $uid = $dbh->query($sql)->fetchColumn();

    // return the "safe" uid ready for use
    return $uid;
}
No issues here either. It works perfectly well in my single-user test environment. However, my SQL skills are pretty basic, so I am not 100% sure:
1. that this is the right way to handle the job;
2. that my "safe" UID pickup method will not return unsafe values because, in the meantime, another user has been assigned the same UID.
I'd much appreciate any tips on how this scheme might be improved.
Any reason you are not using a serial (auto-increment) as your unique identifier? That would definitely be my first suggestion, and it would remove the need for the complicated setup you have.
Assuming there is some reason, the biggest flaw I can see in your current makeUID call is a race: one call's UPDATE sets used to '1', and a second call's DELETE removes that row before the first call has managed to SELECT it back, meaning your final statement (SELECT uid FROM uidbank WHERE used = '1') would return no rows.
Could you explain why you don't use a serial? Then I can try to get a better idea of what is going on.
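For illustration, here is a minimal sketch (an addition, not part of the original answer) of closing that race with a transaction and a row lock, assuming PDO in exception mode and the InnoDB uidbank table from the question:

function makeUID()
{
    global $dbh;

    $dbh->beginTransaction();
    try {
        // Lock one unused row; concurrent callers block here instead of racing.
        $uid = $dbh->query(
            "SELECT uid FROM `uidbank` WHERE used = '0' LIMIT 1 FOR UPDATE"
        )->fetchColumn();

        if ($uid === false) {
            // Bank is empty; the CRON job needs to refill it.
            $dbh->rollBack();
            return null;
        }

        // Consume the row outright instead of flagging it for later cleanup.
        $stmt = $dbh->prepare("DELETE FROM `uidbank` WHERE uid = ?");
        $stmt->execute(array($uid));

        $dbh->commit();
        return $uid;
    } catch (Exception $e) {
        $dbh->rollBack();
        throw $e;
    }
}

Because the row lock is held until COMMIT, two concurrent callers can never be handed the same uid.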

Can MySQL handle 100 million+ rows? [duplicate]

This question already has answers here: How many rows in a database are TOO MANY? (10 answers)
I run a small-to-medium car website and we are trying to log how many times a visitor goes to a vehicle's detail page. We do this by hashing (md5) the make, model, and zip of the current vehicle. We then keep a vehicle_count total and increment it when the hashes match.
After running the numbers, there appear to be about 50 makes, each make has about 50 models, and our locations db has about 44,000 unique zip codes: roughly 100 million+ potential unique hashes.
This is the create table:
CREATE TABLE `vehicle_detail_page` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`vehicle_hash` char(32) NOT NULL,
`make` varchar(100) NOT NULL,
`model` varchar(100) NOT NULL,
`zip_code` char(7) DEFAULT NULL,
`vehicle_count` int(6) unsigned DEFAULT '1',
PRIMARY KEY (`id`),
UNIQUE KEY `vehicle_hash` (`vehicle_hash`),
KEY `make` (`make`),
KEY `model` (`model`),
KEY `zip_code` (`zip_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
This is the PHP code to insert/update the table:
public function insertUpdate($make, $model, $zip)
{
// set table
$table = self::TABLE;
// create hash
$hash = md5($make.$model.$zip);
// insert or update count
try
{
$stmt = $this->db->conn->prepare("INSERT INTO $table
(vehicle_hash,
make,
model,
zip_code)
VALUES
(:vehicle_hash,
:make,
:model,
:zip_code)
ON DUPLICATE KEY UPDATE
vehicle_count = vehicle_count + 1;");
$stmt->bindParam(':vehicle_hash', $hash, PDO::PARAM_STR);
$stmt->bindParam(':make', $make, PDO::PARAM_STR);
$stmt->bindParam(':model', $model, PDO::PARAM_STR);
$stmt->bindParam(':zip_code', $zip, PDO::PARAM_STR);
$stmt->execute();
} catch (Exception $e)
{
return FALSE;
}
return TRUE;
}
Questions:
Can MySQL handle this many rows?
Does anyone see anything wrong with this code, and is there a better way to do this?
What will querying this data be like?
The big question is: once this table grows, how will the PHP function above perform? If/when the table has a few million+ rows, how will it hold up? Can anyone give some insight?
You could also avoid the hash altogether.
CREATE TABLE `vehicle_visits` (
`make` varchar(100) DEFAULT NULL,
`model` varchar(100) DEFAULT NULL,
`zip_code` char(7) DEFAULT NULL,
`vehicle_count` int(11) DEFAULT NULL,
UNIQUE KEY `make_model_zip` (`make`,`model`,`zip_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
This avoids having multiple UNIQUE values. Instead of an "id" and a "hash", you use real-world values to build the UNIQUE identifier. Notice how MySQL can use 3 columns to form a unique index.
Note: to decrease the size of your index, you can shrink the make and model columns, unless you are expecting 100-character make and model names, of course. If you are worried about size, you can also create an index using a prefix of each of the columns.
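With that composite unique key, the question's upsert works unchanged and without any hash column (a sketch; the values are illustrative):

INSERT INTO vehicle_visits (make, model, zip_code, vehicle_count)
VALUES ('Honda', 'Civic', '90210', 1)
ON DUPLICATE KEY UPDATE vehicle_count = vehicle_count + 1;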
Edit: adding the hash column as an index method
As an alternative to a composite index, you can introduce a column
that is “hashed” based on information from other columns. If this
column is short, reasonably unique, and indexed, it might be faster
than a “wide” index on many columns.
http://dev.mysql.com/doc/refman/5.0/en/multiple-column-indexes.html
You will need to do some real-world tests to see which method is quicker. Since the data shows about 50 makes and 50 models, lookups will mostly hinge on the zip_code column. Index order also makes a difference. Also, creating an index using prefixes such as make(10), model(10), zip(7) creates an index of length 27, whereas an md5 column would be 32.
The hash method may help with lookups, but will it really help in real-world use? This table seems to track visitors and will most likely have analytics run on it. The composite index will help with SUM() operations (depending on the order of the index). For example, finding the total number of visits to "Honda" or "Honda Civic" pages is easy with the multiple-column index.
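For instance, a sketch of those aggregations against the vehicle_visits table above; both use the leftmost prefix of the (make, model, zip_code) index:

-- total visits across all Honda pages
SELECT SUM(vehicle_count) FROM vehicle_visits WHERE make = 'Honda';

-- total visits for Honda Civic pages, all zip codes
SELECT SUM(vehicle_count) FROM vehicle_visits
WHERE make = 'Honda' AND model = 'Civic';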

Mysql storing lots of bit sized settings

I have ~38 columns for a table.
ID, name, and the other 36 are bit-sized settings for the user.
The 36 other columns are grouped into 6 "settings", e.g. Setting1_on, Setting1_colored, etc.
Is this the best way to do this?
Thanks.
If it must be in one table and they're all toggle-type settings like yes/no, true/false, etc., use TINYINT to save space.
I'd recommend creating a separate 'setting' table with 36 records, one for each option, then a linking table to the user table with a value column to record each user's settings. This creates a many-to-many link for the user settings, and it also makes it easy to add a new setting: just add a new row to the 'setting' table.
Here is an example schema. I use varchar for the setting value to allow for later settings which might not be bits, but feel free to use TINYINT if size is an issue. This solution will not use as much space as the single-table approach, which risks a large, sparsely populated set of columns.
CREATE TABLE `user` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(64) DEFAULT NULL,
`address` varchar(64) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `setting` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(64) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `setting_user` (
`user_id` int(11) NOT NULL DEFAULT '0',
`setting_id` int(11) unsigned NOT NULL,
`value` varchar(32) DEFAULT NULL,
PRIMARY KEY (`user_id`,`setting_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
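Reading one user's settings back out is then a simple join (a sketch; the user id is illustrative):

SELECT s.name, su.value
FROM setting_user su
JOIN setting s ON s.id = su.setting_id
WHERE su.user_id = 42;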
It all depends on how you want to access them. If you want to (or must) select just one of them, go with @Ray's solution. If they can be functionally grouped (really grouped, not some pretend grouping of all those that start with F), i.e. you'll always need several of them together for a function, and reading and writing them as individual flags makes no sense, then storing them as ints and using logic operators on them might be a goer, as sketched below.
That said, unless you are doing a lot of reads and writes to the db during a session, bundling them up into ints gains you very little performance-wise. It would save some space in the DB if all the options had to exist; if absent can simply mean false, it could be a toss-up.
So, all things being unequal, I'd go with Mr Ray.
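For the ints-plus-logic-operators route, a minimal sketch in PHP (flag names are illustrative, not from the question):

// Each flag occupies one bit of a single integer column.
const SETTING1_ON      = 1 << 0; // 1
const SETTING1_COLORED = 1 << 1; // 2
const SETTING1_VISIBLE = 1 << 2; // 4

$settings = SETTING1_ON | SETTING1_COLORED; // value 3, stored in one INT column

// Test a flag with bitwise AND:
if ($settings & SETTING1_COLORED) {
    // Setting1_colored is enabled
}

// Turn a flag off:
$settings &= ~SETTING1_VISIBLE;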
MySQL has a SET type that could be useful here. Everything would fit into a single SET, but six SETs might make more sense.
http://dev.mysql.com/doc/refman/5.5/en/set.html
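A sketch of the six-SET variant (table and member names are illustrative):

CREATE TABLE `user_setting` (
  `user_id` int(11) unsigned NOT NULL,
  `setting1` SET('on','colored','bold') NOT NULL DEFAULT '',
  `setting2` SET('on','colored','bold') NOT NULL DEFAULT '',
  PRIMARY KEY (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- e.g. turn two Setting1 flags on for one user:
UPDATE user_setting SET setting1 = 'on,colored' WHERE user_id = 42;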

What's a good DB schema to store high volume logging data?

I'm adding "activity log" to a busy website, which should show user the last N actions relevant to him and allow going to a dedicated page to view all the actions, search them etc.
The DB used is MySQL, and I'm wondering how the log should be stored. I've started with a single MyISAM table used for FULLTEXT searches, and to avoid extra select queries on every action: 1) an insert into that table happens, and 2) the APC cache for each user is updated, so on the next page request MySQL is not used. The cache has a long lifetime, and if it's missing, the user's first AJAX request recreates it.
I'm caching the 3 last events for each user, so when a new event happens, I grab the current cache, add the new event to the beginning, and remove the oldest event, so there are always 3 of them in the cache. Every page of the site has a small box displaying those.
Is this a proper setup? How would you recommend implementing this sort of feature?
The schema I have is:
CREATE DATABASE `audit`;
CREATE TABLE `event` (
`eventid` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY ,
`userid` INT UNSIGNED NOT NULL ,
`createdat` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ,
`message` VARCHAR( 255 ) NOT NULL ,
`comment` TEXT NOT NULL
) ENGINE = MYISAM CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER DATABASE `audit` DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `audit`.`event` ADD FULLTEXT `search` (
`message` ( 255 ) ,
`comment` ( 255 )
);
Based on your schema, I'm guessing that (caching aside) you'll be inserting many records per second and running fairly infrequent queries along the lines of select * from event where userid = ? order by createdat desc, probably with a paging strategy (thus requiring a "limit x" at the end of the query) to show the user their history.
You probably also want to find all users affected by a particular type of event, though more likely in an offline process (e.g. a nightly mail to all users who have updated their password); that might require a query along the lines of select userid from event where message like 'password_updated'.
Are there likely to be many cases where you want to search the body text of the comment?
You should definitely read the MySQL manual on tuning for inserts. If you don't need to search the freetext comment, I'd leave that FULLTEXT index off; I'd also consider a regular index on the "message" column.
It might also make sense to introduce the concept of "message_type" so you can introduce relational consistency (rather than relying on your code to correctly spell "password_updat3"). For instance, you might have an "event_type" table, with a foreign key relationship to your event table.
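A sketch of that suggestion (table and column names are assumptions; note that the event table would need to be converted to InnoDB for the constraint to be enforced, since MyISAM parses but ignores foreign keys):

CREATE TABLE `event_type` (
  `id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `name` VARCHAR(64) NOT NULL UNIQUE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

ALTER TABLE `event`
  ADD COLUMN `event_type_id` INT UNSIGNED NOT NULL,
  ADD CONSTRAINT `fk_event_event_type`
    FOREIGN KEY (`event_type_id`) REFERENCES `event_type` (`id`);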
As for caching: I'm guessing users would only visit their history page infrequently. Populating the cache when they visit the site, on the off-chance they might visit their history (if I've understood your design), immediately limits the scalability of your solution to how many history records you can fit into your cache; and as the history table will grow very quickly for your users, this could quickly become a significant factor.
For data like this, which moves quickly and is rarely visited, caching may not be the right solution.
This is how Prestashop does it:
CREATE TABLE IF NOT EXISTS `ps_log` (
`id_log` int(10) unsigned NOT NULL AUTO_INCREMENT,
`severity` tinyint(1) NOT NULL,
`error_code` int(11) DEFAULT NULL,
`message` text NOT NULL,
`object_type` varchar(32) DEFAULT NULL,
`object_id` int(10) unsigned DEFAULT NULL,
`id_employee` int(10) unsigned DEFAULT NULL,
`date_add` datetime NOT NULL,
`date_upd` datetime NOT NULL,
PRIMARY KEY (`id_log`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=6 ;
My advice would be to use a schemaless storage system; they perform better for high-volume logging data.
Consider:
Redis
MongoDB
Riak
or any other NoSQL system.
