MySQL trigger or coding in PHP?

I have a table hardware_description:
+------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------------+--------------+------+-----+---------+----------------+
| Computer_id | int(11) | NO | PRI | NULL | auto_increment |
| Emp_id | int(11) | NO | MUL | NULL | |
| PC_type | varchar(20) | YES | | NULL | |
| Operating_system | varchar(20) | YES | | NULL | |
| Product_key | varchar(30) | YES | | NULL | |
| Assign_date | date | YES | | NULL | |
| DVD_ROM | varchar(20) | YES | | NULL | |
| CPU | varchar(30) | YES | | NULL | |
| IP_address | varchar(30) | YES | | NULL | |
| MAC_address | varchar(30) | YES | | NULL | |
| Model_name | varchar(30) | YES | | NULL | |
| Model_number | varchar(30) | YES | | NULL | |
| Monitor | varchar(30) | YES | | NULL | |
| Processor | varchar(30) | YES | | NULL | |
| Product_name | varchar(30) | YES | | NULL | |
| RAM | varchar(20) | YES | | NULL | |
| Serial_number | varchar(30) | YES | | NULL | |
| Vendor_id | varchar(30) | YES | | NULL | |
+------------------+--------------+------+-----+---------+----------------+
Emp_id is a foreign key referencing the employees table.
When I update a particular row, I want the existing data for that row to be saved in another table, along with the timestamp of that update. Now:
a) Shall I use PHP code (a PDO transaction) to first grab that row and insert it into another table, then perform the UPDATE query on that particular row?
b) Or use a trigger on this table?
Which approach is better practice and more efficient? Is there another way of achieving this?
I have not used triggers in my short career so far, but I can use them if they are better practice.
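For concreteness, a minimal sketch of option (a) might look like this (hardware_description_history is a hypothetical audit table: the same columns as hardware_description plus a Changed_at timestamp):

CREATE TABLE hardware_description_history LIKE hardware_description;
ALTER TABLE hardware_description_history
    MODIFY Computer_id int(11) NOT NULL,   -- drop the auto_increment
    DROP PRIMARY KEY,                      -- allow many history rows per PC
    ADD COLUMN Changed_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP;

<?php
// $pdo is an existing PDO connection set to throw exceptions on error.
$pdo->beginTransaction();
try {
    // 1. Copy the current row into the audit table, stamped with NOW().
    $stmt = $pdo->prepare(
        'INSERT INTO hardware_description_history
         SELECT h.*, NOW() FROM hardware_description h WHERE h.Computer_id = ?'
    );
    $stmt->execute(array($computerId));

    // 2. Apply the actual update (RAM is just an example column).
    $stmt = $pdo->prepare(
        'UPDATE hardware_description SET RAM = ? WHERE Computer_id = ?'
    );
    $stmt->execute(array($newRam, $computerId));

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}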

If you can do a trigger, it would be a lot better to use that.
The reason is that if you ever forget to run the PHP code (or the row is updated through some other path entirely), you would end up with missing history, otherwise known as orphaned data: changes that have no corresponding audit row. A trigger fires on every update, no matter where the update comes from.
Here's the link to the MySQL documentation page for triggers: http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html
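As a minimal sketch, reusing the hypothetical hardware_description_history audit table from the question, the trigger version would be:

DELIMITER $$
CREATE TRIGGER hardware_description_bu
BEFORE UPDATE ON hardware_description
FOR EACH ROW
BEGIN
    -- Save the row as it looked before the update, with a timestamp.
    INSERT INTO hardware_description_history
        (Computer_id, Emp_id, PC_type, Operating_system, Product_key,
         Assign_date, DVD_ROM, CPU, IP_address, MAC_address, Model_name,
         Model_number, Monitor, Processor, Product_name, RAM,
         Serial_number, Vendor_id, Changed_at)
    VALUES
        (OLD.Computer_id, OLD.Emp_id, OLD.PC_type, OLD.Operating_system,
         OLD.Product_key, OLD.Assign_date, OLD.DVD_ROM, OLD.CPU,
         OLD.IP_address, OLD.MAC_address, OLD.Model_name, OLD.Model_number,
         OLD.Monitor, OLD.Processor, OLD.Product_name, OLD.RAM,
         OLD.Serial_number, OLD.Vendor_id, NOW());
END$$
DELIMITER ;

Once the trigger is in place, a plain UPDATE from PHP (or from anywhere else) leaves an audit row behind automatically; no extra application code is needed.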

Data migration into CiviCRM - keep legacy IDs

I am developing custom migration code using CiviCRM's PHP API, with calls like:
<?php
$result = civicrm_api3('Contact', 'create', array(
  'sequential'   => 1,
  'contact_type' => "Household",
  'nick_name'    => "boo",
  'first_name'   => "moo",
));
There's a need to keep original IDs, but specifying 'id' or 'contact_id' above does not work. It either does not create the contact or updates an existing one.
The ID column is auto-incremented, of course, but MySQL does support inserting arbitrary unique values into such a column.
How would you proceed? Hack CiviCRM to somehow pass the id through to MySQL in the INSERT statement? Dump the SQL after the import and manipulate the IDs in place in the .sql text file (hard to maintain integrity)? Any suggestions?
I have at least ~300,000 entries to deal with, so a fully automated and robust solution is a must. Is there any SQL magic that could do this?
For those who are not familiar with CiviCRM, the table structure is the following:
mysql> desc civicrm_contact;
+--------------------------------+------------------+------+-----+-------------------+-----------------------------+
| Field | Type | Null | Key | Default | Extra |
+--------------------------------+------------------+------+-----+-------------------+-----------------------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| contact_type | varchar(64) | YES | MUL | NULL | |
| contact_sub_type | varchar(255) | YES | MUL | NULL | |
| do_not_email | tinyint(4) | YES | | 0 | |
| do_not_phone | tinyint(4) | YES | | 0 | |
| do_not_mail | tinyint(4) | YES | | 0 | |
| do_not_sms | tinyint(4) | YES | | 0 | |
| do_not_trade | tinyint(4) | YES | | 0 | |
| is_opt_out | tinyint(4) | NO | | 0 | |
| legal_identifier | varchar(32) | YES | | NULL | |
| external_identifier | varchar(64) | YES | UNI | NULL | |
+--------------------------------+------------------+------+-----+-------------------+-----------------------------+
The desc output above is truncated; the field in question is the first one, id.
You should use the external_identifier field, which exists for exactly this purpose.
This field is not used by CiviCRM itself, so there is no risk of interfering with core functionality. It is intended to link contacts with an external system (a legacy one, for example).
CiviCRM considers external_identifier to be unique, so it will throw an error (via the API, I think) or update the existing contact (via the CiviCRM contact import screen) if you try to insert a contact with the same external_identifier.
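As a minimal sketch (assuming a hypothetical $legacyId variable holding the primary key from the old system), the create call from the question becomes:

<?php
// Hypothetical legacy primary key carried over from the old system.
$legacyId = 12345;

$result = civicrm_api3('Contact', 'create', array(
  'sequential'          => 1,
  'contact_type'        => "Household",
  'nick_name'           => "boo",
  'first_name'          => "moo",
  // Keep the legacy ID here instead of forcing civicrm_contact.id.
  'external_identifier' => $legacyId,
));

Later, related records can be matched back to their legacy IDs with a lookup such as civicrm_api3('Contact', 'get', array('external_identifier' => $legacyId)).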

Laravel 5 custom database sessions?

I'm new to Laravel (using 5.1). I have my entire DB schema (MySQL 5.5) diagrammed and have begun implementing it. The problem is, I need to adapt Laravel to use my sessions table. After making a new migration to bring the table more in line with what Laravel expects, I have this table:
+---------------+---------------------------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+---------------------------------------+------+-----+---------+----------------+
| id | bigint(20) | NO | PRI | NULL | auto_increment |
| id_hash | varchar(255) | NO | UNI | NULL | |
| user_id | bigint(20) unsigned | NO | | 0 | |
| created_at | int(10) unsigned | NO | | 0 | |
| updated_at | int(10) unsigned | NO | | 0 | |
| expires_at | int(10) unsigned | NO | | 0 | |
| last_activity | int(10) unsigned | NO | | 0 | |
| platform | enum('d','p','t','b','a','i','w','k') | NO | | d | |
| ip_address | varchar(40) | NO | | 0.0.0.0 | |
| payload | text | NO | | NULL | |
| user_agent | text | NO | | NULL | |
+---------------+---------------------------------------+------+-----+---------+----------------+
The main thing I need to accomplish is to keep id as an auto-incrementing integer (because my Session model has relationships to other models) and use id_hash as the publicly identifying string (I also plan to cut id_hash back to 64 characters), which I believe is the token in the payload.
At session creation, id_hash, platform, ip_address, and user_agent will be set, never to change again. After authentication, user_id will be populated, then cleared at logout.
I'm ok with keeping the payload handling as-is.
Is this just a matter of creating a custom class that implements SessionHandlerInterface? What else needs to be in it to handle my extra fields that isn't obvious from the session docs?
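No answer was recorded for this question, but broadly, yes: implement SessionHandlerInterface and register it with Session::extend(). Below is a minimal sketch; the class name, the 'customdb' driver name, and the base64 payload handling are assumptions modeled on Laravel's own database driver.

<?php
// app/Extensions/CustomDatabaseSessionHandler.php (hypothetical location)
namespace App\Extensions;

use Illuminate\Database\ConnectionInterface;

class CustomDatabaseSessionHandler implements \SessionHandlerInterface
{
    protected $db;

    public function __construct(ConnectionInterface $db)
    {
        $this->db = $db;
    }

    public function open($savePath, $sessionName) { return true; }
    public function close() { return true; }

    public function read($sessionId)
    {
        // Laravel hands us the public token; look it up via id_hash,
        // never via the auto-incrementing id.
        $row = $this->db->table('sessions')->where('id_hash', $sessionId)->first();
        return $row ? base64_decode($row->payload) : '';
    }

    public function write($sessionId, $data)
    {
        $now = time();
        $attributes = array(
            'payload'       => base64_encode($data),
            'updated_at'    => $now,
            'last_activity' => $now,
        );
        if ($this->db->table('sessions')->where('id_hash', $sessionId)->exists()) {
            $this->db->table('sessions')->where('id_hash', $sessionId)->update($attributes);
        } else {
            // The write-once fields are set only at creation time.
            $this->db->table('sessions')->insert($attributes + array(
                'id_hash'    => $sessionId,
                'created_at' => $now,
                'ip_address' => request()->ip(),
                'user_agent' => (string) request()->header('User-Agent'),
            ));
        }
        return true;
    }

    public function destroy($sessionId)
    {
        $this->db->table('sessions')->where('id_hash', $sessionId)->delete();
        return true;
    }

    public function gc($lifetime)
    {
        $this->db->table('sessions')
            ->where('last_activity', '<', time() - $lifetime)->delete();
        return true;
    }
}

Then register the driver (for example in a service provider's boot() method) and select it in config/session.php:

Session::extend('customdb', function ($app) {
    return new \App\Extensions\CustomDatabaseSessionHandler($app['db']->connection());
});

The columns Laravel never touches (platform, expires_at, user_id) would be written by your own code, e.g. user_id at login and logout, rather than inside the handler.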

PHP: accessing images with UTF-8 encoded paths

I used Scrapy (a Python crawler framework) to crawl some images and download them. I store each image's online URL in a MySQL table, and after downloading I also store the local path in MySQL. Now there is one problem: when I try to display an image with PHP, the image can't be accessed; the page just shows a 404.
My MySQL table:
+-------------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+-------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| brand | varchar(45) | NO | | NULL | |
| series | varchar(45) | NO | | NULL | |
| commodity | varchar(45) | NO | | NULL | |
| designer | varchar(45) | NO | | NULL | |
| year | varchar(45) | NO | | NULL | |
| description | text | NO | | NULL | |
| technology | text | NO | | NULL | |
| url | text | NO | | NULL | |
| local_url | text | NO | | NULL | |
| name | text | NO | | NULL | |
| downloaded | tinyint(1) | NO | | 0 | |
| referer | varchar(45) | NO | | NULL | |
+-------------+-------------+------+-----+---------+----------------+
I use the url field to store the online image URL and the local_url field to store the local image path.
Some data in local_url looks like this:
agape/— Contenitori/EVO-N_NIVIS_ALTO.jpg?1415875406,agape/— Contenitori/EVO-N_CONTENITORE.jpg?1415875446,agape/— Contenitori/FLAT_XL_CONT_PUZZLE.jpg?1415875463,agape/— Contenitori/EVOLUZIONE_CONT_PUZZLE.jpg?1415875485
I use a comma to separate the image paths. The images are stored locally like this:
/var/www/images/agape/— Contenitori/xxx.jpg
The path contains spaces and other special characters. When I use PHP to get the path from the database and try to access the image, it cannot be accessed.
The URL it tries to access is as follows:
http://45.32.255.165/images/agape/%E2%80%94%20Contenitori/EVO-L_660_PART_CASSETTO_R.jpg?1362501749
I hope I have described my question clearly.
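No answer was recorded here, but as a minimal sketch of the usual approach: percent-encode each path segment (but not the slashes) with rawurlencode(), and verify that the bytes stored in MySQL really match the file name on disk. The $localUrl value below is one of the paths from the question:

<?php
// One value read from the local_url column.
$localUrl = 'agape/— Contenitori/EVO-N_NIVIS_ALTO.jpg?1415875406';

// Split off the cache-buster query string.
list($path, $query) = array_pad(explode('?', $localUrl, 2), 2, '');

// First check that PHP can see the file at all; if this prints false,
// the bytes in the database do not match the file name on disk.
var_dump(file_exists('/var/www/images/' . $path));

// Encode every path segment, but keep the slashes intact.
$encodedPath = implode('/', array_map('rawurlencode', explode('/', $path)));

echo 'http://45.32.255.165/images/' . $encodedPath
    . ($query !== '' ? '?' . $query : '');

If file_exists() returns false, the problem is not the URL encoding but a byte-level mismatch between the stored path and the file system (for example a different Unicode normalization form, or a non-UTF-8 file system locale).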

MySQL - new to setting up foreign keys to request the correct data

I'm having some trouble with foreign keys; I am new to MySQL, so please forgive me. I am not sure how to do what I want.
I have 4 tables:
Users
Tasks
Day
Week (includes Week number and date commencing).
What I want to do is reference each table to pull the correct data.
E.g., in pseudocode:
find user liam, select the day associated with liam, find the task associated with the selected day, find the week from the selected day.
This should then allow me to sift through the data.
Here are the tables:
+------------+-----------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+-----------------+------+-----+---------+----------------+
| user_id | int(1) unsigned | NO | PRI | NULL | auto_increment |
| username | varchar(255) | NO | UNI | NULL | |
| password | varchar(255) | NO | | NULL | |
| first_name | varchar(255) | NO | | NULL | |
| last_name | varchar(255) | NO | | NULL | |
| email | varchar(255) | NO | UNI | NULL | |
+------------+-----------------+------+-----+---------+----------------+
+------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+------------------+------+-----+---------+----------------+
| day_id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| day | varchar(255) | NO | | NULL | |
| user-id-fk | int(10) unsigned | NO | | NULL | |
+------------+------------------+------+-----+---------+----------------+
+-------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+------------------+------+-----+---------+----------------+
| am_id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| am_task | varchar(255) | NO | | NULL | |
| description | varchar(255) | NO | | NULL | |
| am_color | varchar(20) | YES | | NULL | |
+-------------+------------------+------+-----+---------+----------------+
+---------+-----------------+------+-----+------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+-----------------+------+-----+------------+----------------+
| week_id | int(2) unsigned | NO | MUL | NULL | auto_increment |
| week | varchar(255) | NO | | NULL | |
| date | date | NO | | 2014-12-29 | |
+---------+-----------------+------+-----+------------+----------------+
So to start off, I understand that I have to relate the first two tables; this is why I have created a field called 'user-id-fk'.
I have tried to create the foreign key inside of phpMyAdmin: http://tinypic.com/r/mkar20/8
But I am not sure whether it is working or whether I have created it properly.
As I said, I am new to MySQL, so even if I have created it properly, I am not sure how to test it.
I can add any more data if required.
Please note that foreign keys are only a way of ensuring solid data structure within SQL. They aren't technically necessary for querying your data across two or more tables, as long as the tables have columns in common to join on.
Foreign keys do make it easier to ensure you are choosing items from a controlled list, and they are a good idea because they enforce integrity in your database. The easiest way to test them is to set the keys up as RESTRICT, then try to add a record to the table with the foreign key constraint using a value that does not exist in the referenced table. It should produce an error.
For instance given one of your examples:
+------------+-----------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+-----------------+------+-----+---------+----------------+
| user_id | int(1) unsigned | NO | PRI | NULL | auto_increment |
| username | varchar(255) | NO | UNI | NULL | |
| password | varchar(255) | NO | | NULL | |
| first_name | varchar(255) | NO | | NULL | |
| last_name | varchar(255) | NO | | NULL | |
| email | varchar(255) | NO | UNI | NULL | |
+------------+-----------------+------+-----+---------+----------------+
+------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+------------------+------+-----+---------+----------------+
| day_id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| day | varchar(255) | NO | | NULL | |
| user-id-fk | int(10) unsigned | NO | | NULL | |
+------------+------------------+------+-----+---------+----------------+
If user-id-fk is a foreign key on that second table referencing user_id on the first table, and you set it up as RESTRICT, you will not be able to add any value to user-id-fk without first entering the same user_id in the first table.
Assume user_id in the first table only has the values 1, 2, and 3. If you try to enter a row with user-id-fk = 4 into the second table, it should error, because there is no matching key in the first table.
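As a minimal sketch (assuming the tables are named users and day; the user-id-fk column needs backticks because of the hyphens, and both tables must be InnoDB for the constraint to be enforced):

ALTER TABLE day
    ADD CONSTRAINT fk_day_user
    FOREIGN KEY (`user-id-fk`) REFERENCES users (user_id)
    ON DELETE RESTRICT ON UPDATE RESTRICT;

-- Suppose users currently holds user_id values 1, 2 and 3 only:
INSERT INTO day (day, `user-id-fk`) VALUES ('Monday', 4);
-- ERROR 1452 (23000): Cannot add or update a child row:
-- a foreign key constraint fails

The 1452 error confirms the constraint is working; inserting 'Monday' with `user-id-fk` = 1 would succeed.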
Note:
You have declared these integer columns with different display widths (int(1) vs int(10)). The number in parentheses is only a display width, not a storage size, so int(1) will not actually stop you after 10 users; still, for clarity and consistency I recommend declaring user_id the same way as user-id-fk, e.g. int(10) unsigned.

MySQL query optimization: would a multi-column index solve this slowness?

+----------------------------+------------------------------------------------------------------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------------------+------------------------------------------------------------------------------+------+-----+---------+----------------+
| type | enum('Website','Facebook','Twitter','Linkedin','Youtube','SeatGeek','Yahoo') | NO | MUL | NULL | |
| name | varchar(100) | YES | MUL | NULL | |
| processing_interface_id | bigint(20) | YES | MUL | NULL | |
| processing_interface_table | varchar(100) | YES | MUL | NULL | |
| create_time | datetime | YES | MUL | NULL | |
| run_time | datetime | YES | MUL | NULL | |
| completed_time | datetime | YES | MUL | NULL | |
| reserved | int(10) | YES | MUL | NULL | |
| params | text | YES | | NULL | |
| params_md5 | varchar(100) | YES | MUL | NULL | |
| priority | int(10) | YES | MUL | NULL | |
| id | bigint(20) unsigned | NO | PRI | NULL | auto_increment |
| status | varchar(40) | NO | MUL | none | |
+----------------------------+------------------------------------------------------------------------------+------+-----+---------+----------------+
select * from remote_request use index ( processing_order ) where remote_request.status = 'none' and type = 'Facebook' and reserved = '0' order by priority desc limit 0, 40;
This table receives an extremely large number of writes and reads. Each remote_request ends up being a process, which can spawn anywhere between 0 and 5 other remote_requests depending on the type of request and what the request does.
The table is currently sitting at about 3.5 million records, and it slows to a snail's pace when the site itself is under heavy load and I have 50 or more instances running simultaneously. (The table holds REST requests, in case that was not clear.)
As the table grows it just gets worse and worse. I can clear the processed requests out on a daily basis, but ultimately this does not fix the problem.
What I need is for this query to always have a very low response time.
Here are the current indexes on the table.
+----------------+------------+----------------------------------+--------------+----------------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+----------------+------------+----------------------------------+--------------+----------------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| remote_request | 0 | PRIMARY | 1 | id | A | 2403351 | NULL | NULL | | BTREE | | |
| remote_request | 1 | type_index | 1 | type | A | 18 | NULL | NULL | | BTREE | | |
| remote_request | 1 | processing_interface_id_index | 1 | processing_interface_id | A | 18 | NULL | NULL | YES | BTREE | | |
| remote_request | 1 | processing_interface_table_index | 1 | processing_interface_table | A | 18 | NULL | NULL | YES | BTREE | | |
| remote_request | 1 | create_time_index | 1 | create_time | A | 160223 | NULL | NULL | YES | BTREE | | |
| remote_request | 1 | run_time_index | 1 | run_time | A | 343335 | NULL | NULL | YES | BTREE | | |
| remote_request | 1 | completed_time_index | 1 | completed_time | A | 267039 | NULL | NULL | YES | BTREE | | |
| remote_request | 1 | reserved_index | 1 | reserved | A | 18 | NULL | NULL | YES | BTREE | | |
| remote_request | 1 | params_md5_index | 1 | params_md5 | A | 2403351 | NULL | NULL | YES | BTREE | | |
| remote_request | 1 | priority_index | 1 | priority | A | 716 | NULL | NULL | YES | BTREE | | |
| remote_request | 1 | status_index | 1 | status | A | 18 | NULL | NULL | | BTREE | | |
| remote_request | 1 | name_index | 1 | name | A | 18 | NULL | NULL | YES | BTREE | | |
| remote_request | 1 | processing_order | 1 | priority | A | 200 | NULL | NULL | YES | BTREE | | |
| remote_request | 1 | processing_order | 2 | status | A | 200 | NULL | NULL | | BTREE | | |
| remote_request | 1 | processing_order | 3 | type | A | 200 | NULL | NULL | | BTREE | | |
| remote_request | 1 | processing_order | 4 | reserved | A | 200 | NULL | NULL | YES | BTREE | | |
+----------------+------------+----------------------------------+--------------+----------------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
Any idea how I can solve this? Is it not possible to make some sort of composite index that would automatically order the rows by priority, then take the first 40 that match the 'Facebook' type? It currently scans more than 500k rows of the table before it returns a result, which is grossly inefficient.
Some other versions of the query that I have been tinkering with are:
select * from remote_request use index ( type_index,status_index,reserved_index,priority_index ) where remote_request.status = 'none' and type = 'Facebook' and reserved = '0' order by priority desc limit 0, 40
It would be amazing if we could get the rows scanned down to under 1,000 rows, depending on just how many types of requests enter the table.
Thanks in advance; this might be a real nutcracker for all but the most experienced MySQL experts.
Your four-column index has the right columns, but in the wrong order.
You want the index to first look up matching rows, which you do by three columns. You are looking up by three equality conditions, so you know that once the index finds the set of matching rows, the order of these rows is basically a tie with respect to those first three columns. So to resolve the tie, add as the fourth column the column by which you wanted to sort.
If you do that, then the ORDER BY becomes a no-op, because the query can just read the rows in the order they are stored in the index.
So I would create the following index:
CREATE INDEX processing_order2 ON remote_request
(status, type, reserved, priority);
There's probably not too much significance to the order of the first three columns, since they're all in equality terms combined with AND. But the priority column belongs at the end.
You may also like to read my presentation How to Design Indexes, Really.
By the way, USE INDEX() shouldn't be necessary; if you have the right index, MySQL's optimizer will choose it automatically most of the time. USE INDEX() can also block the optimizer from considering a new index that you create later, so it becomes a disadvantage for code maintenance.
This isn't a complete answer, but it was too long for a comment:
Are you actually searching on all of those indexes? If not, get rid of some. Extra indexes slow down writes.
Secondly, run EXPLAIN on your query, and don't specify an index when you do. See how MySQL wants to process it rather than forcing an option (generally it does the right thing).
Finally, the sorting is likely what hurts you the most. Without the sort it would probably fetch the records quickly; as it stands, it has to find and sort every row that meets your criteria before it can return the top 40.
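For example, a minimal sketch of that check:

EXPLAIN SELECT * FROM remote_request
WHERE status = 'none' AND type = 'Facebook' AND reserved = '0'
ORDER BY priority DESC
LIMIT 0, 40;

-- In the output, look at which index the "key" column picked, how large
-- the "rows" estimate is, and whether "Extra" contains "Using filesort";
-- a filesort over hundreds of thousands of matching rows is the
-- expensive part of this query.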
Options:
Try creating a VIEW (I'm not as familiar with VIEWs, but it might work)
Split this table into smaller tables
Use a third-party tool such as Sphinx or Lucene to create specialized indexes to search on. (I've used Sphinx for something like this before; you can find it at http://sphinxsearch.com/.)
Or look into a NoSQL solution where you can use a map function to do it.
Edit: I read a bit about using a VIEW, and I don't think it will help in your case because your table is so large. See the answer in this thread: Using MySQL views to increase performance
