Auto-creating tables or rows with populated data in MySQL - PHP

I'm looking to create a basic IP management solution using a web front end with PHP and MySQL. I am not too familiar with PHP or MySQL but I would like to know if it would be possible to do the following:
When a user inserts a new subnet range into the database (e.g. 192.168.1.0/24) using an HTML form, would it be possible for that to trigger the creation of another table (e.g. ip_addresses) that would be auto-populated with the addresses in that subnet range (e.g. 192.168.1.1, 1.2, 1.3 and so on)?
Apologies if I'm being too vague - thanks

Though it would be possible to do what you are asking, I would advise against it. In MySQL, it is common to store IP addresses as long integers, rather than as their string representation as a dotted quad. MySQL provides two functions, INET_ATON() and its counterpart INET_NTOA() to convert IP addresses into long integers.
The structure I would recommend would be to store the starting and ending integer representation of the subnets. That way, it becomes possible, if necessary, to query for addresses residing between the endpoints. It also allows you to store incomplete subnets.
From MySQL docs:
mysql> SELECT INET_ATON('10.0.5.9');
-> 167773449
mysql> SELECT INET_NTOA(167773449);
-> '10.0.5.9'
You can query with statements like:
SELECT * FROM ipRanges WHERE INET_ATON('192.168.1.99') BETWEEN startIpNum AND endIpNum;
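For illustration, a minimal sketch of that layout, assuming a table named ipRanges as in the query above (the id and cidr columns are placeholders I've added, not part of the original answer):
CREATE TABLE ipRanges (
    id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    cidr       VARCHAR(18),          -- original notation, e.g. '192.168.1.0/24'
    startIpNum INT UNSIGNED NOT NULL,
    endIpNum   INT UNSIGNED NOT NULL
);

-- Inserting 192.168.1.0/24 as its integer endpoints:
INSERT INTO ipRanges (cidr, startIpNum, endIpNum)
VALUES ('192.168.1.0/24', INET_ATON('192.168.1.0'), INET_ATON('192.168.1.255'));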

You could use triggers to do that.
CREATE
TRIGGER new_range AFTER INSERT
ON ip_range FOR EACH ROW BEGIN ... END
And put your logic for managing the specific IP addresses in the range inside the BEGIN ... END block, in place of the '...'.
Have a look here: http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html - for details of how this works.
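As a rough sketch of that approach (note that MySQL does not allow DDL such as CREATE TABLE inside a trigger, so this populates rows in an existing ip_addresses table rather than creating a new table per subnet; the table and column names here are assumptions, not from the question):
DELIMITER //
CREATE TRIGGER new_range AFTER INSERT ON ip_range
FOR EACH ROW
BEGIN
    DECLARE cur INT UNSIGNED;
    SET cur = NEW.start_ip;
    -- walk the range and insert one row per address
    WHILE cur <= NEW.end_ip DO
        INSERT INTO ip_addresses (range_id, ip) VALUES (NEW.id, cur);
        SET cur = cur + 1;
    END WHILE;
END//
DELIMITER ;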

Related

PHP if comparison vs MySQL Where (Which is more efficient)

My situation: My website will look at a cookie for a remember-me token and a user ID. If the cookie exists it will unhash it, look up the user ID, and compare the token with a "WHERE userid = '' AND rememberme = ''".
My question is: Will MySQL optimize this query on the unique userid so that the query does not scan the entire database for this 20+ character token? Or should I instead just select the token from the database and then use a PHP if comparison to check whether the tokens are the same?
In short (tl;dr): Would it be better to check if a token matches with a MySQL SELECT query, or to grab all the tokens from the database and compare the values with a PHP if conditional?
Thanks!
Simple answer:
YES, the database will definitely optimize your search AS LONG AS the column you are searching on in the WHERE ... portion is indexed! You definitely should not retrieve all the information via SQL and then do a PHP conditional if you are worried about performance.
So if the id column in your table is not indexed, you should index it. If you have, let's say, 1 million rows already in your table and run a command like SELECT * FROM user WHERE id = 994321, you will see a definite increase in performance.
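For the remember-me lookup in the question, a sketch of an index that makes the WHERE clause cheap (assuming a users table with userid and rememberme columns, as described in the question):
-- A composite index lets MySQL jump straight to the matching row
-- instead of scanning the whole table:
ALTER TABLE users ADD INDEX idx_userid_rememberme (userid, rememberme);

-- The lookup then resolves both conditions through the index:
SELECT userid FROM users
WHERE userid = 994321 AND rememberme = 'the-hashed-token';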
Elaborating:
A database (like MySQL) is made to be much faster at executing queries/commands than you could expect from PHP, for instance. In your specific situation, let's say you are executing this SQL statement:
$sql = "SELECT * FROM users WHERE id = 4";
If you have 1 million users, and the id column is not indexed, MySQL will look through all 1 million users to find all the rows with id = 4. However, if it is indexed, MySQL builds something called a B-tree (behind the scenes), which works similarly to how the indexing of a dictionary works.
If you try to find the word "slowly" in a dictionary, you might open the book in the middle, find words that start with the letter M, and then look in the middle again of the pages on your right side, hoping to find a letter closer to S. This method of looking for a word is much faster than looking at every single page from the beginning, one by one.
For that very reason, MySQL has created indexes to help performance and this feature should definitely be taken advantage of to help increase the speed of your queries.
Comparing it on MySQL-side should be fast. It should find the corresponding row by ID first (fast) and then compare the hash (also fast, since there will be only 1 row to check).
Try analyzing the query with EXPLAIN to find out the actual execution plan.
In my opinion it will always be faster to use a WHERE clause, no matter which (real) database server is used. Database engines have strong search algorithms written in languages that compile to low-level, platform-specific code, so they can't even be compared with a loop written in interpreted PHP.
And remember that for a PHP loop you would have to send all the records from the DB to PHP.
If your database is on a separate server from your Apache/PHP server, there is no doubt it will be faster to write the query in MySQL.
If your PHP and MySQL are on the same physical server, PHP might be faster because the comparison is made in RAM, but holding the whole array of user IDs in RAM would be a waste of memory, so instead use indexes to speed up your query:
ALTER TABLE table ADD INDEX idx__tableName__fieldName (field)

Database/datasource optimized for string matching?

I want to store large amount (~thousands) of strings and be able to perform matches using wildcards.
For example, here is a sample content:
Folder1
Folder1/Folder2
Folder1/*
Folder1/Folder2/Folder3
Folder2/Folder*
*/Folder4
*/Fo*4
(each line has additional data too, like tags, but the matching is only against that key)
Here is an example of what I would like to match against the data:
Folder1
Folder1/Folder2/Folder3
Folder3
(* being a wildcard here, it can be a different character)
I naively considered storing it in a MySQL table and using % wildcards with the LIKE operator, but MySQL indexes will only work for characters on the left of the wildcard, and in my case it can be anywhere (i.e. %/Folder3).
So I'm looking for a fast solution, that could be used from PHP. And I am open: it can be a separate server, a PHP library using files with regex, ...
Have you considered using MySQL's regular expression engine? Try something like this:
SELECT *
FROM your_table
WHERE your_query_string REGEXP pattern_column
This will return rows with regex keys that your query string matches. I expect it will perform better than running a query to pull all of the data and doing the matching in PHP.
More info here: http://dev.mysql.com/doc/refman/5.1/en/regexp.html
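Since the wildcards live in the stored keys, a hedged sketch of how the query string could be matched against them (using the answer's your_table and pattern_column names; real data would also need any regex metacharacters escaped before the REPLACE):
-- Turn each stored pattern like 'Folder2/Folder*' into an anchored regex
-- ('^Folder2/Folder.*$') on the fly and match the search string against it.
SELECT *
FROM your_table
WHERE 'Folder1/Folder2/Folder3'
      REGEXP CONCAT('^', REPLACE(pattern_column, '*', '.*'), '$');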
You might want to use a multicore approach to solve that search in a fraction of the time. For search and matching I would recommend FPGAs, but that's probably the hardest way to do it; consider THIS ARTICLE using CUDA, which can do such searches about 16x faster than usual. On multicore CPU systems you can use POSIX threads or a cluster of computers to do the job (MPI, for example), or you can call a Gearman service to run the searches using advanced algorithms.
Were it me, I'd store the key field twice ... once forward and once reversed (see MySQL's REVERSE function). You can then search the index with LEFT(main_field, ...) and LEFT(reversed_field, ...). It won't help you when you have a wildcard in the middle of the string AND at the beginning (e.g. "*Folder1*Folder2"), but it will when you have a wildcard at the beginning or the end.
e.g. if you want to search */Folder1 then search where left(reverse_field, 8) = '1redloF/';
for Folder1/*/FolderX search where left(reverse_field, 8) = 'XredloF/' and left(main_field, 8) = 'Folder1/'
If your strings represent some kind of hierarchical structure (as they appear to in your sample content) - not "real" files, but you say you are open to alternative solutions - why not consider something like a file-based index?
Choose a new directory like myindex
Create an empty file for each entry using the string key as location & file name in myindex
Now you can find matches using glob - thanks to the hierarchical file structure, a glob search should be much faster than searching through all your database entries.
If needed you can match the results to your MySQL data - thanks to your MySQL index on the key this action will be very fast.
But don't forget to update the myindex structure on INSERT, UPDATE or DELETE in your MySQL database.
This solution will only compete on a huge data set (but not too huge, as @Kyle mentioned) with a hierarchical structure that is deep rather than wide.
EDIT
Sorry, this would only work if the wildcards are in your search terms, not in the stored strings themselves.
As the wildcards (*) are in your data and not in your queries I think you should start with breaking up your data into pieces. You should create an index-table having columns like:
dataGroup INT(11),
exactString varchar(100),
wildcardEnd varchar(100),
wildcardStart varchar(100),
If you have a value like "Folder1/Folder2" store it in "exactString" and assign the ID of the value in the main data table to "dataGroup" in the above index table.
If you have a value like "Folder1/*" store a value of "Folder1/" to "wildcardEnd" and again assign the id of the value in the main table to the "dataGroup" field in above Table.
You can then do a match within your query using:
indexTable.wildcardEnd = LEFT('Folder1/WhatAmILookingFor/Data', LENGTH(indexTable.wildcardEnd))
This will truncate the search string ('Folder1/WhatAmILookingFor/Data') to "Folder1/" and then match it against the wildcardEnd field. I assume mysql is clever enough not to do the truncate for every row but to start with the first character and match it against every row (using B-Tree indexes).
A value like "*/Folder4" will go into the field "wildcardStart" but reversed. To cite Missy Elliot: "Is it worth it, let me work it
I put my thing down, flip it and reverse it" (http://www.youtube.com/watch?v=Ke1MoSkanS4). So store a value of "4redloF/" in "wildcardStart". Then a WHERE like the following will match rows:
indexTable.wildcardStart = LEFT(REVERSE('Folder1/WhatAmILookingFor/Folder4'), LENGTH(indexTable.wildcardStart))
of course you could do the "REVERSE" already in your application logic.
Now about the tricky part. Something like "*/Fo*4" should get split up into two records:
# Record 1
dataGroup ==> id of "*/Fo*4" in data table
wildcardStart ==> oF/
wildcardEnd ==> /Fo
# Record 2
dataGroup ==> id of "*/Fo*4" in data table
wildcardStart ==> 4
Now if you match something you have to take care that every index-record of a dataGroup gets returned for a complete match and that no overlapping occurs. This could also get solved in SQL but is beyond this question.
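A minimal sketch of the index table described above, plus the prefix match, using the column names from this answer (the table name and column sizes are assumptions):
CREATE TABLE pattern_index (
    dataGroup     INT(11),
    exactString   VARCHAR(100),
    wildcardEnd   VARCHAR(100),
    wildcardStart VARCHAR(100),
    INDEX (exactString),
    INDEX (wildcardEnd),
    INDEX (wildcardStart)
);

-- Match the search string against the "starts with" patterns:
SELECT dataGroup
FROM pattern_index
WHERE wildcardEnd = LEFT('Folder1/WhatAmILookingFor/Data', LENGTH(wildcardEnd));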
A database isn't the right tool for these kinds of searches. You can still use a database (any database and any structure) to store the strings, but you have to write the code to do all the searches in memory. Load all the strings from the database (a few thousand strings is really no biggie), cache them and run your search/match algorithm on them.
You will probably have to code the algorithm yourself, because the standard tools will be overkill for what you are trying to achieve and there is no guarantee that they will be able to achieve exactly what you need.
I would build a regex representation of your wildcard-based strings and run those regexes on your input. You will probably have to do some work until you get the regexes right, but it will be the fastest way to go.
I suggest reading the keys and their associated payload into a binary tree representation ordered alphanumerically by key. If your keys are not terribly "clumped" then you can avoid the (slight additional) overhead of building a balanced tree. You can also avoid any tree maintenance code since, if I understand your problem correctly, the data will be changing frequently and it would be simplest to rebuild the tree rather than add/remove/update nodes in place. The overhead of reading into the tree is similar to performing an initial sort, and tree traversal to search for your value is straightforward and much more efficient than just running a regex against a bunch of strings. You may even find while working it through that your wildcards in the tree will lead to some shortcuts to prune the search space. A quick search shows lots of resources and PHP snippets to get you started.
If you run SELECT folder_col, count(*) FROM your_sample_table group by folder_col do you get duplicate folder_col values (ie count(*) greater than 1)?
If not, that means you can produce an SQL that would generate a valid sphinx index (see http://sphinxsearch.com/).
I wouldn't recommend doing text search on a large collection of data in MySQL. You need a database to store the data, but that would be it. For searching, use a search engine like:
Solr (http://lucene.apache.org/solr/)
Elastic Search (http://www.elasticsearch.org/)
Sphinx (http://sphinxsearch.com/)
Those services will let you do all sorts of funky text searches (including wildcards) in the blink of an eye ;-)

mysql - select email from xyz where email="%gmail.com"

Is there a way I can select from the database only the entries with certain data? I have a lot of email addresses in the database, but I want to select only those from one domain. Is it even possible?
Sure - just use the LIKE operator.
SELECT email FROM Persons
WHERE email LIKE '%gmail.com'
It is not advisable to do a wildcard search here, because MySQL is not able to use an index to speed up the SELECT query - especially since you mention you have lots of email addresses in the database.
Alternatively, you can use an additional field, such as hostname, to store just the domain part, and of course build an index on it.
If you need to search for gmail.com addresses, you can then do a straight string comparison:
SELECT email FROM Persons
WHERE hostname='gmail.com';
As a straight string comparison works well with a MySQL index, your query will be optimized.
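A sketch of how that hostname column could be added and back-filled (SUBSTRING_INDEX with -1 keeps everything after the last '@'; the column length is a guess):
ALTER TABLE Persons
    ADD COLUMN hostname VARCHAR(255),
    ADD INDEX idx_hostname (hostname);

-- Back-fill the existing rows:
UPDATE Persons SET hostname = SUBSTRING_INDEX(email, '@', -1);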
As ajreal points out, MySQL can't use indexes to optimise a LIKE query in the general case. However in the specific case of a trailing wildcard where the only % is at the very end of the pattern (effectively a "starts with" query), the optimiser can do a good job of speeding up the query using an index.
Therefore, if you were to add an additional indexed column storing the email address in reverse, you could efficiently query for
SELECT email FROM xyz WHERE reverse_email LIKE 'moc.liamg%'
to find all gmail addresses, or LIKE 'ku.%' for all addresses under uk domains, etc. You can have the database keep this column up to date for you using triggers, so it doesn't affect your existing update code:
CREATE TRIGGER emailinsert BEFORE INSERT ON xyz
FOR EACH ROW SET NEW.reverse_email = REVERSE(NEW.email);
CREATE TRIGGER emailupdate BEFORE UPDATE ON xyz
FOR EACH ROW SET NEW.reverse_email = REVERSE(NEW.email);
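The triggers above assume the reverse_email column already exists; a sketch of adding and back-filling it (the length is a guess):
ALTER TABLE xyz
    ADD COLUMN reverse_email VARCHAR(255),
    ADD INDEX idx_reverse_email (reverse_email);

UPDATE xyz SET reverse_email = REVERSE(email);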
You need to use the MySQL LIKE clause:
SELECT * FROM email_table WHERE email LIKE "%gmail.com"

Database Definition for Sphinx Search

Background
I am creating a MySQL database to store items such as courses where there may be many attributes to a single course. For example:
A single course may have any or all of the following attributes:
Title (varchar)
Secondary Title (varchar)
Description (text)
Date
Time
Specific Location (varchar; eg. White Hall Room 7)
General Location (varchar; eg. Las Vegas, NV)
Location Coords (floats; eg. lat, long)
etc.
The database is set up as follows:
A table storing specific course info:
courses table:
Course_ID (a Primary Key unique ID for each course)
Creator_ID (a unique ID for the creator)
Creation_Date (datetime of course creation)
Modified_Date (where this is the most recent timestamp the course was modified)
The table storing each courses multiple attributes is set up as follows:
course_attributes table:
Attribute_ID (a unique ID for each attribute)
Course_ID (reference to the specific course attribute is for)
Attribute (varchar defining the attribute; eg. 'title')
Value (text containing value of specified attribute; eg. 'Title Of My Course')
Desire
I would like to search this database using sphinx search. With this search, I have different fields weighing different amounts, for example: 'title' would be more important than 'description'.
Specific search fields that I wish to have are:
Title
Date
Location (string)
Location (geo - lat/long)
The Question
Should I define a View in MySQL to organize the attributes according to 'title', 'description', etc., or is there a way to define my sphinx.conf file to understand specific attributes?
I am open to all suggestions to solving this problem, whether it be rearrangement of the database/tables or the way in which I search.
Let me know if you need any additional details to help me find a solution.
Thanks in advance for the help
Update
OK, so after reading some of the answers, I feel that I should provide some additional information.
Latitude / Longitude
The latitude/longitude attributes are created by me internally after receiving the general location string. I can generate the values in any way I wish, meaning that I can store them together in a single lat/long attribute as 'float lat, float long' values or any other desired format. This is done only after they have been generated from the initial location string and verified. This is to guard against malformed data as @X-Zero and @Cody have suggested.
Keep in mind that the latitude and longitude was merely illustrating the need to have that field be searchable as opposed to anything more than that. It is simply another attribute; one of many.
Weighting Search Results
I know how to add weights to results in a Sphinx search query:
$cl->setFieldWeights( array('title'=>1000, 'description'=>500) );
This causes the title column to have a higher weight than the description column if the structure was as @X-Zero suggested. My question was more directed to how one would apply the above logic with the current table definition.
Database Structure, Views, and Efficiency
Using my introductory knowledge of Views, I was thinking that I could possibly create something that displays a row for each course where each attribute is its own column. I don't know how to accomplish this or if it's even possible.
I am not the most confident with database structures, but the reason I set my tables up as described was because there are many cases where not all of the fields will be completed for every course and I was attempting to be efficient [yes, it seems as though I've failed].
I was thinking that using my current structure, each attribute would contain a value and would therefore cause no wasted space in the table. Alternatively, if I had a table with tons of potential attributes, I would think there would be wasted space. If I am incorrect, I am happy to learn why my understanding is wrong.
Let me preface this by saying that I've never even heard of Sphinx, nor (obviously) used it. However, from a database perspective...
Doing multi-domain columns like this is a terrible (I will hunt you down and kill you) idea. For one thing, it's impossible to index or sort meaningfully, period. You also have to pray that you don't get a latitude attribute with textual data (and because this can only be enforced programmatically, I'm going to guarantee it will happen) - doing so will cause all distance-based formulas to crash. And speaking of location, what happens if somebody stores a latitude without a longitude (note that this is possible regardless of whether you are storing a single GeoLocation attribute, or the pair)?
Your best bet is to do the following:
Figure out which attributes will always be required. These belong in the course table (...mostly).
For each related set of optional attributes, create a table. For example, location (although this should probably be required...), which would contain Latitude/Longitude, City, State, Address, Room, etc. Allow the columns to be nullable (in sets - add constraints so users can't add just longitude and not latitude).
For every set of common queries add a view. Even (perhaps especially) if you persist in using your current design, use a view. This promotes separation between the logical and physical implementations of the database. (This assumes searching by SQL.) You will then be able to search by specifying view_column IS NULL or view_column = input_parameter or whichever.
For weighted searching (assuming dynamic weighting) your query will need to use left joins (inside the view as well - please document this), and use prepared-statement host-parameters (just save yourself the trouble of trying to escape things yourself). Check each set of parameters (both lat and long, for example), and assign the input weighting to a new column (per attribute), which can be summed up into a 'total' column (which must be over some threshold).
EDIT:
Using views:
For your structure, what you would normally do is left join to the attributes table multiple times (one for each attribute needed), keying off of the attribute (which should really be an int FK to a table; you don't want both 'title' and 'Title' in there) and joining on course_id - the value would be included as part of the select. Using this technique, it would be simple to then get the list of columns, which you can then apparently weight in Sphinx.
The problem with this is if you need to do any data conversion - you are betting that you'll be able to find all conversions if the type ever changes. When using strongly typed columns, this is somewhere between trivial (the likelihood is that you end up with a uniquely named column) and unnecessary (views usually take their datatype definitions from the fields in the query); with your architecture, you'll likely end up looking through too many false positives.
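A hedged sketch of such a view over the attributes table as currently defined (the view name, the aliases and the 'general_location' attribute value are made up; each attribute needs its own left join):
CREATE VIEW course_flat AS
SELECT c.Course_ID,
       t.Value  AS title,
       d.Value  AS description,
       gl.Value AS general_location
FROM courses c
LEFT JOIN course_attributes t  ON t.Course_ID  = c.Course_ID AND t.Attribute  = 'title'
LEFT JOIN course_attributes d  ON d.Course_ID  = c.Course_ID AND d.Attribute  = 'description'
LEFT JOIN course_attributes gl ON gl.Course_ID = c.Course_ID AND gl.Attribute = 'general_location';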
Database efficiency:
You're right, unfilled columns are wasted space. Usually, when something is optional(ish), that means you may need an additional table. Which is why I suggested splitting location off into its own table: this prevents events which don't need a location (... what?) from 'wasting' the space, but then forces any event that defines a location to specify all required information. There's an additional benefit to splitting it off this way: if multiple events all use the same location (... not at the same time, we hope), a cross-reference table will save you a lot of space. Way more than your attributes table ever could (you're still having to store the complete location per event, after all). If you still have a lot of 'optional' attributes, I hear that NoSQL is made for these kinds of things (but I haven't really looked into it). However, other than that, the cost of an additional table is trivial; the cost of the data inside may not be, but the space required is weighed against the perceived value of the data stored. Remember that disk space is relatively cheap - it's developer/maintainer time that is expensive.
Side note for addresses:
You are probably going to want to create an address table. This would be completely divorced from the event information, and would include (among other things) the precomputed latitude/longitude (in the recommended datatype - I don't know what it is, but it's for sure not a comma-separated string). You would then have an event_address table that would be the cross-reference between the events and where they take place - if there is additional information (such as room), that should be kept in a location table that is referenced (instead of referencing address directly). Once a lat/long value is computed, you should never need to change it.
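A rough sketch of what that separation might look like (all names and types here are illustrative, and the room column is folded into the cross-reference table only to keep the sketch short - the answer suggests a separate location table for such details):
CREATE TABLE address (
    address_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    street     VARCHAR(255),
    city       VARCHAR(100),
    state      VARCHAR(100),
    latitude   DECIMAL(9,6),   -- computed once, at insertion time
    longitude  DECIMAL(9,6)
);

CREATE TABLE event_address (
    course_id  INT UNSIGNED NOT NULL,
    address_id INT UNSIGNED NOT NULL,
    room       VARCHAR(50),
    PRIMARY KEY (course_id, address_id)
);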
Thoughts on later updates for lat/long:
While specifying the lat/long values yourself is better, you're going to want to make them a required part of the address table (or part of/in addition to a purely lat/long only table). Frankly, multi-value columns (delimited lists) of any sort are just begging for trouble - you keep having to parse them every time you search on them (among other related issues). And the moment you make them separate rows, one of the pair will eventually get dropped - Murphy himself will personally intervene, if necessary. Additionally, updating them at different times from the addresses will result in an address having a lat/long pair that does not match; your best bet is to compute this at insertion time (there are a number of webservices to find this information for you).
Multi-domain tables:
With a multi-domain table, you're basically betting that the domain key (attribute) will never become out-of-sync with the value (err, value). I don't care how good you are, somewhere, somehow, it's going to happen: at my company, we had one of these in our legacy application (it stored FK links and which files the FKs refer to, along with an attribute). At one point an application was installed in production which promptly began storing the correct file links, but the FK links to a different file, for a given class of attribute. Thankfully, there were audit records in another file which allowed this to be reversed (... as near as they were able to tell).
In summary:
Revisit your required/optional data. Don't be afraid to create additional tables, each for a single entity, with every column for a single domain; you will also need relationship tables. You may also wish to place your audit data (last_updated_time) in a set of separate tables (single-domain tables will help immensely in this regard).
In the sphinx config you define your index and the SQL queries that populate it. You can define basic attributes, see Sphinx Attributes
Sphinx also supports geo searches on lat/long, but they need to be expressed in radians - definitely not text columns like you have. I agree with X-Zero that storing lat/lng values as strings is a bad idea.
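If the lat/long end up as proper numeric columns, the conversion to radians can happen in the SQL that feeds the Sphinx index, roughly like this (the table and column names are assumptions):
-- Candidate sql_query for the Sphinx source: Sphinx expects lat/long in radians.
SELECT Course_ID,
       RADIANS(latitude)  AS latitude_rad,
       RADIANS(longitude) AS longitude_rad
FROM course_locations;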

Optimizing SQL query

I have to get all entries in database that have a publish_date between two dates. All dates are stored as integers because dates are in UNIX TIMESTAMP format...
The following query works perfectly but it takes "too long". It returns all entries made between 10 and 20 days ago.
SELECT * FROM tbl_post WHERE published < (UNIX_TIMESTAMP(NOW())-864000)
AND published> (UNIX_TIMESTAMP(NOW())-1728000)
Is there any way to optimize this query? If I am not mistaken, it is calling NOW() and UNIX_TIMESTAMP() for every entry. I thought that saving the result of these 2 repeated functions into MySQL @vars would make the comparison much faster, but it didn't. The 2nd code I ran was:
SET @TenDaysAgo = UNIX_TIMESTAMP(NOW())-864000;
SET @TwentyDaysAgo = UNIX_TIMESTAMP(NOW())-1728000;
SELECT * FROM tbl_post WHERE fecha_publicado < @TenDaysAgo
AND fecha_publicado > @TwentyDaysAgo;
Another confusing thing was that PHP can't run the above query through mysql_query(); ?!
Please, if you have any comments on this problem it will be more than welcome :)
Luka
Be sure to have an index on published. And make sure it is being used.
EXPLAIN SELECT * FROM tbl_post WHERE published < (UNIX_TIMESTAMP(NOW())-864000) AND published> (UNIX_TIMESTAMP(NOW())-1728000)
should be a good start to see what's going on on the query. To add an index:
ALTER TABLE tbl_post ADD INDEX (published)
PHP's mysql_query function (assuming that's what you're using) can only accept one query per string, so it can't execute the three queries that you have in your second query.
I'd suggest moving that stuff into a stored procedure and calling that from PHP instead.
As for the optimization, setting those variables is about as optimized as you're going to get for your query. You need to make the comparison for every row, and setting a variable provides the quickest access time to the lower and upper bounds.
One improvement in the indexing of the table, rather than the query itself would be to cluster the index around fecha_publicado to allow MySQL to intelligently handle the query for that range of values. You could do this easily by setting fecha_publicado as PRIMARY KEY of the table.
The obvious things to check are, is there an index on the published date, and is it being used?
The way to optimize would be to partition the table tbl_post on the published key according to date ranges (weekly seems appropriate to your query). This is a feature that is available for MySQL, PostgreSQL, Oracle, Greenplum, and so on.
This will allow the query optimizer to restrict the query to a much narrower dataset.
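A hedged sketch of range partitioning on the integer published column (the answer suggests weekly ranges; a coarser quarterly split with arbitrary UNIX-timestamp boundaries is shown here only to keep the example short, and MySQL's partitioning rules around primary/unique keys may require adjusting the table definition first):
ALTER TABLE tbl_post
PARTITION BY RANGE (published) (
    PARTITION p2011_q1 VALUES LESS THAN (1301616000),  -- 2011-04-01
    PARTITION p2011_q2 VALUES LESS THAN (1309478400),  -- 2011-07-01
    PARTITION p2011_q3 VALUES LESS THAN (1317427200),  -- 2011-10-01
    PARTITION pmax     VALUES LESS THAN MAXVALUE
);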
I agree with BraedenP that a stored procedure would be appropriate here. If you can't use one or really don't want to, you can always generate the dates on the PHP side, but they might not match the database exactly unless you have the clocks synced.
You can also likely do it more quickly as 3 separate queries: query for the begin date, query for the end date, then use those values as input to your target query.
