I have a shapefile, and I want to show it on the web using Leaflet (http://leaflet.cloudmade.com/). Since Leaflet only supports GeoJSON, I need to convert the shp file into GeoJSON. That part is easy, since I can use the "Save As" capability in Quantum GIS.
Although I could use the GeoJSON file itself as a database (by reading, editing, and writing it programmatically), I think it is better to use a "real" database. MySQL is the most popular one, and it supports spatial data, so I decided to use MySQL.
The scenario is:
Convert the shp file into MySQL (I use ogr2ogr and simply run this command: ogr2ogr -f "MySQL" MySQL:"geo,user=root,host=localhost,password=toor" -lco engine=MYISAM airports.shp)
Fetch the MySQL data as GeoJSON <-- here is the problem
Use AJAX to get the GeoJSON and change the layout <-- this should be easy; I'm good with jQuery
There is a column in my MySQL table whose type is GEOMETRY; see the table definition below:
CREATE TABLE IF NOT EXISTS `airports` (
`OGR_FID` int(11) NOT NULL AUTO_INCREMENT,
`SHAPE` geometry NOT NULL,
`cat` decimal(10,0) DEFAULT NULL,
`na3` varchar(80) DEFAULT NULL,
`elev` double(32,3) DEFAULT NULL,
`f_code` varchar(80) DEFAULT NULL,
`iko` varchar(80) DEFAULT NULL,
`name` varchar(80) DEFAULT NULL,
`use` varchar(80) DEFAULT NULL,
UNIQUE KEY `OGR_FID` (`OGR_FID`),
SPATIAL KEY `SHAPE` (`SHAPE`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=77 ;
Is there any way to convert such a table into GeoJSON format?
(I prefer the easy way, but if there isn't one, converting the column into something array-like is acceptable.)
EDIT:
I use geoPHP, written by phayes:
https://github.com/phayes/geoPHP/wiki/Example-format-converter
This solves the main problem; I only need to fiddle a bit with adding the features, etc.
Is there an easier solution?
While there may not be a direct method to convert from a MySQL spatial entity to GeoJSON, you can try the following:
get the WKT (Well Known Text) of the entity (MySQL Reference);
convert from WKT to GeoJSON (done here in Perl, although you should be able to find it in other languages or write your own in JavaScript).
Note that just calling json_encode() on the entity, as others have suggested, will not yield GeoJSON.
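For illustration, here is a minimal PHP sketch of that two-step pipeline against the `airports` table from the question, using geoPHP (referenced in the question's edit) for the WKT-to-GeoJSON step. The connection details come from the ogr2ogr command above; the rest is illustrative rather than a definitive implementation:

<?php
// Sketch: MySQL geometry -> WKT -> GeoJSON FeatureCollection.
// Requires the geoPHP library (https://github.com/phayes/geoPHP).
include_once('geoPHP.inc');

$db = new mysqli('localhost', 'root', 'toor', 'geo');

// AsText() returns the WKT of a geometry column (ST_AsText() in newer MySQL).
$result = $db->query("SELECT name, AsText(SHAPE) AS wkt FROM airports");

$features = array();
while ($row = $result->fetch_assoc()) {
    $geom = geoPHP::load($row['wkt'], 'wkt');            // parse the WKT
    $features[] = array(
        'type'       => 'Feature',
        'geometry'   => json_decode($geom->out('json')), // GeoJSON geometry
        'properties' => array('name' => $row['name']),
    );
}

echo json_encode(array('type' => 'FeatureCollection', 'features' => $features));

The output is a FeatureCollection that Leaflet can consume directly through its GeoJSON layer.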
My personal suggestion, which does not directly answer your question, would be to store the data in the format in which you need to retrieve it. That will reduce the overhead of processing the data every time you need it.
The easiest way to do this is to store the GeoJSON as plain text, as you suggested. If, for whatever reason, you also need the geometry stored in native format, you can store it in another column. The only downside is keeping the two columns in sync.
Related
So I'm not sure exactly how to title this question. I am creating a database and need to store some default SEO info as follows.
default page title
default keywords
default page description
header code
footer code
There will never be more than one entry per field. So the question is: do I create a table in the database with a column for each of these pieces of data, with the understanding that there will only ever be one row of data?
OR do I create a table that has a name column identifying each field and a column for the data (text)? With this option I can see that I won't be able to set the data type for each field; instead, each would have to be tinytext or varchar.
Here are the 2 database table structures I'm contemplating.
CREATE TABLE `cms_seo` (
`id` int(2) NOT NULL,
`name` VARCHAR(100) NOT NULL,
`data` tinytext NOT NULL,
PRIMARY KEY (`id`)
)
INSERT INTO `cms_seo`
(`id`, `name`, `data`)
VALUES
(1, 'Website Keywords', ''),
(2, 'Default Page Title', ''),
(3, 'Default Page Description', ''),
(4, 'Header Code', ''),
(5, 'Footer Code', '');
OR
CREATE TABLE `cms_seo`(
`id` INT(1) NOT NULL AUTO_INCREMENT,
`default_page_title` VARCHAR(500) NOT NULL,
`default_keywords` VARCHAR(1000) NOT NULL,
`default_page_description` TINYTEXT NOT NULL,
`header_code` TINYTEXT NOT NULL,
`footer_code` TINYTEXT NOT NULL,
PRIMARY KEY (`id`)
)
INSERT INTO `cms_seo`
(`id`,
`default_page_title`,
`default_keywords`,
`default_page_description`,
`header_code`,
`footer_code`)
VALUES
(NULL, '', '', '', '', '');
Would there be any alternative to storing this data, such as in a text file? The data will need to be editable through the CMS.
It's a common pattern to store the type of data you describe in a "key/value" format like your design #1. Some advantages include:
If your application needs a default for some new property, you can add it simply by INSERTing a new row.
There's a practical limit to the width of a row in MySQL. But you can add as many rows as you want.
If the list of defaults gets long, you can query for just one row. If you have an index, looking up a single row is more efficient.
Advantages of design #2:
You can use MySQL data types to constrain the length or format of each column individually. Design #1 requires you to use a data type that can store any possible value.
You store the property names as metadata, instead of as strings. Using metadata for property names is more "correct" relational database design.
I have posted many times in the past discouraging people from using the "key/value" design for data. But it's a legitimate use of that design when you just have one set of values, like the defaults in your case.
Another option, as you have mentioned, would be to store the data in a file instead of a database. See http://php.net/manual/en/function.parse-ini-file.php
Another option is to store the default values in a PHP file: just declare an associative array of them. One advantage of this technique is that a PHP file is converted to bytecode by PHP and then cached.
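A minimal sketch of that approach (the file name and keys are illustrative):

<?php
// config.php -- defaults stored as a plain PHP array. The CMS could edit
// these by regenerating this file, e.g. with var_export().
return array(
    'default_page_title'       => '',
    'default_keywords'         => '',
    'default_page_description' => '',
    'header_code'              => '',
    'footer_code'              => '',
);

// Elsewhere in the application:
// $seo = include 'config.php';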
But since you say you have to be able to edit values through your application, you might find it easier to store it in a database.
This answer is really something in between a glorified comment and a full answer. I prefer option #2, because at some point in the future you might need more than just one placeholder record. In addition, if you go with the second option you can make use of MySQL's relational capabilities, such as joining by column name.
There's nothing wrong with a table with only one row. (Relationally, its only candidate key is {}, but SQL doesn't let you express that directly.)
Relationally, i.e. if you want to ask arbitrary questions about individual keywords or collections of keywords, you should store, query & manipulate this "row" as two tables:
CREATE TABLE `cms_seo`(
`id` INT(1) NOT NULL AUTO_INCREMENT,
`default_page_title` VARCHAR(500) NOT NULL,
`default_page_description` TINYTEXT NOT NULL,
`header_code` TINYTEXT NOT NULL,
`footer_code` TINYTEXT NOT NULL,
PRIMARY KEY (`id`)
)
CREATE TABLE `cms_seo_keyword`(
`seo_id` INT(1) NOT NULL,
`default_keywords` VARCHAR(1000) NOT NULL,
PRIMARY KEY (`seo_id`, `default_keywords`),
FOREIGN KEY (`seo_id`) REFERENCES `cms_seo` (`id`)
)
You can declare a view for cms_seo in terms of these. Ideally you would program as much as possible using this database.
PS Design 1 is an EAV design. Research EAV's problems. Essentially, it means you are using a DBMS to implement & use a (bug-filled, feature-poor) program whose desired functionality is... a DBMS. You should only use such a design if you can demonstrate that a straightforward relational design using DML & DDL gives insufficient performance while an EAV design does. (And that includes the present value/expense of EAV's disadvantages.)
I'm using PDO and trying to make my application support both MySQL and SQLite, but in SQLite I get this error when I try to import my database schema:
SQLSTATE[HY000]: General error: 1 near "AUTO_INCREMENT": syntax error
The query looks like this:
CREATE TABLE events (
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL,
title VARCHAR(64) NOT NULL,
description LONGTEXT,
starttime DATETIME DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY(id),
KEY name(name)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
(and it works in a MySQL database.)
I don't understand what the problem is here. Shouldn't both database systems be compatible?
http://www.sqlite.org/autoinc.html
In SQLite it's called AUTOINCREMENT, not AUTO_INCREMENT
They should be compatible as regards the ANSI SQL standard, which all SQL databases should adhere to. However, auto-increment is not part of that standard, but an extra feature implemented by some databases (including MySQL). Not all databases provide that feature, and those that do may provide it in a different manner or with different syntax.
AUTO_INCREMENT is MySQL-specific. SQLite apparently has a similar thing, AUTOINCREMENT.
Unfortunately, although SQL is supposed to be a standard, each database implementation is different and has its own peculiarities, so you have to adapt your query to make it work on SQLite.
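As an illustration, here is a sketch that issues driver-appropriate DDL through PDO. It assumes an existing connection in $pdo; the SQLite branch uses INTEGER PRIMARY KEY AUTOINCREMENT, since SQLite only allows AUTOINCREMENT on a column declared exactly that way:

<?php
// Sketch: emit driver-specific DDL for the events table.
$driver = $pdo->getAttribute(PDO::ATTR_DRIVER_NAME);

if ($driver === 'sqlite') {
    // SQLite has no ENGINE/CHARSET clause, and KEY becomes a separate index.
    $pdo->exec("CREATE TABLE events (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name VARCHAR(32) NOT NULL,
        title VARCHAR(64) NOT NULL,
        description TEXT,
        starttime DATETIME DEFAULT '0000-00-00 00:00:00'
    )");
    $pdo->exec("CREATE INDEX events_name ON events (name)");
} else {
    // MySQL: the original query from the question.
    $pdo->exec("CREATE TABLE events (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT,
        name VARCHAR(32) NOT NULL,
        title VARCHAR(64) NOT NULL,
        description LONGTEXT,
        starttime DATETIME DEFAULT '0000-00-00 00:00:00',
        PRIMARY KEY(id),
        KEY name(name)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci");
}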
No, they support a completely different set of features. The most significant difference is that SQLite uses dynamic data types whereas MySQL uses static data types, but there are many other differences too.
They do however both support a common subset of SQL, so it is possible to write some simple SQL statements that will work in both systems.
I need to store a very large amount of text in a MySQL database. It will be millions of records with a LONGTEXT field, and the database size will be huge.
So I want to ask: is there a safe way to compress the text before storing it in the TEXT field to save space, with the ability to extract it back if needed?
Something like:
$archived_text = compress_text($huge_text);
// saving $archived_text to database here
// ...
// ...
// getting compressed text from database
$archived_text = get_text_from_db();
$huge_text = uncompress_text($archived_text);
Is there a way to do this with PHP or MySQL? All the texts are UTF-8 encoded.
UPDATE
My application is a large literature website where users can add their texts. Here is the table I have:
CREATE TABLE `book_parts` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`book_id` int(11) NOT NULL,
`title` varchar(200) DEFAULT NULL,
`content` longtext,
`order_num` int(11) DEFAULT NULL,
`views` int(10) unsigned DEFAULT '0',
`add_date` datetime DEFAULT NULL,
`is_public` tinyint(3) unsigned NOT NULL DEFAULT '1',
`published_as_draft` tinyint(3) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `key_order_num` (`order_num`),
KEY `add_date` (`add_date`),
KEY `key_book_id` (`book_id`,`is_public`,`order_num`),
CONSTRAINT FOREIGN KEY (`book_id`) REFERENCES `books` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Currently it has about 800k records and weighs 4 GB; 99% of queries are SELECTs. I have every reason to think these numbers will increase dramatically. I wouldn't like to store the texts in files, because there is quite heavy logic around them and my website gets quite a few hits.
Are you going to index these texts? How big is the read load on them? The insert load?
You can use InnoDB data compression, which is transparent and modern. See the docs for more info.
If you have really huge texts (say, each text above 10 MB), then it is a good idea not to store them in MySQL. Store the gzip-compressed texts in the file system and keep only pointers and metadata in MySQL. You can easily expand your storage in the future and move it to, e.g., a DFS.
Update: another plus of storing the texts outside MySQL: the DB stays small and fast. Minus: a high probability of data inconsistency.
Update 2: if you have a lot of programming resources, take a look at projects like this one: http://code.google.com/p/mysql-filesystem-engine/.
Final Update: according to your info, you can just use InnoDB compression; it uses the same zlib algorithm as ZIP. You can start with these params:
CREATE TABLE book_parts
(...)
ENGINE=InnoDB
ROW_FORMAT=COMPRESSED
KEY_BLOCK_SIZE=8;
Later you will need to tune KEY_BLOCK_SIZE. Check SHOW STATUS LIKE 'COMPRESS_OPS_OK' and SHOW STATUS LIKE 'COMPRESS_OPS'; the ratio of these two values should be close to 1.0 (see the docs).
If you're compressing (e.g. with gzip), then don't use TEXT fields of any sort. They're not binary-safe. Data going into/coming out of TEXT fields is subject to character set translation, which will probably (though not necessarily) mangle the compressed data and give you a corrupted result when you retrieve/uncompress the text.
Use BLOB fields instead, which are binary-transparent and do not do any translation of the data.
It might be better to define the text field as a BLOB and compress the data in PHP, to save communication costs.
CREATE TABLE book_parts (
......
content blob default NULL,
......
)
In PHP, use gzcompress and gzuncompress.
$content = '......';
$query = sprintf("replace into book_parts(content) values('%s') ",
mysql_escape_string(gzcompress($content)) );
mysql_query($query);
$query = "select * from book_parts where id = 111 ";
$result = mysql_query($query);
if ($result && $row = mysql_fetch_assoc($result))
$content = gzuncompress($row['content']);
You may also want to use a COMPRESS option to enable compression of packets.
Read some information about this option:
Use Compression in MySQL Connector/Net
Compress Property in dotConnect for MySQL
For PHP I have found this: MYSQLI_CLIENT_COMPRESS for the mysqli_real_connect() function.
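A minimal sketch of turning that on (the credentials are placeholders):

<?php
// Sketch: enable client/server protocol compression with mysqli.
// Note this compresses packets on the wire, not the data stored on disk.
$db = mysqli_init();
mysqli_real_connect($db, 'localhost', 'user', 'password', 'mydb',
                    3306, null, MYSQLI_CLIENT_COMPRESS);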
You could use the PHP functions gzdeflate() and gzinflate() for the text.
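For instance, the compress_text()/uncompress_text() pair sketched in the question could be implemented like this (store the result in a BLOB column, as noted above, since the output is binary):

<?php
// Sketch: compression helpers built on gzdeflate()/gzinflate().
function compress_text($text) {
    return gzdeflate($text, 9);   // 9 = best compression, slowest
}
function uncompress_text($data) {
    return gzinflate($data);
}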
There are no benefits in compressing large texts into a database.
Here are the problems you might face in the long run:
If the server crashes, the data may be hard to recover.
Not ideal for search.
It takes additional time to transfer the data between the MySQL server and the browser.
Time-consuming backups (if not using replication).
I think storing these large texts in disk files will be easier for:
Distributed backup (rsync).
PHP to handle file upload.
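A minimal sketch of that layout, with illustrative paths; only the pointer and metadata would go into the book_parts row:

<?php
// Sketch: compressed text on disk, pointer in MySQL.
$partId  = 123;                     // illustrative id
$content = 'a very long text ...';

$path = '/var/texts/' . $partId . '.z';
file_put_contents($path, gzcompress($content, 9));   // write compressed text

// ... store $path (not the text) in the database row ...

// Later, to read it back:
$content = gzuncompress(file_get_contents($path));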
I am in the process of migrating a large amount of data from several databases into one. As an intermediate step, I am copying the data to a file for each data type and source DB, and then copying it into a large table in my new database.
The structure is simple in the new table, called migrate_data. It consists of an id (primary key), a type_id (incremented within the data type set), data (a field containing a serialized PHP object holding the data I am migrating), source_db (refers to the source database, obviously), data_type (identifies what type of data we are looking at).
I have created keys and key combinations for everything but the data field. Currently I have the data field set as a longtext column. User inserts are taking about 4.8 seconds each on average. I was able to trim that down to 4.3 seconds using DELAY_KEY_WRITE=1 on the table.
What I want to know about is whether or not there is a way to improve the performance even more. Possibly by changing to a different data column type. That is why I ask about the longtext vs text vs blob. Are any of those more efficient for this sort of insert?
Before you answer, let me give you a little more information. I send all of the data to an insert function that takes the object, runs it through serialize(), then runs the data insert. It is all being done using Drupal 6 (and its db_query() function).
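Simplified, that flow looks roughly like this (a sketch with illustrative names, not the exact code, using Drupal 6's sprintf-style db_query() placeholders):

<?php
// Sketch of the insert helper described above (Drupal 6 style).
function migrate_data_insert($type_id, $object, $source_db, $data_type) {
  db_query("INSERT INTO {migrate_data} (type_id, data, source_db, data_type)
            VALUES (%d, '%s', '%s', '%s')",
           $type_id, serialize($object), $source_db, $data_type);
}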
Any efficiency improvements would be awesome.
Current table structure:
CREATE TABLE IF NOT EXISTS `migrate_data` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`type_id` int(10) unsigned NOT NULL DEFAULT '0',
`data` longtext NOT NULL,
`source_db` varchar(128) NOT NULL DEFAULT '',
`data_type` varchar(128) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `migrated_data_source` (`source_db`),
KEY `migrated_data_type_id` (`type_id`),
KEY `migrated_data_data_type` (`data_type`),
KEY `migrated_data_id__source` (`id`,`source_db`),
KEY `migrated_data_type_id__source` (`type_id`,`source_db`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 DELAY_KEY_WRITE=1;
The various TEXT/BLOB types are all identical in storage requirements in MySQL, and perform exactly the same way, except that TEXT fields are subject to character set conversion and BLOB fields are not. In other words, BLOBs are for storing binary data that MUST come out exactly the same as it went in. TEXT fields are for storing text data that may/can/will be converted from one charset to another.
I'm making a 100% JavaScript and canvas web application, with no forms at all. I have a JS array with data, and I'm wondering how I could pass it to a PHP script so it gets loaded into the database. Suggestions?
My thoughts are to keep it simple. If you are looking to store arrays as key/value pairs, i.e. a flat structure with no relationships between tables, then I would do the following:
Create a MySQL database with one table holding a key column and a value column:
CREATE TABLE `data` (
`data_id` INT(10) NOT NULL AUTO_INCREMENT,
`data_key` CHAR(50) NULL,
`data_value` TEXT NULL,
`datemodified` TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`datecreated` DATETIME NULL,
PRIMARY KEY (`data_id`),
UNIQUE INDEX `data_key` (`data_key`)
)
COLLATE='latin1_swedish_ci'
ENGINE=MyISAM
ROW_FORMAT=DEFAULT
Anyway, create a PHP script that will take a POST of two variables: a key, and a value (this would be an object in JavaScript).
If you POST only a key, it should return the value (in JSON format, so JavaScript can interpret it into an object).
If you POST a key and a value, the script will do an INSERT IGNORE and return the data_id. The value should be JSON-encoded (as it already will be if posted from JavaScript) and stored under the key.
You could also make an optional third way of accessing the data using the data_id value.
Just a thought... let us know how much PHP experience you have and if you need specific details on what functions to use.
Security would also be a factor to consider. In this case, you might want to have JavaScript generate a unique session ID and then add this session_id to the table, so users can only access their own data. I'm not really sure how your app works at this stage, though, so sorry I can't suggest something more secure.
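A minimal sketch of the script described above (the credentials are placeholders, and error handling is omitted):

<?php
// Sketch: key/value endpoint backed by the `data` table above.
$db = new mysqli('localhost', 'user', 'password', 'mydb');

$key   = isset($_POST['key'])   ? $_POST['key']   : null;
$value = isset($_POST['value']) ? $_POST['value'] : null;

if ($key !== null && $value !== null) {
    // Store the value (already JSON-encoded by the client) under the key.
    $stmt = $db->prepare("INSERT IGNORE INTO data (data_key, data_value, datecreated)
                          VALUES (?, ?, NOW())");
    $stmt->bind_param('ss', $key, $value);
    $stmt->execute();
    echo json_encode(array('data_id' => $db->insert_id)); // 0 if the key already existed
} elseif ($key !== null) {
    // Return the stored value so the client can decode it back into an object.
    $stmt = $db->prepare("SELECT data_value FROM data WHERE data_key = ?");
    $stmt->bind_param('s', $key);
    $stmt->execute();
    $stmt->bind_result($stored);
    echo $stmt->fetch() ? $stored : json_encode(null);
}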
Create a web service in PHP that can connect to the MySQL DB. Your JS code would then make calls to this web service to save to the DB.
You can send it as a JSON string to a PHP file using AJAX and then decode the JSON, inserting the data into the database. You should have the PHP file return something to make sure it's done.
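A minimal sketch of that receiving side (the parameter, table, and credential names are all hypothetical):

<?php
// Sketch: decode a posted JSON array and insert each element as a row.
$items = json_decode($_POST['data'], true);   // true = decode to PHP arrays

$db   = new mysqli('localhost', 'user', 'password', 'mydb');
$stmt = $db->prepare("INSERT INTO items (value) VALUES (?)");

foreach ($items as $item) {
    $json = json_encode($item);   // store each element back as JSON text
    $stmt->bind_param('s', $json);
    $stmt->execute();
}

echo json_encode(array('inserted' => count($items)));   // confirmation for the client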