I'm using PDO and trying to make my application support both MySQL and SQLite, but with SQLite I get this error when I try to import my database schema:
SQLSTATE[HY000]: General error: 1 near "AUTO_INCREMENT": syntax error
The query looks like this:
CREATE TABLE events (
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL,
title VARCHAR(64) NOT NULL,
description LONGTEXT,
starttime DATETIME DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY(id),
KEY name(name)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
(and it works in a MySQL database.)
I don't understand what the problem is here. Shouldn't the two database systems be compatible?
http://www.sqlite.org/autoinc.html
In SQLite it's called AUTOINCREMENT, not AUTO_INCREMENT
They should be compatible as regards the ANSI SQL standard, which all SQL databases are supposed to adhere to. However, auto-increment is not part of that standard; it is an extra feature implemented by some databases (including MySQL). Not all databases provide the feature, and those that do may provide it in a different manner or with different syntax.
AUTO_INCREMENT is MySQL-specific. SQLite apparently has a similar thing, AUTOINCREMENT.
Unfortunately, although SQL is supposed to be a standard, each database implementation differs and has its own peculiarities, so you have to adjust your query to make it work on SQLite.
No, they support a completely different set of features. The most significant difference is that SQLite uses dynamic data types whereas MySQL uses static data types, but there are many other differences too.
They do, however, both support a common subset of SQL, so it is possible to write some simple SQL statements that will work in both systems.
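Putting the answers together, one way to keep a single codebase working against both engines is to branch on the PDO driver at schema-import time. A minimal sketch, assuming an existing $pdo handle (the SQLite branch drops the MySQL-only clauses and the zero-date default; this is an illustration, not the only portable layout):

$driver = $pdo->getAttribute(PDO::ATTR_DRIVER_NAME); // 'mysql', 'sqlite', ...

if ($driver === 'sqlite') {
    // In SQLite, INTEGER PRIMARY KEY auto-assigns rowids by itself;
    // the optional keyword is spelled AUTOINCREMENT (no underscore).
    $pdo->exec('CREATE TABLE events (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name VARCHAR(32) NOT NULL,
        title VARCHAR(64) NOT NULL,
        description TEXT,
        starttime DATETIME
    )');
    // KEY name(name) is MySQL syntax; SQLite wants a separate statement.
    $pdo->exec('CREATE INDEX name ON events (name)');
} else {
    $pdo->exec("CREATE TABLE events (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT,
        name VARCHAR(32) NOT NULL,
        title VARCHAR(64) NOT NULL,
        description LONGTEXT,
        starttime DATETIME,
        PRIMARY KEY (id),
        KEY name (name)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci");
}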
It's actually my fault that I did not think about it earlier: my remote server's MySQL version (on shared hosting) is 5.5.6, but my local MySQL version is 5.7.19.
I developed a Laravel (v6.6.0) web application where I ran the migrations on the very first run, but as it's a completely personal project, I kept modifying the database by hand where and how necessary (and, off the record, I kept changing the migration files as well, though I never ran them again after that first time).
I migrated all the data from some other tables and my application was ready to deploy. But when I exported the local database tables and imported them into the remote database, I got a well-known error:
Specified key was too long; max key length is 767 bytes
I initially ignored it because all the tables seemed to import nicely. But recently I discovered the fallout: none of the AUTO_INCREMENT and PRIMARY KEY definitions are present in my remote database.
I searched as much as I could, but all the solutions suggest deleting the database and creating it again as UTF-8, which is not an option in my case. And a PHP-side fix like the following is not applicable either, because I'm getting the error while importing the tables through phpMyAdmin:
// File: app/Providers/AppServiceProvider.php
use Illuminate\Support\Facades\Schema;

// Inside the AppServiceProvider class:
public function boot()
{
    // Cap the default string column length so utf8mb4 indexes
    // fit within the 767-byte key limit of MySQL before 5.7.7.
    Schema::defaultStringLength(191);
}
I also tried running the following command on my target database:
SET @@global.innodb_large_prefix = 1;
But no luck. I also tried replacing all occurrences in my local .sql file:
from utf8mb4 to utf8, and
from utf8mb4_unicode_ci to utf8_general_ci
but again, no luck.
Where the error specifically comes from is the longer foreign key names, like xy_section_books_price_unit_id_foreign, and at this stage, when everything is already done, I don't know how to refactor all the foreign keys to be 5.5-compatible.
Can anybody please shed some light on my issue?
How can I deploy my local (v5.7) database to a v5.5 MySQL server without losing my PRIMARY KEYs, FOREIGN KEYs and INDEXes, keeping the data intact?
Change your key names. You can override the "default generated" very long key names when you create them; see the "Available Index Types" section of the Laravel migration docs: https://laravel.com/docs/5.8/migrations
I ran into a similar issue when migrating from SQL Server to MySQL: the autogenerated key names, which embedded full namespaces, were simply too long (MySQL caps identifier names at 64 characters). By replacing them all with hand-crafted unique index names I got around those problems.
You don't strictly need database-wide unique index names in MySQL, but if you use SQLite for unit tests you do need unique names.
so instead of:
public function up()
{
....
$table->primary('id');
// generates something like work_mayeenul_islam_workhorse_models_model_name_id_primary_key
$table->index(['foobar','bazbal']);
// generates something like work_mayeenul_islam_workhorse_models_model_name_foobar_bazbal_index
}
Use your own defined index names instead, which you know to be short:
public function up()
{
....
$table->primary('id', 'PK_short_namespace_modelname_id');
$table->index(['foobar', 'bazbal'], 'IX_short_namespace_modelname_foobar_bazbal');
}
Thank you @Tschallacka for your answer. My problem was that I cannot run php artisan migrate anymore because I have live data in those tables. First of all, the issue taught me some new things (thanks to my colleague Nazmul Hasan):
Lesson Learnt
Keys are unique but could even be gibberish
First, I found a pattern in the foreign key names: {table_name}_{column_name}_foreign, and similarly in the index keys: {table_name}_{column_name}_index. The lesson learned is that a foreign key or index key doesn't have to follow such a format to work. It has to be unique, but it can be anything, even gibberish. So the password_resets_email_index key can easily be pre_idx or anything else (one way to rename such a key is shown below).
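For instance, on MySQL 5.5 such a key can be renamed by dropping and re-adding it in one statement (ALTER TABLE ... RENAME INDEX only arrived in 5.7):
ALTER TABLE `password_resets` DROP INDEX `password_resets_email_index`, ADD INDEX `pre_idx` (`email`);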
But that was not the issue.
Solution
For the solution, I tried digging through the .sql file table by table and scope by scope. I found that only two of the UNIQUE KEY declarations were producing a blocking error, and there were three other places that produced warnings:
ALTER TABLE `contents` ADD KEY `contents_slug_index` (`slug`); --- throwing warning
ALTER TABLE `foo_bar` ADD UNIQUE KEY `slug` (`slug`); --- throwing error
ALTER TABLE `foo_bar` ADD KEY `the_title_index` (`title`) USING BTREE; --- throwing warning
ALTER TABLE `password_resets` ADD KEY `password_resets_email_index` (`email`); --- throwing warning
ALTER TABLE `users` ADD UNIQUE KEY `users_email_unique` (`email`); --- throwing error
Finally, the solution came from this particular Stack Overflow thread, which gives the maximum indexable VARCHAR lengths:
InnoDB utf8: VARCHAR(255)
InnoDB utf8mb4: VARCHAR(191)
Inspecting those tables with the knowledge from that SO thread, I found the issue: with the utf8mb4 collations in MySQL 5.5/5.6, an indexed field cannot be longer than 191 characters, because index keys are limited to 767 bytes and utf8mb4 reserves 4 bytes per character (192 × 4 = 768 > 767). But,
with the utf8 collations, which reserve 3 bytes per character, the limit is 255. The catch is that with utf8_unicode_ci you cannot save emoji and other 4-byte characters.
So I decided to stay with utf8_unicode_ci for the comparatively longer limit. As a temporary remedy:
I changed all the affected columns from utf8mb4_unicode_ci to utf8_unicode_ci
Where such a column was longer than 255 characters, I reduced it to 255
So for example, if the table is like below:
CREATE TABLE `foo_bar` (
`id` bigint(20) UNSIGNED NOT NULL,
`cover` varchar(500) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`title` varchar(300) COLLATE utf8mb4_unicode_ci NOT NULL,
`slug` varchar(300) COLLATE utf8mb4_unicode_ci NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
I changed only the necessary (i.e. indexed) columns:
CREATE TABLE `foo_bar` (
`id` bigint(20) UNSIGNED NOT NULL,
`cover` varchar(500) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`title` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`slug` varchar(255) COLLATE utf8_unicode_ci NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
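For tables already imported with live data, the same change can be applied in place instead of editing the dump; a sketch for the foo_bar example above (note that values longer than 255 characters would be truncated):
ALTER TABLE `foo_bar`
  MODIFY `title` varchar(255) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
  MODIFY `slug` varchar(255) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL;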
And that's it. This temporary remedy is working just fine, and I didn't have to change any foreign keys or index keys.
Why a temporary remedy? Because eventually I'll move to MySQL 5.7+, but until then I at least want to cope with the older version.
I'm trying to convert a database to use utf8mb4 instead of utf8. Everything is going fine except one table:
CREATE TABLE `search_terms` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`search_term` varchar(128) NOT NULL,
`time_added` timestamp NULL DEFAULT NULL,
`count` int(10) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `search_term` (`search_term`),
KEY `search_term_count` (`count`)
) ENGINE=InnoDB AUTO_INCREMENT=198981 DEFAULT CHARSET=utf8;
Basically all it does is save an entry every time somebody searches for something in a form, so we can track the number of searches; very simple.
There's a unique index on search_term because we want only one row per search term, incrementing the count value instead.
However when converting to utf8mb4 I am getting duplicate entry errors. Here is the command I am running:
ALTER TABLE `search_terms` CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
Looking in the database I can see various examples like this, where the entries differ only in fullwidth vs. ASCII characters (so they can render almost identically):
fm2012
ｆｍ2012
ｆｍ２０１２
In its current utf8 character set (which defaults to the utf8_general_ci collation), these are all treated as distinct and live in the database without ever tripping the unique index on search_term.
But when converting to utf8mb4 they are now considered equal, and the conversion throws an error because of that index.
I can figure out how to merge these together easily enough, but I'm concerned this may be a symptom of a greater underlying problem. I'm not really sure how this has happened or what the consequences may be, so my questions are a bit vague:
Why is utf8mb4 treating these differently to utf8?
What are the possible consequences?
Is there some way I can do a conversion so things like "ｆｍ2012" never appear in my database and I only have "fm2012"? (I am also using Laravel 5.1.)
Your problem is the change of collation: you're converting from general_ci to unicode_ci. general_ci is quite a simplistic collation that doesn't know much about Unicode, but unicode_ci does.
The first "f" in your example string is a "Fullwidth Latin Small Letter F" (U+FF46) which is considered equal to "Latin Small Letter F" (U+0066) by unicode_ci but not by general_ci.
Normally unicode_ci is recommended precisely because of its Unicode awareness, but you could convert to utf8mb4_general_ci to avoid this problem.
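For the table in the question that would be something like:
ALTER TABLE `search_terms` CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
(utf8 with no explicit collation defaults to utf8_general_ci, so this keeps the same collation family.)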
To prevent this problem in the future, you should normalize your input before saving it to the DB. Normally you'd use NFC, but your case seems to call for NFKC. That would bring all "equivalent" strings to the same form.
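A minimal sketch of that normalization step in PHP, using the intl extension's Normalizer class (NFKC folds compatibility characters such as fullwidth letters into their ASCII forms):

// Requires the intl extension.
$raw  = 'ｆｍ2012';                                       // starts with a fullwidth 'ｆ' (U+FF46)
$term = Normalizer::normalize($raw, Normalizer::FORM_KC); // NFKC
// $term is now the plain ASCII string 'fm2012'; save this to the DB.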
Despite what was said previously, it is not just about general_ci being more simplistic than unicode_ci. That may be true, but the key point is that the target collation has to match the sub-type you already have.
For example, my database is utf8_bin. I cannot convert it to utf8mb4_unicode_ci nor to utf8mb4_general_ci: both commands throw a duplicate-key error. However, the matching collation, utf8mb4_bin, completes without issues.
I have a Joomla (PHP) website with an existing hosted MySQL database.
I have a Google Cloud SQL Instance with some statistical data in.
I need to query the data across both databases and would like the query to run on the Google Cloud SQL instance.
My research so far has led me to believe that the best way to do this is to create a federated table inside the Google Cloud SQL database, but in attempting to do this I am not getting the results I expect (nor am I getting an error?!)
Joomla MySQL table:
CREATE TABLE test_table (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL DEFAULT '',
other INT(20) NOT NULL DEFAULT '0',
PRIMARY KEY (id),
INDEX name (name),
INDEX other_key (other)
)
ENGINE=MyISAM
DEFAULT CHARSET=latin1;
Google Cloud SQL:
CREATE TABLE federated_table (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL DEFAULT '',
other INT(20) NOT NULL DEFAULT '0',
PRIMARY KEY (id),
INDEX name (name),
INDEX other_key (other)
)
ENGINE=FEDERATED
DEFAULT CHARSET=latin1
CONNECTION='mysql://*uid*:*pwd*@*joomla_server_ip*:3306/*database_name*/test_table';
where *uid*, *pwd*, *joomla_server_ip* and *database_name* are all valid values.
Both statements execute fine with no errors, but after inserting data into *test_table* on Joomla I am unable to see any data in *federated_table* on Google Cloud SQL.
I have tried the federated table creation using both the command line tool (Windows) and using the SQuirrel SQL JDBC client.
Because I am seeing no errors whatsoever, I'm not sure whether the problem is at the Joomla database end or the Google Cloud SQL end, so any help will be greatly appreciated. I am assuming the problem is with the connection between the two databases, but am open to any other theories you may throw at me.
EDIT:
I'm now using a different client (MySQL Workbench) to connect, and it reports errors when I try to do the same thing:
Error 1286: Unknown storage engine 'FEDERATED'
Warning 1266: Using storage engine InnoDB for table 'federated_table'
So my earlier CREATE TABLE had silently fallen back to a plain local InnoDB table, which explains why I saw no errors and no data.
Shortly after asking this question, Google added the MySQL Wire Protocol to Google Cloud SQL.
http://googlecloudplatform.blogspot.co.uk/2013/10/google-cloud-sql-now-accessible-from-any-application-anywhere.html
It is now possible to create Federated tables in the normal way.
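With the link in place, the cross-database query from the question can then run entirely on the Cloud SQL side; a sketch, with stats_table standing in as a hypothetical local statistics table:
SELECT f.name, s.visits
FROM federated_table AS f
JOIN stats_table AS s ON s.event_id = f.id;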
I know this question has been asked more than once here, but I couldn't find a solution.
We are using a database where we are storing the facebook id as a BIGINT(20).
create table users(
fb_id bigint(20) NOT NULL,
user_name varchar(30) NOT NULL,
CONSTRAINT uk_name unique (user_name),
CONSTRAINT pk_fb_id primary key (fb_id)
)ENGINE=INNODB;
But the PDO engine of PHP can insert only up to the maximum integer value of PHP, i.e. 2147483647.
$stmt->bindParam(':fb_id', $this->fb_id, PDO::PARAM_INT);
This, I understand, is quite obvious, since we are limited by the maximum integer value in PHP. I tried binding it as a string instead:
$stmt->bindParam(':fb_id', $this->fb_id, PDO::PARAM_STR);
but it still doesn't work.
I want to know if there could be a workaround to store it as bigint.
We are using a database where we are storing the facebook id as a BIGINT(20).
Why oh why are you doing that?
I think the general consensus is that Facebook ids should not be saved as numeric types, but as strings instead. Saving them as something numeric does not yield any advantages whatsoever, but it has several disadvantages.
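A minimal sketch of the string-based approach with PDO, assuming an existing $pdo handle and an id that arrives as a string (e.g. straight from the Graph API response) and is never cast to int along the way. As an aside, the 2147483647 ceiling suggests a 32-bit PHP build; on 64-bit builds PHP_INT_MAX is 9223372036854775807.

// Schema side: e.g. fb_id VARCHAR(32) NOT NULL instead of BIGINT(20),
// or keep BIGINT and let MySQL convert the bound string itself.
$stmt = $pdo->prepare('INSERT INTO users (fb_id, user_name) VALUES (:fb_id, :user_name)');
$stmt->bindValue(':fb_id', '123456789012345678', PDO::PARAM_STR); // keep it a string end to end
$stmt->bindValue(':user_name', 'example_user', PDO::PARAM_STR);
$stmt->execute();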
I have a shapefile, and I want to show it on the web using Leaflet (http://leaflet.cloudmade.com/). Since Leaflet only supports GeoJSON, I have to convert the shp file into GeoJSON. That part is easy, since I can use the "save as" capability in Quantum GIS.
Although I could use the GeoJSON file itself as my database (reading, editing and writing the file programmatically), I think it is better to use a "real" database. MySQL is the most popular one, and it supports spatial data, so I decided to use MySQL.
The scenario is:
Change shp into MySQL (I use ogr2ogr and just simply run this command: ogr2ogr -f "MySQL" MySQL:"geo,user=root,host=localhost,password=toor" -lco engine=MYISAM airports.shp)
Fetch MySQL database into geojson <-- here is the problem
Using AJAX to get the GeoJSON and update the layout <-- this should be easy, I'm good with jQuery
There is a column in my MySQL table whose type is GEOMETRY; see the table definition below:
CREATE TABLE IF NOT EXISTS `airports` (
`OGR_FID` int(11) NOT NULL AUTO_INCREMENT,
`SHAPE` geometry NOT NULL,
`cat` decimal(10,0) DEFAULT NULL,
`na3` varchar(80) DEFAULT NULL,
`elev` double(32,3) DEFAULT NULL,
`f_code` varchar(80) DEFAULT NULL,
`iko` varchar(80) DEFAULT NULL,
`name` varchar(80) DEFAULT NULL,
`use` varchar(80) DEFAULT NULL,
UNIQUE KEY `OGR_FID` (`OGR_FID`),
SPATIAL KEY `SHAPE` (`SHAPE`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=77 ;
Is there any way to change such a table into GeoJSON format?
(I prefer the easy way, but if there is none, converting the geometry column into an array-like structure is also acceptable.)
EDIT:
I used geoPHP, written by phayes:
https://github.com/phayes/geoPHP/wiki/Example-format-converter
This solves the main problem; I only needed to mess about a bit with adding features etc.
Any easier solution?
While there may not be a direct method to convert a MySQL spatial entity to GeoJSON, you can try the following (see the sketch after these steps):
get the WKT (Well-Known Text) of the entity (MySQL Reference)
convert from WKT to GeoJSON (done in Perl, although you should be able to find it in other languages or write your own in JavaScript)
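A minimal sketch of those two steps in PHP, using the geoPHP library mentioned in the question's edit, and assuming the airports table from the question with a local MySQL connection:

require_once 'geoPHP.inc';

$pdo  = new PDO('mysql:host=localhost;dbname=geo', 'root', 'toor');
// Step 1: let MySQL return the geometry as WKT
// (the function is AsText() up to MySQL 5.6, ST_AsText() in newer versions).
$rows = $pdo->query('SELECT `name`, AsText(`SHAPE`) AS wkt FROM `airports`');

$features = array();
foreach ($rows as $row) {
    // Step 2: convert each WKT geometry into a GeoJSON geometry with geoPHP.
    $geometry   = json_decode(geoPHP::load($row['wkt'], 'wkt')->out('json'));
    $features[] = array(
        'type'       => 'Feature',
        'geometry'   => $geometry,
        'properties' => array('name' => $row['name']),
    );
}

echo json_encode(array('type' => 'FeatureCollection', 'features' => $features));

Leaflet can consume the resulting FeatureCollection directly on the client via L.geoJson().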
Note that just calling json_encode() on the entity, as others have suggested, will not yield GeoJSON.
My personal suggestion, which does not directly answer your question, is to store the data in the format you need it retrieved in. That reduces the overhead of processing the data every time you need it.
The easiest way to do this is to store the GeoJSON as plain text, as you suggested. If, for whatever reason, you also need the geometry stored in its native format, you can keep it in another column. The only downside is keeping the two columns in sync.
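If you do keep both, a sketch of the extra column (the column name is illustrative):
ALTER TABLE `airports` ADD COLUMN `geojson` TEXT;
Then write the converted GeoJSON into it whenever SHAPE changes.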