How do I create a Federated Table in Google Cloud SQL - php

I have a Joomla (PHP) website with an existing hosted MySQL database.
I have a Google Cloud SQL Instance with some statistical data in.
I need to query the data across both databases and would like the query to run on the Google Cloud SQL instance.
My research so far has led me to believe that the best way to do this is to create a FEDERATED table inside the Google Cloud SQL database, but in attempting to do this I am not getting the results I expect (nor am I getting an error!).
Joomla MySQL table:
CREATE TABLE test_table (
    id INT(20) NOT NULL AUTO_INCREMENT,
    name VARCHAR(32) NOT NULL DEFAULT '',
    other INT(20) NOT NULL DEFAULT '0',
    PRIMARY KEY (id),
    INDEX name (name),
    INDEX other_key (other)
)
ENGINE=MyISAM
DEFAULT CHARSET=latin1;
Google Cloud SQL:
CREATE TABLE federated_table (
    id INT(20) NOT NULL AUTO_INCREMENT,
    name VARCHAR(32) NOT NULL DEFAULT '',
    other INT(20) NOT NULL DEFAULT '0',
    PRIMARY KEY (id),
    INDEX name (name),
    INDEX other_key (other)
)
ENGINE=FEDERATED
DEFAULT CHARSET=latin1
CONNECTION='mysql://*uid*:*pwd*@*joomla_server_ip*:3306/*database_name*/test_table';
where
*uid*, *pwd*, *joomla_server_ip* and *database_name*
are all valid values.
Both statements execute fine with no errors, but after inserting data into *test_table* on the Joomla side, I am unable to see any data in *federated_table* on Google Cloud SQL.
I have tried the federated table creation using both the command-line tool (Windows) and the SQuirreL SQL JDBC client.
Because I am seeing no errors whatsoever, I'm not sure whether the problem is at the Joomla database end or the Google Cloud SQL end, so any help will be greatly appreciated. I am assuming the problem is with the connection between the two databases, but I am open to any other theories you may throw at me.
EDIT:
I'm now using a different client to connect (MySQL Workbench) and this reports an error when trying to do the same thing:
1286 Unknown storage engine 'FEDERATED'
1266 Using storage engine InnoDB for table 'federated_table'

Shortly after asking this question Google added the MySQL Wire Protocol to Google Cloud SQL.
http://googlecloudplatform.blogspot.co.uk/2013/10/google-cloud-sql-now-accessible-from-any-application-anywhere.html
It is now possible to create FEDERATED tables in the normal way.
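As a sketch of what "the normal way" looks like (credentials and host are placeholders; verify first that the FEDERATED engine is actually enabled on the instance you are connecting to):

-- The FEDERATED engine must show Support = YES here; on a self-managed MySQL
-- server it is disabled by default and has to be enabled with federated=1 in my.cnf.
SHOW ENGINES;

-- Note the '@' between the credentials and the host in the CONNECTION string.
CREATE TABLE federated_table (
    id INT(20) NOT NULL AUTO_INCREMENT,
    name VARCHAR(32) NOT NULL DEFAULT '',
    other INT(20) NOT NULL DEFAULT '0',
    PRIMARY KEY (id),
    INDEX name (name),
    INDEX other_key (other)
)
ENGINE=FEDERATED
DEFAULT CHARSET=latin1
CONNECTION='mysql://joomla_user:joomla_password@joomla_server_ip:3306/database_name/test_table';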

Related

How to deploy MySQL database from v5.7.19 to a remote MySQL database of v5.5.6

It's my own fault that I did not think about it earlier: my remote server's MySQL version (on shared hosting) is 5.5.6, but my local MySQL version is 5.7.19.
I developed a Laravel (v6.6.0) web application and ran the migrations on the very first run, but as it's a purely personal project I continued modifying the database by hand wherever necessary (off the record, I kept updating the migration files as well, though I never ran them again after that first time).
I migrated all the data from some other tables and my application was ready to deploy. But when I exported the local database tables and imported them into the remote database, I got the well-known error:
Specified key was too long; max key length is 767 bytes
I initially ignored it because all the tables appeared to import nicely, but recently I discovered the consequences: none of the AUTO_INCREMENT and PRIMARY KEY definitions are present in my remote database.
I searched as much as I could, but the solutions all suggest deleting the database and recreating it with UTF-8, which is not an option in my case. A PHP-side fix like the following is also not applicable, because the error occurs while I'm importing the tables through phpMyAdmin:
// File: app/Providers/AppServiceProvider.php
use Illuminate\Support\Facades\Schema;

public function boot()
{
    Schema::defaultStringLength(191);
}
I also tried running the following command on my target database:
SET @@global.innodb_large_prefix = 1;
But no luck. I also tried replacing all the occurrences in my local .sql file:
from utf8mb4 to utf8, and
from utf8mb4_unicode_ci to utf8_general_ci
but found no luck again.
As for where the error specifically comes from: it is the longer foreign keys, like xy_section_books_price_unit_id_foreign, and at this stage, when everything is already done, I don't know how I can refactor all the foreign keys to be 5.5-compatible.
Can anybody please shed some light on my issue?
How can I deploy my local database (v5.7) to a v5.5 MySQL database without losing my PRIMARY KEYs, FOREIGN KEYs and INDEXes, keeping the data intact?
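(For reference, a rough sketch of the server-side settings that innodb_large_prefix depends on, assuming a MySQL 5.6-era server where the variable exists at all and you have the privileges to change it, which is usually not the case on shared hosting; the answers below take a different route.)

-- innodb_large_prefix only takes effect together with the Barracuda file
-- format and a DYNAMIC (or COMPRESSED) row format on each table.
SET GLOBAL innodb_file_format = 'Barracuda';
SET GLOBAL innodb_file_per_table = 1;
SET GLOBAL innodb_large_prefix = 1;

-- Existing tables then need to be rebuilt with a suitable row format,
-- e.g. for each affected table (foo_bar here is just a placeholder):
ALTER TABLE foo_bar ROW_FORMAT=DYNAMIC;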
Change your key names. You can override the very long "default generated" key names when you create them. See "Available index types" in the documentation: https://laravel.com/docs/5.8/migrations
I ran into a similar issue when migrating from SQL Server to MySQL: the autogenerated key names, which included full namespaces, were simply too long. By replacing them all with hand-crafted unique index names I got around those problems.
You don't really need globally unique names in MySQL, but if you use SQLite for unit tests you do need unique names.
So instead of:
public function up()
{
    ....
    $table->primary('id');
    // generates something like work_mayeenul_islam_workhorse_models_model_name_id_primary_key
    $table->index(['foobar','bazbal']);
    // generates something like work_mayeenul_islam_workhorse_models_model_name_foobar_bazbal_index
}
use your own defined index names, which you know to be short:
public function up()
{
    ....
    $table->primary('id', 'PK_short_namespace_modelname_id');
    $table->index(['foobar', 'bazbal'], 'IX_short_namespace_modelname_foobar_bazbal');
}
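For illustration only, this is roughly what the resulting MySQL DDL looks like with the explicit names from the snippet above (the column types are made up for the sketch; note that MySQL always names the primary key index PRIMARY regardless of the name passed to ->primary()):

CREATE TABLE model_name (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    foobar VARCHAR(191) NOT NULL,
    bazbal VARCHAR(191) NOT NULL,
    PRIMARY KEY (id),
    INDEX IX_short_namespace_modelname_foobar_bazbal (foobar, bazbal)
);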
Thank you @Tschallacka for your answer. My problem was that I cannot run php artisan migrate anymore because I have live data in those tables. First of all, the issue taught me something new (thanks to my colleague Nazmul Hasan):
Lesson Learnt
Keys are unique but could even be gibberish
First, I found a pattern in the foreign key names: {table_name}_{column_name}_foreign. Similarly for index keys: {table_name}_{column_name}_index. The lesson learned is that a foreign key or index key doesn't have to follow such a format to work. It has to be unique, but it can be anything, even gibberish. So the password_resets_email_index key can easily be pre_idx or anything else.
But that was not the issue.
Solution
For the solution, I dug through the .sql file table by table and scope by scope, and found that only 2 of the UNIQUE KEY declarations were producing a blocking error, while 3 other statements only produced warnings:
ALTER TABLE `contents` ADD KEY `contents_slug_index` (`slug`); --- throwing warning
ALTER TABLE `foo_bar` ADD UNIQUE KEY `slug` (`slug`); --- throwing error
ALTER TABLE `foo_bar` ADD KEY `the_title_index` (`title`) USING BTREE; --- throwing warning
ALTER TABLE `password_resets` ADD KEY `password_resets_email_index` (`email`); --- throwing warning
ALTER TABLE `users` ADD UNIQUE KEY `users_email_unique` (`email`); --- throwing error
Finally, the solution came from this particular Stack Overflow thread, which gives the maximum indexable VARCHAR lengths:
InnoDB utf8: VARCHAR(255)
InnoDB utf8mb4: VARCHAR(191)
Inspecting those tables with the knowledge from that SO thread, I found:
The issue is: with collation utf8mb4_unicode_ci in MySQL 5.5/5.6, an indexed VARCHAR column cannot be longer than 191 characters. But
with collation utf8_unicode_ci in MySQL 5.5/5.6, an indexed VARCHAR column can be up to 255 characters. However, with utf8_unicode_ci you cannot save emoji etc.
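To spell out the arithmetic behind those two numbers (assuming the default 767-byte index key limit of MySQL 5.5/5.6 without innodb_large_prefix; the ALTER statements are only an illustration, not what I did):

-- utf8 uses up to 3 bytes per character:    255 * 3 = 765 bytes  (fits in 767)
-- utf8mb4 uses up to 4 bytes per character: 255 * 4 = 1020 bytes (too long)
--                                           191 * 4 = 764 bytes  (fits in 767)

-- So, assuming `email` is VARCHAR(255) with the utf8mb4 charset, this fails
-- on 5.5/5.6 with default settings:
ALTER TABLE `users` ADD UNIQUE KEY `users_email_unique` (`email`);

-- ...while either of these column changes makes the key fit:
ALTER TABLE `users` MODIFY `email` VARCHAR(191) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL;
ALTER TABLE `users` MODIFY `email` VARCHAR(255) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL;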
So I decided to stay with utf8_unicode_ci to keep the comparatively longer length. As a temporary remedy:
I changed the collation of those particular columns from utf8mb4_unicode_ci to utf8_unicode_ci
If any of those columns exceeded 255 characters, I reduced them to 255
So for example, if the table is like below:
CREATE TABLE `foo_bar` (
    `id` bigint(20) UNSIGNED NOT NULL,
    `cover` varchar(500) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
    `title` varchar(300) COLLATE utf8mb4_unicode_ci NOT NULL,
    `slug` varchar(300) COLLATE utf8mb4_unicode_ci NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
I changed only the necessary columns:
CREATE TABLE `foo_bar` (
    `id` bigint(20) UNSIGNED NOT NULL,
    `cover` varchar(500) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
    `title` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
    `slug` varchar(255) COLLATE utf8_unicode_ci NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
And that's it. This temporary remedy is working just fine, and I didn't have to change the foreign keys or index keys.
Why a temporary remedy? Because eventually I'll move to MySQL 5.7+, but until then this at least copes with the older versions.
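As an alternative sketch (not what I did above): MySQL 5.5 also accepts index prefix lengths, so the two statements that threw blocking errors could in principle be rewritten to index only the first 191 characters of the utf8mb4 columns, at the cost of uniqueness being enforced only on that prefix:

-- Keep the key under the 767-byte limit by indexing only a prefix of the column.
ALTER TABLE `foo_bar` ADD UNIQUE KEY `slug` (`slug`(191));
ALTER TABLE `users` ADD UNIQUE KEY `users_email_unique` (`email`(191));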

MySQL error when copying/importing database

The premise of my problem is this:
I know very little about MySQL, and I have the MySQL database of a WordPress blog that I'm trying to copy between two instances of XAMPP, from one under OS X to the other under Windows 7. I assume both are InnoDB, though I'm not sure. Both XAMPP versions are the latest to date.
I export the database to an .sql file with all the default settings and then try to import it into an empty database of the same name. When it gets to this table:
Indexes for table wp_posts
ALTER TABLE `wp_posts`
    ADD PRIMARY KEY (`ID`),
    ADD KEY `post_name` (`post_name`),
    ADD KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`),
    ADD KEY `post_parent` (`post_parent`),
    ADD KEY `post_author` (`post_author`),
    ADD FULLTEXT KEY `crp_related` (`post_title`,`post_content`),
    ADD FULLTEXT KEY `crp_related_title` (`post_title`),
    ADD FULLTEXT KEY `crp_related_content` (`post_content`);
I get this error:
#1795 - InnoDB presently supports one FULLTEXT index creation at a time
What should I do to successfully copy the database over?
I understand it's an InnoDB limitation (or bug) of some kind. Googling the exact error phrase, I found this thread:
MySQL Error When Copying or Importing Database
but I don't know enough to understand what the OP did to his db so it worked...
I also found this
http://sourceforge.net/p/phpmyadmin/feature-requests/1553/
I understand this is directly related to the error I'm getting, and that it probably describes a solution ("That is why PhpMyAdmin at export should separate creation of FULLTEXT indexes into few SQL commands"), but I don't know enough to understand that either, and Googling further about FULLTEXT only finds pages about the "fulltext search" function.
But what should I DO, exactly? Please advise.
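For what it's worth, a sketch of what that suggested fix looks like for the statement above: keep the regular keys plus at most one FULLTEXT index in the first ALTER TABLE, then add each remaining FULLTEXT index in its own statement, since InnoDB builds only one FULLTEXT index at a time:

ALTER TABLE `wp_posts`
    ADD PRIMARY KEY (`ID`),
    ADD KEY `post_name` (`post_name`),
    ADD KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`),
    ADD KEY `post_parent` (`post_parent`),
    ADD KEY `post_author` (`post_author`),
    ADD FULLTEXT KEY `crp_related` (`post_title`,`post_content`);

-- One FULLTEXT index per statement from here on.
ALTER TABLE `wp_posts` ADD FULLTEXT KEY `crp_related_title` (`post_title`);
ALTER TABLE `wp_posts` ADD FULLTEXT KEY `crp_related_content` (`post_content`);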

Magento Error Log - 'core_store' doesn't exist in /mysite/lib/Zend/Db/Statement/Pdo.php

A similar question has been asked at:
Magento - Base Table core_file_storage Doesn't exist
However, it refers to the table 'core_file_storage' and mine refers to 'core_store'. I tried to add a comment to that existing question but I do not have enough reputation points to do so, and I could not figure out how to attach my question to it. I am sorry if creating a new question is the wrong thing to do, but the existing answers do not solve my problem. Please tell me if there is an official way to add to an existing question and I will do it, although my question is slightly different, so maybe a fresh question is warranted.
Question:
I have very basic knowledge of files and databases, and this is my first time dealing with the error log file. Basically, my website works fine on my live server, but when I try to move it to my local server (MAMP) I get an error and my site won't work. I looked at the error log and it says the following:
[14-Apr-2014 16:50:01 UTC] PHP Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[42S02]: Base table or view not found: 1146 Table 'mysite.core_store' doesn't exist' in /home/mysite/lib/Zend/Db/Statement/Pdo.php:228
Stack trace:
#0 /home/mysite/lib/Zend/Db/Statement/Pdo.php(228): PDOStatement->execute(Array)
#1 /home/mysite/lib/Varien/Db/Statement/Pdo/Mysql.php(110): Zend_Db_Statement_Pdo->_execute(Array)
#2 /home/mysite/lib/Zend/Db/Statement.php(300): Varien_Db_Statement_Pdo_Mysql->_execute(Array)
#3 /home/mysite/lib/Zend/Db/Adapter/Abstract.php(479): Zend_Db_Statement->execute(Array)
#4 /home/mysite/lib/Zend/Db/Adapter/Pdo/Abstract.php(238): Zend_Db_Adapter_Abstract->query('SELECT `main_ta...', Array)
#5 /home/mysite/lib/Varien/Db in /home/mysite/lib/Zend/Db/Statement/Pdo.php on line 234
This error repeats itself throughout my error log.
My site runs on Magento with a Magento theme. It was working fine before this, and I had previously had no problems moving it to my local server (MAMP). The only changes I think could have caused this issue were made while I was trying to speed up my site.
I did this by removing images I no longer use from the website folder, and by removing Magento stores that came with the theme I purchased but don't use. The fact that the error mentions 'core_store' suggests to me that it could be related to the stores I removed; however, the site continued to work on my live server after removing them. The reason I think it could also be related to the removed images is that, on the other question, someone answered saying "the 'core_file_storage' tables are used for storing uploaded images for each product".
I have searched Google for information on what the 'core_store' table does in Magento, but all the results are about 'core_store' problems rather than explanations of what it is. If anyone could tell me what the 'core_store' table does in Magento, maybe I could provide more information on the problem.
Thanks
I got the same issue and resolved it with the queries below.
First, if you do not yet have core_directory_storage, run:
CREATE TABLE `core_directory_storage` (
    `directory_id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
    `name` VARCHAR(255) NOT NULL DEFAULT '',
    `path` VARCHAR(255) NOT NULL DEFAULT '',
    `upload_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    `parent_id` INT(10) UNSIGNED NULL DEFAULT NULL,
    PRIMARY KEY (`directory_id`),
    UNIQUE INDEX `IDX_DIRECTORY_PATH` (`name`, `path`),
    INDEX `parent_id` (`parent_id`),
    CONSTRAINT `FK_DIRECTORY_PARENT_ID` FOREIGN KEY (`parent_id`) REFERENCES `core_directory_storage` (`directory_id`) ON UPDATE CASCADE ON DELETE CASCADE
) COMMENT='Directory storage' COLLATE='utf8_general_ci' ENGINE=InnoDB ROW_FORMAT=DEFAULT;
Then, run:
CREATE TABLE `core_file_storage` (
    `file_id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
    `content` LONGBLOB NOT NULL,
    `upload_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    `filename` VARCHAR(255) NOT NULL DEFAULT '',
    `directory_id` INT(10) UNSIGNED NULL DEFAULT NULL,
    `directory` VARCHAR(255) NULL DEFAULT NULL,
    PRIMARY KEY (`file_id`),
    UNIQUE INDEX `IDX_FILENAME` (`filename`, `directory`),
    INDEX `directory_id` (`directory_id`),
    CONSTRAINT `FK_FILE_DIRECTORY` FOREIGN KEY (`directory_id`) REFERENCES `core_directory_storage` (`directory_id`) ON UPDATE CASCADE ON DELETE CASCADE
) COMMENT='File storage' COLLATE='utf8_general_ci' ENGINE=InnoDB ROW_FORMAT=DEFAULT;
core_store is a very simple table, and is set up by Magento during the installation process like many other tables. It assigns website/store ID numbers and names to your admin and frontend area(s).
You said you're trying to move your store to a local server. What often happens in cases like this is that your Magento local.xml file (located in app/etc of your Magento root) needs to be updated to reflect your new database username, password, and table prefix. Magento pulls the information from this file to connect to your database. (As an aside, always make sure this file can't be accessed publicly.)
Can you browse your local database to verify that core_store does indeed exist?
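A quick sketch of how to check that from the MySQL side (assuming the local database is called mysite, as in the error message):

USE mysite;
-- Does the table exist at all in the imported database?
SHOW TABLES LIKE 'core_store';
-- If it does, this lists the websites/stores Magento knows about.
SELECT * FROM core_store;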

Using Laravel database sessions with SQL Server

I'm using Laravel on top of SQL Server and I'm trying to store the PHP sessions in the database. I have created the sessions table per http://laravel.com/docs/session#database-sessions, but I am getting the following error when loading a page:
PDOException was thrown when trying to read the session data: SQLSTATE[22001]: [Microsoft][SQL Server Native Client 11.0][SQL Server]String or binary data would be truncated.
Update:
I fixed this by creating the table manually:
CREATE TABLE portal_sessions (
    id VARCHAR(255) PRIMARY KEY NOT NULL,
    last_activity INT NOT NULL,
    payload TEXT NOT NULL
);
Make sure that your payload column is large enough to hold all your session data. I'd suggest making it VARCHAR(MAX).
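A minimal sketch of widening that column on an existing table (assuming the portal_sessions table above; use NVARCHAR(MAX) instead if you need Unicode payloads):

-- Widen the payload column so serialized session data is never truncated.
ALTER TABLE portal_sessions ALTER COLUMN payload VARCHAR(MAX) NOT NULL;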

SQLite syntax not compatible with MySQL?

I'm using PDO and trying to make my application support both MySQL and SQLite, but in SQLite I get this error when I try to import my database schema:
SQLSTATE[HY000]: General error: 1 near "AUTO_INCREMENT": syntax error
The query looks like this:
CREATE TABLE events (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    name VARCHAR(32) NOT NULL,
    title VARCHAR(64) NOT NULL,
    description LONGTEXT,
    starttime DATETIME DEFAULT '0000-00-00 00:00:00',
    PRIMARY KEY(id),
    KEY name(name)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
(and it works in a MySQL database.)
I don't understand what the problem is here. Shouldn't both database systems be compatible?
http://www.sqlite.org/autoinc.html
In SQLite it's called AUTOINCREMENT, not AUTO_INCREMENT
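For illustration, a sketch of the same table in SQLite's dialect; note that AUTOINCREMENT is only allowed on an INTEGER PRIMARY KEY column, and the ENGINE/CHARSET clauses have no SQLite equivalent:

CREATE TABLE events (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name VARCHAR(32) NOT NULL,
    title VARCHAR(64) NOT NULL,
    description TEXT,
    starttime DATETIME DEFAULT '0000-00-00 00:00:00'
);
CREATE INDEX events_name ON events (name);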
They should be compatible as regards the ANSI SQL standard, and all SQL databases should adhere to that. However, auto-increment is not part of that standard but an extra feature implemented by some databases (including MySQL). Not all databases provide that feature, and those that do may provide it in a different manner or with different syntax.
AUTO_INCREMENT is MySQL-specific. SQLite apparently has a similar thing, AUTOINCREMENT.
Unfortunately, though SQL should be a standard, each database implementation is different and has its own peculiarities, so you have to adjust your query to make it work on SQLite.
No, they support a completely different set of features. The most significant difference is that SQLite uses dynamic data types whereas MySQL uses static data types, but there are many other differences too.
They do however both support a common subset of SQL, so it is possible to write some simple SQL statements that will work in both systems.
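As a sketch of that common subset, here is a stripped-down version of the schema above that parses in both engines (auto-increment behaviour still has to be handled per dialect, so it is omitted):

-- No ENGINE/CHARSET clause, no inline KEY, no AUTO_INCREMENT/AUTOINCREMENT.
CREATE TABLE events (
    id INTEGER NOT NULL,
    name VARCHAR(32) NOT NULL,
    title VARCHAR(64) NOT NULL,
    description TEXT,
    starttime DATETIME,
    PRIMARY KEY (id)
);
CREATE INDEX events_name ON events (name);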
