MySQL table from phpMyAdmin's tracking reports (PHP)

Using phpMyAdmin I can track a certain table's transactions (new inserts, deletes, etc.), but is it possible to convert or export that tracking data to a SQL table that can be imported into my site using PHP?

While it is not exactly what I was looking for, I found a way to do it for the time being. One of the tables in phpMyAdmin's own configuration database is called pma__tracking, and it holds a record of all tables being tracked. One of its columns is data_sql, a longtext column that stores each report (in ascending order, which is a bit annoying) in the following format:
# log date username
data definition statement
Just adding this for future reference.
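If you want to pull those reports out with a query, here is a minimal sketch; the configuration-storage database name (phpmyadmin) and the tracked table name are assumptions that depend on your setup:

SELECT db_name, table_name, version, data_sql
FROM phpmyadmin.pma__tracking
WHERE table_name = 'my_table';   -- 'my_table' is a hypothetical tracked table

The data_sql value then still has to be split on the "# log date username" header lines to recover the individual statements.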

Related

MySQL table creation & index for huge table

I have a peculiar situation that brings me to you for advice, as this is the first time in quite a while where I don't have a clue where to start.
I have a large database of records with a single table that contains about 700k rows. The table is mostly text strings with a lot of indexes, as there's many ways to interface with the data on our website.
I am attempting to build a single tool that sounds very simple:
In this table there is a column called business_codes.
This column contains a series of two-digit strings separated by a tilde ("~").
This column cannot be NULL, so it always holds at least one two-digit string for each record.
We have a job that updates these records directly from our vendor each hour via cron & some PHP.
The tool I am trying to create will pull the strings for each record, then cross-reference each one with all the other records in the database and return the number of times it was used.
For instance: you log in to the site and check this tool.
Your own business_codes in the database are AB~CD~EF
I want the tool to sort through each of your business_codes and output suggested business_codes to add to your profile based on the use of other records with the same business code.
The tool would look up each of your own codes and report results like:
84% of records with AB in their profile also use HI
77% of records with CD in their profile also use LM
This is absolute madness. My first stab at it was to create a metadata table and import the codes into the new table individually, without the ~.
I can write the PHP to display all of this; my question is about the structure of the table (if a table even needs to be created at all). I am at a loss on how to even approach this issue.
Any help would be appreciated.
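For what it's worth, the metadata-table idea described above could be sketched like this; the table and column names are hypothetical and this is only an outline of the approach, not a tested solution:

CREATE TABLE record_codes (
  record_id INT NOT NULL,           -- id of the row in the main 700k-row table
  code CHAR(2) NOT NULL,            -- one two-digit code, split out of business_codes
  PRIMARY KEY (record_id, code),
  KEY idx_code (code, record_id)    -- lets the co-occurrence query scan by code
);

-- "X% of records with AB also use ...":
SELECT b.code,
       100 * COUNT(*) / (SELECT COUNT(*) FROM record_codes WHERE code = 'AB') AS pct
FROM record_codes a
JOIN record_codes b ON b.record_id = a.record_id AND b.code <> a.code
WHERE a.code = 'AB'
GROUP BY b.code
ORDER BY pct DESC;

Repeating that query for each code in the visitor's profile (AB, CD, EF) gives the suggestion lists in the example output.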

How to update all records of all tables in database?

I have a table with a lot of records (could be more than 500 000 or 1 000 000).
I want to update some common columns with the same field name in all tables throughout the database.
I know the traditional way is to write separate queries against individual tables, but not one query that updates all records of all tables.
What is the most efficient way to do this in SQL, without using some dialect-specific features, so it works everywhere (Oracle, MSSQL, MySQL, Postgres etc.)?
ADDITIONAL INFO: There are no calculated fields. There are indexes. So far I have used generated SQL statements that update the tables row by row.
(This sounds like the classic case for normalizing that 'column'.)
Anyway... No. There is no single query to locate that column across all tables, then perform an UPDATE on each of the tables.
In MySQL, you can use the table information_schema.COLUMNS to locate all the tables containing a particular named column. With such a SELECT, you can generate (using CONCAT(), etc.) the desired UPDATE statements. But then you need to run them manually (via copy and paste).
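A minimal sketch of that generator; the column name (status), the new value, and the database name (mydb) are placeholders for whatever you are actually updating:

SELECT CONCAT('UPDATE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
              '` SET `status` = 0;') AS stmt
FROM information_schema.COLUMNS
WHERE COLUMN_NAME = 'status'
  AND TABLE_SCHEMA = 'mydb';

Run the SELECT, review the generated statements, then paste back only the ones that should really run.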
Granted, you could probably write a Stored Procedure to wrap that into a single call, but that is too risky. What if some other table has the same column name, but should not be updated?
Example of building ALTERs to change tables' Engines: http://mysql.rjweb.org/doc.php/myisam2innodb#generating_alters
Example of using an SP to "pivot" rows to columns, complete with executing the generated code: http://mysql.rjweb.org/doc.php/pivot
As for common code across multiple vendors -- forget it! Virtually every syntax needs some amount of tweaking.

How can I compare a mysql table between two databases and update the differences efficiently?

Here is the setup: I have multiple online stores that I would like to run off the same product database. Currently they are all separate, so updating anything requires going through and copying products over, which is a giant pain. What I would like to do is create a master product database that each site compares its own database against every night, making updates accordingly.
The idea is one master database of products that will be updated a few times a day, and then say at 2:00 AM, a cron job will run pulling the updates to the individual websites.
Just a few more details on the database: there is one table, products, that needs to be compared, but it also needs to look at the table prodcuts_site_status to determine the value of each product's status for each given site, so I can't simply dump the master table and re-import it into the site databases.
Creating a PHP script to go row by row, comparing and updating, would be easy enough, but I was hoping a more elegant/efficient solution exists in MySQL. Any suggestions?
Thanks!
To sum up, you could try three different methods:
use SELECT ... INTO OUTFILE and then LOAD DATA INFILE, as described in "MySQL Cross Server Select Query" (a sketch follows this list)
use the replication approach described in "Perl: How to copy/mirror remote MYSQL table(s) to another database? Possibly different structure too?"
use a FEDERATED storage engine to join tables from different servers: http://dev.mysql.com/doc/refman/5.0/en/federated-storage-engine.html
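A sketch of the first method; the file path is hypothetical, the products table name is taken from the question, and the file still has to be moved between servers by some outside means (scp, a shared mount, etc.):

-- on the master server:
SELECT * INTO OUTFILE '/tmp/products.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
FROM products;

-- on each site server, after copying the file over:
LOAD DATA INFILE '/tmp/products.csv'
  REPLACE INTO TABLE products
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n';

The REPLACE keyword overwrites rows whose primary key already exists, which fits the nightly-pull idea; the per-site status logic would still need separate handling.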

Check for changes in database schema and update

At our company we have a business solution which includes CMS, CRM and several other systems.
These are installed in several domains, for each of our clients.
The systems are still in development, so new tables and fields are added to the database.
Each time we want to release a new version to our clients, I have to go through their databases and add the new fields and tables manually.
Is there a way this could be done automatically (a script, maybe, that detects the new fields and tables and adds them)?
We are using PHP and MySQL.
We would like to avoid backing up the clients' data, dropping the database tables, running the SQL that creates all the tables (including the new ones), and then re-inserting the customers' data. Is this possible?
Toad for MySQL
DB Extract, Compare-and-Search Utility — Lets you compare two MySQL databases, view the differences, and create the script to update the target.
What you are looking for is
ALTER TABLE `xyz` ADD `new_column` INT(10) NOT NULL DEFAULT '0';
or, if you want to get rid of a column,
ALTER TABLE `xyz` DROP `new_column`;
Put all table edits into an update.php file, then either run it once manually and delete it, or try to SELECT new_column once and update the database when it is not present.
Or, what I do: I have a settings field "software version" and use it as a trigger to update my tables.
But since you have to install the new scripts anyway, you can just run the update manually.
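A sketch of the "SELECT once and update when it's not present" variant, done through information_schema instead of catching an error; the database, table, and column names are hypothetical:

SELECT COUNT(*)
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'clientdb'
  AND TABLE_NAME = 'xyz'
  AND COLUMN_NAME = 'new_column';

-- if the count is 0 the migration has not run yet, so apply it:
ALTER TABLE `xyz` ADD `new_column` INT(10) NOT NULL DEFAULT '0';

The update.php script would run the SELECT first and only issue the ALTER when the column is missing.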

Archive MySQL data using PHP every week

I have a MySQL DB that receives a lot of data from a source once every week, on a certain day of the week at a given time (about 1.2 million rows), and stores it in, let's call it, the "live" table.
I want to copy all the data from "live" table into an archive and truncate the live table to make space for the next "current data" that will come in the following week.
Can anyone suggest an efficient way of doing this? I am really trying to avoid INSERT INTO archive_table SELECT * FROM live. I would like the ability to run this archiver using PHP, so I can't use Maatkit. Any suggestions?
EDIT: Also, the archived data needs to be readily accessible. Since every insert is timestamped, if I want to look for the data from last month, I can just search for it in the archives.
The sneaky way:
Don't copy records over. That takes too long.
Instead, just rename the live table out of the way and recreate it:
RENAME TABLE live_table TO archive_table;
CREATE TABLE live_table (...);
It should be quite fast and painless.
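If even a brief moment without a live table is a concern, the swap can be made atomic by creating the replacement first and renaming both tables in one statement; the archive table name here is hypothetical:

CREATE TABLE live_new LIKE live_table;                        -- clones schema and indexes
RENAME TABLE live_table TO archive_2024w01, live_new TO live_table;

RENAME TABLE renames everything in the list atomically, so queries never see the live table missing.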
EDIT: The method I described works best if you want an archive table per rotation period. If you want to maintain a single archive table, you might need to get trickier. However, if you just want to run ad-hoc queries on historical data, you can probably just use UNION.
If you only wanted to save a few periods worth of data, you could do the rename thing a few times, in a manner similar to log rotation. You could then define a view that UNIONs the archive tables into one big honkin' table.
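A sketch of that view, assuming a few hypothetical archive tables with identical structure:

CREATE OR REPLACE VIEW archive_all AS
  SELECT * FROM archive1
  UNION ALL SELECT * FROM archive2
  UNION ALL SELECT * FROM archive3;

UNION ALL skips the duplicate-elimination pass that plain UNION does, which matters with a million-plus rows per table.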
EDIT2: If you want to maintain auto-increment stuff, you might hope to try:
RENAME TABLE live TO archive1;
CREATE TABLE live (...);
ALTER TABLE LIVE AUTO_INCREMENT = (SELECT MAX(id) FROM archive1);
but sadly, that won't work. However, if you're driving the process with PHP, that's pretty easy to work around.
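One way to work around it without leaving SQL is to read the maximum id first and build the ALTER with a literal value via a prepared statement (AUTO_INCREMENT cannot take a subquery directly); the table and column names match the example above:

SELECT MAX(id) + 1 INTO @next FROM archive1;
SET @ddl = CONCAT('ALTER TABLE live AUTO_INCREMENT = ', @next);
PREPARE stmt FROM @ddl;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

From PHP it is the same two steps: SELECT the max, then issue the ALTER with the number interpolated into the statement.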
Write a script to run as a cron job to:
Dump the archive data from the "live" table (this is probably more efficient using mysqldump from a shell script)
Truncate the live table
Modify the INSERT statements in the dump file so that the table name references the archive table instead of the live table
Append the archive data to the archive table (again, could just import from dump file via shell script, e.g. mysql dbname < dumpfile.sql)
This would depend on what you're doing with the data once you've archived it, but have you considered using MySQL replication?
You could set up another server as a replication slave, and once all the data has replicated, do your delete or truncate with SET sql_log_bin = 0 in that session first, to avoid that statement also being replicated.
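A sketch of that non-replicated cleanup, run in the session doing the truncate on the master (changing sql_log_bin requires the SUPER privilege and only affects the current session):

SET sql_log_bin = 0;        -- this session's statements stop going to the binary log
TRUNCATE TABLE live_table;  -- the slave keeps its copy, since this never reaches the binlog
SET sql_log_bin = 1;        -- resume normal logging

live_table is a placeholder for whatever table is being cleared.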
