How to get table structures from a .frm file using PHP?

Say I have created a table named pb00k via phpMyAdmin, whose SQL is as below:
CREATE TABLE `pb00k` (
`k3y` int(255) NOT NULL AUTO_INCREMENT,
`n4m3` text NOT NULL,
`numb3r` text NOT NULL,
PRIMARY KEY (`k3y`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
This generates a pb00k.frm file in mysql/data/ which contains the table structure. Now I want to get this structure from that file using PHP.
Is it possible?

Yes, it's possible to recover at least part of the information. (For the benefit of other readers: the poser of the question is already aware that there are easier ways to get the column metadata.)
The challenge is that .frm files are not well documented, because the general community rarely needs to decipher them. The format of the files may also vary with the operating system.
However, by viewing the files with hexdump or a similar utility you can see partly what is going on. Then you are better informed to read the files in a PHP program and decode the raw binary data.
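For example, here is a minimal PHP sketch of such an inspection, dumping the first bytes of a .frm file in hex (the filename is just a placeholder; point it at your own file):
<?php
// Hex-dump the first 64 bytes of a .frm file, 16 bytes per row,
// so you can eyeball header fields such as io_size (offset 6)
// and rec_length (offset 10) used in the code below.
$bytes = file_get_contents("pb00k.frm", false, null, 0, 64);
foreach (str_split($bytes, 16) as $i => $chunk) {
    $hex = implode(" ", str_split(bin2hex($chunk), 2));
    printf("%04x  %s\n", $i * 16, $hex);
}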
I did this as an exercise some time back, and I was able to recover the number of columns, the column names and the column types.
Below is a sample to show how to extract column names. My .frm was for a table named "stops", but you can substitute your own .frm.
<?php
$fileName = "stops.frm";
// read the file into a string of raw bytes
//---------------------------------
$handle = fopen($fileName, "rb");
$contents = fread($handle, filesize($fileName));
fclose($handle);
$fileSize = strlen($contents); // save the filesize for later printing
// locate the column data near the end of the file
//-------------------------------------------------
$index = 6; // location of io_size
$io_size_lo = ord($contents[$index]);
$io_size_hi = ord($contents[$index + 1]);
$io_size = $io_size_hi * 0x100 + $io_size_lo; // read IO_SIZE
$index = 10; // location of record length
$rec_len_lo = ord($contents[$index]);
$rec_len_hi = ord($contents[$index + 1]);
$rec_len = $rec_len_hi * 0x100 + $rec_len_lo; // read rec_length
// this formula uses io_size and rec_length to get to the column data
$colIndex = (((($io_size + $rec_len) / $io_size) + 1) * $io_size) + 258;
$colIndex -= 0x3000; // this is not documented but seems to work!
// find the number of columns in the table
//-------------------------------------------------
echo PHP_EOL . "Col data at 0x" . dechex($colIndex) . PHP_EOL;
$numCols = ord($contents[$colIndex]);
// extract the column names
//--------------------------------------
$colNameIndex = $colIndex + 0x50; // 0x50 by inspection
echo "Col names at 0x" . dechex($colNameIndex) . PHP_EOL;
$cols = array();
for ($col = 0; $col < $numCols; $col++) {
    $nameLen = ord($contents[$colNameIndex++]); // name length is at the 1st position
    $cols[]['ColumnName'] = substr($contents, $colNameIndex, $nameLen - 1); // read the name (length appears to include a trailing \0)
    $colNameIndex += $nameLen + 2; // skip ahead to the next name (2-byte gap after the \0)
}
print_r($cols);
This should get you started. I will add to this when I have time in the coming days if you think it is heading in the right direction.
EDIT: I updated the code so it should work for any table's .frm file. There is a free tool to recover MySQL tables (based on the InnoDB engine) available at https://github.com/twindb/undrop-for-innodb. Having read through the code and the associated blogs, they are not using the .frm files for recovery. The same table information is also stored in the InnoDB dictionary, and they use this to recover table formats etc.
There is also a way to read the content of .frm files, described here: https://twindb.com/how-to-recover-table-structure-from-frm-files-online/. However, they use MySQL itself to read the .frm files and recreate the tables from there.
There is also a package of utilities, found here https://www.mysql.com/why-mysql/presentations/mysql-utilities/, that contains a .frm reader. This was made by Oracle, who are the only people who know the format of the .frm files! The utilities are free, so you can download them.
Oracle publishes some information on the format of .frm files at https://dev.mysql.com/doc/internals/en/frm-file-format.html, but it is both incomplete and incorrect! See this previous Stack Exchange question:
https://dba.stackexchange.com/questions/208198/mysql-frm-file-format-how-to-extract-column-info
Now, after all that, if you still want to try to parse the .frm files yourself for fun or for learning, you need to be patient and spend time unravelling quite a complicated structure. If you want to keep trying, that is OK, but send me your .frm file (to sand_groper80#hotmail.com) so I can check it out, and I will send you some PHP code in a few days that will extract some additional information like datatypes and display sizes.

Related

Which method is better to load MySQL database?

So I have a script that reads a text file, organizes it into an array, then uses this code to loop through the data and insert it into the proper columns/rows inside the MySQL server:
$size = sizeof($str) / 14;
$x = 0;
$a=0; $b=1; $c=2; $d=3; $e=4; $f=5; $g=6; $h=7; $i=8; $j=9; $k=10; $l=11; $m=12; $n=13;
mysql_query('TRUNCATE TABLE scores');
do {
    $query = "INSERT INTO scores (serverid, resetid, rank, number, countryname, land, networth, tag, gov, gdi, protection, vacation, alive, deleted)
        VALUES ('$str[$a]','$str[$b]','$str[$c]','$str[$d]','$str[$e]','$str[$f]','$str[$g]','$str[$h]',
                '$str[$i]','$str[$j]','$str[$k]','$str[$l]','$str[$m]','$str[$n]')";
    mysql_query($query, $conn);
    $a+=14; $b+=14; $c+=14; $d+=14; $e+=14; $f+=14; $g+=14; $h+=14; $i+=14; $j+=14; $k+=14; $l+=14; $m+=14; $n+=14;
    $x++;
} while ($x != $size);
mysql_close($conn);
This code figures out how large the file is and loops through all 14 columns until it reaches the last row in the text file. Each time it is run it clears the DB and loads the new data (as intended).
My question is: is this a good way of doing it, or is there a faster, cleaner way to do the same thing as my code above?
Could I use LOAD DATA LOCAL INFILE '$myFile' INTO TABLE ranksfeed_temp FIELDS TERMINATED BY ',' to do the same job in a more efficient manner? What are your thoughts? I'm trying to make my code more efficient and fast.
LOAD DATA would be faster and more efficient for importing a character-separated file like CSV. LOAD DATA is optimized for importing large files into your MySQL table, whereas you are currently running one query per row from your text file, which is incredibly slow in execution.
Please pay attention to the fact that the LOCAL option is only for files placed on the client side of your MySQL server-client connection. Try to load the file from the machine that runs MySQL directly.
Disabling the keys on your table before inserting can give you extra speed while importing. Benchmark the import with keys disabled and enabled to compare the results; see the sketch below.
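A minimal sketch of that approach, assuming the ranksfeed_temp table from your question (connection details and file path are placeholders):
<?php
// Placeholder connection details; substitute your own.
$conn = mysqli_init();
// LOCAL INFILE must be enabled on the client (and permitted by the server).
$conn->options(MYSQLI_OPT_LOCAL_INFILE, true);
$conn->real_connect('localhost', 'user', 'pass', 'mydb');
// Rebuilding the index once after the bulk load is much cheaper
// than maintaining it on every inserted row.
$conn->query('ALTER TABLE ranksfeed_temp DISABLE KEYS');
$conn->query('TRUNCATE TABLE ranksfeed_temp');
// LOCAL means the file lives on the machine running this PHP script.
$conn->query("LOAD DATA LOCAL INFILE 'scores.txt'
    INTO TABLE ranksfeed_temp
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'");
$conn->query('ALTER TABLE ranksfeed_temp ENABLE KEYS');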

How to Import MovieLens Data to MySQL

How can I import UTF-8 data from MovieLens into MySQL?
I got the data from http://grouplens.org/datasets/movielens/ and, for my recommender system thesis, I just want the 100K and Tag Genome data.
I've been searching on Google and in this forum and I haven't found anything about importing these files into MySQL. I'm currently using phpMyAdmin to manage MySQL, so I'd appreciate it if anybody knows how to easily import those files into MySQL.
I'm fine if you recommend iterating through the data one by one using PHP, but please explain the code.
You'll need to write some custom code to import all of their data into MySQL. Dumbest answer on Stack Overflow ever, right?
So they provide a set of flat files, each described in the README.
README
allbut.pl
mku.sh
u.data
u.genre
u.info
u.item
u.occupation
u.user
u1.base
u1.test
u2.base
u2.test
u3.base
u3.test
u4.base
u4.test
u5.base
u5.test
ua.base
ua.test
ub.base
ub.test
In a nutshell:
Make your own database and tables in MySQL.
Programatically open a file and parse each line to SQL.
Import the SQL into MySQL.
???
Profit!
Yeah, I know I still haven't really told you anything; let's do one, and you can hopefully do the others.
I'll do u.genre, because I'm lazy and it is easy.
Make a new table; I'll assume you know how to make tables and such.
u.genre has two things: a genre and an id.
unknown|0
Action|1
...etc...
So your table should have two fields.
You'll use two data types: https://dev.mysql.com/doc/refman/5.7/en/data-types.html
id - unsigned TINYINT
TINYINT unsigned is 0 to 255
genre - VARCHAR(20)
VARCHAR(20) is up to 20 characters; their longest is "Documentary", so that'll give you a bit of extra room if they add a new one.
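For reference, a minimal sketch of creating that table from PHP using PDO (the database name and credentials are assumptions; adjust to your setup):
<?php
// Placeholder credentials and database name; substitute your own.
$pdo = new PDO("mysql:host=localhost;dbname=movielens;charset=utf8", "user", "pass");
$pdo->exec("CREATE TABLE IF NOT EXISTS genre (
    id TINYINT UNSIGNED NOT NULL,
    genre VARCHAR(20) NOT NULL,
    PRIMARY KEY (id)
)");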
Open the file and get the contents: https://secure.php.net/manual/en/function.file-get-contents.php
$filecontents = file_get_contents("u.genre");
Now let's split up the file by line: https://secure.php.net/manual/en/function.explode.php
$genres = explode("\n", $filecontents);
Now we'll loop through the $genres using foreach and explode again: https://secure.php.net/manual/en/control-structures.foreach.php
foreach ($genres as $row) {
    // pad to two elements so malformed lines don't emit notices
    list($genre, $id) = array_pad(explode("|", $row), 2, "");
    # more here later
}
Now let's just output SQL, skipping the line if either field is empty. Note the quotes around the string value in the generated SQL:
if ($genre != "" && $id !== "") {
    print "INSERT INTO genre (genre, id) VALUES ('$genre', $id);\n";
}
Put it all together...
<?php
$filecontents = file_get_contents("u.genre");
$genres = explode("\n", $filecontents);
foreach ($genres as $row) {
    list($genre, $id) = array_pad(explode("|", $row), 2, "");
    if ($genre != "" && $id !== "") {
        $sql = "INSERT INTO genre (genre, id) VALUES ('$genre', $id);\n";
        print $sql;
        # Insert each into your DB here.
    }
}
?>
Save it and run it from the command line, or put it in a browser for no good reason.
There are too many resources out there showing how to insert data into MySQL, so I'll leave it at this. Everyone's database setup is a bit different, so writing it up for my particular setup won't help you.
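That said, if you would rather insert straight into the database instead of printing SQL, here is a minimal sketch using PDO prepared statements (the connection details are assumptions); bound parameters also spare you from the quoting issues above:
<?php
// Placeholder credentials; substitute your own.
$pdo = new PDO("mysql:host=localhost;dbname=movielens;charset=utf8", "user", "pass");
$stmt = $pdo->prepare("INSERT INTO genre (genre, id) VALUES (?, ?)");
foreach (explode("\n", file_get_contents("u.genre")) as $row) {
    $parts = explode("|", $row);
    if (count($parts) == 2 && $parts[0] != "" && $parts[1] !== "") {
        // Bound parameters handle quoting/escaping for us.
        $stmt->execute(array($parts[0], (int) $parts[1]));
    }
}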

MySQL Database & PHP - Match row with files and append extension

I've got the following problem:
I've taken over an MS-SQL database from my superior, who developed it. Sadly the database is in really bad shape for development.
The database has already been "converted" to MySQL by me and the data imported. Now the problem is, there's a table "hotels" which had columns named "image1, image2, image3" etc. up to image24. I removed them from the table and created a new table called hotel_images where the images are assigned to a hotel. Now to describe my problem:
The imported data contained strings for each image such as "007593-20110809-145433-01" but the extension was missing. All the images were placed in the same directory (there are about 4000) and only the string has been saved.
I already did a workaround function myself when pulling the data into the website, where I check file_exists and then return the different extensions (.BMP, .GIF, .JPG etc.), but I don't like this solution.
Is there any possibility for me to check all the strings in the table against the image folder and add the proper extension to the table when a file matches? It would be something like:
SELECT image from hotel_images (search for value in /images/) IF MATCH ALTER TABLE hotel_images set image = this + .extension
I would appreciate any advice!
Edit: It just came to my attention that I could do a dir listing from the folder to a text file and then match it against every string in the table, and if it matches, replace it - is that a possible solution?
You have to SELECT all rows (the names without extension).
Then, in PHP, loop over all the images to find whether each one exists in the folder, trying the possible extensions.
Then UPDATE the row with the existing filename, adding the extension...
With an example:
$extensions = array(".jpg", ".gif", ".bmp", ".png");
$images_query = mysql_query("SELECT id, image_name FROM hotel_images");
while ($image = mysql_fetch_array($images_query)) {
    foreach ($extensions as $ext) {
        // Try each candidate extension until the file is found
        if (file_exists("images/" . $image["image_name"] . $ext)) {
            // Update the row with the completed filename
            mysql_query("UPDATE hotel_images SET image_name = '" . $image["image_name"] . $ext . "' WHERE id = " . $image["id"]);
            break;
        }
    }
}
Are you searching for something like that?
It's not a tested script, I wrote it directly in the SO textarea ;)
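Building on the dir-listing idea from your edit, an alternative sketch (also untested; the images/ path is an assumption) is to scan the folder once with glob() and build a lookup map, which avoids hitting the filesystem once per extension per row:
<?php
// Map basename-without-extension => full filename, built in one directory scan.
$map = array();
foreach (glob("images/*.*") as $path) {
    $map[pathinfo($path, PATHINFO_FILENAME)] = basename($path);
}
$images_query = mysql_query("SELECT id, image_name FROM hotel_images");
while ($image = mysql_fetch_array($images_query)) {
    if (isset($map[$image["image_name"]])) {
        mysql_query("UPDATE hotel_images SET image_name = '" . $map[$image["image_name"]] . "' WHERE id = " . $image["id"]);
    }
}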

Optimizing code for inserting 27000*2 keys from a plain text file to DB

I need to insert data from a plain text file, exploding each line into 2 parts and then inserting them into the database. I'm doing it this way, but can this program be optimized for speed?
The file has around 27000 lines of entries.
DB structure [unique key (ext,info)]
ext [varchar]
info [varchar]
code:
$string = file_get_contents('list.txt');
$file_list = explode("\n", $string);
$entry = 0;
$db = new mysqli('localhost', 'root', '', 'file_type');
$sql = $db->prepare('INSERT INTO info (ext, info) VALUES (?, ?)');
$j = count($file_list);
for ($i = 0; $i < $j; $i++) {
    $data = explode(' ', $file_list[$i], 2);
    $sql->bind_param('ss', $data[0], $data[1]);
    $sql->execute();
    $entry++;
}
$sql->close();
echo $entry . ' entry inserted !<hr>';
If you are sure that the file contains unique pairs of ext/info, you can try disabling the keys for the import:
ALTER TABLE `info` DISABLE KEYS;
And after import:
ALTER TABLE `info` ENABLE KEYS;
This way the unique index will be rebuilt once for all records, not every time something is inserted.
To increase speed even more, you should change the format of this file to be CSV compatible and use MySQL's LOAD DATA to avoid parsing every line in PHP.
When there are multiple items to be inserted you usually put all the data in a CSV file, create a temporary table with columns matching the CSV, do a LOAD DATA [LOCAL] INFILE, and then move that data into the destination table. But as far as I can see you don't need much additional processing, so you can even treat your input file as a CSV without any additional trouble.
$db->query('CREATE TEMPORARY TABLE _tmp_info (ext VARCHAR(255), info VARCHAR(255))');
$db->query("LOAD DATA LOCAL INFILE '{$filename}' INTO TABLE _tmp_info
    FIELDS TERMINATED BY ' '
    LINES TERMINATED BY '\n'"); // $filename = 'list.txt' in your case; note mysqli uses query(), not exec()
$db->query('INSERT INTO info (ext, info) SELECT t.ext, t.info FROM _tmp_info t');
You can run a COUNT(*) on the temporary table after that to show how many records were there.
If you have a large file that you want to read in, I would not use file_get_contents. Using it forces the interpreter to store the entire contents in memory all at once, which is a bit wasteful.
The following is a snippet taken from here:
$file_handle = fopen("myfile", "r");
while (!feof($file_handle)) {
    $line = fgets($file_handle);
    echo $line;
}
fclose($file_handle);
This is different in that all you keep in memory from the file at any single instant is a single line (not the entire contents of the file), which in your case will probably lower the run-time memory footprint of your script. In your case, you can use the same loop to perform your INSERT operation, as sketched below.
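A minimal sketch combining that line-by-line read with the prepared statement from your original code (connection details copied from your question):
<?php
$db = new mysqli('localhost', 'root', '', 'file_type');
$sql = $db->prepare('INSERT INTO info (ext, info) VALUES (?, ?)');
$entry = 0;
$file_handle = fopen('list.txt', 'r');
while (($line = fgets($file_handle)) !== false) {
    $data = explode(' ', rtrim($line, "\r\n"), 2);
    if (count($data) == 2) {
        // Only one line of the file is held in memory at a time.
        $sql->bind_param('ss', $data[0], $data[1]);
        $sql->execute();
        $entry++;
    }
}
fclose($file_handle);
$sql->close();
echo $entry . ' entries inserted!<hr>';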
If you can, use something like Talend. It's an ETL program, simple and free (it has a paid version).
Here is the magic solution [3 seconds vs 240 seconds]:
$db->query('ALTER TABLE info DISABLE KEYS');
$db->autocommit(FALSE);
// ... do the inserts here ...
$db->commit();
$db->query('ALTER TABLE info ENABLE KEYS');

Editing Data in an XLS with PHP then importing into MySQL

I am trying to import an XLS file into PHP, where I can then edit the information and import it into mySQL. I have never done anything related to this, so I am having a hard time grasping how to approach it.
I have looked at a few open source projects:
PHP Excel Reader
ExcelRead
PHPExcel
None of these options perfectly fits what I want to do, or maybe I just haven't gone deep enough into the documentation.
There are some things that need to be taken into consideration. The XLS file cannot be converted into any other file format. This is being made for ease of access for nontechnical users. The XLS file is a report generated on another website that will have the same format (columns) every time.
For example, every XLS file will have the same set of columns (this would be A1):
*ID |Email |First Name |Last Name |Paid |Active |State |Country|*
But, there are more columns in the XLS file than what is going to be imported into the DB.
For example, the columns that are being imported (this would be A1):
*ID |Email |First Name |Last Name |Country*
I know of two ways to edit the data: A. use something like PHPExcel to read in the data, edit it, then send it to the DB, or B. use something like PHPExcel to convert the XLS to CSV, do a raw import into a temp table, edit the data, and insert it into the old table.
I have read a lot of the PHPExcel documentation, but it doesn't have anything on importing into a database, and I don't really even know where to start with editing the XLS before or after importing.
I have googled a lot of keywords and mostly found results on how to read/write/preview XLS. I am looking for advice on the best way of doing all of these things in the fewest and simplest steps.
See this article on using PHP-ExcelReader, in particular the short section titled "Turning the Tables".
Any solution you have will end up looking like this:
Read a row from the XLS (requires an XLS reader)
Modify the data from the row as needed for your database.
Insert modified data into the database.
You seem to have this fixation on "editing the data". This is just PHP: you get a value from the XLS reader, modify it with PHP code, then insert it into the database. There's no intermediate file, and you don't modify the XLS; it's just PHP.
This is a super-simple, untested example of the inner loop of the program you need to write. This is just to illustrate the general pattern.
$colsYouWant = array(1,2,3,4,8);
$sql = 'INSERT INTO data (id, email, fname, lname, country) VALUES (?,?,?,?,?)';
$stmt = $pdo->prepare($sql);
$sheet = $excel->sheets[0];
// the excel reader seems to index by 1 instead of 0: be careful!
for ($rowindex = 2; $rowindex <= $sheet['numRows']; $rowindex++) {
    $xlsRow = $sheet['cells'][$rowindex];
    $row = array();
    foreach ($colsYouWant as $colindex) {
        $row[] = $xlsRow[$colindex];
    }
    // now let's "edit the row"
    // trim all strings
    $row = array_map('trim', $row);
    // convert id to an integer
    $row[0] = (int) $row[0];
    // capitalize first and last name
    // (use mb_* functions if non-ASCII; I don't know the spreadsheet's charset)
    $row[2] = ucfirst(strtolower($row[2]));
    $row[3] = ucfirst(strtolower($row[3]));
    // do whatever other normalization you want to $row
    // insert into db:
    $stmt->execute($row);
}
