Brief background - I'm writing a special import manager to be run via cron. The basic operation is:
1. Delete all records from the table
2. Load data from a file or a string into the table
Not too difficult. However, because of the apocalyptic delete at the beginning of this sequence, I'd like to use a transaction so I can roll back if anything goes wrong during the process. The catch is that some of the tables updated by this process are MyISAM, which doesn't support transactions.
So the main question is: is there an easier way to detect the storage engine for a given table than doing a SHOW CREATE TABLE and then doing what little parsing it takes to get the storage engine? I.e., I'd like to avoid this extra query.
Secondly: I haven't actually tried this yet, as I'm still writing some other pieces that fit in - so perhaps Zend_Db simply ignores beginTransaction, commit, and rollback if they're not supported on the table in question?
Also, I'm not using Zend_Db_Table for this - just the adapter (Pdo_Mysql). Alternatively, I'm perfectly willing to use raw PDO if that somehow allows a more elegant solution.
(I'm not interested in using mysqlimport for this particular implementation, for a number of reasons I'm not going to get into, so let's just say it's not an option at all.)
I'd suggest solving your problem by renaming the original table and only deleting it after successful completion ;)
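Sketched as plain SQL (the table names are placeholders), the idea is to load into a fresh copy, swap it in atomically, and only drop the old data once everything has succeeded - this works for MyISAM too, since no transaction is involved:

CREATE TABLE import_new LIKE import;
-- load the file/string data into import_new here;
-- if anything fails, just DROP TABLE import_new and the
-- original table is untouched
RENAME TABLE import TO import_old, import_new TO import;  -- atomic swap
DROP TABLE import_old;  -- only after the load succeeded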
Don't know if this is still relevant for you, but what about this response:
How can I check MySQL engine type for a specific table?
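In short, the answer there queries information_schema instead of parsing SHOW CREATE TABLE output. A minimal sketch against raw PDO (assuming $pdo is an open connection; the database and table names are placeholders - note it is still one extra round trip, just with no string parsing):

$stmt = $pdo->prepare(
    'SELECT ENGINE FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = ? AND TABLE_NAME = ?'
);
$stmt->execute(array('your_database', 'your_table'));
$engine = $stmt->fetchColumn(); // e.g. "InnoDB" or "MyISAM"

if (strcasecmp($engine, 'InnoDB') === 0) {
    $pdo->beginTransaction(); // only worth starting when the engine honours it
}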
I need to create millions of logs using PHP and MySQL, or write them to Excel or PDF.
And I need to do this in the fastest way possible.
I tried the following method to insert the data:
$cnt = 200000;
for ($i = 1; $i <= $cnt; $i++)
{
    $sql = "INSERT INTO logs (log_1, log_2, time) VALUES ('abcdefgh.$i', 'zyxwvu.$i', NOW())";
    $query = mysql_query($sql); // run the statement on the current connection
}
But it's taking too much time to do the operation. Please help me if anybody knows a solution.
As per my understanding, you don't want error logs? You want to insert records into the database, Excel, and PDF, and those can be millions or billions of records, right?
Well, I had a similar problem months ago. There are several ways to do this, but I really don't know which is the quickest:
1.- Archive engine: use the ARCHIVE engine for the table; this engine was created to store big amounts of data, like logs.
2.- MongoDB: I haven't tested this database yet, but I've read a lot about it and it seems to work very well in these situations.
3.- Files: I considered this solution when I was trying to fix my problem, but it was the worst of all (for me at least, because I needed the data in a database to build some reports, so I would have had to create a daemon to parse the files and store them in the database).
4.- Database partitioning: this solution is compatible with the first one (and even with your current engine type); just check this link to create some partitions for your database.
FYI: My current solution is Archive engine + Partitions by month
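For illustration, that combination might look something like the sketch below (column names are made up to match the question; check the partitioning chapter of the MySQL manual for the exact syntax your version supports):

CREATE TABLE logs (
    log_1 VARCHAR(64),
    log_2 VARCHAR(64),
    time  DATETIME NOT NULL
) ENGINE=ARCHIVE
PARTITION BY RANGE (TO_DAYS(time)) (
    PARTITION p2012_01 VALUES LESS THAN (TO_DAYS('2012-02-01')),
    PARTITION p2012_02 VALUES LESS THAN (TO_DAYS('2012-03-01')),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);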
Hope it helps
So I have an old website which was coded over an extended period of time but has been inactive for three or so years. I have the full PHP source to the site, but the problem is I no longer have a backup of the database. I'm wondering what the best approach to recreating the database would be? It is a large site, so manually going through each PHP file and trying to keep track of which tables are referenced is no small task. I've tried googling for the answer but have had no luck. Does anyone know of any tools that are available to help extract this information from the PHP and at least give me the basis of a database skeleton? Otherwise, has anyone ever had to do this? Any tips to help me along and possibly speed up the process? It is a MySQL database I'm trying to recreate.
The way I would do it:
Write a stub of mysqli (or whatever interface was used to access the DB) to intercept all DB accesses.
Replace all DB accesses with your dummy version.
The basic idea is to emulate the DB so that the PHP code runs long enough to trigger the various DB accesses, which in turn will let you analyze the way the DB is built and used.
From within these dummy functions:
print the SQL code used
generate just enough dummy results to let the rest of the code run, based on the tables and fields mentioned in the query parameters and the PHP code that retrieves them (you won't learn much from a SELECT *, but you can see which fields the PHP code expects to get from it)
once you have understood enough of the DB structure, recreate the tables and let the original code work on them little by little
have the previous designer flogged to death for not having provided a way to recreate the DB programmatically
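A minimal sketch of such a dummy layer, assuming the legacy code calls the old mysql_* functions (the function name and log path are made up for illustration):

<?php
// Hypothetical stand-in for mysql_query(): log every statement the
// legacy code issues, then return a dummy value so execution can
// continue far enough to trigger the next query.
function dummy_query($sql)
{
    file_put_contents('/tmp/queries.log', $sql . "\n", FILE_APPEND);
    return true; // swap in a stub result object once you know what the caller expects
}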
There are currently two answers based on the information you provided.
1) You can't do this
PHP is a typeless language. You could scan your SQL statements to find field and table names, but the result will not be complete: if there is a SELECT * FROM table, you can't see the fields, so you then need to check where the PHP accesses the fields, maybe by name or by index. You can count yourself lucky if this is done by name, because then you can at least extract the field names. Even so, the data types will be missing. Also missing: which columns have indexes, what the primary keys are, constraints, etc.
2) Easy, yes you can!
Because your PHP uses a modern framework that contains an ORM, which created the database for you, the meta information is included in the PHP classes/design.
Just check the manual for how to recreate the database.
Okay, I know this is kind of a vague question. But I have a requirement where I need to list all the tables and columns which are not being used in the application. I could do a manual code search, but that would take so much time. There are 140 tables to check, and the maximum number of fields in a table is 90.
Right now I have started searching the code for table names, and I have created an Excel sheet with all the table names; when I find a table in the code I highlight it in green. So tables are a bit easier.
My question really is: is there a way to speed up this process? Or are there some methods/techniques that can be applied?
Thank you!
It all depends on your app size and coding technique.
If I had a large application, I would enable the full MySQL log (or hook into any database wrapper I might have, to log the queries), run the application, and then extract the information from the log. However, doing so just moves the problem: instead of worrying about capturing all the queries, you now need to ensure you run each and every line of your application code (so you can be sure that nothing escaped and that you have analyzed all the possibilities).
This is in fact called "code coverage analysis" and there are tools which will help you with that.
That said, I believe that manual analysis may be quicker for small applications.
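If you go the logging route, the general query log can be switched on at runtime (MySQL 5.1+; the file path here is an assumption, and you need privileges to set global variables):

SET GLOBAL general_log_file = '/tmp/all-queries.log';
SET GLOBAL general_log = 'ON';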
I suggest you build a script that performs the job. For example, you can obtain the table list in a database with a query like this:
show tables from YOUR_DATABASE;
More info: http://dev.mysql.com/doc/refman/5.0/en/show-tables.html
Then you can loop over your tables and check for fields using:
show columns from YOUR_TABLE;
More info: http://dev.mysql.com/doc/refman/5.0/en/show-columns.html
Finally, you can search (grep, for example) for your tables and fields in your code and write a log or something similar.
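Putting those pieces together, a hypothetical sketch (credentials, database name, and the code path are all placeholders; it assumes grep is available on the machine):

<?php
// List every table and column in the schema and grep the codebase for each name.
$db = new PDO('mysql:host=localhost;dbname=YOUR_DATABASE', 'user', 'pass');

foreach ($db->query('SHOW TABLES')->fetchAll(PDO::FETCH_COLUMN) as $table) {
    $names = array($table);
    foreach ($db->query("SHOW COLUMNS FROM `$table`")->fetchAll(PDO::FETCH_COLUMN) as $column) {
        $names[] = $column;
    }
    foreach ($names as $name) {
        $hits = array();
        // grep -rl lists the files that mention the identifier anywhere in the code
        exec('grep -rl ' . escapeshellarg($name) . ' /path/to/your/code', $hits);
        echo $name . ': ' . (count($hits) ? 'used' : 'NOT FOUND') . "\n";
    }
}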
I am currently working on an existing website that lists products; there are currently a little over 500 products.
The website has a text file for every product, and I want to add a search option. I'm thinking of reading all the text files and creating an XML document with the values once a day that can be searched.
The client indicated that they want to add products and are used to adding them via the text files. There might be over 5000 products in the future, so I think it's best to do this with MySQL. This means importing the current products and creating a CRUD page for products.
Does anyone have experience with a PHP website that does not use MySQL? Is it possible to keep adding text files and just index them once a day, even if it would mean having over 5000 products?
5000 seems like an amount that's still manageable to index with a daily cron job. As long as you don't plan on searching them in real time, it should work. It's not ideal, but it would work.
Yes, it is very much possible, though NOT advisable, to use files for this type of transaction.
It is also better to use XML instead of plain TXT files for the job; 5000 products, with whatever data is associated with them, might create problems in the future.
PS
Why not MySQL?
MySQL was made because file-based databases are slow and inaccurate.
Just use MySQL. If you want to keep your old TXT-based data, just build a simple script that imports each file one by one and creates corresponding tables in your SQL database.
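Such an import script could be as small as the hypothetical sketch below. It assumes one product per text file with one field per line, in the order name, price, description - adjust the parsing to the real file layout before running it:

<?php
// One-off import: read every product text file and insert it as a row.
$db = new PDO('mysql:host=localhost;dbname=YOUR_DATABASE', 'user', 'pass');
$db->exec('CREATE TABLE IF NOT EXISTS products (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255),
    price DECIMAL(10,2),
    description TEXT
)');

$insert = $db->prepare('INSERT INTO products (name, price, description) VALUES (?, ?, ?)');
foreach (glob('/path/to/products/*.txt') as $file) {
    // Pad with empty strings in case a file has fewer than three lines.
    $lines = array_pad(file($file, FILE_IGNORE_NEW_LINES), 3, '');
    $insert->execute(array($lines[0], $lines[1], $lines[2]));
}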
Good luck.
It's possible. However, if this is anything more than simply an online catalogue, then managing transaction integrity is horrendously difficult - and the fact that you're even asking the question implies that you are not in a good position to implement the kind of controls required. And as you've already discovered, it doesn't make for easy searching. (BTW: MySQL's fulltext indexing is a very blunt instrument - it's not a huge amount of effort to implement an effective search engine yourself, or there are excellent ones available off the shelf, e.g. mnogosearch.)
(As a coincidental point, why XML? It makes managing the data much more complicated than it needs to be.)
and create a crud page for products
Why? If the client wants to maintain the data via file uploads and you already need to port the data, then just use the same interface - where the data is stored is not relevant just now.
If there are issues with hosting + MySQL, then using SQLite gives most of the benefits (although it won't scale as well).
Since RedBean creates all columns by itself, what would happen if I don't need a field any more? Is there an easy way to remove it without deleting the table and losing all its data?
Can this be solved automatically, or how would RedBean react if I deleted the column manually?
Delete the table column in the usual way from your MySQL client (say, phpMyAdmin or SQLyog) or from the MySQL console.
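For example (table and column names are placeholders):

ALTER TABLE your_table DROP COLUMN your_column;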
RedBean can't get confused by this "external meddling" that you're worried about, because it runs on each PHP script execution and, to the best of my knowledge, carries no state across invocations. It's really just an abstraction over data storage.
Interestingly, the RedBean Wiki doesn't appear to talk about this sort of thing at all.