I need to create millions of log entries using PHP and MySQL, or write them to Excel or to PDF. And I need to do this in the fastest way possible.
I tried the following method to insert the data:
$cnt = 200000;
for ($i = 1; $i <= $cnt; $i++)
{
    // one INSERT statement per row; NOW() fills the time column
    $sql = "INSERT INTO logs (log_1, log_2, time) VALUES ('abcdefgh.$i', 'zyxwvu.$i', NOW())";
    $query = mysql_query($sql);
}
But it's taking too much time to complete the operation. Please help me if anybody knows a solution.
As per my understanding, you don't want error logs? You want to insert records into a database, Excel and PDF, and those can be millions or billions of records, right?
Well, I had a similar problem months ago. There are several ways to do this, but I really don't know which is the quickest:
1.- Archive engine: Use the ARCHIVE storage engine for the table; this engine was created to store big amounts of data, like logs.
2.- MongoDB: I have not tested this database yet, but I have read a lot about it and it seems to work very well in these situations.
3.- Files: I considered this solution when I was trying to fix my problem, but it was the worst of all the options (for me at least, because I need the data in a database to build some reports, so I would have had to create a daemon to parse the files and store them in the database).
4.- Database partitioning: This solution is compatible with the first one (and even with your current engine type); just check this link to create some partitions for your table.
FYI: My current solution is Archive engine + Partitions by month
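To give a rough idea of what that combination looks like, here is a minimal sketch (the column names come from the question above; the partition boundaries are made-up example values, so adjust them to your own date ranges):
$sql = "CREATE TABLE logs (
    log_1 VARCHAR(100),
    log_2 VARCHAR(100),
    time  DATETIME NOT NULL
) ENGINE=ARCHIVE
PARTITION BY RANGE (TO_DAYS(time)) (
    PARTITION p2012_01 VALUES LESS THAN (TO_DAYS('2012-02-01')),
    PARTITION p2012_02 VALUES LESS THAN (TO_DAYS('2012-03-01')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
)";
// ARCHIVE only supports INSERT and SELECT, which is usually fine for logs
mysql_query($sql) or die(mysql_error());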
Hope it helps
I'm currently building a web app which displays data from .csv files for the user; the data can be edited there and the results are stored in a MySQL database.
For the next phase of the app I'm looking at implementing the functionality to write the results into existing .DBF files using PHP, as well as into the MySQL database.
Any help would be greatly appreciated. Thanks!
Actually there's a third route which I should have thought of before, and is probably better for what you want. PHP, of course, allows two or more database connections to be open at the same time. And I've just checked, PHP has an extension for dBase. You did not say what database you are actually writing to (several besides the original dBase use .dbf files), so if you have any more questions after this, state what your target database actually is. But this extension would probably work for all of them, I imagine, or check the list of database extensions for PHP at http://php.net/manual/en/refs.database.php. You would have to try it and see.
Then to give an idea on how to open two connections at once, here's a code snippet (it actually has oracle as the second db, but it shows the basic principles):
http://phplens.com/adodb/tutorial.connecting.to.multiple.databases.html
There's a fair bit of guidance and even tutorials on the web about multiple database connections from PHP, so take a look at them as well.
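To make the idea concrete, here is a rough sketch of having both connections open at once, assuming the PECL dbase extension is installed and the target really is a dBase-compatible .dbf file (the connection details, table and field names below are invented placeholders):
$mysql = new PDO('mysql:host=localhost;dbname=webapp', 'user', 'pass');   // first connection: MySQL
$dbf   = dbase_open('/path/to/results.dbf', 2);                           // second connection: .dbf, 2 = read/write

if ($dbf === false) {
    die('Could not open the .dbf file');
}

foreach ($mysql->query('SELECT col_a, col_b, col_c FROM results') as $row) {
    // dbase_add_record() expects the values in the same order as the
    // fields defined inside the .dbf file itself
    dbase_add_record($dbf, array($row['col_a'], $row['col_b'], $row['col_c']));
}

dbase_close($dbf);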
This is a standard kind of situation in data migration projects - how to get data from one database to another. The answer is that you have to find out what format the target files (in this case .dbf files) need to be in, then you simply collect the data from your MySQL database, rearrange it into the required format, and write a new file using PHP's file-writing functions.
I am not saying it's easy to do; I don't know the format of .dbf files (it was a format used by dBase, but has been used elsewhere as well). You not only have to know the format of the .dbf records; there will almost certainly be header info if you are creating new files (but you say the files are pre-existing, so that shouldn't be a problem for you). The records themselves may also carry a small amount of header data, which you would need to work out and write for each one in the required form.
So you need to find out the exact format of .dbf files - no doubt Googling will find you info on that. But I understand even .dbf files can have various differences, in which case you would need to look at the structure of your existing files to resolve those.
The alternative solution, if you don't need instant copying to the target database, is that it may have an option to import data from CSV files, which is much easier - and you have CSV files already. But presumably the order of data fields in those files is different from the order of fields in the target database (unless they came from the target database - but then you wouldn't, presumably, be trying to write them back unless they are archived records). The point I'm making, though, is that you can write the data into CSV files from the PHP program, in the field order required by your target database, then read them into the target database as a separate step. A two-stage process, in other words. This is particularly suitable for migrations where you are doing a one-off transfer to the new database.
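As a sketch of that two-stage approach (the field names and their order are invented; the point is only that PHP lets you write the columns out in whatever order the target database wants):
$mysql = new PDO('mysql:host=localhost;dbname=webapp', 'user', 'pass');
$out   = fopen('/path/to/export.csv', 'w');

foreach ($mysql->query('SELECT col_a, col_b, col_c FROM results') as $row) {
    // write the fields in the order the target database expects,
    // not necessarily the order they have in MySQL
    fputcsv($out, array($row['col_b'], $row['col_a'], $row['col_c']));
}

fclose($out);
// then import export.csv into the target database as a separate step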
All in all you have a challenging but interesting project!
So I have an old website which was coded over an extended period of time but has been inactive for 3 or so years. I have the full PHP source to the site, but the problem is I do not have a backup of the database any longer. I'm wondering what the best solution to recreating the database would be? It is a large site so manually going through each PHP file and trying to keep track of which tables are referenced is no small task. I've tried googling for the answer but have had no luck. Does anyone know of any tools that are available to help extract this information from the PHP and at least give me the basis of a database skeleton? Otherwise, has anyone ever had to do this? Any tips to help me along and possibly speed up the process? It is a mySQL database I'm trying to use.
The way I would do it:
Write a stub version of mysqli (or whatever interface was used to access the DB) to intercept all DB accesses.
Replace all DB accesses with your dummy version.
The basic idea is to emulate the DB so that the PHP code runs long enough to activate the various DB accesses, which in turn will allow you to analyze the way the DB is built and used.
From within these dummy functions (see the sketch after this list):
print the SQL code used
regenerate just enough dummy results to let the rest of the code run, based on the tables and fields mentioned in the query parameters and the PHP code that retrieves them (you won't learn much from a SELECT *, but you can see what fields the PHP code expects to get from it)
once you have understood enough of the DB structure, recreate the tables and let the original code work on them little by little
have the previous designer flogged to death for not having provided a way to recreate the DB programmatically
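A very rough sketch of what such a dummy layer could look like, assuming the legacy code can be pointed at a wrapper class (all names here are invented for illustration):
class DummyDB
{
    public function query($sql)
    {
        // record every statement the legacy code tries to run
        file_put_contents('queries.log', $sql . "\n", FILE_APPEND);

        // hand back an empty result so the calling code keeps running
        return new DummyResult();
    }
}

class DummyResult
{
    public function fetch_assoc()
    {
        return null;   // behaves like "no more rows"
    }
}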
There are currently two answers based on the information you provided.
1) You can't do this
PHP is a typeless language. You could scan your SQL statements for field and table names, but the picture will not be complete. If there is a SELECT * FROM table, you can't see the fields, so you need to check where the PHP code accesses the fields, maybe by name or by index. You can count yourself lucky if it is done by name, because then you can extract the field names. Finally, the data types will be missing, as will the indexes, primary keys, constraints, etc. (A quick-and-dirty scan for table names is sketched at the end of this answer.)
2) Easy, yes you can!
Because your PHP uses a modern framework that contains an ORM, which created the database for you, the meta information is included in the PHP classes/design.
Just check the framework's manual for how to recreate the database.
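If you end up on route 1), a quick-and-dirty script along these lines can at least collect the table names referenced in the source (just a sketch; the regular expression is deliberately simple and will miss some queries, and it only looks one directory deep):
$tables = array();

foreach (glob('/path/to/site/*.php') as $file) {
    $code = file_get_contents($file);
    if (preg_match_all('/\b(?:FROM|INTO|UPDATE|JOIN)\s+`?(\w+)`?/i', $code, $matches)) {
        foreach ($matches[1] as $table) {
            $tables[$table] = true;
        }
    }
}

print_r(array_keys($tables));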
I am currently working on an existing website that lists products; there are currently a little over 500 products.
The website has a text file for every product and I want to add a search option. I'm thinking of reading all the text files once a day and creating an XML document with the values that can be searched.
The client indicated that they want to keep adding products and are used to adding them via the text files. There might be over 5000 products in the future, so I think it's best to do this with MySQL. This means importing the current products and creating a CRUD page for products.
Does anyone have experience with a PHP website that does not use MySQL? Is it possible to keep adding text files and just index them once a day even if it would mean having over 5000 products?
5000 seems like an amount that's still manageable to index with a daily cron job. As long as you don't plan on searching them in real time, it should work. It's not ideal, but it would work.
Yes, it is very much possible, but NOT advisable, to use files for this type of transaction.
It would also be better to use XML instead of plain TXT files for the job. Depending on what kind of data is associated with them, 5000 products might create problems in the future.
PS
Why not MySQL?
MySQL was made because file-based databases are slow and error-prone.
Just use MySQL. If you want to keep your old TXT-based data, just build a simple script that imports each file one by one and creates the corresponding records in your SQL database.
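A sketch of such an import script, assuming one product per text file with simple "key: value" lines (the file format, table and column names are guesses; adjust them to the real product files):
$pdo    = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
$insert = $pdo->prepare('INSERT INTO products (name, price, description) VALUES (?, ?, ?)');

foreach (glob('/path/to/products/*.txt') as $file) {
    $product = array();
    foreach (file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        if (strpos($line, ':') === false) {
            continue;   // skip lines that don't look like "key: value"
        }
        list($key, $value) = explode(':', $line, 2);
        $product[trim($key)] = trim($value);
    }
    $insert->execute(array($product['name'], $product['price'], $product['description']));
}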
Good luck.
It's possible; however, if this is anything more than simply an online catalog, then managing transaction integrity is horrendously difficult - and the fact that you're even asking the question implies that you are not in a good position to implement the kind of controls required. And as you've already discovered, it doesn't make for easy searching. (BTW: MySQL's fulltext indexing is a very blunt instrument - it's not a huge amount of effort to implement an effective search engine yourself, or there are excellent ones available off-the-shelf, e.g. mnogosearch.)
(As an incidental point, why XML? It makes managing the data much more complicated than it needs to be.)
"and create a crud page for products"
Why? If the client wants to maintain the data via file uploads and you already need to port the data, then just use the same interface - where the data is stored is not relevant just now.
If there are issues with hosting + MySQL, then using SQLite gives most of the benefits (although it won't scale as well).
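For completeness, a minimal sketch of the SQLite route through PDO (the file path and schema are invented; the whole database lives in a single file, so there is no server to set up):
$db = new PDO('sqlite:/path/to/products.sqlite');
$db->exec('CREATE TABLE IF NOT EXISTS products (
    id INTEGER PRIMARY KEY,
    name TEXT,
    description TEXT
)');

// crude search; good enough for a few thousand products
$stmt = $db->prepare('SELECT name FROM products WHERE description LIKE ?');
$stmt->execute(array('%keyword%'));
print_r($stmt->fetchAll(PDO::FETCH_COLUMN));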
For a customer I'm working on a small project to index a bunch of Excel spreadsheets (around 30). The main goal of the project is to search quickly through the uploaded Excel files. I've googled for a solution but haven't found an easy one yet.
Some options I'm considering:
-Do something manually with PHPExcel and MySQL and store column information using meta tables. Use the FullText options of the table to return search results.
-Use a document store (like MongoDB) to store the files and combine this with ElasticSearch / Solr to get fast results.
-Combination of both, use Solr on the relational database.
I think the second option is a bit overkill; I don't want to spend too much time on this problem. I'd like to hear some opinions about this; other suggestions are welcome :)
I agree with the others. I've done several systems in the past that suck spreadsheets into a database. It is an excellent way of getting a familiar user interface without any programming. I've tended to use email to get the spreadsheets to a central location for reading, either by MS Access or, in more recent years, by PHP into a MySQL database.
PHP is particularly good as you can connect it easily to a mail server to automatically read and process the spreadsheets.
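As a sketch of the PHPExcel + MySQL route from the question (table and column names are placeholders, and it assumes a FULLTEXT index exists on the content column):
$excel = PHPExcel_IOFactory::load('/path/to/upload.xlsx');
$rows  = $excel->getActiveSheet()->toArray();

$pdo    = new PDO('mysql:host=localhost;dbname=search', 'user', 'pass');
$insert = $pdo->prepare('INSERT INTO sheet_rows (source_file, content) VALUES (?, ?)');

foreach ($rows as $row) {
    // store each spreadsheet row as one searchable text blob
    $insert->execute(array('upload.xlsx', implode(' ', $row)));
}

// later: SELECT ... WHERE MATCH(content) AGAINST ('search term')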
Brief background - I'm writing a special import manager to be run via cron. The basic operation is:
Delete all records from table
Load data from a file or a string into the table
Not too difficult. However, because of the apocalyptic delete at the beginning of this sequence, I'd like to use a transaction so I can roll back if anything goes wrong during the process. However, some of the tables updated by this process are MyISAM, which doesn't support transactions.
So the main question is: is there an easier way to detect the storage engine for a given table than doing a SHOW CREATE TABLE and then doing what little parsing it takes to get the storage engine - i.e. I'd like to avoid this extra query.
Secondly: I haven't actually tried this yet, as I'm still writing some other pieces that fit in - so perhaps Zend_Db simply ignores beginTransaction, commit, and rollback if they are not supported on the table in question?
Also, I'm not using Zend_Db_Table for this - just the Adapter (Pdo_Mysql). Alternatively, I'm perfectly willing to use raw PDO if that somehow allows a more elegant solution.
(I'm not interested in using mysqlimport for this particular implementation, for a number of reasons I'm not going to get into, so let's just say it's not an option at all.)
I'd suggest solving your problem by renaming the original table and deleting it after successful completion ;)
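Roughly, that approach looks like the following for a MyISAM table (table names are placeholders; the old data is only dropped once the load has definitely succeeded):
$pdo = new PDO('mysql:host=localhost;dbname=import', 'user', 'pass');

$pdo->exec('RENAME TABLE target TO target_old');
$pdo->exec('CREATE TABLE target LIKE target_old');

try {
    // ... load the file/string data into the fresh, empty `target` here ...
    $pdo->exec('DROP TABLE target_old');                 // success: discard the backup
} catch (Exception $e) {
    $pdo->exec('DROP TABLE target');                     // failure: put the old data back
    $pdo->exec('RENAME TABLE target_old TO target');
    throw $e;
}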
Don't know if this is still relevant for you, but what about this response:
How can I check MySQL engine type for a specific table?
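In short, the idea from that question is to read the engine from INFORMATION_SCHEMA rather than parsing SHOW CREATE TABLE output - it is still one extra query, but there is nothing to parse (sketch below; the table name is a placeholder):
$pdo  = new PDO('mysql:host=localhost;dbname=import', 'user', 'pass');
$stmt = $pdo->prepare(
    'SELECT ENGINE FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = ?'
);
$stmt->execute(array('target'));
$engine = $stmt->fetchColumn();

if (strcasecmp($engine, 'InnoDB') !== 0) {
    // MyISAM or another non-transactional engine: beginTransaction(),
    // commit() and rollback() will not actually protect the delete/load
}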