Writing records into a .dbf file using PHP?

I'm currently building a web app which displays data from .csv files to the user, lets them edit the data, and stores the results in a MySQL database.
For the next phase of the app I'm looking at implementing the functionality to write the results into **existing .DBF** files using PHP, as well as into the MySQL database.
Any help would be greatly appreciated. Thanks!

Actually there's a third route, which I should have thought of before, and it is probably better for what you want. PHP, of course, allows two or more database connections to be open at the same time, and I've just checked: PHP has an extension for dBase. You didn't say which database you are actually writing to (several besides the original dBase use .dbf files), so if you have any more questions after this, state what your target database actually is. But I imagine this extension would work for most of them; otherwise, check the list of database extensions for PHP at http://php.net/manual/en/refs.database.php. You would have to try it and see.
Then, to give an idea of how to open two connections at once, here's a code snippet (it actually has Oracle as the second db, but it shows the basic principles):
http://phplens.com/adodb/tutorial.connecting.to.multiple.databases.html
There's a fair bit of guidance and even tutorials on the web about multiple database connections from PHP, so take a look at them as well.
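To make the idea concrete, here is a rough, untested sketch of reading edited rows from MySQL and appending them to an existing .dbf file via the PECL dbase extension. The connection details, table and column names are made up for the example - substitute your own, and check the .dbf field order first:

    <?php
    // Sketch only: assumes the PECL dbase extension and made-up table/field names.
    $mysql = new mysqli('localhost', 'user', 'password', 'webapp_db');

    // Open the existing .dbf file in read-write mode (2).
    $dbf = dbase_open('/path/to/existing.dbf', 2);
    if ($dbf === false) {
        die('Could not open .dbf file');
    }

    // Fetch the edited rows from MySQL (hypothetical "results" table).
    $rows = $mysql->query('SELECT name, amount, entry_date FROM results');

    while ($row = $rows->fetch_assoc()) {
        // dbase_add_record() expects a plain array in the .dbf field order,
        // with date fields formatted as YYYYMMDD.
        dbase_add_record($dbf, [
            $row['name'],
            $row['amount'],
            date('Ymd', strtotime($row['entry_date'])),
        ]);
    }

    dbase_close($dbf);
    $mysql->close();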

This is a standard kind of situation in data migration projects - how to get data from one database to another. The answer is that you have to find out what format the target files need to be in (in this case, the format of .dbf files), then collect the data from your MySQL database, rearrange it into the required format, and write it out using PHP's file-writing functions.
I am not saying it's easy to do; I don't know the format of .dbf files off-hand (it was the format used by dBase, but has been used elsewhere as well). You not only have to know the format of the .dbf records, but there will almost certainly be file header info if you are creating new files (you say the files are pre-existing, so that shouldn't be a problem for you). The records themselves may also carry a small amount of header data, which you would need to work out and write for each one in the required form.
So you need to find out the exact format of .dbf files - no doubt Googling will find you info on that. But I understand even .dbf files can vary between implementations, in which case you would need to look at the structure of your existing files to resolve those differences.
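To give a sense of what doing it by hand involves, below is a rough sketch of appending one record to an existing dBase III-style .dbf file. It is based on the commonly documented layout (32-byte file header, 32-byte field descriptors, space-padded fixed-width records) and assumes character fields only - verify it against the structure of your actual files before trusting it:

    <?php
    // Minimal sketch: append one record to an existing dBase III .dbf file.
    // Assumes character (C) fields and that $values are already in the file's
    // field order; real files may need date/numeric formatting, codepages, etc.
    function dbfAppendRecord(string $path, array $values): void
    {
        $fp = fopen($path, 'r+b');
        if ($fp === false) {
            throw new RuntimeException("Cannot open $path");
        }

        // Fixed 32-byte file header: record count, header size, record size.
        $header = fread($fp, 32);
        $info = unpack('Vcount', substr($header, 4, 4))
              + unpack('vheaderSize/vrecordSize', substr($header, 8, 4));

        // Read the 32-byte field descriptors to get each field's width.
        $widths = [];
        $pos = 32;
        while ($pos < $info['headerSize'] - 1) {
            $desc = fread($fp, 32);
            $widths[] = ord($desc[16]);      // byte 16 = field length
            $pos += 32;
        }

        // Build the record: 1-byte "not deleted" flag + space-padded fields.
        $record = ' ';
        foreach ($widths as $i => $width) {
            $record .= str_pad(substr((string)($values[$i] ?? ''), 0, $width), $width);
        }

        // Overwrite the old end-of-file marker (0x1A), append, re-add the marker.
        fseek($fp, $info['headerSize'] + $info['count'] * $info['recordSize']);
        fwrite($fp, $record . "\x1a");

        // Bump the last-update date and the record count in the header.
        fseek($fp, 1);
        fwrite($fp, pack('C3', (int)date('y'), (int)date('m'), (int)date('d')));
        fwrite($fp, pack('V', $info['count'] + 1));
        fclose($fp);
    }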
The alternative solution, if you don't need instant copying to the target database, is that it may have an option to import data from CSV files, which is much easier - and you have CSV files already. But presumably the order of the data fields in those files is different from the order of fields in the target database (unless they came from the target database, but then presumably you wouldn't be trying to write them back unless they are archived records). The point I'm making, though, is that you can write the data into CSV files from the PHP program, in the field order required by your target database, then read them into the target database as a separate step - a two-stage process, in other words. This is particularly suitable for migrations where you are doing a one-off transfer to the new database.
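If you go the two-stage route, the PHP side is only a few lines. A rough sketch, again with made-up connection details and column names - the key point is selecting the columns in the order the target expects:

    <?php
    // Sketch: export MySQL rows as CSV in the column order the target expects.
    $mysql = new mysqli('localhost', 'user', 'password', 'webapp_db');
    $out = fopen('/path/to/export.csv', 'w');

    // Select the columns in exactly the order the target database wants them.
    $rows = $mysql->query('SELECT entry_date, name, amount FROM results');

    while ($row = $rows->fetch_row()) {
        fputcsv($out, $row);   // quotes/escapes values as needed
    }

    fclose($out);
    $mysql->close();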
All in all you have a challenging but interesting project!

Related

Wordpress - Using a database to update custom posts

I have a Wordpress site that utilizes a custom post type, call it CPT-1, that I created using JetEngine. Inside of CPT-1 are meta fields. Once that was set up, I did a bulk insert of data using Ultimate CSV Importer Pro, which put this information into CPT-1 and let me put each column of data into the meta fields I wanted to use. These fields are then used later in tables.
Is there a way to skip the CSV Importer part of this process and just pull from a database? In the long term, I'd like to make changes to certain posts and upload different posts while using CPT-1, but I don't think using a CSV every time will be easy or accurate. If I could just pull from a database that I make updates to, I could track those changes easily and manage it.
I have database experience but not so much with Wordpress databases. What tables would I have to pay attention to if I were to go down this route?
Wordpress uses MySQL as a backend, so there is no reason you can't just insert the data directly. You'll need to get the credentials Wordpress uses to connect to the database, and then connect yourself, probably from your own custom PHP script.
I am generally skittish about doing things like you described, because Wordpress is a complex piece of software and I don't have a lot of awareness of what it is doing behind the scenes (nor do they really intend users to have such awareness; most functionality is hidden from the user).
However, if you have been doing a CSV import, and you have tested it extensively, and it's working fine with that method, there is no reason you couldn't carry out this same thing with less manual work on your part via a PHP script.
I'm afraid I can't get much more specific in my answer because I don't have information about what exactly you did with the CSV.
A straightforward (but not super efficient) way of doing this would be a PHP script where you initiate a connection to the database you update and a second connection to the MySQL database, fetch a query of whatever rows you want to update (whatever you would normally be exporting via CSV), and iterate row by row, inserting the data into the MySQL database. You can make this significantly more efficient by preparing a single statement and then executing it repeatedly with each row of values.
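As a rough sketch of that row-by-row approach - all connection details and table/column names below are hypothetical placeholders, not the actual Wordpress schema:

    <?php
    // Sketch: copy rows from a source database into the Wordpress MySQL database
    // using one prepared statement executed once per row.
    $source = new PDO('mysql:host=source-host;dbname=source_db', 'user', 'pass');
    $target = new PDO('mysql:host=wp-host;dbname=wordpress_db', 'wp_user', 'wp_pass');
    $target->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // Whatever you would normally export to CSV.
    $rows = $source->query('SELECT title, field_a, field_b FROM staging_table');

    // Prepare once, execute per row (placeholder table/columns shown here).
    $insert = $target->prepare(
        'INSERT INTO my_import_table (title, field_a, field_b) VALUES (?, ?, ?)'
    );

    foreach ($rows as $row) {
        $insert->execute([$row['title'], $row['field_a'], $row['field_b']]);
    }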
A more efficient way of doing it would be to pull the data into your PHP script and then format it as a single query, which you could then run against the MySQL database.
If you already have CSV importing working, you could even do a "lazy" solution where you just write a PHP script that generates a CSV and then feeds it into MySQL the same way your other program did. It's hard for me to tell from what you said which of these solutions would work best. However, I have used all three, depending on what I'm doing and what kind of error handling I want.
In general, if errors happen rarely to never, you are probably better off with the single bulk-insert methods, whether that's one query or PHP automating the export of a CSV and then passing it to be imported into Wordpress' MySQL database.

PHP - MySQL call or JSON static file for infrequently updated information

I've got a heavy-read website backed by a MySQL database. I also have some small "auxiliary" information (it fits in an array of 30-40 elements as of now), hierarchically organized, which gets updated slowly, 4-5 times per year. It's not quite a configuration file, since this information is about the subject of the website rather than its functioning, but it behaves like one. Until now I just used a static PHP file containing an array of info, but now I need a way to update it via a backend CMS from my admin panel.
I thought of a simple CMS that allows the admin to create/edit/delete entries (a rare, periodic job) and then writes a static JSON file to be used by the page-building scripts instead of pulling this information from the db.
The question is: given the heavy-read nature of the website, is it better to read a rarely updated JSON file on the server when building pages or just retrieve raw info from the database for every request?
I just used a static PHP
This sounds like a contradiction to me. Either static, or PHP.
given the heavy-read nature of the website, is it better to read a rarely updated JSON file on the server when building pages or just retrieve raw info from the database for every request?
Cache was invented for a reason :) The same applies to your case - it all depends on how often the data changes vs how often it is read. If the data changes once a day and remains static for 100k reads during the day, then not caching it, or not serving it from a flat file, would simply be stupid. If the data changes once a day and you have 20 reads per day on average, then perhaps returning the data from code on each request would be less stupid - but on the other hand, the other 19 requests could be served from cache anyway, so... If you can, serve from a flat file.
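A minimal sketch of the flat-file approach (paths, table and query are made up): the admin panel rewrites the JSON whenever an entry changes, and the page-building scripts just read the file:

    <?php
    // --- In the admin/CMS code: regenerate the JSON after an edit ---
    function rebuildAuxJson(PDO $db, string $jsonPath): void
    {
        // Hypothetical table holding the 30-40 auxiliary entries.
        $rows = $db->query('SELECT id, parent_id, label FROM aux_info')
                   ->fetchAll(PDO::FETCH_ASSOC);

        // Write atomically so readers never see a half-written file.
        $tmp = $jsonPath . '.tmp';
        file_put_contents($tmp, json_encode($rows, JSON_PRETTY_PRINT));
        rename($tmp, $jsonPath);
    }

    // --- In the page-building code: just read the file, no DB hit ---
    $auxInfo = json_decode(file_get_contents(__DIR__ . '/aux_info.json'), true);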
Caching is your best option; Redis and Memcached are common, excellent choices. As for flat file vs. database, it's hard to say without knowing the SQL schema you're using (how many columns, what the datatype definitions are, how many foreign keys and indexes, etc.).
SQL is about relational data; if you have non-relational data, you don't really have a reason to use SQL. Many people now reach for NoSQL databases to handle this, since modifying SQL schemas after the fact is a huge pain.

Loading large XML file from url into MySQL

I need to load XML data from an external server/url into my MySQL database, using PHP.
I don't need to save the XML file itself anywhere, unless this is easier/faster.
The problem is, I will need this to run every hour or so, as the data will be constantly updated, and therefore I need to replace the data in my database too. The XML file is usually around 350 MB.
The data in the MySQL table needs to be searchable - I will know the structure of the XML so can create the table to suit first.
I guess there are a few parts to this question:
What's the best way to automate this whole process to run every hour?
What's the best (fastest?) way of downloading and parsing the XML (~350 MB) from the URL, in a way that I can load it into a MySQL table of my own, maintaining the columns/structure?
1) A PHP script can keep running in the background all the time, but that is not the best scenario. Better, set up something like php -q /dir/to/php.php as a cron job (if running on Linux) or use a similar scheduling technique to make the server do the work for you. (You still need access to the server.)
2) You can use several approaches; the most linear and least RAM-consuming, whether you end up writing to files or into MySQL, is to open your TCP connection, read the response in small chunks (16 KB is fine), and stream them out to disk or to another connection as you go (see the sketch at the end of this answer).
3) Moving this much data is not difficult, but storing it all in MySQL is wasteful, searching it is even worse, and replacing it every hour will punish the MySQL server.
Suggestions:
From what I can see, you are trying to synchronize or back up data from another server. If there is just one file, then write a local .xml file using PHP and you are done. If there is more than one, I would still suggest making local files, as most probably you are working with unstructured data: it is not a good fit for MySQL. If you work with hundreds of files and need to search them fast, compute statistics and much more, consider changing your approach and read about Hadoop.
A MySQL BLOB or TEXT column does not support more than 64 KB (you would need MEDIUMBLOB/LONGBLOB for more), and if you are loading the XML into the database just to use SQL SEARCH commands on it, you have taken the wrong path.
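To tie points 1) and 2) together in the hourly cron job, something along these lines would work. This is an untested sketch; the URL, element names, table and columns are placeholders for whatever your feed actually contains:

    <?php
    // Rough sketch of the hourly job. Adapt names to the real feed structure.

    // 1) Stream the remote XML to disk in 16 KB chunks (never 350 MB in memory).
    $src = fopen('https://example.com/feed.xml', 'rb');
    $dst = fopen('/tmp/feed.xml', 'wb');
    while (!feof($src)) {
        fwrite($dst, fread($src, 16384));
    }
    fclose($src);
    fclose($dst);

    // 2) Walk it with XMLReader and insert row by row with a prepared statement.
    $db = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
    $db->exec('TRUNCATE TABLE feed_items');      // replace last run's data
    $db->beginTransaction();                     // one commit for the whole batch

    $insert = $db->prepare('INSERT INTO feed_items (id, title, price) VALUES (?, ?, ?)');

    $xml = new XMLReader();
    $xml->open('/tmp/feed.xml');
    while ($xml->read()) {
        // "item" stands in for whatever element wraps one record in your feed.
        if ($xml->nodeType === XMLReader::ELEMENT && $xml->name === 'item') {
            $node = new SimpleXMLElement($xml->readOuterXml());
            $insert->execute([(string) $node->id, (string) $node->title, (string) $node->price]);
        }
    }
    $xml->close();
    $db->commit();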

PHP bug tracker: storing attached files

Given a PHP bug tracker project with an SQL DB (MySQL, PostgreSQL, Oracle...), which should be able to store attached files for each bug.
How would you basically store info (file info and the file itself) on DB & disk?
e.g.
DB: the table bug would have a related table bug_files having a bug_id field, and a filename field containing path to file on disk
Disk: storing files in an efficient way (avoiding having too many files in a single directory), e.g. automatically creating directories 1-1000, 1001-2000, etc., so we can have /1001-2000/1234/bugfile.jpg, or random subdirectories like /z/e/x/q/1000_bugfile.jpg?
...or is there a more efficient way?
Thanks.
EDIT
It also depends on how you want to get to these files: do you use a back-end webpage that fetches all bugs and creates the links for you, or do you get an e-mail after a bug occurred and have to find it manually? I don't think this choice matters a lot.
Files would be listed / uploaded / downloaded through the bug tracking web application (=> HTTP upload / download).
Nobody except developers / sysadmins would be able to view the automatically generated directory structure (however it would be more convenient to have a "clear" structure).
I'd let the file system do its job (file storage). Databases can be used for file storage but it's not (generally) as efficient; e.g. the file data may be put in the database buffers - this in itself isn't bad, but it may take resources away from other tables and row data, and reduce the performance of other queries.
Creating directories based on a meaningful combination of date, project names, etc. would help avoid the performance loss of having too many files in a single directory.
I'd strongly suggest using a recognizable directory structure, perhaps date-based or something that matches up with (parts of) your bug filename. E.g. '20110506-bugfile' would live in /2011/05/06/. Perhaps that's a little too fine-grained and /2011/05/ would be enough.
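A quick sketch of what that could look like for an HTTP upload; the base path, form field name, and the $pdo / $bugId variables are assumed to already exist in your app and are only illustrative:

    <?php
    // Sketch: store an uploaded bug attachment under a date-based directory.
    $baseDir = '/var/bugtracker/attachments';
    $subDir  = date('Y/m/d');                     // e.g. 2011/05/06
    $dir     = $baseDir . '/' . $subDir;

    if (!is_dir($dir)) {
        mkdir($dir, 0750, true);                  // create /2011/05/06 recursively
    }

    $name = $bugId . '_' . basename($_FILES['attachment']['name']);
    move_uploaded_file($_FILES['attachment']['tmp_name'], $dir . '/' . $name);

    // Store only the relative path in bug_files; the web app rebuilds the full
    // path from $baseDir when serving the download.
    $stmt = $pdo->prepare('INSERT INTO bug_files (bug_id, filename) VALUES (?, ?)');
    $stmt->execute([$bugId, $subDir . '/' . $name]);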
It also depends on how you want to get to these files: do you use a back-end webpage that fetches all bugs and creates the links for you, or do you get an e-mail after a bug occurred and have to find it manually? I don't think this choice matters a lot.
A slightly different option is to store the file itself in the bug table in your database (http://www.php-mysql-tutorial.com/wikis/mysql-tutorials/uploading-files-to-mysql-database.aspx); then you don't have to create a directory structure, BUT this would not allow you to find the files over FTP, of course.

Suggested ways to import various pipe delimited files into a db based around a buffer table using php/mysql?

I am trying to import various pipe-delimited files using PHP 5.2 into a MySQL database. I am importing various formats of piped data, and my end goal is to put the different data into a suitably normalised data structure, but I need to do some post-processing on the data to fit it into my model correctly.
I thought the best way to do this is to import into a table called "buffer", map out the data, and then move it into the various tables. I am planning to create a table just called "buffer" with fields representing each of the columns (there will be up to 80 of them), then apply some data transforms/mapping to get the data into the right tables.
My planned approach is to create a base class that generically reads the pipe data into the buffer table, then extend this class with functions containing various prepared statements to do the SQL magic, giving me the flexibility to check that the format is the same by reading the headers on the first row and to change the handling per format.
My questions are:
What's the best way to do step one, reading the data from a locally saved file into the table? I'm not too sure if I should use MySQL's LOAD DATA (as suggested in Best Practice : Import CSV to MYSQL Database using PHP 5.x) or just fopen and then insert the data line by line.
is this the best approach? How have other people approach this?
Is there anything in the Zend Framework that may help?
Additional : I am planning to do this in a scheduled task.
You don't need any PHP code to do that, IMO. Don't waste time on classes. MySQL's LOAD DATA INFILE clause allows a lot of ways to import data and covers 95% of your needs: whatever delimiters, whatever columns to skip/pick. Read the manual attentively; it's worth knowing what you CAN do with it. After importing, the data can already be in good shape if you write the query properly. The buffer table can be a temporary one. Then normalize or denormalize it and drop the initial table. Save the statements in a script file so you can reproduce the sequence if there's a mistake.
The best way is to write an SQL script, test whether the data finally ends up in the proper shape, look for mistakes, modify, and re-run the script. If there's a lot of data, do your tests on a smaller set of rows.
[added] Another reason for the SQL-mostly approach is that if you're not fluent in SQL but are going to work with a database, it's better to learn SQL early. You'll find a lot of uses for it later and will avoid the common pitfalls of programmers who only know it superficially.
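For example, a buffer-table load for a pipe-delimited file might look roughly like this. The file path, columns and target tables are placeholders, and the statement can be run from the mysql client just as well as from PHP:

    <?php
    // Rough sketch: bulk-load a pipe-delimited file into the buffer table and
    // then reshape it with plain SQL. LOAD DATA INFILE needs the file to be
    // readable by the MySQL server; use LOAD DATA LOCAL INFILE if it only
    // exists on the web server.
    $db = new PDO('mysql:host=localhost;dbname=imports', 'user', 'pass');

    // Skip the header row, split on '|', fill the (up to 80) buffer columns.
    $db->exec("
        LOAD DATA INFILE '/var/imports/feed.txt'
        INTO TABLE buffer
        FIELDS TERMINATED BY '|'
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES
    ");

    // Map/normalise from the buffer into the real tables, then clear it.
    $db->exec('INSERT INTO customers (name, city) SELECT col_1, col_2 FROM buffer');
    $db->exec('TRUNCATE TABLE buffer');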
I personally use Pentaho's free ETL software, commonly referred to as Kettle. While this software is far from perfect, I've found that I can often import data in a fraction of the time I would have to spend writing a script for one specific file. You can select a text file input and specify the delimiters, fixed width, etc., and then simply export directly into your SQL server (they support MySQL, SQLite, Oracle, and much more).
There are dozens and dozens of ways. If you have local filesystem access to the MySQL instance, use LOAD DATA. Otherwise you can just as easily transform each line into SQL (or a VALUES line) for periodic submission to MySQL via PHP.
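A minimal sketch of that second route, with made-up file path and column names - each pipe-delimited line becomes a placeholder group in one multi-row INSERT:

    <?php
    // Sketch: turn a pipe-delimited file into one multi-row INSERT without
    // needing filesystem access to the MySQL server.
    $db = new PDO('mysql:host=db-host;dbname=imports', 'user', 'pass');
    $fh = fopen('/path/to/feed.txt', 'r');
    fgetcsv($fh, 0, '|');                    // skip the header row

    $values = [];
    $params = [];
    while (($fields = fgetcsv($fh, 0, '|')) !== false) {
        $values[] = '(?, ?, ?)';
        array_push($params, $fields[0], $fields[1], $fields[2]);
    }
    fclose($fh);

    $stmt = $db->prepare('INSERT INTO buffer (col_1, col_2, col_3) VALUES ' . implode(',', $values));
    $stmt->execute($params);

For a really big file you would submit this in batches of a few hundred rows per INSERT rather than one giant statement, to stay under the placeholder and packet-size limits.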
In the end I used LOAD DATA and modified http://codingpad.maryspad.com/2007/09/24/converting-csv-to-sql-using-php/ for different situations.
