I have a Wordpress site that utilizes a custom post type, call it CPT-1, that I created using JetEngine. Inside CPT-1 are meta fields. Once that was set up, I did a bulk insert of data using Ultimate CSV Importer Pro, which put the information into CPT-1 and let me map each column of data to the meta fields I wanted to use. These fields are then used later in tables.
Is there a way to go around the CSV Importer part of this process and just pull from a database? In the long term, I'd like to make changes to certain posts and upload different posts while still using CPT-1, but I don't think using a CSV every time will be easy or accurate. If I could just pull from a database that I make updates to, I could track those changes easily and manage it.
I have database experience but not so much with Wordpress databases. What tables would I have to pay attention to if I were to go down this route?
Wordpress uses MySQL as a backend, so there is no reason you can't just insert the data directly. You'll need to get the credentials Wordpress uses to connect to the database, and then connect yourself, probably from your own custom PHP script.
I am generally skittish about doing things like you described, because Wordpress is a complex piece of software and I don't have much awareness of what it is doing behind the scenes (nor is the user really intended to have that awareness; most functionality is hidden from the user).
However, if you have been doing a CSV import, and you have tested it extensively, and it's working fine with that method, there is no reason you couldn't carry out this same thing with less manual work on your part via a PHP script.
I'm afraid I can't get much more specific in my answer because I don't have information about what exactly you did with the CSV.
A straightforward (but not especially efficient) way of doing this would be a PHP script that opens one connection to the database you maintain and a second connection to the Wordpress MySQL database. Fetch whatever rows you want to update (whatever you would normally export via CSV), then iterate row by row and insert that data into the MySQL database. You can make this significantly more efficient by preparing a single statement and then executing it repeatedly with each row of values.
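To make that concrete, here is a minimal sketch of the two-connection, prepared-statement loop. Everything specific in it is an assumption: the database names and credentials, the staging_meta source table, and the idea that your CSV import was ultimately filling wp_postmeta rows keyed by post ID (wp_postmeta is where Wordpress keeps post meta fields). Adjust it to whatever your importer actually writes.

```php
<?php
// Sketch only: connection details, the staging_meta table, and the wp_postmeta
// mapping are assumptions - replace them with your real schema.
$source = new PDO('mysql:host=localhost;dbname=source_db;charset=utf8mb4', 'src_user', 'src_pass');
$wp     = new PDO('mysql:host=localhost;dbname=wordpress;charset=utf8mb4', 'wp_user', 'wp_pass');
$source->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$wp->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// One prepared statement, executed once per row.
$update = $wp->prepare(
    'UPDATE wp_postmeta SET meta_value = :value
      WHERE post_id = :post_id AND meta_key = :key'
);

foreach ($source->query('SELECT post_id, meta_key, meta_value FROM staging_meta') as $row) {
    $update->execute([
        ':value'   => $row['meta_value'],
        ':post_id' => $row['post_id'],
        ':key'     => $row['meta_key'],
    ]);
}
```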
A more efficient way of doing it would be to pull the data into your PHP script and then format it as a single multi-row query, which you could then run against the MySQL database.
If you already have CSV importing working, you could even go with a "lazy" solution where you just write a PHP script that generates a CSV and then imports it into MySQL the same way your other program did. It's hard for me to tell from what you said which of these solutions would work best. However, I have used all three, depending on what I'm doing and what kind of error handling I want.
In general, if errors happen rarely to never, you are probably better off with one of the single, bulk-insert methods, whether that is one query, or PHP automating the export of a CSV and then passing it on to be imported into Wordpress' MySQL database.
Related
I'm developing a Wordpress plugin in PHP 7.4 that pulls data from an API and writes it into the WP database. This data is later used by the Custom Post Types and Advanced Custom Fields plugins (mentioning this just in case it's relevant), which present it to the user via different pages on a website.
The problem I'm facing is that with big amounts of data, the script that loads that data (called via Ajax) will just crash at some point and return a generic error. After lots of researching, testing, and talking to the hosting provider, the only conclusion is that the script is running out of memory, even though we're giving it as much as we possibly can. When the script loads smaller amounts of data, it works perfectly fine, supporting the lack-of-memory theory.
Since I'd rather optimize the code than have someone pay more for hosting, I've been trying different strategies, but I can't seem to find one that makes a significant impact, so I was hoping to get some ideas.
Something to know about the API and the process that's currently done when loading data: the data that's pulled every time is a refresh of pre-existing data (think a bunch of multiple records), so most of the data already exists in the database. The problem is that, because of how the API is implemented, there is NO WAY to know which data has changed and which hasn't (that's been thoroughly discussed with the API provider).
Things that I'm already doing to optimize the script:
Compare records coming from the API to records already existing in the database, so that I only save/update the new ones (I do this via an in_array() comparison of each record's ID against the IDs of the existing records; see the sketch after this list)
Fetch from the API only the strictly necessary fields from every record
Skip native WP functions that store data to the database whenever possible, using custom functions to directly write to database, to avoid WP performance overhead
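Roughly, that ID comparison looks like the sketch below; the 'external_id' meta key and the $apiRecords variable are simplified placeholders, not my exact code.

```php
<?php
// Sketch of the in_array() comparison described above; names are placeholders.
global $wpdb;
$existingIds = $wpdb->get_col(
    "SELECT meta_value FROM {$wpdb->postmeta} WHERE meta_key = 'external_id'"
);

$newRecords = [];
foreach ($apiRecords as $record) {            // $apiRecords = rows fetched from the API
    if (!in_array((string) $record['id'], $existingIds)) {
        $newRecords[] = $record;              // only records not yet in the database get saved
    }
}
```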
This doesn't seem to be enough for big amounts of data. Fetching the data itself seems not to be the problem regardless of the size, it's the processing of the data and entering it into the database. So I guess I'm looking for strategies that would help optimizing the processing of big chunks of data in this situation.
EDIT:
To clarify the AJAX part: the PHP script (the API data importer) that gets called via AJAX is also called via cron by a different process, and it fails there as well when dealing with big amounts of data.
My stack is PHP and MySQL.
I am trying to design a page to display details of a mutual fund.
Data for a single fund is distributed over 15-20 different tables.
Currently, my front-end is a brute-force PHP page that queries/joins these tables using 8 different queries for a single scheme. It's messy and performs poorly.
I am considering alternatives. Good thing is that the data changes only once a day, so I can do some preprocessing.
An option I am considering is to run these queries for every fund (about 2000 funds), create a complex JSON object for each of them, store it in MySQL indexed by the fund code, then retrieve the JSON at run time and show the data. I am thinking of using the JSON_OBJECT() MySQL function to create the JSON, and json_decode() in PHP to get the values for display. Is this a good approach?
I was tempted to store them in a separate MongoDB store - would that be overkill for this?
Any other suggestion?
Thanks much!
To meet your objective of quick pageviews, your overnight-run approach is very good. You could generate JSON objects with your distilled data, or even prerendered HTML pages, and store them.
You can certainly store JSON objects in MySQL columns. If you don't need the database server to search the objects, simply use TEXT (or LONGTEXT) data types to store them.
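As a rough illustration of that overnight build: the fund_json table, its columns, and the joined source tables below are all placeholders; the point is the JSON_OBJECT() / json_decode() pairing you described.

```php
<?php
// Rough sketch: fund_json(fund_code PRIMARY KEY, payload LONGTEXT) and the source
// tables/columns are placeholders - the idea is JSON_OBJECT() in MySQL + json_decode() in PHP.
$pdo = new PDO('mysql:host=localhost;dbname=funds;charset=utf8mb4', 'user', 'pass');

// Overnight step: collapse each fund's detail rows into one JSON blob.
$build = $pdo->prepare(
    "REPLACE INTO fund_json (fund_code, payload)
     SELECT f.code, JSON_OBJECT('name', f.name, 'nav', n.nav, 'aum', f.aum)
       FROM funds f
       JOIN navs n ON n.fund_code = f.code
      WHERE f.code = :code"
);
foreach ($pdo->query('SELECT code FROM funds') as $fund) {
    $build->execute([':code' => $fund['code']]);
}

// Page-render step: one indexed lookup, then json_decode() for display.
$get = $pdo->prepare('SELECT payload FROM fund_json WHERE fund_code = :code');
$get->execute([':code' => $_GET['fund'] ?? '']);
$data = json_decode($get->fetchColumn(), true);
```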
To my way of thinking, adding a new type of server (MongoDB) to your operations just to store a few thousand JSON objects does not seem worth the trouble. If you find it necessary to search the contents of your JSON objects, however, another type of server might be useful.
Other things to consider:
Optimize your SQL queries. Read up: https://use-the-index-luke.com and other sources of good info. Consider your queries one-by-one starting with the slowest one. Use the EXPLAIN or even the EXPLAIN ANALYZE command to get your MySQL server to tell you how it plans each query. And judiciously add indexes. Using the query-optimization tag here on StackOverflow, you can get help. Many queries can be optimized by adding indexes to MySQL without changing anything in your php code or your data. So this can be an ongoing project rather than a big new software release.
Consider measuring your query times. You can do this with MySQL's slow query log. The point of this is to identify your "dirty dozen" slowest queries in a particular time period. Then, see step one.
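For example, checking a plan and then adding an index might look like the snippet below; the table, column, and index names here are made up, not from your schema.

```php
<?php
// Hypothetical example: ask MySQL how it plans a slow query, then add an index.
$pdo = new PDO('mysql:host=localhost;dbname=funds;charset=utf8mb4', 'user', 'pass');

$plan = $pdo->query("EXPLAIN SELECT * FROM navs WHERE fund_code = 'X123'")->fetchAll();
print_r($plan);   // type = ALL with a large 'rows' estimate usually means a full table scan

// If the plan shows a full scan on fund_code, an index is often the fix:
$pdo->exec('CREATE INDEX idx_navs_fund_code ON navs (fund_code)');
```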
Make your pages fill up progressively, to keep your users busy reading while you get the data they need. Put the toplevel stuff (fund name, etc) in server-side HTML so search engines can see it. Use some sort of front-end tech (React, maybe, or Datatables that fetch data via AJAX) to render your pages client-side, and provide REST endpoints on your server to get the data, in JSON format, for each data block in the page.
In your overnight run create a sitemap file along with your JSON data rows. That lets you control exactly how you want search engines to present your data.
I'm currently building a web-app which displays data from .csv files for the user, where they are edited and the results stored in a mySQL database.
For the next phase of the app I'm looking at implementing the functionality to write the results into **existing .DBF** files using PHP, as well as the MySQL database.
Any help would be greatly appreciated. Thanks!
Actually there's a third route which I should have thought of before, and is probably better for what you want. PHP, of course, allows two or more database connections to be open at the same time. And I've just checked, PHP has an extension for dBase. You did not say what database you are actually writing to (several besides the original dBase use .dbf files), so if you have any more questions after this, state what your target database actually is. But this extension would probably work for all of them, I imagine, or check the list of database extensions for PHP at http://php.net/manual/en/refs.database.php. You would have to try it and see.
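To give a flavour of what that looks like, here is a sketch only: it assumes the PECL dbase extension is installed, that the .dbf file already exists, and the record values shown are placeholders you'd replace to match your real field order and formats.

```php
<?php
// Sketch: requires the PECL dbase extension; the path and record values are placeholders.
$db = dbase_open('/path/to/results.dbf', 2);   // mode 2 = read/write
if ($db === false) {
    die('Could not open the DBF file');
}

// Values must be in the exact field order of the .dbf structure, passed as strings.
dbase_add_record($db, ['ACME FUND', '20240131', '123.45']);

dbase_close($db);
```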
Then, to give an idea of how to open two connections at once, here's a code snippet (it actually has Oracle as the second DB, but it shows the basic principles):
http://phplens.com/adodb/tutorial.connecting.to.multiple.databases.html
There's a fair bit of guidance and even tutorials on the web about multiple database connections from PHP, so take a look at them as well.
This is a standard kind of situation in data migration projects - how to get data from one database to another. The answer is you have to find out what format the target files (in this case the format of .dbf files) need to be in, then you simply collect the data from your MySQL file, rearrange it into the required format, and write a new file using PHP's file writing functions.
I am not saying it's easy to do; I don't know the format of .dbf files (it was a format used by dBase, but it has been used elsewhere as well). You not only have to know the format of the .dbf records; if you were creating new files there would almost certainly be header info to write as well (though you say the files are pre-existing, so that shouldn't be a problem for you). The records themselves may also carry a small amount of header data, which you would need to work out and write for each one in the required form.
So you need to find out the exact format of .dbf files - no doubt Googling will find you info on that. But I understand even .dbf files can have various differences, in which case you would need to look at the structure of your existing files to resolve those if needed.
The alternative solution, if you don't need instant copying to the target database, is that it may have an option to import data from CSV files, which is much easier - and you already have CSV files. But presumably the order of data fields in those files is different from the order of fields in the target database (unless they came from the target database, but then presumably you wouldn't be trying to write them back unless they are archived records). The point I'm making, though, is that you can write the data into CSV files from the PHP program, in the field order required by your target database, then read them into the target database as a separate step - a two-stage process, in other words. This is particularly suitable for migrations where you are doing a one-off transfer to the new database.
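A sketch of that two-stage idea (all the table, column, and file names below are invented for illustration):

```php
<?php
// Sketch: export rows as CSV in the column order the target database expects.
// Table, column, and path names are hypothetical.
$pdo  = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'user', 'pass');
$rows = $pdo->query('SELECT code, name, amount FROM results')->fetchAll(PDO::FETCH_ASSOC);

$out = fopen('/path/to/export.csv', 'w');
foreach ($rows as $row) {
    // The order of this array is the field order the target database will import.
    fputcsv($out, [$row['code'], $row['name'], $row['amount']]);
}
fclose($out);
```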
All in all you have a challenging but interesting project!
There is a MySQL database table containing Persons records. They are filled once and never change afterwards, but they will be used many times in various ways for display. The important point is that there are more than 1000 records, and they will be shown with pagination, sorted variously by Name, LName, Address, Code, etc. They will also be used in a search form, and the search form has filter options.
Given the above, and since many database (MySQL) features are unnecessary here (i.e. connections, inserts, edits), please advise an alternative data source that I could generate from the database after each data change and then use for display and search.
I was thinking about a text file in JSON format, but it's not a good idea because preparing it would use RAM and CPU.
If it matters, the language is PHP.
So:
I need an alternative: a source file created from one big MySQL table's records, used read-only but still supporting sort, search, and filter options, without any insert, modify, or connection features.
Maybe SQLite is what you are looking for?
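For instance, a small PHP script run after each data change could rebuild a read-only SQLite file from the MySQL table - a rough sketch, with all table and column names as placeholders for your actual schema:

```php
<?php
// Sketch: rebuild a read-only SQLite file from the MySQL table after each data change.
// Table and column names are placeholders.
$mysql  = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'user', 'pass');
$sqlite = new SQLite3('/path/to/persons.sqlite');

$sqlite->exec('DROP TABLE IF EXISTS persons');
$sqlite->exec('CREATE TABLE persons (id INTEGER PRIMARY KEY, name TEXT, lname TEXT, address TEXT, code TEXT)');

$insert = $sqlite->prepare('INSERT INTO persons VALUES (:id, :name, :lname, :address, :code)');
$sqlite->exec('BEGIN');
foreach ($mysql->query('SELECT id, name, lname, address, code FROM persons') as $row) {
    $insert->bindValue(':id', $row['id'], SQLITE3_INTEGER);
    $insert->bindValue(':name', $row['name']);
    $insert->bindValue(':lname', $row['lname']);
    $insert->bindValue(':address', $row['address']);
    $insert->bindValue(':code', $row['code']);
    $insert->execute();
    $insert->reset();
}
$sqlite->exec('COMMIT');
```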
If you don't need to provide editing on each record, but just need to display and sort them, you could still use the database for storage, but fetch all (or part) of the dataset and display it on a page using Datatables (https://datatables.net/).
That plugin will let you sort, search, and so on right in the browser.
So I have an old website which was coded over an extended period of time but has been inactive for 3 or so years. I have the full PHP source to the site, but the problem is I do not have a backup of the database any longer. I'm wondering what the best solution to recreating the database would be? It is a large site so manually going through each PHP file and trying to keep track of which tables are referenced is no small task. I've tried googling for the answer but have had no luck. Does anyone know of any tools that are available to help extract this information from the PHP and at least give me the basis of a database skeleton? Otherwise, has anyone ever had to do this? Any tips to help me along and possibly speed up the process? It is a mySQL database I'm trying to use.
The way I would do it:
Write a dummy subset of mysqli (or whatever interface was used to access the DB) to intercept all DB accesses.
Replace all DB accesses with your dummy version.
The basic idea is to emulate the DB so that the PHP code runs long enough to trigger the various DB accesses, which in turn will let you analyze the way the DB is built and used (see the sketch after the list below).
From within these dummy functions:
print the SQL code used
regenerate just enough dummy results to let the rest of the code run, based on the tables and fields mentioned in the query parameters and the PHP code that retrieves them (you won't learn much from a SELECT *, but you can see what fields the PHP code expects to get from it)
once you have understood enough of the DB structure, recreate the tables and let the original code work on them little by little
have the previous designer flogged to death for not having provided a way to recreate the DB programmatically
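A bare-bones sketch of what one of those dummy functions could look like (the function name, log path, and fake row are all placeholders):

```php
<?php
// Sketch of a dummy DB-access function: log the SQL, return just enough fake data
// to keep the calling code running. All names here are placeholders.
function dummy_query(string $sql): array
{
    file_put_contents('/tmp/query.log', $sql . PHP_EOL, FILE_APPEND);

    // Watching which array keys the calling PHP code reads from these rows
    // tells you the column names it expects, even for SELECT * queries.
    return [
        ['id' => 1, 'name' => 'placeholder'],
    ];
}
```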
There are currently two answers based on the information you provided.
1) you can't do this
PHP is a typeless language. You could check your SQL statements to find field and table names, but the result will not be complete. If there is a SELECT * FROM table, you can't see the fields, so you need to check where the PHP code accesses the fields, maybe by name or by index. You can consider yourself lucky if this is done by name, because then you can extract the names of the fields. Finally, the data types will be missing, as will information about what columns are indexed, what the primary keys are, constraints, etc.
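If you do want to attempt that partial extraction, something like the following pulls candidate table names out of the PHP sources; it's a rough, single-directory sketch where the path is a placeholder, and you would want to recurse into subfolders for a large site.

```php
<?php
// Rough sketch: scan PHP sources for table names following FROM/JOIN/INTO/UPDATE.
// Single directory only; recurse into subdirectories as needed. Path is a placeholder.
$tables = [];
foreach (glob('/path/to/site/*.php') as $file) {
    $src = file_get_contents($file);
    if (preg_match_all('/\b(?:FROM|JOIN|INTO|UPDATE)\s+`?(\w+)`?/i', $src, $matches)) {
        foreach ($matches[1] as $table) {
            $tables[$table] = true;   // de-duplicate table names
        }
    }
}
print_r(array_keys($tables));
```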
2) easy, yes you can!
Because your PHP is using a modern framework which contains an ORM, and the ORM created the database for you, the meta information is included in the PHP classes/design.
Just check the framework's manual for how to recreate the database.