Dashcode mysql datasource binding - php

Hi, I've got a tricky question (aren't they all tricky?).
I'm converting a database-driven site that uses PHP to a site being built with Dashcode.
The current site selects data held in a MySQL database and dynamically creates the page content. Originally this was done to reduce site maintenance, because all the content could be maintained and checked offline before uploading to the live database, thereby avoiding code changes.
In Dashcode you can work from a JSON file as a datasource - which is fine, it works - except for the maintenance aspect. The client is not willing (and I understand why) to update several hundred lines of fairly structured JS object code when the database holds the data and is updated from elsewhere.
So - What's the best way to get Dashcode to link to the database data?

Where are you getting the JSON from? Is it being generated from the original MySQL? Could you not generate the JSON from MySQL and therefore keep the original maintenance procedure prior to uploading to MySQL?
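As a minimal sketch of that idea (the table, column and credential names here are made up, not taken from your site), a small PHP endpoint can dump the relevant rows as JSON for Dashcode to read:

<?php
// Hypothetical export script: query the content table and emit it as JSON.
// Table, columns and credentials are placeholders - adjust to the real schema.
$db = new mysqli('localhost', 'user', 'password', 'site_content');
if ($db->connect_error) {
    die('Database connection failed');
}

$result = $db->query('SELECT id, title, body FROM content ORDER BY id');
$items  = array();
while ($row = $result->fetch_assoc()) {
    $items[] = $row;
}

header('Content-Type: application/json');
echo json_encode(array('items' => $items));

That keeps the database as the single source of truth and makes the JSON a generated artifact rather than something anyone edits by hand.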

For my projects I usually create a PHP intermediary that, when accessed, logs into the MySQL database and parses the results into XML in the body of the page. Just point Dashcode to the PHP file in the data source. Parameters can even be passed into the PHP script via GET through the URL in the data source.
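A rough sketch of that kind of intermediary (the table, columns and the "category" GET parameter are assumptions for illustration), with the Dashcode data source pointed at something like data.php?category=news:

<?php
// Hypothetical intermediary: read a GET parameter, query MySQL, emit XML.
$category = isset($_GET['category']) ? $_GET['category'] : '';

$db   = new mysqli('localhost', 'user', 'password', 'site_content');
$stmt = $db->prepare('SELECT title, body FROM content WHERE category = ?');
$stmt->bind_param('s', $category);
$stmt->execute();
$stmt->bind_result($title, $body);

header('Content-Type: text/xml');
echo '<?xml version="1.0" encoding="UTF-8"?>';
echo '<items>';
while ($stmt->fetch()) {
    echo '<item>'
       . '<title>' . htmlspecialchars($title) . '</title>'
       . '<body>' . htmlspecialchars($body) . '</body>'
       . '</item>';
}
echo '</items>';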

Related

How to optimize writing lots of data to database in WP plugin

I'm developing a WordPress plugin in PHP 7.4 that pulls data from an API and writes it into the WP database. This data is later used by the Custom Post Types and Advanced Custom Fields plugins (mentioning this just in case it's relevant), which present it to the user via different pages on a website.
The problem I'm facing is that with big amounts of data, the script that loads that data (called via Ajax) will just crash at some point and return a generic error. After a lot of researching, testing and talking to the hosting provider, the only conclusion is that the script is running out of memory, even though we're giving it as much as we possibly can. When the script loads smaller amounts of data, it works perfectly fine, supporting the lack-of-memory theory.
Since I'd rather optimize the code than have someone pay more for hosting, I've been trying different strategies, but I can't seem to find one that makes a significant impact, so I was hoping to get some ideas.
Something to know about the API and the process that currently runs when loading data: the data that's pulled every time is a refresh of pre-existing data (think a large set of records), so most of the data already exists in the database. The problem is that, because of how the API is implemented, there is NO WAY to know which data has changed and which hasn't (that's been thoroughly discussed with the API provider).
Things that I'm already doing to optimize the script:
Compare records coming from the API to records already existing in the database, so that I only save/update the new ones (I do this via an in_array() comparison of the ID of the record vs the IDs of the existing records)
Fetch from the API only the strictly necessary fields from every record
Skip native WP functions that store data to the database whenever possible, using custom functions to directly write to database, to avoid WP performance overhead
This doesn't seem to be enough for big amounts of data. Fetching the data itself doesn't seem to be the problem regardless of size; it's processing the data and entering it into the database. So I guess I'm looking for strategies that would help optimize the processing of big chunks of data in this situation.
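For reference, a minimal sketch of the comparison/insert step described above (the custom table, column names and the shape of $api_records are assumptions, since the real data actually goes through CPTs/ACF), with the in_array() scan swapped for an array_flip()/isset() lookup and the payload processed in chunks:

<?php
// Hypothetical sketch - table and columns are placeholders.
global $wpdb;
$table = $wpdb->prefix . 'my_api_records';

// One query for all existing IDs, flipped into a hash map so each lookup is
// O(1) instead of in_array()'s linear scan over the whole ID list.
$existing = array_flip( $wpdb->get_col( "SELECT api_id FROM {$table}" ) );

// Process the API payload in chunks so the full set is never held in memory
// alongside WP's own structures.
foreach ( array_chunk( $api_records, 500 ) as $chunk ) {
    foreach ( $chunk as $record ) {
        if ( isset( $existing[ $record['id'] ] ) ) {
            continue; // already stored, skip
        }
        $wpdb->insert( $table, array(
            'api_id' => $record['id'],
            'data'   => wp_json_encode( $record ),
        ) );
    }
    unset( $chunk ); // let PHP reclaim each chunk before the next pass
}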
EDIT:
To clarify the AJAX aspect: the PHP script (the API data importer) that gets called via AJAX is also called via CRON by a different process, and it fails as well when dealing with big amounts of data.

Optimize DB connectivity for AngularJS app

I have an AngularJS v1 app that connects to an Oracle DB and takes values from it.
At the beginning, each page had just a few such values, so optimization was not an issue. But now I have pages that contain dozens of elements with values taken from the DB.
Currently, for each element, the name of the element is passed to a PHP file that opens a connection to the DB, reads the last value (using rownum and order by time) and returns this value back to AngularJS.
So, as you can imagine, it takes quite a while to display all those dozens of values on a page (due to the fact that AngularJS first loads all the values and then displays them all together).
I would like to somehow optimize this connectivity.
The example of a page can be seen at: NP04 Photon Detectors
And the code can be found here: Github
The PHP file in question is: app/php-db-conn/elementName.conn.php
Thanks!
UPDATE
I have updated the code to partly retrieve the data using an array, but it seems to me that I'm forming the JSON in an incorrect way. Could someone please help me out?
UPDATE 2
Managed to make it all work, but now it takes even longer to load than before, even though it transfers less data.
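For what it's worth, here is a minimal sketch of the one-connection, one-query shape that usually helps in this situation (the table/column names, the "names" GET parameter and the OCI8 usage are illustrative assumptions, not taken from the linked repo):

<?php
// Hypothetical endpoint: fetch the latest value for every requested element
// in a single connection and a single query, then return one JSON object.
// Assumes at least one element name is passed, e.g. ?names=ELEM_A,ELEM_B
$names = isset($_GET['names']) ? explode(',', $_GET['names']) : array();

$conn = oci_connect('user', 'password', '//dbhost/service');

// Build :n0, :n1, ... placeholders so the IN list can be bound safely.
$binds = array();
foreach ($names as $i => $name) {
    $binds[':n' . $i] = $name;
}
$placeholders = implode(',', array_keys($binds));

$sql = "SELECT element_name, value FROM (
          SELECT element_name, value,
                 ROW_NUMBER() OVER (PARTITION BY element_name ORDER BY time DESC) rn
          FROM measurements
          WHERE element_name IN ($placeholders)
        ) WHERE rn = 1";

$stmt = oci_parse($conn, $sql);
foreach ($binds as $placeholder => $value) {
    oci_bind_by_name($stmt, $placeholder, $binds[$placeholder]);
}
oci_execute($stmt);

$out = array();
while ($row = oci_fetch_assoc($stmt)) {
    $out[$row['ELEMENT_NAME']] = $row['VALUE'];
}

header('Content-Type: application/json');
echo json_encode($out);   // e.g. {"ELEM_A":"12.3","ELEM_B":"4.5"}

AngularJS then makes one HTTP request for the whole page instead of one per element, which removes both the repeated connection overhead and the per-request latency.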

Desired way of running a web scraping cronjob within a Laravel web app to save data to DB

Currently working on a new web service that requires me to scrape a site every couple of hours and save the data into my MySQL database.
My question is - how should my scraper run?
For now I see a few ways:
The cronjob runs a scraping script written in PHP, scrapes the data, and saves the data into a flat file (e.g. a CSV), which I then set up a Controller to parse and have my Model save the data
The cronjob runs a scraping script written in PHP, scrapes the data, and immediately saves the data into my DB as each row of data comes in
Of the two methods above, which one is better? If I am simply talking out of my ass, could you please suggest a better way to:
Scrape the data
Save the data to my DB
Of the two options for saving scraped data, if I were you, I would go with the second one. The reason is simply that it is easier to manage the scraped data once it is already in the DB -- it saves you the burden of generating and using temporary files.
Saving (appending new data) to a flat file may be faster than inserting into the DB. But when time/performance is critical, you can either run your cronjob more frequently or run multiple copies of it (say, each of them scraping different websites or different web pages).
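A rough sketch of what the second option could look like as a scheduled artisan command (the URL, table/column names and the parsing helper are placeholders, not a definitive implementation):

<?php
namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;

class ScrapeSite extends Command
{
    protected $signature = 'scrape:run';
    protected $description = 'Scrape the target site and store rows in MySQL';

    public function handle()
    {
        // Placeholder URL; swap in your HTTP client / scraper of choice.
        $html = file_get_contents('https://example.com/listing');

        // parseRows() stands in for whatever DOM/regex parsing you use and is
        // assumed to return arrays like ['external_id' => ..., 'title' => ...].
        foreach ($this->parseRows($html) as $row) {
            DB::table('scraped_items')->updateOrInsert(
                ['external_id' => $row['external_id']],           // match on this
                ['title' => $row['title'], 'updated_at' => now()] // insert/update these
            );
        }
    }

    private function parseRows($html)
    {
        // ... site-specific parsing elided ...
        return array();
    }
}

You would then register it in app/Console/Kernel.php with something like $schedule->command('scrape:run')->cron('0 */2 * * *'); so it runs every couple of hours, and each row lands in the DB as soon as it is parsed -- no intermediate CSV to manage.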

How do I export my mongodb collection into a table on my website?

I want to create a very simple table that lists all the data in a mongodb database.
The database is hosted locally and updated every minute with information scraped by scrapy.
There are two pieces of data that will populate this table; apart from the "_id" element, those are the only fields in the database.
Because new data will be added frequently but irregularly, I was thinking the data should be pulled only when the website is loaded.
Currently the webpage is nothing more than an HTML file on my computer, and I'm still in the process of learning how to host it. I'd like to have the database accessible before making the website available, since presenting this information is its primary function.
Should I write a php script to pull the data?
Is there a program that already does this?
Do you know of any good tutorials that would be able to break the process down step-by-step?
If you are just looking to export the data into a file (like a csv) you could try this:
http://blogs.lessthandot.com/index.php/datamgmt/dbprogramming/mongodb-exporting-data-into-files/
The CSV may be more useful if you are planning to analyze the data offline.
Otherwise, you could write a script in PHP or Node.js that connects to the database, finds all the records and displays them.
The Node function you would want is called find:
http://mongodb.github.io/node-mongodb-native/api-generated/collection.html#find
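If you go the PHP route instead, a minimal sketch (assuming the mongodb/mongodb Composer library, with made-up database, collection and field names) could be as small as this:

<?php
// Hypothetical page: pull every document and render it as a plain HTML table.
require 'vendor/autoload.php';

$client     = new MongoDB\Client('mongodb://localhost:27017');
$collection = $client->scrapydb->items;   // placeholder database/collection

echo "<table>\n<tr><th>Field A</th><th>Field B</th></tr>\n";
foreach ($collection->find() as $doc) {
    echo '<tr><td>' . htmlspecialchars($doc['field_a']) . '</td><td>'
       . htmlspecialchars($doc['field_b']) . "</td></tr>\n";
}
echo "</table>\n";

Because find() runs on every page load, the table always reflects whatever scrapy wrote most recently, which matches the "pull only when the website is loaded" idea.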

Caching data using xml

I'm presently working on an e-learning project using custom PHP (not any framework or CMS). For the content of one page of my project I have to fetch about 1000 records from the database. Presently I'm using pagination on that page and displaying 100 records per page. Now I'm thinking that if I fetch all the data from the database at once and store it in XML, then when the user moves between the pages of the pagination the data will be fetched from the XML rather than the database; that may be good in the sense that it will reduce database hits. But my concern is whether the XML parsing may affect my project's execution time. If there is any better idea, please share it with me.
My project's environment is as below:
php 5
Mysql
Jquery
This still sounds inefficient, since you still have to parse the XML.
I believe the most efficient way to do it (optimised for page views) would be to pre-generate the HTML of your lists.
That means every time the database changes, you re-create the HTML, but only once.
Then all you do is serve that HTML from your web server without any script executing.
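A rough sketch of that pre-generation step (table/column names, page size and cache path are all made up here):

<?php
// Hypothetical regeneration script: run it whenever the content changes
// (e.g. from the admin "save" action or a cron job), not on page views.
$db = new mysqli('localhost', 'user', 'password', 'elearning');

$perPage = 100;
$row     = $db->query('SELECT COUNT(*) AS c FROM lessons')->fetch_assoc();
$pages   = (int) ceil($row['c'] / $perPage);

for ($page = 1; $page <= $pages; $page++) {
    $offset = ($page - 1) * $perPage;
    $result = $db->query("SELECT title, body FROM lessons ORDER BY id LIMIT $offset, $perPage");

    $html = '<ul>';
    while ($item = $result->fetch_assoc()) {
        $html .= '<li><h3>' . htmlspecialchars($item['title']) . '</h3>'
               . '<p>' . htmlspecialchars($item['body']) . '</p></li>';
    }
    $html .= '</ul>';

    // Each paginated page then just serves (or include()s) its cached file.
    file_put_contents("cache/list-page-$page.html", $html);
}

The pagination links point at the cached files (or a thin PHP wrapper that include()s them), so normal page views never touch MySQL or parse anything.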
