Save data from an API call in MySQL with PHP? - php

I have a database table with the names of businesses and want to take advantage of some of the APIs that are available. What I am wondering is how I could write a PHP script that would take my data (business name and maybe address), process it against the API, and store the resulting data in a MySQL database.
I would like to, in one script, run all the business data in my database against an API and store the results back in another table.
Thanks in advance for any and all help!

Creating a caching layer is a good idea to prevent unnecessary API calls, but I'd recommend keeping it out of the database, because then you'd be making a DB call and simply trading overhead A for overhead B. I'd recommend using Memcached, or even saving the API data in a text file.
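For the question itself, a rough sketch of the kind of one-off script described above might look like this. The API endpoint, table names, and columns are placeholders rather than a specific service:
<?php
// Hedged sketch: enrich each business via a placeholder API and store the result.
// The endpoint, credentials, tables, and columns are assumptions for illustration.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$businesses = $pdo->query('SELECT id, name, address FROM businesses')->fetchAll(PDO::FETCH_ASSOC);
$insert = $pdo->prepare(
    'INSERT INTO business_api_data (business_id, api_response, fetched_at)
     VALUES (:id, :response, NOW())'
);

foreach ($businesses as $business) {
    // Placeholder endpoint; substitute the real API you want to call.
    $url = 'https://api.example.com/lookup?' . http_build_query([
        'name'    => $business['name'],
        'address' => $business['address'],
    ]);

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    curl_close($ch);

    if ($response !== false) {
        // Store the raw response; parse it into separate columns later if needed.
        $insert->execute([':id' => $business['id'], ':response' => $response]);
    }

    usleep(200000); // be polite to the API: at most ~5 requests per second
}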

Related

Temporary storage in PHP

I am making a program that poses a question and lets users answer it. This is an Internet of Things project, and as such there will be a lot of calls to the database, each of which only wants to increase the count for answer A or B by one. For the record, I am using PHP and MySQL, and there are many unique items that send update requests to the server.
What can I do to reduce the calls to the database?
The solution I came up with was to somehow store the data on the server, then sync it with the database at a scheduled interval.
To update the results I only need to know three things: the item ID and both result counts. For code clarity and simplicity I made a model object with those attributes.
So far I have come up with or found several ideas:
Sessions - make the session hold an array and just put the model objects inside it
Create a file on the server that would store the data
Use superglobal variables
Create a PHP class that holds an array of these model objects and interact with that class
Use some API - but then I would be completely dependent on it working
Which of these solutions would be the best in terms of simplicity, security, and performance, or is there a better way to do this whole thing?
You can have one database (or table) for the raw data, and once it is authorized you can sync it into the actual database your application needs. After the sync, you can run a job that deletes the unwanted data from the raw store. Let me know if this works for you.
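A rough sketch of that sync job, assuming a hypothetical answer_events staging table and an answers table holding the counters; a cron job could run something like this on a schedule:
<?php
// Hedged sketch of the periodic sync: fold raw rows from a staging table into
// the main counters table, then delete what has been synced.
// Table and column names are assumptions for illustration.
$pdo = new PDO('mysql:host=localhost;dbname=iot;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    // Only sync rows that exist right now; new rows keep accumulating meanwhile.
    $maxId = (int) $pdo->query('SELECT COALESCE(MAX(id), 0) FROM answer_events')->fetchColumn();

    if ($maxId > 0) {
        $rows = $pdo->prepare(
            'SELECT item_id, SUM(answer_a) AS a_votes, SUM(answer_b) AS b_votes
             FROM answer_events WHERE id <= :max GROUP BY item_id'
        );
        $rows->execute([':max' => $maxId]);

        $update = $pdo->prepare(
            'UPDATE answers SET answer_a = answer_a + :a, answer_b = answer_b + :b WHERE item_id = :item'
        );
        foreach ($rows->fetchAll(PDO::FETCH_ASSOC) as $row) {
            $update->execute([':a' => $row['a_votes'], ':b' => $row['b_votes'], ':item' => $row['item_id']]);
        }

        // Remove the raw rows that have just been folded into the main table.
        $delete = $pdo->prepare('DELETE FROM answer_events WHERE id <= :max');
        $delete->execute([':max' => $maxId]);
    }
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}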

Suggestion on copying data from server database to android device sqlite

I'm developing an Android app for salesmen, so they can use their device to save their orders. Specifically, every morning the salesman goes to the office and fetches the data for that day.
Currently, I get the data by sending a request to a PHP file and, as is common practice, insert that data into SQLite on the Android device so the app can work offline. However, with the current approach the device needs 6-8 seconds to get the data and insert it into SQLite. As the data grows bigger, I think it will only get slower. What I have found is that inserting the data into SQLite takes quite a large share of that time.
So I've been thinking about dumping all the data the salesman needs into an SQLite file, so I could send only that file, which I guess would be more efficient. Can you please point me to how to do that? Or is there any other, more efficient approach to this issue?
Note:
Server DB: MySQL
Server: PHP
You can take different approaches here to improve loading speed:
If your data can be pre-loaded with the APK, you can ship it inside the .apk; when the user downloads the app the data is already there, and you only need to fetch the remaining updated data.
If you need fresh data every time, you can fetch it from the server in chunks over multiple calls, storing each chunk in the database and updating the UI as it arrives (see the server-side sketch after this answer).
If there is not too much data (say, 200-300 records, which is not that much), you can do a simple thing:
When the network call returns, pass the data objects to the database for storing and, at the same time (before they are stored in the DB), return the entire list to the Activity/Fragment, so it already has the data and can show it to the user; meanwhile, the data gets stored in the database.
Also, you can take a NoSQL-style approach inside SQLite: store each record as one serialized object so you don't need to map every field (which is costly compared to storing the whole object), and whenever it is required, just fetch it from the DB and parse it as needed.
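A minimal server-side sketch of the chunked approach mentioned above, assuming a hypothetical orders table; the app would request increasing page numbers until it gets back an empty array:
<?php
// Hedged sketch of a paginated endpoint for chunked sync (hypothetical table/columns).
$pdo = new PDO('mysql:host=localhost;dbname=sales;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false); // so LIMIT/OFFSET bind as integers

$pageSize = 500;
$page     = max(0, (int) ($_GET['page'] ?? 0));

$stmt = $pdo->prepare(
    'SELECT id, customer, product, quantity
     FROM orders
     ORDER BY id
     LIMIT :limit OFFSET :offset'
);
$stmt->bindValue(':limit', $pageSize, PDO::PARAM_INT);
$stmt->bindValue(':offset', $page * $pageSize, PDO::PARAM_INT);
$stmt->execute();

// The device inserts each chunk into SQLite (inside a transaction) as it arrives.
header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));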
Thanks to #Skynet for mentioning transactions; it improves the process a lot, so I'll stay with this approach for now.
You can do something like this:
db.beginTransaction();
try {
    saveCustomer();                 // all inserts happen inside one transaction
    db.setTransactionSuccessful();  // mark the transaction as successful
} catch (Exception e) {
    // Error in the middle of the database transaction; changes will be rolled back
} finally {
    db.endTransaction();            // commits if successful, rolls back otherwise
}
For more explanation, see Android Database Transaction.

Access and store large amount of data from mysql server

We are developing an iOS/Android application which downloads large amounts of data from a server.
We're using JSON to transfer data between the server and client devices.
Recently the size of our data increased a lot (about 30000 records).
When fetching this data, the server request gets timed out and no data gets fetched.
Can anyone suggest the best method to achieve a fast transfer of data?
Is there any method to prepare data initially and download data later?
Is there any advantage to using multiple databases on the device (SQLite DBs) and performing parallel insertion into them?
Currently we are downloading/uploading only changed data (using UUID and time-stamp).
Is there any best approach to achieve this efficiently?
---- Edit -----
I think it's not only a problem of the number of MySQL records; at peak times multiple devices connect to the server to access data, so connections also end up waiting. We are using a high-performance server. I am mainly looking for a solution to handle this sync on the device. Is there any good method to simplify the sync or make it faster using multithreading, multiple SQLite DBs, etc.? Or data compression, using views, or ...?
A good way to achieve this would probably be to download no data at all.
I guess you won't be showing these 30k rows on the client, so why download them in the first place?
It would probably be better to create an API on your server that helps the mobile devices communicate with the database, so the clients only download the data they actually need / want.
Then, with a cache system on the mobile side, you can make sure that clients won't download the same thing every time and that content they have already seen is available offline.
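A rough sketch of what such an endpoint could look like on the PHP side, returning only rows changed since the client's last sync; the records table and its uuid/updated_at columns are placeholders, not the asker's actual schema:
<?php
// Hedged sketch: return only records modified since the timestamp the client sends.
$pdo = new PDO('mysql:host=localhost;dbname=appdata;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$since = $_GET['since'] ?? '1970-01-01 00:00:00';

$stmt = $pdo->prepare(
    'SELECT uuid, payload, updated_at
     FROM records
     WHERE updated_at > :since
     ORDER BY updated_at
     LIMIT 1000'
);
$stmt->execute([':since' => $since]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

// The client stores the greatest updated_at it received and sends it next time.
header('Content-Type: application/json');
echo json_encode(['rows' => $rows, 'count' => count($rows)]);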
When fetching this data, the server request gets timed out and no data gets fetched.
Are you talking only about reads or writes, too?
If you are talking about write access as well: are the 30,000 records the result of a single insert/update? Are you using a transactional engine such as InnoDB? If so, are your queries wrapped in a single transaction? Having autocommit mode enabled can lead to massive performance issues:
Wrap several modifications into a single transaction to reduce the number of flush operations. InnoDB must flush the log to disk at each transaction commit if that transaction made modifications to the database. The rotation speed of a disk is typically at most 167 revolutions/second (for a 10,000RPM disk), which constrains the number of commits to the same 167th of a second if the disk does not “fool” the operating system.
Source
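In PHP, that advice amounts to something like the following sketch, which wraps the whole bulk insert in a single transaction so thousands of rows trigger one commit instead of one log flush per row; the items table, its columns, and the JSON upload format are assumptions for illustration:
<?php
// Hedged sketch: wrap many inserts in one InnoDB transaction (hypothetical table).
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Assumed upload format: a JSON array of {"uuid": ..., "payload": ...} objects.
$records = json_decode(file_get_contents('php://input'), true) ?: [];

$stmt = $pdo->prepare('INSERT INTO items (uuid, payload, updated_at) VALUES (:uuid, :payload, NOW())');

$pdo->beginTransaction();            // one transaction instead of 30,000 autocommits
try {
    foreach ($records as $record) {
        $stmt->execute([':uuid' => $record['uuid'], ':payload' => $record['payload']]);
    }
    $pdo->commit();                  // single log flush at the end
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}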
Can anyone suggest the best method to achieve a fast transfer of data?
How complex are your queries? Inner or outer joins, correlated or non-correlated subqueries, etc.? Use EXPLAIN to inspect their efficiency. Read about EXPLAIN
Also, take a look at your table design: Have you made use of normalization? Are you indexing properly?
Is there any method to prepare data initially and download data later?
What do you mean by that? Maybe temporary tables could do the trick.
But without knowing any details of your project, downloading 30,000 records to a mobile device at one time sounds weird to me. Probably your application/DB design needs to be reviewed.
Anyway, for any data that does not need to be updated/inserted directly in the server database, use a local SQLite DB on the mobile device. This is much faster, as SQLite is a file-based DB and the data doesn't need to be transferred over the net.

How can I handle 5M Transactions every day with MySQL and the whole LAMP?

Well, maybe 5M is not that much, but each transaction needs to receive an XML document based on the following schema:
http://www.sat.gob.mx/sitio_internet/cfd/3/cfdv3.xsd
Therefore I need to save almost all of the information per row. By law we are required to keep the information for a very long time, so eventually this database will be very, very big.
Maybe create a table every day? Something like _invoices_16_07_2012.
Well, I'm lost. I have no idea how to do this, but I know it is possible.
On top of that, I need to create a PDF and 2 more files based on each XML and keep them on disk.
And the files should be quickly retrievable through a web site.
That's a lot of data to put into one field in a single row (not sure if that is something you were thinking about doing).
Write a script that parses the XML document and saves each value from the XML in a separate field, or in whatever way makes sense for you (so you'll have to create a table with all the appropriate fields). You should be able to store your data as one row per XML document.
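A rough sketch of what that parsing script might look like with SimpleXML and PDO; the table and the attribute names read from the XML are placeholders, not the actual CFDI fields:
<?php
// Hedged sketch: parse one XML invoice and store it as a single row.
// Column names and XML attribute names are placeholders for illustration.
$pdo = new PDO('mysql:host=localhost;dbname=invoices;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$xml = simplexml_load_file('invoice.xml');
if ($xml === false) {
    exit("Could not parse XML\n");
}

$stmt = $pdo->prepare(
    'INSERT INTO invoices (serie, folio, fecha, total, raw_xml)
     VALUES (:serie, :folio, :fecha, :total, :raw)'
);
$stmt->execute([
    ':serie' => (string) $xml['serie'],   // attributes read off the root element
    ':folio' => (string) $xml['folio'],
    ':fecha' => (string) $xml['fecha'],
    ':total' => (float)  $xml['total'],
    ':raw'   => $xml->asXML(),            // keep the original document as well
]);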
You'll also want to shard your database and spread it across a cluster of servers and many tables. MySQL does support this, but I've only bootstrapped my own sharding mechanism before.
Do not create a table per XML document, as that is overkill.
Now, why do you need MySQL for this? Are you querying the data in the XML? If you're storing this data simply for archival purposes, you don't need MySQL, but can instead compress the files into, say, a tarball and store them directly on disk. Your website can easily fetch the file this way.
If you do need a big data store that can handle 5M transactions with as much data as you're saying, you might also want to look into something like Hadoop and store the data in a Distributed File System. If you want to more easily query your data, look into HBase which can run on top of Hadoop.
Hope this helps.

Flat File or Database to store small amounts of records. Which would be faster for many connections/users

I have a web application that will support a large number of connections.
Each time a session is created, or a refresh is called, I run a service that collects data and stores it for viewing. Then, using PHP, I read this data from somewhere and display it back to the user.
My question is: if I'm only reading and writing a single table with 5 columns and 50-100 rows (per user), would it be faster to store this information in flat file(s) and read from those?
You'll only know for sure by benchmarking it, but keep in mind that the developers of the RDBMS systems have already taken care of the necessary optimizations to move data in and out of database tables, and MySQL has a strong API for PHP, supporting transactional writes.
I would go for the database over flat files for sure, but benchmark your own situation.
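A very rough way to benchmark the two options against your own workload might look like the following sketch; the file path, the bench_rows table (assumed to already exist with columns a-e), and the row shape are all placeholders:
<?php
// Hedged micro-benchmark sketch: time N write/read cycles against a flat file
// and against a small MySQL table. All names are placeholders.
$n   = 1000;
$row = ['a' => 1, 'b' => 'text', 'c' => 3.14, 'd' => 1, 'e' => 'more text'];

// Flat file: one JSON file per user/session.
$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    file_put_contents('/tmp/bench_user.json', json_encode($row));
    $data = json_decode(file_get_contents('/tmp/bench_user.json'), true);
}
printf("flat file: %.3f s\n", microtime(true) - $start);

// MySQL: a small 5-column table.
$pdo = new PDO('mysql:host=localhost;dbname=bench;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$insert = $pdo->prepare('INSERT INTO bench_rows (a, b, c, d, e) VALUES (:a, :b, :c, :d, :e)');

$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    $insert->execute([':a' => 1, ':b' => 'text', ':c' => 3.14, ':d' => 1, ':e' => 'more text']);
    $data = $pdo->query('SELECT a, b, c, d, e FROM bench_rows LIMIT 100')->fetchAll(PDO::FETCH_ASSOC);
}
printf("mysql: %.3f s\n", microtime(true) - $start);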
