Which is faster? 1 Million lines array or database? - php

In a PHP script, I have a set of params (zip codes / addresses) that will not change frequently, so I'm looking to move this particular DB table to a config file. Would it be faster to read a file containing an array with 1 million lines, with zip codes as keys, or to scan a DB table to get the remaining parts of the address (street, city, state)?
Thanks,

Try to store the data in a database rather than in a file. For a million lines, I'd guess the database is faster than the file.
If you want better performance, you can use a cache like APC or add an index on the zip field in the database.
Sphinx is an open-source search index that gives faster performance for text search.
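For example, here is a minimal sketch of the cache-plus-index combination; the `addresses` table, its `zip` column, and the connection details are placeholders (none of them come from the question), and it assumes the APCu extension is enabled:
<?php
// A minimal sketch, assuming a hypothetical `addresses` table with an
// indexed `zip` column, the APCu extension, and placeholder credentials.
function lookupZip(PDO $pdo, string $zip): ?array
{
    $cacheKey = 'zip_' . $zip;

    // Check the in-memory cache first.
    $cached = apcu_fetch($cacheKey, $hit);
    if ($hit) {
        return $cached;
    }

    // Fall back to an indexed lookup in the database.
    $stmt = $pdo->prepare('SELECT street, city, state FROM addresses WHERE zip = ?');
    $stmt->execute([$zip]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC) ?: null;

    // Keep the result in APCu for an hour.
    apcu_store($cacheKey, $row, 3600);

    return $row;
}

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
var_dump(lookupZip($pdo, '90210'));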

Based on the number of zip codes I'd say go with the DB instead of the associative array. You'll also be able to search by address, zip code, or even ID.

Related

How to find the different lines between 2 files in PHP

I'm working on a small PHP application which updates stock products regularly. I'm getting the updated file from the server, and I have the old one in my directory, so what is the best way to get only the updated products (lines) between these two files? For information, both files contain around 70,000 product lines.
I thought to store the data of each file into an array, then use "array_diff" to compare them. It would work in theory, but is it a good idea with 70,000 entries in each array?
Thanks in advance.
I'd use the diff command.
For reference:
https://www.geeksforgeeks.org/diff-command-linux-examples/
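As a rough sketch, you could shell out to diff from PHP and keep only the changed lines of the new file. This assumes GNU diff (for the group-format options) and placeholder file paths:
<?php
// Rough sketch: shell out to GNU diff instead of loading both 70,000-line
// files into PHP arrays. File paths are placeholders.
$old = '/path/to/products_old.csv';
$new = '/path/to/products_new.csv';

// With GNU diff, these group-format options print only the lines of the
// second (new) file that differ from the first one.
$cmd = sprintf(
    "diff --changed-group-format='%%>' --unchanged-group-format='' %s %s",
    escapeshellarg($old),
    escapeshellarg($new)
);
$changed = shell_exec($cmd);

foreach (array_filter(explode("\n", (string) $changed)) as $line) {
    // Each $line is an updated (or new) product row.
    echo $line, PHP_EOL;
}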

CSV with 56.6 MB of data, store as CSV file or in database?

I am creating a website where I want people to submit location addresses. To avoid spelling mistakes I would like users to select from a list (town name, county).
I came across the following site http://www.doogal.co.uk/london_postcodes.php which allows me to download a 56 MB CSV file containing the location data I need.
However, I have never used a CSV file larger than 1 MB, or with more than 30,000 rows of data, on my websites before. Usually I would just upload it through phpMyAdmin.
What is better: uploading the CSV file to the database via phpMyAdmin, or accessing the '.csv' file directly using PHP?
Regards
It depends on what you want to achieve. I am not so sure that using a CSV is beneficial, since using a database will allow you to:
Cache data
Create Indexes for fast searching
Create complex queries
Do data manipulation, etc
The only way I would think a CSV is better is if you would always use all of the data. Otherwise, I would go for a database. The end result would be much more organized, much faster, and you could build on top of it.
Hope this helps.
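As an illustration of the database route, here is a hedged sketch of a one-time import with LOAD DATA LOCAL INFILE; the table layout, file path, and credentials are assumptions, and local infile has to be allowed on the server:
<?php
// Hedged one-time import sketch. Table layout, file path, and credentials
// are assumptions; LOAD DATA LOCAL INFILE must be allowed by the server.
$pdo = new PDO(
    'mysql:host=localhost;dbname=app;charset=utf8mb4',
    'user',
    'pass',
    [PDO::MYSQL_ATTR_LOCAL_INFILE => true]
);

$pdo->exec("
    CREATE TABLE IF NOT EXISTS postcodes (
        postcode VARCHAR(10)  NOT NULL,
        town     VARCHAR(100) NOT NULL,
        county   VARCHAR(100) NOT NULL,
        PRIMARY KEY (postcode)
    )
");

$pdo->exec("
    LOAD DATA LOCAL INFILE '/path/to/london_postcodes.csv'
    INTO TABLE postcodes
    FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
    IGNORE 1 LINES
    (postcode, town, county)
");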
If you're going to do lookups, I'd recommend you put it into a database table and add indexes on the fields that you will be searching on (https://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html). A flat file is not a good way to store data that you have to access or filter quickly.
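A small follow-up sketch of that indexing advice, reusing the hypothetical postcodes table from the import above; all names are illustrative only:
<?php
// Follow-up sketch reusing the hypothetical `postcodes` table: index the
// searched column, then look it up with a prepared statement.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// One-time: add an index on the column used in the WHERE clause.
$pdo->exec('ALTER TABLE postcodes ADD INDEX idx_town (town)');

$stmt = $pdo->prepare(
    'SELECT DISTINCT town, county FROM postcodes WHERE town LIKE ? ORDER BY town LIMIT 20'
);
$stmt->execute(['Camden%']); // prefix search can still use the index
print_r($stmt->fetchAll(PDO::FETCH_ASSOC));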

Read a CSV file or make a MySQL query

I'm making a CMS where, once a user searches for something, a cache file (CSV) is generated using MySQL, and after that the same CSV is included and served by PHP for the same search.
Now I want to allow users to filter data from that same cache/static file using jQuery.
I have two options:
Make a DB query to generate the result based on the user's filter parameters.
Read that cache/static file (which is in CSV format) and generate the result based on the user's parameters using PHP only.
Both my database and CSV files are small: about 2,000 rows in the MySQL database and at most 500 lines in a CSV file. The average length of a CSV file would be around 50 lines. There will be several (say about 100) CSV files for different searches.
Which technique will be faster and efficient? I'm on a shared host.
Search results are like product information on eCommerce websites.
MySQL servers on shared hosts are most of the time ridiculously overloaded and may be very slow or even unresponsive at times.
If you want a workaround, you could make your PHP script create a CSV file from the data table for the first user of the day, then read the CSV file for the rest of the day.
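Something along these lines; the query, columns, and cache path are made up for the sketch:
<?php
// Sketch of the once-a-day CSV cache; the query, columns, and cache path
// are made up for illustration.
$cacheDir  = __DIR__ . '/cache';
$cacheFile = $cacheDir . '/products_' . date('Y-m-d') . '.csv';

if (!is_file($cacheFile)) {
    // First visitor of the day: rebuild the cache from MySQL.
    if (!is_dir($cacheDir)) {
        mkdir($cacheDir, 0775, true);
    }
    $pdo  = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
    $rows = $pdo->query('SELECT id, name, price FROM products')->fetchAll(PDO::FETCH_ASSOC);

    $fh = fopen($cacheFile, 'w');
    foreach ($rows as $row) {
        fputcsv($fh, $row);
    }
    fclose($fh);
}

// Everyone else for the rest of the day reads the flat file.
$products = array_map('str_getcsv', file($cacheFile, FILE_IGNORE_NEW_LINES));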
Because you're on a shared host, a total of 2K rows is not a problem, but the hard-disk I/O is.
Put the database search results in memory, such as a MySQL MEMORY engine table;
better still, let Redis manage the cache with a TTL.
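A hedged sketch of that Redis-with-TTL idea, assuming the phpredis extension; the key scheme, query, and credentials are placeholders:
<?php
// Hedged sketch of Redis caching with a TTL, assuming the phpredis
// extension; key scheme, query, and credentials are placeholders.
$searchQuery = $_GET['q'] ?? '';

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$key    = 'search:' . md5($searchQuery);
$cached = $redis->get($key);

if ($cached !== false) {
    $results = json_decode($cached, true);
} else {
    $pdo  = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
    $stmt = $pdo->prepare('SELECT * FROM products WHERE name LIKE ?');
    $stmt->execute(['%' . $searchQuery . '%']);
    $results = $stmt->fetchAll(PDO::FETCH_ASSOC);

    // Cache for 10 minutes so repeat searches skip the shared-host disk.
    $redis->setex($key, 600, json_encode($results));
}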

Best way to migrate 6k images from Access into MySQL?

I wouldn't mind donating to anyone who helps me with this issue.
Should I store binary information with a BLOB data type? Should I store VARCHAR containing paths? I don't know how to do either of these in an automated way at the moment. The images are currently embedded in an Access database as OLE Objects. This migration cannot be manual; it will have to be done automatically using scripts or programs, because there are about 6k records.
Any ideas or recommendations?
You can use Lebans' OLEtoDisk to export your images all at once. You can specify a "naming" column, your primary key for example, and constant fields to be appended/prepended to the naming column.
Your pictures are then called "exported1.jpg", "exported2.jpg", ... assuming you chose to prepend "exported" and the IDs were 1 and 2. It should be simple to move the files to a server and write a script to insert the correct paths into the MySQL database. Assuming this is a one-time thing, because that's what it sounds like.
Just tested it with 4000 small (~150 kb) pictures, it was done in 2 minutes on a limited virtual machine. So 6000 should not be a problem.
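For the second half of that approach, here is a sketch of walking the exported files and storing their paths in MySQL; the folder, table, and column names are assumptions based on the exported1.jpg naming above:
<?php
// Sketch of the second step: after OLEtoDisk has written files named like
// exported<ID>.jpg, store each path against the matching record. Folder,
// table, and column names are assumptions.
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('UPDATE products SET image_path = ? WHERE id = ?');

foreach (glob('/var/www/images/exported*.jpg') as $path) {
    // Pull the primary key back out of the file name.
    if (preg_match('/exported(\d+)\.jpg$/', $path, $m)) {
        $stmt->execute(['/images/' . basename($path), (int) $m[1]]);
    }
}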

Simplest way to total up columns?

I have a PHP script that imports up to 10 or so different CSV files. Each row of each file contains bank account info, including balance. After I've imported all the CSV data into my database, I'd like to make sure the data got in there correctly by comparing the total account balance in the database to the total account balance of the CSV files.
I see at least a few options:
Manually total up all the account balances in Excel - yuck.
Write a PHP script to read each CSV file and total up the account balances - also yuck.
Some third option that I hope exists. It would be amazing if I could do something like:
excel --file="cd.csv" | sum --column="E"
That's obviously not a real thing but hopefully you get the idea. Using some combination of PHP, MySQL, Linux commands, Excel and/or any other tools, is there a simple way to do this?
I don't have a complete answer for you, but AWK should be able to solve your problem. Have a look at these two posts:
https://superuser.com/questions/53652/transforming-csv-file-using-sed
Shell command to sum integers, one per line?
I'm not enough of an AWK expert to give the solution, but perhaps someone else can help us here.
Another option (which you may also consider yuck) is to use a library like PHPExcel.
You can iterate over each CSV file using fgetcsv(), which converts each line to an array of values. Accumulate the value of the array element containing the balance as you move through each iteration until you get the sum total. Use glob() to get the list of CSV files in a folder.
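A sketch of that approach, assuming the balance sits in column E (index 4) and all the CSV files live in one folder; adjust the path, column index, and header handling to match the real files:
<?php
// Sketch assuming the balance is in column E (index 4) and all CSV files
// live in one folder; adjust the path, column index, and header handling.
$total = 0.0;

foreach (glob('/path/to/csv/*.csv') as $file) {
    $fh = fopen($file, 'r');
    fgetcsv($fh); // skip the header row, if there is one
    while (($row = fgetcsv($fh)) !== false) {
        $total += (float) $row[4]; // column E
    }
    fclose($fh);
}

echo "Total balance: {$total}\n";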
You may not have to "manually" total up the account balances. If you can use Excel functions from your application, the Excel formula in VBA would be:
Application.Sum(Range("A:A"))
where A:A is for column A.
Try using CSVFix with its summary option. It will get you more data than you need, but it should be easy to use.
Otherwise, this sounds like a good use for Awk.
Why can't you just automagically total up the account balances in Excel with a formula and export them with the rest of the data?
A bit of a different angle: Make use of the MySql CSV engine to expose your CSV files to Mysql and then do a normal SQL SUM.
See also: The CSV Storage Engine
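A hedged sketch of what that could look like from PHP; the column layout is made up, CSV-engine columns must be declared NOT NULL, and pointing the table at the exported file (copying it over the table's .CSV file in the MySQL data directory) is a server-side step not shown here:
<?php
// Hedged sketch of the CSV storage engine route. The column layout is made
// up, CSV-engine columns must be NOT NULL, and pointing the table at the
// exported file is a server-side step not shown here.
$pdo = new PDO('mysql:host=localhost;dbname=imports', 'user', 'pass');

$pdo->exec("
    CREATE TABLE IF NOT EXISTS bank_rows (
        account VARCHAR(32)   NOT NULL,
        owner   VARCHAR(100)  NOT NULL,
        balance DECIMAL(12,2) NOT NULL
    ) ENGINE=CSV
");

// Once the file backs the table, the check is a single query.
$total = $pdo->query('SELECT SUM(balance) FROM bank_rows')->fetchColumn();
echo "Total balance: {$total}\n";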
