PHP + MySQL - What alternatives to handle (small) time series? [closed]

I was asked to build an expense report framework which allows users to store their expenses, one at a time, via a web form. The number of entries will never be more than 100-200 per day.
In addition to the date and time, to be provided by the user, there must be a pre-defined set of tags (e.g.: transportation, lodging, food) to choose from for each new row of data, as well as fields for currency, amount and comments.
Afterwards, it must be possible (or rather, easy) to fetch the entries in the DB between two dates and load the data into a pandas data frame (or R data table) for subsequent statistical analysis and plotting.
I first thought about using PHP to insert the data into a MySQL database table, where the tags would be boolean (True/False) columns. The very simple web form would load by default with all tags set to False, and it would be up to the user to switch the right ones to True prior to submission.
That said, I am now wondering about the other approaches I can or should explore. I've been reading about OpenTSDB and InfluxDB, which are designed to handle massive amounts of data, but I am also interested in hearing from coders up to date with the latest technologies about other possible options.
In short, I wish to choose a wise approach which is neither dated nor a (complex) cannon to kill a fly.
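For reference, a minimal sketch of the MySQL-backed approach described above, using PDO; the table name, tag columns, and connection details are assumptions for illustration, not a prescribed schema:

```php
<?php
// Hypothetical schema: one boolean column per pre-defined tag.
// CREATE TABLE expenses (
//     id             INT AUTO_INCREMENT PRIMARY KEY,
//     spent_at       DATETIME NOT NULL,
//     transportation BOOLEAN NOT NULL DEFAULT FALSE,
//     lodging        BOOLEAN NOT NULL DEFAULT FALSE,
//     food           BOOLEAN NOT NULL DEFAULT FALSE,
//     currency       CHAR(3) NOT NULL,
//     amount         DECIMAL(10,2) NOT NULL,
//     comments       TEXT
// );

$pdo = new PDO('mysql:host=localhost;dbname=expenses;charset=utf8mb4', 'user', 'password',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

// Insert one expense row submitted from the web form.
$stmt = $pdo->prepare(
    'INSERT INTO expenses (spent_at, transportation, lodging, food, currency, amount, comments)
     VALUES (:spent_at, :transportation, :lodging, :food, :currency, :amount, :comments)'
);
$stmt->execute([
    ':spent_at'       => '2016-05-01 12:30:00',
    ':transportation' => 0,
    ':lodging'        => 0,
    ':food'           => 1,
    ':currency'       => 'EUR',
    ':amount'         => 12.50,
    ':comments'       => 'Lunch',
]);

// Fetch all entries between two dates; the result can be dumped to CSV/JSON
// and read into a pandas DataFrame or an R data.table afterwards.
$stmt = $pdo->prepare('SELECT * FROM expenses WHERE spent_at BETWEEN :from AND :to ORDER BY spent_at');
$stmt->execute([':from' => '2016-05-01', ':to' => '2016-05-31']);
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));
```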

You could try Axibase Time-Series Database Community Edition. It's free.
Supports tags for entities, metrics, and series
Provides open-source API clients for R, Python, and PHP
Time-series range queries are a core use case
Check out the example apps you can easily build in PHP, Go, and NodeJS. The application code is open source under the Apache 2 license and is hosted on GitHub.
Disclosure: I work for Axibase.


Database design for a chat app [closed]

I'm trying to create a web chat app using AJAX, PHP and MySQL. I'm having trouble with the database structure. Here's what I've thought of:
A users table: contains basic user info
A chat table: contains basic columns like 'to', 'from', 'timestamp', etc.
The problem:
I think this will get pretty messy very quickly, since lots of users will be querying the same table, not to mention some security issues. I want to create a separate table for each conversation. Is this a good idea? What would be your preferred structure?
A separate table for each conversation would be very messy indeed, while a single table would get huge and degrade performance with sufficient volume and accumulation.
If you don't need to store each line of conversation in perpetuity, you can simply purge the conversation from the chat lines table once it's over. You'd only need to keep it there if you wanted to search lines in past conversations. (Use other approaches for keeping chat statistics etc.)
You could archive a concatenated/serialized version of the conversation, i.e. the whole lot in one chunk, into a file in the filesystem, or into a separate table with the relevant metadata (users, length, duration etc.). Then simply reload it whenever an old conversation becomes active again.
If you do want to distribute your per-table load, you could e.g. track typical user connections and then generate an adequate number of group-dedicated tables, or use any other user-aggregation algorithm that works. But if you do purge the chat lines table periodically, it will take a huge volume of usage before database performance becomes an issue.
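As an illustration of the single-table-plus-purge approach described above, here is a rough sketch; the table and column names (chat_messages, chat_archives, etc.) are assumptions for illustration only:

```php
<?php
// Hypothetical single table for all chat lines:
// CREATE TABLE chat_messages (
//     id              BIGINT AUTO_INCREMENT PRIMARY KEY,
//     conversation_id INT NOT NULL,
//     from_user_id    INT NOT NULL,
//     to_user_id      INT NOT NULL,
//     body            TEXT NOT NULL,
//     created_at      TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
//     INDEX idx_conv_time (conversation_id, created_at)
// );

$pdo = new PDO('mysql:host=localhost;dbname=chat;charset=utf8mb4', 'user', 'password',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$conversationId = 42; // example id of a conversation that has just ended

// Archive the finished conversation as one serialized chunk with its metadata...
$stmt = $pdo->prepare('SELECT from_user_id, to_user_id, body, created_at
                       FROM chat_messages WHERE conversation_id = ? ORDER BY created_at');
$stmt->execute([$conversationId]);
$lines = $stmt->fetchAll(PDO::FETCH_ASSOC);

$archive = $pdo->prepare('INSERT INTO chat_archives (conversation_id, line_count, body) VALUES (?, ?, ?)');
$archive->execute([$conversationId, count($lines), json_encode($lines)]);

// ...then purge the per-line rows to keep the hot table small.
$pdo->prepare('DELETE FROM chat_messages WHERE conversation_id = ?')->execute([$conversationId]);
```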

Need Best database to process huge data [closed]

My situation is as follows.
Every day I receive 256 GB of product information from different online shops and content providers (e.g. the CNET data source).
This information arrives as CSV, XML and TXT files. The files are parsed and stored in MongoDB.
Later the information is transformed into a searchable form and indexed in Elasticsearch.
The whole 256 GB is not different every day: roughly 70% of the information stays the same, and a few fields like price, size and name change frequently.
I am processing the files using PHP.
My problems are:
Parsing the huge amount of data
Mapping the fields inside the DB (e.g. 'title' is not called 'title' in every online shop; some provide the field name as 'Short-Title' or something else) - see the sketch after this question
The volume of information grows by gigabytes every day. How to store and process it all? (Maybe "big data" tooling, but I'm not sure how to use it.)
Searching the information quickly despite the huge volume of data
Please suggest a suitable database for this problem.
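For the field-mapping point above, a minimal PHP sketch of a per-provider mapping table applied before the data is stored in MongoDB; the provider names and field names are made up for illustration:

```php
<?php
// Map each provider's field names onto one canonical schema before storing the record.
$fieldMaps = [
    'shop_a' => ['Short-Title' => 'title', 'Cost'  => 'price', 'Dim'  => 'size'],
    'shop_b' => ['name'        => 'title', 'price' => 'price', 'size' => 'size'],
];

function normalizeRecord(array $record, array $map): array
{
    $normalized = [];
    foreach ($record as $field => $value) {
        // Keep unknown fields under their original name so nothing is silently lost.
        $canonical = $map[$field] ?? $field;
        $normalized[$canonical] = $value;
    }
    return $normalized;
}

// Example: a raw row parsed from one provider's CSV/XML feed.
$raw = ['Short-Title' => 'USB cable', 'Cost' => '3.99', 'Dim' => '1m'];
$doc = normalizeRecord($raw, $fieldMaps['shop_a']);
// $doc is now ['title' => 'USB cable', 'price' => '3.99', 'size' => '1m']
// and can be inserted into MongoDB with a consistent shape.
```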
Parsing huge data: Spark is the fastest distributed solution for your need. Even though 70% of the data is the same each day, you still have to process it just to detect the duplicates; you can do the field mapping and so on there as well.
Data store: if you are doing any aggregation, I would recommend HBase/Impala; if each product row is important to you, use Cassandra.
For searching, nothing is faster than Lucene, so use Solr or Elasticsearch, whichever you are comfortable with; both are good.
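Whatever store you pick, one simple way to avoid reprocessing the roughly 70% of records that do not change between daily feeds (a technique not mentioned in the original answer) is to keep a content hash per product and only re-index rows whose hash differs; a rough PHP sketch with hypothetical keys:

```php
<?php
// Compute a stable hash of the normalized record and compare it with the
// hash stored from the previous import; only changed records are re-indexed.
function recordHash(array $doc): string
{
    ksort($doc);                      // make the hash independent of field order
    return sha1(json_encode($doc));
}

function needsReindex(array $doc, array $previousHashes): bool
{
    $key = $doc['product_id'];        // hypothetical unique product key
    return !isset($previousHashes[$key]) || $previousHashes[$key] !== recordHash($doc);
}

// Usage: load yesterday's hashes (e.g. from MongoDB), filter today's feed,
// and send only the changed documents on to Elasticsearch.
$previousHashes = ['sku-123' => 'abc123...'];
$doc = ['product_id' => 'sku-123', 'title' => 'USB cable', 'price' => '3.99'];
if (needsReindex($doc, $previousHashes)) {
    // index into Elasticsearch and update the stored hash
}
```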

How to make multiple open source web apps use the same database [closed]

I would like to set up an online store and a point of sale application for a food coop.
My preference is PHP/MySQL, but I can't find any project which accomplishes both of these requirements. I was wondering if it would be possible to use separate store and POS apps and have them use the same product database.
The questions I have about this are:
Is it a bad idea?
Should one of the apps be modified to use the same tables as the other, or should there be a database replication process which maps the fields together (is this a common thing)?
Is it a bad idea?
The greatest danger might be that if someone successfully attacks your online store, the POS systems might be affected as well, e.g. by a DoS attack. That wouldn't keep me from taking this route, though.
Should one of the apps be modified to use the same tables as the other, or should there be a database replication process which maps the fields together (is this a common thing)?
If you can get at least one of the two systems to use the product data in read-only mode, then I'd set up a number of views to translate between the different schemas without physically duplicating any data.
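As a sketch of that read-only-view idea, the POS app could read the store's product table through a MySQL view that renames columns to whatever the POS schema expects; all table and column names below are hypothetical:

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=coop;charset=utf8mb4', 'user', 'password',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

// Expose the store's product table to the POS app under the column names the
// POS schema expects, without physically copying any data.
$sql = <<<'SQL'
CREATE OR REPLACE VIEW pos_products AS
SELECT
    p.id          AS product_id,
    p.name        AS description,
    p.price       AS unit_price,
    p.stock_level AS qty_on_hand
FROM store_products AS p
SQL;

$pdo->exec($sql);
```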

Retrieving database information on a web page [closed]

This will be a total novice's question, but I am looking for advice.
My apologies: in the post I failed to mention that the database I am working with is MySQL.
I know absolutely nothing about the technologies used to retrieve information from a database. The only three facts I know are that it can be done with either PHP or HTML5, that I should be able to pick it up, and that I will make many mistakes.
Could the community suggest which would be the better technology to learn, and would anyone be able to suggest a starting point?
Yours in advance
Keith
In order to retrieve database information, you generally only need a database such as MySQL - and a client to perform your queries (fetching data from the database).
Your client could be anything, a commandline tool or a PHP script opening a connection to your database and performing the desired queries.
Fetching data alone will not get you very far unless you can display that information somewhere, or even provide access to it or (if desired) allow users to interact with it.
Basically, if you want to retrieve database information and show it on a website, your minimum requirements would be HTML, a database server, a database (preferably with some data to run some tests with) and some kind of scripting language (such as PHP).
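To make that concrete, here is a minimal, self-contained sketch of a PHP page that queries MySQL and renders the rows as an HTML table; the database, table, and column names are placeholders:

```php
<?php
// Connect to MySQL (placeholder credentials and database name).
$pdo = new PDO('mysql:host=localhost;dbname=testdb;charset=utf8mb4', 'user', 'password',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

// Fetch some rows from a placeholder table.
$rows = $pdo->query('SELECT id, name FROM items')->fetchAll(PDO::FETCH_ASSOC);
?>
<!DOCTYPE html>
<html>
<body>
  <table>
    <tr><th>ID</th><th>Name</th></tr>
    <?php foreach ($rows as $row): ?>
      <tr>
        <td><?= htmlspecialchars($row['id']) ?></td>
        <td><?= htmlspecialchars($row['name']) ?></td>
      </tr>
    <?php endforeach; ?>
  </table>
</body>
</html>
```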
There are numerous tutorials out there on how to make your first steps with this.
Here is one.
Start with PHP + MySQL. There are a lot of manuals and examples on the Internet. Google it.

Fastest geolocation (IP to town)? [closed]

I am writing a small geolocation service: when a user comes to my site, I need to determine their town from their IP address. So far I have found three ways to solve this problem:
From PHP, connect to a MySQL DB and select the town from it.
From PHP, call a CGI script (Perl, C?) that selects the town from a file of towns and IP addresses.
Use a service like http://ipinfodb.com/ip_location_api.php and get the town from it.
But which way would be fastest? Minimal response time, etc.?
Thanks!
Option 3, primarily because of just how much data you'd have to compile together manually to do either option 1 or 2.
There is no easy answer to it because a lot depends on unknown factors such as:
Speed of your MySQL DB
Speed of your PHP implementation and the size of the file
Speed of the location_api service
In other words, there are only two ways to find out the answer:
build them all and test
gather all parameters (speeds, bandwidth, concurrent users of all systems) and calculate/guesstimate.
I've used the MaxMind database for country-level lookups from PHP (there is example code for other languages). The downloadable database is in a binary format optimised for read speed. Although I've not compared it to an import into MySQL and searching with SQL, I have no doubt that MaxMind are right when they say it is faster to use the API and the original data rather than going through another means such as SQL.
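For reference, a minimal sketch of that kind of local-database lookup using the current MaxMind GeoIP2 PHP library (geoip2/geoip2 via Composer) and a downloaded GeoLite2-City database; the file path and IP are placeholders, and this library is an assumption rather than necessarily what the answer above used:

```php
<?php
require 'vendor/autoload.php';

use GeoIp2\Database\Reader;

// Open the binary GeoLite2 database file (path is a placeholder).
$reader = new Reader('/usr/local/share/GeoIP/GeoLite2-City.mmdb');

// Look up the visitor's IP address (placeholder IP shown here).
$record = $reader->city('128.101.101.101');

echo $record->city->name;        // e.g. "Minneapolis"
echo $record->country->isoCode;  // e.g. "US"
```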
