What is the best way to hold raw data temporarily? [closed] - php

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
I am working on an analytics web application. Each time, I need processed raw data from the database to be held temporarily.
So, what is the best way to hold that raw data temporarily?
1) Temporary Table
2) Cache
3) Hash map
4) View
5) Actual table
6) Temporary file
7) ....
There will be heavy read and write operations.
Please suggest.

It depends on what exactly you are tracking, and how much data you have. In the simplest terms, tables housing temporary information in your existing database could do it. On the opposite end of the spectrum, you could have MongoDB housing aggregate data with Memcached/Varnish caching entire HTTP responses with TTLs.
Cheers

I would consider using Redis or APC. These are key-value stores that fit your purpose perfectly: storing processed data and sharing it between sessions/requests.
Wish you luck!

Personally, I would use PHP session variables. They hold the data and are easily referenced when needed.

I would suggest a table in the database to hold the data, deleting its contents afterwards.

I would suggest you go with a database table, because it will persist until you remove it.
A session may not be a good option because it is lost if the user closes the browser, and you cannot store large amounts of data in a session.

A temp file is rarely a good choice because it is easy to corrupt or lose. A table in the database is the best way to store temporary data, since you can even schedule it to be truncated if you know how long the information needs to stay there.
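The "temporary table with scheduled cleanup" idea above can be sketched as follows. This is a minimal illustration, not the asker's actual schema: it uses an in-memory SQLite database via PDO so it runs standalone, where in production you would point PDO at MySQL and run the purge from a MySQL EVENT or a cron job. The table and column names are illustrative.

```php
<?php
// In production: new PDO('mysql:host=...;dbname=...', $user, $pass)
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Holding area for processed raw data, with a timestamp for cleanup.
$db->exec('CREATE TABLE temp_results (
    id INTEGER PRIMARY KEY,
    payload TEXT NOT NULL,
    created_at INTEGER NOT NULL
)');

// Write the processed raw data.
$ins = $db->prepare('INSERT INTO temp_results (payload, created_at) VALUES (?, ?)');
$ins->execute([json_encode(['visits' => 123]), time()]);

// Read it back for the analytics step.
$row = $db->query('SELECT payload FROM temp_results')->fetch(PDO::FETCH_ASSOC);
$data = json_decode($row['payload'], true);

// Purge rows older than one hour (what the scheduled job would do).
$db->prepare('DELETE FROM temp_results WHERE created_at < ?')
   ->execute([time() - 3600]);
```

Because the data lives in a real table, it survives restarts and supports concurrent readers and writers, which matters for the heavy read/write load the question mentions.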

I would suggest you use a caching solution.
My personal experience with APC for similar use cases has been good.
Using APC, I have been able to avoid the overhead of hitting the database every time I need the data. It's useful because I need to access the data multiple times in a short span of time and then discard it, so a database solution did not seem as efficient.
Furthermore, APC is easy to work with: reads and writes are simple key => value operations, whereas a file-based solution would likely require more complex code.
One thing to be careful of: the data will be lost if the server restarts.
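The key => value pattern this answer describes might look like the sketch below. It is an assumption-laden illustration, not the answerer's code: it uses the APCu extension (`apcu_store`/`apcu_fetch`, the successor to APC's user cache) when available, and falls back to a plain per-request array so it also runs where the extension is missing. The key name and the stand-in computation are invented for the example.

```php
<?php
// Store a value under a key, with a TTL, in APCu if the extension is
// enabled; otherwise keep it in a request-local array (lost at request end).
function cache_set(string $key, $value, int $ttl = 300): void {
    if (function_exists('apcu_enabled') && apcu_enabled()) {
        apcu_store($key, $value, $ttl);
        return;
    }
    $GLOBALS['__cache'][$key] = $value;
}

// Fetch a value by key, or null on a cache miss.
function cache_get(string $key) {
    if (function_exists('apcu_enabled') && apcu_enabled()) {
        $value = apcu_fetch($key, $ok);
        return $ok ? $value : null;
    }
    return $GLOBALS['__cache'][$key] ?? null;
}

// Compute the expensive aggregate once, reuse it on later lookups.
$report = cache_get('daily_report');
if ($report === null) {
    $report = ['total' => 42];   // stand-in for the heavy DB aggregation
    cache_set('daily_report', $report);
}
```

As the answer warns, anything stored this way is gone after a server restart, so it suits data you can always recompute from the database.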

Related

Can I use JSON as a database? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed last year.
Currently, I am working on a website and just started studying backend development. I wonder why nobody uses JSON as a database. Also, I don't quite get the utility of PHP and SQL. Since I could easily get data from a JSON file and use it, why do I need PHP and SQL?
OK! Let's assume you put the data in a JSON structure and store it in a file for all your projects.
Obviously, you need a subsystem for backups, so you will write one.
You must improve performance for handling very large amounts of data (indexing, hash algorithms, and so on); assume you handle that too.
If you need APIs for working with a variety of programming languages, you need to write them.
What about functionality? What if you need triggers, stored procedures, views, full-text search, and so on? OK, you will spend the time and add them.
Good job, but your system will grow and you will need to scale it. Can you? You will have to build clustering across servers, sharding, and more.
Now you need to guarantee that your system complies with the ACID properties: Atomicity, Consistency, Isolation, and Durability.
Can you always handle all querying techniques (Map/Reduce) and respond quickly with a standard structure?
Now it's time to offer very fast write speeds, which brings serious issues of its own.
OK, now prepare your solutions for race conditions, isolation levels, locking, relations, and so on.
After you do all this work, plus thousands of other things, you will probably end up with a DBMS a little bit like MongoDB or one of the other relational and non-relational databases!
So it's better to use them. However, you can obviously choose not to. I admit that sometimes saving data in a single file has better performance, but only sometimes, in some cases, with some data, for some purpose! If you know exactly what you are doing, it is OK to save data in a JSON file.
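To make the argument above concrete, here is a minimal sketch of what "JSON file as a database" actually entails. Everything in it is illustrative (the file, the record shape, the helper names): every read loads the whole file, every write rewrites it, `flock()` is needed by hand to avoid the race conditions a real DBMS handles for you, and finding one record is a full scan because there is no index.

```php
<?php
// Unique temp file standing in for the "database" file.
$file = tempnam(sys_get_temp_dir(), 'jsondb');

// Rewrite the entire file under an exclusive lock (one writer at a time).
function json_db_save(string $file, array $rows): void {
    $fp = fopen($file, 'c');
    flock($fp, LOCK_EX);
    ftruncate($fp, 0);
    fwrite($fp, json_encode($rows, JSON_PRETTY_PRINT));
    flock($fp, LOCK_UN);
    fclose($fp);
}

// Load the entire file under a shared lock (readers don't block readers).
function json_db_load(string $file): array {
    if (!is_file($file)) return [];
    $fp = fopen($file, 'r');
    flock($fp, LOCK_SH);
    $rows = json_decode(stream_get_contents($fp), true) ?? [];
    flock($fp, LOCK_UN);
    fclose($fp);
    return $rows;
}

// "INSERT": read everything, append, write everything back.
$users = json_db_load($file);
$users[] = ['id' => 1, 'name' => 'alice'];
json_db_save($file, $users);
// "SELECT ... WHERE": a full scan over every record.
```

Even this toy version already needs locking and full rewrites; backups, indexing, concurrency guarantees, and query features are all still missing, which is the answer's point.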

MySQL query or $_SESSION variable or Cookies for non-sensitive data? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
My site's menu tree is three layers deep, relatively static, and completely database driven.
I made the presumption that querying the mysql database for the complete menu tree with every page load would be more resource intensive than querying it once and storing it in a $_SESSION variable. My menus take up 77,835 bytes - using strlen(serialize($menuarray)).
It finally occurred to me that I have no data to back up this presumption and I can't find this answer anywhere. I presume storing in a $_SESSION var eats memory whereas a mysql query essentially uses the same amount of memory (although I can unset the var after the menu is generated) but also taxes the cpu. Both of course will access the disk.
Cookies are a third option and am willing to hear arguments in their favor as well, although I hold a bias against them.
So, for non-sensitive data, would you go Query, $_SESSION, or Cookies and why?
UPDATE
I realized that another option would be to cache the serialized menu query output, either to the database or to disk.
Or, even cache the HTML output to a file and include the menus when needed.
UPDATE 2
Just the fact that this question was put on hold hints at something - perhaps that the use of server resources is a fuzzy area. It's strange trying to optimize without a clearer sense of the impacts of memory, cpu cycles, disk access, db connections, etc.
In the end, caching makes sense and I will go with the accepted answer.
I recommend you fetch from the database once and cache the result. You'll store the result only one time, so it will not consume much memory, and you won't make a database query every time you load the page.
You can do this with OPCache, Memcached, Redis, etc. Or, more simply: get the value from the database, save it to a file, read the value from that file, and refresh it at certain time intervals.
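The "save it to a file and refresh on an interval" option can be sketched as below. This is a hypothetical illustration of the pattern, not the asker's menu code: `build_menu()` stands in for the real three-level menu query, and the cache path and TTL are invented. The cache is rebuilt only when the file is missing or older than the TTL, so most page loads never touch the database.

```php
<?php
// Stand-in for the expensive three-level menu query against MySQL.
function build_menu(): array {
    return ['Home' => [], 'Products' => ['Widgets' => []], 'About' => []];
}

// Return the menu from the cache file if it is fresh; otherwise rebuild
// it from the database and rewrite the cache.
function cached_menu(string $cacheFile, int $ttl = 600): array {
    if (is_file($cacheFile) && filemtime($cacheFile) > time() - $ttl) {
        return unserialize(file_get_contents($cacheFile));
    }
    $menu = build_menu();                                  // the one DB hit
    file_put_contents($cacheFile, serialize($menu), LOCK_EX);
    return $menu;
}

$menu = cached_menu(sys_get_temp_dir() . '/menu.cache');
```

Compared with `$_SESSION`, this stores one copy of the ~78 KB menu for all visitors instead of one copy per session, and a stale cache fixes itself after the TTL expires.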

MySQL: HOW TO MOVE OLD DATA TO DB [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I finished creating an accounting web application for an organization using CodeIgniter and a MySQL database, and I have just submitted it to them. They liked the work, but they asked how they could transfer their old manual data to the new online system, so that their members would be able to see their account balances and contribution history.
This is a major problem for me because most of my tables use referential integrity to ensure data synchronization, and that would not accommodate the style of their manual accounting.
I know a lot of people here have faced cases like this, and I would love to know the best way to collect users' history. I also know this might be flagged as not a real question, but I really have to ask people with experience.
I would appreciate all answers. Thanks (and downvotes too)..
No matter what the case is, data conversions are very challenging and almost always time consuming. Depending on the consistency of the data in question, it could be that about 80% of the data will transfer over neatly if you create a conversion program using PHP. That conversion code in and of itself may be more time consuming than it is worth. If you are talking hundreds of thousands of records and beyond, it is probably a good idea to make that conversion program work. Anyone who might suggest there is a silver bullet is certainly not correct.
Here are a couple of suggested steps:
(Optional) Export your Excel spreadsheets to Access. Access can help you to standardize data and has tools in place to help you locate records which have failed in some way. You can also create filters in Access if you need to. The benefit of taking this step, if you are familiar with Access, is that you have already begun the conversion process to a database. As a matter of fact, if you so desire, you can import your MySQL database information into Access as well. The benefit of this is pretty obvious: You can create a query and merge your two separate tables together to form one table, which could save you a great deal of coding.
Export your Access table/query into a CSV file (note, if you find it is overkill or if you don't have Access, you can skip step 1 and simply save your .xls or .xlsx file to type .csv. This may require more legwork for your PHP conversion code but that is probably a matter of preference. Some people prefer to avoid Access as much as possible, and if you don't normally use it you will be wasting time trying to learn it just to save yourself a little bit of time).
Utilize PHP's built-in str_getcsv function to parse each line of the CSV file into a PHP array.
Create your automated program to parse through each record. Based on the column and its requirements, you can either accept or reject records. You can then export your data, such as was done in this SO answer, back to CSV. You can save two different CSV files, one with accepted records, and one with rejected records.
With rejected records, which are all but inevitable when transferring from a spreadsheet, you will need to have a course of action. The simplest way for your clients is probably to give them a procedure to either manually import records into the database, if you've given them an interface to do so, or - probably simpler but requiring more back-and-forth - to update the records in Excel to be compliant with the new system.
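Steps 3 and 4 above might be sketched as follows. The column rules here are invented for illustration (a numeric member id and a numeric balance); the real validation would follow the application's referential-integrity constraints, and the accepted/rejected arrays would be written back out to the two CSV files the steps describe.

```php
<?php
// Stand-in for the exported CSV (in practice, file_get_contents($path)).
$csv = "member_id,balance\n101,2500.00\n102,not-a-number\n103,480.25\n";

// Split into lines, drop blanks, and parse each line with str_getcsv.
$rows = array_map('str_getcsv', array_filter(explode("\n", $csv)));
$header = array_shift($rows);

$accepted = $rejected = [];
foreach ($rows as $row) {
    // Pair each value with its column name from the header row.
    $record = array_combine($header, $row);
    // Accept only records that satisfy the (illustrative) column rules.
    if (ctype_digit($record['member_id']) && is_numeric($record['balance'])) {
        $accepted[] = $record;
    } else {
        $rejected[] = $record;
    }
}
// $accepted would be imported; $rejected goes back to the client to fix.
```

Keeping the rejected records in the same column layout as the source makes the back-and-forth with the client much easier: they correct the rows in Excel and the file goes through the same parser again.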
Edit
Based on the thread under your question which sheds more light on what you are trying to do (i.e., have a parent for each transaction that is an accepted loan), you should be able to contrive a parent field, even if it is not complete, by creating a parent record for each set of transactions based around an account. You can do this via Access, PHP, or, more likely, a combination.
Conclusion
Long story short, data conversions take time. If you put the time in up front, it will be far easier to maintain a standardized series of information in the long run. If you find something which takes less time in the beginning, it will mean additional work for you in the long run in order to make this "simple" fix work over time.
Similarly, the closer you can get legacy data to conform to your new data, the easier it will be for your clients to perform queries etc. While this may mean that some manual entry will be required on the part of you or your client, it is better to inform the client of the pros and cons of each method fully and let them decide. My recommendation would always be to put extra work in at the front-end because it almost always ends up cheaper than having to deal with a quick fix in the long run, but that is not always practical given real world constraints.

Is it right to store mysql fetch result in a text file using php filing and load when needed inside the view using MVC [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I designed my Model View Controller. What i want to do is to store mysql fetched query data into php file or localStorage or sessionStorage, so that the stored information can be reused without bothering database over and over. I can save this information into a php session as well but using a lot of sessions could cause your server slow down.
Is this a good practice in this particular case to store information inside any of these said storages?
The concept of what you're trying to do is called "caching". Here's an article on a simple way to do it, kind of like what you're talking about: http://www.xeweb.net/2010/01/15/simple-php-caching/.
Whether or not it's "good practice" depends largely on your application. If it's a SQL query whose results don't change often (you might want to rethink why you're running a query at all in those cases) and the query is complicated and very time consuming to run, then go for it. I would advise against it unless those criteria apply, because there's a reason you're using a database: you want the data to be dynamic. Most databases can handle a lot of queries, and your queries and data can be optimized for the queries you run most often.
Do you notice it taking a long time to run the query? Have you isolated the problem to the database? Most importantly: does the data change very often and do your users care that their views won't be up to date?
I should add, it's VERY rarely a good idea to store anything like this in session storage. The only things you'd use session storage for are a user name or an anonymous user's shopping cart. Never for query caching.
No not at all. Your database is the fastest and most efficient at these tasks, so using something like a session to store data is a bad idea. You can look into persistent objects though, with an ORM like Doctrine.

Logging in a PHP webapp [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I want to keep logs of some things that people do in my app, in some cases so that it can be undone if needed.
Is it best to store such logs in a file or a database? I'm completely at a loss as to what the pros and cons are except that it's another table to setup.
Is there a third (or fourth etc) option that I'm not aware of that I should look into and learn about?
There is at least one definite reason to store logs in the database: you can use INSERT DELAYED in MySQL (or similar constructs in other databases), which returns immediately. You won't get any return data from the database with these kinds of queries, and they are not guaranteed to be applied. (Note that INSERT DELAYED was deprecated in MySQL 5.6 and removed in 5.7, so on modern MySQL you would reach for asynchronous inserts or a queue instead.)
By using INSERT DELAYED, you won't slow down your app too much because of the logging. The database is free to write the INSERTs to disk at any time, so it can bundle a bunch of inserts together.
You need to watch out for MySQL's built-in timestamp functions (like CURRENT_TIMESTAMP or CURDATE()), because they are evaluated whenever the query is actually executed. So you should make sure that any time data is generated in your programming language, not by the database. (This paragraph might be MySQL-specific.)
You will almost certainly want to use a database for flexible, record based access and to take advantage of the database's ability to handle concurrent data access. If you need to track information that may need to be undone, having it in a structured format is a benefit, as is having the ability to update a row indicating when and by whom a given transaction has been undone.
You likely only want to write to a file if very high performance is an issue, or if you have very unstructured or large amounts of data per record that might be unwieldy to store in a database. Note that unless your application has a very large number of transactions, database speed is unlikely to be an issue. Also note that if you are working with a file, you'll need to handle concurrent access (read/write/locking) very carefully, which is likely not something you want to deal with.
I'm a big fan of log4php. It gives you a standard interface for logging actions. It's based on log4j. The library loads a central config file, so you never need to change your code to change logging. It also offers several log targets, like files, syslog, databases, etc.
I'd use a database, simply for maintainability; with a file, multiple concurrent edits may cause some entries to be missed.
I will second both of the above suggestions and add that file locking on a flat-file log may cause issues when there are many users.
