Basically, for a plugin for a dynamic site (the site can be fairly large), I am caching the results of a search (the results come from an external search), and each result set is 400-1500 characters in length.
As the results come in an array, I use json_encode (faster than serialize) to store them in the database, but ~1.5 KB per entry (and there may be 10,000 entries, so roughly 15 MB) seems a little large to me.
My questions are:
* Is this an acceptable (your opinion) size per entry?
* Will running gzip or similar and storing the result in a binary field in MySQL be more efficient, or will it take too much CPU time in the end? Is there anything commonly used for this?
I would prefer not to use memcached or the like, as this needs to be portable (but would that be better as well?). This is mostly a theory question for me; I just want some input before I implement anything solid.
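For a rough sense of what compression would buy, a minimal sketch (using a made-up $results array standing in for one cached result set; numbers will vary with real data) compares the raw JSON size against gzcompress() output:

```php
<?php
// Hypothetical result set, standing in for one cached search result (~1.5 KB).
$results = array_fill(0, 50, ['title' => 'Example result text', 'score' => 0.87]);

$json = json_encode($results);   // what is currently stored
$gz   = gzcompress($json, 6);    // zlib-compressed alternative (binary string)

printf("raw JSON: %d bytes\n", strlen($json));
printf("gzipped : %d bytes (%.0f%% of original)\n",
    strlen($gz), 100 * strlen($gz) / strlen($json));

// To restore: json_decode(gzuncompress($gz), true);
// The compressed string is binary, so it would need a BLOB/VARBINARY column.
```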
There will always be a CPU cost for any kind of compression; it depends on whether you have the resources to handle it without any noticeable slowdown.
Space is cheap and abundant, so 15 MB is fine.
But if you really want to compress your field, check out MySQL's COMPRESS() and UNCOMPRESS() functions.
These could be dropped into your queries and would work without changing any of the PHP logic.
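A minimal sketch of that drop-in approach, assuming a hypothetical search_cache table with a BLOB payload column and an existing PDO connection in $pdo:

```php
<?php
// Write: let MySQL compress the JSON on the way in (BLOB/VARBINARY column assumed).
$stmt = $pdo->prepare(
    'INSERT INTO search_cache (cache_key, payload) VALUES (?, COMPRESS(?))'
);
$stmt->execute([$cacheKey, json_encode($results)]);

// Read: decompress on the way out; the surrounding PHP logic stays the same.
$stmt = $pdo->prepare(
    'SELECT UNCOMPRESS(payload) FROM search_cache WHERE cache_key = ?'
);
$stmt->execute([$cacheKey]);
$results = json_decode($stmt->fetchColumn(), true);
```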
I'm creating an application that will build a very, very large array and will need to search it.
I just want to know if there is a good PHP array search algorithm for that task.
Example: I have an array that contains over 2M keys and values; what is the best way to search it?
EDIT
I've created a flat-file DBMS based on arrays, so I want to find the best way to search it.
A couple of things:
Try it: benchmark several approaches and see which one is faster
Consider using objects
Do think about DBs at least... it could be a NoSQL key->value store like Redis (redis.io), which is dead-fast
Search algorithms: sure, there are plenty of them around
But storing an associative array of 2M keys in memory will mean you'll have tons of hash collisions, which will slow you down anyway. Sort the array, chunk it, and apply a decent search algorithm and you might get it to work reasonably fast, but to be brutally honest, I would say you're about to make a bad decision.
Also consider this: PHP is stateless by design, so each time your script runs, the data has to be loaded into memory again (for each request, if it's a web application you're writing). It's quite possible that this will be a bigger bottleneck than a brute-force search on a hash table will ever be.
The quickest way to find out is to run a test: once with APC (or an alternative) turned off, and then again, but caching the array you want to search first. Measure the difference between the two, and you'll get an idea of how much the actual construction of the array is costing you.
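To make the "sort it and apply a decent search algorithm" suggestion above concrete, here is a minimal binary-search sketch over a sorted, numerically indexed PHP array (the data and function name are just illustrative):

```php
<?php
/**
 * Binary search over a sorted, 0-indexed array of scalars.
 * Returns the index of $needle, or false if it is not present.
 */
function binary_search(array $sorted, $needle)
{
    $lo = 0;
    $hi = count($sorted) - 1;
    while ($lo <= $hi) {
        $mid = (int) (($lo + $hi) / 2);
        if ($sorted[$mid] === $needle) {
            return $mid;
        }
        if ($sorted[$mid] < $needle) {
            $lo = $mid + 1;
        } else {
            $hi = $mid - 1;
        }
    }
    return false;
}

$data = range(0, 2000000, 2);              // ~1M even numbers, already in order
var_dump(binary_search($data, 123456));    // int(61728)
var_dump(binary_search($data, 123457));    // bool(false)
```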
The best way to go would be to use array_search(). PHP's built-in functions are written in C and heavily optimized.
If this is still too slow, you should switch to another 'programming' language (PHP isn't known for its speed).
There are algorithms available that use your graphics card to search for specific values in parallel.
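For reference, array_search() scans values linearly (even though the scan happens in C), while isset() on a key is a hash lookup. A tiny sketch with illustrative data:

```php
<?php
$big = range(1, 2000000);                  // values 1..2M, keys 0..1999999

// Value search: linear scan, returns the matching key or false.
$key = array_search(1999999, $big, true);  // third argument = strict comparison

// Key lookup: hash access, effectively constant time.
$exists = isset($big[1999998]);
```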
I'm working on a full-text index system for a project of mine. As one part of the process of indexing pages, it splits the data into a very, very large number of very small pieces.
I have gotten the size of the pieces down to a constant 20-30 bytes, and it could be less; the actual data is basically two 8-byte integers and a float.
Because of the scale I'm aiming for and the number of pieces this creates, I'm looking for an alternative to MySQL, which has shown significant issues at data volumes well below my goal.
My current thinking is that a key-value store would be the best option for this, and I have adjusted my code accordingly.
I have tried a number of them, but for some reason they all seem to scale even worse than MySQL.
I'm looking to store on the order of hundreds of millions or billions (or more) of key-value pairs, so I need something that won't suffer a large performance degradation with size.
I have tried memcachedb, Membase, and mongo, and while they were all easy enough to set up, none of them scaled that well for me.
Membase had the most issues, due to the number of keys required and the limited memory available. Write speed is very important here, as this is a close-to-even workload: I write a thing once, then read it back a few times and store it for eventual update.
I don't need much performance on deletes, and I would prefer something that can cluster well, since I'm hoping to eventually scale this across machines, but it needs to work on a single machine for now.
I'm also hoping to make this project easy to deploy, so an easy setup would be much better. The project is written in PHP, so it needs to be easily accessible from PHP.
I don't need rows or other higher-level abstractions; they are mostly useless in this case, and I have already pared the code from some of my other tests down to a key-value store. That seems likely to be the fastest, as I only have two things that would be retrieved from a row keyed off a third, so there is little extra work in using a key-value store. Does anyone know of any easy-to-use projects that can scale like this?
I am using this store to hold individual sets of three numbers (the sizes are based on how they were stored in MySQL; that may not hold for other storage engines): two eight-byte integers, one for the ID of the document and one for the ID of the word, and a float representing the proportion of the document that the word made up (the number of times the word appeared divided by the number of words in the document). The index for this data is the word ID and the range the document ID falls into; every time I need to retrieve this data, it will be all of the results for a given word ID. I currently turn the word ID, the range, and a counter for that word/range combination each into binary representations and concatenate them to form the key, along with a two-digit number that says which value I am storing for that key: the document ID or the float value.
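A sketch of how that kind of composite key could be built with pack() (field widths and the suffix convention here are illustrative, not the asker's exact layout; the 'J' format requires PHP >= 5.6.3):

```php
<?php
/**
 * Build a binary key from a word id, a document-id range bucket and a
 * per-bucket counter, plus a 2-character suffix naming which field the
 * value holds ('01' = document id, '02' = proportion float).
 */
function make_key($wordId, $rangeBucket, $counter, $fieldSuffix)
{
    return pack('J', $wordId)        // 8-byte unsigned int, big-endian
         . pack('N', $rangeBucket)   // 4-byte unsigned int, big-endian
         . pack('N', $counter)       // 4-byte unsigned int, big-endian
         . $fieldSuffix;             // '01' or '02'
}

$docIdKey = make_key(42, 7, 1093, '01');  // value stored: the document id
$floatKey = make_key(42, 7, 1093, '02');  // value stored: the float proportion
```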
Performance measurement was somewhat subjective: I watched the output of the processes putting data into, or pulling data out of, the storage to see how fast they were processing documents, kept refreshing my statistics counters (which track more accurate figures on how fast the system is working), and compared the differences between each storage method.
You would need to provide some more data about what you really want to do...
Depending on how you define "fast" and "large scale", you have several options:
memcache
redis
voldemort
riak
and so on... the list gets pretty big.
Edit 1:
Per the comments on this post, I would say you should take a look at Cassandra or Voldemort. Cassandra isn't a simple KV store per se, since you can store much more complex objects than just K -> V.
If you want to try Cassandra with PHP, take a look at phpcassa. But Redis is also a good option if you set up a replica.
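If Redis ends up being the pick, a minimal sketch with the phpredis extension (host, port and key naming are assumptions):

```php
<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Store and read back one small record; bulk loads can be batched with pipelining.
$redis->set('word:42:range:7:1093', '0.0137');
$value = $redis->get('word:42:range:7:1093');
```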
Here are a few products and ideas that weren't mentioned above:
OrientDB - this is a graph/document database, but you can use it to store very small "documents" - it is extremely fast, highly scalable, and optimized to handle vast amounts of records.
Berkeley DB - Berkeley DB is a key-value store used at the heart of a number of graph and document databases - supposedly has a SQLite-compatible API that works with PHP.
shmop - Shared memory operations might be one possible approach, if you're willing to do some dirty work. If your records are small and have a fixed size, this might work for you - using a fixed record size and padding with zeroes.
handlersocket - this has been in development for a long time, and I don't know how reliable it is. It basically lets you use MySQL at a "lower level", almost like a key/value store. Because you're bypassing the query parser, it's much faster than going through regular SQL queries.
If you have a fixed record-size, few writes and lots of reads, you may even consider reading/writing to/from a flat file. Likely nowhere near as fast as reading/writing to shared memory, but it may be worth considering. I suggest you weigh all the pros/cons specifically for your project's requirements, not only for products, but for any approach you can think of. Your requirements aren't exactly "mainstream", and the solution may not be as obvious as picking the right product.
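A rough sketch of the fixed-record-size flat-file idea (record width and file name are invented): every record is padded to a constant width, so record N sits at byte offset N * RECORD_SIZE and can be reached with a single fseek().

```php
<?php
const RECORD_SIZE = 32;                    // fixed width per record, zero-padded

function write_record($path, $index, $data)
{
    $fh = fopen($path, 'c+b');             // create if missing, binary, no truncation
    fseek($fh, $index * RECORD_SIZE);
    fwrite($fh, str_pad(substr($data, 0, RECORD_SIZE), RECORD_SIZE, "\0"));
    fclose($fh);
}

function read_record($path, $index)
{
    $fh = fopen($path, 'rb');
    fseek($fh, $index * RECORD_SIZE);
    $raw = fread($fh, RECORD_SIZE);
    fclose($fh);
    return rtrim($raw, "\0");              // strip the zero padding
}

write_record('records.dat', 1093, '42|7|0.0137');
echo read_record('records.dat', 1093);     // 42|7|0.0137
```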
Hi,
I have a doubt: I have seen that reading MySQL data is slower in the case of large tables. I have done lots of optimization but can't get through it.
What I am thinking is: will it give better speed if I store the data in files?
Of course, each piece of data would be a separate file, so millions of records = millions of files. I agree it will consume disk space... but what about the reading process? Is it faster?
I am using PHP to read the files...
Reading one file = fast.
Reading many / big files = slow.
Reading single small entries from a database = waste of I/O.
Combining many entries within the database = faster than file accesses.
As long as your tables are properly indexed and you are actually using those indices (that's right), using a relational DB (like MySQL) is going to be much faster, more robust, more flexible (insert many buzzwords here), etc.
To examine why your queries' performance does not match your expectations, you can use the EXPLAIN clause with your SELECTs (http://dev.mysql.com/doc/refman/5.1/en/explain.html).
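For example (assuming an existing PDO connection in $pdo and a made-up articles table), prefixing the slow query with EXPLAIN shows whether an index is actually being used:

```php
<?php
$stmt = $pdo->query(
    'EXPLAIN SELECT * FROM articles WHERE author_id = 42 ORDER BY created_at DESC'
);
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    // key = NULL together with a large "rows" estimate usually means a missing index.
    print_r($row);
}
```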
To answer the title question: it depends.
By which I mean that there are so many (unmentioned) factors that it's impossible to state unequivocally that one will always be faster than the other.
It depends on what kind of data you're storing. Structured data is usually much faster and more flexible/powerful to read using SQL, since that's exactly what it's made for. If you want to search, filter, sort, or group by a certain attribute, the index structures and optimizations of a DBMS are appropriate.
However, when using a DB to store large files (BLOBs) containing unstructured data, in the sense that you are not going to search, filter, sort, or group by any part of the files, those files just blow up the database size and make it slow. There is an interesting study by Microsoft on this topic (I still have to find the link). That study is the reason Microsoft introduced external BLOB storage in SQL Server, which basically does what you asked: the BLOBs are saved in files outside the database, because they measured that access is much faster that way.
When storing files (e.g., pictures, videos, documents...), you often have some metadata about the file that you want to query with a structured query language like SQL, while the actual files don't necessarily need to be saved in the database.
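A common shape for that split (paths and table name are assumptions): write the file itself to disk and keep only the queryable metadata in SQL.

```php
<?php
// Save the uploaded file outside the database...
$storedPath = '/var/files/' . sha1_file($_FILES['upload']['tmp_name']);
move_uploaded_file($_FILES['upload']['tmp_name'], $storedPath);

// ...and keep only its metadata (and path) in a regular table.
$stmt = $pdo->prepare(
    'INSERT INTO files (path, original_name, mime_type, size) VALUES (?, ?, ?, ?)'
);
$stmt->execute([
    $storedPath,
    $_FILES['upload']['name'],
    $_FILES['upload']['type'],
    $_FILES['upload']['size'],
]);
```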
Reading from a DBMS (MySQL is one) is faster in most cases, because it has a built-in cache that keeps data in memory, so the next time you try to read the same data, you will not have to wait on the incredibly slow hard drive.
A DBMS is essentially reading from your hard drive + a cache to speed things up (+ some data-sorting algorithms). Remember, your database is stored on your hard drive :)
It depends on a lot of factors, not least of which is what kind of file system you're using. MySQL uses files for storage anyway, so read speed isn't the issue -- the biggest factor will be how fast MySQL can find your data, compared to how fast it can be looked up in your filesystem.
Generally, though, MySQL is quite good about finding data quickly -- after all, that's its purpose in life. So unless you have a really good reason why the FS should be much faster, stick with the DB and check your indexes and such.
By choosing a custom file storage system you will lose the benefits of using a relational database. Also, your code might not be easily maintainable.
Nonetheless, there are many who believe that relational databases offer too much complexity at the cost of speed. Have a look at the NoSQL entry in wikipedia and read about possible alternatives.
I'm wondering what the best way is to prevent my server from being stressed when I want to get a lot of data using PHP.
Should I convert all my data to an XML file? Use caching?
The data comes from many different databases and different tables.
Any ideas?
Thanks in advance
1. Before you do anything, stress it until it breaks.
2. When it breaks, profile it to identify the breaking point, and work on that.
3. Repeat from point 1 until the performance limits are acceptable.
Neither.
Both add significant complexity to your PHP code. Converting to XML has absolutely no added value - indeed, not only is there an additional cost of encoding to XML, there is also a cost for decoding it. As for actually storing your data in XML - that is plain ridiculous.
The MySQL DBMS provides very effective result caching. The underlying OS should provide very good I/O caching. You gain nothing by adding your own caching layer except slower code and hotter CPUs.
If you have an OLTP type database and are routinely collating large sets of data, then you might want to think about pre-consolidating it or implementing an OLAP schema within your database.
OTOH, if you simply want to improve the performance of your system, then look elsewhere - particularly at your data schema.
C.
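As a rough illustration of the pre-consolidation idea above (table and column names are invented; the summary table is assumed to have a unique key on stat_date), a scheduled job can aggregate the OLTP rows so reports never touch the raw table:

```php
<?php
// Rebuild a daily summary from the raw OLTP rows (run from cron, not per request).
$pdo->exec('
    REPLACE INTO order_totals_daily (stat_date, total_orders, total_amount)
    SELECT DATE(created_at), COUNT(*), SUM(amount)
    FROM orders
    GROUP BY DATE(created_at)
');
```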
I think caching will be the way to go, especially if this data doesn't change very often :)
So I'm going to be working on a homemade blog system in PHP, and I was wondering which way of storing data is the fastest. I could go in the MySQL direction, or I could go with my own little way of doing it, which is storing all of the information (encoded as JSON) in files.
Which way would be the fastest, MySQL or JSON files?
For a small, single user 'database', a file system would likely be quicker - as the size and complexity grows, a database server like MySQL or SQL Server is hard to beat.
I would definitely choose a DB option (as you need to be able to search and index stuff). But that does not mean you need a fully realized, separate DB service.
MySQL is definitely the more scalable solution.
But the downside is you need to set up and maintain a separate service.
On the other hand, there are DBs that are file-based and still give you access with standard SQL - SQLite (sqlite.org) jumps to mind. You get the advantages of SQL, but you do not need to maintain a separate service. The disadvantage is that they are not as scalable.
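A minimal sketch of that middle ground with PDO's SQLite driver (file path and schema are invented):

```php
<?php
$db = new PDO('sqlite:' . __DIR__ . '/blog.sqlite');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$db->exec('CREATE TABLE IF NOT EXISTS posts (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    title TEXT NOT NULL,
    body  TEXT NOT NULL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
)');

$stmt = $db->prepare('INSERT INTO posts (title, body) VALUES (?, ?)');
$stmt->execute(['Hello', 'First post.']);
```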
I would choose a MySQL database - simply because it's easier to manage.
JSON is not really a format for storage; it's for sending data to JavaScript. If you want to store data in files, look into XML or serialized PHP (which I suspect is what you are after, rather than JSON).
Forgive me if this doesn't answer your question very directly, but since it is a home-cooked blog system, is it really worth spending time right now thinking about which storage backend is faster?
You're not going to be looking at 10,000 concurrent users from day 1, and it doesn't sound like it will need to scale to any meaningful degree in the foreseeable future.
Why not just stick with MySQL as a sensible choice rather than a fast one? If you really want some sense that you designed for speed, maybe bolt SQLite on instead.
Since you are thinking you may not have the need for a complex relational structure, this might be a fun opportunity to try something more down the middle.
Check out CouchDB, it is a document-based, schema free database (yet still indexable). The database is made of documents that contain named fields (think key-value pairs).
Have fun....
Though I don't know for certain, it seems to me that a MySQL database would be a lot faster, especially as the amount of data gets larger and larger.
Also, using MySQL with PHP is super easy, especially if you use an abstraction class like ezSQL. ezSQL makes working with a database really simple and I think you'd be creating more unnecessary work for yourself by going the home-brewed JSON direction.
I've done both. I like files for very simple problems and databases for complicated problems.
For file solutions, note these problems as the number of files increases:
1) Much more disk space is used than you might expect, because even tiny files use up a whole block. Blocks are fairly large on filesystems which support large drives.
2) Most filesystems get very slow when the number of files in a directory gets very large. My solution to this (assuming the names of the files are reasonably spread out across the alphabet) is to create a directory named after the first two letters of the filename. Thus, the file "animal.txt" would be found at an/animal.txt. This works surprisingly well. If your filenames are not reasonably well-distributed across the alphabet, use some sort of hashing function to create the directories. It sounds a little crazy, but this can work very, very well, and I've used it for very fast solutions with tens of thousands of files (a small sketch follows this answer).
But the file solutions really only fit sometimes. Unless you have a great reason to go with files, use a database.
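A small sketch of that two-letter directory scheme, with the hash fallback mentioned above (base directory and permissions are assumptions):

```php
<?php
function shard_path($baseDir, $filename)
{
    // First two letters of the name become the directory ("animal.txt" -> "an/").
    $prefix = strtolower(substr($filename, 0, 2));

    // Fall back to a hash prefix when the name doesn't yield two plain letters.
    if (!preg_match('/^[a-z]{2}$/', $prefix)) {
        $prefix = substr(md5($filename), 0, 2);
    }

    $dir = $baseDir . '/' . $prefix;
    if (!is_dir($dir)) {
        mkdir($dir, 0775, true);
    }
    return $dir . '/' . $filename;
}

echo shard_path('/var/data', 'animal.txt'); // /var/data/an/animal.txt
```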
This is really cool. It's a PHP class that controls a flat-file database with queries http://www.fsql.org/index.php
For blogs, I recommend caching the pages, since blogs usually have mostly static content. This way the queries only get run once, when the cache is built. You can update the cached pages when a new blog post is added.
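A bare-bones sketch of that kind of page cache (the cache directory, the one-hour lifetime and render_blog_post() are all made up): serve the stored HTML if it is fresh, otherwise render, save and serve.

```php
<?php
$cacheFile = __DIR__ . '/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';

// Serve the cached copy if it is fresh enough (here: one hour).
if (is_file($cacheFile) && filemtime($cacheFile) > time() - 3600) {
    readfile($cacheFile);
    exit;
}

// Otherwise render the page, store the HTML, and send it.
ob_start();
render_blog_post();                 // hypothetical rendering function
$html = ob_get_clean();
file_put_contents($cacheFile, $html);
echo $html;
```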