sqlite permanently in-memory - php

Is PHP able to handle SQLite data as an in-memory DB?
I have a <50MB database and would like a PHP script to run SELECTs (and if possible also UPDATEs) against the SQLite database without slow disk reads and writes each time the script is run.
With Java and C++ I know good ways to do this, but how do I get PHP to access the SQLite database in memory without reloading the file again and again?

There are multiple ways to do it:
Do nothing, and let the OS cache the database in disk caches / memory buffers. This is good if you have a small database (and <50 MB is small), and if you have lots of memory.
Use a tmpfs and copy your database file in it, then open it in PHP.
Use sqlite::memory: (but you will start from a blank database).
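As a minimal sketch of the last two options (assuming PDO with the SQLite driver, and that /mnt/ram is a tmpfs mount you created yourself - both assumptions, not part of the question):

<?php
// Option 2: open a database file that lives on a tmpfs mount.
// /mnt/ram/mydb.sqlite is a hypothetical path; copy your DB file there first.
$disk = new PDO('sqlite:/mnt/ram/mydb.sqlite');

// Option 3: a pure in-memory database. Fast, but it starts empty and
// disappears when this connection is closed.
$mem = new PDO('sqlite::memory:');
$mem->exec('CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)');
$mem->exec("INSERT INTO t (name) VALUES ('hello')");

foreach ($mem->query('SELECT id, name FROM t') as $row) {
    echo $row['id'], ' ', $row['name'], PHP_EOL;
}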

In-memory SQL databases might be what you are looking for.

Related

keep database in memory, even if client script is disconnected

I need to preprocess some statistics data before writing it to the main database. My php-cli script retrieves data every 10 minutes and should store it somewhere. Every hour all the saved data is preprocessed again and then written to the main DB.
I thought SQLite would be a nice solution if I could keep it in memory. I don't have very big amounts of data (I am able to keep it all in RAM).
Actually, I am new to SQLite (previously I worked only with MySQL).
I found here that I can use :memory: instead of a file name to work with memory only. But after the client disconnects, the database is destroyed, and if I try to connect to :memory: again from my script, it will be a different (new, empty) database.
Is that right? If it is, how can I work with the same database across different PHP script calls, if the SQLite data is stored in memory?
P.S. Perhaps the solution is to put the SQLite file into some "magic" directory? (I am using Linux.)
First, Linux is pretty smart about using a filesystem cache. Thus, reading data from disk is often surprisingly fast, and you should measure whether the performance gain is worth it.
If you want to go ahead, one method you might consider is using a ramdisk. Linux provides a way to create a filesystem in memory, and you can place your SQLite file there.
# mkdir -p /mnt/ram
# mount -t tmpfs -o size=20m tmpfs /mnt/ram
# touch /mnt/ram/mydb.sql
# chown apache:apache /mnt/ram/mydb.sql
... or something similar.
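Once the file exists on the ramdisk, opening it from PHP is no different from opening any other SQLite file. A sketch, assuming the SQLite3 extension and the mount point created above:

<?php
// Open the database file sitting on the RAM-backed filesystem.
// The path matches the mount commands above; adjust it to your setup.
$db = new SQLite3('/mnt/ram/mydb.sql');
$db->exec('CREATE TABLE IF NOT EXISTS stats (ts INTEGER, value REAL)');
$db->exec('INSERT INTO stats (ts, value) VALUES (' . time() . ', 1.5)');

$row = $db->query('SELECT COUNT(*) AS n FROM stats')->fetchArray(SQLITE3_ASSOC);
echo $row['n'], PHP_EOL;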
An SQLite database created in :memory: is automatically destroyed when you disconnect from the database handle or exit/terminate the process that was using it.
If you want multiple PHP sessions to have access to the database, you cannot use :memory:. If you don't have big amounts of data, you should store your intermediate results in helper tables in some persistent database - MySQL, or SQLite backed by a real file on disk.

Writing a SQLite db loaded using :memory: to disk

I'm generating quite a substantial (~50MB) SQLite3 database in memory that I need to write to disk once the generation of said database is complete. What is the best way of approaching this using PHP?
I have tried creating a structurally identical SQLite3 db on disk and then using INSERTs to populate it, but it is far too slow. I have also drawn a blank looking at the online PHP SQLite3 docs.
What I have found is the SQLite3 Backup API, but I'm not sure how best to interface with it from PHP. Any ideas?
The backup API is not available in PHP.
If you wrap all INSERTs into a single transaction, the speed should be OK.
You could avoid the separate temporary database and make the disk database almost as fast by increasing the page cache size to be larger than 50 MB, disabling journaling, and disabling synchronous writes.
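A sketch combining both suggestions (the table name, the $rows variable, and the pragma values are illustrative assumptions, not tuned numbers): one transaction around the bulk INSERTs, with journaling and synchronous writes disabled on the target database.

<?php
$db = new SQLite3('/path/to/disk.db');    // the structurally identical disk DB

// Trade durability for speed while bulk-loading:
$db->exec('PRAGMA journal_mode = OFF');   // no rollback journal
$db->exec('PRAGMA synchronous = OFF');    // no fsync on every write
$db->exec('PRAGMA cache_size = -51200');  // ~50 MB page cache (negative = KiB)

$db->exec('BEGIN');
$stmt = $db->prepare('INSERT INTO items (id, payload) VALUES (:id, :payload)');
foreach ($rows as $r) {                   // $rows: your generated data
    $stmt->bindValue(':id', $r['id'], SQLITE3_INTEGER);
    $stmt->bindValue(':payload', $r['payload'], SQLITE3_TEXT);
    $stmt->execute();
    $stmt->reset();
}
$db->exec('COMMIT');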

How to speed up exporting large Excel file using php and SQL Server

I am trying to export an Excel file with more than 100k records in PHP, but it takes more than an hour to export the record set.
I am using SQL Server as my database.
Is there any way to speed up the entire process?
One option would be to minimize disk IO, which means grouping multiple writes to the file into one batch, such as writing 100 records at once instead of just 1.
This is easily possible if you're writing CSV files, but I'm not sure whether the PHP Excel library you're using exposes this implementation detail.
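For the CSV case, batching might look like the following sketch (the 100-row batch size and field layout are arbitrary; note that fputcsv() would handle quoting correctly, but it writes one row per call):

<?php
$fh = fopen('export.csv', 'w');
$buffer = '';
$count = 0;

foreach ($records as $record) {    // $records: rows fetched from SQL Server
    $buffer .= implode(',', $record) . "\n";   // naive CSV, no escaping
    if (++$count % 100 === 0) {    // flush 100 rows with a single write
        fwrite($fh, $buffer);
        $buffer = '';
    }
}
fwrite($fh, $buffer);              // write whatever is left over
fclose($fh);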
Another option would be saving your file on a RamDisk drive, a small filesystem allocated in RAM and mapped as a disk drive.
On Unix/Linux platforms, /dev/shm is a shared-memory filesystem; reading and writing there is much, much faster than HDD disk IO.
If you're using Windows, there are RamDisk solutions as well; you'd have to download some third-party software that implements a Windows disk driver.

Any ideas on implementing a ring buffer using MySQL?

First, what I intend to do is to use memory to store the most recent "user update" records for each user.
I am new to MySQL. How can I create tables in memory?
The official website says we can set ENGINE = MEMORY when creating a table, but the documentation seems to claim those in-memory tables are only for reading, not for writing.
I simply have no idea how to do that.
I have been stuck on this problem for a few days. I can't install memcache or any PHP extension on the server since I'm not using a Virtual Private Server; all I can do is transfer scripts and files into the httpdocs folder... I have also tried using flat files to store data to act as a buffer/cache, but I found that I cannot write or create files in the server's file directory due to denied permission, and I am not allowed to change this permission.
Using MySQL as a buffer may be the only choice left for me. I hope someone can give me some hints.
Thank you.
P.S. I am using a Linux Apache server running PHP, with MySQL as the DB.
ENGINE = MEMORY tables can be used for both reads and writes.
The only thing to be careful of is that all data in a memory table disappears when the server crashes, is turned off, rebooted, etc. (As you would expect for an in-memory table.)
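For example, a sketch with made-up table and column names, assuming an existing PDO connection to MySQL:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

// ENGINE = MEMORY is the only special part; the table is readable and
// writable like any other, but its contents live in RAM only.
$pdo->exec('CREATE TABLE recent_updates (
    user_id INT NOT NULL,
    updated_at DATETIME NOT NULL,
    note VARCHAR(255)
) ENGINE = MEMORY');

$pdo->exec("INSERT INTO recent_updates VALUES (1, NOW(), 'first update')");
echo $pdo->query('SELECT COUNT(*) FROM recent_updates')->fetchColumn(), PHP_EOL;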
You really should read carefully about the MEMORY engine of MySQL. The data is stored in RAM, so when the server is powered off or rebooted, the RAM is cleared and the data is wiped. A MEMORY table should be the fastest-access table type in MySQL, but it only holds temporary data, with no guarantees.
If I understood right, you are trying to make a static cache of some sort of data generated from PHP, aren't you? The easiest way is to write it as a plain file cache in your www directory, either HTML or JS. If you can't chmod your directory to be writable, then storing it in MySQL should be fine too, but only if that actually helps.
The idea of caching data is to reduce SQL queries, disk I/O, and code generation. But a MEMORY table costs a lot of memory. Storing the cache in a normal MyISAM table should be fine too, and saves you a lot of background work.
However, there are two things to consider: 1, whether the cache exists when it is accessed; 2, whether the cache is up to date.
Giving your result some sort of key is a good idea: PHP checks for the cached data first and, if it does not exist, generates the cache and then displays it; otherwise it displays the cache directly.
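A sketch of that check-then-generate pattern (the cache_entries table, its columns, and the function name are all hypothetical; REPLACE assumes cache_key is the primary key):

<?php
// $pdo: an existing PDO connection. cache_entries is a hypothetical MyISAM
// table with columns (cache_key, created_at, payload), keyed on cache_key.
function getCached(PDO $pdo, string $key, int $ttl, callable $generate): string
{
    $stmt = $pdo->prepare(
        'SELECT payload FROM cache_entries
         WHERE cache_key = ? AND created_at > NOW() - INTERVAL ? SECOND'
    );
    $stmt->execute([$key, $ttl]);
    $hit = $stmt->fetchColumn();
    if ($hit !== false) {
        return $hit;                       // fresh cache: display it directly
    }
    $payload = $generate();                // missing or stale: rebuild
    $pdo->prepare('REPLACE INTO cache_entries VALUES (?, NOW(), ?)')
        ->execute([$key, $payload]);
    return $payload;
}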

What is faster, flat files or a MySQL RAM database?

I need a simple way for multiple running PHP scripts to share data.
Should I create a MySQL DB with a RAM storage engine, and share data via that (can multiple scripts connect to the same DB simultaneously?)
Or would flat files with one piece of data per line be better?
Flat files? Nooooooo...
Use a good DB engine (MySQL, SQLite, etc). Then, for maximum performance, use memcached to cache content.
In this way, you have the ease and reliability of sharing data between processes using proven server software that handles concurrency, etc., and you also get the speed of having your data cached.
Keep in mind a couple things:
MySQL has a query cache. If you are issuing the same queries repeatedly, you can gain a lot of performance without adding a caching layer.
MySQL is really fast anyway. Have you load-tested to demonstrate it is not fast enough?
Please don't use flat files, for the sanity of the maintainers.
If you're just looking to have shared data, as fast as possible, and you can hold it all in RAM, then memcached is the perfect solution.
If you'd like persistence of data, then use a DBMS, like MySQL.
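A minimal sketch of using memcached for that, assuming the pecl/memcached extension and a local memcached daemon (the address and key name are illustrative):

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);   // assumed local memcached instance

// Any PHP process on this machine can now read and write the same value.
$mc->set('shared_counter', 42, 300);  // expires after 300 seconds
$value = $mc->get('shared_counter');
if ($value === false && $mc->getResultCode() === Memcached::RES_NOTFOUND) {
    $value = 0;                       // key missing or expired
}
echo $value, PHP_EOL;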
Generally, a DB is better; however, if you are sharing a small, mostly static amount of data, there might be performance benefits (and simplicity) in doing it with flat files.
For anything beyond trivial data sharing, though, I would pick a DB.
1- Where flat files can be useful:
Flat files can be faster than a database, but only in very specific applications.
They are faster if the data is read from start to finish without any searching or writing.
If the data doesn't fit in memory and needs to be read fully to get the job done, a flat file 'can' be faster than a database. Flat files also shine when there are many more writes than reads: most default database setups make read queries wait for writes to finish in order to maintain indexes and foreign keys, which usually makes write queries slower than simple reads.
TL;DR version:
Use flat files for job-based systems (e.g. simple log parsing), not for web search queries.
2- Flat file pitfalls:
If you're going with a flat file, you will need to synchronize your scripts when the file changes, using a custom locking mechanism. That can lead to slowdowns, corruption, and even deadlock if you have a bug.
3- RAM-based database?
Most databases keep in-memory caches for query results and search indexes, making them very hard to beat with a flat file. Because they already cache in memory, making a database run entirely from memory is usually ineffective and dangerous; it is better to properly tune the database configuration.
If you're looking to optimize performance using RAM, I would first look at serving your PHP scripts, HTML pages, and small images from a RAM drive, where the caching is more likely to be crude and to hit the hard drive systematically for static data that never changes.
Better results can be reached with a load balancer, clustering with backplane connections, up to a RAM-based SAN array. But that's a whole other topic.
4- Can multiple scripts connect to the same DB simultaneously?
Yes, it's called connection pooling. In PHP (client side), the function that opens a persistent connection is mysql_pconnect (http://php.net/manual/en/function.mysql-pconnect.php).
You can configure the maximum number of open connections in php.ini, I think. A similar setting on the MySQL server side defines the maximum number of concurrent client connections, in /etc/mysql/my.cnf.
You must do this in order to take advantage of parallel processing by the CPU and to avoid PHP scripts waiting for each other's queries to finish. It greatly increases performance under heavy load.
There is also a connection pool/thread pool in the Apache configuration for regular web clients. See httpd.conf.
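For reference, the two knobs mentioned above look roughly like this (values are illustrative, and mysql_pconnect belongs to the old ext/mysql era, so treat this as a sketch of that setup):

; php.ini - cap persistent MySQL connections per PHP process
mysql.max_persistent = 20

# /etc/mysql/my.cnf - server-side cap on concurrent client connections
[mysqld]
max_connections = 200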
Sorry for the wall of text, was bored.
Louis.
If you're running them on multiple servers, a filesystem-based approach will not cut it (unless you've got a consistent shared filesystem, which is unlikely and may not be scalable).
Therefore you'll need a server-based database anyway to allow the sharing of data between web servers. If you're serious about either performance or availability, your application will support multiple web servers.
I would say that the MySQL DB would be the better choice, unless you have some mechanism in place to deal with locks on the flat files (and some way to control access). In this case, the DB layer (regardless of the specific DBMS) acts as an indirection layer, letting you not worry about locking.
Since the OP doesn't specify a web server (and PHP can actually run from a command line), I'm not certain that the caching technologies are what they're after here. The OP could be looking to do some sort of on-the-fly data transform that isn't website driven. Who knows.
If your system has a PHP cache (that caches compiled PHP code in memory, like APC), try putting your data into a PHP file, as PHP code. If you have to write data, there are some security issues.
"I need a simple way for multiple running PHP scripts to share data."
APC and memcached are both good options, depending on context. Shared memory may also be an option.
"Should I create a MySQL DB with a RAM storage engine, and share data via that (can multiple scripts connect to the same DB simultaneously?)"
That's also a decent option, but will probably not be as fast as APC or memcached.
"Or would flat files with one piece of data per line be better?"
If this is read-only data, that's a possibility, but it may be slower than any of the options above, especially if the data is large. Rather than writing custom parsing code, however, consider simply building a PHP array and include()-ing the file.
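A sketch of that include() trick, with hypothetical file names:

<?php
// data.php (hypothetical) contains nothing but PHP code returning the data:
//     <?php
//     return ['rates' => [1.0, 1.5, 2.25]];
// With an opcode cache such as APC, the compiled file is served from memory
// after the first include, so readers never parse anything or touch the disk.
$data = include __DIR__ . '/data.php';
echo $data['rates'][2], PHP_EOL;      // prints 2.25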
If this is a datastore that may be accessed by several writers simultaneously, by all means do NOT use a flat file! Writing to a flat file from multiple processes is likely to lead to file corruption. You can lock the file, but you risk lock contention issues, and long lock wait times.
Handling concurrent writes is the reason applications like mysql and memcached exist.
