I need to preprocess some statistics data before writing it to the main database. My php-cli script retrieves data every 10 minutes and should store it somewhere. Every hour, all of the saved data is preprocessed again and then written to the main DB.
I thought SQLite would be a nice solution if I kept it in memory. The amount of data is not very big (I am able to keep it all in RAM).
Actually, I am new to SQLite (previously I worked only with MySQL).
I found here that I can use :memory: instead of a file name to work in memory only. But after the client disconnects, the database is destroyed, and if I try to connect to :memory: again from my script, it will be a different (new, empty) database.
Is that right? If it is, how can I work with the same database across different PHP script calls if SQLite is stored in memory?
P.S. Perhaps the solution is to put the SQLite file in some "magic" directory? (I am using Linux.)
First, Linux is pretty smart about using a filesystem cache. Thus, reading data from disk is often surprisingly fast, and you should measure whether the performance gain is worth it.
If you want to go ahead, one method you might consider is a ramdisk. Linux provides a way to create a filesystem in memory, and you can place your SQLite file there.
# mkdir -p /mnt/ram
# mount -t tmpfs -o size=20m tmpfs /mnt/ram
# touch /mnt/ram/mydb.sql
# chown apache:apache /mnt/ram/mydb.sql
... or something similar.
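Once the file lives on the ramdisk, PHP opens it like any ordinary SQLite file. A minimal sketch using PDO, assuming the /mnt/ram/mydb.sql path from the commands above:
<?php
// Open the RAM-backed SQLite file; to PHP it is just a normal file.
$db = new PDO('sqlite:/mnt/ram/mydb.sql');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
// The data persists between script calls (until unmount or reboot).
$db->exec('CREATE TABLE IF NOT EXISTS stats (ts INTEGER, value REAL)');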
A SQLite database created in :memory: is automatically destroyed when you disconnect from the database handle or when the process that was using it exits or terminates.
If you want multiple PHP processes to have access to the database, you cannot use :memory:. Since you don't have big amounts of data, you should store your intermediate results in helper tables in some persistent database - MySQL, or SQLite using a real file on disk.
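For the 10-minute collector / hourly aggregator split from the question, the helper-table approach could look roughly like this; the file path, table name, and fetchCurrentValue() helper are assumptions:
<?php
// Collector, run every 10 minutes: append raw samples to a helper table
// in a file-backed SQLite database that outlives each script call.
$db = new PDO('sqlite:/var/lib/myapp/buffer.sqlite');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE IF NOT EXISTS raw_stats (ts INTEGER, value REAL)');
$stmt = $db->prepare('INSERT INTO raw_stats (ts, value) VALUES (?, ?)');
$stmt->execute([time(), fetchCurrentValue()]); // hypothetical data source

// Aggregator, run hourly: preprocess everything collected so far,
// write the result to the main DB, then clear the buffer.
$rows = $db->query('SELECT ts, value FROM raw_stats')->fetchAll(PDO::FETCH_ASSOC);
// ... preprocess $rows and write the result to the main database ...
$db->exec('DELETE FROM raw_stats');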
Is PHP able to handle SQLite data as an in-memory DB?
I have a <50MB database and would like a PHP script to do SELECTs (and if possible also UPDATEs) against the SQLite database without slow disk reads or writes each time the script is run.
With Java and C++ I know great ways to do this, but how do I get PHP to access the SQLite data in memory without reloading the file again and again?
There are multiple ways to do it:
Do nothing, and let the OS cache the database in disk caches / memory buffers. This is good if you have a small database (and <50 MB is small), and if you have lots of memory.
Use a tmpfs and copy your database file into it, then open the copy in PHP (see the sketch after this list).
Use sqlite::memory: (but you will start from a blank database).
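A minimal sketch of the tmpfs option from point 2; the mount point and file names are assumptions:
<?php
// Copy the on-disk database into a tmpfs mount once, then open the copy.
$src = '/var/data/mydb.sqlite'; // assumed on-disk location
$ram = '/mnt/ram/mydb.sqlite';  // assumed tmpfs mount point
if (!file_exists($ram)) {
    copy($src, $ram);           // one-time copy into RAM
}
$db = new PDO('sqlite:' . $ram); // all reads/writes now hit RAM
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
Keep in mind that writes to the tmpfs copy are lost on reboot, so if you also UPDATE, sync the file back to disk from time to time.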
In-memory SQL databases might be what you are looking for.
First, what I intend to do is to use memory to store the most recent "user update" records for each user.
I am new to MySQL. How can I create tables in memory?
The official website says we can set ENGINE = MEMORY when creating a table, but the documentation seems to say those in-memory tables are for reading only, not for writing.
I have simply no idea how to do that.
I have been stuck on this problem for a few days. I can't install memcache or any PHP extension on the server because I'm not using a Virtual Private Server; all I can do is transfer scripts and files into the httpdocs folder. I have also tried using flat files as a buffer/cache, but I found that I cannot write or create files in the server's directories due to denied permissions, and I am not allowed to change those permissions.
Using MySQL to buffer may be the only choice left for me. Hope someone can give me some hints.
Thank you.
P.S. I am using a Linux Apache server running PHP, with MySQL as the DB.
ENGINE = MEMORY tables can be used for both reads and writes.
The only thing to be careful of is that all data in a memory table disappears when the server crashes, is turned off, rebooted, etc. (As you would expect for an in-memory table.)
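A quick sketch of creating and writing to a MEMORY table from PHP; the connection details, table, and columns are placeholders:
<?php
$db = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
// A MEMORY table is created like any other table and accepts
// INSERT/UPDATE/DELETE as well as SELECT.
$db->exec('CREATE TABLE IF NOT EXISTS recent_updates (
    user_id INT NOT NULL,
    updated_at DATETIME NOT NULL,
    payload VARCHAR(255),
    PRIMARY KEY (user_id)
) ENGINE = MEMORY');
// Keep only the most recent update per user.
$stmt = $db->prepare('REPLACE INTO recent_updates (user_id, updated_at, payload)
                      VALUES (?, NOW(), ?)');
$stmt->execute([42, 'example update']);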
You really should read carefully about the MEMORY engine of MySQL. The data is stored in RAM, so when the server is powered off or rebooted the RAM is cleared and the data is wiped. MEMORY should be the fastest table type to access in MySQL, but it only holds temporary data, with no durability guarantee.
If I understood right, you are trying to make a static cache of some sort of data generated by PHP, aren't you? The easiest way is to write it out as a solid file cache in your www directory, either HTML or JS. If you can't chmod your directory to be writable, then storing it in MySQL should be fine too, but only if that actually helps.
The idea of caching data is to reduce SQL queries, disk I/O, and code generation. But a MEMORY table costs a lot of memory. Storing the cache in a normal MyISAM table should be fine too, and will save you a lot of background work.
However, there are two things to consider: 1) whether the cache exists when it is accessed; 2) whether the cache is up to date.
Giving your result some sort of key is a good idea: the PHP code checks for cached data first; if it doesn't exist, it generates the cache and then displays it; otherwise, it displays the cache directly.
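A sketch of that check-then-generate pattern against a cache table in MySQL; the table, key scheme, and TTL are assumptions:
<?php
// $db is an open PDO connection; cache(cache_key, value, created_at)
// is a hypothetical MyISAM table with cache_key as its primary key.
function getCached(PDO $db, string $key, int $ttl, callable $generate): string
{
    // 1) Does a fresh cache entry exist?
    $stmt = $db->prepare('SELECT value FROM cache
                          WHERE cache_key = ? AND created_at > NOW() - INTERVAL ? SECOND');
    $stmt->execute([$key, $ttl]);
    $hit = $stmt->fetchColumn();
    if ($hit !== false) {
        return $hit; // 2) cache is up to date: display it directly
    }
    // Missing or stale: generate, store, then display.
    $value = $generate();
    $upd = $db->prepare('REPLACE INTO cache (cache_key, value, created_at)
                         VALUES (?, ?, NOW())');
    $upd->execute([$key, $value]);
    return $value;
}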
I've read in some blogs that Apache and php-cli don't share APC data because they run in different processes...
But I need to use the same cached data from Apache (user in browser) and from cron processes (php-cli).
How can I do it?
I've tried to access some keys from php-cli and it really can't get them.
Any ideas?
It's not possible using APC's data cache. The data lives in shared memory that is only available inside Apache. The only alternative would be to use some sort of external storage. Depending on your exact needs, this could be as simple as a text file, or as complex as a relational database, a NoSQL database, or another key-value store like memcached.
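As an illustration of the simplest option, a sketch of a file-backed store that both mod_php and php-cli can read and write; the path and JSON format are assumptions, and flock() keeps concurrent writers from corrupting the file:
<?php
const CACHE_FILE = '/tmp/shared-cache.json'; // assumed shared path

function cacheSet(string $key, $value): void
{
    $fp = fopen(CACHE_FILE, 'c+'); // create if missing, don't truncate
    flock($fp, LOCK_EX);           // exclusive lock for writing
    $raw = stream_get_contents($fp);
    $data = $raw ? json_decode($raw, true) : [];
    $data[$key] = $value;
    ftruncate($fp, 0);
    rewind($fp);
    fwrite($fp, json_encode($data));
    flock($fp, LOCK_UN);
    fclose($fp);
}

function cacheGet(string $key)
{
    if (!is_readable(CACHE_FILE)) {
        return null;
    }
    $fp = fopen(CACHE_FILE, 'r');
    flock($fp, LOCK_SH);           // shared lock for reading
    $data = json_decode(stream_get_contents($fp), true) ?: [];
    flock($fp, LOCK_UN);
    fclose($fp);
    return $data[$key] ?? null;
}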
I have a pretty large db in MySQL, and I need to take backups of it every day or so.
I need to be able to take backups from any computer, so I thought about making a PHP script to do this and putting it online (of course with password protection and authorization etc., so that only I can access it).
I wonder however, how is this done properly?
What commands should I use, and is it possible to change settings of the backup (for instance Add AUTO_INCREMENT value = true)?
I would appreciate examples...
Also, if this is a bad method (unsafe, or prone to producing bad backups / broken SQL files), what other method would be preferred?
I have shell-access and I have a VPS (ubuntu server).
My MySQL version is 5.1.
Thanks
There's no need to involve PHP in the database backup. You just need a script that uses mysqldump to back up the database, and a cron job to execute the script periodically:
mysqldump db_name > backup-file.sql
...will back up your database to a file by redirecting the output of mysqldump to the specified file name.
Peter brought up a good point: on its own, that command only gives you a single backup file, because each run overwrites the previous one. Using the day of the week in the file name gives you a rolling set of backups going back seven days:
CURRENT_DAY_OF_WEEK=`date '+%u'`
FILENAME="mysqlbackup_${CURRENT_DAY_OF_WEEK}.sql"
mysqldump db_name > "$FILENAME"
Also be aware that file permissions apply - the script can't write the file if the user executing it doesn't have write permission on the target folder.
I agree with OMG Ponies: mysqldump plus a script is the way to go.
The only other option I use is to set up a slave server. This provides an almost instant backup against hardware failure and can be located in a different building from your main server. Unless you have a large number of writes to the database, you don't necessarily need a very powerful server, as it is not processing queries, only replaying database updates.
I've built a simple CMS with an admin section. I want to create something like a backup system which would take a backup of the information on the website, and also a restore system which would restore backups previously taken. I assume the backups would be SQL files generated from the tables used.
I'm using PHP and MySQL here - any idea how I can do this?
One simple solution for backup:
Call mysqldump from PHP (http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html) and save the backup file somewhere convenient.
This is a safe solution, because mysqldump comes with any MySQL installation, and it's a reliable way to generate a standard SQL script.
It's also safe to save the whole database.
Be careful with BLOBs, like images saved inside the database
Be careful with UTF-8 data
One simple solution for restore:
The restore operation can be done with the saved script.
First, disable access to the site/app.
Take down the database.
Restore the database by piping the saved script into the mysql command-line client. Note that mysqlimport (http://dev.mysql.com/doc/refman/5.0/en/mysqlimport.html) is for importing tab-delimited data files, not for replaying SQL dumps.
Calling external applications from PHP:
http://php.net/manual/en/book.exec.php
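Putting those pieces together, a sketch of triggering the dump (and the restore) from PHP; the database name, credentials, and paths are placeholders:
<?php
// Backup: run mysqldump and save the output somewhere convenient.
// escapeshellarg() keeps the values from breaking the shell command.
$dbName   = 'my_cms';
$dumpFile = '/var/backups/my_cms-' . date('Ymd-Hi') . '.sql';
$cmd = sprintf(
    'mysqldump --user=%s --password=%s %s > %s',
    escapeshellarg('backup_user'),
    escapeshellarg('secret'),
    escapeshellarg($dbName),
    escapeshellarg($dumpFile)
);
exec($cmd, $output, $exitCode);
if ($exitCode !== 0) {
    die('Backup failed');
}
// Restore: replay the saved SQL script with the mysql client, e.g.
// exec('mysql --user=... --password=... my_cms < ' . escapeshellarg($dumpFile));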
If you're on Linux/Unix, check out "man pg_dump" - the PostgreSQL equivalent. It basically dumps whole databases (with all tables and table definitions), e.g. "pg_dump mydb > db.sql".
Create a script that calls mysqldump
Is it just for yourself or do you want it as a feature of the CMS?
I would advise using MySQL Administrator from www.mysql.com to create backups. It's a very useful tool which you can use to schedule backups of external databases you have access to. It also lets you restore, and to be selective about which tables to include.
If it's for a feature of the CMS then a) I'm not sure that's a great plan and b) one of the above should do it!
For the backup part you can use a ready-made solution. Check out astrails-safe for mysql/pgsql/plain_files/subversion backup with encryption and upload to S3/SFTP.
It shouldn't be too hard to add the restore capability (basically just decrypt, decompress, and pipe the data to the appropriate command). For MySQL (without the encryption):
zcat mysqldump-blog.090820-0000.sql.gz | mysql database_name