Translate PHP site through DB or local files? - php

I have a PHP database-driven website that uses a lot of Flash for user interaction.
I need to make it multilingual, with 20+ languages.
The site is quite large and gets a lot of users every day.
Another developer I work with says we should store the translations in local files, e.g. /lang/english.php, /lang/german.php, etc.
Since the database is on the same dedicated server, I was thinking there should not be a slowdown. Which approach do you think will be faster?

I don't know if it's an option, but you could also use gettext().
That way your translations are stored in local files (faster than a database), and you have the advantage that there are programs like poedit (takes some getting used to...) that you or a translator can use to generate the translation files automatically, so it's a bit easier to maintain than PHP files.
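A minimal gettext() setup might look like the sketch below. The locale layout, domain name, and language are hypothetical, and the helper falls back to returning the original string so the code degrades gracefully when the gettext extension is not loaded or no .mo file is bound:

```php
<?php
// Hypothetical locale layout: ./locale/de_DE/LC_MESSAGES/site.mo, etc.
// .mo files would typically be compiled from .po files with poedit.

function translate(string $msgid): string
{
    if (function_exists('gettext')) {
        // gettext() returns $msgid unchanged when no translation is bound
        return gettext($msgid);
    }
    return $msgid; // fallback when the gettext extension is not loaded
}

if (function_exists('bindtextdomain')) {
    // Pick the visitor's language; 'de_DE' is just an example
    putenv('LC_ALL=de_DE');
    setlocale(LC_ALL, 'de_DE');
    bindtextdomain('site', __DIR__ . '/locale'); // 'site' is a hypothetical domain
    textdomain('site');
}

echo translate('Welcome'); // prints the German string if a .mo file provides one
```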

Local files are a LOT faster than DB content (although you can cache the DB output locally, e.g. in files, memcached, or APC). They are probably not as easy to translate, but they will also help with basic speed of implementation. You should take a look at:
http://framework.zend.com/manual/en/zend.translate.html
You can use just this part of the framework, and it will give you a HUGE boost. It supports both DB-based translation and local files (a lot of adapters).
UPDATE:
thanks Corbin, you are right, it's better to have the direct link.
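The /lang/*.php approach from the question can be sketched like this (the file layout and keys are hypothetical). Each language file simply returns an associative array of translation strings:

```php
<?php
// Hypothetical layout: lang/english.php, lang/german.php, ...
// Each file contains something like:  <?php return ['greeting' => 'Hello'];

function load_language(string $lang, string $dir = __DIR__ . '/lang'): array
{
    // basename() strips any directory parts so user input
    // (e.g. ?lang=../../etc) cannot include arbitrary files
    $file = $dir . '/' . basename($lang) . '.php';
    if (!is_file($file)) {
        $file = $dir . '/english.php'; // fall back to the default language
    }
    return is_file($file) ? (array) include $file : [];
}
```

Since the file is a plain PHP array, an opcode cache (APC, OPcache) will keep it in memory after the first request, which is where most of the speed advantage over a per-request DB query comes from.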

Related

PHP web application working in a read only file system

I have a web application using Yii2 that I would like to make work on a read-only file system.
For the caching components, it is quite easy; there are other options, like the database, Redis, etc.
But I have some specific uses, like HTMLPurifier, which needs access to a runtime folder in order to purify a string.
How do I handle these specific cases? Ditch everything that can't be done in memory or in cloud storage like S3, for example?
Thanks in advance.

PHP Q: How to save config values using external global variables? [duplicate]

How can I save a variable at application level (the same for all users) in PHP, which will get updated after some time?
I've tried to find out about this and found the following solutions:
1. Implement it using file handling.
2. Cache (Memcache or APC).
3. Implement it using database support.
Option 2 is considered the best (AFAIK), but I'm not allowed to install anything on the server.
What about the other two (mentioned above), or any other options, and how can I implement them? I'm a bit concerned because traffic is moderately high (and the bad part is that I still can't use any caching mechanism). We just need to save the contents of a buffer of around 255 bytes at application level.
Any snippets, pointers or help of any sort would be highly appreciated.
Thanks.
You need permanent storage, not a cache or something like that.
If your application doesn't use a database already, there are several options you can choose from:
write to a text file: a simple one-line entry, or preferably a format like XML or JSON
write to a light storage engine like SQLite, or simple storage (Amazon S3)
If your app uses a database already, why not store that data in a separate table?
This is what databases are for. If you don't want to spend a lot of time setting up a large database application, try out sqlite.
Some caches (memcache in particular) are lossy, and most won't survive being restarted. Use a database.
If you do not have the option to use databases you can consider writing the data to a file on disk.
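Writing a ~255-byte value to a file can be sketched as follows (the file name is hypothetical). LOCK_EX keeps concurrent requests from interleaving their writes:

```php
<?php
// Hypothetical storage file; in practice keep it outside the web root.

function save_app_value(string $file, string $value): bool
{
    // Exclusive lock during the write so concurrent requests
    // don't clobber each other's data
    return file_put_contents($file, $value, LOCK_EX) !== false;
}

function load_app_value(string $file, string $default = ''): string
{
    $data = @file_get_contents($file);
    return $data === false ? $default : $data;
}
```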

Is it faster to upload and download images into a MongoDB than on disk

Say we want to develop a photo site.
Would it be faster to upload/download images to/from MongoDB than to/from disk, since MongoDB can save images and files in chunks and store metadata?
So for a photo-sharing website, would it be better (faster) to store the images in MongoDB or on a typical server hard disk?
I'm thinking of using PHP and CodeIgniter, BTW, if that changes the performance issues regarding the question.
Lightweight web servers (lighttpd, nginx) do a pretty good job of serving content from the filesystem. Since the OS acts as a caching layer they typically serve content from memory which is very fast.
If you want to serve images from MongoDB, the web server has to run some sort of script (Python, PHP, Ruby; via FastCGI of course, since you can't start a new process for each image), which has to fetch the data from MongoDB each time the image is requested. So it's going to be slow. The benefits are automatic replication and failover if you use replica sets. If you need this and are clever enough to know how to achieve it with the FS, then go with that option. If you need a very quick implementation that's reliable, then MongoDB might be the faster way to get there. But if your site becomes popular, sooner or later you will have to switch to the FS implementation.
BTW: you can mix these two approaches: store the image in MongoDB to get instant reliability, then replicate it to the FS of a couple of servers to gain speed.
Some test results.
Oh, one more thing: coupling the metadata with the image seems nice until you realize that the generated HTML and the image download are two separate HTTP requests, so you have to query Mongo twice, once for the metadata and once for the image.
When to use GridFS for storing files with MongoDB: the documentation suggests you should. It also sounds fast and reliable, and it is great for backups and replication. Hope that helps.
Several benchmarks have shown MongoDB is approximately 6 times slower for file storage (via GridFS) versus using the regular old filesystem. (One compared apache, nginx, and mongo)
However, there are strong reasons to use MongoDB for file storage despite it being slower -- #1 free backup from Mongo's built-in sharding/replication. This is a HUGE time saver. #2 ease of admin, storing metadata, not having to worry about directories, permissions, etc. Also a HUGE time saver.
Our photo back-end was realized years ago in a huge gob of spaghetti code that did all kinds of stuff (check or create user dir, check or create date dirs, check for name collision, set perms), and a whole other mess did backups.
We've recently changed everything over to Mongo. In our experience, Mongo is a bit slower (it may be 6 times slower, but it doesn't feel like it), and anyway, so what? All that spaghetti is out the window, and the new Mongo+photo code is much smaller and tighter, with simpler logic. We are never going back to the file system.
http://www.lightcubesolutions.com/blog/?p=209
You definitely do not want to download images directly from MongoDB. Even going through GridFS will be (slightly) slower than from a simple file on disk. You shouldn't want to do it from disk either. Neither option is appropriate for delivering image content with high throughput. You'll always need a server-side caching layer for static content between your origin/source (be it mongo or the filesystem) and your users.
So with that in mind, you are free to pick whatever works best for you, and MongoDB's GridFS provides quite a few features for free that you'd otherwise have to build yourself when working directly with files.
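Streaming an image out of GridFS with the official PHP library can be sketched as below. The database name and MIME type handling are hypothetical, and the sketch is guarded so it only touches MongoDB when the mongodb/mongodb library is actually installed:

```php
<?php
// Requires: composer require mongodb/mongodb  (plus the mongodb extension).
// Files would have been stored earlier via the bucket's upload methods.

function stream_image(string $filename): bool
{
    if (!class_exists('MongoDB\\Client')) {
        return false; // library/driver not installed
    }
    $bucket = (new MongoDB\Client('mongodb://localhost:27017'))
        ->selectDatabase('photos')   // hypothetical database name
        ->selectGridFSBucket();
    // Opens a read stream over the file's chunks
    $stream = $bucket->openDownloadStreamByName($filename);
    header('Content-Type: image/jpeg'); // in practice, store the MIME type in metadata
    fpassthru($stream);
    return true;
}
```

Note that this is exactly the per-request script overhead the answer above warns about, which is why a caching layer in front of it matters.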


Storing frequently accessed data in a file rather than MySQL

I'm working on a PHP content management system and, in testing, have noticed that quite a few of the system's MySQL tables are queried on almost every page but are very rarely written to. What I'm wondering is will this start to weigh heavily on the database as site traffic increases, and how can I solve/prevent this?
My initial thoughts were to start storing some of the more static data in files (using PHP serialization) but does this actually reduce server load? What I'm worried about is that I'd be simply transferring the high load from the database to the file system!
If somebody could clue me in on the better approach, that would be great. In case the volume of data itself has a large effect, I've detailed some of the data I'll be storing below:
Full list of Countries (including ISO country codes)
Site options (skin, admin email, support URLs etc.)
Usergroups (including permissions)
You have to remember that reading a table from a database on a powerful server and on a fast connection is likely to be faster than reading it from disk on your local machine. The database will cache the entirety of these small, regularly accessed tables in memory.
By implementing the same functionality yourself in the file system, there is only a small possible speed up, but a huge chance to mess it up and make it slower.
It's probably best to stick with using the database.
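If you do experiment with file-based caching anyway, the serialize() idea from the question can be sketched like this (the cache path and TTL are arbitrary, and the commented query helper is hypothetical):

```php
<?php
// Return cached data if it is fresh, otherwise rebuild it via $producer
// and write it back to the cache file.

function cached(string $file, int $ttl, callable $producer)
{
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return unserialize(file_get_contents($file));
    }
    $data = $producer(); // e.g. run the SELECT against MySQL here
    file_put_contents($file, serialize($data), LOCK_EX);
    return $data;
}

// Hypothetical usage: cache the country list for an hour
// $countries = cached('/tmp/countries.cache', 3600, fn() => query_countries($pdo));
```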
Optimize your queries using the MySQL slow query log and the EXPLAIN function.
If the tables really are rarely written to, you can use MySQL's native query cache. You don't have to change anything in your code; just enable query caching in my.cnf.
Try a template engine like Smarty (smarty.net). It has its own caching system that works pretty well and will REALLY reduce server load.
You can also use Memcache, but it is only really worth it for very high-load websites. (I think Smarty will be enough.)
Databases are much better at handling large data volumes than the native file system.
Don't worry about optimizing your site to reduce server load, until you actually have a server load problem. :-)
The tables you mentioned (countries and users) will normally be cached in memory by MySQL directly, unless you are expecting many millions of records in these tables.
In case where these tables will not fit in memory, you may want to consider a general-purpose distributed memory caching system, such as memcached.
If your database is properly indexed, it will be much faster to query data from the database. If you want to speed that up, look into memcached or similar.
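A memcached read-through lookup, as suggested above, can be sketched like this (the server address and key are hypothetical; the sketch is guarded so it simply falls through to the producer when the Memcached extension is absent or the server is unreachable):

```php
<?php
// Read-through cache: try memcached first, fall back to the producer
// (e.g. the real MySQL query) on a miss.

function mc_get_or_set(string $key, int $ttl, callable $producer)
{
    static $mc = null;
    if ($mc === null && class_exists('Memcached')) {
        $mc = new Memcached();
        $mc->addServer('127.0.0.1', 11211); // hypothetical server
    }
    if ($mc instanceof Memcached) {
        $hit = $mc->get($key);
        if ($mc->getResultCode() === Memcached::RES_SUCCESS) {
            return $hit; // cache hit
        }
    }
    $data = $producer();
    if ($mc instanceof Memcached) {
        $mc->set($key, $data, $ttl); // best-effort; failures are ignored
    }
    return $data;
}
```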
Databases are exactly for this purpose: to store and provide data. The filesystem is for scripts and programming.
If you encounter load problems, consider using Memcached or another caching utility in front of the database.
You may also consider caching whole sections of your page directly in the database (e.g. a sidebar that doesn't change much, or a generated header section).
You could cache the output (flush(), ob_flush(), etc.) to a file and include that instead of doing multiple MySQL reads. Caching is definitely faster than hitting MySQL multiple times.
Reading a static file is much faster than adding overhead via PHP and MySQL processing.
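The output-buffering trick can be sketched like this (cache path and lifetime are arbitrary): start the buffer, render the expensive page fragment, then write the buffer to a file that later requests read directly:

```php
<?php
// Serve a cached copy of rendered HTML if it is fresh enough;
// otherwise render it, cache it, and return it.

function render_cached(string $cacheFile, int $ttl, callable $render): string
{
    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return file_get_contents($cacheFile);
    }
    ob_start();
    $render();                 // echoes the expensive, MySQL-backed markup
    $html = ob_get_clean();    // capture and discard the output buffer
    file_put_contents($cacheFile, $html, LOCK_EX);
    return $html;
}
```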
You need to evaluate the performance via load testing to avoid prematurely optimising.
It would be foolish, and could quite possibly increase overall load, to store data in files with serialization; databases are really good at retrieving data.
If after analysis there is a true performance hit (which I doubt unless you are talking about massive loading), then caching is a better solution.
It's more important to have a well designed system that facilitates changes as needs arise.
Here's a link to a couple of scripts that will essentially do what dusoft is talking about and cache the output buffer to a file:
http://www.addedbytes.com/articles/caching-output-in-php/
Used this way, it's more of a bolt-on-after-the-fact type of solution, but this same behavior can certainly be implemented in a more integrated fashion if considered earlier in the process. Many frameworks also have this kind of thing built in.
