I am currently developing a website in PHP and I decided to go with gettext to manage the translations. I set up a nice Pootle server so that I can easily manage the translations, plus a bash script, run via cron, that extracts all of the translatable strings from the PHP files and creates the .pot translation template file.
So far, so good. However, I just remembered that part of the text of the site is stored in a database. Let's call it "products" for simplicity. I want the product description, name, and a few other fields to be translatable, but it would be great if I could have a centralized way to translate them without having to create a separate interface just to translate the database entries. Since Pootle is already set up, it would be nice to be able to use that.
I thought of two solutions:
Forget using a database and use only PHP arrays
Write a script that extracts all of the values from the database and generates a file that the aforementioned bash script can scan to add the values to the .pot file, plus another script that runs just after the bash script to write the translated values back into the DB.
Neither of these solutions really seems ideal. The first would be easy to set up and easy to use with Pootle, but I lose all the flexibility that comes with using a DBMS, and I would have to import the entire array every time I want to use it. The loss of functionality isn't really that bad, because I (currently) am not performing any advanced calculations on the rows; basically just SELECTs and that's it. The second could work, but would take significantly more planning (and coding) to set up correctly.
Are there any other ways that I'm missing that would give me the flexibility of a database, but allow me to easily translate it in a centralized place along with the rest of the site, like Pootle?
You can generate .pot/.po files directly from the database and then feed them into Pootle. You would then be able to use gettext functions directly on the values returned from the database.
As an example, you can look at phpMyAdmin, where we use a similar approach to translate a structured text file.
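A minimal sketch of that route, assuming a hypothetical `products` table with `name` and `description` columns (the PDO credentials and filenames are placeholders; a real .pot would also carry more header fields):

```php
<?php
// Sketch: dump translatable DB columns into a .pot file for Pootle.
// Assumes a "products" table with "name" and "description" columns.
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');

$entries = [];
foreach ($pdo->query('SELECT name, description FROM products') as $row) {
    $entries[] = $row['name'];
    $entries[] = $row['description'];
}

// Empty msgid header, then one msgid/msgstr pair per unique string.
$pot = "msgid \"\"\nmsgstr \"\"\n\"Content-Type: text/plain; charset=UTF-8\\n\"\n\n";
foreach (array_unique($entries) as $text) {
    $pot .= 'msgid "' . addcslashes($text, "\"\\\n") . "\"\n";
    $pot .= "msgstr \"\"\n\n";
}
file_put_contents('products.pot', $pot);
```

At runtime you would then call `_()` (gettext) on the raw name and description values read from the database; because the msgids are the stored strings themselves, the lookups hit the translations Pootle produces.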
Instead of eval(), I am investigating the pros and cons of creating .php files on the fly using PHP code.
Mainly because the generated code should be available to other visitors and for a long period of time, not only for the current session. The generated .php files are created by functions dedicated to that and only that, under highly controlled conditions (no user input will ever reach those code files).
So, performance-wise, how much load is put on the web server when creating .php files for later execution via include() elsewhere, compared to updating a database record and querying the database on every visit?
The generated files will be updated (overwritten) quite frequently, but not nearly as frequently as they will be executed.
What are the other pros/cons? Does the possibility of one user overwriting the code files at the same time as others are executing them call for complicated concurrency handling? Using a mutex? Is it next to impossible to overwrite the files if visitors are constantly "viewing" (executing) them?
PS. I am not interested in alternative methods/solutions for reaching "the same" goal, like:
Cached and/or saved output buffers are out of the question as an alternative, mainly because the output from the generated PHP code is highly dynamic and context-sensitive.
Storing the code as variables in a database and creating dynamic PHP code that does what is requested based on the stored data, mainly because I don't want to use a database as the backend for this feature. I never need to search the data, or query it for aggregation, ranking, or any other data collection or manipulation.
Memcached, APC, etcetera. It's not a caching feature I want.
A stand-alone (non-PHP) server with a custom compiled binary running in memory. Not what I am looking for here, although this alternative has crossed my mind.
EDIT:
Got many questions about what "type" of code is generated. Without getting into details, I can say: it's very context-sensitive code. The code is not based on direct user input, but on input in terms of choices, positions, and flags, like "closed" objects in relation to other objects. Most code parts are related to each other in many different, but very controlled, ways (similar to linked lists, genetic cells in AI code, etcetera), so querying a database is out of the question. One code file will include one or more others, and so on.
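On the overwrite-while-executing concern: one common pattern (a sketch, not anything from the original post; paths are made up) is to write the new code to a temporary file in the same directory and rename() it over the old one. On POSIX filesystems the rename is atomic, so a request that has already include()d the old file keeps running it, and the next request picks up the new version, with no mutex needed:

```php
<?php
// Sketch: atomically replace a generated PHP file.
// rename() is atomic on the same POSIX filesystem, so readers
// see either the old file or the new one, never a half-written mix.
function replaceGeneratedFile(string $target, string $code): void
{
    $tmp = tempnam(dirname($target), 'gen_'); // same dir => same filesystem
    file_put_contents($tmp, $code);
    rename($tmp, $target); // atomic swap
}

replaceGeneratedFile('/var/www/generated/page.php', "<?php echo 'hello';\n");
```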
I do the same thing in an application. It generates static PHP code from data in a MySQL database. I store the code in memcached and use eval() to execute it. Only when something changes in the MySQL database do I regenerate the PHP. It saves an awful lot of MySQL reads.
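A rough sketch of that pattern as described (the key name and the regeneration function are invented for illustration):

```php
<?php
// Sketch: serve generated PHP from memcached, regenerating on a miss.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$code = $mc->get('generated_code');
if ($code === false) {
    $code = generateCodeFromDatabase(); // hypothetical: builds PHP source from MySQL rows
    $mc->set('generated_code', $code);
}
eval($code); // only ever runs code we generated ourselves

// When the underlying rows change, delete the key so it regenerates:
// $mc->delete('generated_code');
```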
I'm currently building a web app which displays data from .csv files to the user; the files are edited in the app and the results stored in a MySQL database.
For the next phase of the app I'm looking at implementing the functionality to write the results into **existing .DBF** files using PHP, as well as into the MySQL database.
Any help would be greatly appreciated. Thanks!
Actually, there's a third route which I should have thought of before, and it is probably better for what you want. PHP, of course, allows two or more database connections to be open at the same time. And I've just checked: PHP has an extension for dBase. You did not say what database you are actually writing to (several besides the original dBase use .dbf files), so if you have any more questions after this, state what your target database actually is. But this extension would probably work for all of them, I imagine; otherwise, check the list of database extensions for PHP at http://php.net/manual/en/refs.database.php. You would have to try it and see.
Then to give an idea on how to open two connections at once, here's a code snippet (it actually has oracle as the second db, but it shows the basic principles):
http://phplens.com/adodb/tutorial.connecting.to.multiple.databases.html
There's a fair bit of guidance and even tutorials on the web about multiple database connections from PHP, so take a look at them as well.
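For illustration, a sketch along those lines, assuming the PECL dbase extension is installed; the MySQL credentials, table, and .dbf path are placeholders, and the record array must match the column order defined in the .dbf file:

```php
<?php
// Sketch: copy rows from MySQL into an existing .dbf file.
// Requires the PECL dbase extension; field order in the record array
// must match the column order defined in the .dbf file's header.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$dbf = dbase_open('/data/results.dbf', 2); // 2 = read/write

foreach ($pdo->query('SELECT name, qty, price FROM results') as $row) {
    // Plain indexed array, one element per .dbf column, in order.
    dbase_add_record($dbf, [$row['name'], $row['qty'], $row['price']]);
}

dbase_close($dbf);
```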
This is a standard kind of situation in data migration projects: how to get data from one database to another. The answer is that you have to find out what format the target files need to be in (in this case, the format of .dbf files), then you simply collect the data from your MySQL database, rearrange it into the required format, and write a new file using PHP's file-writing functions.
I am not saying it's easy to do; I don't know the format of .dbf files (it was the format used by dBase, but has been used elsewhere as well). You not only have to know the format of the .dbf records, but there will almost certainly be header info if you are creating new files (though you say the files are pre-existing, so that shouldn't be a problem for you). But the records may also contain a small amount of header data, which you would need to work out and write for each record in the required form.
So you need to find out the exact format of .dbf files - no doubt Googling will find you info on that. But I understand even .dbf files can vary, in which case you would need to look at the structure of your existing files to resolve those differences if needed.
The alternative solution, if you don't need instant copying to the target database, is that it may have an option to import data from CSV files, which is much easier - and you have CSV files already. But presumably the order of the data fields in those files differs from the order of the fields in the target database (unless they came from the target database, but then you presumably wouldn't be trying to write them back unless they are archived records). The point I'm making, though, is that you can write the data into CSV files from the PHP program, in the field order required by your target database, then read them into the target database as a separate step. A two-stage process, in other words. This is particularly suitable for migrations where you are doing a one-off transfer to the new database.
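A minimal sketch of that two-stage idea, writing a CSV with the columns reordered to match the target database (the query and the target column order are invented for illustration):

```php
<?php
// Sketch: re-export rows as CSV in the field order the target DB expects.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$out = fopen('for_import.csv', 'w');

foreach ($pdo->query('SELECT id, name, price FROM results') as $row) {
    // Reorder the fields to match the target table's column order.
    fputcsv($out, [$row['name'], $row['price'], $row['id']]);
}
fclose($out);
// Then load for_import.csv with the target database's CSV import tool.
```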
All in all you have a challenging but interesting project!
I'm starting an Incident Tracking System for IT, and it's likely my first PHP project.
I've been designing it in my mind based on software I've seen, like vBulletin, and I'd like it to have i18n and editable styles.
So my first question goes here:
What is the best method to store these things, knowing they will likely be static? I've been thinking about reading the file content with PHP, showing it in a text editor, and when a save is made, replacing the old file (making a copy if it has never been edited before, so we keep the "original").
I think this would be considerably faster than using MySQL to store the language/style.
What about security here? Should I create an .htaccess file that asks for a password on this folder?
I know how to do a replacement with foreach, getting an array from the database and using str_replace($name, $value, $file), but if I store the language in a file, can I make an associative array out of its contents (like JSON)?
Thanks a lot, and sorry for so many questions; I'm a newbie.
This is what I'm doing in my CMS:
For each plugin/program/entity (you name it) I develop, I create a /translations folder.
I put all my translations there, named like el.txt, de.txt, uk.txt, etc., for all languages.
I store the translation data in JSON, because it's easy to store to, easy to read from, and easiest for everyone to contribute theirs.
Files can easily be UTF-8 encoded in-file without messing with databases, making it possible to read them in file mode (just JSON.parse them).
On installation of such plugins, I just loop through all the translations and put them in the database, one language per table row (e.g., a data column of TEXT datatype).
For each page render I query the database just once to fetch the row for the selected language, call json_decode() on the whole result, and put it in $_SESSION so that subsequent requests get flash-speed translated strings for the currently selected language.
The whole thing was developed with both performance and compatibility in mind.
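In code, the per-request lookup described above might look something like this (a sketch; the table, column, and key names are invented):

```php
<?php
// Sketch: load the selected language's translations once, then serve
// them from the session on subsequent requests.
session_start();

$lang = $_SESSION['lang'] ?? 'en';
if (!isset($_SESSION['translations'][$lang])) {
    $pdo = new PDO('mysql:host=localhost;dbname=cms', 'user', 'pass');
    $stmt = $pdo->prepare('SELECT data FROM translations WHERE lang = ?');
    $stmt->execute([$lang]);
    // "data" is one TEXT column holding the whole JSON blob for this language.
    $_SESSION['translations'][$lang] = json_decode($stmt->fetchColumn(), true);
}

echo $_SESSION['translations'][$lang]['welcome_message'] ?? 'welcome_message';
```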
The benefit of storing on the HDD vs. the DB is that backups won't waste as much space: once the file has been backed up, it doesn't take up tape again the next day, whereas a DB gets fully backed up every day and takes up an increasing amount of space. The downside of writing to disk is that it increases the chance of somebody uploading something malicious, and they might be clever enough to figure out how to execute it. You just need to be more careful, that's all.
Yes, use .htaccess to limit any action on a writable folder. Good job thinking ahead about that risk.
Your approach sounds like a good strategy.
Good luck.
Based on this tutorial I have built a page which functions correctly. I added a couple of dropdown boxes to the page and, based on this snippet, have been able to filter the results accordingly. So, in practice, everything is working as it should. However, my question is regarding the efficiency of the procedure. Right now, the process looks something like this:
1.) User visits the page
2.) Body onload() is called
3.) JavaScript calls a PHP script, which queries the database (based on criteria passed along via the URL) and exports the results to an XML file.
4.) The XML file is then parsed via JavaScript on the user's local machine.
For any one search there could be several thousand results (and thus, several thousand markers to place on the map). As you might have guessed, it takes a long time to place all of the markers. I have some ideas to speed it up, but wanted to touch base with experienced users to verify that my logic is sound. I'm open to any suggestions!
Idea #1: Is there a way (and would it speed things up?) to run the query once, generating an XML file via PHP that contains all possible results, store the XML file locally, and then do the filtering via JavaScript?
Idea #2: Create a cron job on the server to export the XML file to a known location. Instead of using GDownloadUrl("phpfile.php"), I would use GDownloadUrl("xmlfile.xml"), thus eliminating the need to run a new query every time the user changes the value of a drop-down box.
Idea #3: Instead of passing criteria back to the PHP file (via the URL), should I just be filtering the results via JavaScript before placing the markers on the map?
I have seen a lot of webpages that place tons and tons of markers on a Google map, and it doesn't take nearly as long as my application. What's the standard practice in a situation like this?
Thanks!
Edit: There may be a flaw in my logic: if I were to export all results to an XML file, how (other than with JavaScript) could I then filter those results?
Your logic is sound; however, I probably wouldn't do the filtering in JavaScript. If the user's computer is not very fast, performance will be adversely affected. It is better to perform the filtering server-side, based on a cached resource (XML in your case).
The database is probably the biggest bottleneck in this operation, so caching the result would most likely speed up your application significantly. You might also check that you have set up your keys (indexes) correctly to make your query as fast as possible.
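A sketch of that server-side caching idea: rebuild the XML only when the cached copy is older than some threshold, otherwise serve the file as-is (the paths, query, and ten-minute window are placeholders):

```php
<?php
// Sketch: cache the marker XML on disk and rebuild it at most
// once every 10 minutes instead of querying on every request.
$cache = __DIR__ . '/markers.xml';

if (!file_exists($cache) || time() - filemtime($cache) > 600) {
    $pdo = new PDO('mysql:host=localhost;dbname=maps', 'user', 'pass');
    $xml = new SimpleXMLElement('<markers/>');
    foreach ($pdo->query('SELECT name, lat, lng FROM markers') as $row) {
        $m = $xml->addChild('marker');
        $m->addAttribute('name', $row['name']);
        $m->addAttribute('lat', $row['lat']);
        $m->addAttribute('lng', $row['lng']);
    }
    $xml->asXML($cache); // write the fresh copy to the cache file
}

header('Content-Type: text/xml');
readfile($cache);
```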
I have a Windows program which generates PHP forms which will be filled in later.
Those PHP forms will populate a database. It looks very much like MySQL, but I can't be certain, so let's call it ODBC.
And, yes, it does have to be a Windows program.
There will also be PHP forms which query the database - examine which tables and fields it contains and then generate forms which can be used to search the database (e.g., it finds a table with fields "employee_name", etc. and generates a form which lets you search based on employee name).
Let's call that design time and run time.
At design time, some manager or IT guy or similar gets to define the nature of the database and at runtime 1) a worker fills in the form daily and 2) management can extract reports.
Here's my question: given that the database is defined at "design time" (and populated at run time), where and how is best to do so?
1) I could use an ODBC interface from the Windows program, but I am having difficulty finding something good that works with Delphi. Things like ADO and Firebird tend to expect you to already have a database and let you manipulate it, but I can find no code example of how to create a database and some tables, so ...
2) I could use DOS commands from Delphi in my Windows program. I just tried and got a response to mysql --version, but am not sure whether MySQL etc. can be driven non-interactively. That is, can I use a script file, or a very long stacked command with semicolons and returns separating the statements? E.g. 'CREATE DATABASE db; CREATE TABLE t1;'
3) Since the best way to work with databases seems to be PHP, perhaps my Windows program could spit out a PHP page which would, when run in a browser, create the database.
I have tried to make this as uncomplicated as I can, but please feel free to ask questions. It may be that there are several valid ways, but there is probably one 'better' solution in terms of ease of implementation or maintenance.
Better scratch option 3: what if the user later wants to come back and have the Windows program change the input form? It would need to update the database too.
Creating a database is usually a database administrator's task. Unless it is a local database, maybe an embedded one, the user would need to know where and how to create the database on the remote server, and may have no clue about it. Where should the database files be stored? Which disks are available? And there could be many more parameters to set (memory buffer sizes, etc.), users to be created, and so on. You also need very elevated privileges to be able to create a database; that's not something you give to average users or applications.
Thus you usually ask the database administrator to create your database/schema; he will give you the credentials you need to connect, and then your application (or its setup) will create and initialize the needed objects (tables, etc.). Creating tables (and other objects) is usually as simple as running "CREATE TABLE ..." statements. Just remember that execution takes one SQL statement at a time: if you need to run several commands, you have to send them one after another yourself, although there are Delphi components which can split a script into statements and run them one after another.
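For instance, from PHP the setup step could send the statements one at a time (a sketch; the DSN, credentials, and table definitions are placeholders, and the same one-statement-at-a-time idea applies to Delphi's database components):

```php
<?php
// Sketch: initialize the schema by sending one statement at a time,
// using the credentials the DBA handed over.
$pdo = new PDO('mysql:host=dbserver;dbname=tracker', 'app_user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$statements = [
    'CREATE TABLE IF NOT EXISTS employees (
        id INT AUTO_INCREMENT PRIMARY KEY,
        employee_name VARCHAR(100) NOT NULL
    )',
    'CREATE TABLE IF NOT EXISTS reports (
        id INT AUTO_INCREMENT PRIMARY KEY,
        employee_id INT NOT NULL,
        filed_at DATETIME NOT NULL
    )',
];

foreach ($statements as $sql) {
    $pdo->exec($sql); // one statement per call
}
```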