I have some trouble using PEAR HTML_Table (a Strict standards error; the bug still seems to be open).
I want to find a standard way to create HTML output that produces a table from associative arrays (the key shall go in column 1 and the value in column 2; if the value is nested, it shall make a sub-table if possible, and if not, just indent the sub-key).
Also, if possible, it would be nice to have formatting options like alternating row colors and row hover effects, but that's obviously optional.
Important: I would like "plain PHP code" rather than an extension that requires a DLL, due to update restrictions on the PHP server I use.
Any hints/tips for how to do this without crunching my own code?
There are lots of table generation classes out there.
https://github.com/search?l=PHP&q=datagrid&type=Repositories
Just to name a few:
https://github.com/donquixote/cellbrush
https://github.com/naomik/htmlgen
https://github.com/intrip/bootstrap-table-generator
DataGrid Classes for Zend Framework - http://modules.zendframework.com/?query=datagrid
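If you do end up writing a little code of your own after all, here is a minimal plain-PHP sketch of what the question describes (key in column 1, value in column 2, nested arrays rendered as sub-tables); the function name and the CSS class hook are made up:

function arrayToTable(array $data)
{
    $html = '<table class="kv-table">'; // hook for alternating-row/hover CSS
    foreach ($data as $key => $value) {
        $cell = is_array($value)
            ? arrayToTable($value)                   // nested array -> sub-table
            : htmlspecialchars((string) $value);
        $html .= '<tr><td>' . htmlspecialchars((string) $key) . '</td><td>' . $cell . '</td></tr>';
    }
    return $html . '</table>';
}

echo arrayToTable(array('name' => 'Widget', 'specs' => array('width' => 10, 'height' => 4)));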
As I've started building a project, there will be quite a few entries in the .po translation file. I use Poedit to build these.
My question is, what is the best practice for entries within this file? I was thinking, instead of referencing entries such as:
echo _('This is an entry.');
I was thinking of organizing them like:
echo _('error_pwd');
echo _('error_user_taken');
Which, once run through the translation file, would output something like:
Password incorrect. Please try again.
Username is already taken. Please try another.
So, all my translations can be organized by type, such as error_, msg_, status_, tip_, etc.
Has anyone seen it done this way, or have any suggestions on a more organized method?
In fact, it doesn't matter!
It's entirely up to you.
However, I advise you not to split translations into sections.
There's no benefit in doing so. Actually, most projects use the one-file approach for all msgid entries.
Django does, for example.
Of course, if you still want to split translations by section, you might want to take a look at domains.
From the PHP doc:
This function (textdomain()) sets the domain to search within when calls are made to gettext(), usually named after an application.
Also, as said earlier, the advantage of using a real phrase or word as the msgid (instead of an underscore or dotted-notation key) is that it stays as the default message if there's no translation for the entry.
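If you do go the domain route, here is a minimal sketch using PHP's stock gettext functions (the domain names and locale path are hypothetical):

bindtextdomain('errors', './locale');   // expects ./locale/<lang>/LC_MESSAGES/errors.mo
bindtextdomain('messages', './locale');

textdomain('errors');                    // default domain for _() / gettext()
echo _('error_pwd');

echo dgettext('messages', 'msg_welcome'); // one-off lookup in another domain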
And here are some helpful links:
Django Project - i18n, Definition
PHP textdomain function
What is bindtextdomain, textdomain in gettext?
How to determine which catalog to be used
This is a standard approach in other frameworks, e.g. Symfony/Laravel:
trans('error.validation');
But it has a downside: if you forget to translate one phrase on your site, it will show up as the raw keyword 'error.validation'.
I'm trying to figure out how to take a simple custom XML file (it's actually an EML file, but SimpleXML works with it anyway), take the tag names and the text that follows (I think SimpleXML calls them children), and put them into a MAP with key/value pairs. I've looked at some examples on this site and others about converting to arrays and such, but they all seem extremely complicated for my needs. I should note that my custom XML does not contain ANY attributes, and this conversion only needs to work with MY custom XML file, not any others, ever.
So a simple example of my eml file is here
<lesson>
<unit>4</unit>
</lesson>
So then basically what I would want is a MAP, or whatever a key/value collection is called in PHP, that would give me:
Map[0](lesson,null)
Map[1](unit,4)
It's important that I get the null values (or an empty string is OK too), so I can verify that the EML file is valid. I need to validate it with PHP, not with a namespace validator or a DTD file or however that is done. So the first key/value pair, or the root tag, HAS to be lesson; then I'll also verify that there is a unit tag, then a title tag, then at least one other type of tag, etc. I can do that easily if I can get everything into a key/value collection. Also, there are many tag names that are the same, so keys should be non-unique. The values, however, should be unique, but only per tag name: unit can only have one "4", but another tag, say imageID, could also have a "4". This is not a requirement but a nice-to-have, and I can probably figure that out if it's not simple. But if it's REALLY hard then I will skip it altogether.
I hope this makes sense.
And no, I don't think I'm allowed to use JSON. I'm sure it can be done in SimpleXML, but if it's impossible, then please provide a method to do it in JSON (assuming JSON support is included with PHP and not an extension that has to be loaded).
This is university homework, so I can't use extensions or anything else beyond what comes with the basic XAMPP package (PHP, MySQL, Apache, etc.).
Really surprised I got no votes or views or answers or anything on this. I did figure this out in the end. Oh yeah...got the tumbleweed badge for this too!
Anyway, the answer was actually quite simple. I used the simplexml_load_file function in PHP, which works with any XML-style file. So after running this,
$eml = simplexml_load_file("unit.eml");
I then did things like this
foreach ($eml->children() as $child) {
    $tag = $child->getName();    // tag name, e.g. 'unit'
    $tagInfo = (string) $child;  // text content, e.g. '4'
}
And used $tag and $tagInfo to iterate through my eml and get everything I needed.
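For completeness, here is a minimal sketch of building the key/value list the question asked for; treating the root as a pair with a null value is my reading of the desired Map[0](lesson,null) shape:

$eml = simplexml_load_file("unit.eml");
$map = array(array($eml->getName(), null)); // root tag, e.g. array('lesson', null)
foreach ($eml->children() as $child) {
    $text = trim((string) $child);
    $map[] = array($child->getName(), $text === '' ? null : $text);
}
// $map is now array(array('lesson', null), array('unit', '4'), ...)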
I find Yii a great framework, and the example website created with the yiic shell is a good starting point... however, it doesn't cover the topic of multi-language websites, unfortunately. The docs cover translating short messages, but not keeping multi-lingual content...
I'm about to start working on a website which needs to be in at least two languages, and I'm wondering what the best way is to keep the content for that...
The problem is that the content is mixed extensively with common elements (like embedded video files).
I need to avoid duplicating those common elements... So far I have used an array of arrays containing texts (usually no more than 1-2 short paragraphs); the view file then just rendered the text from the array.
Now I'd like to avoid keeping it in arrays (which requires some attention with double quotation marks and is inconvenient in general...).
So, what is the best way to keep those short paragraphs? Should I keep them in a DB like (id | msg_id | language | content) and then select them by msg_id & language? That still requires me to create some msg_ids and embed them into the view file...
Is there any recommended paradigm for which Yii has some solutions?
Thanks,
m.
Gettext is good for its ease of translation, but the default PHP implementation is not thread-safe. Yii therefore uses its own unpacker, which dramatically increases processing time compared to PHP arrays.
Since I was setting up a high-volume, high-transaction site, the performance hit was not acceptable. Also, by using APC, we could cache the PHP translation, further increasing performance.
My approach was therefore to use PHP arrays but to keep the translations in a DB for ease of translation, generating the needed files when translations are changed.
The DB is similar to this :
TABLE Message // stores source language, updated by script
    id INT UNSIGNED
    category VARCHAR(20) // first argument to Yii::t()
    key TEXT // second argument to Yii::t()
    occurences TINYINT UNSIGNED // number of times found in sources

TABLE MessageTranslation // stores target language, translated by human
    id INT UNSIGNED
    language VARCHAR(3) // ISO 639-1 or 639-3, as used by Yii
    messageId INT UNSIGNED // foreign key on Message table
    value TEXT
    version VARCHAR(15)
    creationTime TIMESTAMP DEFAULT NOW()
    lastModifiedTime TIMESTAMP DEFAULT NULL
    lastModifiedUserId INT UNSIGNED
I then modified the yiic CLI tool's 'message' command to dump the collected strings into the DB.
http://www.yiiframework.com/wiki/41/how-to-extend-yiic-shell-commands/
Once in the DB, a simple CMS can be set up to give translators an easy way to translate, while at the same time providing versioning information, reverting to older versions, checking the quality of translations, etc.
Another script, also modified from yiic, then takes the DB info and compiles it into PHP arrays. Basically, it JOINs the two tables for each language, then builds an array using 'Message'.'key' and 'MessageTranslation'.'value' as (what else?) key => value... saving to a file named after 'Message'.'category' in the folder for that language.
The generated files are loaded as normal by Yii CPhpMessageSource.
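For illustration, a minimal sketch of that compile step (assuming Yii 1 and the two tables above; the paths follow CPhpMessageSource's messages/<language>/<category>.php layout):

$rows = Yii::app()->db->createCommand(
    'SELECT m.category, m.key AS msgKey, t.value
     FROM Message m
     JOIN MessageTranslation t ON t.messageId = m.id
     WHERE t.language = :lang'
)->queryAll(true, array(':lang' => 'de'));

$files = array();
foreach ($rows as $row) {
    $files[$row['category']][$row['msgKey']] = $row['value'];
}
foreach ($files as $category => $messages) {
    file_put_contents(
        "messages/de/{$category}.php",
        "<?php\nreturn " . var_export($messages, true) . ";\n"
    );
}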
For images, this was as simple as placing them in folders with the proper language and getting the app language when linking.
<img src="/images/<?php echo Yii::app()->language; ?>/help_button.png">
Note that in real life, I wrote a little helper method to strip the country off the language string: 'en_us' should become 'en'.
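That helper can be as small as this (the name is made up):

function baseLanguage($language)
{
    return strtok($language, '_'); // 'en_us' -> 'en'
}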
A Yii application by default uses the Yii::t() method for translating text messages, and there are 3 different types of message sources:
CPhpMessageSource : Translations are stored as key-value pairs in a PHP array.
CGettextMessageSource : Translations are stored as GNU Gettext files. (PO Files)
CDbMessageSource : Message translations are stored in database tables.
If I'm not misunderstanding, you are using classic arrays for translations. I recommend using Gettext and PO files with Yii for translation operations.
You can find a lot of information about translation and i18n with Yii on this official documentation page.
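For reference, a minimal sketch of switching the message source to Gettext in Yii 1's protected/config/main.php ('messages' is the standard component ID; the basePath shown is an assumption):

'components' => array(
    'messages' => array(
        'class' => 'CGettextMessageSource',
        'basePath' => 'protected/messages', // expects <language>/<category>.mo files
    ),
),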
Well, I think what is in question here is how to translate static text/messages on the page, and Yii solves that pretty well using Yii::t(); Edigu's answer covers it.
I checked out the post on FlexicaCMS about translating dynamic content in the database; ultimately, that will be the next step after you solve the static text/message problem, and it is a truly good approach using Yii's behaviors. I'm not sure whether the FlexicaCMS authors are too ambitious in supporting translation that way, as it would make content translation a worry-free thing - really great.
One thing they don't mention is the URL of the translated page, for example your.site.com/fr/translated_article_title.html. I mean the URL must have a /language_id/ part in it so it can help with SEO.
In both Yii 1 and Yii 2, the Gettext message source (yii\i18n\GettextMessageSource in Yii 2) doesn't use Yii's perfectly good cache engine (look at the source) to speed up loading PO or MO files. Loading these files in pure PHP code (which includes yii\i18n\GettextMessageSource) is not recommended; it's much slower than a PHP array lookup:
http://mel.melaxis.com/devblog/2006/04/10/benchmarking-php-localization-is-gettext-fast-enough/
However, the PHP gettext extension for MO files is a bit faster than a translation PHP array, because it uses a cache; the downside is that every change to an MO file requires a server restart.
I think the best solution would be to extend yii\i18n\GettextMessageSource in your own code library, add caching to enhance its performance, and use your extended version as the message component, overriding:
protected function loadMessages($category, $language);
Just don't check the MO file's modification date on every load to compare against the cache; instead, clear the cache whenever the MO or PO files change (this can be scheduled).
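A minimal sketch of that extension (assuming Yii 2 with a configured cache component; loadMessages() is the method shown above, while the class name and duration are made up):

class CachedGettextMessageSource extends \yii\i18n\GettextMessageSource
{
    public $cacheDuration = 3600; // seconds; clear the cache on deploy instead of checking file dates

    protected function loadMessages($category, $language)
    {
        $key = array(__CLASS__, $category, $language);
        $messages = \Yii::$app->cache->get($key);
        if ($messages === false) {
            $messages = parent::loadMessages($category, $language);
            \Yii::$app->cache->set($key, $messages, $this->cacheDuration);
        }
        return $messages;
    }
}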
I'm developing a website in PHP and I'd like to let the user switch easily between German and English.
So, a translation policy must be considered:
Should I store the data and its translations in a database table ((1, "Hello", "Hallo"), (2, "Good morning", "Guten Tag"), etc.)?
Or should I use ".mo" files to store them?
Which way is the best?
What are the pros and the cons?
Having just tackled this myself recently (12 languages and counting) on a production system, and having run into some major performance issues along the way, I would suggest a hybrid system.
1) Store the language strings and translations in a database--this will make it easy to interact with/update/remove items plus will be part of your normal backup routines.
2) Cache the languages into flat files on the server and draw those out as necessary to display on the page.
The benefits here are many - mostly, it is fast! I am not dealing with connection overhead for MySQL or any traffic slowdowns during the transfer (especially important if your DB server is not localhost).
This will also make it very easy to use. Store the data from your database in the file as a serialized PHP array, and GZIP the contents of the file to shrink the storage overhead (this also makes it faster in my benchmarking).
Example:
$lang = array(
    'hello' => 'Hallo',
    'good_morning' => 'Guten Tag',
    'logout_message' => 'We are sorry to see you go, come again!'
);
$storage_lang = gzcompress( serialize( $lang ) );
// WRITE THIS INTO A FILE SUCH AS 'my_page.de'
When a user loads your system for the first time, do a file_exists('/files/languages/my_page.de'). If the file exists, load the content, un-gzip it, un-serialize it, and it is ready to go.
Example
$file_contents = file_get_contents( 'my_page.de' );
$lang = unserialize( gzuncompress( $file_contents ) );
As you can see, you can make the caching specific to each page in the system, keeping the overhead even smaller, and use the file extension to denote the language... (my_page.en, my_page.de, my_page.fr)
If the file DOESN'T exist, then query the DB, build your array, serialize it, gzip it, and write the missing file - at that point you have just constructed the array the page needed, so continue on to display the page and everyone is happy.
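Putting the two halves together, a minimal sketch of that read-through cache (the paths, table, and column names are hypothetical; the page/global columns anticipate the table layout shown under "Extension" below):

function load_language($page, $lang)
{
    $cacheFile = "/files/languages/{$page}.{$lang}";
    if (file_exists($cacheFile)) {
        return unserialize(gzuncompress(file_get_contents($cacheFile)));
    }
    // Cache miss: build the array from the DB, write the file, return it.
    $pdo = new PDO('mysql:host=localhost;dbname=myapp;charset=utf8', 'user', 'pass');
    $stmt = $pdo->prepare(
        'SELECT string, value FROM translations
         WHERE language = ? AND (page = ? OR global = 1)'
    );
    $stmt->execute(array($lang, $page));
    $texts = $stmt->fetchAll(PDO::FETCH_KEY_PAIR); // string => value
    file_put_contents($cacheFile, gzcompress(serialize($texts)));
    return $texts;
}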
Finally, this allows you to build in update pages accessible to non-programmers but you also control when changes appear by deciding when to remove cache files so they can be rebuilt by the system.
Warnings and Pitfalls
When I kept everything in the database directly, we hit some MAJOR slowdowns when our traffic spiked.
Trying to keep the translations in flat-file arrays only was too much trouble, because updates were painful and error-prone.
Not GZIP-compressing the contents of the cache files made the language system about 20% slower in my benchmarks.
Make sure all of your database fields containing languages are set to utf8_general_ci (or at least one of the UTF-8 collations; I find general_ci best for my use). If you don't, you will not be able to store non-Latin character sets in your database (like Chinese, Japanese, etc.).
Extension:
In response to a comment below, be sure to set your database tables up with page level language strings in mind.
id    string        page         global
1     hello         NULL         1
2     good_morning  my_page.php  0
Anything that shows up in headers or footers can have a global flag that will be queried in every cache file created, otherwise query them by page to keep your system responsive.
PHP arrays are indeed the fastest way to load translations. However, you really don't want to update these files by hand in an editor. This might work in the beginning, and for one or two languages, but when your site grows it gets really hard to maintain.
I advise you to set up a few simple tables in a database where you keep the translations, and build a simple app that lets you update them (some forms to add and update texts). As for the database: use one table to store translation variables, and another to link translations to those variables.
Example:
`text`
id    variable
1     hello
2     bye

`text_translations`
id    textId    language    translation
1     1         en          hello
2     1         de          hallo
3     2         en          bye
4     2         de          tschüss
So what you do is:
create the variable in the first table
add translations for it in the second table (in whatever language you want)
After you've updated the translations, create/update a language file for each language that you're using:
select the variables you need and their translations (tip: use English if there's no translation)
create a big array with all this stuff, e.g.:
$texts = array('hello' => 'hallo', 'bye' => 'tschüss');
write the array to a file, e.g.:
file_put_contents('de.php', serialize($texts));
in your PHP/HTML create the array from the file (based on selected language by user), e.g.:
$texts = unserialize(file_get_contents('de.php'));
in your PHP/HTML use the variables, e.g.:
<h1><?php echo $texts['hello']; ?></h1>
or if you like/enabled PHP short tags:
<p><?=$texts['bye'];?></p>
This setup is very flexible, and with a few forms to update the translations it's easy to keep your site up to date in multiple languages.
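The create/update step for a language file can then be a short script like this sketch (assuming PDO and the two tables above):

$pdo = new PDO('mysql:host=localhost;dbname=myapp;charset=utf8', 'user', 'pass');
$lang = 'de';
$stmt = $pdo->prepare(
    'SELECT t.variable, tt.translation
     FROM text t
     JOIN text_translations tt ON tt.textId = t.id
     WHERE tt.language = ?'
);
$stmt->execute(array($lang));
$texts = $stmt->fetchAll(PDO::FETCH_KEY_PAIR); // variable => translation
file_put_contents($lang . '.php', serialize($texts));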
I'd also suggest Zend Framework's Zend_Translate package.
The manual gives a good overview on How to decide which translation adapter to use. Even when not using ZF, this will give you some ideas about what is out there and what the pros and cons are.
Adapters for Zend_Translate:

Array - Use PHP arrays. Small pages; simplest usage; only for programmers.
Csv - Use comma-separated (*.csv/*.txt) files. Simple text file format; fast; possible problems with Unicode characters.
Gettext - Use binary gettext (*.mo) files. GNU standard for Linux; thread-safe; needs tools for translation.
Ini - Use simple INI (*.ini) files. Simple text file format; fast; possible problems with Unicode characters.
Tbx - Use termbase exchange (*.tbx/*.xml) files. Industry standard for inter-application terminology strings; XML format.
Tmx - Use TMX (*.tmx/*.xml) files. Industry standard for inter-application translation; XML format; human readable.
Qt - Use Qt Linguist (*.ts) files. Cross-platform application framework; XML format; human readable.
Xliff - Use XLIFF (*.xliff/*.xml) files. A simpler format than TMX but related to it; XML format; human readable.
XmlTm - Use xmltm (*.xml) files. Industry standard for XML document translation memory; XML format; human readable.
There are some factors you should consider.
Will the website be updated frequently? If yes, by whom: you or the owner? How much data/information are you dealing with? And also... are you doing this frequently (for many clients)?
I can hardly imagine that using a relational database would cause any serious speed impact unless you have VERY high traffic (several hundred thousand pageviews per day).
Should you be doing this frequently (for lots of clients), think no further: build a CMS (or use an existing one). If you really need to consider the speed impact, you can customize it so that when you are done with the website, you can export static HTML pages where possible.
If you are updating frequently, the same as above applies.
If the client has to update (and not you), again, you need a CMS.
If you are dealing with lots of information (big and numerous articles), you need a CMS.
All in all, a CMS will help you build up your website structure fast, add content fast, and not worry that much about code, since it will be reusable.
Now, if you just need to create a small website fast, you can easily do this with hardcoded arrays and data files.
If you need to provide a web interface for adding/editing translations, then a database is a good idea.
If, however, your translations are static, I would use gettext or even plain PHP array.
Either way you can take advantage of Zend_Translate.
A small comparison, the first two from the Zend tutorial:
Plain PHP arrays: Small pages; simplest usage; only for programmers.
Gettext: GNU standard for linux; thread-safe; needs tools for translation.
Database: Dynamic; Worst performance.
I would recommend PHP arrays, they can be built around a GUI for easy access.
Realize that most people in the world, when dealing with computers, already know some common English used on computers and the internet, like About Us, Home, Send, Delete, Read More, etc. Question: do those really need to be translated?
OK, honestly, translating some of those words is actually not about being 'required'; it's all about 'style'.
Now, if it's really wanted: for the common words that never need to change, it's better to use a PHP file that outputs a language array for just the local language and English. And for content such as blog posts, news, and descriptions, use a database and save as many language translations as required. You must do it manually.
Using and relying on Google Translate? I think you have to think 1000 times. At least for this decade.
I would like to design a web app that allows me to sort, browse, and display various attributes (e.g. title, tag, description) for a collection of man pages.
Specifically, these are R documentation files within an R package that houses a collection of data sets, maintained by several people in an SVN repository. The format of these files is .Rd, which is LaTeX-like, but different.
R has functions for converting these man pages to html or pdf, but I'd like to be able to have a web interface that allows users to click on a particular keyword, and bring up a list (and brief excerpts) for those man pages that have that keyword within the \keyword{} tag.
Also, the generated html is somewhat ugly and I'd like to be able to provide my own CSS.
One obvious option is to load all the metadata I desire into a database like MySQL and design my site to run queries and fetch the appropriate data.
I'd like to avoid that to minimize upkeep for future maintainers. The number of files is small (<500) and the amount of data is small (only a couple of hundred lines per file).
My current leaning is to have a script that pulls the desired metadata from each file into a summary JSON file, then load this summary.json file in PHP, decode it, and loop through the array looking for those items whose attributes match the current query (e.g. all docs with keyword1 AND keyword2).
I was starting in that direction with the following...
$contents = file_get_contents("summary.json");
$c = json_decode($contents, true);
foreach ($c as $ind => $val) {
    // ... etc.
}
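From there, the AND-keyword filter could look like this sketch (it assumes each entry in summary.json carries a 'keywords' array; adjust the field name to your format):

$wanted = array('keyword1', 'keyword2');
$matches = array_filter($c, function ($doc) use ($wanted) {
    $keywords = isset($doc['keywords']) ? $doc['keywords'] : array();
    return count(array_diff($wanted, $keywords)) === 0; // every wanted keyword present
});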
Another idea was to write a script that would convert these .Rd files to XML. In that case, are there any lightweight frameworks that make it easy to sort and search a small collection of XML files?
I'm not sure if XQuery is overkill or whether I have time to dig into it...
I think I'm suffering from too-many-options-syndrome with all the AJAX temptations. Any help is greatly appreciated.
I'm looking for a super simple solution. How might some of you out there approach this?
My approach would be to parse the keywords (from your description I assume they have a special notation that distinguishes them from normal words/text) out of the files and store this data as a search index somewhere. It does not have to be MySQL; SQLite would surely be enough for your project.
A search would then be very simple.
Parsing the files could be automated as a post-commit hook in your Subversion repository.
Why don't you create a table SUMMARIES with a column for each of the summary's fields?
Then you could index it with a full-text index, assigning a different weight to each field.
You don't need MySQL; you can use SQLite, which has Google's full-text indexing (FTS3) built in.
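A minimal sketch of that with PHP's PDO SQLite driver (the file name, table, and columns are made up):

$db = new PDO('sqlite:manpages.db');
$db->exec('CREATE VIRTUAL TABLE IF NOT EXISTS summaries USING fts3(title, description, keywords)');

$ins = $db->prepare('INSERT INTO summaries (title, description, keywords) VALUES (?, ?, ?)');
$ins->execute(array('airquality', 'Daily air quality measurements', 'datasets environment'));

$q = $db->prepare('SELECT title FROM summaries WHERE summaries MATCH ?');
$q->execute(array('environment'));
$titles = $q->fetchAll(PDO::FETCH_COLUMN); // man pages matching the keyword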