PHP or JavaScript presentation library for articles

Is there a good library written in PHP or JavaScript for presenting articles? It would be cool to have LaTeX-style syntax or some similar markup language, with nice-looking styles. For example, this text:
\section{Some section}
\label{sec:label}
This is paragraph~\ref{sec:label}.
It would generate HTML code like:
<h3>1. Some section</h3>
This is paragraph 1.

Use some regular expressions. Make something similar to a BBCode parser or Markdown (as on Stack Overflow).
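For the \section/\label/\ref subset shown in the question, a minimal two-pass regex sketch could look like the following (latexishToHtml is a hypothetical name; real LaTeX needs a proper parser, this only handles the commands above):

<?php
// Two-pass sketch: pass 1 numbers the sections and records the labels,
// pass 2 resolves the \ref references (so forward references work too).
function latexishToHtml(string $text): string
{
    $labels  = [];
    $counter = 0;

    // Pass 1: \section{...} (optionally followed by \label{...}) -> <h3>.
    $text = preg_replace_callback(
        '/\\\\section\{([^}]*)\}\s*(?:\\\\label\{([^}]*)\})?/',
        function ($m) use (&$labels, &$counter) {
            $counter++;
            if (!empty($m[2])) {
                $labels[$m[2]] = $counter;   // remember sec:label -> 1
            }
            return '<h3>' . $counter . '. ' . htmlspecialchars($m[1]) . '</h3>';
        },
        $text
    );

    // Pass 2: "~\ref{...}" -> non-breaking space + section number.
    return preg_replace_callback(
        '/~?\\\\ref\{([^}]*)\}/',
        function ($m) use ($labels) {
            return '&nbsp;' . ($labels[$m[1]] ?? '??');
        },
        $text
    );
}

echo latexishToHtml(
    "\\section{Some section}\n\\label{sec:label}\nThis is paragraph~\\ref{sec:label}."
);
// <h3>1. Some section</h3>
// This is paragraph&nbsp;1.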

It sounds like you want to use a template engine for this.
There are many different template engines for PHP; the most popular is Smarty.
There are also many others to choose from on this list.

You can use the same system that GitHub and Stack Overflow use. It's called Markdown, and implementations exist in a variety of languages.
You can view more information here: http://daringfireball.net/projects/markdown/
and you can download the PHP libraries here: http://michelf.com/projects/php-markdown/
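Usage is minimal. With the newer PHP Markdown Lib releases from that page it looks roughly like this (older releases expose a plain Markdown() function instead, as noted in the comment):

<?php
// PHP Markdown Lib style (newer releases of the michelf.com package):
require_once 'Michelf/Markdown.inc.php';
use Michelf\Markdown;

$text = "## Some section\n\nThis is a *paragraph* with a [link](http://example.com).";
echo Markdown::defaultTransform($text);

// Older releases expose a plain function instead:
//   include_once 'markdown.php';
//   $html = Markdown($text);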

Related

Simple HTML site localization with specific language routes

I want to translate a simple HTML site into two languages. I have always opted to duplicate the site and put each copy in a "language directory", as follows:
en/page.html
es/page.html
From what I've read, the i18next library could be useful, but I would like to keep the URLs as in the previous case with only one version of the file, and I don't see a way to achieve this with that library.
Is this possible? Do I need to use Node with Express to achieve this, or a PHP solution?
Thanks in advance.
I think a PHP solution is much better than JavaScript here. For multilanguage support in PHP, check gettext. Below are a few links to help you understand it:
http://www.gnu.org/software/gettext/
https://blog.udemy.com/php-gettext/
A complete example of gettext in PHP:
http://blog.lingohub.com/2013/07/php-internationalization-with-gettext-tutorial/
Google search: "php gettext tutorial"
WordPress also uses gettext libraries and tools for i18n:
http://codex.wordpress.org/I18n_for_WordPress_Developers
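A minimal sketch of the gettext flow those tutorials describe, assuming the PHP gettext extension is enabled; the "messages" domain and the catalog path are illustrative:

<?php
// Minimal gettext flow; assumes a compiled catalog exists at
// ./locale/es_ES/LC_MESSAGES/messages.mo ("messages" and the
// paths are illustrative).
$lang = 'es_ES.UTF-8';
putenv("LC_ALL=$lang");
setlocale(LC_ALL, $lang);

bindtextdomain('messages', __DIR__ . '/locale');
bind_textdomain_codeset('messages', 'UTF-8');
textdomain('messages');

// _() is the usual alias for gettext(); the English string is the key.
echo _('Welcome to our site');   // prints the Spanish translation if found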

A secure commenting system for php

I want to implement a commenting system for my website. I looked around, and CKEditor seems to be the best WYSIWYG editor I have found. I tried its BBCode output and it works perfectly. However, if I use BBCode output, then when I want to show the comments to users I should use a reliable parser to parse the BBCode to HTML. If I use HTML output, I may need something to prevent XSS in the comments. Which way do you suggest for a simple commenting system? I have already integrated CKEditor into my system and prefer a very lightweight, simple approach without much bloat (like PEAR). Also, Stack Overflow seems pretty awesome. Is it possible to use something similar for my PHP site?
"I should use a reliable parser to parse the BBCode to HTML."
PHP has a PECL BBCode extension.
"Also, Stack Overflow seems pretty awesome. Is it possible to use something similar for my PHP site?"
SO uses Markdown, and a Markdown parser in PHP is also available.
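A minimal sketch of the PECL route (the rule-array format follows the extension's documentation); escaping the raw input first keeps stray user HTML out of the output, since BBCode markup uses square brackets:

<?php
// Whitelist of allowed tags; '' is the mandatory root element.
$rules = [
    ''  => ['type' => BBCODE_TYPE_ROOT],
    'b' => ['type' => BBCODE_TYPE_NOARG, 'open_tag' => '<b>', 'close_tag' => '</b>'],
    'i' => ['type' => BBCODE_TYPE_NOARG, 'open_tag' => '<i>', 'close_tag' => '</i>'],
];
$handle = bbcode_create($rules);

// Escape the raw text first: BBCode uses [] brackets, so htmlspecialchars()
// cannot break the markup, and any <script> a user typed is neutralized.
$comment = '[b]Hello[/b] <script>alert(1)</script>';
echo bbcode_parse($handle, htmlspecialchars($comment, ENT_QUOTES, 'UTF-8'));
// <b>Hello</b> &lt;script&gt;alert(1)&lt;/script&gt;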

PHP wiki parser for Trac-style formatting

I am creating a very simple CMS for my site, and rather than using HTML, I'd like to insert content in the same kind of wiki format that's used by the Trac project.
Do you know of any open-source PHP scripts/classes that I can grab and use for this?
Note: I am not trying to create a wiki site. I just want that formatting aspect, like how this Stack Exchange site accepts wiki markup and renders it nicely.
After doing some more research, I think I've found it.
The Forever For Now wiki-syntax-to-HTML parser is pretty much the same as the formatting on the Trac project.
~I have not looked at the code yet, but it's pretty likely to be cool (like Fonzie).~
Edit: I've now looked at the code, and it's beautiful, elegant, and does the job.
PHP Markdown might work for you.

Markup filter wanted for a public website

Developing a community site where everyone can post text, I'm looking for a markup filter with these requirements:
- Anything that is not part of the markup must be escaped (htmlspecialchars()) as it is.
- It should turn URLs automatically into links.
- It should support some form of basic markup (bold, image, URL, pre, list).
- It should have a simple parser that turns user input text into HTML.
- Content on the site is public to everyone, so XSS must not be allowed to happen.
What do you suggest? What markup language in the first place: BBCode, wiki syntax, Markdown? Are there any complete APIs with good examples?
PHP is available on the server side. If there is a WYSIWYG-like textarea in addition (like here on SO), that would be a fantastic bonus!
BBCode is old and very verbose (pretty much HTML), but both CKEditor and TinyMCE support it.
Wiki syntax is somewhat confusing to new users, and you have to override the CamelCased words.
Markdown seems to be the de facto standard of today's web applications, and Stack Overflow uses it. There is a very good PHP implementation; I'm not sure about rich-text editors, but Stack Overflow uses the WYM Editor.
Also, check out the Wikipedia entry on Lightweight Markup Languages.
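Whichever syntax you pick, the escape-first pattern covers the "escape everything that isn't markup" and XSS requirements above. A minimal sketch (filterComment and the tag whitelist are illustrative, not from any particular library):

<?php
// Escape-first filter: escape everything, then re-introduce a small
// whitelist of markup and auto-link bare URLs.
function filterComment(string $raw): string
{
    // 1. Escape all user HTML so nothing dangerous survives.
    $text = htmlspecialchars($raw, ENT_QUOTES, 'UTF-8');

    // 2. Re-enable whitelisted BBCode-style tags.
    $text = preg_replace('/\[b\](.*?)\[\/b\]/s', '<b>$1</b>', $text);
    $text = preg_replace('/\[i\](.*?)\[\/i\]/s', '<i>$1</i>', $text);

    // 3. Turn bare URLs into links (entities are already escaped).
    $text = preg_replace(
        '#\bhttps?://[^\s<]+#',
        '<a href="$0" rel="nofollow">$0</a>',
        $text
    );

    return nl2br($text);   // preserve user line breaks
}

echo filterComment("See http://example.com and [b]be bold[/b] <script>x</script>");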
I think I'm going to go with BBCode via NBBC: http://nbbc.sourceforge.net/
Great list of supported tags, auto-detects complicated links, configurable, slim implementation.
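A sketch of the basic NBBC flow, going by its documented BBCode class (check the API of the version you download; SetDetectURLs is its switch for the URL auto-linking mentioned above):

<?php
require_once 'nbbc.php';

$bbcode = new BBCode();
$bbcode->SetDetectURLs(true);   // auto-link bare URLs in the text

echo $bbcode->Parse('[b]Hello[/b], visit http://example.com and [i]enjoy[/i].');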

Text Parser with PHP, like Instapaper

I'm trying to write a text parser in PHP, like Instapaper did. What I want to do is get a webpage and parse it in text-only mode.
It's simple to get the webpage with cURL and strip the HTML tags. But every webpage has some common areas, like the header, navigation, sidebar, footer, banners, etc. I only want to get the article in text mode and exclude all the other parts. It's also simple to exclude those parts if I know the "id" or "class" info, but I'm trying to automate this process and apply it to any page, like Instapaper.
I get all the content between the <body> tags, but I don't know how to exclude the header, sidebar, or footer and get only the main article body. I have to develop some logic to extract only the main article part.
It's not important for me to find the exact code. It would be enough to understand how to exclude the unnecessary parts, as I can try to write my own PHP code. Examples in other languages would also be useful.
Thanks for helping.
You might try looking at the algorithms behind this bookmarklet, Readability. It has a decent success rate for extracting content from among all the web-page rubbish.
A friend of mine made it, which is why I'm recommending it: I know it works, and I'm aware of the many techniques he's using to parse the data. You could apply these techniques to what you're asking.
You can take a look at the source of Goose; it already does a lot of this, like Instapaper-style text extraction:
https://github.com/jiminoc/goose/wiki
Have a look at the ExtractContent code from Shuyo Nakatani.
See the original Ruby source at http://rubyforge.org/projects/extractcontent/ or a port of it to Perl at http://metacpan.org/pod/HTML::ExtractContent
You really should consider using an HTML parser for this. Gather similar pages and compare the DOM trees to find the differing nodes.
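A sketch of that idea with PHP's built-in DOMDocument: fetch two pages from the same site and keep the text blocks that appear in only one of them, since boilerplate (header, nav, footer) repeats across pages while article text does not. The URLs and the 80-character threshold here are illustrative:

<?php
function textBlocks(string $html): array
{
    $dom = new DOMDocument();
    @$dom->loadHTML($html);              // @ silences warnings on messy HTML
    $blocks = [];
    foreach ($dom->getElementsByTagName('p') as $node) {
        $text = trim($node->textContent);
        if (strlen($text) > 80) {        // skip short nav/footer strings
            $blocks[] = $text;
        }
    }
    return $blocks;
}

$pageA = file_get_contents('http://example.com/articles/1');
$pageB = file_get_contents('http://example.com/articles/2');

// Paragraphs that appear only on page A are candidates for its article body.
print_r(array_diff(textBlocks($pageA), textBlocks($pageB)));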
This article provides a comparison of different approaches; the Java library Boilerpipe was rated highly. On the Boilerpipe site you will find the author's scientific paper, which compares it to other algorithms.
Not all algorithms suit all purposes. The biggest application of such tools is simply getting the raw text for a search engine to index; the idea is that you don't want search results to be messed up by adverts. Such extraction can be destructive, meaning it won't give you "the best reading area", which is what people want from Instapaper or Readability.
