I just started a course on PHP, and we each have to make and present a simple program of our choosing that takes one input and automatically generates a different output.
I chose to make a program that takes the text a user types into a form's text field, runs it through Google Translate, and puts the translation into the same text field, ready to edit. The problem is that when I followed the Google Translate API guidelines from the official Google webmaster documentation, it translated everything EXCEPT what is inside text fields.
Does anyone know a way to work around this?
Thanks in advance for your help.
OK, it is as easy as this, but it still remains a hack:
<?php
$ch = curl_init("http://translate.google.com/translate_a/t?client=t&sl=en&tl=de&q=Hello%20World");
curl_exec($ch);
curl_close($ch);
?>
What you need to change are the sl, tl, and q values:
sl = input language
tl = output language
q = the text to be translated
The response will look like this:
[[["Hallo Welt","Hello World","",""]],,"en",,[["Hallo Welt",[1],true,false,999,0,2,0]],[["Hello World",1,[["Hallo Welt",999,true,false]],[[0,11]],"Hello World"]],,,[["en"]],70]
It shouldn't be too difficult to extract the first array.
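Note that the response is not strict JSON (the empty slots between commas are JavaScript-style), so a JSON parser needs a small normalization step first. A minimal Python sketch of extracting that first array, assuming a response body like the one above:

```python
import json
import re

def parse_translate_response(raw):
    # The endpoint returns a JavaScript-style array with empty slots
    # (",,"), which strict JSON parsers reject; fill them with null.
    # (Naive: assumes no ",," sequences inside string values.)
    normalized = re.sub(r',(?=,)', ',null', raw)
    data = json.loads(normalized)
    # The first element is a list of [translated, original, ...] chunks.
    return ''.join(chunk[0] for chunk in data[0])

raw = '[[["Hallo Welt","Hello World","",""]],,"en",,[["Hallo Welt",[1],true,false,999,0,2,0]],[["Hello World",1,[["Hallo Welt",999,true,false]],[[0,11]],"Hello World"]],,,[["en"]],70]'
print(parse_translate_response(raw))  # Hallo Welt
```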
Related
I am working on a content-oriented website and have to implement web search. I am thinking of auto-suggest search, like this:
How can it be done?
I want suggestions for the search term as in the image; I am using the LAMP stack.
Please suggest some methods to implement this.
Here are the steps:
Write PHP code that takes the search keywords and returns results in JSON format
Create a form in HTML
On every keystroke in the search box, take the current keywords and make an AJAX request to the search code you made in step 1
Display the search response you received in JSON format
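The server side of the steps above can be sketched in a few lines. This is a Python stand-in for step 1 (the real version would query MySQL; the in-memory term list here is hypothetical):

```python
import json

# Hypothetical term list standing in for the MySQL table.
TERMS = ["php tutorial", "php mysql", "python", "javascript"]

def suggest(prefix, limit=5):
    # On every keystroke the browser sends the current input; the
    # server answers with matching terms encoded as JSON.
    matches = [t for t in TERMS if t.startswith(prefix.lower())][:limit]
    return json.dumps(matches)

print(suggest("php"))  # ["php tutorial", "php mysql"]
```

The JavaScript side then simply requests this endpoint on each keystroke and renders the returned list under the search box.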
http://www.bewebdeveloper.com/tutorial-about-autocomplete-using-php-mysql-and-jquery
To achieve this on your website, you need to know about AJAX and database access in PHP or another server-side language. Then you can use full-text search in SQL to run the query. So:
PHP mysqli
AJAX
Full-text search (MATCH ... AGAINST)
I'm looking for a way to get a user's default language by country. For example, I have Windows in English, but I would still like to get my country's two-letter language code ("cs").
You can see an example of what I want in the source code of http://search.conduit.com/ (which uses AutocompletePlus). This is what I see:
window.language = "en-us";
window.countryCode = "cz";
window.suggestBaseUrl = "http://api.autocompleteplus.com/?q=UCM_SEARCH_TERM&l=cs&c=cz&callback=acp_new";
You can see that the API URL contains "l=cs&c=cz". How did they get this information? I would like to do the same thing; I use the same AutocompletePlus method and just need a way to generate l=(the user's true language)&c=(country code). Performance is important as well, since these are auto-suggestions for my website's search.
This is Ed from AutoComplete+. Getting the user country is typically done when using our API through server side implementations. There are however some open APIs that can assist you. Regardless, you can use our autocomplete feed without the user country. Feel free to contact us directly for further info at http://www.autocompleteplus.com
Thanks,
--ed
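One readily available signal for the language part is the browser's Accept-Language request header, which you can read server-side without any third-party API (the country code, by contrast, usually needs an IP-geolocation service). A minimal Python sketch of picking the top-ranked tag from that header:

```python
def primary_language(accept_language):
    # Parse a header such as "cs,en-US;q=0.8,en;q=0.6" and return the
    # tag with the highest quality value (a missing q defaults to 1.0).
    ranked = []
    for part in accept_language.split(","):
        pieces = part.strip().split(";")
        tag, q = pieces[0].strip(), 1.0
        for param in pieces[1:]:
            param = param.strip()
            if param.startswith("q="):
                q = float(param[2:])
        ranked.append((q, tag))
    return max(ranked)[1]

print(primary_language("cs,en-US;q=0.8,en;q=0.6"))  # cs
```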
I want to display a large amount of text using a PHP echo command. I have the data stored in a MySQL text field. What I want to achieve is that the data should be displayed in the same form in which I stored it in the text field.
for example:
As entered in the MySQL table via its interface:
One reason people lie is to achieve personal power.
Achieving personal power is helpful for someone who pretends to be more confident than he really is. For example, one of my friends threw a party at his house last month. He asked me to come to his party and bring a date.
Although this lie helped me at the time, since then it has made me look down on myself.
Should be displayed exactly as the above rather than:
One reason people lie is to achieve personal power.
Achieving personal power is helpful for someone who pretends to be more confident than he really is. For example, one of my friends threw a party at his house last month. He asked me to come to his party and bring a date.
Although this lie helped me at the time, since then it has made me look down on myself.
Any ideas/tips on how this can be achieved?
I know that I can manually insert HTML tags between the text for formatting, but I don't want to do that manually. Is there any way around it?
nl2br($foo) will automatically add a <br> tag wherever there is a line break in $foo. You can echo nl2br($foo);.
As an alternative, try the <pre> tag: <pre><?php echo $foo; ?></pre>. You may need more styling, but it will preserve whitespace like your line breaks.
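For illustration, the behavior of PHP's nl2br() can be mimicked in a few lines of Python (a sketch, not the real implementation):

```python
import re

def nl2br(text):
    # Insert <br> before each line break (\r\n, \r, or \n), keeping
    # the break itself so the generated HTML source stays readable.
    return re.sub(r'(\r\n|\r|\n)', r'<br>\1', text)

print(nl2br("line one\nline two"))  # line one<br> / line two
```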
My solution is:
I'm using GWT TextArea textAreaWidget widget.
Before inserting the TextArea string into the MySQL table, I replace all line-break and tab characters:
- new line
String toInsert = textAreaWidget.getText().replaceAll(Character.toString((char) 10), "\n\r");
- tab
String toInsert = textAreaWidget.getText().replaceAll(Character.toString((char) 9), "\t");
Example:
http://www.tutorialspoint.com/gwt/gwt_textarea_widget.htm
I am currently working on a project where I need to convert text from English to Japanese on a button click event. The text is in a div, like this:
<div id="sampletext"> here is the text </div>
<div id="normaltext"> here is the text </div>
The text comes from a database. How can I convert this text easily?
Assuming that you have both the English and the Japanese version in the database, you can do two things:
Use AJAX to load the correct text from the database and replace the contents of the div. There are tons and tons of tutorials on the internet about AJAX content replacement.
Put both languages on the website and hide one using CSS display:none. Then use some JavaScript to hide/display the correct div when a button is clicked.
The first is technically more complex but keeps your page size small. The second one is very easy to do, but your page size is larger because you need to send both languages.
If the div is small and there is only one or two of these on the page, I recommend number two, the CSS technique. If the div is large (i.e. a complete article) or there are many of them then use the first method.
If you mean translating the text, you cannot do it easily. To get some idea of the best attempts that software can make at translating natural languages, go to Google Translate or Babelfish. It's not that good, but it's sometimes an intelligible starting point.
If you just mean setting the language attribute on an element, then assign a new language code to the lang property of the div element object.
document.getElementById("normaltext").lang = "en-US";
The language code for Japanese is ja (or ja-JP with a region subtag).
Assuming your literals have an ID in your database, you could put that ID as a class on your div. Then with jQuery fetch the ID, send it to your AJAX back-end, and fetch the translated one.
First, if you have the texts in a database, it really doesn't matter whether you render them in divs, tables, or whatever.
You need a PHP API for some translation service. Here is just a sketch that might give you some ideas (english_to_japanese() stands for a wrapper around such a service):
<?php
$textArray = getTextForThisPage();
?>
...
<?php echo english_to_japanese($textArray["text1"]); ?>
...
I'm still stuck on my problem of trying to parse articles from Wikipedia. Actually, I wish to parse the infobox section of Wikipedia articles, i.e. my application has references to countries, and on each country page I would like to show the infobox from the corresponding Wikipedia article for that country. I'm using PHP here - I would greatly appreciate any code snippets or advice on what I should be doing.
Thanks again.
EDIT
Well, I have a db table with the names of countries, and I have a script that takes a country and shows its details. I would like to grab the infobox - the blue box with all the country details, images, etc. - exactly as it appears on Wikipedia and show it on my page. I would also settle for a script that just downloads the infobox information to a local or remote system which I could access myself later on. I mean, I'm open to ideas here - except that the end result I want is to see the infobox on my page - of course with a little "Content by Wikipedia" link at the bottom :)
EDIT
I think I found what I was looking for on http://infochimps.org - they have loads of datasets in, I think, YAML format. I can use this information straight up as it is, but I would need a way to constantly update it from Wikipedia now and then, although I believe infoboxes rarely change, especially for countries, unless some nation decides to change its capital city or so.
I'd use the Wikipedia (MediaWiki) API. You can get data back in JSON, XML, PHP native format, and others. You'll then still need to parse the returned information to extract and format the info you want, but the infobox start, stop, and information types are clear.
Run your query with rvsection=0, as this gets you the material before the first section break, including the infobox. Then you'll need to parse the infobox content, which shouldn't be too hard. See en.wikipedia.org/w/api.php for the formal Wikipedia API documentation, and www.mediawiki.org/wiki/API for the manual.
Run, for example, the query: http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=xmlfm&titles=fortran&rvsection=0
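A query URL like that one can be assembled programmatically, which keeps the parameters readable and correctly escaped. A small Python sketch (the title "Fortran" is just an example):

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def infobox_query_url(title):
    # rvsection=0 limits the response to the lead section, which is
    # where the infobox template lives.
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "format": "json",
        "rvsection": "0",
        "titles": title,
    }
    return API + "?" + urlencode(params)

print(infobox_query_url("Fortran"))
```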
I suggest you use DBPedia instead which has already done the work of turning the data in wikipedia into usable, linkable, open forms.
It depends what route you want to go. Here are some possibilities:
Install MediaWiki with appropriate modifications. It is, after all, a PHP app designed precisely to parse wikitext...
Download the static HTML version, and parse out the parts you want.
Use the Wikipedia API with appropriate caching.
DO NOT just hit the latest version of the live page and redo the parsing every time your app wants the box. This is a huge waste of resources for both you and Wikimedia.
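The caching point can be sketched with a simple time-stamped dictionary (a toy sketch; a real app might use memcached, APC, or the filesystem instead):

```python
import time

CACHE = {}
TTL = 24 * 3600  # re-fetch a page at most once a day

def cached_fetch(url, fetch):
    # fetch is any callable that does the real HTTP request; its
    # result is reused until it is TTL seconds old.
    entry = CACHE.get(url)
    if entry and time.time() - entry[0] < TTL:
        return entry[1]
    value = fetch(url)
    CACHE[url] = (time.time(), value)
    return value
```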
There are a number of semantic data providers from which you can extract structured data instead of trying to parse it manually:
DbPedia - as already mentioned, provides a SPARQL endpoint which can be used for data queries. There are a number of libraries available for multiple platforms, including PHP.
Freebase - another Creative Commons data provider. The initial dataset is based on parsed Wikipedia data, but some information is taken from other sources. The dataset can be edited by anyone and, in contrast to Wikipedia, you can add your own data into your own namespace using a custom-defined schema. It uses its own query language called MQL, which is based on JSON. The data has WebID links back to the corresponding Wikipedia articles. Freebase also provides a number of downloadable data dumps and has a number of client libraries, including PHP.
Geonames - a database of geographical locations. It has an API which provides country and region information for given coordinates, as well as nearby locations (e.g. city, railway station, etc.).
OpenStreetMap - a community-built map of the world. It has an API allowing you to query for objects by location and type.
Wikimapia API - another location service.
To load the parsed first section, simply add this parameter to the end of the API URL:
rvparse
Like this:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=xmlfm&titles=fortran&rvsection=0&rvparse
Then parse the HTML to get the infobox table (using a regex):
$url = "http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&titles=Niger&rvsection=0&rvparse";
// Decode the API response and take the first (and only) page entry.
$data = json_decode(file_get_contents($url), true);
$data = current($data['query']['pages']);
// The infobox is the first <table> in the parsed lead section.
$regex = '#<\s*?table\b[^>]*>(.*)</table\b[^>]*>#s';
preg_match($regex, $data["revisions"][0]['*'], $matches);
echo($matches[0]);
If you want to parse all the articles at one time, Wikipedia makes all articles available in XML format:
http://en.wikipedia.org/wiki/Wikipedia_database
Otherwise you can screen-scrape individual articles, e.g.
To update this a bit: a lot of the data in Wikipedia infoboxes is now taken from Wikidata, which is a free database of structured information. See the data page for Germany for example, and https://www.wikidata.org/wiki/Wikidata:Data_access for information on how to access the data programmatically.
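For instance, each Wikidata item's full structured record is served as JSON at a stable Special:EntityData URL (Q183 is the item for Germany). A minimal Python sketch of building that URL:

```python
def entity_data_url(qid):
    # Wikidata exposes every item's structured record as plain JSON
    # at a stable, cacheable URL.
    return "https://www.wikidata.org/wiki/Special:EntityData/%s.json" % qid

print(entity_data_url("Q183"))  # Germany's item
```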
import re

import requests
from bs4 import BeautifulSoup

def extract_infobox(term):
    url = "https://en.wikipedia.org/wiki/" + term
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'lxml')
    tbl = soup.find("table", {"class": "infobox"})
    if not tbl:
        return {}
    list_of_table_rows = tbl.findAll('tr')
    info = {}
    for tr in list_of_table_rows:
        th = tr.find("th")
        td = tr.find("td")
        if th is not None and td is not None:
            innerText = ''
            for elem in td.recursiveChildGenerator():
                if isinstance(elem, str):
                    # remove references like [1]
                    clean = re.sub(r"([\[]).*?([\]])", r"\g<1>\g<2>", elem.strip())
                    # add a simple space after removing references for word separation
                    innerText += clean.replace('[]', '') + ' '
                elif elem.name == 'br':
                    innerText += '\n'
            info[th.text] = innerText
    return info
I suggest performing a web request against Wikipedia. From there you will have the page, and you can simply parse or query out the data that you need using a regex, a character crawl, or some other technique you are familiar with. Essentially a screen scrape!
EDIT - I would add to this answer that you can use HtmlAgilityPack for those in C# land. For PHP, Simple HTML DOM looks like the equivalent. Having said that, it looks like Wikipedia has a more-than-adequate API. This question probably answers your needs best:
Is there a Wikipedia API?