I'm far from being a web development expert, so sorry in advance if I'm missing something basic:
I need to copy a table into a MySQL database using PHP. The table resides on a website I don't own, but I have permission to copy and publish it.
When I view the site manually in my browser, I have to click a link on the main page. I can't target the final destination page directly, since its URL changes all the time, but the main page URL is static and so is the link I have to click.
An example of the kind of content I need to copy from (just an example, this is not the real content):
http://www.flightstats.com/go/FlightStatus/flightStatusByAirport.do?airportCode=JFK&airportQueryType=0
Most people are going to ask what you have tried. Since you mentioned that you don't have much development experience, here are some tips on how to go about it (I have to put this as an answer so it is easier to read).
What you're going to need to do is called scraping.
Using PHP, you'd use the following functions at the very least:
file_get_contents() - this function reads the data at the given URL.
preg_match_all() - regular expressions will let you extract the data you are looking for, though some/many people will say you should go through the DOM instead.
The data returned by preg_match_all() can be stored in your MySQL table. Because the data changes so frequently, though, you might be better off just scraping that section and storing the entire table as a cache (though I have to say I have no idea what you are trying to do on your site, so I could well be wrong).
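As a rough sketch of that flow (the URL, regex pattern, table name, and credentials below are all placeholders, not the real site's markup):

    <?php
    // Sketch only: URL, regex pattern, table, and credentials are assumptions.
    $html = file_get_contents('http://www.example.com/flight-table.html');

    // Naively grab the contents of every <td> cell; the real markup will
    // dictate the actual pattern (or you'd walk the DOM instead).
    preg_match_all('/<td[^>]*>(.*?)<\/td>/s', $html, $matches);

    $pdo  = new PDO('mysql:host=localhost;dbname=flights', 'user', 'password');
    $stmt = $pdo->prepare('INSERT INTO scraped_cells (content) VALUES (?)');

    foreach ($matches[1] as $cell) {
        $stmt->execute([strip_tags(trim($cell))]); // store the plain text of each cell
    }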
I'm trying to figure out how to translate all the static text on my web pages (I'm using PHP), but I'm not really sure what the "correct" way is. This is what I've thought of so far, but maybe it's all wrong :D
1. For every static piece of text on the page, call something like getTranslation("Hello World!"), which looks up the translation in a database or in a file (XML/CSV/PHP) containing all the translations. This seems pretty bad, though, since we would have to query the database or parse the file on every page, every time it's refreshed/loaded.
2. Every time a page loads, read the database/file once and store the translations for the current language in an array, then pull translations from that array while the page is being built instead of querying the database or parsing the file again (a sketch of this option follows the list).
3. Is there some way to read the translations only once and then make them accessible to all pages? The only thing I can think of is PHP's SESSION, but storing the translations there just seems wrong.
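For illustration, option 2 might look roughly like this (the translations/ file layout and the t() helper are invented for the sketch):

    <?php
    // Sketch of option 2: load all translations for the current language once
    // per request. The file layout and helper name are made up for this example.
    $lang = isset($_GET['lang']) ? $_GET['lang'] : 'en';
    if (!in_array($lang, array('en', 'de'))) { // whitelist to avoid path injection
        $lang = 'en';
    }

    // translations/en.php simply returns array('Hello World!' => '...', ...)
    $translations = include __DIR__ . "/translations/{$lang}.php";

    function t($text) {
        global $translations;
        return isset($translations[$text]) ? $translations[$text] : $text; // fall back to the source string
    }

    echo t('Hello World!');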
So what's the "most common" or "right" way to do it?
Happy hunting!
Sounds like you need gettext. gettext is widely used and widely supported, and I'm pretty sure it's well optimized too.
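A minimal sketch, assuming PHP's gettext extension is loaded and a compiled catalogue exists at locale/de_DE/LC_MESSAGES/messages.mo (the locale and paths are placeholders):

    <?php
    // Assumes PHP's gettext extension plus a compiled .mo catalogue on disk.
    $locale = 'de_DE.UTF-8';
    putenv('LC_ALL=' . $locale);
    setlocale(LC_ALL, $locale);

    // Look up locale/de_DE/LC_MESSAGES/messages.mo relative to this script.
    bindtextdomain('messages', __DIR__ . '/locale');
    textdomain('messages');

    echo _('Hello World!'); // prints the translation, or the original if none exists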
So I'm kind of stuck here. I have a website with a pretty big database that changes constantly. Now I want to help the search engines by supplying a sitemap.xml file. Normally I would use a web service to generate it, but that's not really possible in this case.
To be honest, I have no clue where to start. How would I go about doing this? Sorry if this is too basic a question, but Google couldn't help me.
Edit: Some more info. The DB currently holds about 1k pages, and I want to grow to around 10k. I use MySQL to echo the pages from my database and .htaccess to rewrite the URLs (PHP gets the ID, etc.).
You would need to install a crawler to do it the way a web service does. The easier way is to write a PHP script and generate the sitemap XML file yourself.
Write a query that gets the links from your database and then iterate over the results to create the sitemap.
See this post for an example PHP script: How to create a sitemap using PHP & MySQL
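As a rough sketch of that approach (the table name, columns, base URL, and credentials are assumptions for illustration):

    <?php
    // Sketch: emit a sitemap from a pages table; schema and URLs are placeholders.
    $pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'password');

    $xml = new XMLWriter();
    $xml->openURI('sitemap.xml');              // write straight to the file
    $xml->startDocument('1.0', 'UTF-8');
    $xml->startElement('urlset');
    $xml->writeAttribute('xmlns', 'http://www.sitemaps.org/schemas/sitemap/0.9');

    foreach ($pdo->query('SELECT id, slug FROM pages') as $row) {
        $xml->startElement('url');
        // The pretty URL your .htaccess rewrite exposes, e.g. /page/123/my-slug
        $xml->writeElement('loc', 'http://www.example.com/page/' . $row['id'] . '/' . rawurlencode($row['slug']));
        $xml->endElement();                    // </url>
    }

    $xml->endElement();                        // </urlset>
    $xml->endDocument();
    $xml->flush();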
I'm looking for a PHP web crawler to gather all the links for a large site and tell me if the links are broken.
So far I've tried modifying an example from here myself (see my earlier question about the code). I've also tried grabbing phpDig, but the site is down. Any suggestions on how I should proceed would be great.
EDIT
The problem isn't grabbing the links; the issue is scale. I'm not sure the script I modified is sufficient to grab what could be thousands of URLs, as I tried setting the search depth to 4 and the crawler timed out in the browser. Someone else mentioned killing processes so as not to overload the server; could someone please elaborate on that?
Not a ready-to-use solution, but Simple HTML DOM parser is one of my favourite DOM parsers.
It lets you use CSS selectors to find nodes in the document, so you can easily find <a href="">'s.
With these hyperlinks you can build your own crawler and check whether the pages are still available.
You can find it here.
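A hedged sketch of a one-page link check built on it (the start URL is a placeholder, and a real crawler would recurse with depth and visited-page limits):

    <?php
    // Sketch: check every link on one page with Simple HTML DOM.
    require_once 'simple_html_dom.php'; // the library's single include file

    $html = file_get_html('http://www.example.com/'); // placeholder start URL

    foreach ($html->find('a') as $anchor) {   // CSS selector: every <a>
        $url = $anchor->href;
        if (strpos($url, 'http') !== 0) {
            continue;                         // skip relative and mailto links here
        }
        $headers = @get_headers($url);        // fetch only the response headers
        $status  = $headers ? $headers[0] : 'no response';
        if (strpos($status, '200') === false) {
            echo "Broken: $url ($status)\n";
        }
    }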
I've heard XML can be used as a database. Can anybody give me a simple tip, or a link to a tutorial, on how to store information this way? What is the best use of XML in PHP when it comes to data?
I'm going to throw my hat in, simply because I am working on a personal project that does in fact use XML as its storage mechanism. Notice I didn't call it a database. It's not a database, at least in the way most people would define one. As an article I read recently put it, XML is not data oriented, it's document oriented.
In my case, I'm building a simple OO PHP/XML resume site for my girlfriend. I am using an XML file to store the content. I chose this mainly because it's small, lightweight, interchangeable, and easy to read. Initially, I thought I could just hand the XML to her and she could fill in the blanks; XML is straightforward enough to let a layperson do that.
As I continued, I realized that it wasn't very difficult to throw in an admin-type interface where she could simply enter values in a form and update the resume that way.
Since the site is not really a web site, but a web document, XML works well here and nicely separates content.
Of course, I could have used JSON as well, and I may in fact adapt things to handle either JSON or XML, but I chose XML initially simply out of familiarity, and because (this is arguable) I assumed it would be easier for a layperson to make sense of when entering content.
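For instance, reading and updating such a content file with SimpleXML takes only a few lines (resume.xml and its element names are invented for the sketch):

    <?php
    // Sketch: read and rewrite an XML content file; file and element names
    // are made up for illustration.
    $resume = simplexml_load_file('resume.xml');

    echo $resume->name, "\n";                  // read a value out of the document

    $resume->summary = 'Updated summary text'; // what the admin form would set
    $resume->asXML('resume.xml');              // write the document back to disk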
XML is not supposed to be used as a database but as a way to transport data in an application-agnostic way. For example, say you have many RSS feeds in Google Reader and you want to add them to Thunderbird. You would export them from Google Reader in XML format and then import that XML file into Thunderbird. Both applications know how to read from and write to the XML and how to use the information (the RSS feeds) in it.
If you want to store information in a useful way that, for example, lets you organize and search through it, you will need a full-fledged database. Some good ones are MySQL and PostgreSQL. Both work well with PHP and have extensive tutorials to get started with, all easily accessible via any search engine.
You can answer this question yourself after reading this very entertaining article by one of Stack Overflow's founders:
Back to Basics by Joel Spolsky
Check out some of the responses I got to my question "Is there a simple, flat, XML-based query-able data storage solution?" on the Programmers.StackExchange site.
It's a mixed bag. SimpleXML is great with PHP, but there is a lot of FUD when it comes to XML query languages and implementations.
To add to what Fanis said: if you want something lightweight, then I strongly recommend MongoDB or SQLite.