For example, say a user wanted to 'add a place' to my database. How could I create a page almost instantly with that place's name in the domain, e.g. www.mydomain.com/placename?
I understand it might be a complex solution, so if it is too complex to explain, could you please point me in the right direction of what I should be researching?
Create functionality for "pretty URLs" in PHP. Read more about that here: http://www.roscripts.com/Pretty_URLs_-_a_guide_to_URL_rewriting-168.html
Create parsing functionality for the URLs, so that "/placename" is recognised as the page "placename"
Create a database structure for pages with the page id, title, content, the URL slug, etc.
Create functionality to fetch the right page from the database according to the matching URL slug.
Create functionality to render the retrieved information (a rough sketch follows below).
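To make that concrete, here is a very rough sketch of the parsing, lookup and rendering steps in one common pattern. The file, table and column names are assumptions, and the .htaccess rule in the comment assumes mod_rewrite is available:

<?php
// index.php -- assumes an .htaccess rule like:
//   RewriteEngine On
//   RewriteCond %{REQUEST_FILENAME} !-f
//   RewriteRule ^(.*)$ index.php [L]

// Recognise "/placename" as the slug "placename"
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$slug = trim($path, '/');

// A pages table (id, title, content, slug) and a lookup by slug
$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$stmt = $pdo->prepare('SELECT title, content FROM pages WHERE slug = ? LIMIT 1');
$stmt->execute([$slug]);
$page = $stmt->fetch(PDO::FETCH_ASSOC);

// Render the retrieved information
if ($page) {
    echo '<h1>' . htmlspecialchars($page['title']) . '</h1>';
    echo nl2br(htmlspecialchars($page['content']));
} else {
    http_response_code(404);
    echo 'Not found';
}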
If I understood you right, that's one approach to what you want to do.
I'm assuming you're using Apache. If so, create a rule using mod_rewrite that forwards requests for /place/placename to /place.php?name=placename. Then write the place.php script, which will pull the user page from the database and display it in the appropriate fashion.
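As a minimal sketch of that approach (the rewrite rule, script name and output below are illustrative only, not a tested setup):

<?php
// place.php - a sketch only
//
// .htaccess (assuming mod_rewrite is enabled):
//   RewriteEngine On
//   RewriteRule ^place/([^/]+)/?$ place.php?name=$1 [L]

$name = isset($_GET['name']) ? $_GET['name'] : '';

// ...look $name up in your database here and build the page from the result...

echo 'Place requested: ' . htmlspecialchars($name);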
That's one way to do it - there are others.
First of all, try to understand mod_rewrite.
You could "mask" a GET URL into a much nicer format.
Start here : http://www.elated.com/articles/mod-rewrite-tutorial-for-absolute-beginners/
Then google on and get yourself familiar with all the possibilities.
After that, make sure the GET variable is unique in your database; to be absolutely sure, use a unique ID.
Example:
domain.com/PLACEID/PLACENAME/
mod_rewrite could then translate this for your PHP script into:
domain.com/place.php?VAR=PLACEID&VAR2=PLACENAME
Look up the user/place data by the PLACEID.
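A rough sketch of that lookup (the script name and variable handling are assumptions; an .htaccess rule such as RewriteRule ^([0-9]+)/([^/]+)/?$ place.php?VAR=$1&VAR2=$2 [L] would feed it):

<?php
// place.php - sketch only; VAR and VAR2 match the rewritten query string above

$placeId   = isset($_GET['VAR'])  ? (int) $_GET['VAR'] : 0;  // the unique ID drives the lookup
$placeName = isset($_GET['VAR2']) ? $_GET['VAR2'] : '';      // cosmetic, for display/SEO only

// e.g. SELECT * FROM places WHERE id = :placeId  (use a prepared statement, never raw input)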
Good luck
I have created a custom page in WordPress which presents different data based on the query string. So my query string right now looks something like
http://example.com/extcat/?uid=15&src=blog
This is not getting picked up properly either by Google's crawler or by tracking software. They all track it as one single page
http://example.com/extcat
without the uid.
What I want is to rewrite the above URL into a format like one of these:
http://example.com/extcat/uid/15/src/blog
http://example.com/extcat/15/blog
http://example.com/extcat/15/?src=blog
http://example.com/extcat/15/
I don't care which format I use.
I tried using both .htaccess and the WordPress API. Nothing seems to work.
Here is an in-depth article on how to accomplish what you are doing. Basically, you'll need to register your custom query variables so WP can do its magic, using add_rewrite_rule(). It's a long article, but it will provide a better answer than I can write. Plus, you'll learn a whole lot!
https://premium.wpmudev.org/blog/building-customized-urls-wordpress/
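As a rough illustration of what the article walks through: add_rewrite_rule() and the query_vars filter are real WordPress hooks, but the regex, variable names and the assumption that your custom page has the slug "extcat" are specific to your example and would need adjusting.

<?php
// In the theme's functions.php (or a small plugin)

// Map /extcat/15/blog onto the existing extcat page with query vars uid and src
function extcat_rewrite_rule() {
    add_rewrite_rule(
        '^extcat/([0-9]+)/([^/]+)/?$',
        'index.php?pagename=extcat&uid=$matches[1]&src=$matches[2]',
        'top'
    );
}
add_action('init', 'extcat_rewrite_rule');

// Register uid and src so WordPress does not strip them
function extcat_query_vars($vars) {
    $vars[] = 'uid';
    $vars[] = 'src';
    return $vars;
}
add_filter('query_vars', 'extcat_query_vars');

// In the page template, read them with get_query_var('uid') / get_query_var('src').
// Remember to flush permalinks once (Settings > Permalinks) after adding the rule.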
I want to search a website programmatically using PHP, just as we would search it manually: enter a query in the search box, press search, and the results come out.
Suppose I want to search this website by the product names or model numbers that are stored in my CSV file.
If a product number or model number matches the website's data, then the result page should be displayed.
I looked at the questions below but was not able to implement them:
Creating a 'robot' to fill form with some pages in
Autofill a form of another website and send it
Please let me know how we can do this in PHP.
Thanks
You want to create a “crawler” for websites.
There are some things to consider first:
Your code will never be generic. Each site has its own structure and you cannot assume anything (example: Craigslist "encodes" emails with a simple method).
You need to select an objective (emails? item information? links?).
PHP is by far one of the worst languages to do that.
I'd suggest using C# and the Html Agility Pack library. It allows you to parse HTML pages as XML documents (so you can use XPath expressions and more to retrieve information).
It surely can be done in PHP, but I think it will take at least 10x the time in PHP compared to C#.
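If you do stay in PHP, here is a very rough sketch. It assumes the target site's search form submits a plain GET parameter (called q here, which you would have to confirm by inspecting the form), and that allow_url_fopen is enabled; the URL and CSV layout are placeholders.

<?php
// search_site.php - sketch only, no error handling

$rows = array_map('str_getcsv', file('models.csv'));   // model numbers from your CSV

foreach ($rows as $row) {
    $model = trim($row[0]);

    // Submit the site's search form the same way the browser would (GET assumed)
    $url  = 'http://www.example.com/search?q=' . urlencode($model);
    $html = file_get_contents($url);   // cURL would also work here

    // Crude check: does the result page mention the model number?
    if ($html !== false && stripos($html, $model) !== false) {
        echo "Found results for $model: $url\n";
    }
}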
Currently I have URLs in this format:
http://www.domain.com/members/username/
This is fine.
However each user may have several 'songs' associated with their account.
The URLs for the individual songs look like this:
http://www.domain.com/members/username/song/?songid=2
With the number at the end obviously referring to the ID in the MySQL database.
Using jQuery/JavaScript, the ID is collected from the URL, the database is then queried, and the relevant song/page is rendered.
I would like to change these URLs to the following format instead:
http://www.domain.com/members/username/song/songname/
But I have absolutely no idea how to go about it. I've been doing quite a bit of reading on the subject but haven't found anything quite relevant to my situation.
To further compound the challenge, song names are not always unique. For instance, if we imagine the song name 'hello', it is quite possible that another song exists in the database with the same name, albeit with a different song ID.
Given the limited information you are receiving in this question, I am quite content with more generalised answers describing the approach to take.
General info:
Apache/Nginx proxy
Backend: PHP
jQuery/Javascript front end
I don't know how you store songs in the database, but here's an idea:
Use URL rewriting to rewrite members/username/song/songname/ to song.php?user=username&song=songname. There are plenty of tutorials around, or perhaps try a URL rewrite generator tool.
In song.php, read those GET values. Do a MySQL query where the song name and the username match. Output the result.
Note: it is OBLIGATORY to ensure that a user can store only one song with a given name. Also, the owning user's name MUST be stored. Otherwise this is impossible.
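A sketch of that query. The table and column names are assumptions; the UNIQUE constraint in the comment is what enforces the "one song name per user" rule mentioned above.

<?php
// song.php - rewritten from /members/username/song/songname/
// Assumed schema:
//   CREATE TABLE songs (id INT, user_id INT, slug VARCHAR(100), ...,
//                       UNIQUE KEY user_song (user_id, slug));

$user = isset($_GET['user']) ? $_GET['user'] : '';
$song = isset($_GET['song']) ? $_GET['song'] : '';

$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$stmt = $pdo->prepare(
    'SELECT s.* FROM songs s
     JOIN members m ON m.id = s.user_id
     WHERE m.username = ? AND s.slug = ?
     LIMIT 1'
);
$stmt->execute([$user, $song]);
$songRow = $stmt->fetch(PDO::FETCH_ASSOC);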
Simple Apache rewrites, in the main httpd.conf file, or in an .htaccess file if you don't have access to the main config file, should suffice.
Here is the real case: on NewsNow.co.uk there are many links to up-to-date news from thousands of websites. An example of one news URL:
http://newsnow.co.uk/A/471722742?-19721
All the news URLs are formatted like that, but when we click one, we are brought to the real URL, for example:
http://www.abcactionnewsx.com/dpp/news/state/bla-bla
Does anyone know how to achieve this efficiently?
Store a table of 'internal' paths (the 'newsnow' URLs) and the 'destination' URLs in a database of some sort; SQLite3 would be a fine choice for smaller applications.
You could hash the 'internal' paths if lookup time for specific strings was too slow in the database you chose.
When a request comes in, look it up in the database and send back a 302 response with the 'target' URL as the new location for the resource.
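A minimal sketch of that lookup-and-redirect, assuming a links table mapping an internal numeric id to the destination URL (the script name, rewrite rule and schema are placeholders):

<?php
// out.php - e.g. rewritten from /A/471722742 as ?id=471722742

$id = isset($_GET['id']) ? (int) $_GET['id'] : 0;

$db   = new PDO('sqlite:links.db');
$stmt = $db->prepare('SELECT destination FROM links WHERE id = ?');
$stmt->execute([$id]);
$dest = $stmt->fetchColumn();

if ($dest) {
    header('Location: ' . $dest, true, 302);   // temporary redirect to the real article
} else {
    http_response_code(404);
}
exit;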
This is done using a rewrite engine that is built into common web servers like Apache or Nginx. These engines allow you to write rules that transform a URL like the first one into something that would be better understood by your PHP pages. For example, you could create rules that would turn your first link above into:
http://newsnow.co.uk/index.php?catagory=A&id=471722742&referrer=-19721
This is transparent to the user and they will only ever see the link they first typed in. You can then use the variables being passed in to perform whatever actions you desire. In this case you might want to use the variables to perform some kind of database lookup to retrieve the actual destination that you are interested in. Then it's just a question of performing a PHP redirect to the link in question.
Check out the following link for a very quick intro to Apache's rewriting capabilities (called mod_rewrite): http://www.besthostratings.com/articles/mod-rewrite-intro.html
I have an HTML site with around 100 HTML files. I want to develop a search engine for it. If the user types any word and hits search, I want to display the related content containing that keyword. Is it possible to do this without using any server-side scripting? And is it possible to implement it using jQuery or JavaScript? Please let me know if you have any ideas!
Thanks in advance.
Possible? Yes. You can download all the files via AJAX, save their contents in an array of strings, and search the array.
The performance however would be dreadful. If you need full text search, then for any decent performance you will need a database and a special fulltext search engine.
Three ways to do it:
A series of Ajax indexing requests: very slow, not recommended
Use a DB to store key terms/page references and perform a fulltext search (a sketch follows below)
Utilise off-the-shelf functionality, such as that offered by Google
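For the second option, a sketch of what the server-side piece could look like, assuming the page text has already been imported into a MySQL table with a FULLTEXT index (script, table and column names are placeholders):

<?php
// search.php?q=keyword - assumes: ALTER TABLE docs ADD FULLTEXT(title, body);

$q = isset($_GET['q']) ? $_GET['q'] : '';

$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$stmt = $pdo->prepare(
    'SELECT url, title FROM docs
     WHERE MATCH(title, body) AGAINST (? IN NATURAL LANGUAGE MODE)
     LIMIT 20'
);
$stmt->execute([$q]);

foreach ($stmt as $row) {
    echo '<a href="' . htmlspecialchars($row['url']) . '">'
       . htmlspecialchars($row['title']) . '</a><br>';
}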
The only way this can work is if you have a list of all the pages on the page you are searching from. So you could do this:
var pages = ["page1.htm", "page2.htm" /* ... */];
and so on. The problem with that is that to search for the results, the browser would need to do a GET request for every page:
for (var i = 0; i < pages.length; i++) {
    $.get(pages[i], function (result) { searchThisPage(result); });
}
Doing that 100 times would mean a long wait for the user. Another way I can think of is to have all the content served in an array:
var pages = {
    "index": "Some content for the index",
    "second_page": "Some content for the second page"
    // etc...
};
Then each page could reference this one script to get all the content, include the content for itself in its own content section, and use the rest for searching. If you have a lot of data, this would be a lot to load in one go when the user first arrives at your site.
The final option I can think of is to use the Google search API: http://code.google.com/apis/customsearch/v1/overview.html
Quite simply - no.
Client-side javascript runs in the client's browser. The client does not have any way to know about the contents of the documents within your domain. If you want to do a search, you'll need to do it server-side and then return the appropriate HTML to the client.
The only way to technically do this client-side would be to send the client all the data about all of the documents, and then get them to do the searching via some JS function. And that's ridiculously inefficient, such that there is no excuse for getting them to do so when it's easier, lighter-weight and more efficient to simply maintain a search database on the server (likely through some nicely-packaged third party library) and use that.
Some useful resources:
http://johnmc.co/llum/how-to-build-search-into-your-site-with-jquery-and-yahoo/
http://tutorialzine.com/2010/09/google-powered-site-search-ajax-jquery/
http://plugins.jquery.com/project/gss
If your site is allowing search engine indexing, then fcalderan's approach is definitely the simplest approach.
If not, it is possible to generate a text file that serves as an index of the HTML files. This would probably be only rudimentarily successful, but it is possible. You could use something like the keywording in Toby Segaran's book to build a JSON text file. Then use jQuery to load the text file, find the instances of the keywords, de-duplicate the resulting filenames, and display the results.
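If you go that route, here is a rough sketch of a one-off PHP script that builds a JSON keyword index of the HTML files for the jQuery side to load. The filenames, the minimum word length and the output name are all assumptions:

<?php
// build_index.php - run once (or whenever pages change) to produce index.json

$index = [];

foreach (glob('*.html') as $file) {
    $text  = strip_tags(file_get_contents($file));
    $words = preg_split('/[^a-z0-9]+/i', strtolower($text), -1, PREG_SPLIT_NO_EMPTY);

    foreach (array_unique($words) as $word) {
        if (strlen($word) < 3) continue;   // skip very short words
        $index[$word][] = $file;           // word => list of files containing it
    }
}

file_put_contents('index.json', json_encode($index));
// The front end can then $.getJSON('index.json') and look up the typed keyword.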