This question already has answers here:
Possible Duplicate: What's the shebang (#!) in Facebook and new Twitter URLs for?
Closed 12 years ago.
The #! usually comes straight after the domain name. I see it all the time, like in Twitter and Facebook URLs. Is it some special sort of routing?
# is the fragment separator: everything before it is handled by the server, and everything after it is handled by the client, usually in JavaScript (although the browser will also scroll the page to an anchor with the same name as the fragment).
After the # is the hash (fragment) of the location; the ! that follows is used by search engines to help index Ajax content. After that can be anything, but it is usually formatted to look like a path (hence the /). If you want to know more, read this.
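To see where the fragment lives, you can take a hashbang URL apart with PHP's parse_url; a quick sketch (the twitter.com URL below is just an example of the old scheme):

```php
<?php
// Split a hashbang-style URL into its components with parse_url.
$url = 'http://twitter.com/#!/jack/status/20';

$parts = parse_url($url);

echo $parts['scheme'];   // http
echo $parts['host'];     // twitter.com
echo $parts['path'];     // /
echo $parts['fragment']; // !/jack/status/20 -- everything after the #
```

Note that the fragment (`!/jack/status/20` here) never leaves the browser; client-side JavaScript reads it via location.hash and loads the matching content.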
This question already has answers here:
Why is the hash part of the URL not available on the server side?
(4 answers)
Closed 4 years ago.
Why does $_GET deliver different results if I call a URL with an anchor compared to one without?
example:
https://www.myurl.com/#anchor?param1=x&param2=y
if I read the GET params ($_REQUEST, $_SERVER['QUERY_STRING'], parse_url($url, PHP_URL_QUERY)),
all are empty,
but with
https://www.myurl.com/?param1=x&param2=y
everything works as expected.
Can anyone explain this to me, please?
Basically, the hash component of the page URL (the part following the # sign) is processed by the browser only; the browser never passes it to the server. This, sadly, is part of the URL standard and is the same whether you are using IE or any other browser (and, for that matter, PHP or any other server-side technology).
Check the explanation from here.
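A quick way to see why the first URL has no query string is to run both through parse_url: once the # appears, everything after it, including the ?param1=... part, belongs to the fragment:

```php
<?php
// With the ? after the #, the 'query' component is missing entirely:
$bad = parse_url('https://www.myurl.com/#anchor?param1=x&param2=y');
var_dump(isset($bad['query'])); // bool(false)
echo $bad['fragment'];          // anchor?param1=x&param2=y

// With the query string before the #, PHP can see it:
$good = parse_url('https://www.myurl.com/?param1=x&param2=y#anchor');
echo $good['query'];            // param1=x&param2=y
echo $good['fragment'];         // anchor
```

And remember: even in the second form, the server never receives #anchor; only client-side JavaScript can read it via location.hash.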
Anchors go at the end, hence the name. :)
https://www.myurl.com/?param1=x&param2=y#anchor
This question already has answers here:
How to create friendly URL in php?
(8 answers)
SEO friendly url using php [duplicate]
(5 answers)
Closed 8 years ago.
I've been learning website development recently and have picked up quite a few concepts in PHP and MySQL databases. Since I use GET and POST, I sometimes end up with URLs like http://seta.com/news.php?articleid=231. How do I make my URLs look like this instead: http://seta.com/news/today?
Maybe someone can point me to the right subject so that I can search; I don't even know what I am looking for.
This is called SEO URLs or pretty URLs. You can use the .htaccess file and regular expressions to rewrite the http://seta.com/news/today URL to http://seta.com/news.php?articleid=231 for your program, so your program can function without any modifications.
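A minimal .htaccess sketch of that rewrite (the pattern and the mapping of the slug to articleid are assumptions based on the URLs in the question; mod_rewrite must be enabled):

```apache
RewriteEngine On
# Rewrite /news/today internally to /news.php?articleid=today
# (your news.php would then look the slug up in the database)
RewriteRule ^news/([a-zA-Z0-9_-]+)/?$ news.php?articleid=$1 [L,QSA]
```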
If your program is dynamically creating these links, you will have to update those places so it outputs the new pretty URLs.
Another way is to direct all URLs to news.php and use $_SERVER['REQUEST_URI'] to extract the information from the URL. This requires modifications to your code, and the old ?articleid=123 URLs, which are handy during development, will stop working.
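A sketch of the $_SERVER['REQUEST_URI'] approach (the segment names are just illustrations; a real application would route on them):

```php
<?php
// Simulate the URI the web server would hand us for /news/today
$_SERVER['REQUEST_URI'] = '/news/today';

// Strip any query string, then split the path into non-empty segments
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$segments = array_values(array_filter(explode('/', $path), 'strlen'));

echo $segments[0]; // news
echo $segments[1]; // today
```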
By using the GET method to submit data, a question mark followed by the submitted data will be shown, like the article id, for example: http://seta.com/news.php?articleid=231
<?php $article_id = $_GET['articleid']; ?>
By using the POST method, the URL is not going to change: http://seta.com/news
<?php $article_id = $_POST['articleid']; ?>
I hope this is what you're looking for.
You need to learn some MVC PHP frameworks then.
CodeIgniter is good for a start.
A URL like http://seta.com/news/today has URI segments,
where news is the 1st segment and today is the 2nd segment.
In .htaccess, using mod_rewrite, you can remove .php from the link and create your links as you want.
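A common mod_rewrite sketch for hiding the .php extension (assumes the target file actually exists with a .php suffix):

```apache
RewriteEngine On
# If the requested name is not a real file or directory...
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# ...and adding .php yields an existing file, serve that instead
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.+)$ $1.php [L]
```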
This question already has answers here:
PHP validation/regex for URL
(21 answers)
Closed 8 years ago.
I know there are already questions about validating links. But I'm very bad with regex, and I don't know how to validate that a user input (in HTML) matches one of these URLs:
http://www.domain.com/?p=123456abcde
or
http://www.domain.com/doc/123456abcde
I guess it's like this
/^(http://)(www)((\.[A-Z0-9][A-Z0-9_-]*).com/?p=((\.[A-Z0-9][A-Z0-9_-]*)
I need the regex for the two URLs. Thanks.
This might not be a job for regexes, but for existing tools in your language of choice. Regexes are not a magic wand you wave at every problem that happens to involve strings. You probably want to use existing code that has already been written, tested, and debugged.
In PHP, use the parse_url function.
Perl: URI module.
Ruby: URI module.
.NET: 'Uri' class
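For instance, in PHP you can combine parse_url with a couple of explicit checks instead of one big regex; a sketch (the accepted query and path shapes are assumptions based on the two example URLs):

```php
<?php
// Validate the two URL shapes from the question without a monster regex.
function is_valid_doc_url($url)
{
    $p = parse_url($url);
    if ($p === false || !isset($p['scheme'], $p['host'])) {
        return false;
    }
    if ($p['scheme'] !== 'http' && $p['scheme'] !== 'https') {
        return false;
    }
    // Shape 1: http://host/?p=123456abcde
    if (isset($p['query']) && preg_match('/^p=[a-z0-9]+$/i', $p['query'])) {
        return true;
    }
    // Shape 2: http://host/doc/123456abcde
    if (isset($p['path']) && preg_match('#^/doc/[a-z0-9]+$#i', $p['path'])) {
        return true;
    }
    return false;
}

var_dump(is_valid_doc_url('http://www.domain.com/?p=123456abcde'));  // bool(true)
var_dump(is_valid_doc_url('http://www.domain.com/doc/123456abcde')); // bool(true)
var_dump(is_valid_doc_url('http://www.domain.com/other'));           // bool(false)
```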
This will match both your strings.
(http:\/\/)?(www\.)?([A-Z0-9a-z][A-Z0-9a-z_-]*)\.com\/(\?p=)?([A-Z0-9a-z][\/A-Za-z0-9_-]*)
I highly recommend using a regex checker; you can find one for (almost) every OS, and there are even some online, such as: http://regexpal.com/ or http://www.quanetic.com/Regex.
This will match any valid domain with the format you specified.
http(s)?:\/\/(www\.)?[a-zA-Z0-9-\.]+\.[a-z]{2,6}\/(\?p=|doc\/)[a-z0-9]+
Replace [a-z]{2,6} with com if you only want .com domains. See it in action here.
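A quick sketch of how you might use that pattern from PHP, wrapped in # delimiters and anchored so the whole input must match (the anchoring is my addition):

```php
<?php
// The pattern from the answer above, anchored with ^ and $.
$pattern = '#^http(s)?://(www\.)?[a-zA-Z0-9-\.]+\.[a-z]{2,6}/(\?p=|doc/)[a-z0-9]+$#';

var_dump((bool) preg_match($pattern, 'http://www.domain.com/?p=123456abcde'));  // bool(true)
var_dump((bool) preg_match($pattern, 'http://www.domain.com/doc/123456abcde')); // bool(true)
var_dump((bool) preg_match($pattern, 'ftp://www.domain.com/doc/123'));          // bool(false)
```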
This question already has answers here:
Why would a developer place a forward slash at the start of each relative path?
(4 answers)
Closed 9 years ago.
I noticed today just by chance that sometimes I write "/directory/file.extension" instead of "directory/file.extension" and that both seem to work sometimes. It seemed as though "directory/file.extension" worked every time among HTML, JavaScript, and PHP. In some cases, PHP did not like "/directory/file.extension" such as when using include.
In general is it better not to use the forward slash among HTML, JavaScript, and PHP? Does it matter for HTML and JavaScript?
I'm looking for an explanation as to why or why not more so than just a confirmation.
If a path doesn't begin with / it is a relative URL. This means that the actual pathname is determined based on the URL of the document that contains the URL. So if you have a page with URL /dir1/dir2/dir3/file.extension, and it contains a link to directory/file2.ext2, clicking on the link will go to /dir1/dir2/dir3/directory/file2.ext2. But if that same link were in a page with URL /dir1/file.extension it would go to /dir1/directory/file2.ext2.
Relative URLs are useful when you have a collection of pages that you want to move around as a unit, such as copying them from a development environment to production. As long as the relationships between all the files stay the same, the links between them will work.
If the path begins with /, it's called an absolute URL (strictly speaking, it should also contain the protocol, such as http:, and the server name //www.company.com). It will be interpreted from the server's document root, no matter where the link appears. Absolute URLs are useful for referencing files that are not part of the same collection. For instance, you might have a Javascript library that's used by pages at various levels in your document hierarchy.
This question already exists:
Closed 11 years ago.
Possible Duplicate:
How to parse HTML with PHP?
I want to write a PHP program that counts all hyperlinks of a website the user can enter.
How do I do this? Is there a library or something with which I can parse and analyze the HTML for the hyperlinks?
Thanks for your help.
Like this:
<?php
// Naive approach: count literal occurrences of '<a href=' in the page source.
$site = file_get_contents("someurl");
$links = substr_count($site, "<a href=");
print "There are {$links} links on that page.";
?>
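The substr_count trick misses links written as <A HREF, href preceded by other attributes, and so on. A more robust sketch uses DOMDocument (part of PHP's standard ext/dom), shown here on an inline HTML string:

```php
<?php
// Count <a href="..."> links by actually parsing the HTML.
$html = '<html><body>
    <a href="/one">one</a>
    <A HREF="/two">two</A>
    <a name="not-a-link">anchor only</a>
</body></html>';

$doc = new DOMDocument();
@$doc->loadHTML($html); // @ silences warnings about sloppy real-world HTML

$count = 0;
foreach ($doc->getElementsByTagName('a') as $a) {
    if ($a->hasAttribute('href')) {
        $count++;
    }
}
echo "There are {$count} links on that page."; // There are 2 links on that page.
```

For a live page you would swap the inline string for file_get_contents($url) or a cURL fetch.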
Well, we won't be able to give you a definitive answer, only pointers. I once built a search engine out of PHP, so the principles will be the same:
First of all, you need to write your script as a console script; a web script is not really appropriate, but it's all a question of taste.
You need to understand how to work with sockets in PHP and make requests; look at the PHP socket library at: http://www.php.net/manual/ref.network.php
You will need to get versed in the world of HTTP requests: learn how to make your own GET/POST requests and split the headers from the returned content.
The last part will be easy with regexps: just preg_match the content with something like "#<a[^>]+href=["']([^"']+)["']#i" (the exact expression might be wrong, I didn't test it at all, OK?)
Loop over the list of found hrefs, compare them to the already visited hrefs (remember to take wildcard GET params into account), and then repeat the process to load all the pages of a site.
It IS hard work... good luck!
You may have to use cURL to fetch the contents of the web page. Store that in a variable, then parse it for hyperlinks. You might need regular expressions for that.
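A sketch of that approach: the cURL part needs a real URL and network access, so the href extraction is demonstrated on a canned string (the regex is a simple one and will miss unusual markup):

```php
<?php
// Fetch a page body with cURL (network access required for this part).
function fetch_page($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return body instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
    $body = curl_exec($ch);
    curl_close($ch);
    return $body;
}

// Pull href values out of the HTML with a regular expression.
function extract_links($html)
{
    preg_match_all('#<a[^>]+href=["\']([^"\']+)["\']#i', $html, $m);
    return $m[1];
}

// Demonstrated on a canned string so it runs without a network:
$html = '<p><a href="/a">a</a> <a href="http://example.com/b">b</a></p>';
print_r(extract_links($html)); // [0] => /a, [1] => http://example.com/b
```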