I'm currently working on a multilanguage website and I need it to be SEO-optimized. I support three language/location variants: Dutch, English, and American English. I have a couple of questions about the optimization.
Structure
This is my site structure:
http://www.example.com/nl/
http://www.example.com/en/
http://www.example.com/us/
Note:
Every other language, like http://www.example.com/br/, will redirect to http://www.example.com/en/.
The page without a language extension, http://www.example.com, will redirect to the /en/ version as well.
Questions
I searched the internet for the best way to let Google know about the different languages. Google's documentation talks about the hreflang annotation: "When multiple language versions of a URL are used, every single language page must identify all language versions, including itself." But I can't find that tag in the source code of any big website (Google, Apple, Facebook, Microsoft, and almost every other multi-language site). Why not? How do they handle it?
When I search the Dutch Google for, let's say, the Microsoft site, the first result I see is http://www.microsoft.com/nl-nl/ (note the /nl-nl/ at the end of the URL). When I enter the site and select the US location in the footer, it redirects me to http://www.microsoft.com/en-us/. OK, that's logical. But when I search the US Google for the Microsoft site, I don't see http://www.microsoft.com/en-us/; I only see http://www.microsoft.com, without the /en-us/. Why? I don't understand, because even when you're in the US, http://www.microsoft.com redirects you directly to http://www.microsoft.com/en-us/! So why and how did they remove the language parameter from the URL? Also, this isn't just Microsoft's site (see http://www.hp.com).
And now a bit of a summary: what exactly should I do to make my website multi-language SEO-proof? I already redirect every user to the /en/, /us/, or /nl/ site. I also put the language in the <html> tag of every page (like <html lang="en-US">). What more should I do? Set the HTTP header? Use the hreflang meta tag?
To which URL should I point the default <link rel="alternate" href="http://www.example.com/" hreflang="x-default" />? I mean, the .com/ doesn't exist; I only have the domain with a language parameter. I see a lot of websites that use the .com/ as default, but when I enter it, it always redirects me to the /nl/ or /en/ version.
Hope someone can help me out!
You already separate your page content for different languages using different URLs. This is recommended because it avoids serving different content under the same URL (which would happen if, for example, you used only sessions and showed the same page in different languages).
I add hreflang annotations to my multilingual site, as per Google's recommendation. I suggest you do the same.
From the Google docs (https://support.google.com/webmasters/answer/189077?hl=en):
If you have multiple language versions of a URL, each language page must identify all language versions, including itself.
So on every page, you specify hreflang links for all versions of that page in your different languages.
Let me give you an example from my web application, which supports two languages, ru and tk. For ru I do not specify the language in the URL; for tk, /tm/ is added to the URL. If you go to https://www.tmawto.com/cars you will see the following meta tags:
<link rel="alternate" hreflang="ru" href="https://www.tmawto.com/" />
<link rel="alternate" hreflang="tk" href="https://www.tmawto.com/tm/" />
It's the same for all pages: for each page you specify the hreflang versions in all languages, including the page itself. No additional HTTP header is needed.
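Applied to the URL structure in the question, every page would list all three variants plus an x-default. Since the bare .com/ only redirects, one common choice is to point x-default at the /en/ version it redirects to. A sketch, using the paths from the question (mapping /us/ to en-US is an assumption):
<link rel="alternate" hreflang="nl" href="http://www.example.com/nl/" />
<link rel="alternate" hreflang="en" href="http://www.example.com/en/" />
<link rel="alternate" hreflang="en-US" href="http://www.example.com/us/" />
<link rel="alternate" hreflang="x-default" href="http://www.example.com/en/" />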
I am trying to secure my WordPress site from hackers - more specifically, individual non-content pages that appear to be getting a lot of hits. I am using Siteground and installed WordPress a few months ago. Checking the website statistics, I was taken aback by what I saw. I have briefly summarised the page hits below.
https:// ... /wp-admin/admin-ajax.php --> this page has been viewed over 16k times each month. Considering my site has been live for only 2 months, contains no SEO, and no-one knows about its existence this is odd!
https:// ... /index.php/wp-json/wp/v2/users/ --> this page is giving away my usernames.
https:// ... /index.php/wp-json/wp/v2/pages --> appears to display code from one of my main pages.
And a whole load of other pages that seem very odd to be accessed:
https:// ... /index.php/wp-json/wp/v2/taxonomies
https:// ... /index.php/wp-json/wp/v2/categories
https:// ... /index.php/wp-json/wp/v2/taxonomies/post_tag
https:// ... /index.php/wp-json/wp/v2/taxonomies/category
https:// ... /index.php/wp-json/wp/v2/tags
https:// ... /wp-admin/load-styles.php --> shows blank screen
There's a whole load more; some redirect to the WordPress login page and others show a blank screen. There are also some URLs that allow any user to download a *.woff file (whatever that is?!).
The point is, I thought WordPress would be secure enough not to make these pages visible, or at the very least not to show details.
Is there anything I can do? As I pointed out, I'm using Siteground, which doesn't use cPanel.
I thought the most difficult part of a blog site would be the content creation and overall web design. I'm not sure now.
Any help and/or advice would be greatly appreciated.
Thank you.
As for accessing login pages, that's to be expected with a WordPress site - bots love them.
You should have a strong password and use a nonstandard username for your admin-rights accounts. Bots will always try the default login page with default credentials. You could go a step further and move the login page too; that will massively reduce hits on the real login page. There's a plugin for it if you aren't comfortable coding it yourself: WPS Hide Login.
As for the wp-json URLs, you can require logins for them or disable them entirely, for example with a plugin that does exactly that: Disable REST API.
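If you'd rather not add a plugin, a minimal sketch of the same idea for a theme's functions.php, using WordPress's documented rest_authentication_errors filter to require a login for all REST requests (note this also blocks legitimate anonymous REST consumers):
// Require authentication for all REST API requests, which hides
// wp-json/wp/v2/users and similar endpoints from anonymous visitors.
add_filter( 'rest_authentication_errors', function ( $result ) {
    // Defer to any earlier authentication decision.
    if ( ! empty( $result ) ) {
        return $result;
    }
    if ( ! is_user_logged_in() ) {
        return new WP_Error(
            'rest_forbidden',
            'REST API restricted to authenticated users.',
            array( 'status' => 401 )
        );
    }
    return $result;
} );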
Concerning the .woff files: those are just font files. Either a bot is scraping them or a user is loading them to view the web page as it was designed; not really a concern.
WordPress also has a decent article on additional things you can do to secure your website here.
Is there a way to show content on a website from a different domain, without changing the URL? Consider that I own and control both domains and FTPs.
Example:
second-site.com/about, without changing its URL, should show content from first-site.com/about. It gets even trickier: the about page will have an equivalent URL in another language. So /about may be /uber-uns (in German), and I have to know this somehow so the URL can point to the specific content.
The reason the client wants this setup is that he's running a multi-language website offering travel packages. He has a ton of prices for hotels and airlines which are the same for both languages, so naturally he wants to enter them once (contradicting my initial proposal to have two separate CMSs).
Is there any way to achieve this via .htaccess? I'm using ExpressionEngine, which is based on CodeIgniter/PHP.
I've researched on Stack Overflow and online, but haven't found a safe or standard way to achieve this.
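For reference, one way to do this in .htaccess is Apache's mod_rewrite with the [P] (proxy) flag, which fetches the remote page server-side while the visitor's URL stays on second-site.com. A sketch, not a verified setup; it assumes mod_proxy is enabled and uses the /about and /uber-uns slugs from the question:
RewriteEngine On
# Serve the German slug from the English source page
# (the /uber-uns -> /about pairing would need to be maintained per page).
RewriteRule ^uber-uns$ http://first-site.com/about [P,L]
# Proxy the English page as-is.
RewriteRule ^about$ http://first-site.com/about [P,L]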
I have built a multilanguage website using the Yii 1 PHP framework; it supports both Arabic and English. Every URL on the site has the form www.example.com/lang/(title_of_page, or something like a slug for the articles/news), except the home page, which has the same URL for English and Arabic: www.example.com. The user can change the language, so the site's language changes and the page reloads in the other language, but it keeps the same URL.
Problem: the home page in Arabic doesn't appear in Google's Arabic search results, but the English one does.
I used an online XML sitemap tool to make a sitemap file from the website's URLs, but I found that none of the Arabic URLs could be crawled.
Is this problem caused by having the same URL for the home page in every language, or could there be another reason?
I'm no SEO master, but the cause may be that the site's language depends on cookies, and I don't know how Google likes that.
I searched a bit for official information and found this Google link, which states:
Keep the content for each language on separate URLs. Don't use cookies to show translated versions of the page. Consider cross-linking each language version of a page. That way, a French user who lands on the German version of your page can get to the right language version with a single click.
So the answer is simple: don't use the same URL when changing the language on the home page. I don't know your site or its main language, but I think you should serve the primary language at the www.example.com URL and secondary languages at base URLs like www.example.com/lang/.
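Following the cross-linking advice in the quote above, each version of the home page could also declare its alternates with hreflang links. A minimal sketch, assuming English at the root and Arabic under an /ar/ path (the path is an assumption):
<!-- On both www.example.com and www.example.com/ar/ -->
<link rel="alternate" hreflang="en" href="http://www.example.com/" />
<link rel="alternate" hreflang="ar" href="http://www.example.com/ar/" />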
I am about to create an online shopping site for one of my clients. I have to make this site SEO-friendly, and therefore I must understand a few things before I proceed to build a custom CMS-based website.
As I said, I am going to make a custom CMS-based website so that my client will be able to add new content through the CMS, but I don't understand a few things.
For example: I have an index.php page which has many links to different products, and all of these links are created from the database using PHP. A site link looks like:
http://www.def.com/shoes/Men-Shoes
My Questions:
1) I want to know: when GoogleBot crawls my site, will it also open my dynamically created links and index them? Will it also index the content behind those dynamic links?
2) Do I have to create separate pages for all of the products on the site and store them on my server, or is a single page that dynamically serves every product according to the user's query enough?
I read this:
"It functions much like your web browser, by sending a request to a web server for a web page, downloading the entire page, then handing it off to Google's indexer."
Is that right?
My query above actually looked like this, and I used an .htaccess file to make it pretty:
http://www.def.com/shoes.php?type=Men-Shoes
So is this right, and will Google crawl and index it?
SEO is a complex science in itself, and Google is always moving the goal posts and modifying its algorithm.
While you don't need to create separate pages for each product, creating friendly URLs using the .htaccess file can make them look better and easier to navigate. Also, creating a sitemap and submitting it to Google via their webmaster tools will help them know which pages to index.
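As a reference for the rewrite described in the question, a minimal .htaccess sketch (the shoes.php filename and type parameter are taken from the question's URL):
# Internally rewrite /shoes/Men-Shoes to shoes.php?type=Men-Shoes,
# so visitors and crawlers only ever see the friendly URL.
RewriteEngine On
RewriteRule ^shoes/([A-Za-z0-9-]+)/?$ shoes.php?type=$1 [L,QSA]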
GoogleBot will follow the links on your site, including dynamically created ones, but it is important not to try to game the system with blackhat methods if long-term success is your aim.
Also, use social media (Twitter, Facebook, Google+) to help promote your brand, and make sure you follow Google's guidelines with regard to SEO and on-page optimisation.
There is a huge amount of information on the internet on this subject, but be careful what advice you follow.
Google and other search engines index dynamic links too. One way to avoid duplicate content is to use the "Crawl" -> "URL Parameters" tool in Google Webmaster Tools; you can read more about how it works at https://support.google.com/webmasters/answer/6080548?rd=1. Set the "Crawl" field to "No URLs". This way you can hide dynamic links from search, but you need a list of all the dynamic links of your website/CMS so that you don't accidentally hide important content. The "URL Parameters" feature is also available in Bing Webmaster Tools: http://searchenginewatch.com/sew/how-to/2195777/bing-webmaster-tools-an-overview#.
I was wondering if a site using both pretty URLs and dynamic URLs will be penalized for duplicate content.
Let's say
http://example.com/article/1 is the same as http://example.com/?article=1. Is this bad for SEO?
Extra question:
Entering http://example.com/?blabla=qwerty will load the default home page. Is http://example.com/?blabla=qwerty treated as different page than http://example.com ?
What happens if the user enters http://example.com/????article=1 - is it different from http://example.com/?article=1? Thanks!
Forget end users - if a search engine bot can index both pages, then it's bad SEO.
Let's say Google is indexing http://example.com/article/1 as well as http://example.com/?article=1; then it will be treated as duplicate content on the same site.
However, http://example.com/?blabla=qwerty and http://example.com and all such variations are treated as a single page.
So it's not bad for SEO, but it's definitely not a good strategy. Best practice is to redirect http://example.com/?article=1 to http://example.com/article/1.
http://example.com/article/1 and http://example.com/?article=1 are treated as two different URLs by a search engine. They are bad for SEO for the following reasons:
link juice is split between the two URLs
duplicate content on the same site, as mentioned by Pavan.
As in point 1, the same principle applies: http://example.com/?blabla=qwerty is indeed treated as a different page than http://example.com/.
http://example.com/????article=1 is indeed different from http://example.com/?article=1. In the first case, the GET parameter is "???article" with a value of 1; in the second, the parameter is "article" with a value of 1.
Now to solve this, you can use one of several strategies:
Use Canonical URLs
The canonical URL serves to consolidate link signals for duplicate or similar content. There is more to read about it in Google Webmaster Tools. In your case, you should add a canonical URL in the header, such as:
<link rel="canonical" href="http://example.com/article/1" />
More background information can be found on Moz
Use 301 redirects
Where links are not canonical, use 301 permanent redirects to pass the link juice over to the new URL. Reference: Moz 301.
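For the ?article=1 example above, a minimal .htaccess sketch of such a redirect (assuming Apache mod_rewrite; the path scheme comes from the question):
# 301-redirect /?article=1 to /article/1 so only the pretty URL gets indexed.
RewriteEngine On
RewriteCond %{QUERY_STRING} ^article=([0-9]+)$
RewriteRule ^$ /article/%1? [R=301,L]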
Using rel="next" and rel="prev"
Where there's paginated content, use the HTML attributes rel="next" and rel="prev" to indicate to Google that the pages are paginated and linked. This will handle issues such as ?page=1 and ?page=2. Read more under "Indicate paginated content" at Google.
<link rel="prev" href="http://www.example.com/article?page=1">
<link rel="next" href="http://www.example.com/article?page=3">