My website got hacked. Since then I have cleaned all of my code and the database, but when I search for my website by keyword on Google, the result still shows the hacked page. The link is right, but the meta information and Google's cached page are not.
Use Webmaster Tools and submit a request to Google to review the site again. You will have to make sure that the website is indeed clean first. There is a good guide on how to clean the site here: https://docs.joomla.org/Security_Checklist/You_have_been_hacked_or_defaced
When I search for my site on Google, it shows up in first place, but with the message that the site has been hacked. It's been 2 months and it is still showing the same message when I google it. My site is built in PHP.
I know a little bit of technical stuff because I am a beginner developer. I checked the title, meta and image description tags, as well as the page title and page description, but did not see anything out of place in my text. Even my technical team could not figure it out.
I am so stressed out, please help. I have attached a picture of how it is showing in the Google search results.
Search your database for the "site may be hacked" keyword. Also, if you are using a CMS, update it to the latest version, since updated versions contain the latest vulnerability fixes.
Don't use nulled plugins, because they might contain malicious code hidden with base64 encoding or eval(). Once the site is clean, you can submit your website to Google for review so they can remove the "This site may be hacked" label.
If, as you say, the hacked content is no longer in the page source, you can submit the site to Google for review; they will reply within 72 hours.
You can follow the link below to submit your website to Google so they can review it and remove the "This site may be hacked" label from the search results:
https://www.google.com/webmasters/tools/home?hl=en
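One way to sweep a codebase for the kind of injected code mentioned above is a simple pattern scan. A minimal sketch in Python (a heuristic only: the patterns are my own assumption of what to look for, legitimate code also uses these functions, so every hit needs a manual review):

```python
import re
from pathlib import Path

# Constructs commonly used to hide injected code in nulled plugins.
SUSPICIOUS = re.compile(
    r"eval\s*\(|base64_decode\s*\(|gzinflate\s*\(|str_rot13\s*\(",
    re.IGNORECASE,
)

def find_suspicious(code: str) -> list:
    """Return the suspicious constructs found in a chunk of PHP source."""
    return SUSPICIOUS.findall(code)

def scan_tree(root: str) -> dict:
    """Scan every .php file under `root` and map file path -> hits."""
    hits = {}
    for path in Path(root).rglob("*.php"):
        found = find_suspicious(path.read_text(errors="ignore"))
        if found:
            hits[str(path)] = found
    return hits

if __name__ == "__main__":
    for path, found in scan_tree(".").items():
        print(path, found)
```

The same idea works for a database dump: export it to a file and run the scan over that file as well.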
I've recently transferred my WordPress website to a new server and all seemed to go smoothly. However, we've discovered that Facebook can no longer pick up data for our posts.
We post news stories to Facebook, and usually it populates the header, excerpt and image as soon as we post the URL to our page; however, it is no longer doing this.
Facebook Open Graph debugger shows that Facebook is seeing a 404 page for all posts published AFTER the server move. It's displaying no image and the title shows the archives list for that date. The same issue happens when someone 'likes' an individual post using the social button directly on the website.
Important to note that any URLs for posts published before the server move work fine. This data can be found by Facebook without a problem.
The website is www.thisisardee.ie. Below are examples of a post before and after the transfer, so you can see both.
BEFORE (working): http://www.thisisardee.ie/2016/03/09/ardee-western-bypass-backed-transport-authorities/
AFTER (not working): http://www.thisisardee.ie/2016/03/23/mcguinness-recalls-brussels-terror-fear/
Any help would be hugely appreciated. It's massively affecting our website as people are sharing our posts on Facebook and they're appearing without image or correct title. It looks awful.
Thanks in advance.
Finally found the reason behind this.
After the move from shared hosting to cloud hosting, I had updated my DNS records. However, I never updated my IPv6 (AAAA) record.
This wouldn't normally be an issue (and it explains why 99% of websites/services had no problem scraping/using my site), but Facebook appears to prioritise IPv6 over everything else. So it was using my IPv6 record, which was still pointing to my old server and an old version of my website. Hence, it was pulling in no information for the page.
I'm surprised it was still linking to the new page after we manually put in the image and title when posting to Facebook.
I spotted this after my shared hosting plan was officially cut off today, thereby deleting my old website. The links began showing a 404 error.
Hopefully this can help others, as I've seen a lot of people with a similar issue but no solution. Update your IPv6 record as Facebook uses it!
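To check whether your A and AAAA records point at the same place, the lookup can be scripted. A small sketch using Python's standard socket module (hostname taken from the question above):

```python
import socket

def records(host: str, family: int) -> set:
    """Return the set of addresses of one family (A or AAAA) for a host."""
    try:
        return {info[4][0] for info in socket.getaddrinfo(host, None, family)}
    except socket.gaierror:
        return set()  # no record of that family, or the lookup failed

if __name__ == "__main__":
    host = "www.thisisardee.ie"
    print("A    (IPv4):", records(host, socket.AF_INET))
    print("AAAA (IPv6):", records(host, socket.AF_INET6))
    # If the AAAA set still points at the old server, a scraper that
    # prefers IPv6 will fetch the stale copy of the site.
```

If the two sets point at different servers, the AAAA record is the one a dual-stack client like Facebook's scraper may follow.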
Sometimes I see links to my site in Google results with my search terms as a URL parameter. For example, if I search for "StrangeWord", I can see in the results:
example.com/p=StrangeWord
I'm pretty sure this is generated automatically; how is it done? I'm using PHP with Nginx.
It isn't generated automatically. If that page is in the index, it's because there was a crawlable link to that page - whether intentionally done by the webmaster or not - and Google happened to crawl that link.
Links can get generated by users sharing such a page, bookmarking it, or even linking to it from their own sites or social profiles.
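If you would rather keep such parameterised URLs out of the index altogether, one common remedy (my suggestion, not something stated in the answer above) is a canonical link in the page head, here shown with a placeholder URL:

```html
<!-- In the <head> of the page template: parameterised variants of the
     page are then consolidated under this one URL in Google's index. -->
<link rel="canonical" href="https://example.com/" />
```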
I apologize ahead of time for the non descriptive title, as I wasn't really sure how to word this.
I've recently switched some of my WordPress sites with a responsive design that implements a slider over to WooSlider. It works super well, and I love it. However, there is something stopping me from switching all of my sites over, and I understand this is not a WooSlider-only fault, but it's something I cannot Google and find out.
This is happening on every page view, even those without a slider.
In Google Analytics it shows domain.com/?wooslider-javascript=load&t=1352743207&ver=1.0.0 as a page view. For every single page. I obviously don't want this, but I don't know how to get rid of it.
Another example of this happening is using Gravity Forms with a referrer info plugin that shows page views, search query, browser, etc.
When the form is sent, the following is sent via email.
Page visited 1: domain.com/?wooslider-javascript=load&t=1352743207&ver=1.0.0 (http://domain.com/?wooslider-javascript=load&t=1352743207&ver=1.0.0)
Page visited 2: domain.com/about (http://domain.com/contact/about/)
Page visited 3: domain.com/?wooslider-javascript=load&t=1352751787&ver=1.0.0 (http://domain.com/?wooslider-javascript=load&t=1352751787&ver=1.0.0)
Page visited 4: domain.com/contact/ (http://domain.com/contact/)
So obviously I don't want that js file to show up as a page view. How can I remedy this?
Thanks!
This is Google Analytics configuration mistake #2: query-string variables. In your Analytics view settings, add the following to the "Exclude URL Query Parameters" field:
wooslider-javascript,t,ver
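Excluding those parameters keeps them out of your Analytics reports. If you also log URLs elsewhere (as the Gravity Forms email shows), the same normalisation can be sketched in code; the parameter names below are taken from the URLs in the question:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Query parameters WooSlider appends to every page view.
NOISE = {"wooslider-javascript", "t", "ver"}

def strip_noise(url: str) -> str:
    """Drop the noisy query parameters, keeping any others intact."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in NOISE]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_noise("http://domain.com/?wooslider-javascript=load&t=1352743207&ver=1.0.0"))
# -> http://domain.com/
```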
I've tried a bunch of techniques to crawl this URL (see below), and for some reason the title comes back incorrect. If I look at the source of the page with Firebug I can see the correct title tag; however, if I view the page source it's different.
Using several PHP techniques I get the same result. Digg is able to crawl the page and parse the correct title.
Here's the link: http://lifehacker.com/#!5772420/how-to-make-ios-more-like-android
The correct title is "How to Make Your iPhone (or Other iOS Device) More Like Android"
The parsed title is "Lifehacker, tips and downloads for getting things done"
Is this normal? How are they doing this? Is there a way to get the correct title?
That's because when you request it using PHP (without any JS support) you're getting the main page of Lifehacker, which is lifehacker.com.
Lifehacker recently switched their CMS so that all requests go to an initial page, and everything after the hashbang is read by a JS script on that page to figure out which content needs to be served. You need to modify your program to take this into account.
EDIT
Have a gander at these links
http://code.google.com/web/ajaxcrawling/docs/getting-started.html
http://www.tbray.org/ongoing/When/201x/2011/02/09/Hash-Blecch
Found the answer:
http://lifehacker.com/#!5772420/how-to-make-ios-more-like-android
becomes:
http://lifehacker.com/?_escaped_fragment_=5772420/how-to-make-ios-more-like-android
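The rewrite above can be automated for any hashbang URL. A minimal sketch (note that Google has since deprecated this AJAX-crawling scheme, so it only works on sites that still serve the `_escaped_fragment_` form):

```python
from urllib.parse import quote

def escaped_fragment_url(url: str) -> str:
    """Rewrite a #! (hashbang) URL into the _escaped_fragment_ form
    that crawlers used to request from AJAX sites."""
    if "#!" not in url:
        return url  # nothing to rewrite
    base, fragment = url.split("#!", 1)
    separator = "&" if "?" in base else "?"
    return base + separator + "_escaped_fragment_=" + quote(fragment)

print(escaped_fragment_url(
    "http://lifehacker.com/#!5772420/how-to-make-ios-more-like-android"))
# -> http://lifehacker.com/?_escaped_fragment_=5772420/how-to-make-ios-more-like-android
```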