Drupal website hacked, but cannot find source? [closed] - php

I got an email from Google Webmaster Tools saying that strange URLs were indexed, URLs like mywebsite.com/cheap-medicine/, etc.
I have a Drupal website and I can see those URLs are indexed, and using a proxy I can view the pages myself. However, I cannot find the source.
I have looked into a bunch of files, but they are unchanged.
I also searched my entire database and, of course, looked in the Drupal backend for strange content.
I even searched my entire server using Linux grep: no results for the words on the page. The database URL/routing tables also show no strange URLs.
I did, of course, also check the .htaccess files.
How are these URLs accessible if I cannot find them anywhere?

Look into your .htaccess file; it is very powerful. A rewrite rule there can map these strange URIs onto hidden scripts or content. Check the validity of that file first; this might be where it is coming from.
If your .htaccess file, and every .htaccess file inside any subdirectory of the site, turns out to be untouched, then you probably want to reinstall Drupal core. If you followed proper development practice and never edited third-party core files, you will not lose any work or time, because it will simply be a fresh default copy of what you originally installed.
After this, make sure core runs correctly in its default state and that the problem is gone. Then you can copy your own source files back into the Drupal installation, reconfigure, and resume.
If the problem comes back after you put your source files back, then the problem is in your sources.
You can also try grepping for the terms individually, e.g. grep -rin "medicine" . on a GNU/Linux box (searching from the docroot also covers hidden files such as .htaccess), to see where these terms show up.
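If grepping for the spam keywords finds nothing, the injected code is probably building those pages at request time or pulling them from a remote server, so the visible text never touches the disk. As a rough illustration only (not a Drupal tool; the document root, marker list, and 30-day window are all assumptions, and legitimate core/contrib code will produce some false positives), a small PHP script can list recently changed files and common obfuscation markers:
<?php
// find_suspects.php - illustrative sketch: flag PHP-ish files that changed recently
// or contain functions typically used by injected backdoors. Expect some false
// positives from legitimate core and contrib code.
$docroot = '/var/www/html';                         // assumed Drupal document root
$markers = array('eval(', 'base64_decode(', 'gzinflate(', 'str_rot13(');
$cutoff  = time() - 30 * 24 * 3600;                 // "changed in the last 30 days"

$iter = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($docroot, FilesystemIterator::SKIP_DOTS)
);

foreach ($iter as $file) {
    $name = $file->getFilename();
    if (!preg_match('/\.(php|inc|module|install)$/', $name) && $name !== '.htaccess') {
        continue;
    }

    $hits = array();
    $body = file_get_contents($file->getPathname());
    foreach ($markers as $marker) {
        if (strpos($body, $marker) !== false) {
            $hits[] = $marker;
        }
    }

    if ($hits || $file->getMTime() > $cutoff) {
        printf("%s  mtime=%s  %s\n",
            $file->getPathname(),
            date('Y-m-d H:i', $file->getMTime()),
            $hits ? 'markers: ' . implode(' ', $hits) : '');
    }
}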


Moodle Data size freeup [closed]

I would like to reduce my Moodle data size, as it is more than 115 GB; I have Moodle 2.9.1 installed. Please help me with methods to reduce the size of the moodledata folder by deleting old or unwanted data from it without affecting the working application. Also, please let me know if any Moodle plugins are available for this. Thanks in advance.
You are on Moodle 2.9.1, i.e. 2.0 or above, so the following applies.
You can probably safely remove files from the "temp" subdirectory.
It is likely, however, that the vast majority of files will be found in the "filedir" subdirectory. There is no safe way to manually remove files from here - they must be deleted via the user interface or by writing code to use the Moodle files API to delete unwanted files.
Deleting files directly from the "filedir" without allowing Moodle to also update the relevant entries in the mdl_files table will result in fatal errors if the file is accessed via the Moodle code.
I suggest you start by looking to see if there are old, unused courses that can be deleted via the Moodle interface.
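If you do end up writing code against the Files API, here is a minimal CLI sketch (not an official Moodle tool; the 90-day cutoff and the user draft file area are assumptions chosen purely for illustration). Because it deletes through the API, the mdl_files rows and the filedir contents stay consistent; always test it on a copy of the site first.
<?php
// purge_old_drafts.php - illustrative sketch: delete stale user draft files via the
// Files API so mdl_files and the filedir blobs stay in sync. Test on a copy first.
define('CLI_SCRIPT', true);
require(__DIR__ . '/config.php');     // run from the Moodle code root

$fs     = get_file_storage();
$cutoff = time() - 90 * DAYSECS;      // assumed "old" threshold

// Hypothetical target: user draft files older than the cutoff (directory rows use '.').
$records = $DB->get_recordset_select(
    'files',
    "component = 'user' AND filearea = 'draft' AND timecreated < ? AND filename <> '.'",
    array($cutoff)
);

foreach ($records as $record) {
    $file = $fs->get_file_by_id($record->id);
    if ($file) {
        mtrace('Deleting ' . $file->get_filepath() . $file->get_filename());
        $file->delete();  // removes the DB row; unreferenced filedir content is trashed and purged by cron
    }
}
$records->close();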

How to copy directory from a public http url to my Server [closed]

I have some files in a directory and sub-directories on an open HTTP site.
For Example:
http://example.com/directory/file1
http://example.com/directory/file2
http://example.com/directory/sub-directory/file1
http://example.com/directory/sub-directory/file2
http://example.com/directory/sub-directory2/file1
http://example.com/directory/sub-directory2/file2
I want to copy the full directory to my server.
I don't have SSH or FTP access to http://example.com.
I have tried a transloader script, but it grabs only one file at a time.
I need to copy the full directory, exactly as it is on the HTTP server, to my new server.
Thanks
Use wget or curl:
wget -r --no-parent http://example.com/directory/
In general you are unable to do this. You can grab the content of the visual layer/GUI that the site serves to you, but you cannot grab any of the "behind the scenes" pages the site has. You won't be able to get any of the code doing the back-end processing that creates what you see on the front end.
The only way to do this is if you have access to directory listings on the site. By this I mean that when you go to a directory such as example.com/test/, it gives you a list of all the files in that directory. Most sites protect against this, so unless you have that kind of access it is not doable; leaving it open would be insecure and would create many headaches for development and privacy.
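If you can only run PHP on the destination server, the same idea can be sketched there too. This is only a rough sketch: it assumes the remote server serves browsable index pages for those directories (as the question implies), that allow_url_fopen is enabled locally, and it uses the example.com URL from the question:
<?php
// mirror.php - illustrative sketch: recursively copy an open HTTP directory listing.
// It parses the <a href> links of each index page, so it only works while directory
// indexing is enabled on the remote server and allow_url_fopen is on locally.
function mirror_directory($url, $dest)
{
    if (!is_dir($dest) && !mkdir($dest, 0755, true)) {
        die("Cannot create $dest\n");
    }

    $html = file_get_contents($url);
    if ($html === false) {
        die("Cannot fetch $url\n");
    }

    $dom = new DOMDocument();
    @$dom->loadHTML($html);                  // index pages are often sloppy HTML

    foreach ($dom->getElementsByTagName('a') as $link) {
        $href = $link->getAttribute('href');

        // Skip empty, absolute, parent and sort links.
        if ($href === '' || $href[0] === '/' || $href[0] === '?'
            || strpos($href, '://') !== false || strpos($href, '..') !== false) {
            continue;
        }

        if (substr($href, -1) === '/') {
            mirror_directory($url . $href, $dest . '/' . rawurldecode(rtrim($href, '/')));
        } else {
            copy($url . $href, $dest . '/' . rawurldecode($href));
        }
    }
}

mirror_directory('http://example.com/directory/', __DIR__ . '/mirror');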

Is there a way to recover a file that went dead after a power outage? [closed]

I was developing a website in NetBeans 7.4, using PHP, and I had saved the file a dozen times before with CTRL + S.
The millisecond I hit CTRL + S again to save my last changes to the file, a power outage happened and my computer went off. (Yeah, I'm that lucky.)
After a while the power came back on, and when I started my computer I found that the file's contents were gone.
I had hundreds of lines of code and (stupidly) didn't use git or any VCS for the project. The other files are just fine, but the computer went off while it was writing the file I was working on (home.php).
I took a screenshot because I can't copy and paste the contents; it's just a bunch of NULLs on line 1.
home.php:
My question is: is there any way I can recover this file, or did I just lose 3.5 hours of work?
I've tried:
Looking for the file in the Windows cache
404: my file wasn't there.
Looking for the file in NetBeans' own cache directory
404: my file wasn't there either.
Looking for the output in the Chrome cache
404: no chance.
System Restore
That didn't help, because I don't have a restore point from 4 hours ago.
As mentioned, if the file's contents have been overwritten, then there's not much you can do.
You could try and find an earlier version of your file using data recovery software and performing a deep scan of your drive. This will look for data that is not tied to a file (i.e. an earlier version of your work).
You could try:
Recuva: small and free, feature rich, gets the job done
GetDataBack: not free, but highly effective (I've used it in the past, was quite satisfied with the results)
There's also a Wikipedia article on data recovery software where you can check many other options.
Right-click the file and check for an older version under "Restore previous versions"; it may restore part of your file, but probably not all of it.
This might guide you: http://www.techrepublic.com/blog/windows-and-office/recover-data-files-in-windows-7-with-previous-versions/4992/

Apache: give access permission only to the server [closed]

I've got WAMP server on my Windows machine (just starting to study Servers and what not so I'm new to all this).
What I want to know is how I can give Apache permission to access a folder while users are not able to access that folder directly.
I've got a folder containing images which anyone would be able to view if they knew the structure of my server's file system and directories. Therefore, what I wish to do is that this folder should be accessible by my .html and .php pages but not by a user who inputs the URL of the folder/image directly in their browser.
I realize this may not be possible, but there must be some alternative to what I'm trying to achieve. I'm very new to all this so I'd like to know if I'm going about this wrong way, whether I'm on the right track or if I simply need to edit my permissions in the httpd.conf file.
Unfortunately, that's not possible. The way the browser loads images that are referenced in your website is no different from the way it loads them when a user enters the same URL directly, so you get either both or neither.
What you CAN do is: disable indexing, so entering just the directory name without the image name results in an "Access Forbidden" error. For that, put this anywhere in your Apache config:
<Directory c:/path/to/your/directory>
Options -Indexes
</Directory>
(You may have to use backslashes on Windows; I'm not sure, as I haven't done any Apache config on Windows for some time.)
Another thing you can do is write a PHP script (or use any other server-side language) that reads those images and passes them to the browser. That way, you could check the Referer header the browser sends and react to it. I would not recommend this, as it yields more trouble than it solves, so treat the sketch below as an illustration of the idea rather than a ready-made solution.
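For illustration only, here is a minimal sketch of that approach. The protected_images directory (outside the web root) and the f query parameter are assumptions, and the Referer header is trivial to spoof, so this is obfuscation rather than security:
<?php
// image.php - illustrative sketch: serve images from a non-public directory and
// refuse requests whose Referer does not match this site.
$basedir = realpath(__DIR__ . '/../protected_images');    // assumed location outside the web root
$name    = basename(isset($_GET['f']) ? $_GET['f'] : ''); // strip any path components
$path    = realpath($basedir . '/' . $name);

$referer_ok = isset($_SERVER['HTTP_REFERER'])
    && parse_url($_SERVER['HTTP_REFERER'], PHP_URL_HOST) === $_SERVER['HTTP_HOST'];

// Refuse anything outside the image directory or without a matching referrer.
if ($path === false || strpos($path, $basedir) !== 0 || !$referer_ok) {
    header('HTTP/1.1 403 Forbidden');
    exit('Forbidden');
}

header('Content-Type: ' . mime_content_type($path));
header('Content-Length: ' . filesize($path));
readfile($path);
Your pages would then reference the images as <img src="image.php?f=photo.jpg"> instead of linking into the directory itself.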

Why create a virtual host in development? [closed]

There may be many reasons, but I can find only these:
By creating a vhost we maintain the same file structure as on the server.
We can have several server instances on one machine.
But do these really matter? I have my doubts.
What is the difference between keeping a separate folder under localhost versus having a separate vhost on localhost, and then deploying to the server?
Are there any other reasons (or are these not the real reasons at all)?
Thanks in advance.
Your first point is the biggest reason.
If you have http://localhost/devel vs http://devel.local, your relative pathing can get all screwed up.
If a developer wanted to make a home link, they might write <a href="/">Home</a>.
That will send you to the root folder on localhost, and you won't end up where you should be.
It is also a separation of concerns: with a vhost you know you are only within that project. Another thing is that if you had a .htaccess file at the localhost root, it would affect the settings in your project folder unless you overrode it with a .htaccess of your own there.
Another reason is subdomains: you cannot really mimic subdomains with folders without .htaccess tricks, whereas it is easy with vhosts.
You always want to mimic production as closely as possible; otherwise you will run into bugs on production and spend minutes, hours, or days debugging issues you might never have hit had you mimicked the environment in the first place.
