I'm developing an application composed of two elements: a simple PHP server interacting with a SQL database, and a client which, on certain events, sends information to the server to be logged.
I would like a way to verify (on the server side) that an incoming query really comes from my client, because there is a possibility that someone will decompile the client, see how I connect and send commands to the server, and use that knowledge to inject false data.
I have no idea how to build such a mechanism, though, simply because anything I implement in the client could (theoretically) be viewed after decompilation. Or maybe obfuscation is a solution in this case?
If someone is intrepid enough to decompile your client, they will simply write their own client using your mechanism and there's no way you can distinguish the two. No amount of "authentication" will stop that. (For example, if someone gets my private SSH key, game over: they are me until those keys are revoked.)
The best you can do is make it hard for them to decompile, detect intrusion, and limit damage. Some ideas (a minimal sketch follows the list), but you really should consider the attack patterns you expect to face:
Only allow the client to execute certain commands with certain parameters
Do not allow any more than the expected number of commands per time period
Limit the IP from which those certain commands can come
Be able to revoke client privileges on the server
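For illustration, here is a minimal server-side PHP sketch of the first two ideas: a whitelist of allowed commands plus an HMAC signature computed with a shared secret. The names (log_event, CLIENT_SECRET, the POST fields) are assumptions, and as noted above the secret can still be pulled out of a decompiled client, so this only raises the bar and lets you revoke a compromised key.

```php
<?php
// Minimal sketch, assuming the client signs each request with a shared
// secret (HMAC) and the server only accepts whitelisted commands.
// All names here (log_event, CLIENT_SECRET, the POST fields) are hypothetical.

$sharedSecret    = getenv('CLIENT_SECRET');   // stored server-side, revocable
$allowedCommands = ['log_event'];             // only the commands you expect

$command   = $_POST['command']   ?? '';
$payload   = $_POST['payload']   ?? '';
$signature = $_POST['signature'] ?? '';

// Recompute the signature and compare it in constant time.
$expected = hash_hmac('sha256', $command . '|' . $payload, $sharedSecret);

if (!in_array($command, $allowedCommands, true)
    || !hash_equals($expected, $signature)) {
    http_response_code(403);
    exit('Rejected');
}

// The request is well-formed and signed; per-key rate limiting and
// IP checks (the other ideas above) would go here.
```

The matching client code would compute the same hash_hmac() over the command and payload before sending the request.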
PS: Expect this question to be closed or at least downvoted. It's not about code, but about design.
Related
I am receiving malicious requests, so I'm seeking your help. I log the URLs of pages visited, and some of them look like http://example.com/?a=fetch&content=<php>die(#md5(HelloThinkCMF))</php>, some like http://example.com/?XDEBUG_SESSION_START=phpstorm, and one more like http://example.com/index.php?s=/Index/\think\app/invokefunction&function=call_user_func_array&var. What are these users trying to do, and how should I improve security and take precautions?
It seems the users are trying to use URL injection to attack your website. They are adding malicious code to URLs and sending them to the web server. If this code is run by the PHP process, it can cause damage to databases or the file system.
I faced a similar problem and was able to fix it by installing Fail2Ban and ModSecurity. ModSecurity is an open source Web Application Firewall. It allows blocking malicious requests using predefined rules. Fail2Ban is a server intrusion prevention tool that checks for certain text patterns in log files using regular expressions. It automatically adds rules to the system's firewall, banning the offending user.
See these blog posts on how to install Fail2Ban and ModSecurity.
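Just to illustrate the idea (this is not a substitute for ModSecurity's rule sets), a request filter in plain PHP might look like the sketch below; the pattern list is an assumption based on the URLs you posted and is far from complete.

```php
<?php
// Illustration of the kind of pattern matching a WAF rule performs.
// The patterns below are hypothetical examples taken from the logged URLs.

$query = rawurldecode($_SERVER['QUERY_STRING'] ?? '');

$suspicious = [
    '/invokefunction/i',
    '/call_user_func_array/i',
    '/<\?php|<php>/i',
    '/XDEBUG_SESSION_START/i',
];

foreach ($suspicious as $pattern) {
    if (preg_match($pattern, $query)) {
        http_response_code(403);                       // reject the request
        error_log('Blocked suspicious request: ' . $query);
        exit;
    }
}
```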
Been messing around with Firebug on a test website. Need some help regarding it.
This page is 2.7 KB but takes 11.39 seconds to load:
http://puu.sh/8V9sk.png
Would someone point me in the right direction?
Thanks
Hovering over the timeline within Firebug's Net panel provides more detailed information on the network request timings, so you can see which part of the request is slow.
In your screenshot, the Waiting part (purple) takes the longest, which means that your server-side script takes a long time to execute.
Because client-side debugging tools like Firebug cannot tell you why a server-side script is slow, you need to analyze the problem in the server-side script itself.
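If you can edit the server-side script, a rough way to narrow down the slow part is to time its sections yourself; the labels below are just placeholders for whatever the script actually does.

```php
<?php
// Minimal sketch: time each suspected section with microtime() and write
// the results to the error log to see where the request spends its time.

function mark(string $label, float $since): float {
    error_log(sprintf('%s: %.3f s', $label, microtime(true) - $since));
    return microtime(true);
}

$t = microtime(true);
// ... bootstrap / includes ...
$t = mark('bootstrap', $t);
// ... database queries ...
$t = mark('queries', $t);
// ... template rendering ...
mark('render', $t);
```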
I ended up allowing the site to cache all the web scripts, which made the page load quicker. In Moodle this can be done via
Site Administration > Appearance > Settings.
I built a website using PHP and want to deploy it on the Internet. I want to know whether it's better to host it on a regular web server or in the cloud, and what the pros and cons of each option are.
Resources needed for my site:
PHP
MySQL
Apache or lighttpd
My site is a simple CMS with 10 pages (max).
A web server has defined hardware specifications, meaning that if too many users try to access it, it will fail to answer their requests.
Cloud hosting providers will restrict you in what you can do (what language, what APIs you can access, ...), but they usually allow for automatic scaling, meaning: If the first instance's ("server's") load exceeds a certain limit, a second instance may start automatically to handle half of the load, and so on.
A single server is often sufficient for PHP sites, but you may suffer the slashdot effect, i.e. a sudden peak of attention to your site may bring it down quickly.
You didn't specify the kind of application you are going to deploy or the resources your application needs. Anyhow, below are the points you need to consider when deciding between cloud and non-cloud hosting.
1) Availability
2) Scalability
3) Security
4) Cost effectiveness
Cons:
1) Security
2) Limited control.
Someone placed a script on my site that sends email. How can I find this script?
I use Parallels and Linux CentOS.
I searched the site for the keyword "mail(", but the code could also be encoded (hash-like), in which case that search won't find it.
It could be anywhere, and it could be anything. It could even have been deleted.
We did have a situation a while back where a client lost control of their password due to a keylogger, and someone was uploading a CGI script to send spam emails, running it, and then deleting it. We only found out what was going on via the FTP logs.
Try checking your FTP logs and web server logs. If all that fails and you are sure it is PHP, try searching for eval(, as that is an often-used tactic to hide what a script is doing.
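If grepping by hand is awkward, a small PHP scanner along these lines can help; the web-root path and pattern list are assumptions, and legitimate code also uses these functions, so treat the output as a starting point rather than proof.

```php
<?php
// Recursively scan the web root for constructs commonly used to hide
// malicious PHP (eval, base64_decode, gzinflate, str_rot13).
// Adjust $root to your actual document root.

$root     = '/var/www/html';
$patterns = '/\b(eval|base64_decode|gzinflate|str_rot13)\s*\(/i';

$it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($root));
foreach ($it as $file) {
    if ($file->isFile() && $file->getExtension() === 'php') {
        $source = file_get_contents($file->getPathname());
        if (preg_match($patterns, $source)) {
            echo $file->getPathname() . PHP_EOL;   // candidate for review
        }
    }
}
```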
More importantly though, my suggestion would be to get someone who is experienced in server management to have a look at your site as a matter of urgency. If they were able to upload a file to your site once, then even if you remove it, it won't stop them doing it again until you find exactly how they were able to do it.
You might also have a look at your scripts. Is there a contact form somewhere on your site? You might not have escaped user input very well, which gives an attacker the ability to send emails to other recipients.
I had a similar situation in my early days until the host blocked the script and told me to fix it.
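For illustration, a contact form handler along these lines strips newlines from user-supplied values before they reach mail(), so an attacker cannot inject extra headers or recipients; the field names and addresses are made up.

```php
<?php
// Sketch: sanitize user input that ends up in mail() headers.
// Newlines in header values are what enable header/recipient injection.

$name    = str_replace(["\r", "\n"], '', $_POST['name'] ?? '');
$replyTo = filter_var($_POST['email'] ?? '', FILTER_VALIDATE_EMAIL);

if ($replyTo === false) {
    exit('Invalid e-mail address');
}

$headers = 'From: webform@example.com' . "\r\n"
         . 'Reply-To: ' . $replyTo;

mail('you@example.com', 'Contact form: ' . $name,
     $_POST['message'] ?? '', $headers);
```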
I intend to create a PHP application that regularly calls into an ASP.NET website to check for updates and then makes another call to download them.
I will be using SSL for transport security.
The clients need to be PHP as they run other bespoke bash commands.
I will need to download data, but I've not decided in what form.
Can anyone please suggest, for my .NET application:
How to restrict access to the server (I won't know the incoming request IPs)?
The best structured data transport mechanism? Maybe I could use JSON?
NB: if I can create a web service that PHP could consume, that's even better!
Using JSON is definitely a good idea. It's less verbose than XML and fits particularly well with object-oriented programming.
For your authentication issue, you could pass an authentication token (a random string) with each request that only your web service and your PHP code know.
This is reasonably safe since SSL will encrypt the request, so your auth token never appears in the clear on the network.
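As a rough sketch of what that could look like on the PHP side (the endpoint URL and header name are assumptions, not part of your API), the client sends the token in a header over HTTPS and decodes the JSON response:

```php
<?php
// Minimal PHP client sketch: call the update endpoint over HTTPS with a
// shared token in a request header, then decode the JSON response.

$token = getenv('UPDATE_API_TOKEN');   // the shared secret, kept out of source

$ch = curl_init('https://updates.example.com/api/check');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => ['X-Auth-Token: ' . $token,
                               'Accept: application/json'],
    CURLOPT_SSL_VERIFYPEER => true,    // keep SSL certificate verification on
]);

$body = curl_exec($ch);
curl_close($ch);

$updates = json_decode($body, true);   // associative array of update info
var_dump($updates);
```

On the ASP.NET side the service would simply reject any request whose token header doesn't match before doing any work.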