I want to test something when Apache crashes. The thing I want to test involves Windows asking me if I want to send an error report. Is there any way to make Apache crash and have Windows ask me to send an error report on it?
Just kill the running Apache instance.
In Windows: go to Task Manager > kill the process
In Linux: pkill processname
Take a look at Advanced Process Termination, especially its crash options; those might do what you want (display the Send Error Report message box), although I haven't tested it. It's worth a shot though.
I agree with the earlier idea that you should crash it using Windows.
The basic model of Apache is that it "forks" a new process for each connection request. Since Windows doesn't have built-in "fork" functionality, it has to create a new process for each request. As such, it can be glitchy, especially when multiple processes are running.
For me, every time I "restart" Apache on Windows while maintaining a connection, I get an "Illegal Operation" from Apache's process. I'm not sure that can be reproduced 100% of the time, but it does happen to me from time to time when I restart.
Alex provides a possible answer here:
Microsoft Application Verifier [...] can do fault injection (Low Resource Simulation) that makes various API calls fail, at configurable rates. [...]
I'm using IIS 7.5 with PHP and I'm having trouble with my application: it is VERY slow, and it can take more than 2 minutes to display the login screen.
I believe this is due to some kind of queue of requests to process.
I've taken a look at the "Worker Processes" menu in IIS and I found that there are tens of requests in the DefaultAppPool which seem to be waiting for a response.
Is this normal? How can I get rid of them?
I think you have a bottleneck somewhere in your code, because servers like Nginx, Apache, and IIS work well in most situations (we're not talking about high-load sites here - that's a separate topic).
So I suggest you try profiling your code. For example, you can use xhprof:
https://github.com/phacility/xhprof
xhprof will show you where the bottleneck in your code is.
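As a rough illustration (the include paths and the run_my_application() entry point are just placeholders; this assumes the xhprof extension and its xhprof_lib helpers are installed):

xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY); // start profiling CPU and memory

run_my_application(); // placeholder for the code you want to profile

$data = xhprof_disable(); // stop profiling and collect the data

include_once '/path/to/xhprof_lib/utils/xhprof_lib.php';
include_once '/path/to/xhprof_lib/utils/xhprof_runs.php';

$runs = new XHProfRuns_Default();
$run_id = $runs->save_run($data, 'my_app'); // open this run id in xhprof_html to see the report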
I need to test the behaviour of a tool I use on my web server, but it only works in case of a server fault, so I need to crash the server somehow. I tested a lot of scripts found on Google: infinite while(true) loops, some preg_match(...) and str_repeat(...) functions - nothing crashes it. I even tried to retrieve an 8 GB file - no problems, PHP just reports an Internal Server Error. Thanks for any help.
I think it might be possible to get Apache to segfault with mod_php by providing a regex that needs heavy backtracking, setting high PCRE limits and a low PHP memory limit. I can't recall which versions were involved, unfortunately.
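Something along these lines is the idea - a sketch only, and whether it actually segfaults depends on the PHP/PCRE versions and the exact limits:

// raise the PCRE limits so the engine keeps backtracking instead of bailing out early
ini_set('pcre.backtrack_limit', 100000000);
ini_set('pcre.recursion_limit', 100000000);
// keep PHP's own memory limit low
ini_set('memory_limit', '8M');

// classic catastrophic-backtracking pattern against a non-matching subject
$subject = str_repeat('a', 10000) . 'b';
preg_match('/^(a+)+$/', $subject);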
Are you sure it's not good enough to just send a kill signal?
--edit--
That is, send a kill signal to your web server - something similar to killall -9 apache-httpd, or whatever the name of your web server process is. Just check with your admin that this will target the correct processes.
I have a php program that does extensive curl requests to scrape web pages. It could be up to a million requests. I need to completely stop the script from running. Even though I stopped it in my browser, it is still processing requests. How can I stop it permanently?
You are just killing the request; you will need to stop Apache to stop it for now. In the future, redesign it so that the process looks for a kill switch (like the presence of a file) and stops processing if it finds it. It sounds like you are jamming a long-running process into a PHP script - why not run it as a normal system process directly?
Assuming you are running the typical LAMP stack, SSH into your machine if necessary and restart Apache.
If you are really going to perform long-running tasks with PHP, I suggest you consider using cron to run them or implement a task queue of some sort. It's generally a really bad idea to have this sort of thing fired off by a browser request.
Restart Apache. If you're using XAMPP, stop and start it from the control panel.
If not, on Windows, go to Task Manager and end the apache.exe process. Then start it again.
Why the hell is everyone assuming you're running Apache? Restart your web server and it should be dandy. In the future, you could have a kill switch like (example):
while(!file_exists('stop.txt'))
Then just make that file when you're ready to stop ^.^ Or have a finite number of iterations before cutting off.
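For example, a rough sketch of that kill-switch idea for a long scraping loop (stop.txt and the $urls list are just placeholder names):

set_time_limit(0); // let the loop run as long as it needs

foreach ($urls as $url) {
    if (file_exists('stop.txt')) {
        break; // the stop file showed up, so bail out cleanly
    }

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $html = curl_exec($ch);
    curl_close($ch);

    // ... process $html ...
}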
I'm looking for some ideas on how to do the following. I need a PHP script to perform a certain action for quite a long time. This is an extension for a CMS, and it can't be anything else but PHP. It also can't be a command-line script, because it will be used by ordinary people who only have the standard means of the CMS.

One of the options is having a cron job (most simple hostings have one) that triggers the script often, so that instead of working for a long time it performs the action step by step, preserving its state from one launch to the next. This is not perfect, but I can't think of any other solutions. If the script keeps redirecting to itself, the server will interrupt it. What other options could suit?
Thanks everyone in advance!
What you're talking about is a daemon, or long-running program, that waits for calls from client programs, performs an action, provides a response, then keeps on waiting for more calls.
You might be familiar w/ these in the form of Apache & MySQL ;) Anyway, PHP is generally OK in this regard: it can work over raw sockets as well as fork sub-processes to handle multiple requests simultaneously.
Having said that, PHP daemons are a tool where YMMV. Some folks will say they work great; other folks like me will say they have issues w/ interprocess communication and leaking memory even amidst a plethora of unset() calls.
Anyway, you likely won't be able to deploy a daemon of any type on a shared hosting environment. You'll need to get a better server package or stick with a cron-based solution.
Here's a link about writing a PHP daemon.
Also, one more note: daemons do crash from time to time, so you may still need to store state about what's going on, just in case someone trips over the power cord to your shared server :)
I would also suggest that you think about making it a daemon, but if not, then you can simply use
set_time_limit(0);        // never time out
ignore_user_abort(true);  // keep running even if the browser disconnects
at the top to tell it not to time out and not to get interrupted by anything. Then call it from cron to start it every day or whatever. I have this on many long-running daily tasks and it works great for me. However, it won't easily be able to talk to the outside world (other scripts can't query it or anything - if that is what you want, look into PHP services), so once you get it running, make sure it will stop, and have it print its progress to a logfile.
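For instance, a rough sketch of the progress-logging part (do_one_step(), $items and the log file name are placeholders, not from the original script):

foreach ($items as $item) {
    do_one_step($item); // hypothetical unit of work
    // append a timestamped line so you can check progress from outside
    file_put_contents('progress.log', date('c') . " finished {$item}\n", FILE_APPEND);
}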
I know PHP is not multithreaded, but I talked with a friend about this: if I have a large algorithmic problem I want to solve with PHP, isn't the solution simply to use the "curl_multi_xxx" interface and start n HTTP requests on the same server? This is what I would call PHP-style multithreading.
Are there any problems with this in the typical web server environment? The master request, which is waiting on "curl_multi_exec", shouldn't have any of that time counted against its maximum runtime or memory limit.
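Roughly what I have in mind - just a sketch, with worker.php and the chunk parameter as made-up names:

$mh = curl_multi_init();
$handles = [];
$results = [];

// fire off n "threads" as parallel HTTP requests to the same server
for ($i = 0; $i < 4; $i++) {
    $ch = curl_init("http://localhost/worker.php?chunk=$i");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// the master request just waits here until all of them finish
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
} while ($running > 0);

foreach ($handles as $ch) {
    $results[] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
}
curl_multi_close($mh);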
I have never seen this promoted anywhere as a solution to prevent a script from being killed by overly restrictive admin settings for PHP.
If I add this as a feature to a popular PHP system, will server admins hire a Russian mafia hitman to get revenge for this hack?
If I add this as a feature to a popular PHP system, will server admins hire a Russian mafia hitman to get revenge for this hack?
No, but it's still a terrible idea, if for no other reason than that PHP is supposed to render web pages, not run big algorithms. I see people trying to do this in ASP.NET all the time. There are two proper solutions:
1. Have your PHP script spawn a process that runs independently of the web server and updates a common data store (probably a database) with information about the progress of the task, which your PHP scripts can access.
2. Have a constantly running daemon that checks for jobs in a common data store, which the PHP scripts can issue jobs to and use to view the progress of currently running jobs.
By using curl, you are adding a network timeout dependency into the mix. Ideally you would run everything from the command line to avoid timeout issues.
PHP does support forking (pcntl_fork). You can fork some processes and then monitor them with something like pcntl_waitpid. You end up with one "parent" process monitoring the children it spawned.
Keep in mind that while one process can start up, load everything, and then fork, you can't share things like database connections, so each forked process should establish its own. I've used forking for up to 50 processes.
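A minimal sketch of that parent/children pattern (the work() function and the $jobs list are placeholders):

$jobs = range(1, 4);
$pids = [];

foreach ($jobs as $job) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    } elseif ($pid === 0) {
        // child: open its own database connection here, do the work, then exit
        work($job);
        exit(0);
    }
    $pids[] = $pid; // parent keeps track of its children
}

// parent waits for every child to finish
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}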
If forking isn't available in your install of PHP, you can spawn a process as Spencer mentioned. Just make sure you spawn the process in such a way that it doesn't stop processing of your main script. You also want to get the process ID so you can monitor the spawned processes.
exec("nohup /path/to/php.script > /dev/null 2>&1 & echo $!", $output);
$pid = $output[0];
You can also use the above exec() setup to spawn a process started from a web page and get control back immediately.
Out of curiosity - what is your "large algorithmic problem" attempting to accomplish?
You might be better off writing it as an Amazon EC2 service, then selling access to the service rather than the package itself.
Edit: you now mention "mass emails". There are already services that do this, they're generally known as "spammers". Please don't.
Lothar,
As far as I know, PHP doesn't run as a service the way its competitors do, so you have no way for PHP to know how much time has passed unless you constantly interrupt the process to check the elapsed time. So, IMO, no, you can't do that in PHP :)