I need to test the behaviour of a tool I use on my web server, but it only does anything when the server faults, so I need to crash the server somehow. I've tested a lot of scripts found on Google: infinite loops with while(true), some preg_match(...) and str_repeat(...) calls, and nothing crashes it. I even tried to retrieve an 8 GB file with no problems; PHP just reports an internal server error. Thanks for any help.
I think it might be possible to get Apache to segfault with mod_php by providing a regex that needs heavy backtracking, while setting high PCRE limits and a low PHP memory limit. I can't recall which versions were involved, unfortunately.
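I can't point at an exact version, but the general recipe looks something like this (a sketch only; on patched PHP/PCRE builds preg_match() just fails instead of crashing the worker):

```php
<?php
// Sketch: raise the PCRE limits so the engine is allowed to recurse deeply,
// which on some old PHP/PCRE builds overflows the C stack and segfaults
// the Apache/mod_php worker. On fixed builds this just returns false.
ini_set('pcre.backtrack_limit', '100000000');
ini_set('pcre.recursion_limit', '100000000');

// Classic catastrophic-backtracking pattern against a non-matching subject.
$subject = str_repeat('a', 100000) . 'b';
$result  = preg_match('/^(a+)+$/', $subject);

var_dump($result, preg_last_error());
```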
Are you sure it's not good enough to just send a kill signal?
--edit--
That is, send a kill signal to your web server: something like killall -9 apache-httpd, or whatever the name of your web server process is. Just check with your admin that this will target the correct processes.
I deployed nginx, php-fpm and PHP 8 on an EC2 / Amazon Linux 2 instance (T4g / ARM) to run a PHP application, as I had done for the previous version of this application with PHP 7.
It runs well, except for first requests. Whatever the action (clicking a button, submitting text, etc.), the first request always takes about 2.2 minutes, then the following ones run quickly.
The browsers (Firefox and Chrome) just wait for the response, then react normally.
I see nothing in the logs (in particular, the slow log is empty) and the caches seem to work well.
I guess I missed a configuration point. Based on my reading, I tried a lot of things in the php-fpm and PHP configuration, but without success.
Has anyone already encountered this kind of issue?
Thanks in advance
Fred
So far I have tried:
- activating all logs for php-fpm and PHP,
- increasing the memory for the process,
- checking the system parameters (ulimit, etc.),
- etc.
You've not provided details of the nginx config, nor the fpm config.
"I see nothing from the logs"
There's your next issue. The default (combined) log format does not record any timing information. Try adding $upstream_response_time and $request_time to your nginx log format. This should tell you whether the issue is outside your host, between nginx and PHP, or on the PHP side.
You should also be monitoring the load and CPU when those first couple of hits arrive along with the opcache usage.
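If you want to see the opcache side of that from PHP itself, a throwaway status script (hypothetical file name; remove it after debugging) could be as simple as:

```php
<?php
// opcache-status.php (hypothetical, delete after use):
// prints hit rate and memory usage so you can tell whether the slow
// first request is just a cold opcache being filled.
header('Content-Type: text/plain');

if (!function_exists('opcache_get_status') || ($s = opcache_get_status(false)) === false) {
    exit("opcache is not enabled\n");
}

printf("hit rate    : %.2f%%\n", $s['opcache_statistics']['opcache_hit_rate']);
printf("cached keys : %d / %d\n",
    $s['opcache_statistics']['num_cached_keys'],
    $s['opcache_statistics']['max_cached_keys']);
printf("memory used : %d of %d bytes\n",
    $s['memory_usage']['used_memory'],
    $s['memory_usage']['used_memory'] + $s['memory_usage']['free_memory']);
```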
First of all, thanks to @symcbean for the pointer. It helped me to find the script that took a long time to render and to fix the problem.
The problem was not due to the configuration of nginx, PHP-FPM or PHP. It was caused by an obscure auto-update parameter of the application running on these components, which forced the application to call a remote server and blocked the rendering.
I'm getting into WebSockets now and have been successfully using the hosted WebSocket services Pusher (didn't like it) and Scribble (amazing, but downtime is too frequent since it's just one person running it).
I've followed this tutorial http://www.flynsarmy.com/2012/02/php-websocket-chat-application-2-0/ on my localhost and it works great!
What I wanted to ask is: how do I set up the server.php from the above tutorial to run as a WebSocket server on an online web host / shared server?
Or do I need to get a VPS (and if so, which one do you recommend, and how can I set up the WebSocket server there, as I've never really used a VPS before)?
Thank you very much for reading my question and answering. I've read all the other questions/answers here regarding sockets but haven't been able to find the answer to my questions yet. Hopefully I'll find it here!
This is tricky.
You need to execute the server.php script and it needs to never exit. If you have SSH access to your shared server, you could execute it just like they do in the screenshot and make it run as a background task using something like nohup:
$ nohup php server.php &
nohup: ignoring input and appending output to `nohup.out'
After invoking this (using the SSH connection), you may exit and the process will continue running. Everything the script prints will be stored into nohup.out, which you can read at any time.
If you don't have SSH access, and the only way to actually execute a PHP script is through Apache as the result of a page request, then you could simply open that page in a browser and never close it. But sooner or later the connection between you and Apache will time out, effectively stopping the server.php script execution.
And even in those cases, a lot of shared hosts will not permit a script to run indefinitely. You will notice that there's this line in server.php:
set_time_limit(0);
This tells PHP that there's no time limit. If the host made PHP run in safe mode (which a lot of them do), then you cannot use set_time_limit and the time limit is probably 30 seconds or even less.
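For those older, safe-mode hosts you can at least make the limitation visible at the top of server.php (a sketch; safe mode was removed in PHP 5.4, so this only matters on old setups):

```php
<?php
// Sketch for old PHP hosts: set_time_limit() is ignored under safe mode,
// so check for it instead of silently running into the 30-second limit.
if (ini_get('safe_mode')) {
    die("safe_mode is enabled: the script will be killed after max_execution_time\n");
}

set_time_limit(0); // run without a time limit
```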
So yes, a VPS is probably your best bet. Now, I don't own one myself, and I don't know what's a good/bad price, but I'd say HostGator seems fine.
I have a PHP program that makes extensive curl requests to scrape web pages; there could be up to a million requests. I need to completely stop the script from running. Even though I stopped it in my browser, it is still processing requests. How can I stop it permanently?
You are just killing the request; you will need to stop Apache to stop it for now. In the future, redesign it so that the process looks for a kill switch (like the presence of a file) and stops processing if it finds it. It sounds like you are jamming a long-running process into a PHP script; why not run it as a normal system process directly?
Assuming you are running the typical LAMP stack, SSH into your machine if necessary and restart Apache.
If you are really going to perform long-running tasks with PHP, I must suggest you consider using cron to run them or implementing a task queue of some sort. It's generally a really bad idea to have these sorts of things fired from a browser request.
Restart Apache. If you're using XAMPP, stop and start it from the control panel.
If not, on Windows, go to Task Manager and end the apache.exe process. Then start it again.
Why the hell is everyone assuming you're running Apache? Restart your web server and it should be dandy. In the future, you could have a kill switch, for example:
while(!file_exists('stop.txt'))
Then just make that file when you're ready to stop ^.^ Or have a finite number of iterations before cutting off.
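A fuller sketch of that kill switch for the curl loop (stop.txt, urls.txt and the loop body are placeholders, not your actual code):

```php
<?php
// Hypothetical long-running scraper with a file-based kill switch.
// Create stop.txt next to the script to make it exit cleanly.
set_time_limit(0);

$urls = file('urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

foreach ($urls as $url) {
    if (file_exists(__DIR__ . '/stop.txt')) {
        echo "Kill switch found, stopping.\n";
        break;
    }

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $html = curl_exec($ch);
    curl_close($ch);

    // ... scrape $html ...
}
```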
I've written some JS scripts on my school's VLE.
It uses the UWA Widget Format and, to communicate with a locally-hosted PHP script, it uses a proxy and AJAX requests.
Recently we've moved the aforementioned locally-hosted server from a horrible XP-based WAMP server to a virtual Windows Server 2008 machine running IIS and FastCGI PHP.
Since then - or maybe it was before and I just didn't notice - my AJAX calls are starting to take in excess of 1 second to run.
I've run the associated PHP script's queries in phpMyAdmin and, for example, the associated getCategories SQL takes 0.00023s to run, so I don't think the problem lies there.
I've pinged the server and it consistently returns <1ms as it should for a local network server on a relatively small scale network. The VLE is on this same network.
My question is this: what steps can I take to determine where the "bottleneck" might be?
First of all, test how long your script is actually running:
Simplest way to profile a PHP script
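The crude version of that is just a couple of microtime() calls around the suspect code (a sketch, not tied to your actual script):

```php
<?php
// Minimal timing sketch: wrap the work that handles the AJAX request.
$start = microtime(true);

// ... run the getCategories query / build the response here ...

$elapsed = microtime(true) - $start;
error_log(sprintf('request handled in %.4f seconds', $elapsed));
```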
Secondly, you should check the disk activity on the server. If it is running too many FastCGI processes for the amount of available RAM, it will swap and it will be very slow. If the disk activity is very high, then you know you've found your culprit. Solve it by reducing the maximum number of fastcgi processes or by increasing the amount of server RAM.
I want to test something that happens when Apache crashes. The thing I want to test involves Windows asking me if I want to send an error report. Is there any way to make Apache crash so that Windows asks me to send an error report for it?
Just kill the running Apache instance.
In Windows: go to Task Manager and kill the process.
In Linux: pkill processname
Take a look at Advanced Process Termination, especially its crash options; those might do what you want (display the send-error-report message box), although I haven't tested it. It's worth a shot though.
I agree with the earlier idea that you should crash it using Windows.
The basic design of Apache is that it "forks" a new process for each connection request. Since Windows doesn't have built-in "fork" functionality, it has to create a new process for each request. As such, it can be glitchy, especially if there are multiple processes running.
For me, every time I "restart" Apache on Windows while maintaining a connection, I get an "Illegal Operation" from Apache's process. I'm not sure that can be reproduced 100% of the time, but it does happen to me from time to time when I restart.
Alex provides a possible answer here:
Microsoft Application Verifier [...] can do fault injection (Low Resource Simulation) that makes various API calls fail, at configurable rates. [...]