Recently I started experiencing performance issues with my online application hosted on Bluehost.
I have an online form that takes a company name, with an "onKeyUp" event handler tied to that field. Every time you type a character into the field, it sends a request to the server, which runs multiple MySQL queries to get the data. The MySQL queries altogether take about 1-2 seconds, but since a request is sent after every character typed, it easily overloads the server.
The solution to this problem was to cancel the previous XHR request before sending a new one. And it seemed to work fine for me (for about a year) until today. I'm not sure if Bluehost changed any configuration on the server (I have a VPS) or any PHP/Apache settings, but right now my application is very slow due to the number of users I have.
I would understand a gradual decrease in performance caused by database growth, but this happened suddenly over the weekend and speeds dropped roughly tenfold: a usual request that took about 1-2 seconds before now takes 10-16 seconds.
I connected to the server via SSH and ran a stress test, sending lots of requests to see what the process monitor (top) would show. As I expected, a new PHP process was created for every request and put in a queue for processing. This queueing, apparently, accounted for most of the wait time.
Now I'm confused: is it possible that before the (hypothetical) changes on the server, every XHR abort was actually causing the PHP process to quit, reducing the extra load on the server and therefore making it faster, and that for some reason this no longer works?
I have WAMP installed on Windows 7 as my test environment, and when I export the same database and run the stress test locally, it is fast - just like it used to be on the server. But on Windows I don't have a process monitor as handy as top, so I cannot see whether PHP processes are actually being created and killed accordingly.
I'm not sure how to continue troubleshooting at this point.
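For what it's worth, whether an aborted XHR actually stops the corresponding PHP process depends on PHP's client-disconnect handling: by default a script is only terminated on abort the next time it tries to send output to the client (see ignore_user_abort() and connection_aborted()). Here is a minimal sketch, assuming a hypothetical search.php endpoint and log path, that could be dropped on the server to check what actually happens when the browser aborts a request:

```php
<?php
// search.php (hypothetical) - logs whether an aborted XHR really
// terminates this script or lets it run to completion anyway.

// false (the default) means PHP may stop the script once it notices the
// client has gone away; true would let it keep running regardless.
ignore_user_abort(false);

file_put_contents('/tmp/abort-test.log', date('c') . " start\n", FILE_APPEND);

for ($i = 0; $i < 10; $i++) {
    sleep(1);                        // stand-in for the slow MySQL queries
    echo ' ';                        // PHP only notices the disconnect when it
    if (ob_get_level() > 0) {        // actually tries to push output, so flush
        ob_flush();                  // any output buffer as well
    }
    flush();
    if (connection_aborted()) {
        file_put_contents('/tmp/abort-test.log', date('c') . " aborted at step $i\n", FILE_APPEND);
        exit;
    }
}

file_put_contents('/tmp/abort-test.log', date('c') . " completed\n", FILE_APPEND);
```

If the log keeps showing "completed" even for requests the browser aborted, the cancelled XHRs are still doing the full database work on the server, and the slowdown is simply all of those requests queueing up.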
Related
I developed a site using Zend Framework 2. It is basically a price comparison site that integrates with many of the top affiliate networks out there. I wrote a script that checks prices from each affiliate network, and then updates my local DB with that price. Depending on which affiliate network I am contacting, I may be making an API call (Amazon or CJ.com), or I may be looking at an XML product feed (Pepperjam or LinkShare). The XML product feed would be hosted locally.
At present, there are around 3,500 SKUs that I am checking with this script. The vast majority of them (95%+) are targeting an XML product feed. I would estimate that this script should take in the neighborhood of 10 minutes to complete. Some of the XML files I am looking at are around 8 MB in size.
I have tested this script thoroughly in my local environment and gone to great lengths to make sure there is no memory leak or anything of that nature which would cause performance issues. As an example, I made sure to use data streams where possible to avoid loading the XML file into memory over and over, etc. Suffice it to say, the script runs locally without issue.
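For readers wondering what "using data streams" can look like in practice, here is a minimal sketch (not the author's actual code) that reads a large product feed with XMLReader so the whole 8 MB file is never held in memory at once; the feed path, the <product> element, and the field names are assumptions:

```php
<?php
// Hypothetical example: stream a large affiliate XML feed instead of
// loading the entire document with simplexml_load_file().
$reader = new XMLReader();
$reader->open('/path/to/feed.xml');   // assumed local feed path

while ($reader->read()) {
    // Materialize only one <product> node at a time.
    if ($reader->nodeType === XMLReader::ELEMENT && $reader->localName === 'product') {
        $product = new SimpleXMLElement($reader->readOuterXML());
        $sku   = (string) $product->sku;    // assumed feed fields
        $price = (float)  $product->price;
        // ... update the local DB row for this SKU here ...
    }
}

$reader->close();
```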
This script is intended to be run as a cron job; however, I do have a way to trigger it ad hoc via the secure admin interface. Locally, this is how I initiate the script, and everything goes rather smoothly.
When I deploy my code to the shared hosting account, I am having all sorts of problems. In order to troubleshoot, I attached logging to various stages of this script to track when it starts, how it progresses, and when each step completes, etc. All of this is being logged to a MySQL database.
Problem #1: If I run the script ad hoc via an HTTP request, I find that it will run for a couple of minutes, and then the script starts again (so there are now apparently two instances running). Wait another couple of minutes, and a third one will start, and so on. Here is an example from when I triggered the script to run at 10:09pm via an HTTP request.
[Screenshot of process manager]
Needless to say, I DO NOT run it via an HTTP request because it only serves to get me in trouble with my web hosting provider :)
Problem #2: When the script runs on the server, triggered via a cron job, it fails to complete. I have taken a copy of the production database along with the XML files to my local environment, and it runs fine, so it should not be a problem with bad data exposing bad code. My observation is that the script runs for nearly the exact same amount of time before it aborts, is terminated, or whatever. The last record updated is generally timestamped around 4 minutes and 30 seconds or so (if memory serves) after the script is triggered. The SKU list is constantly changing, so the record it ends on differs, but the time of the last update is nearly the same each time. Nothing is being logged in the error logs. I monitored server resources via the top command over SSH, and there is nothing out of the ordinary: CPU usage is in check and memory usage does not go up.
I have a shared hosting account through Bluehost. My thought was that perhaps it was a script max execution time issue, so I extended the max execution time both in the script itself and via php.ini. It made no difference.
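For reference, "extending the max execution time in the script itself" usually amounts to something like the lines below at the top of the script. It is shown only to make the step concrete; as noted, it made no difference here, which suggests the limit being hit is enforced outside PHP (for example a host-level process watchdog or a web server/FastCGI timeout that php.ini cannot override).

```php
<?php
// Typical way to lift PHP's own limits for a long-running job.
set_time_limit(0);                   // remove PHP's execution time limit
ini_set('max_execution_time', '0');  // the same limit, set via ini
ini_set('memory_limit', '512M');     // a generous memory ceiling
```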
So I guess what I am looking for is some fresh ideas on where to go next. What questions should I be asking my hosting company so they can help me get to the bottom of this? They are only somewhat helpful, to say the least. Could it be some limitation on my hosting account? Am I triggering some sort of automatic monitor that is killing the script? What kinds of Apache settings could be problematic for a script of this nature? PHP.ini settings? Absolutely any input you can provide would be helpful.
And why, when triggered via HTTP, would it keep spinning up new instances? I guess I could live without running it manually and only run it via a cron job, but that isn't working either. So I'm interested in hearing the community's thoughts on this. Thanks!
I haven't seen your script, nor have I worked with your host, so everything below is just a guess - and a suggestion.
Given your description, I would say you're right that your script might have been killed by a timeout when run from cron. I'm not sure why it keeps spawning new instances of your script when you execute it manually via an HTTP request, but that may also be related to a timeout (e.g. if they have logic that restarts a script if it has not produced output within a certain time, or something like that).
You can follow up with your hosting provider about running long-running (or memory-intensive) scripts in their environment; they might already have an FAQ or document that covers this topic.
Let me suggest an option in case your provider is unable to help.
From what you said, I expect your script runs an SQL query to get a list of SKUs, and then slowly iterates over this list, performing some job on every item (and eventually dies for whatever reason, as we learned).
How about creating a temporary table (or a file - any kind of persistent storage on the server) that stores the last processed record ID, or NULL if the script completed successfully? That way you can make your script start from the last processed record (if the last processed record had id = 1000, add ... WHERE id > 1000 to the main query that fetches SKUs), and you won't really care whether the script completed on its first attempt (if not, it will resume from the exact point where it was killed on its second try).
Alternatively, to extend this approach, you can limit one invocation to a certain number of records (e.g. 100 or 1000), again saving the last processed record ID in the database or elsewhere.
The main idea is: if the script fails to process all SKUs at once, just make it restartable so that it does not lose its progress.
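A minimal sketch of that idea, with invented table and column names (skus, price_check_progress) and a stub updatePriceFor() standing in for the real per-SKU work:

```php
<?php
// Hypothetical restartable batch runner: process SKUs in id order,
// checkpointing the last finished id so a killed run can resume.
function updatePriceFor(string $sku): void
{
    // ... the real API call / feed lookup and DB update go here ...
}

$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');

// One-row progress table, seeded once, e.g.:
//   CREATE TABLE price_check_progress (id INT PRIMARY KEY, last_id INT NULL);
//   INSERT INTO price_check_progress VALUES (1, NULL);
$lastId = (int) $pdo->query('SELECT COALESCE(last_id, 0) FROM price_check_progress WHERE id = 1')
                    ->fetchColumn();

$batchSize = 500;   // optional cap per invocation, as suggested above
$stmt = $pdo->prepare('SELECT id, sku FROM skus WHERE id > ? ORDER BY id LIMIT ?');
$stmt->bindValue(1, $lastId, PDO::PARAM_INT);
$stmt->bindValue(2, $batchSize, PDO::PARAM_INT);
$stmt->execute();
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

$save = $pdo->prepare('UPDATE price_check_progress SET last_id = ? WHERE id = 1');

foreach ($rows as $row) {
    updatePriceFor($row['sku']);     // do the work for one SKU
    $save->execute([$row['id']]);    // checkpoint after every record
}

// No rows left: the run is complete, so reset the checkpoint.
if (count($rows) === 0) {
    $pdo->exec('UPDATE price_check_progress SET last_id = NULL WHERE id = 1');
}
```

Run from cron every few minutes, each invocation picks up where the previous one stopped, so a single killed run no longer matters.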
I'm facing a challenge here. My Windows 10 PC needs to run all the time, with some programs running on it. However, as is common with Windows, it hangs/freezes/BSODs once in a while, at random. And since I'm not in front of it all the time, sometimes I won't know it's been stuck for a long time until I check it and have to hard restart it manually.
To overcome this problem I'm thinking of an idea like this:
Some program (probably a .bat file) runs on the PC and sends a ping (or some message) to a web service running remotely, every 10 minutes or so.
A PHP script (the web service) running on my hosting server (I own hosting space for my website) listens for this particular ping (or message) and waits.
If this web service doesn't receive the ping (or message) when expected, it simply sends out an email notification.
So whenever Windows hangs/freezes, the .bat file stops sending as well, triggering the notification from the web service within the next 10 minutes.
This is an idea, but frankly I still don't know how to actually achieve it technically, and whether it's truly feasible. Also, I'm not sure if I'm missing something crucial in terms of server load, etc.
I would greatly appreciate any help with the idea and, if possible, pointers to a script I can put on the server. Also, I'm not sure how to set it up to listen continuously.
Can someone please help here?
What about this?
Have your web service on your host comprise one page, one database and one cron job.
The database has one table with one record that holds a time.
The cron job checks the database record every 10 minutes, and if the time in the record is in the past, the cron job sends you an email.
The page, when requested, simply updates the record to be the current time + 10 minutes. Have your Windows machine request this page every 10 minutes.
So essentially, the cron job is ready to send you an email, but it never can because the PC is always requesting a page to reset the time - until it can't.
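A sketch of both pieces, assuming a one-row MySQL table such as heartbeat(id INT PRIMARY KEY, expires_at DATETIME) - the names are only illustrative:

```php
<?php
// heartbeat.php - the page the Windows machine requests every 10 minutes.
// Each request pushes the deadline 10 minutes into the future.
$pdo = new PDO('mysql:host=localhost;dbname=watchdog', 'user', 'pass');
$pdo->exec('UPDATE heartbeat
            SET expires_at = DATE_ADD(NOW(), INTERVAL 10 MINUTE)
            WHERE id = 1');
echo 'ok';
```

```php
<?php
// check_heartbeat.php - run from cron every 10 minutes, e.g.:
//   */10 * * * * php /path/to/check_heartbeat.php
$pdo = new PDO('mysql:host=localhost;dbname=watchdog', 'user', 'pass');
$expired = $pdo->query('SELECT expires_at < NOW() FROM heartbeat WHERE id = 1')
               ->fetchColumn();

if ($expired) {
    // Deadline passed: the PC has not phoned home, so assume it is hung.
    mail('you@example.com', 'PC watchdog alert',
         'No heartbeat received in the last 10 minutes - the PC may be frozen.');
}
```

On the Windows side, the proposed .bat file (or a scheduled task) only has to fetch heartbeat.php every 10 minutes, for example with curl or PowerShell.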
Alright, so here's how I finally achieved this whole idea, as suggested by @Warren above.
Created a simple DB table in MySQL on my hosting server, with just two fields: id and next_time.
Created a simple PHP page which inserts/updates the current time + 10 minutes into the above table.
Created a Python script that checks this table to see whether the stored time is earlier than the current time, and if so, sends me an email.
Scheduled this Python script as a cron job to run every 10 minutes.
Thus, when the PC hangs for more than 10 minutes, the script lets me know.
Thanks a lot for the help in coming up with this plan. Hope this helps someone else thinking of doing something similar.
Improvement: I moved the above code to my local Raspberry Pi web server, to remove the dependency on the remote hosting server.
Next step: I'm planning to let the Python script on the Raspberry Pi control a relay that toggles the PC's reset switch when the above event happens. So not only would I know when Windows hits a BSOD, but the PC would also be restarted on its own.
Well, as a next step, I simplified the solution for the original requirement a bit more.
No more PHP now - just one Python script and a small hardware improvement.
As I'm still learning new things with this Raspberry Pi, I have now connected the RPi to the PC via an Ethernet cable as a peer-to-peer connection.
Enabled ping response from Windows, as per this link.
Then I wrote another Python script that simply pings the Windows PC (using a static IP on the Ethernet adapter).
If the ping fails, it sends the email, as in the earlier script.
As before, I set up this new script as the cron job to run every 10 minutes, replacing the earlier script.
So if Windows hangs, I assume the ping will fail too, and thus an email will be sent out.
Thus the web server and database are now both eliminated from the equation.
I'm still waiting for the relay module to arrive so I can implement the next step of an automatic hard reboot.
Here's the problem I'm having.
I have my development machine set up with Nginx and PHP-FPM. On the whole it works fine, except when I have a long-running process that imports lots of very big files into a database. Now before you start thinking this is another "gateway timeout" issue, it's not (sorta).
The long-running process itself is working fine. That process actually sends back unbuffered JSON to the client to update its display during the long run. Again, this all works perfectly and I've never seen a gateway timeout on this particular connection.
My problem is when I open another tab in my browser and try to do something else in my PHP application while the import process is running. It is these subsequent connections that hang and eventually return a "504 Gateway Timeout". If it's an entirely different PHP project, it appears to work, so it only happens when the same 'index.php' script is being executed.
The kicker is, if I grab my Android tablet and pull up a page there, it works fine.
So it would appear that PHP-FPM only allows a single script execution per client.
What.... the....?
I actually have the same setup on 3 machines and they all do the same thing. Anyone know what I've done to botch this up?
Thanks.
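One well-known cause of exactly this symptom - requests from the same browser queueing behind a long-running script while a different device works fine - is PHP's file-based session locking: the import request holds the session file lock for its entire run, so every other request that carries the same session cookie blocks until it finishes. A sketch of the usual mitigation, assuming the import script does not need to write session data once it has started:

```php
<?php
// import.php (hypothetical) - the long-running import triggered from the browser.
session_start();                  // read whatever session data is needed
$userId = $_SESSION['user_id'];   // assumed session field

// Release the session file lock before the slow work begins; otherwise every
// other request carrying the same session cookie waits for this script.
session_write_close();

// ... long-running import / unbuffered JSON progress output goes here ...
```

That would also explain why the Android tablet is unaffected: it has a different session cookie, so it never waits on the same lock.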
I'm working on an application which gets some data from a web service using a PHP SOAP client. The web service accesses the client's SQL Server, which has very slow performance (some requests take several minutes to run).
Everything works fine for the smaller requests, but if the browser has been waiting for 2 minutes, it prompts me to download a blank file.
I've increased the PHP max_execution_time, memory_limit and default_socket_timeout, but the browser always seems to stop waiting at exactly 2 minutes.
Any ideas on how to get the browser to hang around indefinitely?
You could change your architecture from pull to push. Then the user can carry on using your web application and be notified when the data is ready.
Or, as a simple workaround (not ideal), if you are able to modify the SOAP server you could add another web service that checks whether the data is ready; the client could then call this every 30 seconds to keep checking whether the data is available, rather than waiting on one long request.
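A rough sketch of the status-check side of that workaround (the endpoint, table and column names are invented): the slow SOAP work records its result in a table, and the browser polls a tiny endpoint every ~30 seconds instead of keeping one request open for several minutes.

```php
<?php
// check_status.php (hypothetical) - polled by the client every ~30 seconds.
// Assumes the slow SOAP worker stores its progress/result in soap_jobs.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$stmt = $pdo->prepare('SELECT status, result FROM soap_jobs WHERE id = ?');
$stmt->execute([$_GET['job_id']]);
$job = $stmt->fetch(PDO::FETCH_ASSOC);

header('Content-Type: application/json');
echo json_encode($job ?: ['status' => 'unknown']);
// The client keeps polling until status is 'done' and then uses the result,
// so no single HTTP request ever has to stay open past the 2-minute timeout.
```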
The web server was timing out - in my case, Apache. I initially thought it was something else, as I had increased the timeout value in httpd.conf and it was still stopping after two minutes. However, I'm using Zend Server, which has an additional configuration file that was setting the timeout to 120 seconds; I increased this and the browser no longer stops after two minutes.
I am working on an art/programming project that involves using a lab of 30 iMacs. I want to synchronize them in a way that will allow me to execute a script on each of them at the very same time.
The final product runs in Flash Player, but if I am able to synchronize any kind of data signal through a web page, I'd be able to run the script at the same time. So far my attempts have all had fatal flaws.
The network I'm using is somewhat limited. I don't have admin privileges, but I don't think that really matters. I log into my user account on all 30 iMacs and run the page or script so I can run my wares.
My first attempts involved running Flash Player directly.
At first I tried using the system time and had the script run every two minutes. This wasn't reliable because, even though the time in my user account is synced, there is a discrepancy between iMacs. Even a quarter of a second is too much.
My next try involved having one Mac act as the host, writing a variable to a text file. The other 29 Flash Players checked this file for changes multiple times a second. This didn't work: it was fine with 3 or 4 computers but then became flaky. The strain on the server was too great, and Flash is just unreliable. I figured I'd try using local shared objects, but that wasn't reliable. I tried having the host computer write to 30 files and having each Mac read just one, but that didn't work either. I tried using LocalConnection, but it is not made for more than two computers.
My next try involved running a PHP server-time script on my web server and having the 30 computers check the time from that file every 30 seconds or so. I don't think my hosting plan supports this, because the server would just stop working after a few seconds - too many requests or something.
Although I haven't had success with a remote server, it will probably be more reliable with another, cleverer method.
I do have one kludge solution as a last resort (you might laugh): I would take an audio wire, buy 29 audio splitters, and plug them all in. Then I would run Flash Player locally and have it execute when it hears a sound. I've done this before - all you have to do is touch the other end of the wire and the static from your finger is enough to set it off.
What can I do now? I've been working on this project on and off for a year and just want to get it going. If I can get a web page synchronized across 30 computers in a lab, I could just pass the data to Flash and it would likely work. I'm more confident with a remote server, but if I can do it using the local Mac network, that would be great.
OK, here is how I approached my problem using a socket connection with Flash and PHP. Basically, you first set up a client script that is installed on all 30 iMac 'client' machines; let's assume all the machines are on a private network. When these clients are started, they connect to a PHP server via a socket. The PHP server script has an IP address and a port that the clients connect to; it manages the pool of client connections, message routing, and so on, and it runs at all times. The socket connection allows server-client interaction by sending messages back and forth, and these messages can trigger things to happen. You should read up more on socket connections and server-client interaction. This is just a little summary of how I got my project done.
Simple tutorial on socket/server client connection using php and flash
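A stripped-down sketch of such a PHP socket server (run from the command line; the port, the "go" console command, and the "start" trigger message are arbitrary choices): it accepts all 30 Flash clients and, when told to, writes the same trigger message to every connection at once, which is what makes the simultaneous start possible.

```php
<?php
// sync_server.php - minimal broadcast server, run as: php sync_server.php
// The 30 iMac clients connect over TCP; typing "go" + Enter on the server
// console sends the trigger message to every connected client at once.
$server = stream_socket_server('tcp://0.0.0.0:9000', $errno, $errstr);
if (!$server) {
    die("Could not start server: $errstr ($errno)\n");
}

$clients = [];

while (true) {
    // Wait for activity on the listening socket, the clients, or our stdin.
    $read   = array_merge([$server, STDIN], $clients);
    $write  = null;
    $except = null;
    if (stream_select($read, $write, $except, null) < 1) {
        continue;
    }

    foreach ($read as $stream) {
        if ($stream === $server) {                 // a new iMac connecting
            $clients[] = stream_socket_accept($server);
        } elseif ($stream === STDIN) {             // a command typed on the server
            if (trim(fgets(STDIN)) === 'go') {
                foreach ($clients as $client) {
                    // Flash's XMLSocket expects null-terminated messages.
                    fwrite($client, "start\0");
                }
            }
        } else {
            $data = fread($stream, 1024);
            if ($data === '' || $data === false) { // a client disconnected
                unset($clients[array_search($stream, $clients, true)]);
                fclose($stream);
            }
            // (any data the clients send is simply ignored in this sketch)
        }
    }
}
```

Each Flash client would connect to the server's IP on port 9000 (e.g. via XMLSocket) and kick off its script the moment it receives the "start" message, so all 30 machines fire within the latency of the local network.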