I am running a PHP web application on a local network (Apache server) that allows clients to write notes that are stored in a database.
Whenever 3 or more client machines connect to the server, two of the client machines end up waiting indefinitely for their page to load until the third client performs an action in the application (e.g. an AJAX drop-down menu, a confirm button, opening another page, or refreshing the page). The more clients there are, the more of them end up waiting on that one machine to make a request to the server.
At times (though rarely), all clients wait indefinitely and I have to restart the Apache server.
With only 2 clients connected everything runs smoothly (although I suspect that simply makes the waiting issue far less likely to occur). I have checked the server settings and everything seems fine (just some tweaks from the default settings).
I believe this is related to my code but I cannot track down the issue.
While researching a solution I have come across the following possibilities, but I want to ask whether anyone has experienced this issue and how they solved it:
Multi threading
Output Buffering (see the sketch after this list)
Disable Antivirus
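For the output-buffering option, a minimal sketch of the idea is shown below. This assumes a plain PHP page; render_notes_page() is a hypothetical placeholder for whatever currently builds the HTML, not an existing function in the application. The point is to send the complete response in one go and keep any slow work until after the client already has the page.

    <?php
    // Hedged sketch of output buffering. render_notes_page() is a placeholder.
    ob_start();                                     // buffer output instead of streaming it

    echo render_notes_page();                       // hypothetical: returns the page HTML

    header('Content-Length: ' . ob_get_length());   // tell the client how big the response is
    header('Connection: close');                    // hint to Apache to free the connection

    ob_end_flush();                                 // send the whole buffer at once
    flush();

    // Any slow follow-up work (logging, cleanup, ...) can run here,
    // after the client has already received the full page.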
I have an application in which a module lists tour packages from various third-party vendors through SOAP calls. This takes about 90 seconds to load.
Once one of the packages is clicked, another web service is called to get the details of that package.
Now, when you click the browser's back button from there, it is supposed to show the list without calling the web services again (i.e. from cache). That is what happens on the dev machine, which uses HTTP. But on the production server the back button refreshes the list page and I have to wait again for about 90 seconds, which is pretty painful.
Is it because of HTTPS? How do I navigate back without the previous page being refreshed?
The application is written in PHP. The production server runs Red Hat while the dev machine runs Windows (if that helps).
It has nothing to do with HTTPS. Cache is only controlled by the Expires/Cache-Control HTTP headers, which work the same regardless of whether the connection is encrypted or not. Most likely your production server is enforcing different cache headers than your development box.
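As a sketch of what that could look like (assuming the list page is plain PHP and you are fine with the browser reusing it for a few minutes on back navigation; the max-age value is an arbitrary example):

    <?php
    // Hedged sketch: explicit cache headers for the package-list page.
    header('Cache-Control: private, max-age=600');                         // let the browser reuse the page for 10 minutes
    header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 600) . ' GMT');

    // ... render the package list as before ...

Compare the headers your dev and production servers actually send (e.g. with curl -I or the browser's network panel); the difference there is usually the whole story.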
Having said that, you should also employ server-side caching for such an expensive operation. Perhaps have the data refreshed periodically by a cron job or similar and saved on the server for fast retrieval. Any page that requires 90 seconds of work to be displayed needs to be rethought.
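A minimal sketch of that cron-refresh idea, where fetch_packages_from_vendors() stands in for the existing SOAP calls and the file path is an assumption:

    <?php
    // refresh_packages.php - run from cron, not per request.
    // fetch_packages_from_vendors() is a hypothetical wrapper around the SOAP calls.
    $packages = fetch_packages_from_vendors();        // the slow ~90-second part
    file_put_contents(__DIR__ . '/cache/packages.json', json_encode($packages), LOCK_EX);

    // The list page then only reads the pre-built file:
    //   $packages = json_decode(file_get_contents(__DIR__ . '/cache/packages.json'), true);

    // Example crontab entry (refresh every 15 minutes):
    //   */15 * * * * php /path/to/refresh_packages.php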
It is a PHP document that only has 2 variables, which are hard-coded. It is a simple splash website that lets customers read about a product and click through to the actual product page. When I send 2000-4000 clicks within a 4-5 hour period, the page doesn't load all the way because of the high load! I do not have SSH access, only FTP. Anything I can do here?
Yeah, if you are using sessions and the default timeout is left as is, the session files will fill up your memory on the server and slow it right down. Set your session timeout value to a low number.
Which can be done with the info from here:
How to change the session timeout in PHP?
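A quick sketch of lowering the session lifetime in the script itself (600 seconds is just an example value):

    <?php
    // Hedged sketch: shorten the session lifetime before starting the session.
    ini_set('session.gc_maxlifetime', 600);   // garbage-collect idle session files after 10 minutes
    ini_set('session.cookie_lifetime', 600);  // expire the session cookie at the same time
    session_start();

With FTP-only access, the same values can usually also be set in a .htaccess file (php_value session.gc_maxlifetime 600), provided PHP runs as an Apache module.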
There could be two possibilities:
a) Since you don't have SSH access I'll assume you're on a shared server, and if that's the case then 2000+ page loads may simply be too much for your site's plan. You'd have to upgrade your hosting plan.
b) Too many session files for the server to handle. The server may be creating more and more session files as requests keep coming in. To delete the session files you'd need SSH access, or you'd have to ask the server admin to do it for you.
If your server can't deal with 500+ requests an hour then you'd be better off just getting a better hosting plan.
Servers are not limitless, and even higher-grade private, self-managed machines tend to be slower than you might expect.
I have a really weird behavior going on.
I'm hosting tracking software for users that mainly logs mobile traffic. The flow is as follows:
1. My client gets a PHP code snippet to put on his website.
2. This code sends a cURL POST (with predefined fields such as visitor IP, user agent, host, etc.) to my server.
3. My server logs the data and decides what the risk level is.
4. It then responds to the client's server with the status, i.e. it sends "true" or "false" back.
5. The client's server gets that response and decides what to do (load different HTML content, redirect, block the visitor, etc.).
The problem I'm facing: all the requests made from my clients' servers to my server are recorded and stored in a log file, yet my clients report click loss, as if my server sends back the response but their servers fail to receive it.
I should note that there are tons of requests every minute, from different clients' servers and from each individual client.
Could the reason be that CURLOPT_RETURNTRANSFER doesn't get any response? Or maybe the problem is cURL overload?
I really have no idea. My server is pretty fast and uses only 10% of its resources.
Thanks in advance for your thoughts.
You have touched a very problematic domain: high-load servers. The problem can be in so many places that you will really have to spend time to fix it, or at least partially fix it.
First of all, you should understand what is really going on. Check out this simplified scheme:
1. The client's PHP code tries to open a connection to your server; to do this it sends some data over the network to your server.
2. Your server (I suppose Apache) tries to accept it, if it has the resources; check the max connections settings in your Apache config (a sketch of those settings follows below).
3. If the server can accept the connection, it tries to create a new thread (or use one from the thread pool).
4. After the thread is started, it runs your PHP script.
5. Your PHP script does some work, connects to the DB and sends the response back over the network.
6. The client waits for the answer from step 5, or closes the connection because of a timeout.
So, at each point you can have a bottleneck:
Network bandwidth
Max opened connections
Thread pool size
Script execution time
Max database connections, table locks, io wait times
Clients timeouts
And this is not even a full list of the places where a problem can occur and ultimately lead to an empty cURL response.
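To illustrate point 2 above (max opened connections), the relevant knobs for Apache's prefork MPM look roughly like this; the numbers are placeholders, not recommendations, and the directive is named MaxRequestWorkers on Apache 2.4 and MaxClients on 2.2:

    # Hedged sketch of Apache prefork limits (httpd.conf / apache2.conf).
    <IfModule mpm_prefork_module>
        StartServers             5
        MinSpareServers          5
        MaxSpareServers         10
        MaxRequestWorkers      150    # MaxClients on Apache 2.2
        MaxConnectionsPerChild 1000   # MaxRequestsPerChild on Apache 2.2
    </IfModule>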
From the very start I suggest you add logging to the PHP code on both sides (client and server) and store all curl_error() problems in a text file; at the very least you will see which problems occur most often.
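A minimal sketch of what the client-side call with timeouts and error logging could look like (the endpoint URL, field names and log path are assumptions, not your actual values):

    <?php
    // Hedged sketch of the client-side cURL POST with error logging.
    $ch = curl_init('https://tracker.example.com/log.php');   // placeholder endpoint
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => http_build_query(array(
            'ip'        => $_SERVER['REMOTE_ADDR'],
            'useragent' => $_SERVER['HTTP_USER_AGENT'],
            'host'      => $_SERVER['HTTP_HOST'],
        )),
        CURLOPT_RETURNTRANSFER => true,   // return the "true"/"false" body instead of printing it
        CURLOPT_CONNECTTIMEOUT => 3,      // seconds to wait for the connection
        CURLOPT_TIMEOUT        => 5,      // overall timeout for the request
    ));
    $response = curl_exec($ch);

    if ($response === false) {
        // Log the exact cURL failure so dropped responses become visible.
        error_log(date('c') . ' curl error: ' . curl_error($ch) . "\n", 3, '/tmp/tracker_curl.log');
    }
    curl_close($ch);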
I have the following question regarding the memcached module in PHP:
Intro:
We're using the module to prevent the same queries from being sent to the database server, on a site with 500+ users online at any moment.
Sometimes (very rarely) the memcached process becomes defunct and all active users start sending queries straight to the database, so everything stops working.
Question:
I know that memcached supports multiple servers, but I want to know what happens when one of them dies. Is there some sort of balancer in the background that can say "Oh, server 1 is dead, I'll send everything to server 2 until server 1 comes back online", or is the load spread equally across all of them?
Possible solutions:
I need to know this because, if it's not supported, our sysadmin can set up the current memcached server as a load balancer and balance the load between several other servers.
Should I ask him to set up the load balancing manually, or is this feature supported by default? And what are the risks of each approach?
Thank you!
You add multiple servers in your PHP script, not in Memcache's configuration.
When you use Memcached::addServers(), you can specify a weight for every server. In your case, you might give one Memcached server a higher weight than the other and only have the second act as a failover.
Using Memcached::setOption(), you can set how often a connection should be retried and set the timeout. If you know your Memcache servers die a lot, it might be worth it to set this lower than the defaults, but it shouldn't be necessary.
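A minimal sketch of that setup with the pecl/memcached extension (host names, weights and timeout values are example assumptions):

    <?php
    // Hedged sketch: two weighted memcached servers with failure handling.
    $mc = new Memcached();

    $mc->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
    $mc->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
    $mc->setOption(Memcached::OPT_CONNECT_TIMEOUT, 200);      // milliseconds
    $mc->setOption(Memcached::OPT_RETRY_TIMEOUT, 10);         // seconds before retrying a dead server
    $mc->setOption(Memcached::OPT_SERVER_FAILURE_LIMIT, 3);   // failures before a server is marked dead

    $mc->addServers(array(
        // host, port, weight - the heavier server receives most of the keys
        array('cache1.example.local', 11211, 80),
        array('cache2.example.local', 11211, 20),
    ));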
I used a PHP script to parse a remote XML file and print the output on a web page inside a div. Since the output has to be synchronized with the currently playing track, I used JavaScript to reload the div content every 20 seconds. While testing the page I ran into an issue with my hosting: I got the message "IP Connection limit exceeded" and the site was not accessible. I changed the IP to solve this. Is there a workaround to parse the metadata without hammering the server and running into web hosting issues?
<script>
// Re-fetch current.php into the #reload div every 20 seconds.
setInterval(function() {
    $('#reload').load('current.php');
}, 20000);
</script>
Since a web page is a client-based entity, it is by nature unable to receive any data it hasn't requested. That being said, there are a few options you may want to consider.
First, I don't know which web host you are using, but they should let you refresh the page (or make a request like you are doing) more than once every 20 seconds, so I would contact them about that. A denial-of-service attack would look more like 2 or 3 requests per second per connection. There could be a better answer to this that I'm just not seeing, but at first glance that's my take on it.
One option you may want to consider is a WebSocket, a new feature of HTML5 that enables the web server to maintain an open connection with the visitor's browser and send packets of data back and forth. This removes the need for the browser to poll the server every 20 seconds. Granted, WebSockets are new, and I believe they currently only work in Safari and Chrome. I haven't experimented with them yet but plan to in the future.
In conclusion, I don't know of a better way than polling the server every so often to check for changes. Based on the XMLHttpRequest activity in my browser's developer tools, this is how Gmail checks for new messages. If your host won't allow you more requests per time interval, perhaps decrease the frequency at which you poll the server or switch to a different host.