It is a PHP page that only has 2 hard-coded variables. It is a simple splash website that lets customers read about a product and click through to the actual product page. When I send 2,000-4,000 clicks within a 4-5 hour period, the page doesn't load all the way because of the high load! I do not have SSH access, only FTP. Is there anything I can do here?
Yeah, if you are using sessions and the default timeout is left as is, it will fill up memory on the server and slow it right down. Set your session timeout value to a low number.
Which can be done with the info from here:
How to change the session timeout in PHP?
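If ini_set() is allowed on the host, the timeout can also be lowered from the PHP script itself - a minimal sketch, with purely illustrative values, that has to run before session_start():

```
<?php
// Treat session files as stale after 10 minutes instead of the default 1440 seconds.
ini_set('session.gc_maxlifetime', 600);

// Run the session garbage collector more often (default is 1 in 100 requests).
ini_set('session.gc_probability', 1);
ini_set('session.gc_divisor', 10);

session_start();
```

Of course this only matters if the splash page actually calls session_start(); if it never uses sessions, they are not the culprit.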
There could be two possibilities:
a) Since you don't have SSH access, I'll assume you're on a shared server, and if that's the case then 2000+ page loads for your site may be too much. You'd have to upgrade your hosting plan.
b) Too many session files for the server to handle. The server may be creating more and more session files as time passes and requests keep coming in. To delete the session files you'd need SSH access, or have the server admin do it for you.
If your server can't deal with 500+ requests an hour then you'd be better off just getting a better hosting plan.
Servers are not limitless, and even higher-grade private, self-managed machines can be somewhat slow.
I am running a PHP web application on a local network (Apache server) that allows clients to write notes that are stored in the database.
Whenever 3 or more client machines connect to the server, two client machines will end up waiting indefinitely for their page to load until the third client performs an action in the application (e.g. AJAX drop-down menus, the confirm button, opening another page or refreshing the page). The higher the number of clients, the more of them wait for that one machine to perform an action on the server.
At times (though rarely), all clients wait indefinitely and I have to restart the Apache server.
When there are 2 clients connected, all is smooth (although I think that's just because it reduces the chance of the waiting issue to almost never). I have checked the server settings and all seems well (just some tweaks from the default settings).
I believe this is related to my code but cannot track the issue.
While researching this I have come across the following possible solutions, but I want to ask whether anyone has experienced this issue and what their solution was:
Multi threading
Output Buffering
Disable Antivirus
A quick question.
I'm looking at doing a multi-domain hit counter over many different domains, preferably in PHP.
What would the best way to track each hit be?
I was thinking of using a central database and updating the number in it every time a page on any domain is loaded - but wouldn't that have major performance issues?
I was also thinking about the 'basic number stored in a text file' option - but is it even possible to edit a file from different servers/domains?
Any advice would be great!
If I get you right, you have different websites that sit on different servers?
In that case I'm not sure about editing a file from a different server, and I wouldn't go there.
Instead of editing a remote file, just update a remote DB (example below).
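For example, a minimal sketch of a per-page-load update against a central MySQL database (the host, credentials, table and the unique key over (domain, day) are all made up for illustration):

```
<?php
// Hypothetical central counter database - adjust host/credentials/schema.
$pdo = new PDO(
    'mysql:host=counter.example.com;dbname=stats;charset=utf8',
    'stats_user',
    'secret',
    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
);

// One row per domain and day; relies on a UNIQUE key over (domain, day).
$stmt = $pdo->prepare(
    'INSERT INTO hits (domain, day, count) VALUES (:domain, CURDATE(), 1)
     ON DUPLICATE KEY UPDATE count = count + 1'
);
$stmt->execute(array(':domain' => $_SERVER['HTTP_HOST']));
```

Include something like that on each page, or call it from the page asynchronously (see below), so the counting doesn't hold up rendering.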
The best solution is using a non-blocking server (like Node.js) which will update a DB on every page load (you can easily access remote DBs on other servers, or send a cURL call to a designated file on a master server). By using non-blocking web servers you will not slow down the page's load time.
Google Analytics works a bit differently - it loads a script from google-analytics.com and this script gathers all the info. The catch is that this only happens after the DOM has loaded.
If you are going for a solution like this, just put an AJAX call at the top of every page that you want to monitor.
Is it possible to keep a SQL connection/session "open" between PHP program iterations, so the program doesn't have to keep re-logging in?
I've written a PHP program that continually (and legally/respectfully) polls the web for statistical weather data, and then dumps it into a local MySQL database for analysis. Rather than having to view the data through the local database browser, I want to have it available as an online webpage hosted by an external web host.
Not sure of the best way to approach this, I exported the local MySQL database up onto my web host's server. Since the PHP program needs to loop continually (longer than the default runtime, with the HTML page also continually refreshing), I figured it would be best to keep the "engine" on my local computer, where I can leave the page looping in a browser, and have it connect to the database up on my web server and dump the data there.
It worked for a few hours. But then, as I feared might happen, I lost access to my cPanel login/host. I've since confirmed through my own testing that my IP has been blocked (the hosting company is currently closed), no doubt due to the PHP program reconnecting to the online SQL database once every 10 minutes. I didn't think this behavior and amount of time between connections would be enough to warrant an IP blacklisting, but alas, it was.
Now, aside from the possibility of getting my IP whitelisted with the hosting company, is there a way to keep a MySQL session/connection alive so that a program doesn't have to keep re-logging in between iterations?
I suppose this might only be possible if I could keep the PHP program running indefinitely, perhaps after manually adjusting the max run-time limits (I don't know if there would be other external limitations, too, perhaps browser limits). I'm not sure if this is feasible, or would work.
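To illustrate what I mean, something like this rough sketch run from the command line (the connection details, table and URL here are placeholders, not my real code):

```
<?php
set_time_limit(0); // lift the max runtime; CLI runs usually have no limit anyway

// One connection, opened once and reused for every iteration.
$db   = new mysqli('dbhost.example.com', 'weather_user', 'secret', 'weather');
$stmt = $db->prepare('INSERT INTO readings (taken_at, payload) VALUES (NOW(), ?)');

while (true) {
    // Placeholder for the actual scrape.
    $payload = file_get_contents('http://weather.example.com/latest.json');

    $stmt->bind_param('s', $payload);
    $stmt->execute();

    sleep(600);   // 10 minutes between polls, as described above
    $db->ping();  // checks the link; reconnects only if mysqli.reconnect is enabled
}
```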
Is there some type of low-level, system-wide "cookie" for a MySQL connection? With the PHP program finishing and closing (and then waiting for the HTML to refresh the page), I suppose the only way to avoid logging in again would be with some type of cookie, or IP-address-based access (which would need server-side functionality/implementation).
I'll admit that my approach here probably isn't the most efficient/effective way to accomplish this. Thus, I'm also open to alternative approaches and suggestions that would accomplish the same end result -- a continual web-scrape loop that dumps into a database, and then have the database continually dumped to a webpage.
(I'm seeking a way to accomplish this other than asking my webhost for an IP whitelist, or merely determining their firewall's access ban rate. I'll do either of these if there's truly no feasible or better way.)
Perhaps you can try a persistent database connection.
This link explains persistent connections: http://in2.php.net/manual/en/function.mysql-pconnect.php
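Note that mysql_pconnect() belongs to the old mysql extension; with mysqli the same idea is expressed by prefixing the hostname with "p:", and PDO has an equivalent option. A minimal sketch with made-up credentials:

```
<?php
// "p:" asks mysqli for a persistent connection: PHP reuses an already
// authenticated link from its pool instead of logging in again.
$db = new mysqli('p:dbhost.example.com', 'weather_user', 'secret', 'weather');

// The same thing with PDO:
$pdo = new PDO(
    'mysql:host=dbhost.example.com;dbname=weather',
    'weather_user',
    'secret',
    array(PDO::ATTR_PERSISTENT => true)
);
```

Keep in mind that persistent connections only pay off while the PHP process itself stays resident (mod_php / FPM); a page refresh still has to obtain a connection, it just grabs one from the pool instead of doing a full login handshake.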
Here it gets a little complicated. I'm in the last few months of finishing a larger web-based project, and since I'm trying to keep the budget low (and learn some stuff myself), I'm now touching an issue that I've never touched before: load balancing with NGINX, and scalability for the future.
The setup is the following:
1 Web server
1 Database server
1 File server (also used to store backups)
Using PHP 5.4+ over FastCGI
Now, all those servers should be 'scalable' - in the sense that I can add a new file server if free disk space is getting low, or a new web server if I need to handle more requests than expected.
Another thing: I would like to do everything over one domain, so that access to the different backend servers isn't really noticeable in the frontend (some backend servers are basically called via subdomain - for example the file server, over 'http://file.myserver.com/...', where load balancing happens only between the file servers).
Do I need an additional, separate server for load balancing? Or can I just use one of the web servers? If so:
How much power (CPU / RAM) do I need for such a load-balancing server? Does it have to be the same as the web server, or is a 'lighter' server enough for that?
Does the 'load balancing' server have to be scalable too? Will I need more than one if there are too many requests?
How exactly does the whole load balancing work anyway? What I mean:
I've seen many posts stating that there are problems like session handling / synchronisation on load-balanced systems. I found 2 solutions that might fit my needs: either the user is always directed to the same machine, or the session data is stored in a database. But with the second, I would basically have to rebuild parts of the $_SESSION functionality PHP already has, right? (How do I know which user gets which session - are cookies really enough?)
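From what I've read so far, PHP at least lets you swap out only the storage layer with session_set_save_handler(), roughly like this (the "sessions" table and the $pdo connection are placeholders I made up):

```
<?php
// Rough sketch of DB-backed sessions; $_SESSION itself keeps working as usual.
class DbSessionHandler implements SessionHandlerInterface
{
    private $pdo;

    public function __construct(PDO $pdo) { $this->pdo = $pdo; }

    public function open($savePath, $name) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute(array($id));
        return (string) $stmt->fetchColumn();
    }

    public function write($id, $data)
    {
        $stmt = $this->pdo->prepare(
            'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())'
        );
        return $stmt->execute(array($id, $data));
    }

    public function destroy($id)
    {
        $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE id = ?');
        return $stmt->execute(array($id));
    }

    public function gc($maxLifetime)
    {
        $stmt = $this->pdo->prepare(
            'DELETE FROM sessions WHERE updated_at < NOW() - INTERVAL ? SECOND'
        );
        return $stmt->execute(array($maxLifetime));
    }
}

session_set_save_handler(new DbSessionHandler($pdo), true);
session_start();
```

But is the session cookie (which only carries the session ID) really enough to tie the right row to the right user on every balanced server?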
What problems do I have to expect, except the unsynchronized sessions?
'Write scalable code' - that's a sentence I read a lot. But in terms of PHP, for example, what does it really mean? Usually, all the calculations for one user happen on one server only (the one NGINX directed the user to) - so how can PHP itself be scalable, since it's not what NGINX actually redirects?
Are different 'load balancing' pools possible? What I mean is that all file servers are in one 'pool' and all web servers are in another, and basically, if you request an image from a file server that has too much to do, the request is redirected to a less busy file server.
SSL - I'll only need one certificate, for the load-balancing server, right? Since the data always goes back through the load balancer - or how exactly does that work?
I know it's a huge question - basically, I'm really just looking for some advice and a bit of a helping hand; I'm a bit lost in the whole thing. I can read snippets that partially answer the above questions, but really 'doing' it is a completely different thing. So I already know that there won't be one clear, definitive answer, but maybe some shared experiences.
The end goal is to be easily scalable in the future, and to plan ahead for it (and even buy things like the load balancer server) in time.
You can use one of the web servers for load balancing, but it'll be more reliable to set up the balancing on a separate machine. If your web servers don't respond very quickly and you're getting many requests, the load balancer will queue the requests; for a big queue you need a sufficient amount of RAM.
You don't generally need to scale a load balancer.
Alternatively, you can create two or more A (address) records for your domain, each pointing to a different web server's address. That gives you 'DNS load balancing' without a balancing server. Consider this option.
I am designing a file download network.
The ultimate goal is to have an API that lets you directly upload a file to a storage server (no gateway or anything in between). The file is then stored and referenced in a database.
When the file is requested, a server that currently holds the file is selected from the database and an HTTP redirect is done (or the API returns the currently valid direct URL).
Background jobs take care of desired replication of the file for durability/scaling purposes.
Background jobs also move files around to ensure even workload on the servers regarding disk and bandwidth usage.
There is no RAID or anything like that at any point. Every drive is just mounted in the server as JBOD. All the replication happens at the application level. If one server breaks down, it is just marked as broken in the database, and the background jobs take care of replication from healthy sources until the desired redundancy is reached again.
The system also needs accurate stats for monitoring / balancing and maybe later billing.
So I thought about the following setup.
The environment is a classic Ubuntu, Apache2, PHP, MySQL LAMP stack.
A URL that hits the current storage server is generated by the API (that's no problem so far - just a classic PHP website and MySQL database).
Now it gets interesting...
The storage server runs Apache2 and a PHP script catches the request. The URL parameters (a secure token hash) are validated: IP, timestamp and filename are checked so the request is authorized. (No database connection required, just a PHP script that knows a secret token.)
The PHP script then sets the X-Sendfile header so the file is served through Apache's mod_xsendfile.
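Roughly like this - only a sketch, where the shared secret, the parameter names, the hash scheme and the storage root are invented and would have to match whatever the API really generates:

```
<?php
// Shared secret known to the API server and to every storage server.
$secret  = 'change-me';

$file    = isset($_GET['file'])    ? $_GET['file']          : '';
$expires = isset($_GET['expires']) ? (int) $_GET['expires'] : 0;
$token   = isset($_GET['token'])   ? $_GET['token']         : '';

// Recompute the token from client IP, expiry timestamp and filename.
// (On PHP 5.6+, hash_equals() would be the timing-safe comparison.)
$expected = hash_hmac('sha256', $_SERVER['REMOTE_ADDR'] . '|' . $expires . '|' . $file, $secret);

if ($expires < time() || $expected !== $token || strpos($file, '..') !== false) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}

// Hand the actual transfer off to Apache via mod_xsendfile.
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('X-Sendfile: /data/storage/' . $file);
```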
Apache delivers the file passed via mod_xsendfile and is configured to pipe its access log to another PHP script.
Apache runs mod_logio, and the access log is in combined I/O log format, additionally extended with the %D variable (the time taken to serve the request, in microseconds) so I can calculate transfer speeds and spot bottlenecks in the network and so on.
The piped access log then goes to a PHP script that parses the URL (the first folder is a "bucket", just like in Google Storage or Amazon S3, and each bucket is assigned to one client, so the client is known), counts input/output traffic and increments database fields. For performance reasons I thought about having daily fields and updating them like traffic = traffic + X, and if no row has been updated, creating it.
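As a rough sketch of that piped handler (the log format, the regex, the table and the connection details are all assumptions; Apache starts the script once and feeds it one log line per request on STDIN):

```
#!/usr/bin/php
<?php
// Central stats database - hypothetical credentials.
$db = new mysqli('db.example.com', 'stats', 'secret', 'storage');

$update = $db->prepare(
    'UPDATE traffic SET bytes_out = bytes_out + ? WHERE bucket = ? AND day = CURDATE()'
);
$insert = $db->prepare(
    'INSERT INTO traffic (bucket, day, bytes_out) VALUES (?, CURDATE(), ?)'
);

while (($line = fgets(STDIN)) !== false) {
    // Assumes the LogFormat ends with "%I %O %D" (bytes in, bytes out, microseconds).
    if (!preg_match('#"[A-Z]+ /([^/ ]+)/#', $line, $req) ||
        !preg_match('#(\d+) (\d+) (\d+)$#', trim($line), $io)) {
        continue;
    }
    $bucket = $req[1];      // first path segment = bucket = client
    $bytes  = (int) $io[2]; // %O: bytes sent, including headers

    $update->bind_param('is', $bytes, $bucket);
    $update->execute();

    if ($update->affected_rows === 0) {   // no daily row yet, so create it
        $insert->bind_param('si', $bucket, $bytes);
        $insert->execute();
    }
}
```

The pipe itself would be configured in the vhost with something like CustomLog "|/usr/bin/php /path/to/logger.php" storageio, where "storageio" is the name of the extended log format.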
I have to mention that the servers will be low-budget machines with massive storage.
You can have a closer look at the intended setup in this thread on Server Fault.
The key point is that the systems will have gigabit throughput (maxed out 24/7) and the file requests will be rather large (so no images or loads of small files that produce high load through lots of log lines and requests) - maybe 500 MB on average or so!
The currently planned setup runs on a cheap consumer mainboard (ASUS), 2 GB of DDR3 RAM and an AMD Athlon II X2 220 (2x 2.80 GHz, tray) CPU.
Of course download managers and range requests will be an issue, but I think the average size of an access will be at least around 50 megs or so.
So my questions are:
Do I have any severe bottleneck in this flow? Can you spot any problems?
Am I right in assuming that mysql_affected_rows() can be read directly from the last query and does not send another request to the MySQL server?
Do you think the system with the specs given above can handle this? If not, how could I improve it? I think the first bottleneck would be the CPU, wouldn't it?
What do you think about it? Do you have any suggestions for improvement? Maybe something completely different? I thought about using lighttpd and the mod_secdownload module. Unfortunately it can't check the IP address, and I'm not as flexible with it. It would have the advantage that the download validation wouldn't need a PHP process to fire. But since that process only runs briefly and doesn't read and output the data itself, I think this is OK. Do you? I once served downloads using lighttpd on old throwaway PCs and the performance was awesome. I also thought about using nginx, but I have no experience with that.
What do you think about the piped logging going to a script that directly updates the database? Should I rather write the requests to a job queue and update the database in a second process that can handle delays? Or not do it at all and parse the log files at night? My thinking is that I would like to have it as close to real time as possible and not have accumulated data anywhere other than in the central database. I also don't want to keep track of jobs running on all the servers - that could be a mess to maintain. There should be a simple unit test that generates a secured link, downloads it and checks whether everything worked and the logging has taken place.
Any further suggestions? I am happy for any input you may have!
I am also planning to open source all of this. I just think there needs to be an open-source alternative to expensive storage services like Amazon S3 that is oriented towards file downloads.
I really searched a lot but didn't find anything like this out there. Of course I would reuse an existing solution, preferably open source. Do you know of anything like that?
MogileFS, http://code.google.com/p/mogilefs/ -- this is almost exactly the thing you want.