I have the following situation:
An auction website where every connected user makes an AJAX request to the server every 2 seconds.
The data changes every 2 seconds, so it cannot be cached for any length of time, which got me wondering.
What would be the best way to accomplish this:
If I get 200 requests in the same second, serve them all the same response instead of running PHP again and connecting to MySQL for the results.
I don't know whether this can be done with such a short cache duration of 1 second, and I also don't know what would work better: something on the Nginx side, or something on the PHP side such as APC.
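To make the idea concrete, here is a minimal sketch of that 1-second micro-cache on the PHP side, using APCu (the modern successor to APC); the endpoint name, table, and credentials below are made up:

    <?php
    // auction-state.php – hypothetical polled endpoint
    header('Content-Type: application/json');

    $payload = apcu_fetch('auction_state');
    if ($payload === false) {
        // Cache miss: query MySQL once, then share the result for 1 second
        $pdo  = new PDO('mysql:host=localhost;dbname=auction', 'user', 'pass');
        $rows = $pdo->query('SELECT id, current_bid FROM auctions')->fetchAll(PDO::FETCH_ASSOC);
        $payload = json_encode($rows);
        apcu_store('auction_state', $payload, 1); // TTL = 1 second
    }
    echo $payload;

Note that a burst of simultaneous cache misses can still reach MySQL more than once; a lock or a briefly stale fallback value would be needed to guarantee a single query per second.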
Any ideas? Does it make sense?
My problem is that I've already tried to tweak Nginx and php-fpm, and right now the site handles 200 requests/s at a 2000 ms response time; at 500 requests/s it's about 5000 ms. So I'm looking for a way to speed things up and handle as many requests per second as possible.
Update:
The website is running on Symfony2 so any suggestions related to it are also welcome.
Update 2!!!
I have moved the part of the application that handles the AJAX request into a single PHP file that doesn't use the Symfony2 framework. It runs 3 SQL queries and returns a JSON response. It can now handle 1000+ requests per second at around 150 ms each, which is really incredible. I guess Symfony2 needs some serious tweaking to do the same, and I suspect the problem was not PHP itself but all the memory used by the framework.
Vanilla PHP is of course faster than any PHP framework, but maintaining dozens of such standalone scripts is painful. You can stick with Symfony and put Varnish in front of it to handle the heavy load. The cache TTL can be as low as 1 second, and with Varnish you can handle thousands of requests.
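On the Symfony side this mostly means marking the response as publicly cacheable; a rough sketch (the controller and helper are hypothetical), assuming a default Varnish configuration that honours s-maxage:

    <?php
    // Hypothetical Symfony2 controller; fetchAuctionState() is a made-up helper.
    use Symfony\Component\HttpFoundation\JsonResponse;

    class AuctionStateController
    {
        public function stateAction()
        {
            $response = new JsonResponse($this->fetchAuctionState());
            $response->setPublic();         // Cache-Control: public
            $response->setSharedMaxAge(1);  // s-maxage=1, so Varnish may reuse it for 1 second
            return $response;
        }

        private function fetchAuctionState()
        {
            return []; // ...the three SQL queries would go here...
        }
    }

Keep in mind that a default Varnish setup will not cache requests that carry cookies, so the polled endpoint needs to be cookie-free or the VCL has to strip cookies for it.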
I have 3 CodeIgniter-based application instances on two separate servers.
Server 1.
The first instance is the application and the second is a REST API; both use the same database. (I know there is no benefit to having two instances on the same machine other than cleanliness, and that is why I have it this way.)
Server 2.
This server holds only a REST API with a whole bunch of PHP data-processing functions. I call this server the worker, because that is all it does.
It acts as an endpoint for the many API services I connect with.
So the first thing this server does is receive requests from the application; sometimes it processes those requests before anything else.
Then it sends requests to the API service. At that point the process is complete and the session is over.
A short time later the API service responds with results, which this server takes and processes, and then it sends the result to the application.
The application is at times heavy on very simple SQL queries, for the most part inserts/updates on a single table. The number of requests sent is also kept to a minimum, because for the most part I send the data for many requests in one; I call this a bulk request.
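As an aside, if those simple inserts arrive together, CodeIgniter's query builder can collapse them into one statement; a rough sketch inside a hypothetical model (table and column names are made up):

    <?php
    // Hypothetical CodeIgniter 3 model method
    class Order_model extends CI_Model
    {
        public function save_statuses(array $rows)
        {
            // $rows = [['order_id' => 1, 'status' => 'sent'], ...]
            // One multi-row INSERT instead of N separate queries
            $this->db->insert_batch('order_status', $rows);
        }
    }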
What is very heavy is the number of responses I get: I can receive up to 1000 responses to one request within a few seconds (I can't reduce that, because I need every single one), and each response is also followed by another two identical responses just to make sure I got it; I treat those as duplicates as soon as I can and stop that process.
Then I process every response with PHP (not too heavy, just matching result arrays) and post it to my REST API on the application server to update the application tables.
Now, when I run, say, 1 request that returns 1000 responses, the application processes the data fine with correct results, but during that time the server is pretty much inaccessible to other users.
Everything runs on a LAMP stack: Ubuntu 16.04 with MySQL and Apache.
The framework is the latest CodeIgniter.
Currently my setup is...
...for the application server
2 vCPUs
4GB RAM
...for the worker API server
1 vCPU
1GB RAM
I know the server setup is very weak and it certainly bottlenecks, but that was just for the development stage.
Now I am moving into production and would like to hear your opinions, if you have any, on how best to approach this.
I am a programmer first and a server administrator second.
I was debating switching to NGINX; I think I will definitely go with php-fpm, and maybe MariaDB, though I've read that thread management is important there. This app will not run heavy all the time, probably 50/50, so I may not be able to tune it optimally for every workload anyway, and may end up with no better performance in the end.
Then I will probably have to add more servers and set up load balancing, plus high availability.
Not sure about all this.
I don't think that just upgrading the servers to the maximum will help, though. I can go all the way up to 64 GB RAM and 32 vCPUs per server.
Can I hear your opinions please?
Maybe share some experience?
Links to resources if you have some good ones?
Thank you very much. I hope you can help me.
None of your questions matter. Well, that is an exaggeration. Machines today are not different enough to worry about starting with the "best" on day one. Instead, implement something, run with it for a while, then see where your bottlenecks are in order to decide what to do next.
Probably you won't have any bottlenecks for a long time.
After some investigation, I updated the title of the question. Please see my updates below.
Original question:
I'm building a website with WordPress and sometimes make use of async calls to WP REST API endpoints.
Calling these endpoints from my AJAX functions often leads to TTFB times of at least ~780 ms:
But if I open the URL/endpoint directly in the browser, I get TTFB times that are 4-5 times faster:
I wonder where the delays come from. I'm running this page on my local dev server with Apache 2.4, HTTP/2 and PHP 7 enabled.
What is the best way to monitor such performance "issues"?
Please note: I'm not using WordPress' built-in AJAX functionality. I'm just calling something like
axios.get(`${url}/wp-json/wp/v2/flightplan`)
inside a React component that I've mounted in my homepage template.
Update
Damn interesting: clearing cookies reduces TTFB a lot:
Update 2
After removing both of the other AJAX calls, the flightplan request performs much faster. I think there are some issues with concurrent AJAX requests. I've read a bit about session locking, but since WordPress and all of the installed plugins don't make use of sessions, that can't be the reason.
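(For completeness, this is what PHP session locking looks like in general, and how a script releases the lock early; as noted above, it shouldn't apply to a session-less WordPress setup:)

    <?php
    // Generic illustration, not WordPress-specific: while one request holds
    // the session lock, other requests using the same session must wait.
    session_start();
    $_SESSION['last_hit'] = time();
    session_write_close();  // release the lock so parallel AJAX calls aren't serialized
    sleep(3);               // slow work done after this no longer blocks the other requests
    echo 'done';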
Update 3
Definitely, it has something to do with my local server setup. I just deployed the site to a "real" web server:
But it would still be interesting to know how to set up a server that handles concurrent requests better.
Update 4
I made a little test: calling 4 dummy requests before calling the "real" ones. The dummy script only returns a "Foobar" string. Everything looks fine at this point:
But when I add sleep(3) to the dummy AJAX script, all the other requests take much longer too:
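The dummy endpoint used for this test is tiny (a hypothetical dummy.php):

    <?php
    // dummy.php – test endpoint from the experiment above
    // sleep(3);  // uncommenting this one line slows down the other concurrent requests too
    echo 'Foobar';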
Why?
Because your AJAX call has to wait for all of your WP plugins to load :)
So you need to test without any plugins, then activate them one by one to see which one slows down your AJAX call.
I am creating a web application which will support more than 2000 users.
But 2000 concurrent connections will create problems in Apache.
So I googled and found an approach: create an HTTP request queue on the server and handle the requests one by one.
But how should I achieve that using Apache and PHP?
I suggest using NGINX or another event-driven server, as it will do what you want without reinventing the wheel by building an HTTP request queue. Of course, if you really want to scale properly, you may think about a load balancer with more than one web server behind it. 2000 concurrent connections is quite large, and really isn't necessary for 2000 users, as not all users will be sending requests simultaneously. The "connection" only lasts as long as it takes to serve up a page.
You can also use Apache Benchmark (http://httpd.apache.org/docs/2.2/programs/ab.html) to do some quick, preliminary load testing. I believe you'll find that you don't need nearly the resources you think you do.
No, I'm not trying to see how many buzzwords I can throw into a single question title.
I'm making REST requests through cURL in my PHP app to some web services. These requests need to be made fairly often since much of the application depends on this API. However, there is severe latency with the requests (2-5 seconds), which just makes my app look painfully slow.
While I'm halfway to a solution with a recommendation to cache these requests in Memcached, I'm still not satisfied with that kind of latency ever appearing within the application.
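(For reference, the Memcached layer itself is only a few lines; a rough sketch, assuming a local memcached instance and a made-up $endpointUrl:)

    <?php
    // Cache the web-service response so repeated lookups skip the slow cURL call
    $mc = new Memcached();
    $mc->addServer('127.0.0.1', 11211);

    $key  = 'api_' . md5($endpointUrl);   // $endpointUrl is hypothetical
    $body = $mc->get($key);
    if ($body === false) {                // cache miss: pay the 2-5 s once
        $ch = curl_init($endpointUrl);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $body = curl_exec($ch);
        curl_close($ch);
        $mc->set($key, $body, 60);        // keep for 60 s; tune to how fresh the data must be
    }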
So here was my thought: I can implement AJAX long-polling in the background so that the user never experiences the latency outright. The REST requests/Memcache lookups will all be done through AJAX at a set interval.
But this is all really new to me and I'm not sure if this is the best approach. And if I'm on the right track, I do know that PHP + Apache is not going to handle something like this well. But PHP is the only language I know. I'd ideally like to set up something like Tornado in Python, but I'm just not sure if I'm over-engineering right now or not.
Any thoughts here would be helpful and much appreciated.
This was a pretty quick turnaround, but I went back through and profiled my app by echoing out microtime() throughout the relevant processes. It turns out that I'm not parallelizing my cURL requests, and that's where I take the real hit. That takes approximately 2 seconds, which means very long delays while each cURL request is done in succession.
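For anyone hitting the same wall, the parallel version uses curl_multi; a minimal sketch with made-up endpoint URLs:

    <?php
    // Fire all REST requests at once instead of one after another
    $urls = ['https://api.example.com/a', 'https://api.example.com/b']; // hypothetical endpoints
    $mh = curl_multi_init();
    $handles = [];
    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch);
        $handles[] = $ch;
    }
    do {                                  // run all transfers concurrently
        curl_multi_exec($mh, $running);
        curl_multi_select($mh);
    } while ($running > 0);
    foreach ($handles as $ch) {
        $responses[] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
    // Total wall time is roughly the slowest single request, not the sum of all of them.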
I'm keeping myself busy working on an app that gets a feed from the Twitter search API, then needs to extract all the URLs from each status in the feed, and finally, since lots of the URLs are shortened, checks the response headers of each URL to get the real URL it leads to.
For a feed of 100 entries this process can take more than a minute! (I'm still working locally on my PC.)
I'm initializing the cURL resource once per feed and keeping it open until I've finished all the URL expansions. Though this helped a bit, I'm still worried that I'll be in trouble when going live.
Any ideas how to speed things up?
The issue is, as Asaph points out, that you're doing this in a single-threaded process, so all of the network latency is being serialized.
Does this all have to happen inside an HTTP request, or can you queue URLs somewhere and have some background process chew through them?
If you can do the latter, that's the way to go.
If you must do the former, you can do the same sort of thing.
Either way, you want to look at ways to chew through the requests in parallel. You could write a command-line PHP script that forks to accomplish this, though you might be better off writing such a beast in a language that supports threading, such as Ruby or Python.
You may be able to get significantly increased performance by making your application multithreaded. Multi-threading is not supported directly by PHP per se, but you may be able to launch several PHP processes, each working on a concurrent processing job.
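A rough sketch of that multi-process approach, using pcntl_fork from a command-line script (requires the pcntl extension; the URL list is made up):

    <?php
    // expand_urls.php – CLI worker that forks a few children, each resolving
    // its slice of shortened URLs by following the redirect chain.
    $urls    = ['http://bit.ly/xxxx', 'http://t.co/yyyy'];  // hypothetical list
    $workers = 4;
    foreach (array_chunk($urls, max(1, (int) ceil(count($urls) / $workers))) as $chunk) {
        if (pcntl_fork() === 0) {                            // child process
            foreach ($chunk as $url) {
                $ch = curl_init($url);
                curl_setopt_array($ch, [
                    CURLOPT_NOBODY         => true,          // headers only
                    CURLOPT_FOLLOWLOCATION => true,          // chase the redirects
                    CURLOPT_RETURNTRANSFER => true,
                ]);
                curl_exec($ch);
                echo curl_getinfo($ch, CURLINFO_EFFECTIVE_URL), PHP_EOL;  // the real URL
                curl_close($ch);
            }
            exit(0);
        }
    }
    while (pcntl_wait($status) > 0) {   // wait for every child to finish
    }

The curl_multi approach shown earlier achieves the same overlap inside a single process and is usually simpler than forking.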