REST web service: handling a burst of requests at one moment - PHP

I'm not sure I worded the title of my question correctly, so feel free to correct me. I have a REST web service written in PHP (it retrieves some data from a database and returns some XML data). I have tested it with JMeter and concluded that it responds quickly (at least 50 requests per second, each answered in under 5 seconds). But in my request log I found that sometimes I receive a burst of requests (roughly 10-20 requests at the same moment) and my service is unable to process all of them. What can I do? How can I handle them? How can I build some kind of queue and respond to each of them separately? Thanks.
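Something like the sketch below is the direction I am thinking of: store each incoming request as a job, answer right away with a job id, and let a separate worker script process the jobs one by one. This is only a hypothetical sketch - the job_queue table and the worker file are made up, not something I have now.

    <?php
    // service.php - hypothetical sketch: queue the request instead of processing it inline,
    // so a burst of 10-20 requests does not overload the service.
    $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

    $stmt = $pdo->prepare('INSERT INTO job_queue (payload, status, created_at) VALUES (?, ?, NOW())');
    $stmt->execute([file_get_contents('php://input'), 'pending']);

    header('Content-Type: text/xml');
    echo '<response><jobId>' . $pdo->lastInsertId() . '</jobId></response>';

    <?php
    // worker.php - run separately (cron or a CLI loop) and process pending jobs one at a time.
    $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

    while ($job = $pdo->query("SELECT id, payload FROM job_queue WHERE status = 'pending' ORDER BY id LIMIT 1")
                      ->fetch(PDO::FETCH_ASSOC)) {
        // ...build the XML answer for this job and store it for the client to fetch...
        $pdo->prepare("UPDATE job_queue SET status = 'done' WHERE id = ?")
            ->execute([$job['id']]);
    }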

Related

Looking for best server setup solution for my web application

I have 3 CodeIgniter-based application instances on two separate servers.
Server 1.
The first instance is the application, the second instance is a REST API; both use the same database. (I know there is no benefit to having two instances on the same machine other than cleanliness, and that is why I have it this way.)
Server 2.
This server holds only a REST API with a whole bunch of PHP data-processing functions. I call this server the worker, because that is all it does.
This server works as an endpoint for the many API services I am connecting with.
So the first thing this server does is receive requests from the application; sometimes it processes those requests before anything else.
Then it sends requests to the API service. At that point the process is complete and this session is over.
A short time later the API service responds with results, which this server takes and processes, and then it sends the result to the application.
The application is at times heavy on the number of very simple SQL queries, for the most part inserts/updates on a single table. The number of requests sent is also kept to a minimum, because for the most part I send data as many requests bundled into one. I call this a bulk request.
What is very heavy is the number of responses I get: I can get up to 1000 responses to one request within a few seconds (I can't minimize that, because I need every single one), and each response I get is also followed by another two identical responses just to make sure I got it. I treat those as duplicates as soon as I can and stop that process.
Then I process every response with PHP (not too heavy, just matching result arrays) and post it to my REST API on the application server to update the application tables.
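For context, the duplicate check I mean is roughly this kind of thing (a simplified sketch - the cache key scheme and the response id field are made up for illustration):

    <?php
    // Sketch: drop the two identical follow-up copies of a response using APCu
    // as a short-lived "already seen" cache.
    $response = json_decode(file_get_contents('php://input'), true);

    function is_duplicate_response($responseId)
    {
        // apcu_add() returns false if the key already exists, i.e. we saw this response before.
        return !apcu_add('seen_response_' . $responseId, 1, 300); // remember it for 5 minutes
    }

    if (is_duplicate_response($response['id'])) {
        exit; // duplicate - stop this process right away
    }
    // ...otherwise match the result arrays and post to the REST API on the application server...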
Now when I run, say, 1 request that returns 1000 responses, the application processes the data fine with correct results, but the server is pretty much inaccessible to other users during that time.
Everything is running on a LAMP stack: Ubuntu 16.04 with MySQL and Apache.
The framework is the latest CodeIgniter.
Currently my setup is...
...for the application server
2 vCPUs
4GB RAM
...for the worker API server
1 vCPU
1GB RAM
I know the server setup is very weak and definitely bottlenecks, but this was just for the development stage.
Now I am moving into production and would like to hear your opinions, if you have any, on how best to approach this.
I am a programmer first and a server administrator second.
So I was debating switching to NGINX; I think I will definitely go with PHP-FPM, and maybe MariaDB, but I have read that thread management is important there. This app will not run heavy all the time, probably 50/50, so I may not be able to tune it optimally for every situation anyway, and may end up with no better performance in the end.
Then I will probably have to add more servers and set up load balancing, and also high availability.
Not sure about all this.
I don't think that just upgrading the servers to the maximum will help, though. I can go all the way up to 64 GB RAM and 32 vCPUs per server.
Can I hear your opinions please?
Maybe share some experience?
Links to resources if you have some good ones?
Thank you very much. I hope you can help me.
None of your questions matter. Well, that is an exaggeration. Machines today are not different enough to worry about starting with the "best" on day one. Instead, implement something, run with it for a while, then see where your bottlenecks are in order to decide what to do next.
Probably you won't have any bottlenecks for a long time.

Accept concurrent or parallel requests in PHP

I'm currently working on a web service which should typically handle 100 requests per minute and process all of them in parallel. As far as I know, a $_GET request is accepted and processed only one at a time, even if the client sends multiple requests at the same instant. Until the first request completes, the other requests cannot be executed.
For example, suppose the client sends 10 requests to the web service at the same instant, and consider that each request takes 10 seconds to execute; that means the 10 requests will take 100 seconds to execute.
My question is: can't we reduce the response time? I mean, if I execute all 10 requests in parallel, then all of them will execute within 10 seconds. I know this kind of thing can be achieved in Java, but I have never created a web service in PHP, so please can anyone tell me how to achieve this in PHP?
Is there a way to handle requests concurrently or in parallel in PHP? I searched for a lot of material on this on the internet, but unfortunately I didn't find appropriate results.
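For what it's worth, from the client side I can already fire several requests at once with curl_multi (rough sketch below, with a placeholder URL); my question is really about how the server side can process them in parallel:

    <?php
    // Client-side sketch: send 10 requests at the same time with curl_multi.
    // http://example.com/service.php is a placeholder for my web service.
    $mh = curl_multi_init();
    $handles = [];

    for ($i = 0; $i < 10; $i++) {
        $ch = curl_init('http://example.com/service.php?id=' . $i);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch);
        $handles[] = $ch;
    }

    // Run all transfers until every one of them has finished.
    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh);
    } while ($running > 0);

    foreach ($handles as $ch) {
        echo curl_multi_getcontent($ch), PHP_EOL;
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);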
Thanks for replying to my post. The number of concurrent requests will change once the web service successfully serves 100 requests per minute. My first target is to handle 100 requests; if that works fine, then my next target will be 1000 per minute.
I tried to install pthreads on my hosting space (on GoDaddy) using pcntl, but unfortunately that installation failed. Also, I did not find proper documentation for pthreads. Is it possible to install pthreads on my local WAMP? If yes, could you share the steps with me? If I successfully install pthreads on local WAMP, then I can expose my local IP over the internet so that the web service can be accessed over the internet.

Ridiculously slow writes to Amazon DynamoDB (PHP API)

This question has already been posted on the AWS forums, but it remains unanswered: https://forums.aws.amazon.com/thread.jspa?threadID=94589
I'm trying to perform an initial upload of a long list of short items (about 120 million of them), to retrieve them later by unique key, and it seems like a perfect case for DynamoDB.
However, my current write speed is very slow (roughly 8-9 seconds per 100 writes), which makes the initial upload almost impossible (it'd take about 3 months at the current pace).
I have read AWS forums looking for an answer and already tried the following things:
I switched from single put_item calls to batch writes of 25 items (the recommended maximum batch write size), and each of my items is smaller than 1 KB (which is also recommended). It is very typical for even a batch of 25 of my items to be under 1 KB in total as well, but that is not guaranteed (and shouldn't matter anyway, as I understand only the individual item size is important for DynamoDB).
I use the recently introduced EU region (I'm in the UK), specifying its endpoint directly by calling set_region('dynamodb.eu-west-1.amazonaws.com'), as there is apparently no other way to do that in the PHP API. The AWS console shows that the table is in the proper region, so that works.
I have disabled SSL by calling disable_ssl() (gaining 1 second per 100 records).
Still, a test set of 100 items (4 batch write calls of 25 items each) never takes less than 8 seconds to index. Every batch write request takes about 2 seconds, so it's not like the first one is instant and subsequent requests are then slow.
My table's provisioned throughput is 100 write and 100 read units, which should be enough so far - at 100 items per 8 seconds I'm only achieving about 12 writes per second anyway (I tried higher limits as well just in case, with no effect).
I also know that there is some expense in request serialisation, so I could probably use a queue to "accumulate" my requests, but does that really matter that much for batch writes? And I don't think that is the problem, because even a single request takes too long.
I found that some people modify the cURL headers (the "Expect:" header in particular) in the API to speed the requests up, but I don't think that is a proper way, and the API has also been updated since that advice was posted.
The server my application is running on is fine as well - I've read that sometimes the CPU load goes through the roof, but in my case everything is fine, it's just the network request that takes too long.
I'm stuck now - is there anything else I can try? Please feel free to ask for more information if I haven't provided enough.
There are other recent threads, apparently on the same problem, here (no answer so far though).
This service is supposed to be ultra-fast, so I'm really puzzled by that problem in the very beginning.
If you're uploading from your local machine, the speed will be impacted by all sorts of traffic/firewall issues between you and the servers. If I call DynamoDB, each request takes 0.3 of a second simply because of the time it takes to travel to/from Australia.
My suggestion would be to create yourself an EC2 instance (server) with PHP, upload the script and all files to the EC2 server as a block, and then do the dump from there. The EC2 server should have blistering speed to the DynamoDB servers.
If you're not confident about setting up EC2 with LAMP yourself, they have a new service, "Elastic Beanstalk", that can do it all for you. When you've completed the upload, simply burn the server - and hopefully you can do all that within their "free tier" pricing structure :)
It doesn't solve the long-term connectivity issues, but it will cut down the three-month upload!
I would try a multithreaded upload to increase throughput. Maybe add threads one at a time and see if the throughput increases linearly. As a test you can just run two of your current loaders at the same time and see if they both go at the speed you are observing now.
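If your loader is a PHP CLI script on Linux, one way to sketch that is pcntl_fork; the upload_chunk() function below is just a placeholder for whatever your current loader does with its share of the items:

    <?php
    // Sketch: fork a few parallel loader processes (requires the pcntl extension, CLI only).
    function upload_chunk($workerIndex, $workerCount)
    {
        // ...placeholder: upload every $workerCount-th batch, starting at $workerIndex...
    }

    $workers = 4;
    for ($i = 0; $i < $workers; $i++) {
        $pid = pcntl_fork();
        if ($pid === 0) {               // child process
            upload_chunk($i, $workers);
            exit(0);
        }
    }
    while (pcntl_waitpid(0, $status) !== -1) {
        // wait for all children to finish
    }

If throughput scales roughly linearly with the number of workers, that would suggest per-request latency, not DynamoDB itself, is the limit.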
I had good success with the PHP SDK by using the batch method on the AmazonDynamoDB class. I was able to run about 50 items per second from an EC2 instance. The method works by queuing up requests until you call the send method, at which point it executes multiple simultaneous requests using cURL. Here are some good references:
http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/LoadData_PHP.html
http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/LowLevelPHPItemOperationsExample.html
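To illustrate that queue-then-send idea in plain PHP, here is a rough sketch of splitting the items into DynamoDB-sized batches of 25 and retrying unprocessed items; send_batch() is only a stand-in for the SDK's batch write call described above, not a real SDK function:

    <?php
    // Sketch: chunk the full item list into batches of 25 and send each one,
    // retrying anything DynamoDB reports back as unprocessed.
    function send_batch(array $items)
    {
        // ...placeholder: queue the 25 put requests on the SDK batch and call send()...
        return []; // return any unprocessed items
    }

    $allItems = [ /* ...the full list of items to upload... */ ];

    foreach (array_chunk($allItems, 25) as $batch) {
        $unprocessed = send_batch($batch);
        while (!empty($unprocessed)) {
            $unprocessed = send_batch($unprocessed); // back off and retry as needed
        }
    }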
I think you can also use Hive SQL on Elastic MapReduce (EMR) to bulk load data from a CSV file. EMR can use multiple machines to spread the workload and achieve high concurrency.

Stress testing a webpage

I have a web-based phone dialer which I need to stress test. It requires human action to terminate a call and dial the next call. I need to simulate a situation in which 100 users use the service concurrently. I am not allowed to modify the JavaScript which dials the next number. Also, there is a login page, after which the users can reach the dial pad.
Any idea how do I do this?
You can use Apache JMeter to stress test your web app. First set up JMeter as a proxy to record the HTTP transactions, then, using those transactions as a template, set it up to send 100 concurrent requests.
Maybe xdotool could be a good starting point for simulating the human interaction. But how to handle 100 concurrent users, I don't know yet. Hope this helps.

General techniques to get faster web service response?

I'm using Amazon Product Advertising API to handle my full text search. The problem is that the response is taking up to 3-4 seconds (which is about half of my total page load time of 6-8 seconds). Are there any general techniques I could do to improve response time? I'm already receiving the response in compressed format.
Ultimately, I want to be able to display the search engine results page to the user as quickly as possible.
I think you're asking about the concept of Web 2.0. This is where, in your case, you serve the page immediately and then use an AJAX request to populate it with the content several seconds later - all the while the user sees a spinning animated GIF while waiting for your data payload.
You may want to read further about SOA (service-oriented architecture) - this is just one of dozens of programming paradigms that fit with the whole Web 2.0 theme.
Communicating with external web services is nearly always slow, usually unacceptably so. In this case, the only piece you'll really be able to optimize is the connection overhead. If you were to keep a daemon running locally that maintained a keepalive connection to the Amazon web service, then fired requests through that, you could avoid the connection overhead and improve response times.
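As a rough illustration of cutting that connection overhead from PHP itself: reusing a single cURL handle lets cURL keep the underlying connection alive between requests instead of reconnecting every time (the URLs below are placeholders):

    <?php
    // Sketch: one cURL handle reused for several requests, so the TCP/TLS
    // connection can be kept alive rather than re-established each time.
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

    function fetch($ch, $url)
    {
        curl_setopt($ch, CURLOPT_URL, $url);
        return curl_exec($ch); // same handle, so the connection is reused when possible
    }

    $first  = fetch($ch, 'https://webservices.example.com/search?q=first');
    $second = fetch($ch, 'https://webservices.example.com/search?q=second');

    curl_close($ch);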
From a UX perspective, you're probably better off executing the search via an AJAX request to the server. You can display a spinner to the user and then populate the page when the request returns. This would probably make it feel a bit more responsive, since they wouldn't be waiting for the whole page to build.
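A minimal sketch of the PHP side of that AJAX approach, assuming a hypothetical search.php endpoint; amazon_product_search() below is only a placeholder for the slow Product Advertising API call, not a real library function:

    <?php
    // search.php - the page loads immediately and requests this endpoint
    // asynchronously; it returns JSON for the front end to render.
    function amazon_product_search($query)
    {
        // ...placeholder: perform the signed API request here and parse the response...
        return [];
    }

    header('Content-Type: application/json');
    $query = isset($_GET['q']) ? $_GET['q'] : '';
    echo json_encode(amazon_product_search($query));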
