is there a recommended limit to using parallel CURL operations - php

suppose I use CURL MULTI to perform parallel operations (upload/download etc)
Is there a recommended maximum limit of parallel operations that I can perform or can I set it so that it runs, for instance, 100 operations in parallel? What about 1000?
What sort of factors should I consider when I specify the number of concurrent operations using CURL?

For 1,000 cURL requests in parallel you will need good bandwidth. The recommended number of parallel requests is best found by experimenting: add a function that checks for timed-out connections and, when they occur, decrease (or increase) the limit, or adjust the timeout period.
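For illustration only (this is not the answerer's code; the URLs and limits are placeholders): a minimal curl_multi sketch that keeps a fixed number of requests in flight and counts timed-out transfers, so you can lower the limit or raise the timeout between runs if the timeout count climbs.

    <?php
    // Minimal sketch: rolling curl_multi window with a configurable cap.
    // Many timed-out results suggest lowering $maxParallel or raising $timeoutSec.
    function fetch_all(array $urls, int $maxParallel = 10, int $timeoutSec = 10): array
    {
        $mh       = curl_multi_init();
        $queue    = $urls;
        $inFlight = 0;
        $stats    = ['ok' => 0, 'timed_out' => 0, 'failed' => 0];

        $add = function () use (&$queue, &$inFlight, $mh, $timeoutSec) {
            if (!$queue) {
                return;
            }
            $ch = curl_init(array_shift($queue));
            curl_setopt_array($ch, [
                CURLOPT_RETURNTRANSFER => true,
                CURLOPT_CONNECTTIMEOUT => 5,
                CURLOPT_TIMEOUT        => $timeoutSec,
            ]);
            curl_multi_add_handle($mh, $ch);
            $inFlight++;
        };

        for ($i = 0; $i < $maxParallel; $i++) {   // prime the window
            $add();
        }

        do {
            curl_multi_exec($mh, $running);
            if ($running) {
                curl_multi_select($mh, 1.0);      // wait for activity instead of busy-looping
            }
            while ($info = curl_multi_info_read($mh)) {
                if ($info['result'] === CURLE_OK) {
                    $stats['ok']++;
                } elseif ($info['result'] === CURLE_OPERATION_TIMEOUTED) { // libcurl code 28
                    $stats['timed_out']++;
                } else {
                    $stats['failed']++;
                }
                curl_multi_remove_handle($mh, $info['handle']);
                curl_close($info['handle']);
                $inFlight--;
                $add();                           // keep the window full
            }
        } while ($inFlight > 0);

        curl_multi_close($mh);
        return $stats;
    }

    // Example: 100 placeholder requests, 10 at a time.
    print_r(fetch_all(array_fill(0, 100, 'https://example.com/'), 10, 10));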

The answer to this question depends on your network capacity, the capacity of your NIC, and what you are trying to optimize.
If you are trying to minimize the latency of a single request, the answer is quite possibly 1, or something close to 1.
If you are trying to maximize throughput, then you should keep increasing the number of parallel operations until your throughput peaks and then either plateaus or falls. That will be the sweet spot.

Most browsers will open at most 4 connections at a time to a single server. That's probably a good guideline.
If you're downloading from different servers, the only problem may be that you'll use up lots of local TCP ports, and this might interfere with other applications on the system. It won't get in the way of connections TO your server, since incoming connections arrive on the server's listening port 80 (or 443 for SSL) rather than consuming ephemeral ports. But if you have other applications using cURL, or your machine sends mail, they might not be able to get an outgoing port if you use them all up. There are typically 15-30K ephemeral ports, so you could probably get away with using 1,000 of them.
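If you want to enforce a cap like that at the libcurl level rather than in your own loop, the multi handle can do it for you. A small sketch, assuming PHP >= 7.0.7 built against libcurl >= 7.30 (where these CURLMOPT_* options exist); the numbers are illustrative:

    <?php
    $mh = curl_multi_init();
    // Overall cap on simultaneous connections for this multi handle.
    curl_multi_setopt($mh, CURLMOPT_MAX_TOTAL_CONNECTIONS, 100);
    // Per-host cap, roughly mimicking the browser behaviour described above.
    curl_multi_setopt($mh, CURLMOPT_MAX_HOST_CONNECTIONS, 4);
    // Easy handles added beyond these caps are held pending by libcurl and
    // started as earlier transfers finish, so you can still add a large batch.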

Related

AWS EC2: Internal server error when huge number of api calls at the same time from jmeter

We have made the backend of a mobile app in Laravel and MySQL. The application is hosted on AWS EC2 and uses an RDS MySQL database.
We are stress testing the app using JMeter. When we send up to 1000 API requests from JMeter, it seems to work fine. However, when we send more than (roughly) 1000 requests in parallel, JMeter starts getting internal server errors (500) as the response for many requests. The percentage of 500 errors increases as we increase the number of requests.
Normally, we would expect that if we increase the number of requests, they should be queued and the responses should slow down if the server is out of resources. We also monitored the resources on the server and they never reached even 50% of what is available.
Is there any timeout setting, or any other setting, that I could tweak so that we don't get internal server errors before reaching 80% of resource usage?
Regards
Syed
500 is the externally visible symptom of some sort of failure in the server delivering your API. You should look at the error log of that server to see details of the failure.
If you are using php scripts to deliver the API, your mysql (rds) server may be running out of connections. Here's how that might work.
A php-driven web server under heavy load runs a lot of php instances. Each php instance opens up one or more connections to the mysql server. When (number of php instances) × (connections per instance) grows too large, the mysql server starts refusing further connections.
Here's what you need to do: restrict the number of php instances your web server is allowed to use at a time. When you restrict that number, incoming requests will queue up (in the TCP connect queue of your OS's communication stack). Then, when an instance is available to serve each item in the queue it will do so.
Apache has a MaxRequestWorkers parameter, with an extremely large default value of 256. Try setting it much lower, for example to 32, and see whether your problem changes.
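For reference, a sketch of where that lives in an Apache 2.4 prefork (mod_php style) configuration; the values are illustrative, not a recommendation for this specific server:

    <IfModule mpm_prefork_module>
        StartServers              5
        MinSpareServers           5
        MaxSpareServers          10
        MaxRequestWorkers        32
        MaxConnectionsPerChild 1000
    </IfModule>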
If you can shrink the number of request workers, you paradoxically may improve high-load performance. Serializing many requests often generates better throughput than trying to do many of them in parallel.
The same goes for the number of active connections to your MySQL server. It obviously depends on the nature of the queries you use, but generally speaking fewer concurrent queries improves performance. So, you won't solve a real-world problem by adding MySQL connections.
You should be aware that the kind of load imposed by server-hammering tools like jmeter is not representative of real world load. 1000 simultaneous jmeter operations without failure is a very good result. If your load-testing setup is robust and powerful, you will always be able to bring your server system to its knees. So, deciding when to stop is an important part of a load testing plan. If this were my system I would stop at 1000 for now.
For your app to be robust in the field, you probably should program it to respond to 500 status by waiting a random amount of time and trying again.
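A minimal sketch of that retry idea on the client side (the URL, attempt count and backoff window are placeholders):

    <?php
    // Retry on 5xx with a random delay so many clients don't retry in lockstep.
    function call_api_with_retry(string $url, int $maxAttempts = 5): ?string
    {
        for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
            $ch = curl_init($url);
            curl_setopt_array($ch, [
                CURLOPT_RETURNTRANSFER => true,
                CURLOPT_TIMEOUT        => 10,
            ]);
            $body = curl_exec($ch);
            $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            curl_close($ch);

            if ($body !== false && $code < 500) {
                return $body;                       // success, or a non-retryable 4xx
            }
            usleep(random_int(200000, 2000000));    // wait 0.2 - 2 seconds, then retry
        }
        return null;                                // give up after $maxAttempts
    }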

Apache server slow when high HTTP API call

I am running an HTTP API which should be called more than 30,000 times per minute simultaneously.
Currently I can call it 1,200 times per minute. If I call it 1,200 times per minute, all the requests are completed and get a response immediately.
But if I call it 12,000 times per minute simultaneously, it takes 10 minutes to complete all the requests. And during those 10 minutes, I cannot browse any webpage on the server. It is very slow.
I am running CentOS 7
Server Specification
CPU: Intel® Xeon® E5-1650 v3 Hexa-Core Haswell,
RAM: 256 GB DDR4 ECC,
Hard Drive: 2 x 480 GB SSD (Software RAID 1),
Connection: 1 Gbit/s
API: a simple PHP script that echoes the timestamp:
echo time();
I checked with the top command; there is no load on the server.
please help me on it
Thanks
Sounds like a congestion problem.
It doesn't matter how quick your script/page handling is: if the next request comes in before the previous one has finished executing,
it is going to use resources (CPU, RAM, disk, network traffic and connections)
and make everything running in parallel with it slower.
There are multiple things you could do, but you need to figure out what exactly the problem is for your setup and decide if the measure produces the desired result.
If the core problem is that resources get hogged by parallel processes, you could lower the connection limits so more connections go into wait mode, which keeps more resources available for actually handing out a page instead of congesting everything even more.
Take a look at this:
http://oxpedia.org/wiki/index.php?title=Tune_apache2_for_more_concurrent_connections
If the server accepts connections quicker than it can handle them, you are going to have a problem whichever setting you change. It should start dropping connections at some point. If you cram French baguettes down its throat quicker than it can open its mouth, it is going to suffocate either way.
If the system gets overwhelmed on the network side of things (transfer speed limit, the maximum possible number of concurrent connections for the OS, etc.), then you should consider using a load balancer. Only after the load balancer confirms the server has the capacity to actually take care of the page request will it send the user on.
This usually works well when you do any kind of processing which slows down page loading (server side code execution, large volumes of data etc).
Optimise performance
There are many ways to execute PHP code on a webserver, and I assume you use Apache. I am no expert, but there are modes like CGI and FastCGI, for example, which can greatly enhance execution speed, and tweaking the settings connected to these can also show you what is happening. It could, for example, be that you use too small a number of PHP workers to handle that number of concurrent connections.
Have a look at something like this for example
http://blog.layershift.com/which-php-mode-apache-vs-cgi-vs-fastcgi/
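If you do end up on FastCGI via PHP-FPM (the usual setup), these are the pool directives that control how many PHP workers can run at once. A sketch with illustrative values only, assuming PHP-FPM rather than mod_php:

    ; pool config, e.g. /etc/php-fpm.d/www.conf (path varies by distribution)
    pm = dynamic
    pm.max_children = 50        ; hard cap on concurrent PHP worker processes
    pm.start_servers = 10
    pm.min_spare_servers = 5
    pm.max_spare_servers = 15
    pm.max_requests = 500       ; recycle workers periodically to limit memory creep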
There is no 'best fit for all' solution here. To fix it, you need to figure out what the bottleneck for the server is, and act accordingly.
12000 Calls per minute == 200 calls a second.
You could limit your test case to a multiple of those 200 and increase/decrease it while changing settings. Your goal is to dish that number of requests out in the shortest amount of time possible, thus ensuring the congestion never occurs.
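One way to generate that kind of load from the command line is ab from apache2-utils; a sketch with a placeholder URL (12,000 requests total, 200 concurrent, keep-alive on):

    ab -n 12000 -c 200 -k http://your-server.example/api.php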
That said: consequences.
When you implement changes to optimise for the maximum number of page loads you want to achieve, you are inadvertently going to introduce other conditions. For example, if maximum RAM usage by Apache were the problem, then upping that limit will improve performance, but it heightens the chance that the OS runs out of memory when other processes also want to claim more memory.
Adding a load balancer adds another possible layer of failure and possible slowdowns. Yes, you prevent congestion, but is it worth the slowdown caused by the rerouting?
Upping performance will increase the load on the system, making it possible to accept more concurrent connections. So somewhere along the line a different bottleneck will pop up. High traffic on a process could always end in that process crashing. Apache is a very well-built web server, so it should in theory protect you against that problem; however, tweaking settings wrongly could still cause crashes.
So experiment with care and test before you use it live.

Apache connections limit

My hosting says that the Apache connection limit is 30. I don't know whether that's enough or not for an average site with 100 visitors per day. I want to know what I should adapt to this limit while coding the site. Mostly I'll use PHP sessions and a little AJAX. I want to know if there are any precautions and recommended practices (if any) to avoid hitting this limit.
Thank you.
Since you will be using AJAX, I can't stress this enough... Do not long-poll with Apache! It will hold your connections open and effectively perform a DoS (denial of service) on your own site.
Other than that, minimize the time it takes between when Apache receives a request to when it outputs and closes. The big blinking neon sign here is to use caching. Whether it is file based caching or something like Memcached or APC, this can drastically reduce the time Apache holds a connection open.
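A minimal sketch of that kind of caching with APCu (assumes the APCu extension is installed; the key, TTL and the slow_lookup() stand-in are placeholders for whatever your page really does):

    <?php
    // Serve repeated requests from shared memory instead of redoing the work,
    // so Apache can close each connection sooner.
    function slow_lookup(): string
    {
        usleep(200000);              // stand-in for a slow query or remote API call
        return date('c');
    }

    function cached_response(string $key, int $ttl = 60): string
    {
        $hit  = false;
        $data = apcu_fetch($key, $hit);
        if ($hit) {
            return $data;            // cache hit: no expensive work, fast response
        }
        $data = slow_lookup();
        apcu_store($key, $data, $ttl);
        return $data;
    }

    echo cached_response('homepage-fragment');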
Taken by itself, the statement "apache connections limit is 30" doesn't actually mean much -- Apache configuration can be fairly involved and there are a lot of numbers/parameters. But if we assume that what this really means is 'MaxClients is 30', then what you need to know is that you have a limit of 30 simultaneous connections. However, connection 31 isn't rejected -- it should just be queued until there's a thread available to respond to the request. There's a lot of specifics according to the config, etc, but I doubt you need to worry much.
This means 30 concurrent connections are possible; if you have 100 visitors per day, it's very unlikely that about a third of them will show up at the same time.
As your site grows, I'd recommend another server/host.
But as long as you don't make long-running persistent connections or highly frequent AJAX calls all the time, this should be enough.
The connection limit most probably means simultaneous requests. So if you're only at the development stage, that is fine. Once it has launched, though, that is a different story. If your expected traffic is only about 100 visitors a day, then you will most probably be fine. I would, however, recommend changing your VPS host if it is anything over that: if the server is turning away visitors, that is not good for business.
But in all honesty you're better off developing locally for now to save your bandwidth for actual visitors, as from your description you don't seem to be using anything that requires a live site.

What are the limits of PHP's multi curl functions?

Are there any limits on the maximum number of concurrent connections a multi curl session can make?
I am using it to process batches of calls that I need to make to an API service. I just want to be careful that this does not affect the rest of my app.
A few queries: do curl sessions count against the number of connections the Apache server can serve? Is multi curl a RAM- or CPU-hungry operation? I'm not concerned about bandwidth because I have lots of it, a mighty fast host, and only small amounts of data are being sent and received per request.
And I imagine it depends on server hardware / config...
But I can't seem to find what limits the number of curl sessions in the documentation.
PHP doesn't impose any limitations on the number of concurrent curl requests you can make. You might hit the maximum execution time or the memory limit though. It's also possible that your host limits the amount of concurrent connections you're allowed to make.
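The two limits mentioned are easy to check (and, if your host permits it, raise) from the script that runs the batch; a small sketch with illustrative values:

    <?php
    echo 'max_execution_time: ', ini_get('max_execution_time'), PHP_EOL;
    echo 'memory_limit:       ', ini_get('memory_limit'), PHP_EOL;

    set_time_limit(0);                 // remove the time limit for a long batch job
    ini_set('memory_limit', '512M');   // illustrative; shared hosts may ignore this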

How many concurrent users can an Apache + PHP solution support

How many concurrent users can an Apache + PHP solution support? Please don't be bogged down by MySQL constraints – we are using the LAMP stack without the M (no MySQL), as we are storing around 2-8 PB at the back end.
Why not try it out:
ab - Apache HTTP server benchmarking tool
As an alternative Siege comes to mind.
Also see the answers to How to test a site rigorously
How many concurrent users
OK, there's your first issue: HTTP is stateless, so your webserver can support an infinite number of users - as long as they don't actually submit any requests to the webserver. Really the limiting factor is the number of concurrent connections to the webserver. This is going to be determined by:
1) the frequency at which users make requests
2) the length of time it takes to service the request
3) the keepAlive duration
The first 2 will vary enormously from application to application, while the latter is something you can control - using keepalives will improve performance at the browser at the expense of hogging memory (and therefore slowing down) at the server. Using a keepalive of more than 2 seconds is probably a waste of time.
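For reference, the keepalive knobs in an Apache 2.4 configuration look like this (a sketch, values illustrative):

    KeepAlive            On
    KeepAliveTimeout     2
    MaxKeepAliveRequests 100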
There are good books available on Apache performance tuning which will allow you to optimize the webserver for your application.
Of course, if you have a common data substrate, then there's nothing to stop you adding more webservers on top of the storage (unlimited horizontal scalability) - so it's the storage substrate which ultimately limits the capacity / performance of the system (until you look at tuning the code and storage). And you get the added benefit of improved resilience.
Certainly a fairly low-end PC (2 GHz CPU, 2 GB RAM) should comfortably handle upwards of 500 concurrent connections. Particularly if you're running a database-centric application, you'll also get more benefit out of adding servers than out of upgrading the CPU/RAM.
HTH
C.
