Accept concurrent or parallel requests in PHP

I'm currently working on a web service that should typically handle 100 requests per minute and process them all in parallel. As far as I know, a PHP script handling a $_GET request processes only one request at a time, even if the client sends multiple requests at the same instant: until the first request completes, the other requests cannot be executed.
For example, suppose the client sends 10 requests to the web service at the same moment and each request takes 10 seconds to execute; then the 10 requests will take 100 seconds in total.
My question is: can't we reduce the response time? If I execute all 10 requests in parallel, they will all finish within 10 seconds. I know this kind of thing can be achieved in Java, but I have never created a web service in PHP, so can anyone tell me how to achieve it in PHP?
Is there a way to handle requests concurrently or in parallel in PHP? I searched for this extensively on the internet but unfortunately didn't find any appropriate results.
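For context, a minimal test sketch, assuming a standard Apache or PHP-FPM setup: those servers hand each incoming request to its own worker process, so ten simultaneous requests to a script like the one below should complete after roughly 10 seconds rather than 100, provided enough workers are configured (the file name is hypothetical):

```php
<?php
// sleep.php — hypothetical test script for checking whether the stack serves
// requests concurrently. With a typical Apache or PHP-FPM setup, each request
// is handled by its own worker process, so ten simultaneous requests to this
// script should all return after about 10 seconds, not 100 — provided enough
// workers are configured.

session_start();

// PHP's default file-based session handler locks the session file per client,
// which serializes requests coming from the same browser session. Releasing
// the lock early keeps concurrent requests from one client from queuing.
session_write_close();

sleep(10);                       // simulate 10 seconds of work
echo 'done at ' . date('H:i:s');
```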

Thanks for replying to my post. The number of concurrent requests will change once the web service successfully serves 100 requests per minute. My first target is to handle 100 requests; if this works fine, my next target will be 1000 per minute.
I tried to install pthreads on my hosting space (on GoDaddy) using pcntl, but unfortunately the installation failed. Also, I did not find proper documentation for pthreads. Is it possible to install pthreads on my local WAMP stack? If yes, could you share the steps with me? If I can successfully install pthreads on my local WAMP, I can expose my local IP over the internet so that the web service can be accessed externally.
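For reference, a minimal pthreads sketch. It assumes a thread-safe (ZTS) PHP build with the pthreads PECL extension, which shared hosts such as GoDaddy usually do not provide, and it is meant for CLI scripts rather than code served by Apache:

```php
<?php
// run_jobs.php — hypothetical CLI script; requires a ZTS PHP build with the
// pthreads PECL extension ("pecl install pthreads"), which most shared hosts
// do not offer.

class Job extends Thread
{
    public $id;
    public $result;

    public function __construct($id)
    {
        $this->id = $id;
    }

    public function run()
    {
        sleep(10);                               // simulate 10 seconds of work
        $this->result = "job {$this->id} finished";
    }
}

$jobs = [];
for ($i = 0; $i < 10; $i++) {
    $jobs[$i] = new Job($i);
    $jobs[$i]->start();                          // each job runs in its own thread
}

foreach ($jobs as $job) {
    $job->join();                                // wait for all threads (~10 s total)
    echo $job->result, PHP_EOL;
}
```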

Related

Large number of cURL requests [PHP/Python]

I have a PHP API (using Slim) that uses different external microservices.
My Apache is configured to handle a huge number of simultaneous connections (without my cURL calls I can handle 50k simultaneous connections without any effort). So is my microservice (in Python with Gunicorn: 25 workers with 25 threads each).
However, I am having an issue when cURLing my Python API from my Apache API.
When I have high traffic spikes (and therefore a high number of cURL connections from my PHP to my Python), my Apache starts hanging.
It seems as though the cURL calls are queued.
My first thought was that my Python script executes too slowly (500 ms). But I then noticed that I could launch the Python script manually without any delay, even while traffic was spiking.
This is why I believe the problem comes from the actual cURL calls. Is it possible that they get queued when too many users are cURLing at the same time, or when a cURL call takes too long to respond?
The consequence of all this is that Apache takes time to respond and slows down. Sometimes Apache even returns 503s.
FYI: the two APIs are in different Docker containers but on the same server. Both containers have their SOMAXCONN set to a high number.
If anyone has any idea, please help.
By cURLs or cURLing I mean sending cURL requests to my Python API.
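One possibility worth ruling out (a hedged sketch, not a diagnosis; the URL and timeout values are placeholders): if the cURL calls carry no timeouts, a slow upstream response will pin an Apache worker for as long as the call hangs, and workers pile up during spikes. Explicit connect and total timeouts cap how long each worker can be held:

```php
<?php
// Hedged sketch: cap how long a PHP worker can be tied up waiting on the
// Python API. The URL and timeout values below are placeholders.

$ch = curl_init('http://python-api:8000/endpoint');   // hypothetical container hostname

curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_CONNECTTIMEOUT => 2,   // give up if the TCP connection isn't established within 2 s
    CURLOPT_TIMEOUT        => 5,   // give up if the whole request takes longer than 5 s
]);

$body = curl_exec($ch);

if ($body === false) {
    // Fail fast instead of holding the Apache worker indefinitely.
    error_log('Upstream call failed: ' . curl_error($ch));
    http_response_code(502);
}

curl_close($ch);
```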
I will check mod_wsgi out, thanks!
Edit: sorry, I meant to send that as a comment.

Increase idle timeout

I have an App Service in Azure: a PHP script that migrates data from one database (server1) to another database (an Azure DB in a virtual machine).
This script makes a lot of queries and requests, so it takes a long time and the server (App Service) returns:
"500 - The request timed out. The web server failed to respond within
the specified time."
I found that it's something about "idle timeout." I would like to know how to increase this time.
In my test, I have tried the following so far:
Add ini_set('max_execution_time', 300); at the top of my PHP script.
App settings on portal: SCM_COMMAND_IDLE_TIMEOUT = 3600.
But nothing seems to work.
After some searching, I found a post by David Ebbo, in which he said:
There is a 230 second (i.e. a little less than 4 mins) timeout for
requests that are not sending any data back. After that, the client
gets the 500 you saw, even though in reality the request is allowed to
continue server side.
There is also a similar thread on SO that you can refer to here.
The suggestion for the migration is to leverage WebJobs to run the PHP script as a background process on App Service Web Apps.
For more details, you can refer to https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-create-web-jobs.
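A minimal sketch of what the migration might look like once moved out of the HTTP request and into a WebJob; the table names, connection strings, and batch size below are placeholders:

```php
<?php
// migrate.php — hypothetical sketch of the migration run as a WebJob (a CLI
// background process), so the ~230-second front-end request timeout no longer
// applies. Connection strings, table and column names are placeholders.

set_time_limit(0);   // no execution-time limit for the background process

$source = new PDO('mysql:host=server1;dbname=legacy', 'user', 'pass');
$target = new PDO('sqlsrv:Server=azure-vm;Database=newdb', 'user', 'pass');

$batchSize = 500;
$offset    = 0;

while (true) {
    // Copy rows in small batches so memory use stays flat.
    $rows = $source->query(
        "SELECT id, payload FROM items ORDER BY id LIMIT $batchSize OFFSET $offset"
    )->fetchAll(PDO::FETCH_ASSOC);

    if (!$rows) {
        break;                                   // nothing left to migrate
    }

    $insert = $target->prepare('INSERT INTO items (id, payload) VALUES (?, ?)');
    foreach ($rows as $row) {
        $insert->execute([$row['id'], $row['payload']]);
    }

    $offset += $batchSize;
    echo "migrated $offset rows\n";              // progress appears in the WebJob log
}
```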

REST web service: a pool of requests at one moment

I don't know whether I worded the title of my question correctly, so feel free to correct me. I have a REST web service written in PHP (it retrieves some data from a database and returns some XML data). I have tested it with JMeter and concluded that it responds quickly (at least 50 requests per second, each answered in under 5 seconds). But in my request log I found that I sometimes receive a burst of requests (roughly 10-20 requests at the same moment) and my service is unable to process all of them. What can I do? How can I handle them? How can I build some kind of queue and respond to each of them separately? Thanks.
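One common way to absorb such bursts (a sketch, with a hypothetical jobs table and placeholder credentials): have the endpoint only enqueue the work and return immediately, and let a separate worker script drain the queue at its own pace:

```php
<?php
// enqueue.php — hedged sketch: the REST endpoint only records the request and
// answers immediately, so bursts never take longer than a single INSERT.
$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');   // placeholder credentials

$stmt = $db->prepare("INSERT INTO jobs (payload, status, created_at) VALUES (?, 'pending', NOW())");
$stmt->execute([json_encode($_GET)]);

header('Content-Type: text/xml');
echo '<result><status>queued</status><id>' . $db->lastInsertId() . '</id></result>';
```

```php
<?php
// worker.php — run from cron or as a long-running CLI process; it drains the
// queue one job at a time, so a burst of 10-20 requests just lengthens the queue.
$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

while (true) {
    $job = $db->query(
        "SELECT id, payload FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1"
    )->fetch(PDO::FETCH_ASSOC);

    if (!$job) {
        sleep(1);                                // nothing to do; poll again shortly
        continue;
    }

    // ... process $job['payload'] and store/return the XML result here ...

    $db->prepare("UPDATE jobs SET status = 'done' WHERE id = ?")->execute([$job['id']]);
}
```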

Push and pull technologies using Ajax or sockets

I have a website that needs to send notifications to online clients in real time, as Facebook does. After some googling, I found a lot of documentation about push and pull technology, and ways to implement both using Ajax or sockets. I need to know which is best in my case and how to code it using JavaScript or jQuery and PHP.
I cannot tell you what's best in your case without knowing your case in detail.
In most cases it is enough to have the clients check with the server every one or two seconds, asking if something new has happened. I prefer this over sockets most of the time because it works on every web server without any configuration changes and in any browser supporting Ajax, even old ones.
If you have few clients (because every client requires an open socket on the server) and you want true real time, you can use WebSockets. There are several PHP implementations, for example this one: http://code.google.com/p/phpwebsocket/
If you can ensure that there will only be a single browser open per logged-in user, then you can apply this long-polling technique easily.
Policy for the Ajax call:
Do not make a request every 2 seconds.
Instead, wait and make the next request only 2 seconds after receiving the response to the previous one.
If a request does not respond within 12 seconds, do not keep waiting; send a fresh request. This is the connection-lost case.
Policy for the server response:
If there is an update, respond immediately. To check whether there is an update, rely on the session (better yet, have the client send a hint, such as the latest message it received; this second update-checking mechanism removes the single-browser restriction mentioned above).
Otherwise, sleep() for 1 second (do not use a busy loop; use sleep), then check again whether there is an update. If there is, respond; if not, sleep again for 1 second. Repeat this until a total of 10 seconds has elapsed, then respond with "no update".
If you apply this policy (commonly known as long polling), you will find processor usage reduced from 95% to 4% under heavy load.
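A minimal server-side sketch of that policy, assuming PHP; the session key, the last_id parameter, and the update-check function are placeholders:

```php
<?php
// poll.php — hedged sketch of the long-polling policy described above. The
// session key, the last_id parameter and checkForUpdate() are placeholders.

session_start();
$lastSeen = isset($_SESSION['last_message_id']) ? $_SESSION['last_message_id'] : 0;

// Release the session lock before looping, otherwise other requests from the
// same user would be blocked while this one waits.
session_write_close();

// Optional hint from the client: the latest message it has already received.
if (isset($_GET['last_id'])) {
    $lastSeen = (int) $_GET['last_id'];
}

$deadline = time() + 10;                         // respond after at most 10 seconds

do {
    $update = checkForUpdate($lastSeen);         // placeholder: query DB/cache for anything newer
    if ($update !== null) {
        echo json_encode(array('update' => $update));
        exit;
    }
    sleep(1);                                    // wait; do not busy-loop
} while (time() < $deadline);

echo json_encode(array('update' => null));       // nothing new within 10 seconds

// Placeholder implementation of the update check.
function checkForUpdate($lastSeen)
{
    // e.g. SELECT ... WHERE id > $lastSeen LIMIT 1; return null when empty.
    return null;
}
```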
Hope this explains. Best of luck.
Just apply the long-polling technique using jQuery.
Sockets are not yet supported everywhere, and you would also need to open a listening socket on the server for them to work.

Twitter API works locally, but is spotty on remote server

I wrote a script that pulls the current top Twitter trends using cURL, and it works 100% of the time locally, but when I FTP it up to my Media Temple server it seems to work only sometimes. Is this caused by Twitter? Media Temple? Some error in my code?
EDIT: How can I cache content in a flat file?
If the code works sometimes, that suggests it is not a problem with your code, so there are two logical areas for potential blame:
1) Web server load
Your server may be too bogged down. If the server (not just your site; consider this if you're on shared hosting) is experiencing a heavy load, it may take too long to complete the cURL request. To combat this, try increasing the timeout on the request using the following option:
CURLOPT_CONNECTTIMEOUT
2) Twitter rate limit
Twitter limits the number of API calls you can make from one authorized account per hour (I believe the number is around 100; check their API documentation). If you hit this limit you will be declined further calls until one hour after your first call. To combat this, either have a cron job run the cURL request at a set interval and cache the result in a text file or database, or store the time of each request and use an if statement to allow only one request every 2 or 3 minutes, cache the results, and pull the results from the cache.
Making a call to the Twitter API on every page load is a waste of resources and bandwidth, and can slow down page loads.
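Regarding the flat-file caching question in the edit, a minimal sketch; the cache path, lifetime, and the trends URL are placeholders:

```php
<?php
// Hedged sketch: serve Twitter trends from a flat-file cache and only hit the
// API when the cached copy is older than the chosen lifetime.

$cacheFile = __DIR__ . '/trends_cache.json';   // placeholder path
$cacheTtl  = 180;                              // refresh at most every 3 minutes

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $cacheTtl) {
    // Cache is fresh — no API call, no rate-limit cost.
    $trends = file_get_contents($cacheFile);
} else {
    $ch = curl_init('https://api.twitter.com/...');   // your existing trends request goes here
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    $trends = curl_exec($ch);
    curl_close($ch);

    if ($trends !== false) {
        file_put_contents($cacheFile, $trends);        // refresh the cache
    } elseif (is_file($cacheFile)) {
        $trends = file_get_contents($cacheFile);       // fall back to the stale cache on failure
    }
}

echo $trends;
```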
