We have developed a cURL function in our application. This function mainly maps data over from one site into the form fields of our application.
This function had been working fine and in active use for more than two months. Yesterday it broke: the data from that website can no longer be mapped over. We are trying to find out what the problem is. When we troubleshooted, it showed a response timeout issue.
To confirm there was nothing wrong with our code and that our server was performing normally, we duplicated this instance to another server and tried the function there. It worked perfectly.
Is anyone out there facing this kind of problem?
What could possibly be causing this issue?
When we use cURL, will the site owner know that we are pulling their data into our server application? If so, is there a way we can overcome this?
Could the owner have blocked our server's IP address? Would that explain why the function works on the other server but not on the original one?
Appreciate your help on this.
Thank you,
Your problem description is far too generic to determine a specific cause. Most likely, however, there is a deliberate block in place.
For example, a firewall rule on the other end, or on yours, would cause all traffic to be dropped, hence the timeout. There could also simply be a network outage between the two servers, but that's unlikely.
Yes, they will see it in their Apache (or IIS) logs regularly. No, you cannot hide from the server logs - they record every request that is served. You either get the data, or you stay stealthy. Not both.
Yes, the webserver logs will contain the IP doing all the requests. Adding a DROP rule to the firewall is then a trivial task.
I have applied such a firewall rule to bandwidth and/or data leechers plenty of times over the past few years, although I usually prefer the more resilient deny from 1.2.3.4 approach in the Apache vhost/.htaccess. If you use someone else's facilities, it's polite to ask for proper permission first - it lessens the chance you get blocked this way.
I faced a similar problem some time ago.
My server's IP had been blocked by the website owner.
This can be seen in the server logs. Google Analytics, however, won't see it, as cURL doesn't execute JavaScript.
Try to ping the destination server from the one executing the cURL.
Some suggestions (sketched in code after this list):
Use a browser-like User-Agent header to mask your request.
If you insist on using this server, you can run the requests through a proxy.
Put some sleep() between the requests.
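As a rough sketch of those three suggestions combined (the URL and proxy address below are placeholders, not values from the question):

    <?php
    // Fetch a page with a browser-like User-Agent, optionally via a
    // proxy, and pause before the next request.
    $ch = curl_init('https://example.com/data'); // placeholder URL
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERAGENT,
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'); // browser-like header
    curl_setopt($ch, CURLOPT_PROXY, 'proxy.example.com:8080'); // optional proxy
    $response = curl_exec($ch);
    if ($response === false) {
        echo 'cURL error: ' . curl_error($ch) . "\n";
    }
    curl_close($ch);
    sleep(2); // be polite: wait between requests

None of this guarantees anything, of course - if the owner has blocked your IP at the firewall, only a different outbound IP (e.g. the proxy) will get around it.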
Related
We have two applications on non-internet-facing servers within our corporate network. One application (client app) gets its data from the other (server app) via an API.
The client app uses the PHP library Jyggen\Curl to make calls to the API. On Friday, users started to report errors with the client app. When I checked the error logs, I could see that the Curl requests were intermittently failing with the error:
Failed connect to server-app:80; no error
I was able to reproduce this by clicking around different pages in the client app myself - eventually an API call would fail and the PHP lib would throw an exception. The error continued today, and I was also able to reproduce it from the command line using curl.exe - I had to execute the command 10-15 times before I got the error, but it happened eventually.
The server app is also accessed directly by users in their browser (as well as by API) and we have had no problems there.
The Curl errors appear to be happening during the busiest period of the day (9am - 3pm UK time) in terms of client app usage. Both apps run on IIS and allow for a sufficient number of max concurrent users.
My two theories at the moment are:
Network issue - however, corporate IT can't see anything wrong
Curl issue - is there something I'm not aware of regarding how many Curl requests can be made at any one time? Our number of users has been steadily increasing over the last few months so perhaps we have only just hit the tipping point where it's starting to cause issues? We are not using curl_multi, if that's relevant.
Any tips / ideas to check out next would be appreciated.
Update
I managed to reproduce the error this morning in my browser. I checked the IIS logs and I was the only person to be using the client app at that time (no one else had used it for more than 10 minutes). I am therefore minded to suggest that traffic on the client app is not a factor.
(why do people insist on wrapping perfectly sensible APIs up in over-complicated OO?)
This is not really a programming question - it's about fault finding and most likely some infrastructure related issue.
If the client is failing to connect, then either the connection is being rejected or it is timing out. You should have enough information to determine which applies here.
If the connection is being rejected, then there won't be a significant delay. You need to go look at what is rejecting the connection (in the absence of a proxy or an IPS, that would be the IIS instance) and find the reason why.
If the connection is timing out, then the issue may be dropped packets on the network, or an issue on the remote server. Increasing the connection timeout will help with the latter. Start collecting the time it takes for the client to connect and see if there is any pattern (check for correlations with other events such as backups). If there isn't any noticeable pattern and increasing the timeout doesn't help, then it's a packet loss issue.
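For the measurement step, a minimal sketch (the URL is a placeholder; curl_getinfo's timing fields are what you want to collect):

    <?php
    // Time a single request: raise the connect timeout and log how long
    // the connect and the full transfer took.
    $ch = curl_init('http://server-app/api/endpoint'); // placeholder URL
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30); // raised connect timeout
    $result = curl_exec($ch);
    if ($result === false) {
        error_log('curl error: ' . curl_error($ch));
    }
    $info = curl_getinfo($ch);
    error_log(sprintf('connect: %.3fs, total: %.3fs',
        $info['connect_time'], $info['total_time']));
    curl_close($ch);

Run that on a schedule over a day and the connect_time series should tell you whether you are looking at slow connects (timeouts) or instant rejections.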
I have a really weird behavior going on.
I'm hosting tracking software for users, which mainly logs mobile traffic. The flow is as follows (sketched in code below):
1. My client gets a PHP code snippet to put on his website.
2. This code sends a cURL POST (with predefined POST fields such as visitor IP, user agent, host, etc.) to my server.
3. My server logs the data and decides what the risk level is.
4. It then responds to the client's server with the status, i.e. it sends "true" or "false" back.
5. The client's server gets that response and decides what to do (load different HTML content, redirect, block the visitor, etc.).
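In PHP terms, steps 2 to 5 might look roughly like this on the client's side (the tracker URL is a placeholder; the field names are taken from the description above):

    <?php
    // Step 2: POST the visitor details to the tracking server.
    $ch = curl_init('https://tracker.example.com/check'); // placeholder URL
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_TIMEOUT        => 10,
        CURLOPT_POSTFIELDS     => [
            'ip'        => $_SERVER['REMOTE_ADDR'],
            'useragent' => $_SERVER['HTTP_USER_AGENT'] ?? '',
            'host'      => $_SERVER['HTTP_HOST'],
        ],
    ]);
    // Steps 4-5: read the "true"/"false" verdict and act on it.
    $verdict = curl_exec($ch);
    curl_close($ch);
    if ($verdict === 'true') {
        // load the normal HTML content
    } else {
        // redirect or block the visitor
    }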
The problem I'm facing is that, for some reason, all the requests made from my clients' servers to my server are recorded and stored in the log file, yet my clients report click loss: it's as if my server sends back the response but their servers fail to receive it.
I should note that there are tons of requests every minute, from different clients' servers and from each client itself.
Could the reason be that CURLOPT_RETURNTRANSFER isn't getting any response? Or maybe the problem is cURL overload?
I really have no idea. My server is pretty fast and uses only 10% of its resources.
Thanks in advance for your thoughts.
You've touched on a very problematic domain - high-load servers. Your problem could be in so many places that you will really have to spend time on it to fix it, or at least partially fix it.
First of all, you should understand what is really going on. Check out this simplified scheme:
The client's PHP code tries to open a connection to your server; to do this it sends some data over the network to your server.
Your server (Apache, I suppose) tries to accept it, if it has the resources - check the max connections settings in the Apache config.
If the server can accept the connection, it tries to create a new thread (or use one from the thread pool).
Once the thread is started, it runs your PHP script.
Your PHP script does some work, connects to the database, and sends the response back over the network.
The client waits for the answer from step 5, or closes the connection because of a timeout.
So, at each point you can have a bottleneck:
Network bandwidth
Max open connections
Thread pool size
Script execution time
Max database connections, table locks, I/O wait times
Client timeouts
And that is not a full list of the places where a problem can occur and ultimately lead to an empty cURL response.
To start with, I suggest you add logging to both PHP code bases (client and server) and store all curl_error output in a text file; at the very least you will see which problems occur most often.
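A minimal version of that client-side logging (the log path is an example; the important part is recording curl_error whenever curl_exec fails):

    <?php
    // Wrapper that logs every failed cURL call with a timestamp.
    function fetch_with_logging($url, array $postFields)
    {
        $ch = curl_init($url);
        curl_setopt_array($ch, [
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => $postFields,
            CURLOPT_TIMEOUT        => 10,
        ]);
        $response = curl_exec($ch);
        if ($response === false) {
            file_put_contents('/var/log/app/curl_errors.log', // example path
                date('c') . ' ' . $url . ' ' . curl_error($ch) . "\n",
                FILE_APPEND);
        }
        curl_close($ch);
        return $response;
    }

Once both sides log like this, the error distribution (timeouts vs. connection resets vs. empty replies) will point at which bottleneck in the list above you are actually hitting.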
This is complicated, and not necessarily one question. I'd appreciate any possible help.
I've read that it is possible to have WebSockets without server access, but I cannot seem to find any examples that show how. I've come to the conclusion that I need this based on the following two things:
I've been struggling for the past several hours trying to figure out how to get WebSockets to work with the WAMP server on my machine, to which I have root access. I installed Composer, but cannot figure out how to use the composer.phar file to install Ratchet. I have tried other PHP WebSocket implementations (I would prefer PHP), but still cannot get them to work.
The web host I'm currently testing things on is a free host and doesn't allow SSH access. So even if I could figure out how to get WebSockets working with root access, it would be a moot point on that host.
By googling I've also found free VPS hosts that offer full root access (but, of course, limit everything else); however, I'd prefer to keep something that allows more bandwidth (my free host is currently unlimited). I've also read that you can (and should) host the WebSocket server on a different subdomain than the HTTP server, and that it can even run on a different domain entirely.
It might eventually be cheaper to host my own site (though I have no real idea about that), but in that case I'd need to figure out how to get WebSockets working on my own machine in the first place.
So, if anyone can follow all that, there are several questions here. Is it possible to use WebSockets without root access, and if so, how? If that is not truly possible: how do I properly install Ratchet when I cannot figure out the composer.phar file (I have a composer.json with the Ratchet requirement in it, but I'm not sure it's in the right directory)? And is it possible to have the WebSocket server on a VPS and the HTTP server on an entirely different domain, and if so, is there documentation about that anywhere?
There is, of course, the option of using AJAX: forcing the browser to reload a JS file periodically that uses jQuery AJAX to update a series of divs regardless of whether anything has changed. But that could get complicated, and I'd prefer WebSockets over it anyway, since I hear they are much less resource-hungry than that sort of polling would be.
A plain PHP file running under vanilla LAMP (i.e. mod_php under Apache) cannot handle WebSocket connections. It wouldn't be able to perform the protocol upgrade, let alone actually perform real-time communication, at least through Apache. In theory, you could have a very-long-running web request to a PHP file which runs a TCP server to serve WebSocket requests, but this is impractical and I doubt a shared host will actually allow PHP to do that.
There may be some shared hosts that make WebSocket hosting with PHP possible, but they can't offer that without either SSH/shell access or some other way to run PHP outside the web server. If they're just giving you a directory to upload PHP files to, and serving them with Apache, you're out of luck.
As for your trouble with Composer, I don't know if it's possible to run composer.phar on a shared host without some kind of shell access. Some hosts (e.g. Heroku) have specific support for Composer.
Regarding running a WebSocket server on an entirely different domain, you can indeed do that. Just point your JavaScript at that domain and make sure the WebSocket server accepts connections from your page's origin (browsers send an Origin header on the WebSocket handshake, which the server can check).
OK... you have a few questions, so I will try to answer them one by one.
1. What to use
You could use Socket.IO. It's a library for developing real-time web applications in JavaScript. It consists of two parts - a client side (which runs in the visitor's browser) and a server side. Basic usage requires almost no background knowledge of Node.js. There is an example tutorial for a simple chat app on the official Socket.IO website.
2. Hosting
Most hosting providers have a control panel (cPanel) with the capability to install/activate different Apache plugins and so on. First check whether Node.js is already available; if not, you could contact support and ask whether adding it would be an option.
If you don't have any luck with your current hosting provider you could always switch hosts quickly as there are a lot of good deals out there. Google will definitely help you here.
Here is a list of a few of the (arguably) best options. Keep in mind that although some hosting deals are paid, there are plenty of low-cost options to choose from.
3. Bandwidth
Since you are worried about "resource hungry" code, maybe you can try hosting some of your content on Amazon CloudFront. It's a widely used content delivery network that provides quick connections and fast resource loading, since files are served from the server closest to the client. The best part is that you only pay for what you actually use, so if you don't have that much traffic it's really cheap to run, and still reliable!
Hope this helps ;)
Apologies if this particular problem has been answered already (a search didn't turn up anything directly relevant).
We are the developers of a web app that provides community commenting and "social" features to our partners' websites. Our app uses JavaScript and HTML on the front end, and PHP and MySQL on the back end.
Currently we are running everything through our own servers, which is getting very expensive.
We would like to ask our partners if we can host the app through their servers, with them getting a discount on our monthly charge for the bandwidth/CPU load they would help us share.
My question is: is there a way to host our app through our partners' web servers such that we can offload most of the CPU time and bandwidth without exposing our source code?
I would greatly appreciate any ideas/help!!
Thank you very much all!
If you also serve static or rarely changing content, your partners could run a caching reverse proxy to take some load off your servers without you giving them any source code at all. But you need to implement caching headers for this to work properly.
You may want to look into nginx.
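On the PHP side the caching headers can be as simple as this (the one-hour lifetime and the render_comments() helper are made-up examples):

    <?php
    // Mark the response as cacheable so the partner's reverse proxy
    // can serve it without hitting our servers for the next hour.
    header('Cache-Control: public, max-age=3600');
    header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 3600) . ' GMT');
    echo render_comments(); // hypothetical function producing the widget HTML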
On second thought: did you try compiling your scripts with Facebook's HipHop for PHP? For one thing, the scripts should perform far better; for another, if you still had to outsource the hosting, you would be deploying a compiled program, with no source code involved.
If you put the code on their server, they can get at it, so that won't be 100% safe. You can make it difficult, but it's still not great.
The most doable solution might be to split the application into parts and distribute those. You hand over one process (its source and the other data it needs), but it's only part of the whole. That way no partner has your complete solution, yet you still outsource the parts.
Is it possible to implement P2P using just PHP? Without Flash or Java, and obviously without installing some sort of agent/client on one's computer.
So even though it might not be "true" P2P, it could use a server to establish the connection somehow, but the rest of the communication must be done peer-to-peer.
I apologize for the slight miscommunication: by "PHP" I meant not the PHP binary, but a PHP script hosted on a web server remote from both peers, so each peer has nothing but a browser.
without installing some sort of agent/client on one's computer
Each computer would have to have the PHP binaries installed.
EDIT
I see in a different post that you mentioned browser-based. Security restrictions in JavaScript would prohibit this type of interaction.
No.
You could write a P2P client / server in PHP — but it would have to be installed on the participating computers.
You can't have PHP running on a webserver cause two other computers to communicate with each other without having P2P software installed.
You can't even use JavaScript to help — the same origin policy would prevent it.
JavaScript running a browser could use a PHP based server as a middleman so that two clients could communicate — but you aren't going to achieve P2P.
Since 2009 (when this answer was originally written), the WebRTC protocol has been created and has achieved widespread support among browsers.
This allows you to perform peer-to-peer communication between web browsers, but you need to write the code in JavaScript (WebAssembly might also be an option, and one that would let you write PHP).
You also need a bunch of non-peer server code to support WebRTC (e.g. to allow peer discovery and to proxy data around firewalls), which you could write in PHP.
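To give an idea of scale, the signalling part really can be tiny. A file-backed sketch (the room parameter, the file location, and the lack of locking and authentication are all simplifications):

    <?php
    // Hypothetical WebRTC signalling relay: each peer POSTs its SDP
    // offer/answer under a room ID, and polls with GET for the other
    // side's. Not production code - no locking, no auth, no cleanup.
    $room = preg_replace('/[^a-z0-9]/i', '', $_GET['room'] ?? '');
    if ($room === '') { http_response_code(400); exit('missing room'); }
    $file = sys_get_temp_dir() . "/signal_$room.json";
    if ($_SERVER['REQUEST_METHOD'] === 'POST') {
        file_put_contents($file, file_get_contents('php://input'));
        echo 'ok';
    } else {
        header('Content-Type: application/json');
        echo file_exists($file) ? file_get_contents($file) : '{}';
    }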
It is not workable, because a server-side application (PHP) does not have access to the peer's system, which it would need in order to determine ports, IP addresses, etc. and establish a socket connection.
ADDITION:
But if you were to run PHP on each peer's web server, that might give you what you're looking for.
Doesn't peer-to-peer communication imply that traffic goes directly from one client to another, without any servers in the middle? Since PHP is server-based software, I don't think any program you write in it can be considered true P2P.
However, if you want to enable client to client communications with a php server as the middle man, that's definitely possible.
Depends on if you want the browser to be sending data to this PHP application.
I've made IRC bots entirely in PHP though, which showed their status and output in my web browser in a fashion much like mIRC. I just set the timeout limit to infinite and connected to the IRC server using sockets. You could connect to anything though. You can even make it listen for incoming connections and handle them.
What you can't do is get a browser to keep a two-way connection open without breaking off requests (not yet, anyway...).
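For anyone curious what that looks like, a stripped-down sketch of such a long-running socket script (the server name is an example; a real IRC bot needs registration handling and error checks):

    <?php
    // Long-running CLI PHP script holding a raw socket connection open.
    set_time_limit(0); // lift the usual execution time limit
    $sock = fsockopen('irc.example.net', 6667, $errno, $errstr, 30);
    if (!$sock) {
        die("connect failed: $errstr ($errno)\n");
    }
    fwrite($sock, "NICK phpbot\r\nUSER phpbot 0 * :php bot\r\n");
    while (!feof($sock)) {
        $line = trim(fgets($sock, 512));
        echo $line, "\n";
        // answer the server's keep-alive so the connection stays open
        if (strpos($line, 'PING') === 0) {
            fwrite($sock, 'PONG' . substr($line, 4) . "\r\n");
        }
    }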
Yes, but it's not what's generally called P2P, since there is a server in between. I have a feeling, though, that what you want is for your peers to communicate with each other, rather than to have a direct connection between them with no 'middleman' server (which is what is normally meant by P2P).
Depending on the scalability requirements, implementing this kind of communication can be trivial (a simple polling script on the clients) or demanding (an asynchronous Comet server).
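For the trivial end of that spectrum, this is roughly what the peer side of a polling 'middleman' setup could look like as a CLI PHP script (the relay URL is a made-up placeholder; assumes allow_url_fopen is enabled):

    <?php
    // Poll a relay server for messages from the other peer and print them.
    $relay = 'https://relay.example.com/inbox.php?peer=alice'; // made up
    // send one message to the other peer via the relay
    file_get_contents($relay, false, stream_context_create([
        'http' => ['method' => 'POST', 'content' => 'hello bob'],
    ]));
    // then poll for replies
    while (true) {
        $msg = file_get_contents($relay);
        if ($msg !== false && $msg !== '') {
            echo "received: $msg\n";
        }
        sleep(3); // polling interval; a Comet server would push instead
    }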
In case someone comes here wondering whether you can write P2P software in PHP: the answer is yes. In that case Quentin's answer to the original question is correct - PHP would have to be installed on the computer.
You can do whatever you want in PHP, including writing true P2P software. To create a true P2P program in PHP, you would use PHP as an interpreted language WITHOUT a web server, and you would use sockets - just as you would in C/C++. The accepted answer is both right and wrong: if the original poster was asking whether PHP running on a web server could act as a P2P client, then of course the answer is no.
Basically, to do this you'd write a PHP script that (see the sketch after this list):
Opens a server socket connection (stream_socket_server/socket_create)
Finds a list of peer IPs
Opens a client connection to each peer
...
Prove everyone wrong.
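A bare-bones sketch of that outline (the peer addresses are made-up examples; a real client would multiplex with stream_select and speak an actual protocol):

    <?php
    // CLI PHP peer: listens for incoming connections and also dials out
    // to a hard-coded list of peers. Illustrative only.
    $server = stream_socket_server('tcp://0.0.0.0:9000', $errno, $errstr);
    if (!$server) {
        die("listen failed: $errstr ($errno)\n");
    }
    $peers = ['203.0.113.5:9000', '203.0.113.7:9000']; // assumed peer IPs
    foreach ($peers as $peer) {
        $c = @stream_socket_client("tcp://$peer", $errno, $errstr, 5);
        if ($c) {
            fwrite($c, "HELLO\n");
            fclose($c);
        }
    }
    while ($conn = @stream_socket_accept($server, -1)) {
        fwrite($conn, "HELLO BACK\n");
        fclose($conn);
    }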
No, not really. PHP scripts are meant to run for only a short time. The default maximum runtime is usually around two minutes, which will normally not be enough for P2P communication; after that the script is cancelled, though the server administrator can disable the limit. Even then, the HTTP connection between the server and the client must be held open for the entire download, during which the client's browser shows its page-loading indicator. If the connection breaks, most web servers will kill the PHP script, so the P2P download is cancelled.
So it may be possible to implement a P2P protocol, but in a client/server scenario you run into problems with the execution model of PHP scripts.
Both parties would need to be running a server such as Apache, although for demonstration purposes you could get away with just the built-in PHP test server. Next, you'll have to research firewall hole punching in PHP - I saw a script, I think on GitHub, but that was a long time ago. Yes, it can be done. If your client is not a savvy programmer type, you would probably need to ensure that they have PHP installed and running. On Windows the PATH variable may not work unless you add it to the system registry, so make sure you provide a .bat file that ensures the path is in the registry so Windows can find PHP. Sorry, I am not a Linux user.
Next you have to develop the code. There are instructions on how hole punching works; it does require a publicly reachable server, which is needed so the two computers can find each other's IP addresses. Maybe you could rig something up on a free website such as www.000.webhost.com, or alternatively use some kind of built-in mechanism, such as using the person's email address to report the current IP.
The biggest problem is routers and firewalls: packets, even when directed at a public IP, still need to find their destination on the LAN, but the information on how to construct the packets should be straightforward to find. With any luck you might find a script that has done most of the work for you.