PHP SOAP Connection Periodic Failure

I'm using the Force.com PHP toolkit for integrating with the SFDC API.
I have the app hosted in a client's office, and it gets periodic errors where the connection times out and can't reach the Salesforce host.
The app runs fine on my laptop, regardless of the internet connection I'm using. I've also used this library in numerous environments, so I'm confident that this is not a code issue.
Can someone recommend a way I can analyze what's happening in terms of connectivity on this box, so I can prove/articulate the problem to the client's sys admin? We're in a bit of a finger-pointing situation now and want to find a resolution.
I realize I can increase the timeout of the app, but for the app to be practical for the client, they can't be waiting long periods for the connections to be made while running through a wizard.

I had the same problem, and when I contacted SFDC support, they said it was caused by performance degradation on the SFDC server.
When I checked http://trust.salesforce.com to verify system status, there was no record of degradation.
When asked why it wasn't shown on http://trust.salesforce.com, SFDC support said the degradation lasted only a very short span.
They also recommended that the calling PHP script have a retry mechanism, so that it tries 3 times or so before giving up.
Hope this helps.
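A retry wrapper along those lines might look like this. This is a minimal sketch, not part of the Force.com toolkit: the function name, attempt count, and backoff values are illustrative, and it assumes the call throws on failure (the toolkit's SoapFault extends Exception).

```php
<?php
// Hypothetical retry helper (not part of the Force.com toolkit).
// Retries the callable up to $attempts times, with a simple linear
// backoff between tries, then rethrows the last failure.
function withRetries(callable $call, int $attempts = 3, int $delaySeconds = 2)
{
    $lastException = null;
    for ($i = 1; $i <= $attempts; $i++) {
        try {
            return $call();
        } catch (\Exception $e) {   // SoapFault extends Exception
            $lastException = $e;
            if ($i < $attempts) {
                sleep($delaySeconds * $i);
            }
        }
    }
    throw $lastException;
}

// Usage (assuming $client is an already-connected toolkit client):
// $result = withRetries(fn() => $client->query("SELECT Id FROM Account LIMIT 1"));
```

Keeping the per-call SOAP timeout short means three attempts can still complete faster than one long hang, which matters if users are waiting in a wizard.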

Related

Horrible performance on Azure App Service - WordPress

Evening All,
I'm at my absolute wits' end and hoping someone may be able to save me! I am in the process of migrating a number of PHP applications into Azure. I am using:
Linux-based App Service running PHP 7.4 (2 vCPUs, 8 GB RAM) at a cost of £94 a month.
Azure Database for MySQL 8.0 (2 vCPUs) at £114 a month.
My PHP apps run well, with a decent load time of under 1 second per page. WordPress performance, however, is awful. I am going from a 1-second page load to around 10 seconds, particularly on the back end. I have read all of the Azure guides and have implemented the following obvious points:
Both the App Service and the MySQL install are in the same data center
App Service is set to 'Always On'
Connection Redirection is set to Preferred and tested as working
The same app runs fine on a very basic £10-or-so-a-month shared hosting package. I have also tried the same setup in Amazon Web Services today and page load is back to a second or so.
In the Chrome console, the delay is in TTFB. I have disabled all the plugins and none stand out as making a huge difference. Each adds a second or so of page load, suggesting to me a consistent issue whenever a page requires a number of database calls.
What is going on with Azure and the awful WordPress performance?! Is there anything else I can investigate or try? I'm really keen to stay with Azure but can't cope with the huge increase in cost for a performance hit.
The issue turned out to be the way the file system runs in the App Service. It is NOT an issue with the database. The App Service architecture is just too slow at present with file reads/writes, of which WordPress does a lot. I investigated the various file cache options but none improved things enough.
I ended up setting up a fairly basic, and significantly cheaper, virtual machine running with the same database, and performance is hugely improved.
Not a great answer, but App Services are not up to WordPress at present!
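If you want to verify the file-system theory rather than take it on faith, a crude microbenchmark like the following can be run on both the App Service and the VM for comparison. This is a sketch only: the file count and sizes are arbitrary, loosely mimicking the many small PHP includes WordPress reads on every request.

```php
<?php
// Rough file-read latency probe: create 200 small files, then time
// reading them all back. Compare the printed figure across hosts.
$dir = sys_get_temp_dir() . '/fsbench';
@mkdir($dir);

// 200 files of 4 KB each, roughly the size of a small WordPress include.
for ($i = 0; $i < 200; $i++) {
    file_put_contents("$dir/f$i.php", str_repeat('x', 4096));
}

$start = microtime(true);
for ($i = 0; $i < 200; $i++) {
    file_get_contents("$dir/f$i.php");
}
$elapsed = (microtime(true) - $start) * 1000;

printf("200 small reads took %.1f ms\n", $elapsed);
```

On local SSD-backed storage this is typically a few milliseconds; a dramatically higher number on App Service would support the remote-file-share explanation.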
The comments below are correct. The "problem" is the database. You can either move MySQL to a virtual machine (which will give you better performance) or you can try cache plugins such as WP Super Cache, as well as decreasing the number of requests.
You can find a full list of tips in the following link:
https://azure.microsoft.com/en-us/blog/10-ways-to-speed-up-your-wordpress-site-on-azure-websites/
PS: ignore the date, it's still relevant

App Engine Standard connection to Cloud SQL Latency Randomly

I have a pretty "basic" app that we designed that was originally on a local Plesk server and that we migrated to GAE/GSQL/GCS (App Engine, MySQL, Cloud Storage).
Here's some background info:
The app is PHP-based and runs great on the local server. When we migrated to the cloud, we noticed this random yet extreme latency. It's so bad that the app times out and gives a SPDY timeout error. We utilize Cloudflare for SPDY assistance, so we started there, and they said it's the server. Then we went to Google. We've been going back and forth, and I am looking for other avenues of help.
I am running an app on a F2 standard GAE instance and a G1-small CloudSQL instance (gen 2). All same region/zone. There is also a failover sql instance.
There is really no pattern to it, but users on the app notice a bad timeout very frequently, and it dies after 60 seconds (which points to a PHP timeout, right? We checked the code and it runs fine on the local server).
I don't have a whole lot of traffic on this app yet (maybe a few users a day), so I don't know if it's traffic load. Here are some basic stats for you:
https://imgur.com/a/U1tk5ak
Some Google engineers said our app has trouble scaling (QPS will never get above 1):
https://imgur.com/a/XWh44bm
And asked if we are threading. We are not. We do not use memcache yet either.
I also see a ton of these:
https://imgur.com/a/eVSNqc3
Which looks like this bug: https://github.com/GoogleCloudPlatform/cloudsql-proxy/issues/126
But I am unsure if this is all related.
We've tried going through Google's tech support; they said we have "manual locks", but our dev team doesn't agree, nor knows what this really means. Again, the same framework and app code (session handling etc.) is used in many apps with a ton of users on them (non-GAE; they're on compute on AWS), so this is our first venture into GAE.
We connect using standard MySQL connection parameters and use the same framework in a lot of applications and it runs fine. We use the required proxy to connect to CloudSQL.
The speed and constant lag shouldn't be there. We don't know what this issue could be. My questions are:
1) Do you see any issues here? All database logs are above and summaries
2) Can you help me understand what may be wrong here?
Thank you!
The biggest latency spike I can see from your screenshot is about 20 seconds at 9:00 am, which is about the same time you have the biggest amount of queries, read/write operations, and memory usage.
Even though you have a small number of users, they can be doing many queries. If GCP support suggests that the app has problems scaling, you can check the auto-scale property and see if it is enabled.
From what I can see in your images, and from looking through the Cloud SQL docs, I would suggest a horizontal scale of your Cloud SQL instance.
Also take a look at the diagnose-issues docs; maybe you can get more info on what's causing the MySQL aborted connections error.
There was a query we found running that caused a huge database lag.
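For anyone hunting a similar culprit from the PHP side, one hedged approach is to wrap query execution and log anything slow. The helper name and 500 ms threshold below are made up; Cloud SQL's slow query log is the server-side equivalent.

```php
<?php
// Hypothetical timing wrapper: runs any callable, logs it if it exceeds
// the threshold, and passes the result straight through.
function timedCall(callable $run, string $label, float $thresholdMs = 500.0)
{
    $start = microtime(true);
    $result = $run();
    $elapsedMs = (microtime(true) - $start) * 1000;
    if ($elapsedMs > $thresholdMs) {
        error_log(sprintf('SLOW (%.0f ms): %s', $elapsedMs, $label));
    }
    return $result;
}

// Usage with mysqli (assuming an existing $db connection):
// $rows = timedCall(fn() => $db->query($sql), $sql);
```

Dropping this around the framework's query method for a day usually surfaces the one query responsible for a lag like this.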

Curl returning intermittent "Failed connect - no error"

We have two applications on non-internet-facing servers within our corporate network. One application (client app) gets its data from the other (server app) via an API.
The client app uses the PHP library Jyggen\Curl to make calls to the API. On Friday, users started to report errors with the client app. When I checked the error logs, I could see that the Curl requests were intermittently failing with the error:
Failed connect to server-app:80; no error
I was able to reproduce this by clicking around different pages in client app myself - eventually an API call would fail and the PHP lib would throw an exception. The error continued today and I was also able to reproduce it from the command line using curl.exe - I had to execute the command 10-15 times before I could get the error but it happened eventually.
The server app is also accessed directly by users in their browser (as well as by API) and we have had no problems there.
The Curl errors appear to be happening during the busiest period of the day (9am - 3pm UK time) in terms of use of the client app. Both apps run on IIS and have sufficient max concurrent users allowed for.
My two theories at the moment are:
Network issue - corporate IT can't see anything wrong however
Curl issue - is there something I'm not aware of regarding how many Curl requests can be made at any one time? Our number of users has been steadily increasing over the last few months so perhaps we have only just hit the tipping point where it's starting to cause issues? We are not using curl_multi, if that's relevant.
Any tips / ideas to check out next would be appreciated.
Update
I managed to reproduce the error this morning in my browser. I checked the IIS logs and I was the only person to be using the client app at that time (no one else had used it for more than 10 minutes). I am therefore minded to suggest that traffic on the client app is not a factor.
(why do people insist on wrapping perfectly sensible APIs up in over-complicated OO?)
This is not really a programming question - it's about fault finding and most likely some infrastructure related issue.
If the client is failing to connect, then either the connection is being rejected or it is timing out. You should have enough information to determine which applies here.
If the connection is being rejected, then there won't be a significant delay. You need to look at what is rejecting the connection (in the absence of a proxy or an IPS, that would be the IIS instance) and find the reason why.
If the connection is timing out, then the issue may be dropped packets on the network, or an issue on the remote server. Increasing the connection timeout will help with the latter. Start collecting the time it takes for the client to connect and see if there is any pattern (check for correlations with other events such as backups). If there isn't any noticeable pattern and increasing the timeout doesn't help, then it's a packet loss issue.
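A sketch of that data collection using PHP's curl extension (the probe function, URL, and timeout values are illustrative): CURLINFO_CONNECT_TIME isolates the TCP connect phase, so rejections (fast failures with a non-zero errno) can be told apart from timeouts (connect time near the limit).

```php
<?php
// Probe a URL once and report connect/total timings plus any curl error.
function probeConnect(string $url, int $connectTimeout = 10): array
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CONNECTTIMEOUT => $connectTimeout,
        CURLOPT_TIMEOUT        => 30,
    ]);
    curl_exec($ch);
    $info = [
        'connect_time' => curl_getinfo($ch, CURLINFO_CONNECT_TIME),
        'total_time'   => curl_getinfo($ch, CURLINFO_TOTAL_TIME),
        'errno'        => curl_errno($ch),
        'error'        => curl_error($ch),
    ];
    curl_close($ch);
    return $info;
}

// Run it in a loop and keep the timestamped output for correlation
// with backups, busy periods, etc.:
// for ($i = 0; $i < 20; $i++) {
//     echo date('H:i:s'), ' ', json_encode(probeConnect('http://server-app:80/')), "\n";
//     sleep(1);
// }
```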

Receiving changes from server in real-time, Server/Ajax-Push, HTML5 Websockets, ready-made server implementations or what?

I’ve been working on a php project where I’m trying to create a cards game.
That obviously needs to be updated in real-time, so, having almost finished the underlying server logic, I went for the naive/obvious solution for fetching the data from the server - heartbeats, or periodic ajax requests - and was thrilled to see the page working that way.
Misery began when I started thinking there could be a less "stressful" way, that’s when I found a couple of conversations here (and in other websites) about "Comet" and “AJAX PUSH” or “Server Push” which I’ve read about intensively.
I found a demo in zeitoun.net which was very simple and ridiculously easy to make it work on my localhost.
As I was writing this question I went through the "similar questions" panel, and to be honest it's very confusing which option to go with.
Which would you recommend, knowing that I want to make sure the website can serve up to 2000 users, and that I'm using PHP on Apache?
Keep using the current method, periodic client ajax requests (I've refined the server response so that it actually returns nothing most of the time unless a change needs to be sent, but I'm still worried about the number of hits per second the server is going to receive).
Go for the "too good to be true" solution at zeitoun.net.
Use APE which will require me to switch my operating system to Linux (which I'm willing to do if it turned out to be a promising solution).
Take a deeper look into https://stackoverflow.com/questions/4262543/what-are-good-resources-for-learning-html-5-websockets and go for HTML5 Websocket instead (regardless of browser-support and used fallbacks).
None of the above?
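For what it's worth, option 1 is already close to Comet-style long polling: the main change is holding the request open instead of returning immediately. A minimal sketch follows, where the state-check callable is a placeholder for however the game state is stored (a DB row version, a file mtime, etc.):

```php
<?php
// Hold the request open until the state check reports a change, or the
// poll window expires. Returns the update, or null on timeout.
function longPoll(callable $checkForUpdate, int $windowSeconds = 30): ?array
{
    $deadline = microtime(true) + $windowSeconds;
    while (microtime(true) < $deadline) {
        $update = $checkForUpdate();
        if ($update !== null) {
            return $update;      // caller echoes this as JSON
        }
        usleep(250000);          // 250 ms between state checks
    }
    return null;                 // caller sends 204; client re-polls
}

// In the endpoint script (sketch; checkForUpdate is your own state check):
// $update = longPoll('checkForUpdate');
// if ($update !== null) { echo json_encode($update); } else { http_response_code(204); }
```

Note that with Apache + mod_php each held request ties up a worker for the whole window, which is exactly why 2000 concurrent users may push you toward APE or WebSockets rather than long polling.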

Is it possible to have a peer to peer communication using nothing but PHP

Is it possible to implement a p2p using just PHP? Without Flash or Java and obviously without installing some sort of agent/client on one's computer.
So even though it might not be "true" p2p, it could use a server to establish a connection of some sort, but the rest of the communication must be done via p2p.
I apologize for the little miscommunication: by "PHP" I meant not a PHP binary, but a PHP script hosted on a web server remote from both peers, so each peer has nothing but a browser.
without installing some sort of
agent/client on one's computer
Each computer would have to have the PHP binaries installed.
EDIT
I see in a different post you mentioned browser-based. Security restrictions in JavaScript would prohibit this type of interaction.
No.
You could write a P2P client / server in PHP — but it would have to be installed on the participating computers.
You can't have PHP running on a webserver cause two other computers to communicate with each other without having P2P software installed.
You can't even use JavaScript to help — the same origin policy would prevent it.
JavaScript running a browser could use a PHP based server as a middleman so that two clients could communicate — but you aren't going to achieve P2P.
Since 2009 (when this answer was originally written), the WebRTC protocol was written and achieved widespread support among browsers.
This allows you to perform peer-to-peer between web browsers but you need to write the code in JavaScript (WebAssembly might also be an option and one that would let you write PHP.)
You also need a bunch of non-peer server code to support WebRTC (e.g. to allow peer discovery and to proxy data around firewalls), which you could write in PHP.
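For illustration, the signaling piece can be as dumb as a shared mailbox the two browsers poll while exchanging SDP. This is a sketch with made-up function names and file-based storage; a real deployment would also need STUN/TURN servers, which PHP does not provide.

```php
<?php
// Hypothetical minimal signaling "mailbox" for WebRTC: one peer posts
// its SDP (offer or answer) under a shared room id; the other polls
// until it appears. Storage is just temp files for the sketch.
function signalDir(): string
{
    $dir = sys_get_temp_dir() . '/webrtc-signal';
    @mkdir($dir);
    return $dir;
}

function postSignal(string $room, string $role, string $sdp): void
{
    file_put_contents(signalDir() . "/$room.$role.json", json_encode(['sdp' => $sdp]));
}

function fetchSignal(string $room, string $role): ?string
{
    $path = signalDir() . "/$room.$role.json";
    if (!is_file($path)) {
        return null;
    }
    $data = json_decode(file_get_contents($path), true);
    return $data['sdp'] ?? null;
}
```

Once both sides have exchanged offer and answer this way, the browsers negotiate the actual peer-to-peer channel themselves and the PHP server drops out of the data path.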
It is not possible because a server-side application (PHP) does not have access to the peer's system, which is required to define ports, IP addresses, etc., in order to establish a socket connection.
ADDITION:
But if you were to go with PHP in each peer's web servers, that may give you what you're looking for.
Doesn't peer-to-peer communication imply that communication goes directly from one client to another, without any servers in the middle? Since PHP is server-side software, I don't think any program you write in it can be considered true p2p.
However, if you want to enable client to client communications with a php server as the middle man, that's definitely possible.
Depends on if you want the browser to be sending data to this PHP application.
I've made IRC bots entirely in PHP though, which showed their status and output in my web browser in a fashion much like mIRC. I just set the timeout limit to infinite and connected to the IRC server using sockets. You could connect to anything though. You can even make it listen for incoming connections and handle them.
What you can't do is to get a browser to keep a two-way connection without breaking off requests (not yet anyways...)
Yes, but its not what's generally called p2p, since there is a server in between. I have a feeling though that what you want to do is to have your peers communicate with each other, rather than have a direct connection between them with no 'middleman' server (which is what is normally meant by p2p)
Depending on the scalability requirements, implementing this kind of communication can be trivial (simple polling script on clients), or demanding (asynchronous comet server).
In case someone comes here seeing if you can write P2P software in PHP, the answer is yes, in this case, Quentin's answer to the original question is correct, PHP would have to be installed on the computer.
You can do whatever you want to do in PHP, including writing true p2p software. To create a true P2P program in PHP, you would use PHP as an interpreted language WITHOUT a web server, and you would use sockets - just like you would in C/C++. The original accepted answer is right and wrong - unless the original poster was asking whether PHP running on a webserver could be a p2p client, in which case the answer would of course be no.
Basically, to do this you'd write a PHP script that:
Opens a server socket connection (stream_socket_server/socket_create)
Find a list of peer IP's
Open a client connection to each peer
...
Prove everyone wrong.
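A minimal CLI sketch of those steps (the port, peer list, and timeouts are made up, and peer discovery plus the actual protocol are the hard parts left out):

```php
<?php
// Step 1: open a listening socket for inbound peers.
$server = stream_socket_server('tcp://0.0.0.0:7000', $errno, $errstr);
if ($server === false) {
    die("listen failed: $errstr ($errno)\n");
}

// Step 2: a list of peer IPs - from a tracker, a config file, ...
$peers = ['203.0.113.5:7000', '203.0.113.9:7000'];

// Step 3: open a client connection to each peer (failures are skipped).
$connections = [];
foreach ($peers as $peer) {
    $c = @stream_socket_client("tcp://$peer", $errno, $errstr, 2);
    if ($c !== false) {
        $connections[] = $c;
    }
}

// Also accept any inbound peer without blocking forever.
stream_set_blocking($server, false);
if (($incoming = @stream_socket_accept($server, 0)) !== false) {
    $connections[] = $incoming;
}

echo count($connections) . " peer connection(s) open\n";
```

Run with `php peer.php` from the command line, not through a web server, so there is no request timeout and the script can hold sockets open as long as it likes.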
No, not really. PHP scripts are meant to run only for a very small amount of time. Usually the default maximum runtime is 30 seconds, which will normally not be enough for p2p communication. After this the script will be cancelled, though the server administrator can deactivate that limit. But even then, the HTTP connection between the server and the client must be held open for the whole download, and the client's browser will show its page-loading indicator the entire time. If the connection breaks, most web servers will kill the PHP script, so the p2p download is cancelled.
So it may be possible to implement the p2p protocol, but in a client/server scenario you run into problems with the execution model of PHP scripts.
Both parties would need to be running a server such as Apache, although for demonstration purposes you could get away with just using the built-in PHP test server. Next you are going to have to research firewall hole punching in PHP; I saw a script, I think on GitHub, but that was a long time ago. Yes, it can be done. If your client is not a savvy programmer type, you would probably need to ensure that they have PHP installed and running. The PATH variable may not work unless you add it to the system registry in Windows, so make sure you provide a bat file that ensures the path is in the registry so Windows can find it. Sorry, I am not a Linux user.
Next you have to develop the code. There are instructions for how hole punching works, and it does require a server on the public internet, which is needed to allow the two computers to find each other's IP addresses. Maybe you could rig up something on a free website such as www.000.webhost.com, or alternatively use some kind of built-in mechanism, such as the person's email address, to report the current IP.
The biggest problem is routers and firewalls, but packets, even when directed at a public IP, still need to know the destination on the LAN, so the information on how to write the packet should be straightforward. With any luck you might find a script that has done most of the work for you.