How to prevent back button from refreshing the previous page on HTTPS? - php

I have an application in which a module lists out some tour packages from various third party vendors through SOAP calls. This takes about 90 seconds to load.
Once one of the packages is clicked, another web service is called to get the details of that package.
Now, once you click the browser's back button from here, it is supposed to show the list without calling the web services again (from cache). That is what happens on the dev machine, which is HTTP. But on the production server the back button refreshes the list page and I have to wait about 90 seconds again, which is pretty painful.
Is it because of HTTPS? How do I navigate back without refreshing the previous page?
The application is written in PHP. The production server is Red Hat while the dev machine is Windows (if it helps).

It has nothing to do with HTTPS. Caching is controlled by the Expires/Cache-Control HTTP headers, which work the same whether or not the connection is encrypted. Most likely your production server is sending different cache headers than your development box.
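As a rough sketch, the script that renders the list page could send cache headers explicitly (the max-age value is illustrative, not the asker's code). Note that session_start() emits no-cache headers by default, which is a common reason a page gets re-fetched on back navigation:
<?php
// Sketch: allow the package list page to be served from the browser cache
// when the user clicks back. The 10-minute lifetime is an arbitrary example.
session_cache_limiter('private');   // stop session_start() from emitting no-cache headers
session_start();

header('Cache-Control: private, max-age=600');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 600) . ' GMT');

// ... render the (slow) package list here ...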
Having said that, you should also employ server-side caching for such an expensive operation. Perhaps have the data refreshed periodically by a cron job or similar and saved on the server for fast retrieval. Any page that requires 90 seconds of work to be displayed needs to be rethought.
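For the server-side caching, a rough sketch along those lines (the file name, TTL, and fetchPackagesFromVendors() are hypothetical placeholders for the actual SOAP aggregation):
<?php
// Cache the expensive SOAP aggregation on disk and reuse it for 15 minutes.
function getTourPackages()
{
    $cacheFile = sys_get_temp_dir() . '/tour_packages.cache';
    $ttl = 900; // seconds

    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return unserialize(file_get_contents($cacheFile));
    }

    $packages = fetchPackagesFromVendors(); // hypothetical: the slow third-party SOAP calls
    file_put_contents($cacheFile, serialize($packages), LOCK_EX);

    return $packages;
}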

Related

symfony 3.4 FirewallListener Slow / Blocking

When I make a request to a "huge" page with a lot of data to load and then make a second request to a "normal" content page, the normal page is blocked until the "huge" one has loaded.
I activated the profiler and saw that the FirewallListener was the blocking element.
[Profiler screenshots: loaded the huge page, switched tab, loaded the normal one - one screenshot for the "huge" request and one for the "normal" request.]
While the "huge" page was loading, I ran a MySQL query from a PHP CLI script with some time measurements:
Connection took 9.9890232086182 ms
Query took 3.3938884735107 ms
So that is not blocking.
Any ideas on how to solve that?
Setup:
php-fpm7.2
nginx
symfony3.4
It is being blocked by the PHP session.
You can't serve two pages that require access to the same session ID at the same time.
However, once you close/release the session on the slow page, another page can be served for the same session. On the slow page, just call Session::save() as early as possible in your controller. This releases the session. Take into account that anything you write to the session after saving it will not be stored.
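A minimal sketch of that in a Symfony 3.4 controller (class, action, and template names are made up):
<?php
use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Request;

class ReportController extends Controller
{
    public function hugeAction(Request $request)
    {
        $session = $request->getSession();
        $userId  = $session->get('user_id');   // read whatever you still need first

        // Persist and release the session lock as early as possible so other
        // requests with the same session ID are no longer blocked.
        $session->save();

        // ... the long-running work happens here; writes to the session
        // after save() will not be stored ...
        return $this->render('report/huge.html.twig', ['userId' => $userId]);
    }
}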
The reason the firewall takes so long is that debug is enabled.
In debug mode, the listeners are all wrapped in debugging listeners, and all the information in the firewall is profiled and logged.
Try to run the same request with Symfony debug disabled.
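In a standard Symfony 3.4 layout the debug flag is the second argument to AppKernel in the front controller; the prod controller (web/app.php) looks roughly like this sketch:
<?php
use Symfony\Component\HttpFoundation\Request;

require __DIR__.'/../vendor/autoload.php';

// environment 'prod', debug disabled - the firewall listeners are then not
// wrapped in the profiling/debug decorators seen in the profiler above
$kernel = new AppKernel('prod', false);

$request  = Request::createFromGlobals();
$response = $kernel->handle($request);
$response->send();
$kernel->terminate($request, $response);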
We had a similar problem. When a couple of consecutive requests were sent to the server in a short period, it became very slow. I enabled the profiler bar, and a lot of time was spent in the ContextListener.
The problem was that file access on our server is very slow, and session information was stored on the file system, which is the default for Symfony.
I configured my app to use the PdoSessionHandler, and the problem was gone.
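For reference, a rough sketch of wiring up the PdoSessionHandler in plain PHP (the DSN and credentials are placeholders); in a Symfony app you would normally register it as a service and point framework.session.handler_id at it instead:
<?php
use Symfony\Component\HttpFoundation\Session\Storage\Handler\PdoSessionHandler;

$handler = new PdoSessionHandler(
    'mysql:host=127.0.0.1;dbname=app',                        // placeholder DSN
    ['db_username' => 'app_user', 'db_password' => 'secret']  // placeholder credentials
);
// $handler->createTable();   // run once to create the sessions table

session_set_save_handler($handler, true);
session_start();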

PHP script keeps loading until manual refresh from different machine

I am running a PHP web application on a local network (Apache server) that allows clients to write notes that are stored in the database.
Whenever 3 or more client machines connect to the server, two client machines will end up waiting indefinitely for their page to load until the third client performs an action in the application (e.g. AJAX drop-down menus, a confirm button, opening another page, or refreshing the page). The higher the number of clients, the more of them wait for that one machine to perform an action on the server.
At times (though rarely), all clients wait indefinitely and I have to restart the Apache server.
When there are 2 clients connected, everything is smooth (although I think that's just because it reduces the chance of the waiting issue to almost never). I have checked the server settings and all seems well (just some tweaks from the defaults).
I believe this is related to my code but cannot track down the issue.
While researching, I have come across these possible solutions, but I want to ask whether anyone has experienced this issue and how they solved it:
Multi threading
Output Buffering
Disable Antivirus

Issues with PHP cURL between servers, return transfer probably won't catch the response

I have a really weird behavior going on.
I'm hosting tracking software for users that mainly logs mobile traffic. The path is as follows:
1. My client gets a PHP code snippet to put on his website.
2. This code sends a cURL POST (with predefined POST fields such as visitor IP, user agent, host, etc.) to my server.
3. My server logs the data and decides what the risk level is.
4. It then responds to the client server with the status. That is, it sends "true" or "false" back to the client server.
5. The client server gets that response and decides what to do (load different HTML content, redirect, block the visitor, etc.).
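For illustration, a rough sketch of what that client-side snippet can look like; the endpoint URL, field names, and the meaning of "true" are assumptions, not the actual product code:
<?php
// Collect the visitor data described in step 2 and POST it to the tracking server.
$fields = [
    'visitor_ip' => isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '',
    'useragent'  => isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '',
    'host'       => isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : '',
];

$ch = curl_init('https://tracker.example.com/log.php');   // placeholder endpoint
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // capture the "true"/"false" body
curl_setopt($ch, CURLOPT_TIMEOUT, 5);             // don't hang the client's page
$response = curl_exec($ch);
curl_close($ch);

if ($response === 'true') {
    // assumption: "true" = risky visitor -> redirect, block, or serve other HTML
} else {
    // "false", an empty body, or a failed request -> serve the normal page
}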
The problem I'm facing is that, for some reason, all the requests made from my clients' servers to my server are recorded and stored in a log file, but my clients report click loss, as if my server sends back the response but their server fails to receive it.
I should note that there are tons of requests every minute from different clients' servers, and from each client himself.
Could the reason be that CURLOPT_RETURNTRANSFER is not getting any response? Or maybe the problem is cURL overload?
I really have no idea. My server is pretty fast and uses only 10% of its resources.
Thanks in advance for your thoughts.
You have touched a very problematic domain - high-load servers. Your problem can be in so many places that you will really have to spend time to fix it, or at least partially fix it.
First of all, you should understand what is really going on; check out this simplified scheme:
The client's PHP code tries to open a connection to your server; to do this it sends some data over the network to your server
Your server (I assume Apache) tries to accept it if it has the resources - check the max connections settings in the Apache config
If the server can accept the connection, it tries to create a new thread (or use one from the thread pool)
Once the thread is started, it runs your PHP script
Your PHP script does some work, connects to the DB, and sends the response back over the network
The client waits for the answer from step 5, or closes the connection because of a timeout
So, at each point you can have a bottleneck:
Network bandwidth
Max opened connections
Thread pool size
Script execution time
Max database connections, table locks, io wait times
Clients timeouts
And that is not a full list of possible places where the problem can occur and ultimately lead to an empty cURL response.
To start, I suggest you add logging to the PHP code on both sides (client and server) and store all curl_error problems in a text file; at least then you will see which problems occur most often.
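A sketch of that kind of client-side logging (the log path and endpoint are placeholders): record every transport-level cURL failure, not just the business response, so you can see which errors dominate:
<?php
$ch = curl_init('https://tracker.example.com/log.php');   // placeholder endpoint
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query($fields),  // the tracking fields built earlier
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_CONNECTTIMEOUT => 3,
    CURLOPT_TIMEOUT        => 5,
]);
$response = curl_exec($ch);

if ($response === false || $response === '') {
    // curl_errno()/curl_error() explain timeouts, resets, DNS failures, etc.
    $line = sprintf(
        "[%s] errno=%d error=%s http=%d\n",
        date('c'),
        curl_errno($ch),
        curl_error($ch),
        curl_getinfo($ch, CURLINFO_HTTP_CODE)
    );
    file_put_contents('/var/log/tracker_client_errors.log', $line, FILE_APPEND | LOCK_EX);
}
curl_close($ch);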

IP Connection limit exceeded

I used a PHP script to parse a remote XML file and print the output into a div on a web page. Since the output has to be synchronized with the currently playing track, I used JavaScript to reload the div content every 20 seconds. While testing the page I ran into an issue with my hosting and got the message "IP Connection limit exceeded"; the site was not accessible. I changed my IP to solve this. Is there a workaround to parse the metadata without hammering the server and running into web hosting issues?
<script>
// Reload the #reload div with fresh output from current.php every 20 seconds
setInterval(function() {
    $('#reload').load('current.php');
}, 20000);
</script>
Since a web page is a client-side entity, it is by nature unable to receive any data that it hasn't requested. That being said, there are a few options you may consider.
First, I don't know which web host you are using, but they should let you refresh the page (or make a request, as you are doing) more often than once every 20 seconds, so I would contact them about that. A denial-of-service attack would look more like 2 or 3 requests per second per connection. There could be a better answer for this that I'm just not seeing, but at first glance that's my take on it.
One option you may want to consider is a WebSocket, a newer HTML5 feature that lets the web server keep an open connection to the visitor's browser and send packets of data back and forth. This removes the need for the browser to poll the server every 20 seconds. Granted, these are new, and I believe they only work in Safari and Chrome at the moment. I haven't experimented with them but plan to in the future.
In conclusion, I don't know of a better way than polling the server every so often to check for changes. Based on my browser's XMLHttpRequest tab, this is how Gmail checks for new messages. If your host won't allow more requests per time interval, perhaps decrease the frequency at which you poll the server, or switch to a different host.

How are AJAX progress indicators for modern PHP web applications implemented?

I've seen many web apps that implement progress bars; however, my question is about the non-uploading variety.
Many PHP web applications (phpBB, Joomla, etc.) implement a "smart" installer to not only guide you through the installation of the software, but also keep you informed of what it's currently doing. For instance, if the installer was creating SQL tables or writing configuration files, it would report this without asking you to click. (Basically, sit-back-and-relax installation.)
Another good example is with Joomla's Akeeba Backup (formerly Joomla Pack). When you perform a backup of your Joomla installation, it makes a full archive of the installation directory. This, however, takes a long time, and hence requires updates on the progress. However, the server itself has a limit on PHP script execution time, and so it seems that either
The backup script is able to bypass it.
Some temp data is stored so that the archive is appended to (if archive appending is possible).
Client scripts call the server's PHP every so often to perform actions.
My general guess (not specific to Akeeba) is with #3, that is:
Web page JS -> POST foo/installer.php?doaction=1 SESSID=foo2
Server -> ERRCODE SUCCESS
Web page JS -> POST foo/installer.php?doaction=2 SESSID=foo2
Server -> ERRCODE SUCCESS
Web page JS -> POST foo/installer.php?doaction=3 SESSID=foo2
Server -> ERRCODE SUCCESS
Web page JS -> POST foo/installer.php?doaction=4 SESSID=foo2
Server -> ERRCODE FAIL Reason: Configuration.php not writable!
Web page JS -> Show error to user
I'm 99% sure this isn't the case, since that would create a very nasty dependency on the user to have Javascript enabled.
I guess my question boils down to the following:
How are long-running PHP scripts (on web servers, of course) handled so that they are able to "stay alive" past the PHP maximum execution time? If they don't "cheat", how are they able to split up the task at hand? (I notice that Akeeba Backup does acknowledge the PHP maximum execution time limit, but I don't want to dig too deep to find such code.)
How is the progress displayed via AJAX+PHP? I've read that people use a file to indicate progress, but to me that seems "dirty" and puts a bit of strain on I/O, especially for live servers with 10,000+ visitors running the aforementioned script.
The environment for this script is one where safe_mode is enabled and the execution time limit is generally 30 seconds. (Basically, a restrictive, free $0 host.) This script is aimed at all audiences (it will be made public), so I have no control over which host it will be on. (And this assumes that I'm not going to blame the end user for having a bad host.)
I don't necessarily need code examples (although they are very much appreciated!), I just need to know the logic flow for implementing this.
Generally, this sort of thing is stored in the $_SESSION variable. As far as the execution timeout goes, what I typically do is have a JavaScript timer that, every x seconds, requests a small PHP status script and sets the innerHTML of a status div with its output. When that script executes, it doesn't "wait" or anything like that. It merely grabs the current status from the session (which is updated by the script(s) actually performing the installation) and then outputs it in whatever fancy way I see fit (status bar, etc.).
I wouldn't recommend any direct I/O for status updates. You're correct in that it is messy and inefficient. I'd say $_SESSION is definitely the way to go here.
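A minimal sketch of the status endpoint side of that approach (the file and key names are made up); it assumes the installer script writes its progress into $_SESSION['progress'] and calls session_write_close() after each update so this endpoint is not blocked by the session lock:
<?php
// status.php - polled by the page's JavaScript every few seconds
session_start();
$progress = isset($_SESSION['progress']) ? (int) $_SESSION['progress'] : 0;
session_write_close();   // release the session immediately; we only read from it

header('Content-Type: application/json');
echo json_encode(['progress' => $progress]);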
