JavaScript not working after server reboot - PHP

Earlier today my server crashed and I had to reboot it. Since the reboot I've been seeing some strange behavior in some of my PHP pages. In particular, some JavaScript doesn't seem to be working at all. (There are some other issues, but I suspect they stem from the failing JavaScript.)
I'm still getting used to web programming and servers, so I have no idea why this JavaScript wouldn't be working after the reboot. I can post the script here if need be; I don't know whether this is a generic issue or something specific to my script.
For the record, it's an Apache server on a Red Hat machine.

The JavaScript is executed in the client browser, not on the server, so as long as you did not modify the JavaScript files, the reboot itself should not have broken anything.
Did you try accessing your website from another computer?

From your comment it appears that the page is loading a JavaScript library from a network location which it cannot find (http://wks-l0000120674/nephiere/validation.js). The script will only load if the client machine (not the server) is connected to the network AND the wks-l0000120674 workstation is up and on the network AND the client machine has rights to request files from wks-l0000120674. The reboot may have knocked wks-l0000120674 off the network (assuming wks-l0000120674 is part of the same network as the server and the server also provides DHCP services). Make sure you can still reach this location from the client machine.
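For illustration, something along these lines usually resolves it, assuming you can copy validation.js onto the Apache server itself (the target path below is only a guess):

<!-- before: depends on the wks-l0000120674 workstation being reachable from the client -->
<script type="text/javascript" src="http://wks-l0000120674/nephiere/validation.js"></script>

<!-- after: serve the file from the same server as the page, via a relative URL -->
<script type="text/javascript" src="/nephiere/validation.js"></script>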

Related

Why do local test servers open a "save file" dialog?

I have been trying to develop web pages locally (on Windows 10) and run them in my local browser (Chrome, Vivaldi). Right now I have three different ways to run simple servers locally: PHP's built-in server, Python's http.server module, and VS Code's Live Server. When I run the PHP server, I can execute PHP code properly, as one would expect. But calling PHP URLs using the other two, I get a "Save File" dialog! Where is that coming from? Instead of a simple "not found" I get the dialog. So I have two questions: (1) Why am I getting the save file dialog? (2) Is it possible to process PHP files using Live Server or Python's http.server module (which I don't expect can ever support PHP)?
If the save dialog is being shown, it's because the server can't interpret PHP code. You have to check those servers' configurations to see whether they integrate with PHP (if they can do that at all).
Good questions. Erick has answered the 1st one. I'll just elaborate more on it and then answer the 2nd one.
Why do you get save file dialog?
At a high level, a web server serves files. When serving HTML/CSS/JS files to the browser, life is easy. Your browser understands HTML/CSS/JS and knows how to render it for the user. If your browser were sent an unprocessed PHP file (assuming that file was present), the browser wouldn't know what to do with <?php .. ?> tags and such. So the browser offers to let the user download the file. It's the same thing with a zip file: if you went to http://someurl.com/abc.zip and the web server found that file under the root of someurl.com, it would send it to the browser and the browser would offer the user the download. There's more to it than just that.
So, how does a web server process PHP files? It depends on the web server, but the common thread is that they need help processing PHP files. The web server is configured to send the request to php.exe or some other system such as PHP-FPM, which processes the file and returns the result to the web server to send to the user. Processing the file converts echo "<div>$variable</div>"; into clean HTML such as <div>I am awesome</div>. This processing system (php.exe or PHP-FPM) tag-teams with the web server to serve the browser something it can render.
Is it possible to cross-render languages?
Yes, you can, in multiple ways. One common way is to find the best processing system for the language of choice. For example, PHP can be processed with PHP-FPM running as a service. So, http://someurl.com/test/index.php could run through PHP-FPM. Python may use WSGI, and you may choose gunicorn to process Python files. In that case, your web server can be asked to send Python-related directories/subdomains directly to gunicorn (essentially acting as a proxy).
Reverse proxy
Let's say you have multiple sites with multiple language needs.
http://py.someurl.com serves Python/Django
http://someurl.com serves straight HTML
http://ph.someurl.com serves PHP
http://js.someurl.com is powered by NodeJS
py.someurl.com could run on the server under the gunicorn web server (or another WSGI-friendly server) on port 8000. Node could be serving via the Express web server on port 9000.
You could then run an NGINX server that serves the straight HTML and also serves ph.someurl.com by sending requests to the PHP-FPM service. It can also be configured to take all requests to js.someurl.com and hand them off to http://localhost:9000, where Node services the request and sends the output back to NGINX, which sends the response to the browser. Similarly, requests to py.someurl.com can be sent to localhost:8000, where gunicorn processes the request and sends the response back to NGINX, which forwards it to the browser.
From a user's perspective, all they know is the NGINX server. Everything else in the background is known only to NGINX. NGINX, in that case, serves as both a web server and a reverse proxy.
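As a rough sketch (not a drop-in config; the document roots and the PHP-FPM socket path are just placeholders), the NGINX side of that setup could look something like this:

# ph.someurl.com: .php requests are handed to the PHP-FPM service over FastCGI
server {
    server_name ph.someurl.com;
    root /var/www/ph;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;   # placeholder socket path
    }
}

# js.someurl.com: everything is proxied to the Node/Express app on port 9000
server {
    server_name js.someurl.com;
    location / {
        proxy_pass http://localhost:9000;
    }
}

# py.someurl.com: everything is proxied to gunicorn on port 8000
server {
    server_name py.someurl.com;
    location / {
        proxy_pass http://localhost:8000;
    }
}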

PHP execution very slow with simple hello world script and no internet connectivity

I have an application that makes a lot of requests to external hostnames, and because of this I needed to test how the application behaves when there is no internet connectivity. So I blocked internet access for the server that runs the app on my router, in such a way that I could still reach the server from within the LAN. However, I'm facing an issue, because even with a script as simple as this
<?php
echo "Hello World!";
I can never access that page from the same LAN. Chrome just loads forever. However, I have no problems accessing the server via PuTTY over SSH from the same LAN, so it is not blocked on the LAN. I can also request a random non-existent page on the server from the client's browser, and I get a 404 error at once.
It's as though my code, even though it doesn't make any external requests, is still dependent on the WAN being available, which I do not want.
EDIT: It should be noted that the server is running Lighttpd as the main web server, along with PHP 7.0.

Remote Xdebug PHP debugging stuck during server-to-server communication

I am developing a web service using PHP which fetches data through a curl call from a foreign website.
For development, I use an Apache web server with PHP on a Raspberry Pi in my local network (call it Server A).
For testing purposes, I've also set up a dummy service to avoid sending too many useless or bad requests to the foreign service. This dummy runs on another Raspberry Pi with the same setup; call it Server B. On both Servers A and B, I've deployed Xdebug. For development, I use NetBeans.
When I remotely debug PHP scripts on Server A, everything runs fine, unless I run/debug a script that makes a curl call to the dummy service on Server B. In that case, the execution halts until I exit debugging mode. Once I do, the script finishes normally.
I am not sure what makes the script halt, so I have no idea how to avoid this.
What can I do to make debugging work in this case?
OK, found a solution. It is evidently a problem when both servers use the same port (9000) for communicating with the PC I am debugging on.
After setting one of the servers to port 9001 instead, I can attach the debugger to one server at a time, depending on which port NetBeans is set to listen on.
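For anyone else hitting this, the relevant settings look roughly like the sketch below, assuming Xdebug 2 (Xdebug 3 renamed them to xdebug.mode, xdebug.client_host and xdebug.client_port); the IP is just a placeholder for the PC running NetBeans:

; php.ini / xdebug.ini on Server B (Server A keeps the default port 9000)
xdebug.remote_enable = 1
xdebug.remote_host = 192.168.1.10   ; placeholder: the PC running NetBeans
xdebug.remote_port = 9001           ; different from Server A so the two debug sessions don't collide

NetBeans then has to be set to listen on the matching port in its debugging options.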

Apache localhost not responding to clients until reset

I have set up a local server on a regular desktop (not server hardware) and have 3-4 client machines accessing the local web application I developed on the server via a Wi-Fi router (the server is connected to the router via cable; all clients via Wi-Fi).
When two of the clients are connected to the application all is well, but when a third (or more) machine joins in, there are periods where no machine gets any service from the server (the application web page keeps loading until I manually reset Apache on the server via Services). At times the server responds when one of the clients refreshes their page, but most of the time I have to restart the Apache server.
This occurs roughly once an hour on average (6 hours of continuous usage) as the clients are using the application.
Server is running Windows 7 and Apache v2.4 with PHP v5.4
Server and all client machines are running AVG internet security
Firewall is handled by AVG Internet Security
Is this issue due to the code in my application, the desktop not being able to handle requests like a server machine, the antivirus, or a mix of the three?
If so, how can I set up the server to reset automatically?
Thanks
UPDATE
It is an application where users write reports on files after reviewing information
- Frequent SQL requests for file data
- No images
- Some pages contain multiple SQL queries that build the page content
- The network has no internet connection
- The code does not request external information from the internet
- All client machines run the application in the Google Chrome web browser
It rarely happens, but sometimes the number of connections is restricted by a third-party interface used by the application. We are unable to figure out the reason without more details, such as what your app does and what error code Apache or HTTP is returning.
This kind of situation is difficult to track down, especially on Windows, where diagnostic tools aren't as readily available as on other platforms.
I suppose you can try to check the antivirus by either running the server and clients with no antivirus at all for a few hours, or by disabling and re-enabling the antivirus when the hang-up occurs.
Apart from that, you would need to pinpoint where the error occurs:
in the connection stage (Windows OS is the problem)
in the response stage (Apache is the problem - try fiddling with the child spawning parameters)
in the management stage (PHP is the problem - you can probably check this by changing the setup from PHP-as-a-module to PHP-as-CGI-application)
in the database stage (that is, the connection to the SQL server). You can check this by setting up test pages that use different combinations of session, database, and output buffering, and seeing whether those pages remain reachable even when the application is hung up.
For an example of the last, if a page such as
<?php print date("H:i:s"); phpinfo(); ?>
remains reachable and correctly refreshes (that's why the date() call is there) even when the app does not respond, this demonstrates that Windows, Apache and PHP are "innocent", and either the SQL server has issues, or you are not interfacing with it correctly. It might, for example, be the case (though unlikely in this instance) that the resident PHP module is accumulating connections to the SQL server and not releasing them, so that after a while you need to stop Apache (thereby freeing the module) and restart it.
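A sketch of a second probe page, extending the one above with a database connection (the server, credentials and database name are placeholders, and I'm assuming a MySQL backend); if the date/phpinfo page keeps responding while this one hangs, the SQL side is the likely culprit:

<?php
print date("H:i:s");                                        // PHP itself is still responding
$db = mysqli_connect("localhost", "user", "pass", "mydb");  // placeholder credentials
if (!$db) {
    print " - DB connection failed: " . mysqli_connect_error();
} else {
    print " - DB connection OK";
    mysqli_close($db);                                      // release the connection immediately
}
?>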
If leaked connections do turn out to be the problem, even if it's not a "real" fix, you can set up Apache so that all children die and are replaced after a small number of requests (the default was once 150, but when leaks all but disappeared, I believe the default became 0, meaning Apache children no longer die; check it, I might well misremember).
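For reference, the directive in question, as a sketch for httpd.conf (on Apache 2.4 it is called MaxConnectionsPerChild, on older versions MaxRequestsPerChild; 0 means the children are never recycled):

# httpd.conf: recycle each Apache child process after it has served 150 connections
MaxConnectionsPerChild 150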

Can you set up an application to be only JavaScript on the client side and PHP on a remote server?

We are planning the architecture of an enterprise application that will reside behind the firewall on the client's server. We would like to stick with PHP for the server-side language and ExtJS for the client side. However, we do not want the client to be required to install Apache, etc. on their Windows machine. I have a few ideas for the architecture, but I would like to know whether I can package the application for download so that the client receives only the JavaScript, which communicates with our single server instance for the server-side computations. I believe this is best done with an API. Our clients use MS SQL Server 2008 on Windows servers, and about 10% of them are allowed to run Linux on a virtual machine.
Is this correct? Your thoughts and suggestions are greatly appreciated.
The short answer to your question is - yes you can.
ExtJS is a JavaScript library. As such, it requires a browser to run, not a server.
You can write a whole application with ExtJS that will be downloaded and run in any PC browser (or on a Chromebook) without any server installation being involved.
If you wish your clients to download the client-side application, no problem (they will simply have to open an index file in their browser). But you can equally just put the JavaScript files on your server and refer your users to the corresponding URL, so you can update the code of your app easily without your clients needing to constantly download updates. Either way, your clients won't need Apache or any other server installed.
ExtJS allows server communication, and the nature of the back end is irrelevant (it can be PHP on Apache, ASP.NET, Ruby on Rails, anything).
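A minimal sketch of what the PHP side of that communication could look like (the file name and the permissive CORS header are assumptions; since the ExtJS app is opened from the client's machine rather than from your domain, the API generally has to allow cross-origin requests):

<?php
// api/status.php - hypothetical endpoint the ExtJS client calls on your remote server
header("Access-Control-Allow-Origin: *");   // loosened for the sketch; restrict in production
header("Content-Type: application/json");
echo json_encode(array("success" => true, "serverTime" => date("c")));
?>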
