AJAX / PHP not working on new WAMP server

We've moved a php page from one WAMP server to another and the chap who created it has left the business. I know nothing about AJAX, so am struggling!
I've scoured the code on both the old server and new, and it's exactly the same but for some reason the AJAX on the new server isn't working correctly, whereas on the old server it's fine.
What it's supposed to do is show a list of people from our database, allow users to update those records and then show the new, updated record without refreshing the page.
On the old server, it works a treat but on the new server it doesn't load the new data through. It's probably something ridiculously basic, but I'm scratching my head (mainly as I know nothing about the technology!)
Any help much appreciated.

Have you made sure the script that you are requesting with AJAX is actually there? You have said you've uploaded the script but is the path exactly the same? Different environments may have different paths to the same script.
Also how are you making the AJAX request? Are you using (for example) the jQuery library that might exist on one environment but not the other? Are they both using the same version of jQuery?
Use the JavaScript console to see what errors you're getting and go from there. Chrome and Firefox have the best ones by default.
https://developer.mozilla.org/en/Error_Console
Before any of this though I would make sure the original script is working and returning the right results.
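To surface the actual error, here's a quick sketch you can drop at the top of the PHP script the AJAX call requests (yourAjaxPage.php in the example further down is just a placeholder name); remove it again once the problem is found:

<?php
// Temporary debugging only -- shows PHP errors in the AJAX response instead of a bare 500.
ini_set('display_errors', '1');
ini_set('display_startup_errors', '1');
error_reporting(E_ALL);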

First check:
Look into the JavaScript code and find where the actual URL is constructed.
It will look like:
xmlhttp.open("GET","yourAjaxPage.php",true);
but the xmlhttp object may have a different name in your code.
Now, simply alert() this URL, document.write() it, or use anything else that shows you the URL.
Put that URL into the browser's address bar and look at the response.
If the page relies on a session, make sure you test in the same browser (the same tab, for example) where you expect the result.
Now you will see the response from the server to the request you just made. This should get you going finding the problem.

With the error console we were seeing an error 500 from the server (which seemed odd and rather unspecific...!)
Having had someone we know take a look at the code, it turned out that the old server was quite happy with mssql_close($con); but the new server wasn't, so simply replacing it with sqlsrv_close($con) meant the page the AJAX was calling could complete and return the results as expected!
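For anyone hitting the same thing, here's a minimal sketch of the kind of change involved, assuming the connection is opened with sqlsrv_connect(); the server name, database and credentials are placeholders:

<?php
// Hypothetical connection details, for illustration only.
$con = sqlsrv_connect('SERVERNAME\\SQLEXPRESS', array(
    'Database' => 'mydb',
    'UID'      => 'username',
    'PWD'      => 'password',
));
if ($con === false) {
    die(print_r(sqlsrv_errors(), true));
}
// ... run queries with sqlsrv_query($con, $sql) ...
// The old code called mssql_close($con); the mssql extension isn't available
// on the new server, so the sqlsrv equivalent is used instead:
sqlsrv_close($con);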
Always the little things...!

Related

how is already loaded php script processed by server if there is another request from the same page

I'm a real beginner and am trying to understand how things work more than to develop stuff, and right now I can't move forward until someone gives me an accurate answer about a small detail of the following issue.
Let's assume there's a page with PHP code at http://example.com/blablabla and a link on it like http://example.com/blablabla?file=number_1 which is used to modify some parts of this page.
What I really don't know is what happens to the already loaded script from http://example.com/blablabla when there's a request from this page to http://example.com/blablabla?file=number_1.
The questions actually are:
Is the code from the already loaded page processed again every time ?file=number_1 is requested?
It seems very strange to me, because if the first request to http://example.com/blablabla selected, via PHP, a huge amount of data from the database, and I only want to modify a small part of the page with ?file=number_1, why would the server need to process the database request one more time?
My experience tells me that the server does process the already loaded code again,
BUT based on this I have a very SLIGHT ASSUMPTION, which I'm not really sure about, though it seems very logical:
The real trick is that the code in the first page has one VARIABLE and its value is changed
by the second request, so I assume the server sees this change and modifies only the part of the code that uses this VARIABLE. For example, the code in http://example.com/blablabla looks like this:
<?php
/* some code above */
if (empty($_GET['file'])) {
    /* do smth */
} else {
    /* do smth else */
}
/* some code below */
?>
With the request http://example.com/blablabla?file=number_1, the server processes only the part of the original code that involves the changed $_GET['file'] variable.
Is that totally my imagination, or does it make some kind of sense?
Would someone please explain it to me? Much appreciated.
HTML is a static language. PHP and other similar languages allow you to have dynamic pages, but because the result still has to be sent over as HTML, you still get a whole new page.
The ?file=number_1 just sends a GET request to the page with extra information, but the page itself still has to be rerun in order to change the information and send the new static HTML page back.
The database query can be cached with more advanced programming in PHP or other similar languages, so the server doesn't have to re-query the database, but the page itself is still rerun completely.
There are more advanced methods that allow client-side manipulation of the data, but from your example I believe the page is being rerun with a GET request on the server side and a new page is being sent back.
I believe this is what you're asking about.
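To make the caching point above a bit more concrete, here's a rough sketch (the cache file, TTL, credentials and query are all placeholders, not production code):

<?php
// Cache an expensive query result to a file so repeated requests to the same
// script don't hit the database every time. The script itself still runs on
// every request; only the query is skipped while the cache is fresh.
$cacheFile = '/tmp/huge_result.cache';   // hypothetical cache location
$cacheTtl  = 300;                        // seconds the cached result stays valid
if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $cacheTtl) {
    $rows = unserialize(file_get_contents($cacheFile));
} else {
    $pdo  = new PDO('mysql:host=localhost;dbname=example', 'user', 'pass'); // placeholder credentials
    $rows = $pdo->query('SELECT * FROM big_table')->fetchAll(PDO::FETCH_ASSOC);
    file_put_contents($cacheFile, serialize($rows));
}
// ... build the page from $rows as usual ...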
Yeah, thank you both. It certainly clarified the issue: every script (plain HTML or generated by PHP) runs again with each request, and only external kinds of data, like image files and, as follows from the previous answer, MySQL results, can be cached and used via PHP to output the necessary data.
The main point was that I mistakenly hoped that once the page is loaded and consequently cached in the computer's memory, appending a QUERY STRING to its URL would of course send a new GET request, but the retrieved response would affect only part of the page without rerunning it completely.
Now I have to reconsider my building strategy: load only as much data as is required from each requested URL.
If you are looking for a way to edit the page dynamically, use JavaScript.
If you need to run code server side, invisibly to the client, use PHP.
If you need to load content dynamically, use AJAX, a technique built on JavaScript.
I hope that helps.
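Tying that back to the original example, here's a sketch of how the same script could return only the small changed piece when called with ?file=number_1 (for instance from an AJAX request) instead of rebuilding the whole page; render_fragment() and render_full_page() are hypothetical helpers:

<?php
// blablabla.php -- single entry point (sketch only).
if (!empty($_GET['file'])) {
    // An AJAX-style request: send back just the piece that changed and stop,
    // so the heavy work of rebuilding the whole page is skipped.
    echo render_fragment($_GET['file']);   // hypothetical helper
    exit;
}
// A normal request: build and send the complete page.
echo render_full_page();                   // hypothetical helper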

Incorrect URL in Wordpress leads to broken images

I just transferred a website from their previous host to hosting with me. Obviously, I had to change some of the links that pointed to the images to make them display correctly. Unfortunately, it's a huge mess. There were some links stored in the MySQL database, but I got into MySQL and replaced all of those with the correct link. Originally, it linked to
http://localhost/...
I now need it to link to
http://[subdomain].[website].net/
I've gone through every line of code I could find with fgrep on Linux and I can't find where it's inserting localhost. Any ideas where localhost could be stored, if not in the database (as far as I can tell) and not in the physical code? I'm assuming it's a PHP variable somewhere. I'm not sure which, but I already made sure that
<?php echo get_template_directory_uri(); ?>
was set to the correct URI. Any help would be greatly appreciated. Thank you.
EDIT
I tried to replace the database information correctly from a clean copy of the database. I used the serialized-data PHP script and it didn't work. The images are still not showing up and they're still routing back to
http://localhost
I'm not sure what to do about it. Any more suggestions?
1) Check page source and see exactly where the image URLs point to. Some missing image links may be hardcoded to point to the theme folder or other locations.
2) Did you also move /wp-content/uploads?
3) Dumping the database and doing a find/replace with a text editor will break URLs that are in serialized data. You have to use a tool to correctly deserialize/re-serialize data. See interconnectit.com WordPress Serialized PHP Search Replace Tool
If you're sure that you replaced every occurrence of localhost in the database, then the most likely next culprit is the browser cache, so I recommend deleting your browser's cache just to be sure. How to do that depends on your browser, but, for example, in Internet Explorer open the Developer Tools (F12) and go to Cache -> Erase cache for this domain.
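One more place worth checking: WordPress stores the site URL in the wp_options table (the siteurl and home rows), and you can also override those values from wp-config.php while hunting down leftover localhost references. A sketch, with a placeholder domain:

// In wp-config.php -- these constants override the siteurl/home values stored
// in the wp_options table; replace the placeholder domain with the real one.
define( 'WP_HOME',    'http://subdomain.example.net' );
define( 'WP_SITEURL', 'http://subdomain.example.net' );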

Is there a way to let cURL wait until the page's dynamic updates are done?

I'm fetching pages with cURL in PHP. Everything works fine, but I'm fetching some parts of the page that are calculated with JavaScript a fraction of a second after the page is loaded. cURL already sends the page's source back to my PHP script before the JavaScript calculations are done, resulting in wrong end results. The calculations on the site are fetched via AJAX, so I can't reproduce them in an easy way. Also, I have no access to the target page's code, so I can't tweak that page to fit my (cURL) fetching needs.
Is there any way I can tell cURL to wait until all dynamic traffic is finished? It might be tricky, due to some JavaScript that keeps sending data back to another domain, which might result in long hangs. But at least I could then test whether I get the correct results back.
My Developer toolbar in Safari indicates the page is done in about 1.57s. Maybe I can tell cURL statically to wait for 2 seconds too?
I wonder what the possibilities are :)
cURL does not execute any JavaScript or download any files referenced in the document. So cURL is not the solution for your problem.
You'll have to use a browser on the server side, tell it to load the page, wait for X seconds and then ask it to give you the HTML.
Look at: http://phantomjs.org/ (you'll need to use node.js, I'm not aware of any PHP solutions).
With Peter's advice and some research, and although it's late, I have found a solution. Hope someone finds it helpful.
All you need to do is request the AJAX call directly. First, load the page that you want to fetch in Chrome, go to the Network tab and filter on XHR.
Now you have to find the AJAX call that you want. Check the response to verify it.
Right-click on the name of the AJAX call and select Copy -> "Copy as cURL (bash)".
Go to https://reqbin.com/curl, paste the cURL command and click Run. Check the response content.
If it's what you want, then move to the next step.
Still in the reqbin window, click Generate code, choose the language you want it translated into, and you will get the desired code. Now integrate it into your code however you want.
Some tips: if a test run on your own server returns a 400 error or nothing at all, set POSTFIELDS to empty. If it returns 301 Moved Permanently, check whether your URL should be https or not.
I don't know a lot about the page you are retrieving or the calculations you want to include, but it could be an option to cURL straight to the URL serving those AJAX requests. Use something like Firebug to inspect the AJAX calls being made on your target page and you can figure out the URL and any parameters passed. If you do need the full web page, maybe you can cURL both the web page and the AJAX URL and combine the two in your PHP code, but then it starts to get messy.
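A rough sketch of that idea in PHP; the endpoint URL and the extra header are assumptions, so find the real request by inspecting the XHR calls in the Network tab (or Firebug) first:

<?php
// Fetch the AJAX endpoint directly instead of the full page (sketch only).
$endpoint = 'https://example.com/ajax/calculate?item=123'; // hypothetical endpoint
$ch = curl_init($endpoint);
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,         // return the body instead of printing it
    CURLOPT_FOLLOWLOCATION => true,         // follow redirects (e.g. a 301 to https)
    CURLOPT_TIMEOUT        => 10,           // don't hang forever
    CURLOPT_HTTPHEADER     => array(
        'X-Requested-With: XMLHttpRequest', // some endpoints check for this header
    ),
));
$response = curl_exec($ch);
if ($response === false) {
    echo 'cURL error: ' . curl_error($ch);
} else {
    echo $response; // often JSON; decode with json_decode() if needed
}
curl_close($ch);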
There is one quite tricky way to achieve it using PHP. If you really want it to work in PHP, you could potentially use a Codeception setup in conjunction with Selenium and drive the Chrome WebDriver in headless mode.
Here are some general steps to get it working.
Make sure you have Codeception in your PHP project:
https://codeception.com
Download chrome webdriver:
https://chromedriver.chromium.org/downloads
Download selenium:
https://www.seleniumhq.org/download/
Configure it according to the Codeception framework documentation.
Write a Codeception test where you can use an expression like $I->wait(5) to wait 5 seconds, or $I->waitForJs('js expression here') to wait for a JS script to complete on the page (see the sketch after these steps).
Run the test written in the previous step using the command php vendor/bin/codecept run path/to/test
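A minimal test sketch under those assumptions (the class name, page path and selector are placeholders; it assumes the acceptance suite's WebDriver module points at headless Chrome via chromedriver/Selenium):

<?php
// tests/acceptance/FetchRenderedPageCest.php -- hypothetical file name.
class FetchRenderedPageCest
{
    public function grabPageAfterAjax(AcceptanceTester $I)
    {
        $I->amOnPage('/target-page');                  // placeholder path
        $I->wait(2);                                   // crude: wait a fixed 2 seconds
        $I->waitForElement('#calculated-result', 10);  // or wait for a specific element (placeholder selector)
        $html = $I->grabPageSource();                  // the HTML after the JS has run
        file_put_contents(codecept_output_dir('rendered.html'), $html);
    }
}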

Autocomplete in .php framework doesn't seem to work any longer

I have a basic autocomplete and add-to-the-database function and, for some reason, it has stopped working completely, and I don't get any useful information from Firebug or elsewhere about what the problem could be.
I am guessing it is something simple, but don't know where to look.
This is the library where I am making the call:
http://github.com/allyforce/AF-upload/blob/master/Library/Target1.class.php
What browser are you debugging on? Firefox displays Ajax errors correctly, but no others do unless you apply this patch (which will be in the next release of QCubed):
http://trac.qcu.be/projects/qcubed/ticket/432
Someone found the right answer in terms of removing a QEvent.

how do you take a snapshot of your current browser window using php

I've tried searching everywhere, but there seems to be no implementation available other than having the client use a file (a batch/exe of some sort).
You just can't do it. PHP is a server-side scripting language; maybe you can do that using JavaScript, but I'm not even sure about that.
I know someone who implemented such a service, but he actually had to use a Mozilla browser, which opened a script (I think it was not JS, maybe Perl or C/C++) that made a screenshot and uploaded it.
I'm assuming you mean "your" in the general sense. If you mean "how does one take a screenshot...", you generally hit the print screen key. If you're trying to capture your users' browser output, I'd say that it's probably not possible. If it were, the best you could get is the output of what you wrote yourself.
Google Gears might be hackable to do something close, if you can simulate the print screen key press with JS and get the file to save somewhere gears can access.
You can't do that in PHP, as PHP is running on the server, and not the client.
To get screenshots of the browser, you can take a look at, for instance, this list.
If you are looking for an automated solution to take screenshots of web pages opened in a browser window, you could also look at this question: How to capture x screen using PHP, shell_exec and scrot, and its answers.
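For reference, a sketch of that shell_exec + scrot approach; it assumes PHP runs on a machine with an X display available (e.g. Xvfb) and scrot installed, and it captures the server's screen, not the visitor's browser window:

<?php
// Take a screenshot of the server's X display and send it back (sketch only).
$display = ':0';                   // assumption: an X display exists on the server
$outfile = '/tmp/screenshot.png';  // hypothetical output path
$cmd = sprintf('DISPLAY=%s scrot %s 2>&1', $display, escapeshellarg($outfile));
$output = shell_exec($cmd);
if (is_file($outfile)) {
    header('Content-Type: image/png');
    readfile($outfile);
} else {
    echo 'Screenshot failed: ' . $output;
}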
And, finally, without singling out any particular post, you can try a search on SO; something like screenshot browser, sorted by relevance, seems to bring up some interesting posts :-)
Good luck !
