I have a program written and working well that uses $_GET parameters, submitted from a form, to run a SQL query and display the results.
I'm currently migrating the program into the WordPress framework and am running into some strange behaviour.
Although the page loads fine with no parameters, invoking the PHP parameters in the standard way leaves the page with no idea what is going on, and it returns a 404 error.
URLs sent to the browser look like
"home/program/?parameter1=1&parameter2=2 ...etc. for up to 28 parameters"
-> 404 error.
Strangely, the browser can be made to at least recognize the page by adding a '$' before parameter1; however, parameter1 no longer behaves properly.
"home/program/?$parameter1=1 & paraemeter2 =2...etc"
-> parameters after the first work as expected
What could explain WordPress's reluctance to interpret the code in the standard way?
What effect is the "$" having?
The first parameter name was "artist", but this apparently caused a conflict with some already-used or reserved word somewhere in the vastness of WordPress and the associated theme.
Changing every instance of "artist" to "producer" (in the code, and in all the database tables and stored procedures) cleared up the problem :).
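For anyone hitting the same thing, a plausible explanation (an assumption, since the thread never pinned it down) is that "artist" collided with a query variable that WordPress, the theme, or a plugin had already registered, so WordPress tried to route the request itself and 404'd. A small sketch of how to check for and avoid such collisions; the hooks are core WordPress, the prefixed variable name is made up:

<?php
// List every GET key WordPress will act on; if your parameter name
// appears here, something already owns it and may hijack the request.
add_action( 'parse_request', function ( $wp ) {
    error_log( print_r( $wp->public_query_vars, true ) );
} );

// Safer still: register your own prefixed query variable so nothing collides.
add_filter( 'query_vars', function ( $vars ) {
    $vars[] = 'myplugin_producer'; // hypothetical prefixed name
    return $vars;
} );
?>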
One semi-comprehensible error message could have avoided a lot of angst.
Thanks for the consideration
Related
I have an ecommerce store website running on WordPress. I'd like to include a section with a random customer's product review, so that every time someone accesses the page, there will be a different comment there.
I'm not used to PHP, but I managed to create a shortcode which takes a random comment and returns the proper HTML. It is working fine (in edit mode, every time I insert the shortcode a different comment appears).
My issue is that when I leave the page and return, the previous comment is still there. I believe this is caused by caching, but I wouldn't like to disable the cache for the whole page.
How do I force the shortcode to run again (I don't know if that's the right way to explain it) and make sure that on every access a different comment appears?
One solution I thought of is to have JS code which would do pretty much the same thing my PHP code does, using the WooCommerce API to get the data. But I'm wondering if there is a simpler solution, like excluding that specific section from the cache or re-running the shortcode.
Thanks!
JS can't do what PHP does here: at most it can make an AJAX call to the backend, which then runs a query for a random comment and returns it, and you render the result afterwards. It's fairly standard, but overkill for your case.
Instead, you're going to want to check whether your caching mechanism supports ESI (Edge Side Includes) or something else that excludes parts of your page from being cached. If it doesn't, a sketch of the AJAX route follows.
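A minimal sketch of that AJAX endpoint, in case cache exclusion isn't available. The wp_ajax_* hooks, admin-ajax.php, get_comments() and wp_send_json() are standard WordPress; the action name, function name and response shape are assumptions for illustration:

<?php
add_action( 'wp_ajax_random_review', 'myprefix_random_review' );
add_action( 'wp_ajax_nopriv_random_review', 'myprefix_random_review' );

function myprefix_random_review() {
    // WooCommerce product reviews are ordinary comments on 'product' posts.
    $comments = get_comments( array(
        'status'    => 'approve',
        'post_type' => 'product',
    ) );
    $comment = $comments ? $comments[ array_rand( $comments ) ] : null;

    wp_send_json( array(
        'author' => $comment ? $comment->comment_author  : '',
        'text'   => $comment ? $comment->comment_content : '',
    ) );
}
?>

The page then fetches /wp-admin/admin-ajax.php?action=random_review with a few lines of JavaScript and injects the result, so the cached HTML can stay cached while the comment stays fresh.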
I have an array of WordPress Theme Customizer settings that I want to register and output to the Theme Customizer; they all register fine and work perfectly.
This is where my problem is: $wp_customize->add_setting won't seem to pass the Theme Check plugin's checks. It doesn't seem to like using a variable in place of an ID for the setting, even though the variable contains the setting name. When I loop through the array I have, everything works fine in the Theme Customizer (all settings, controls, sections and panels that I'm registering are there, working, updating, and even refreshing in the preview window), but when the Theme Check plugin runs its tests, I get this error repeated hundreds of times:
Warning: preg_match(): No ending delimiter '$' found in I:\xampp\htdocs\basic\wp-content\plugins\theme-check\checkbase.php on line 110
The line that is causing the problem is the one that registers the settings:
$wp_customize->add_setting( $set_name, $items['setting'] );
The "ending delimiter" that it's referring to is for the $set_name, but I can't find a way of passing this without these errors appearing, even though the functionality of the Theme is working perfectly. Is there some sort of way that I can make the add_setting() method take my variable which holds my setting name?
The tests do complete, and it shows this related to the problem:
REQUIRED: Found a Customiser setting called $set_name in theme-customizer.php that did not have a sanitisation callback function. Every call to the add_setting() method needs to have a sanitisation callback function passed.
It seems to be passing the literal variable name rather than its contents; however, the Customizer functions as expected, and registers/saves the settings under the correct names that I've given in the array of settings.
Can anyone help please?
EDIT: I've added double quotes around the $set_name, and it has stopped outputting the hundreds of errors about the delimiter, but I'm still failing the REQUIRED test, with the same failure as above.
Yes, I understand there is no difference between a string and a variable containing a string as far as the function/method is concerned. I'm going to move forward on the assumption that this is possibly an error in the Theme Check plugin code from the WP team and contact them about it. Thank you for your help; I think this is the closest we can get for now.
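For reference: Theme Check scans the source statically, so it can never resolve what $set_name holds, but the REQUIRED message itself is about a missing sanitisation callback, which can be satisfied in the args array. A minimal sketch of such a loop; the array shape and setting names are assumptions based on the description, while add_setting() and sanitize_hex_color() are core WordPress:

<?php
// Hypothetical settings array shaped like the one described above.
$settings = array(
    'header_color' => array(
        'setting' => array(
            'default'           => '#ffffff',
            // Theme Check wants every add_setting() call to carry a
            // sanitisation callback; sanitize_hex_color() is a core function.
            'sanitize_callback' => 'sanitize_hex_color',
        ),
    ),
);

foreach ( $settings as $set_name => $items ) {
    $wp_customize->add_setting( $set_name, $items['setting'] );
}
?>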
I'm developing a PHP site which takes parameters and processes its data according to the given parameters; however, I'm having a bit of a problem with a specific URL query. Looking at What do a question mark (?) and an ampersand (&) mean in a URL?, I understood a bit more about how parameters work, but I still need to know what's causing the issue.
The URL in question is: http://mydomain.xyz/load.php?char=HellHound&Platform=Win32NT&id=ab38df8h3ff&host=http://mydomain.xyz/
The host parameter is not getting parsed by PHP; instead the request is rejected with a 403 (Access Denied) error. I have spoken to my web host and they claim it's not an issue with the actual server or the file system, but with the development itself, so here I am.
How can I process this parameter so it doesn't default to a 403 every single time?
This is the code in load.php (right now it's just a placeholder):
<?php
var_dump($_GET['host']);
//var_dump($_GET['char']);
//var_dump($_GET['Platform']);
//var_dump($_GET['id']);
//echo "success";
?>
Whenever I try to use var_dump() or anything similar, the content is never processed; the request just comes back as a 403.
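One thing worth ruling out (an assumption on my part, not something the thread confirms): a raw, unencoded http:// inside a query string is a classic trigger for server-side filters such as mod_security, and that would 403 the request before load.php ever runs, which matches the symptoms. A sketch of building the URL with the host value percent-encoded, using the parameters from the question:

<?php
$params = array(
    'char'     => 'HellHound',
    'Platform' => 'Win32NT',
    'id'       => 'ab38df8h3ff',
    'host'     => 'http://mydomain.xyz/',
);
// http_build_query() percent-encodes each value, so the nested URL
// becomes host=http%3A%2F%2Fmydomain.xyz%2F on the wire.
$url = 'http://mydomain.xyz/load.php?' . http_build_query($params);
echo $url, PHP_EOL;

// On the receiving side PHP decodes it automatically:
// var_dump($_GET['host']); // string(20) "http://mydomain.xyz/"
?>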
I'm a real beginner and am trying to understand how things work more than to develop stuff, and right now I can't move forward until someone gives me an accurate answer about a little detail of the following issue.
Let's assume there's a page with PHP code at http://example.com/blablabla and a link on it like http://example.com/blablabla?file=number_1 which is used to modify some parts of this page.
What I really don't know is what happens to the already loaded script from http://example.com/blablabla when there's a request from this page to http://example.com/blablabla?file=number_1.
The questions actually are:
Is the code from the already loaded page processed again every time ?file=number_1 is requested?
To me that seems very strange, because if the first request to http://example.com/blablabla selected, via PHP, a huge amount of data from the database, and I only want to modify a small part of the page with ?file=number_1, why would the server need to process the database request one more time?
My experience tells me that the server does process the already loaded code again,
BUT I have a very SLIGHT ASSUMPTION, which I'm not really sure about, that seems very logical:
the real trick is that the code in the first page has one VARIABLE whose value is changed
by the second request, so I assume the server sees this change and modifies only the part of the code with this VARIABLE. For example, the code in http://example.com/blablabla looks like this:
<?php
/* some code above */
if (empty($_GET['file'])) {
    /* do smth */
} else {
    /* do smth else */
}
/* some code below */
?>
With the request http://example.com/blablabla?file=number_1, the server would process only the part of the original code involving the changed $_GET['file'] variable.
Is this totally my imagination, or does it somehow make a point?
Would someone please explain it to me? Much appreciated.
HTML is a static language. PHP and other similar languages allow you to have dynamic pages, but because the server still has to send everything over as HTML, you still have to get a new page.
The ?file=number_1 just makes a GET request to the page, giving it more information, but the page itself still has to be rerun in order to change the information and send the new static HTML page back.
The database query can be cached with more advanced programming in PHP or other similar languages, so that the server doesn't have to requery the database, but the page itself still has to be completely rerun (there's a sketch of this after this answer).
There are more advanced methods that allow client-side manipulation of the data, but from your example I believe the page is being rerun with a GET request on the server side and a new page is being sent back.
I believe this is what you're asking about.
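To make the caching point concrete, a minimal sketch of caching a query result in a file so the rerun script can skip the database on repeat requests. The DSN, query, and 60-second lifetime are made-up illustrations, not anything from the question:

<?php
$cacheFile = sys_get_temp_dir() . '/huge_query.cache';

if (is_file($cacheFile) && time() - filemtime($cacheFile) < 60) {
    // Cache is fresh: reuse the stored result instead of requerying.
    $rows = unserialize(file_get_contents($cacheFile));
} else {
    $pdo  = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
    $rows = $pdo->query('SELECT * FROM big_table')->fetchAll(PDO::FETCH_ASSOC);
    file_put_contents($cacheFile, serialize($rows));
}

// The script still runs top to bottom on every request; only the
// expensive database call is skipped.
if (empty($_GET['file'])) {
    /* render the default view from $rows */
} else {
    /* render the ?file=... view from $rows */
}
?>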
Yeah, thank you both. It certainly clarified the issue: every script (clean HTML or generated by PHP) runs again with each request, and only external types of data like image files and, as follows from the previous answer, MySQL results can be cached and used via PHP to output the necessary data.
The main point was that I mistakenly hoped that if the page was loaded and consequently cached in memory, the QUERY STRING appended to the URL would of course send a new GET request, but the retrieved response would affect the page partially without rerunning it completely.
Now I have to reconsider my building strategy: load only as much data as is required for each requested URL.
If you are looking for a way to edit the page dynamically, use JavaScript.
If you need to run code server side, invisibly to the client, use PHP.
If you need to load content dynamically, use AJAX, a technique built on JavaScript.
I hope that helps.
I've been doing some scraping with PHP and getting some strange results on a particular domain. For example, when I download this page:
http://pitchfork.com/reviews/tracks/
It works fine. However if I try to download this page:
http://pitchfork.com/reviews/tracks/1/
It returns an incomplete page, even though the content is exactly the same. All subsequent pages (tracks/2/, tracks/3/, etc) also return incomplete data.
It seems to be a problem with the way the URLs are formed during pagination. Most other sections on the site exhibit the same behaviour (the landing page works, but not subsequent pages). One exception is this section:
http://pitchfork.com/forkcast/
Where forkcast/2/ etc work fine. This may be due to it being only one directory deep, where most other sections are multiple directories deep.
I seem to have a grasp on WHAT is causing the problem, but not WHY or HOW it can be fixed.
Any ideas?
I have tried using file_get_contents() and cURL and both give the same result.
Interestingly, on all the pages that do not work, the incomplete page is roughly 16,000 chars long. Is this a clue?
I have created a test page where you can see the difference:
http://fingerfy.com/test.php?url=http://pitchfork.com/reviews/tracks/
http://fingerfy.com/test.php?url=http://pitchfork.com/reviews/tracks/1/
It prints the strlen() and content of the downloaded page (plus it rewrites relative URLs into absolute ones, so that the CSS is correct).
Any hints would be great!
UPDATE: Mowser, which optimizes pages for mobile, has no trouble with these pages (http://mowser.com/web/pitchfork.com/reviews/tracks/2/), so there must be a way to do this without it failing....
It looks like Pitchfork is running a CMS with "human" URLs. That'd mean that /reviews/tracks would bring up a "homepage" with multiple postings listed, but /reviews/tracks/1 would bring up only "review #1". It's possible they've configured the CMS to output only a fixed-length excerpt, or have a misconfigured output filter that chops the individual post pages off early.
I've tried fetching /tracks/1 through /tracks/6 using wget, and they all have different content which terminates at 16,097 bytes exactly, usually in the middle of a tag. So, it's not likely this is anything you can fix on your end, as it's the site itself sending bad data.
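If you want to confirm that from PHP rather than wget, a small diagnostic sketch: compare the Content-Length the server reports with the bytes actually received. The URL is from the question; everything else is standard cURL:

<?php
$ch = curl_init('http://pitchfork.com/reviews/tracks/1/');
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
));
$body = curl_exec($ch);

// Reported length vs. what arrived; -1 means the header was absent.
$reported = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
$received = strlen((string) $body);
curl_close($ch);

printf("reported: %d, received: %d\n", $reported, $received);
// If both numbers agree (e.g. both around 16,097), the server itself is
// sending the truncated body and nothing client-side will recover the rest.
?>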