My problem is not so easy to describe ... for me :-) so please be lenient towards me.
I have several ways to view a list, meaning there are several paths to reach and create the view that displays my list. This works well with browser tabs opened in parallel, and that is desired behaviour.
If I click on an item in my list, I get to a detail view of that item.
In this view I want to know from which type of list the link was "called". The first problem is that the referrer will always be the same; the second is that I should not append a GET variable to the URL (and it should not be a submitted form either).
If I store the mode in the session, I will overwrite that session parameter when working in a parallel tab.
What is the best way to still achieve my goal of knowing which mode the previous list was in?
You need to use something to differentiate one page from another, otherwise your server won't know what you're asking for.
You can POST your request: this will hide the URL parameters, but will hinder your back button functionality.
You can GET your request: this will make your URLs more "ugly" but you should be able to work around that by passing short, concise identifiers like www.example.com/listDetail?id=12
If you can set up mod_rewrite, then you can send GET requests to a URL like www.example.com/listDetails/12, and Apache will rewrite the request behind the scenes to look more like www.example.com/listDetails?id=12. The user will never see that -- they will just see the original, clean/friendly version.
You said you don't have access to the server configuration -- I assume this is because you are on a shared server? Most shared servers already have mod_rewrite installed. And while the Apache vhost is typically the most appropriate place to put rewrite rules, they can also be put in a .htaccess file within any directory you want to control. (Sometimes the server configuration disables this, but on a shared host it is usually enabled.) Look into creating .htaccess files and how to use mod_rewrite; a minimal example follows.
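As a minimal sketch (the listDetails.php script name is just an illustration, not something from your setup), such a .htaccess could look like this:
RewriteEngine On
RewriteRule ^listDetails/([0-9]+)/?$ listDetails.php?id=$1 [L]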
I am ordering products from a supplier using cURL. I have a choice of receiving the orders by email, which is fairly straightforward, or via HTTP. I want to use the HTTP delivery method to get the files delivered into a directory on my web server. I have asked the supplier to tell me more about the requirements for the HTTP option and they are unable to give detail. This suggests either that it is something that can be set up entirely at my side, or that they are not being forthcoming with information I will need. I have asked them for examples of what other people have provided for the HTTP parameter and they have given me some. Apparently most people have specified a URL down to file level and not just directory level, e.g. http://www.domain.co.uk/directory/listenLandmark.asmx or .aspx or .ashx. There are also some .php files, which is good as that is what I want to use. However, some have given just a directory-level parameter.
I guess that if I use a PHP file, I will be able to handle the delivered files rather than just check manually to see if anything is there.
Can somebody tell me what I need to do to get this started, please. The supplier is getting a 500 server error when trying to send/POST a file to me. I have tried changing the permissions on a directory to allow public read, write and execute, but this has changed nothing and is presumably very bad practice anyway (??).
If somebody can give me a quick push in the right direction I would appreciate it (or is it the case that I need more information from the supplier before I can begin?).
I'm not sure based on your description, but if your line of thought is that the URL on your site they are making the HTTP call to is where you want them to store files, or anything like that, that almost certainly isn't how it is intended to work.
That being said... I have no idea what service you are using, or exactly how they intend it to work. But based on what you are describing, it sounds like they are using a webhook or interfacing in a similar way. I'm guessing they are either making an HTTP POST call with all of the data for the files they need to send you, or making an HTTP call with a list of URLs you can download the files from.
Since they can't provide you with any documentation (a giant red flag, by the way), if I were you I would first find out which URL on your site they are trying to hit. Once you find that out, I would add a bunch of logging to see what headers, POST data and query string they are sending. Once you have that data you should have a much better idea of how they are trying to interact with your site, and be able to make a game plan for how you will use it to do whatever you need to do.
I hope that helps
Edit: (adding some example logging code)
You could do something like this to log the request headers, GET & POST data, and any file uploads.
<?php
// log the incoming request: headers, query string, POST fields and uploads
// (headers_list() would return the *outgoing* headers; getallheaders() reads
// the request headers under Apache)
$data = array("headers" => getallheaders(), "post" => $_POST, "get" => $_GET, "files" => $_FILES);
$file = "/path/to/write/file.log.txt";
file_put_contents($file, json_encode($data) . PHP_EOL, FILE_APPEND);
?>
It JSON-encodes all the data (you could convert it to a string a different way if you like) and appends it to whatever file you choose. If you set $file = "debug.txt" it will just log to the same directory as the PHP file, or you can point it at a specific directory. Then you just have to analyze the output and see what they are sending, and where.
I have a website where each person has a personal profile. I would like to have static URLs like mywebsite/user1, mywebsite/user2, but actually I would remain on the same page and change the content dynamically. One reason is that when I open the site I ask the database for some data, and I don't want to ask for it each time I change page.
I don't like URLs like mywebsite?user=1
Is there a solution?
Thank you
[EDIT: better explanation]
I have a dynamic page that shows a user profile of my website. So the URL is something like http://mywebsite.me?user=2
but I would like to have a static-looking link, like
http://mywebsite.me/user2name
Why do I want this? Because it's easy to remember and write, and because I can change the content of the page dynamically without asking the database for data each time (I need some shared info on all the pages; the info is the same for all of them).
Yes there are solutions to your problem!
The first solution is server dependent. I am a little unsure how this works on an IIS server, but it's quite simple in Apache. Apache can take directives from a file called .htaccess. The .htaccess file needs to be in the same folder as your active script to work. It also needs the directive AllowOverride All and the module mod_rewrite loaded in the main server configuration. If you have all this set up, edit your .htaccess file to contain the following:
RewriteEngine on
RewriteRule ^([^/\.]+)/?$ index.php?user=$1 [L]
This will let you reach mywebsite/index.php?user=12 via mywebsite/12. (Note the pattern must not include the host name; in a per-directory .htaccess it is matched against the path relative to that directory.)
A beginner's guide to mod_rewrite is a good starting point.
You could also fake this with only PHP. It will not be as pretty as the previous example, but it is doable. Also, take into consideration that you are working with user input, so the data is to be considered tainted. The user needs to access the script via mywebsite/index.php/user/12.
<?php
// for mywebsite/index.php/user/12, REQUEST_URI is "/index.php/user/12"
$request = $_SERVER['REQUEST_URI'];
$request = explode('/', trim($request, '/')); // $request[0] will contain the name of the .php file
$user[$request[1]] = $request[2];             // $user['user'] = '12'
/* Do stuff with $user['user'] */
?>
These are the quickest ways I know to achieve what you want.
First off, please familiarise yourself with the solution I have presented here: http://codeumbra.eu/how-to-make-a-blazing-fast-ajax-call-to-a-zend-framework-application
This does exactly what you propose: eliminates all the unnecessary database queries and executes only the one that's currently needed (in your case: fetch user data). If your application doesn't use Zend Framework, the principle remains the same regardless - you'll just have to open the database connection the way that is required by your application. Or just use PDO or whatever you're comfortable with.
Essentially, the method assumes you make an AJAX call to the site to fetch the data you want. It's easy in jQuery (an example is provided in the article mentioned above). On success, you can likewise use JavaScript to replace the previous user's data with the requested user's (I hope you're familiar with AJAX; if not, please leave a comment and I will explain in more detail).
[EDIT]
Since you've explained in your edit that what you mean is URI rewriting, I can suggest implementing a simple URI router. The basics behind how it works are described here: http://mingos.eu/2012/09/the-basics-of-uri-routing. You can make your router as complex or as simple as your application needs.
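As a bare-bones illustration of such a router (load_user_profile() and the two render functions are hypothetical placeholders, not part of any framework):
<?php
$path = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
if ($path !== '') {
    render_profile(load_user_profile($path)); // e.g. "user2name" -> that user's data
} else {
    render_homepage();                        // default view when no path is given
}
?>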
The URL does not dictate whether or not you make a database call. Those are two separate issues. You typically set up your server so example.com/username is rewritten internally to example.com/user.php?id=username. You're still running PHP, the URL is just masking it. That's called pretty URLs, realized by URL rewriting.
If you want to avoid calling the database, cache your data. E.g. in the above user.php script, you generate a complete HTML page, then write it into a cache folder somewhere, then next time instead of generating the page again the script just outputs the contents of the already created page. Or you just cache the database data somewhere, but still generate the HTML anew every time.
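A minimal sketch of that whole-page cache, assuming a user.php?id=... script with a writable cache/ folder (render_user_page() is a hypothetical function that queries the database and echoes the HTML):
<?php
$cacheFile = __DIR__ . '/cache/' . md5($_GET['id']) . '.html';
if (is_file($cacheFile) && filemtime($cacheFile) > time() - 3600) {
    readfile($cacheFile); // serve the stored page, no database call needed
    exit;
}
ob_start();
render_user_page($_GET['id']);                    // builds the page the expensive way
file_put_contents($cacheFile, ob_get_contents()); // store it for the next hour
ob_end_flush();                                   // and send it to the browser
?>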
You could write an actual HTML file to /username, so the web server will serve it directly without even bothering PHP. That's not typically what you want though, since it's hard to update/expire those files and you also typically want some dynamic content on there.
Select all users from your database.
Then, for each one, create a file containing the script's output (what index.php?user=... would produce) and set the file name to the user_id or user_name you got from the SELECT statement.
This will create a page for each user in the current folder.
To avoid having to recreate 'static' pages, you could add a new column named, say, 'indexedyet' and set it to 1 on creating a file, then select only rows which have it as 0. You could perform this via cron job once a day or so (see the sketch below).
This leaves you vulnerable to user data changes though, as the pages won't automatically update. A tactic to use here is to regenerate the static page on any edit.
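A rough sketch of that daily job (the table/column names users and indexedyet, the connection details, and the render_profile() helper are all assumptions for illustration):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'dbuser', 'dbpass');
foreach ($pdo->query("SELECT id, name FROM users WHERE indexedyet = 0") as $row) {
    $html = render_profile($row);                     // hypothetical page renderer
    file_put_contents($row['name'] . '.html', $html); // e.g. mysite.com/user2name.html
    $pdo->prepare("UPDATE users SET indexedyet = 1 WHERE id = ?")
        ->execute(array($row['id']));
}
?>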
Another, probably better (sorry, not had enough coffee yet) idea would be to create a folder on a user's registration. Make the index.php page in it tailored to them at registration, and then anything like www.mysite.com/myuser will show their 'tailored version'. Again, update the page on user updates.
I would be happy to provide examples depending on your approach.
On my website, I have a search.php page that makes $.get requests to pages like search_data.php and search_user_data.php etc.
The problem is all of these files are located within my public html folder.
Even though someone could browse to www.mysite.com/search_user_data.php, all of the data processed is properly escaped and handled; but on a professional level it is inadequate to even have this file within public reach.
I have tried moving the sensitive files outside my web root; however, since jQuery is making $.get requests and passing variables in the URL, this doesn't work.
Does anyone know any methods to firmly secure these vulnerable pages?
What you describe is normal.
You have PHP files in your www directory so that Apache (or your favourite web server) can read and process them.
If you move them out, you can't reach them anymore, so there is no real option of that sort.
After all, your PHP files for AJAX are just regular PHP files; likely the rest of your project also contains PHP files, right? They are not more or less at risk than any other script on your server.
Make sure you program "cleanly". Think about evil requests when writing your PHP functions, not after writing them.
As you already did: correctly quote all incoming input that might hit a database or sensitive function.
You can add security checks on your incoming values and send yourself an automated email if you detect someone trying evil stuff, so you'll receive a warning in such cases.
But on the downside: you'll regularly receive warnings anyway, because some companies automatically scan websites for possible vulnerabilities, and such scans will trigger your alert too.
On top of writing your code as securely as you can, you may want to add a referer check. That means your PHP file will only respond if your website was given as the referer when accessing it. That's enough to block 80% of the kids out there.
But on the downside: a few internet users do not send a referer at all, and some proxies filter it out. (I personally would ignore them; half the (www) internet breaks for them anyway.)
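A rough sketch of that referer check (replace mysite.com with your own domain):
<?php
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
if (strpos($referer, 'mysite.com') === false) {
    header('HTTP/1.1 403 Forbidden'); // missing or foreign referer: refuse the request
    exit;
}
?>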
One more layer of protection can be added with .htaccess. You can do most of it within PHP, but it might still be of interest to you: http://httpd.apache.org/docs/2.0/howto/htaccess.html
You can generate a uid each time your page is loaded and store it in $_SESSION['uid']. You give this uid to JavaScript by doing:
var uid = "<?php echo $_SESSION['uid']; ?>";
Then you pass it with your GET request and compare it to your $_SESSION:
if($_GET['uid'] != $_SESSION['uid']) // Stop with an error message or send a forbidden header.
If it's ok, do what you need.
It's not perfect, since someone could request search.php to obtain the current uid and then request the other pages, but it may be the best solution available.
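Put together, a sketch of the idea (md5/uniqid is just one common way to generate the token; the two halves go in different files, as the comments indicate):
<?php
// in search.php: create the per-session uid
session_start();
if (empty($_SESSION['uid'])) {
    $_SESSION['uid'] = md5(uniqid(mt_rand(), true));
}
// at the top of search_data.php / search_user_data.php: verify it
session_start();
if (!isset($_GET['uid']) || $_GET['uid'] !== $_SESSION['uid']) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
?>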
I am trying to trace the flow of execution in some legacy code. We have a report being accessed with
http://site.com/?nq=showreport&action=view
This is the puzzle:
- in index.php there is no $_GET['nq'] or $_GET['action'] (and no $_REQUEST either),
- index.php, and the sources it includes, do not include showreport.php,
- in .htaccess there is no URL rewriting,
yet showreport.php gets executed.
I have access to cPanel (but not to the Apache config file) on the server, and this is live code I cannot take any liberties with.
What could be making this happen? Where should I look?
Update
Funny thing - I sent the client a link to this question in a status update to keep him in the loop; minutes later all access was revoked and the client informed me that the project is cancelled. I believe I have taken enough care not to leave any traces of where the code actually is ...
I am relieved this has been taken off me now, but I am also itching to know what it was!
Thank you everybody for your time and help.
There are "a hundreds" ways to parse a URL - in various layers (system, httpd server, CGI script). So it's not possible to answer your question specifically with the information you have got provided.
You leave a quite distinct hint "legacy code". I assume what you mean is, you don't want to fully read the code, understand it even that much to locate the piece of the application in question that is parsing that parameter.
It would be good however if you leave some hints "how legacy" that code is: Age, PHP version targeted etc. This can help.
It was not always that $_GET was used to access these values (same is true for $_REQUEST, they are cousins).
Let's take a look in the PHP 3 manual Mirror:
HTTP_GET_VARS
An associative array of variables passed to the current script via the HTTP GET method.
Is the script perhaps making use of this array? That's just a guess; it was a valid way to access these parameters for quite some time.
Then again, this may not be what you are looking for. There was an often misunderstood and misused (literally abused) feature in PHP called register_globals (see the PHP manual). So you might just be searching for $nq.
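To illustrate (this emulation is mine, not the engine's actual code): with register_globals enabled, every request parameter silently became a variable, so legacy code could react to ?nq=showreport without ever touching $_GET:
<?php
foreach ($_GET as $key => $value) {
    $$key = $value;               // creates $nq = 'showreport', $action = 'view'
}
if (isset($nq) && $nq === 'showreport') {
    include 'showreport.php';     // hypothetical legacy dispatch
}
?>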
Next to that, there are always the request URI and the Apache/environment/CGI variables. See the link to the PHP 3 manual above; it lists many of those. Compare it with the current manual to get a broad understanding.
In any case, you might have grep or a multi-file search available (Eclipse has a nice built-in one if you need to inspect legacy code inside an IDE).
So at the end of the day you might just look for a string like nq, 'nq', "nq" or $nq, then check what that search brings up. String-based search is a good entry into a codebase you don't know at all.
I'd install Xdebug and use its function trace to look, piece by piece, at what it is doing.
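A minimal sketch, assuming the Xdebug extension is loaded (the trace path is arbitrary):
<?php
xdebug_start_trace('/tmp/showreport_trace'); // writes /tmp/showreport_trace.xt
require 'index.php';                         // run the mysterious front controller
xdebug_stop_trace();
?>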
EDIT:
Okay, just an idea, but... maybe your application is some kind of include hell, like the applications I'm sometimes forced to mess with at work? One file includes another, that one includes another, and that one includes the original file again... So maybe your index file includes some file that eventually causes showreport.php to get included?
Another EDIT:
Or, sometimes application devs didn't know about the $_GET variable and parsed the URLs themselves, doing manual includes based on the parsed URLs.
I don't know exactly how it works, but I know that WordPress/SilverStripe use their own URL rewriting to parse the URL and find posts/tags/etc. So the URL parsing may well be done in a PHP script.
Check your config files (php.ini and .htaccess); you may have auto_prepend_file set.
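For reference, the directive looks like this (the path is illustrative); it silently runs the named file before every request, which is easy to miss when tracing:
; in php.ini
auto_prepend_file = /home/site/prepend.php
# or in .htaccess, under mod_php
php_value auto_prepend_file /home/site/prepend.php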
Check your crontab [sorry, I don't know where you would find it in cPanel]
- does the script fire at a specific time, or can you see that it definitely fires only when you request a specific page?
-sean
EDIT:
If crontab is out, take a look at index.php [and its includes] and look for code that either loops over the URL parameters without specifically mentioning "nq", or anything that might be parsing the query string [probably something like $_SERVER['QUERY_STRING']]. A hypothetical example of the kind of code to look for is below.
-sean
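For illustration, a generic dispatcher like this hypothetical sketch would run showreport.php even though the string "nq" appears nowhere in the code:
<?php
parse_str($_SERVER['QUERY_STRING'], $params);
foreach ($params as $value) {
    $file = basename($value) . '.php';  // ?nq=showreport -> showreport.php
    if (is_file($file)) {
        include $file;
        break;
    }
}
?>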
You should give debug_backtrace() (or debug_print_backtrace()) a try. The output is similar to an Exception stack trace, so it should help you find out what is called when, and from where. If you don't have the possibility to run the application on a local development system, make sure that nobody else can see the output.
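As a quick sketch, dropped temporarily at the top of showreport.php:
<?php
echo '<pre>';
debug_print_backtrace(); // prints the include/call chain that reached this file
echo '</pre>';
// remove this again afterwards - nobody else should see this output on a live site
?>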
Are you sure that you are looking at the right config or server? If you go to the URL above you get an error page that seems to indicate that the server is actually a Microsoft IIS server and not an Apache one.
I am writing a TYPO3 extension and everything is working fine right now. I access the GET variables via
t3lib_div::_GET('rid');
This works on the test site I added my extension to, but if I add it on another subsite of the same site, which is in an access-restricted area, it does not work. I use var_dump to look at the GET vars: on the normal site it works; on the restricted one I don't get anything (not even NULL!), just no output, and the logic does not pick the value up either. How do I fix that, or is there another way to access the GET variables in this case?
I guess that happens because, on the first request, the output of your extension is stored in the cache, and the second output is just served from the cache (instead of being regenerated by your extension). To avoid that you could make your extension non-cacheable (USER_INT), or use cHash to indicate that cache entries depend on more input values than just the plain page URL...
cHash is explained in "The mysteries of cHash" article, and I guess you'll find enough information regarding USER vs. USER_INT objects with Google ;)
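If your extension is a classic pibase plugin, the cached/non-cached choice is made when registering it in ext_localconf.php. As far as I recall (treat this as a sketch; the extension key and class file below are examples, not your actual names):
<?php
// the last argument selects the rendering object:
// 0 = USER_INT (runs on every request, uncached), 1 = USER (cached)
t3lib_extMgm::addPItoST43($_EXTKEY, 'pi1/class.tx_myext_pi1.php', '_pi1', 'list_type', 0);
?>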
I have no clue why, but it seems to be some kind of caching issue. I always cleared the TYPO3 cache, so it was not directly a problem with that; but if I set the "no cache" flag for the page the plugin is on, everything works fine. So it actually has nothing to do with the access restriction, but I do not understand why this doesn't work without "no cache".