We covered POST and GET requests in college, but these three examples are still on my mind. I'm not quite sure why I shouldn't be using a GET request in them, and I'm hoping someone who knows this better can explain each one a bit more.
1.
$sql = "SELECT * FROM contacts WHERE id = " . $_GET['id'];
Is it because, if there is no id, I wouldn't be able to get it and PHP would show me an error message?
2.
eval($_GET['user_provided_code']);
Is it because a person who enters their own code can basically run whatever they want and could take over my server or delete something?
3.
function toFahrenheit($temp) {
    return ($temp * 9 / 5 + 32) * $_GET['const'];
}
My thinking here is basically the same as for the second example: the person can pass in whatever value they like for const through the GET request.
Security-wise, there's not really any difference between GET and POST. Generally, GET is used for idempotent operations (like selecting rows from a database and displaying them) and POST is used when the request creates a change (like updating a row.) The problem in these examples is not that they use GET, it's that they don't validate untrusted user input.
There's nothing inherently wrong with building a SQL query from a value obtained from a GET request. The problem with this particular example is that it blindly presumes the variable exists and contains a valid value.
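Not from the answer itself, just a minimal sketch of the kind of validation it's describing: check that id exists and is numeric; binding it as a parameter instead of concatenating it is an extra precaution the answer doesn't spell out. $pdo is assumed to be an existing PDO connection.

<?php
// Hypothetical sketch; $pdo is assumed to be an existing PDO connection.
if (!isset($_GET['id']) || !ctype_digit($_GET['id'])) {
    http_response_code(400);
    exit('Missing or invalid id');
}

// Bind the value instead of concatenating it into the SQL string.
$stmt = $pdo->prepare('SELECT * FROM contacts WHERE id = ?');
$stmt->execute([(int) $_GET['id']]);
$contact = $stmt->fetch(PDO::FETCH_ASSOC);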
eval() is virtually never needed and almost always introduces security issues. In this example, you're just blindly executing whatever the user gives you, which is a terrible idea.
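Purely as an illustration (not something the answer suggests), the usual alternative to eval() is to whitelist the operations you actually support and refuse anything else; the operation names below are made up.

<?php
// Hypothetical sketch: accept only operation names you know about,
// never raw code from the user.
$allowed = [
    'uppercase' => 'strtoupper',
    'lowercase' => 'strtolower',
];

$op = isset($_GET['op']) ? $_GET['op'] : '';
if (!isset($allowed[$op])) {
    exit('Unknown operation');
}

echo $allowed[$op](isset($_GET['value']) ? $_GET['value'] : '');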
$_GET['const'] might not exist, and if it does exist it might not contain a number. There's no real security issue here; the worst case is that it evaluates to zero and the function returns a wrong result.
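A small sketch of the kind of check being hinted at for the third example; falling back to 1 is just an arbitrary choice for illustration.

<?php
function toFahrenheit($temp, $const) {
    return ($temp * 9 / 5 + 32) * $const;
}

// Validate the GET parameter before using it; FILTER_VALIDATE_FLOAT returns
// false if the value is not numeric and null if it is missing entirely.
$const = filter_input(INPUT_GET, 'const', FILTER_VALIDATE_FLOAT);
if ($const === false || $const === null) {
    $const = 1.0;  // arbitrary fallback for the sketch
}

echo toFahrenheit(20, $const);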
All the answers to this question assume you're storing all of your users' data in one big file, and so they talk about how that is too slow.
Let's say I have thousands of users and store each user's data as JSON in a separate file (which I am currently doing). What is the downside to that, as opposed to setting up a proper database like PostgreSQL, which seems like overkill?
The speed is great on my current setup, but I am advised against doing this.
Since each user has their own separate file, there isn't really an issue of hundreds of people writing to the file at the same time (isolation).
Maybe it only matters for sites with millions of users?
In most systems, the users don't merely have to exist, they have to do stuff. And that stuff would generally be represented in a database. So you want the users to exist in the same system where the things they interact with exist.
What happens if your system crashes (a power failure, for example) while a JSON file is halfway written out? Will you be left with a broken JSON file for that user? With a database, that should be taken care of automatically (you find either the old record or the new one, not some truncation or mishmash). If you roll your own database, you will have to go some way out of your way to make sure you write files in a safe manner.
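Not part of the answer, but to illustrate the point: with plain files, one common way to reduce (not eliminate) that risk is to write to a temporary file and then rename it over the real one, since a rename within the same filesystem is atomic on most platforms. A sketch, with a made-up path and helper name:

<?php
// Hypothetical sketch: write the user's data to a temp file, then atomically
// replace the real file, so a crash mid-write leaves the old file intact.
function saveUser($path, array $data) {
    $tmp = $path . '.tmp';
    if (file_put_contents($tmp, json_encode($data), LOCK_EX) === false) {
        throw new RuntimeException("Could not write $tmp");
    }
    if (!rename($tmp, $path)) {
        throw new RuntimeException("Could not replace $path");
    }
}

saveUser('/var/data/users/12345.json', ['name' => 'Alice', 'email' => 'alice@example.com']);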
How do you name your user files? By the user's name? What if different people have the same name? What if their name has characters that can't be represented in file names? By an account number you assign? What happens if they forget their account number and need to look it up by their human name? Do you then need to read and parse every user file to identify the correct one? Not that a database will magically make this free, but at least with a database you can just build an index, without first having to invent and implement one.
You are basically reimplementing a database system from scratch, one feature at a time, as you discover the need for that feature. You can do it, sure. But why not use one that already exists?
Since each user has their own separate file, there isn't really an issue of hundreds of people writing to the file at the same time (isolation).
What if one person writes to one file at the same time from two different browsers (or tabs)?
There is no absolute right or wrong.
If you will never need to take care of concurrent access to the same record (file), never need to search through your records, and never need to scale to multiple servers, the solution is fine and even faster than accessing a database.
I would just recommend properly escaping the user-provided data when writing it out as JSON.
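For what it's worth, a minimal sketch of that last point: let json_encode() do the quoting and escaping of whatever the user typed rather than building the JSON string by hand, and pass LOCK_EX so two simultaneous requests don't interleave their writes. The field names and path are made up.

<?php
// Hypothetical sketch: never assemble the JSON by hand.
$profile = [
    'name' => isset($_POST['name']) ? $_POST['name'] : '',  // may contain quotes, emoji, ...
    'bio'  => isset($_POST['bio'])  ? $_POST['bio']  : '',
];

$json = json_encode($profile, JSON_UNESCAPED_UNICODE | JSON_PRETTY_PRINT);
if ($json === false) {
    throw new RuntimeException('Could not encode profile: ' . json_last_error_msg());
}

file_put_contents('/var/data/users/12345.json', $json, LOCK_EX);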
I just have a general question. I am concerned with the speed of my PHP website, which is set to go into production soon.
My concern is the length of time it takes for the page to run a query.
On my page, I have about 14 filters in an HTML form, and I use the GET method to retrieve the values from them. Granted, not all 14 filters have to be used; a user can search off just one filter. Of course, the more filters are selected, the larger the query becomes. But the larger the query becomes, the quicker the page loads, so it's actually beneficial for the user to select more filters rather than just one.
All of the filter values are then sent to an included PHP file, which builds a query based on the user's filter selection.
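The included file itself isn't shown in the question, but a query builder like the one described usually looks roughly like the sketch below; the filter, table and column names are made up, and the values are bound rather than concatenated, which is an assumption about how it should be done rather than how it currently is.

<?php
// Hypothetical sketch of the included file: add a condition only for the
// filters the user actually submitted, and bind every value.
$conditions = [];
$params = [];

// Imaginary filters; the real form has about 14 of these.
if (!empty($_GET['city'])) {
    $conditions[] = 'city = ?';
    $params[] = $_GET['city'];
}
if (!empty($_GET['min_price'])) {
    $conditions[] = 'price >= ?';
    $params[] = $_GET['min_price'];
}

$sql = 'SELECT * FROM listings';
if ($conditions) {
    $sql .= ' WHERE ' . implode(' AND ', $conditions);
}

$stmt = $pdo->prepare($sql);   // $pdo assumed to be an existing PDO connection
$stmt->execute($params);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);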
The query runs and I am able to print the selected data into an HTML table on the original page. The problem is that it can take quite some time for the page to render and finally display the data table.
The database is not too large. Maybe between 20K - 40K records, though there are over 20 columns per record.
When I run the same query in MySQL, it returns the data faster than it does on the page.
Here is where I believe the problem might lie.
Within the form are the filters. About 5-6 of the filters run queries of their own to populate their selection options for the user.
I believe that after the user runs a search, the page refreshes and has to re-run all of those filter queries within the form.
If this is the case, what steps, if any, can I take to fix this? Should I place all of the filters in a separate file and include them within the form? If not, please advise what I can do to speed up the page load.
I have visited various websites in an attempt to fix this issue. Like this one:
http://code.tutsplus.com/tutorials/top-20-mysql-best-practices--net-7855
I am following just about every step suggested by that site, but I am still experiencing the page load delay.
I hope I can receive some positive insight.
Thank you in advance.
If all the filters are static, i.e. they don't disappear or change their contents when a value is selected or changed, you can keep the filter form outside of the part of the page that gets reloaded.
I'm currently building a site that reloads its query results via AJAX and have had to deal with a very similar issue; my filter fields sit outside of the reloaded area and I get very fast load times.
If the filters are dynamic and need to change based on the options chosen, then I would reload them separately, basically working out which ones changed versus what actually needs to be redisplayed.
Hopefully this helps and explains it well enough.
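A rough PHP-side sketch of what this answer describes, with made-up file names: the filter form (and the 5-6 queries that populate its options) is rendered once, and a search only fetches a results fragment via AJAX, so those option queries are not re-run on every search.

<?php
// results.php - hypothetical endpoint that returns ONLY the results table.
// The main page keeps its filter form in place and has its JavaScript fetch
// this fragment, so the filter-option queries never run again on a search.
require 'build_query.php';   // assumed to set $rows from the submitted GET filters

echo '<table>';
foreach ($rows as $row) {
    echo '<tr><td>' . htmlspecialchars($row['name']) . '</td></tr>';
}
echo '</table>';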
I have a blog system which stores its articles in a database. Now I want to build a feature that displays the five most popular articles in the database, according to how many views each one gets.
Is there any sort of technology out there that I can take advantage of which tracks how many views a page has received and can be integrated into a database?
Or perhaps there is a better internal method of doing something like this?
Thanks in advance.
EDIT: If you are going to downvote my thread randomly, at least tell me why.
There are basically three approaches you could choose from:
you collect the usage count inside your database (a click counter)
you extract that information from the http server's access log file later
you could implement a click counter based on http server request hits
All of these approaches have advantages and disadvantages. The first obviously means you have to implement such a counter and modify your database schema. The second means you get asynchronous behavior (not always bad), but the components depend on each other, so your setup gets more complex. So I would advise going with the first approach.
A click counter is something really basic and typical for all CMS/blog systems, and not that complex to implement. Since the content is typically generated dynamically (read: by a script), you usually have one request per view, so it is trivial to increment a counter in a table that records views per page. From there your feature is straightforward: read the top five counter values and display a list of five links to those pages.
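A minimal sketch of that first approach; the table and column names are made up, $pdo is assumed to be an existing PDO connection, and $articleId is assumed to hold the id of the article being viewed.

<?php
// On every article view: bump the counter for that article.
$stmt = $pdo->prepare('UPDATE articles SET views = views + 1 WHERE id = ?');
$stmt->execute([$articleId]);

// For the "most popular" box: the five most viewed articles.
$top = $pdo->query('SELECT id, title, views FROM articles ORDER BY views DESC LIMIT 5')
           ->fetchAll(PDO::FETCH_ASSOC);

foreach ($top as $article) {
    echo '<a href="/article.php?id=' . (int) $article['id'] . '">'
       . htmlspecialchars($article['title']) . '</a><br>';
}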
If you go with the second approach, then you will need to store that extracted information somewhere, since log files are rotated, compressed, archived and deleted. So you either need a self-tailored database for that or some finished product. But as said: this approach is much more complex in the end.
The last option is nothing I have actually seen in practice; it just sprang to my mind. You could, for example, use PHP's auto-append feature (just as an example) to run a counting routine in a generic way. That routine could inspect the request URL and decide whether it was a request for an article view. If so, it could increment a click counter, typically in a small database, since you might have several requests at the same time, which speaks against using a file. But why make things that complex? Go with the first option.
Let's say I have two tables, post and user. Now let's compare two approaches:
Use a join on post.userID and user.ID
Select a post; while it is being processed, look up its userID. If that user has already been selected earlier in the script (not cached on disk, but cached in memory), use that data to build the complete post; if the user has not been selected yet, execute a query to select the user from the database, then retrieve and store the data for further use (always in memory). A sketch of this approach follows this list.
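Not part of the question, just a sketch of what that second approach could look like in PHP, using the table and column names above ($pdo is assumed to be an existing PDO connection):

<?php
// Hypothetical sketch of approach 2: one query for the posts, then one
// query per *distinct* user, cached in memory for this script run.
$userCache = [];

function getUser(PDO $pdo, array &$userCache, $userId) {
    if (!isset($userCache[$userId])) {
        $stmt = $pdo->prepare('SELECT * FROM user WHERE ID = ?');
        $stmt->execute([$userId]);
        $userCache[$userId] = $stmt->fetch(PDO::FETCH_ASSOC);
    }
    return $userCache[$userId];
}

$posts = $pdo->query('SELECT * FROM post')->fetchAll(PDO::FETCH_ASSOC);
foreach ($posts as &$post) {
    $post['user'] = getUser($pdo, $userCache, $post['userID']);
}
unset($post);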
Both methods will work, though the first one makes one "big" request while the second one makes many "small" requests. To my eyes the second one would be better in a huge environment and inconvenient in a small one, and vice versa for the first.
Now let's define three scenarios:
Few posts by few users
Many posts by few users
Many posts by many users
I would like to understand exactly when the two methods will be or not convenient.
Here are my thoughts so far.
In the first case both methods would be almost the same
In the second case the second method SHOULD be better, since selecting only a few users will result in only a few queries.
In the third case I think the second method would fit better though I can't really make up my mind.
Is what I said correct? Is there a particular reason I shouldn't adopt the second method? Are there any pros/cons to add to what I said?
Thanks.
From my experience, doing one "big request", as you call it, with a carefully designed join backed by indexes will be much faster in any case than issuing an extra query per row against the database (the classic n+1 query problem).
If the data portions are small (user tables usually do not hold much data), the overhead of always running an extra query that returns a single row hurts performance. Even if you cache the data in memory afterwards, in the worst case you have 1000 posts by 1000 different users, so the cache never spares you a single lookup.
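For comparison, a sketch of the join this answer recommends, using the table and column names from the question; the u.name column and the index are assumptions (an index on post.userID is what a "carefully designed join with indexes" typically means here).

<?php
// One round trip: posts joined with their authors.
// Assumed index that makes the join cheap:
//   CREATE INDEX idx_post_userID ON post (userID);
$sql = 'SELECT p.*, u.name AS author_name
        FROM post AS p
        JOIN user AS u ON u.ID = p.userID';

$rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);  // $pdo assumed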
I'm working on a POST register system: every single POST request that is sent to the site must be recorded, along with everything the user posted (in other words, the $_POST array).
I see 2 ways of doing this:
The right way - a separate registerPostInfo table for the post information, where every single element of the array is inserted as a new record.
The wrong way - an additional column on my registerPost table which holds the json_encode()'d POST array.
I'm asking for advice because even though it may be considered 'WRONG', I honestly think I'll be better off with the second solution, because this table gets flooded like crazy. I made 2000 records all by myself during a one-month testing period on a local server; if I were to go with the first solution, with an average of say 5 elements per POST array, that would have been 10000 records in the registerPostInfo table. Imagine that with thousands of people. I'll be happy for any useful information about my issue, and possibly a third way I haven't thought of.
Thanks in advance!
Depends on what the actual purpose of “recording” all the posted data is. If you just want this as a kind of log, so that you can reconstruct later on what a user was posting should it turn out to be malicious or unwanted, then I'd say storing it as JSON or serialized into a single column is totally OK. Whereas if you want to process that data in general at some point, maybe even search it for certain parameter names/values, then you might be better off storing it as single parameter_name|value records, all tied together by an id for each single POST request made.
So if the main purpose is not actually working with that data constantly, but only to “analyze” it when necessary, I’d go with serialized data. Easy enough to select it from the database by time of posting or by user id – and then the de-serializing part could be done by a script. And your secondary use, showing to the user what kind of content they have created – well that you should be able to get from the tables that actually hold that content.
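A hypothetical sketch of the two layouts being contrasted, using the table names from the question; the column names are made up, and both inserts are shown together purely for illustration ($pdo and $userId are assumed to exist).

<?php
// Option 2 ("wrong" way): one registerPost row per request, $_POST as JSON.
$stmt = $pdo->prepare(
    'INSERT INTO registerPost (user_id, created_at, post_data) VALUES (?, NOW(), ?)'
);
$stmt->execute([$userId, json_encode($_POST)]);

// Option 1 ("right" way): instead of the JSON column, one registerPostInfo
// row per parameter, tied to the request by its id.
$requestId = $pdo->lastInsertId();
$info = $pdo->prepare(
    'INSERT INTO registerPostInfo (registerPost_id, parameter_name, value) VALUES (?, ?, ?)'
);
foreach ($_POST as $name => $value) {
    $info->execute([$requestId, $name, is_array($value) ? json_encode($value) : $value]);
}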