Are anonymous sessions secure? - PHP

I'm new to web design. My concern is this: if I track anonymous users by session to keep the correct language and so on, then I would save data for each user who visits my website (for example, 2 KB). Wouldn't that make my website vulnerable to attackers who overflow the session storage by creating bogus sessions?
Thanks.

Why not use local storage, cookies, or some other solution instead of sessions? I'm not saying that's the best solution, but cookies might be a better fit for just keeping preferences. They can be longer-lived than a session and put less load on the server side.
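For instance, a language preference can live entirely in a cookie, so the server stores nothing per visitor (a minimal sketch; the cookie name, lifetime, and language list are arbitrary choices):

```php
<?php
// Keep the language preference client-side in a cookie instead of a
// session, so there is no per-visitor state on the server to overflow.
$allowed = ['en', 'de', 'fr'];              // languages your site supports
if (isset($_GET['lang']) && in_array($_GET['lang'], $allowed, true)) {
    // setcookie() must be called before any output is sent
    setcookie('lang', $_GET['lang'], time() + 365 * 24 * 3600, '/');
    $lang = $_GET['lang'];
} else {
    // fall back to the cookie, or to a default for first-time visitors
    $lang = $_COOKIE['lang'] ?? 'en';
}
```

Validating against a whitelist matters here: cookie values come from the client, so they should never be trusted raw.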

PHP saves sessions to disk by default; session data is only held in memory while the script is actually running, so it would only become a memory issue if you had a lot of visitors simultaneously, i.e. running your PHP code at exactly the same time on the same server.
But the amount of memory used by your session array is small compared with the memory used by the whole PHP process, so if you had enough simultaneous visitors to cause a problem, having a session for each of them would make little additional difference.
The real way to mitigate this kind of thing is to make your scripts run fast, so that they exit quickly and there is less chance of having large numbers of copies running simultaneously.

Related

Caching data to spare MySQL queries

I have a PHP application that is executed up to one hundred times simultaneously, and very often (it's a Telegram anti-spam bot with 250k+ users).
The script itself makes various DB calls (ticker updates, counters, etc.), but on each run it also loads some more or less 'static' data from the database, such as regexes or JSON config files.
My script also does image manipulation, so the server's CPU and RAM are sometimes under pressure.
A few days ago I ran into a problem: because apache2 had exhausted the available memory, the OOM killer killed the MySQL server process. The MySQL server was not restarting automatically, leaving my script broken for hours.
I have already made some code optimisations that let my server breathe, but what I'm looking for now is a caching method to store data between script executions, with the possibility of refreshing it on a time interval.
First I thought about a flat file where I could serialize the data, but I would like to know whether that is a good idea performance-wise.
In my case, is there a benefit to caching this data over repeating the MySQL queries?
What are the pros and cons regarding speed of access and speed of execution?
Finally, what caching method should I implement?
I know that the simplest solution is to upgrade my server capacity, and I plan to do so soon.
The server is running Debian 11 with PHP 8.0.
Thank you.
If you could use a NoSQL store for those queries, it would speed things up dramatically.
If that is a no-go, you can go old school and keep that "static" data in the filesystem.
You can then create a timer of your own that runs, for example, every 20 minutes to refresh the files.
When you ask about speed of access and speed of execution, the answer is always "it depends", but from what you describe it would be better to read from the filesystem than to constantly query the database for the same information.
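As a sketch of that flat-file approach (the file path, TTL, and loader callback are assumptions, not part of the original setup):

```php
<?php
// Flat-file cache with a time-to-live: reuse the serialized file while it
// is fresh, otherwise reload from the database and rewrite the file.
function cached_load(string $cacheFile, int $ttl, callable $loadFromDb)
{
    if (is_file($cacheFile) && time() - filemtime($cacheFile) < $ttl) {
        return unserialize(file_get_contents($cacheFile));
    }
    $data = $loadFromDb();   // e.g. SELECT the regex/config rows

    // Write to a temp file, then rename into place, so a concurrent
    // script never reads a half-written cache file.
    $tmp = $cacheFile . '.' . getmypid();
    file_put_contents($tmp, serialize($data));
    rename($tmp, $cacheFile);

    return $data;
}

// Usage: refresh at most every 20 minutes (hypothetical loader function)
// $config = cached_load('/tmp/bot_config.cache', 1200, fn() => load_config_from_mysql());
```

The atomic rename matters precisely because the script runs up to a hundred times simultaneously; without it, readers can catch a truncated file.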
The complexity, consistency issues, etc., lead me to recommend against a caching layer. Instead, let's work a bit more on other optimizations.
OOM implies that something is tuned improperly. Show us what you have in my.cnf. How much RAM do you have? How much RAM does the image processing take? (PHP's image* library is something of a memory hog.) We need to start by knowing how much RAM MySQL can have.
For tuning, please provide GLOBAL STATUS and VARIABLES. See http://mysql.rjweb.org/doc.php/mysql_analysis
That link also shows how to gather the slow log. In it we should be able to find the "worst" queries and work on optimizing them. At "one hundred times simultaneously", even fast queries need to be optimized further. When providing the worst queries, please also provide SHOW CREATE TABLE.
Another technique is to decrease the number of children that Apache is allowed to run; Apache will queue up the others. "Hundreds" is too many for Apache or MySQL; it is better to delay starting some of them than to have hundreds stumbling over each other.
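For example, with the prefork MPM the cap on simultaneous children looks like this (the values are illustrative only; size MaxRequestWorkers so that worst-case Apache memory plus MySQL still fits in RAM):

```apacheconf
# /etc/apache2/mods-available/mpm_prefork.conf
# Example values only; not a recommendation for any particular server.
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    # Hard cap on simultaneous request-handling children; excess
    # requests wait in the listen queue instead of running.
    MaxRequestWorkers       40
</IfModule>
```

Queued requests add a little latency, but that is far better than the OOM killer taking out MySQL.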

How to keep a website or PHP script loading indefinitely?

This is supposed to be part of a protection against hacking attempts. The idea is to keep the hacker/attacker waiting for a long time after an attempt is detected.
The detection of such attempts is not part of my question.
I only need to know if/how it is possible to create a PHP script that simply keeps loading, as if the website/server were exceptionally slow, preferably without creating high server load from the connection being kept open.
I thought there must be some way to simply stop the PHP script on the server without notifying the user/client that there won't be a response from the server. Or maybe something similar?
The easiest way to do this would be to use sleep(), but there’s a huge downside:
While sleep() doesn’t use any CPU time, the connection has to be kept open for a long time (which means you could reach the connection limit on your server) and a PHP process would be running uselessly (which consumes quite a bit of memory and might also make you hit a process limit).
So that means that someone you have considered to be an attacker would actually have more leverage to run a Denial of Service attack against your website.
I don’t think there’s a resource-friendly way of doing this with PHP alone.
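For completeness, the sleep()-based tarpit the answer warns about would look roughly like this (the loop length and delay are arbitrary; note that the connection and a PHP process stay occupied for the whole duration, which is exactly the downside described above):

```php
<?php
// Tarpit sketch: drip one byte every few seconds so the client keeps
// waiting. Assumes the attack has already been detected elsewhere.
set_time_limit(0);           // let the script outlive max_execution_time
ignore_user_abort(false);    // stop as soon as the client disconnects

for ($i = 0; $i < 60; $i++) {
    echo ' ';
    flush();                 // push the byte out (output buffering off)
    sleep(5);                // ~5 minutes total before the request ends
}
```

Each stalled request ties up a connection slot and a worker, so a handful of attackers opening many connections can exhaust your limits; that is why this is usually better done at the firewall or reverse-proxy level than in PHP.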

Zend_Session::Start intolerably slow (but only sometimes)

Yes, I've read session_start seems to be very slow (but only sometimes), but my problem is slightly different.
We have a PHP application that stores very simple sessions in memcached (elasticache to be specific), and have been monitoring our slowest-performing pageloads. Almost all of the slow ones spend a majority of their time in Zend_Session::Start, and we can't figure out why. It's a very AJAX-y front end, moving more and more toward a single-page app, making a number of simultaneous requests to the backend per pageload, and some of the requests take up to three to four times as long as they should based solely on this.
It's not every request, obviously, but enough of them that we're concerned. Has anyone else seen this behavior? We were under the impression that memcache is not blocking (how could it be?), so the very worst would be a user has a bum session, but multiple-second wait times in session_start seems inexplicable.
Take a look at your session garbage collection mechanism (play with session.gc_probability and session.gc_divisor).
If GC is slowing down the app, consider cleaning up old sessions manually (e.g. setting session.gc_probability to 0 so PHP never runs GC inline, and running a GC script via cron).
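For file-backed sessions, the cron-based cleanup can be a single crontab entry (the path and age here are assumptions; match them to your session.save_path and session.gc_maxlifetime):

```
# Purge session files untouched for more than 24 hours, once an hour
0 * * * * find /var/lib/php/sessions -name 'sess_*' -mmin +1440 -delete
```

With memcached-backed sessions, as in this question, entries expire on their own instead, so inline GC can simply be disabled.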
Other possibilities:
the session is locked (so each request waits for the lock to be released before it can read and write)
too much data is saved in the session
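Session locking is a likely culprit for an AJAX-heavy front end: by default PHP holds an exclusive lock on the session from session_start() until the script ends, so parallel requests from the same user are serialized, and each one's time "in session_start" is really time spent waiting for the previous request's lock. A common mitigation (a generic PHP sketch, not Zend_Session-specific) is to release the lock as soon as the writes are done:

```php
<?php
session_start();                  // acquires the session lock
$userId = $_SESSION['user_id'] ?? null;
$_SESSION['last_seen'] = time();  // do all session writes up front
session_write_close();            // release the lock immediately

// ...the slow part of the request (DB queries, rendering) now runs
// without blocking the user's other simultaneous AJAX requests...
```

After session_write_close(), $_SESSION is still readable in this request, but further writes will not be persisted.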

First page request on website very slow

The first page I load from my site after not visiting it for 20+ mins is very slow. Subsequent page loads are 10-20x faster. What are the common causes of this symptom? Could my server be sleeping or something when it's not receiving http requests?
I will answer this question generally because I'm sure it's something that confuses a lot of newcomers.
The really short answer is: caching.
Just about every program in your computer uses some form of caching to remember data that has already been loaded/processed recently, so it doesn't have to do the work again.
The size of the cache is invariably limited, so stuff has to be thrown out. And 99% of the time the main criterion for expiring cache entries is: how long ago was this last used?
Your operating system caches file data that is read from disk
PHP caches compiled scripts in memory (the opcache), so they don't have to be re-parsed on every request
The CPU caches memory in its own special faster memory (although this may be less obvious to most users)
And some things that are not actually a cache, work in the same way as cache:
virtual memory, aka swap. When there is not enough memory available for certain programs, the operating system has to make room for them by moving chunks of memory onto disk. On more recent operating systems, the OS will do this just so it can make the disk cache bigger.
Some web servers like to run multiple copies of themselves, and share the workload of requests between them. The copies individually cache stuff too, depending on the setup. When the workload is low enough the server can terminate some of these processes to free up memory and be nice to the rest of the computer. Later on if the workload increases, new processes have to be started, and their memory loaded with various data.
(Note, the wikipedia links above go into a LOT of detail. I'm not expecting everyone to read them, but they're there if you really want to know more)
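The "least recently used" eviction rule described above can be sketched as a tiny cache (purely illustrative; real OS and CPU caches are far more sophisticated):

```php
<?php
// Minimal LRU cache: the most recently used keys sit at the end of the
// array, so the first element is always the least recently used one.
class LruCache
{
    private array $items = [];

    public function __construct(private int $capacity) {}

    public function get(string $key)
    {
        if (!array_key_exists($key, $this->items)) {
            return null;                 // cache miss
        }
        $value = $this->items[$key];
        unset($this->items[$key]);       // move key to the "most recent" end
        $this->items[$key] = $value;
        return $value;
    }

    public function put(string $key, $value): void
    {
        unset($this->items[$key]);       // refresh position if key exists
        $this->items[$key] = $value;
        if (count($this->items) > $this->capacity) {
            array_shift($this->items);   // evict the least recently used key
        }
    }
}
```

This is why the first page load after 20 idle minutes is slow: everything your request needed has been "array_shift"-ed out in favour of whatever else the machine did in the meantime.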
It's probably not sleeping. It just hasn't been visited for a while and has released its resources. It takes time to get started again.
If the site were visited frequently by many users, it would respond quickly every time.
It sounds like it could be caching. Is the server running on the same machine as your browser? If not, what's the network configuration (same LAN, etc...)?

Apache connections limit

My hosting provider says the Apache connection limit is 30. I don't know whether that's enough for an average site with 100 visitors per day. I want to know what I should adapt to this limit while coding the site. Mostly I'll use PHP sessions and a little AJAX. Are there any precautions or recommended practices to avoid hitting this limit?
Thank you.
Since you will be using AJAX, I can't stress this enough: do not long-poll with Apache! It will hold your connections open and effectively perform a DoS (denial of service) on your own site.
Other than that, minimize the time between when Apache receives a request and when it sends the response and closes the connection. The big blinking neon sign here is to use caching. Whether it is file-based caching or something like Memcached or APC, it can drastically reduce the time Apache holds a connection open.
Taken by itself, the statement "the Apache connection limit is 30" doesn't actually mean much; Apache configuration can be fairly involved and there are a lot of numbers and parameters. But if we assume that what this really means is "MaxClients is 30", then what you need to know is that you have a limit of 30 simultaneous connections. However, connection 31 isn't rejected; it should just be queued until there's a thread available to respond to the request. There are a lot of specifics depending on the config, but I doubt you need to worry much.
This means at most 30 concurrent connections are possible; if you have 100 visitors per day, it's very unlikely that a third of them will arrive at exactly the same time.
As your site grows, I'd recommend moving to another server/hoster.
But as long as you don't hold long-running persistent connections or make high-frequency AJAX calls all the time, this should be enough.
The connection limit most probably means simultaneous requests. So if you're only at the development stage, that is fine; once the site has launched, it's a different story. If your expected traffic is only about 100 visitors a day, you will most probably be fine. I would, however, recommend changing your VPS host if traffic grows beyond that, because a server that turns visitors away is bad for business.
But in all honesty you're better off developing locally for now to save your bandwidth for actual visitors, since from your description you don't seem to be using anything that requires a live site.
