Detect if user is human without captcha or useragent - php

I have a website where I provide email encryption to users, and I'm trying to figure out if there's a way to detect whether a user is a human or a bot.
I've been digging into $_SESSION in PHP, but it's easy to bypass. I'm also not interested in captcha, user-agent or login solutions. Any idea what I need?
There are other questions very similar to this one on SO, but I couldn't find a straight answer...
Any help will be very welcome, thank you all!

This is a hard problem, and no solution I know of is going to be 100% perfect from a bot-defending and usability perspective. If your attacker is really determined to use a bot on your site, they probably will be able to. If you take things far enough to make it impractical for a computer program to access anything on your site, it's likely no human will want to either, but you can strike a good balance.
My point of view on this is partially as a web developer, but more so from the other side of things, having written numerous web crawler programs for clients all over the world. Not all bots have malicious intent; they can be used for anything from automating form submissions to populating databases of doctors' office addresses or analyzing stock market data. If your site is well designed from a usability standpoint, there should be no need for a bot that "makes things easier" for a user, but there are cases where there are special needs you can't plan for.
Of course there are those who do have malicious intent, which you definitely want to protect your site against as well as possible. There is virtually no site that can't be automated in some way. Most sites are not difficult at all, but here are a few ideas off the top of my head, from other answers or comments on this page, and from my experience writing (non-malicious) bots.
Types of bots
First I should mention that there are two different categories I would put bots into:
General purpose crawlers, indexers, or bots
Special purpose bots, made specifically for your site to perform some task
Usually a general-purpose bot is going to be something like a search engine's indexer, or possibly some hacker's script that looks for a form to submit, uses a dictionary attack to search for a vulnerable URL, or something like this. They can also attack "engine sites", such as WordPress blogs. If your site is properly secured with good passwords and the like, these aren't usually going to pose much of a risk to you (unless you do use WordPress, in which case you have to keep up with the latest versions and security updates).
Special purpose "personalized" bots are the kind I've written. A bot made specifically for your site can be made to act very much like a human user of your site, including inserting time delays between form submissions, setting cookies, and so on, so they can be hard to detect. For the most part this is the kind I'm talking about in the rest of this answer.
Captchas
Captchas are probably the most common approach to making sure a user is humanoid, and generally they are difficult to automatically get around. However, if you simply require the captcha as a one-time thing when the user creates an account, for example, it's easy for a human to get past it and then give their shiny new account credentials to a bot to automate usage of the system.
I remember a few years ago reading about a pretty elaborate system to "automate" breaking captchas on a popular gaming site: a separate site was set up that loaded captchas from the gaming site, and presented them to users, where they were essentially crowd-sourced. Users on the second site would get some sort of reward for each correct captcha, and the owners of the site were able to automate tasks on the gaming site using their crowd-sourced captcha data.
Generally the use of a good captcha system will pretty well guarantee one thing: somewhere there is a human who typed the captcha text. What happens before and after that depends on how often you require captcha verification, and how determined the person making a bot is.
Cell-phone / credit-card verification
If you don't want to use Captchas, this type of verification is probably going to be pretty effective against all but the most determined bot-writer. While (just as with the captcha) it won't prevent an already-verified user from creating and using a bot, you can verify that a human being created the account, and if abused block that phone number/credit-card from being used to create another account.
Sites like Facebook and Craigslist have started using cell-phone verification to prevent spamming from bots. For example, in order to create apps on Facebook, you have to have a phone number on record, confirmed via text message or an automated phone call. Unless your attacker has access to a whole lot of active phone numbers, this could be an effective way to verify that a human created the account and that he only creates a limited number of accounts (one for most people).
Credit cards can also be used to confirm that a human is performing an action and limit the number of accounts a single human can create.
Other [less-effective] solutions
Log analysis
Analyzing your request logs will often reveal bots doing the same actions repeatedly, or sometimes using dictionary attacks to look for holes in your site's configuration. So logs will tell you after-the-fact whether a request was made by a bot or a human. This may or may not be useful to you, but if the requests were made on a cell-phone or credit-card verified account, you can lock the account associated with the offending requests to prevent further abuse.
Math/other questions
Math problems or other questions can be answered by a quick Google or Wolfram Alpha search, which can be automated by a bot. Some questions will be harder than others, but the big search companies are working against you here, making their engines better at understanding questions like this, and in turn making this a less viable option for verifying that a user is human.
Hidden form fields
Some sites employ a mechanism where parameters such as the coordinates of the mouse when they clicked the "submit" button are added to the form submission via javascript. These are extremely easy to fake in most cases, but if you see in your logs a whole bunch of requests using the same coordinates, it's likely they are a bot (although a smart bot could easily give different coordinates with each request).
Javascript Cookies
Since most bots don't load or execute javascript, cookies set using javascript instead of a Set-Cookie HTTP header will make life slightly more difficult for most would-be bot makers. But not so hard as to prevent the bot from manually setting the cookie as well, once the developer figures out how to generate the same value the javascript generates.
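As a rough sketch of the idea (the names and flow are mine, and it assumes PHP 7+ for random_bytes()), the server can hand the page a one-time value, let a tiny inline script copy it into a cookie, and verify the cookie on the next request:

// Sketch of a JavaScript-set cookie check; everything here is illustrative.
session_start();
if (!isset($_COOKIE['js_check'])) {
    // First visit: remember the expected value server-side and let an inline
    // script (which a non-JS bot won't run) copy it into a cookie.
    $_SESSION['js_expected'] = bin2hex(random_bytes(16));
    echo '<script>document.cookie = "js_check=' . $_SESSION['js_expected']
       . '; path=/"; location.reload();</script>';
    exit;
}
if (empty($_SESSION['js_expected']) || $_COOKIE['js_check'] !== $_SESSION['js_expected']) {
    http_response_code(403);
    exit('JavaScript check failed');
}
// Otherwise the client executed JavaScript at least once; serve the page as usual.

As noted above, a bot author who reads the page source can simply set the same cookie by hand, so treat this as a speed bump rather than a wall.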
IP address
An IP address alone isn't going to tell you if a user is a human. Some sites use IP addresses to try to detect bots though, and it's true that a simple bot might show up as a bunch of requests from the same IP. But IP addresses are cheap, and with Amazon's EC2 service or similar cloud services, you can spawn a server and use it as a proxy. Or spawn 10 or 100 and use them all as proxies.
UserAgent string
This is so easy to manipulate in a crawler that you can't count on it to mark a bot that's trying not to be detected. It's easy to set the UserAgent to the same string one of the major browsers sends, and a bot may even rotate between several different browsers' strings.
Complicated markup
The most difficult site I ever wrote a bot for consisted of frames within frames within frames....about 10 layers deep, on each page, where each frame's src was the same base controller page, but had different parameters as to which actions to perform. The order of the actions was important, so it was tough to keep straight everything that was going on, but eventually (after a week or so) my bot worked, so while this might deter some bot makers, it won't be useful against all. And will probably make your site about a gazillion times harder to maintain.
Disclaimer & Conclusion
Not all bots are "bad". Most of the crawlers/bots I have made were for users who wanted to automate some process on the site, such as data entry, that was too tedious to do manually. So make tedious tasks easy! Or, provide an API for your users. Probably one of the easiest ways to discourage someone from writing a bot for your site is to provide API access. If you provide an API, it's a lot less likely someone will go to the effort to create a crawler for it. And you could use API keys to control how heavily someone uses it.
For the purpose of preventing spammers, some combination of captchas and account verification through cell numbers or credit cards is probably going to be the most effective approach. Add some logging analysis to identify and disable any malicious personalized bots, and you should be in pretty good shape.

My favorite way is presenting the "user" with a picture of a cat or a dog and asking, "Is this a cat or a dog?" No human ever gets that wrong; the computer gets it right perhaps 60% of the time (so you have to run it several times). There's a project that will give you bunches of pictures of cats and dogs -- plus, all the animals are available for adoption so if the user likes the pet, he can have it.
It's a Microsoft corporate project, which puts me in a state of cognitive dissonance, as if I found out that Harry Reid likes zydeco music or that George Bush smokes pot. Oh, wait...

I've seen/used a simple arithmetic problem with written numbers, e.g.:
Please answer the following question to prove you are human:
"What is two plus four?"
and similar simple questions which require reading:
"What is man's best friend?"
You can supply an endless stream of such questions in case the person attempting access is unfamiliar with a particular subject, and they are accessible to all readers, etc.

There's a reason why companies use captchas or logins. As ugly of a solution as captchas are, they're currently the best (most accurate, least disruptive to users) way of weeding out bots. If a login solution doesn't work for you, I'm afraid the only realistic solution is a captcha.

If users will be filling in a form, honeypot fields are simple to implement, and can be reasonably effective, but nothing is perfect. Create one or more hidden fields in the form, and if they contain anything when the form is posted, reject the form. Spambots will usually attempt to fill in everything.
You do need to be aware of accessibility. Hidden fields probably won't be filled in by those using a standard browser (where the field is not visible), but those using screen readers may be presented with the field. Be sure to label it correctly so that these users do not fill it in. Perhaps with something like "Please help us to prevent spam by leaving this field empty". Also, if you do reject the form, be sure to reject it with helpful error messages, just in case it has been filled in by a human.
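A minimal sketch of such a honeypot (the field name and wording are arbitrary, and a later answer on this page shows a similar variant). In the form:

<p style="display:none">
    <label for="website_hp">Please help us to prevent spam by leaving this field empty</label>
    <input type="text" id="website_hp" name="website_hp" value="">
</p>

Then server side (PHP):

if (!empty($_POST['website_hp'])) {
    // Almost certainly a bot; reject with a helpful message just in case it's a human.
    exit('Sorry, your submission looked like spam. Please go back, clear the last field and try again.');
}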

I suggest getting the Growmap Anti Spambot WordPress plugin and seeing what code you can borrow from it, or just using the same technique. I've found this plugin to be very effective for curtailing automated spam on my WordPress sites and I've started adapting the same technique for my ASP.NET sites.
The only thing it doesn't deal with are human cut-and-paste spammers.

Related

Securing a php contact form

I have made a simple PHP contact form following this tutorial:
http://www.catswhocode.com/blog/how-to-create-a-built-in-contact-form-for-your-wordpress-theme
The big problem is that this form processing is not safe. I have heard people can use it to send spam and/or hack my server.
What are the basic steps needed to make this form more secure?
PS: I don't want to use reCAPTCHA if it can be avoided...
Edit: I need suggestions as to which PHP functions are used to filter and ensure that the form is submitted "the right way" and not altered and/or used to hack my site or to send email to other people (using the site to send spam to others). Do I just need to use stripslashes(), or is there a better way?
One way: if you're not a huge site, it's unlikely anyone is going to take the time to figure this out.
You could use some tricky JS to handle tokens on click. Your server issues token IDs to the clickable/focusable elements on the page during the backend render phase, and logs them in a database or data file. Then, when users click around and submit, you compare the IDs sent via the onclick() handlers. You could also apply some heuristics to determine whether the history of clicks is reasonably paced, i.e. whether the posts come too fast to be from a human. Even if they scripted the hijacking of the token IDs and auto-submitted, you could check whether the time between click events appears automated. Signed up for a Twitter account lately? They use passive human detection that, while not 100% foolproof, is slower and more difficult to break; somebody would have to REALLY want to hack/spam your site.
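A very rough sketch of the timing side of that idea (the thresholds are guesses, the per-element token plumbing is only hinted at in comments, and PHP 7+ syntax is assumed):

// Record when the form was rendered, then require a plausibly human delay
// before accepting the POST.
session_start();
if ($_SERVER['REQUEST_METHOD'] === 'GET') {
    $_SESSION['form_rendered_at'] = microtime(true);
    // ...render the form, including the per-element token IDs described above...
} else {
    $elapsed = microtime(true) - ($_SESSION['form_rendered_at'] ?? 0);
    if ($elapsed < 3) {
        // Nobody reads a form and fills it in within three seconds.
        http_response_code(429);
        exit('Submitted too quickly');
    }
    // ...verify the submitted token IDs against the ones issued at render time...
}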
Important Step 2: strip out or URL-encode strange characters if you think they could break your page. Common ones that break things are ", ' and :
Another Way: http://areyouahuman.com/
As long as you are using encrypted methods, verifying humanity without crappy CAPTCHAs is possible. I mean, don't ignore your headers either. These are complementary approaches.
The key is to have enough complexity to make for an NP-Complete problem. http://en.wikipedia.org/wiki/NP-complete
When the day comes when AI can solve multiple complex Human problems on their own, we will have other things to worry about than request tampering.
http://louisville.academia.edu/RomanYampolskiy/Papers/1467394/AI-Complete_AI-Hard_or_AI-Easy_Classification_of_Problems_in_Artificial
Another company doing interesting research is http://www.vouchsafe.com/play-games: they actually use games designed so that playing them trains the RTT (reverse Turing test) to be solvable only by humans!
Here's a great article on NP-Hard problems. I can see a huge possibility here: http://www.i-programmer.info/news/112-theory/3896-classic-nintendo-games-are-np-hard.html

Best method to prevent gaming with anonymous voting

I am about to write a voting method for my site. I want a method to stop people voting for the same thing twice. So far my thoughts have been:
Drop a cookie once the vote is complete (susceptible to multi browser gaming)
Log IP address per vote (this will fail in proxy / corporate environments)
Force logins
My site is not account based as such, although it aggregates Twitter data, so there is scope for using Twitter OAuth as a means of identification.
What existing systems exist and how do they do this?
The best thing would be to disallow anonymous voting. If the user is forced to log in you can save the userid with each vote and make sure that he/she only votes once.
The cookie approach is very fragile since cookies can be deleted easily. The IP address approach has the shortcoming you yourself describe.
One step towards a user auth system, but without all of the complications:
Get the user to enter their email address and confirm their vote. You would not eradicate gaming, but you would make it harder: gamers would have to register another email address before each extra vote.
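A bare-bones sketch of that flow (the table layout, the $pdo/$itemId/$email variables, the mail wording and the URL are all placeholders; assumes PHP 7+ for random_bytes()):

// Record the vote as unconfirmed and email a one-time confirmation link.
$token = bin2hex(random_bytes(16));
$stmt  = $pdo->prepare('INSERT INTO votes (item_id, email, token, confirmed) VALUES (?, ?, ?, 0)');
$stmt->execute([$itemId, $email, $token]);

mail($email, 'Confirm your vote',
     "Click to confirm your vote: https://example.com/confirm.php?token=$token");

// confirm.php then flips the flag:
// UPDATE votes SET confirmed = 1 WHERE token = ? AND confirmed = 0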
Might be worth the extra step.
Let us know what you end up going for.
If you want to go with cookies after all, use an evercookie.
evercookie is a javascript API available that produces extremely persistent cookies in a browser. Its goal is to identify a client even after they've removed standard cookies, Flash cookies (Local Shared Objects or LSOs), and others.
evercookie accomplishes this by storing the cookie data in several types of storage mechanisms that are available on the local browser. Additionally, if evercookie has found the user has removed any of the types of cookies in question, it recreates them using each mechanism available.
Multi-browser cheating won't be affected, of course.
What type of gaming do you want to protect yourself against? Someone creating a couple of bots and bombing you with thousands (millions) of requests? Or someone with no better things to do and try to make 10-20 votes?
Yes, I know: both - but which one is your main concern in here?
Using CAPTCHA together with email based voting (send a link to the email to validate the vote) might work well against bots. But a human can more or less easily exploit the email system (as I comment in one answer and post here again)
I own a custom domain and I can have any email address I want within it.
Another example: if your email is myuser@gmail.com, you could use myuser+1@gmail.com, myuser+2@gmail.com, etc. (the plus sign and the text after it are ignored and the mail is still delivered to your account). You can also include dots in your username (my.user@gmail.com). (This only works with Gmail addresses!)
To protect against humans, I don't know evercookie but it might be a good choice. Using OAuth integrated with Twitter, FB and other networks might also work well.
Also, remember: requiring emails for someone to vote will scare many people off! You will get far fewer votes!
Another option is to limit the number of votes your system accepts from each ip per minute (or hour or anything else). To protect against distributed attacks, limit the total number of votes your system accepts within a timeframe.
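One way to express those limits in PHP/MySQL (the schema and the thresholds are illustrative, and $pdo is assumed to be an existing PDO connection):

// Reject the vote if this IP has voted too often in the last minute, or if the
// site as a whole is receiving an implausible volume of votes.
$ip = $_SERVER['REMOTE_ADDR'];

$perIp = $pdo->prepare('SELECT COUNT(*) FROM votes WHERE ip = ? AND created_at > NOW() - INTERVAL 1 MINUTE');
$perIp->execute([$ip]);

$total = $pdo->query('SELECT COUNT(*) FROM votes WHERE created_at > NOW() - INTERVAL 1 MINUTE')
             ->fetchColumn();

if ($perIp->fetchColumn() > 5 || $total > 500) { // both thresholds are guesses
    http_response_code(429);
    exit('Too many votes, please try again later');
}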
Different approach, just to provide an alternative:
Assuming most people know how to behave or just can't be bothered to misbehave, just retroactively clean the votes. This would also keep voting unobtrusive for the voters.
So, set cookies, log every vote and afterwards (or on a time interval?) go through the results and remove duplicates based on the cookie values, IP/UserAgent combinations etc.
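The cleanup pass can be as simple as keeping the earliest vote per cookie value (or per IP/User-Agent pair) and deleting the rest; the column names here are invented:

// Retroactive cleanup sketch using MySQL's multi-table DELETE.
$pdo->exec(
    'DELETE v1 FROM votes v1
     JOIN votes v2
       ON v1.cookie_id = v2.cookie_id
      AND v1.item_id   = v2.item_id
      AND v1.id        > v2.id'
);
// Run a second pass with (ip, user_agent) in place of cookie_id if you like.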
I'd assume that not actively blocking multiple votes from the same person keeps the usage of highly technical circumvention methods to a minimum, and the results are easy to clean.
As a downside, you probably can't show the actual vote counts live in the user interface, or eyebrows will be raised when a bunch of votes just happen to go missing.
Although I probably wouldn't do this myself, look at these cookies; they are pretty hard to get rid of:
http://samy.pl/evercookie/
A different way that I had to approach this problem and fight voting fraud, was to require an email address, then a person could still vote, but the votes wouldn't count until they clicked on a link in the email. This was easier than full on registration, but was still very effective in eliminating most of the fraudulent votes.
If you don't want to force users to log in, consider evercookie, but require JavaScript to be enabled in order to log the vote!
This evercookie is trivial to block because it is JavaScript-based. The attacker would likely not use a browser; with curl he could generate thousands of requests. However, such tools usually have poor JavaScript support.
Mail is even easier to cheat. When you run your own server, you can accept all email addresses, so you will have a practically unlimited pool of addresses to use.

Preventing spam bots on site?

We're having an issue on one of our fairly large websites with spam bots. It appears the bots are creating user accounts and then posting journal entries which lead to various spam links.
It appears they are bypassing our captcha somehow -- either it's been cracked or they're using another method to create accounts.
We're looking to do email activation for the accounts, but we're about a week away from implementing such changes (due to busy schedules).
However, I don't feel like this will be enough if they're using an SQL exploit somewhere on the site and doing the whole cross site scripting thing. So my question to you:
If they are using some kind of XSS exploit, how can I find it? I'm securing statements where I can but, again, it's a fairly large site and it'd take me a while to actively clean up SQL statements to prevent XSS. Can you recommend anything to help our situation?
1) As mentioned above, reCAPTCHA is a good start.
2) Akismet is a great way to flag spam before it is published. It's what WordPress uses to stop spam, and it is extremely effective. You could then reject or queue the entry for moderation based on the results. Its API is ridiculously easy to use, too. (I have PHP code if you need it.) You might need a commercial license, although I am sure you can get started using the free version.
3) Verification of email addresses is definitely a good idea, as it requires a valid email account, which many spammers do not have. Just make sure you make verifying the email address easy, because if it is too difficult it can turn legitimate users away as well.
If the bots were exploiting a hole in a script somewhere, there should be evidence of that in the logs. Check for direct POSTs to user creation scripts and the journal entry creation scripts without the usual "normal" surfing activity prior to the hit: the bots may have crawled the site only once and are bypassing the step of pulling down the forms and pretending to fill them in. Look for GET requests with obvious XSS-type data in the query strings.
You could also embed a random token in a hidden field within the forms and require that token to be present for the activation/posting to go through. If the bots only parsed your signup scripts once and are doing direct POSTs, this will stop them in their tracks until the bot creators catch on and look for the token. But it would give you some breathing space to implement a better system.
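A minimal version of that token check (the field and session key names are arbitrary; assumes PHP 7+ for random_bytes()):

session_start();

if ($_SERVER['REQUEST_METHOD'] === 'GET') {
    // When rendering the signup or posting form, embed a one-time token.
    $_SESSION['form_token'] = bin2hex(random_bytes(16));
    echo '<input type="hidden" name="form_token" value="' . $_SESSION['form_token'] . '">';
} else {
    // When handling the POST, the token must come back unchanged.
    if (empty($_POST['form_token'])
        || empty($_SESSION['form_token'])
        || !hash_equals($_SESSION['form_token'], $_POST['form_token'])) {
        http_response_code(403);
        exit('Invalid or missing form token');
    }
    unset($_SESSION['form_token']); // tokens are single-use
}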
If your user account tables don't have some kind of time-of-creation time stamp on them, put one in and have the server create the timestamp, not your user scripts. This way you can narrow down the time period(s) to scan the logs for bot activity and see what they're doing. And if nothing else, you could block the IPs the bots are posting from.
I'm surprised that someone can advise Akismet and have it accepted as the answer:
Akismet use is illegal in the EU, as it infringes privacy protection laws;
Any blacklisting system helps criminals by making the internet unusable for legit users;
Systems collecting spam to analyze it are doomed to act retroactively, always lagging behind advances in spammer techniques and bot development.
Why accumulate and analyse spam fed by bots instead of blocking the spam bots themselves?

Top techniques to avoid 'data scraping' from a website database

I am setting up a site using PHP and MySQL that is essentially just a web front-end to an existing database. Understandably my client is very keen to prevent anyone from being able to make a copy of the data in the database yet at the same time wants everything publicly available and even a "view all" link to display every record in the db.
Whilst I have put everything in place to prevent attacks such as SQL injection attacks, there is nothing to prevent anyone from viewing all the records as html and running some sort of script to parse this data back into another database. Even if I was to remove the "view all" link, someone could still, in theory, use an automated process to go through each record one by one and compile these into a new database, essentially pinching all the information.
Does anyone have any good tactics for preventing, or even just deterring, this that they could share?
While there's nothing to stop a determined person from scraping publically available content, you can do a few basic things to mitigate the client's concerns:
Rate limit by user account, IP address, user agent, etc... - this means you restrict the amount of data a particular user group can download in a certain period of time. If you detect a large amount of data being transferred, you shut down the account or IP address.
Require JavaScript - to ensure the client has some resemblance of an interactive browser, rather than a barebones spider...
RIA - make your data available through a Rich Internet Application interface. JavaScript-based grids include ExtJs, YUI, Dojo, etc. Richer environments include Flash and Silverlight as 1kevgriff mentions.
Encode data as images. This is pretty intrusive to regular users, but you could encode some of your data tables or values as images instead of text, which would defeat most text parsers, but isn't foolproof of course.
robots.txt - to deny obvious web spiders, known robot user agents.
User-agent: *
Disallow: /
Use robot metatags. This would stop conforming spiders. This will prevent Google from indexing you for instance:
<meta name="robots" content="noindex,follow,noarchive">
There are different levels of deterrence and the first option is probably the least intrusive.
If the data is published, it's visible and accessible to everyone on the Internet. This includes the people you want to see it and the people you don't.
You can't have it both ways. You can make it so that data can only be visible with an account, and people will make accounts to slurp the data. You can make it so that the data can only be visible from approved IP addresses, and people will go through the steps to acquire approval before slurping it.
Yes, you can make it hard to get, but if you want it to be convenient for typical users you need to make it convenient for malicious ones as well.
There are a few ways you can do it, although none are ideal.
Present the data as an image instead of HTML. This requires extra processing on the server side, but wouldn't be hard with the graphics libs in PHP. Alternatively, you could do this just for requests over a certain size (i.e. all).
Load a page shell, then retrieve the data through an AJAX call and insert it into the DOM. Use sessions to set a hash that must be passed back with the AJAX call as verification. The hash would only be valid for a certain length of time (i.e. 10 seconds). This is really just adding an extra step someone would have to jump through to get the data, but would prevent simple page scraping (a rough sketch of the hash check follows this list).
Try using Flash or Silverlight for your frontend.
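For the AJAX-plus-session-hash approach above, the endpoint might look roughly like this (the file names, the 10-second window and $rows are assumptions, and PHP 7+ syntax is used):

// data.php -- the AJAX endpoint. The page shell is assumed to have already done:
//   $_SESSION['data_hash']    = bin2hex(random_bytes(16));
//   $_SESSION['data_hash_at'] = time();
// and handed the hash to its JavaScript, which sends it back with each call.
session_start();

$hash = $_GET['hash'] ?? '';
if ($hash === ''
    || !isset($_SESSION['data_hash'], $_SESSION['data_hash_at'])
    || !hash_equals($_SESSION['data_hash'], $hash)
    || time() - $_SESSION['data_hash_at'] > 10) { // hash is only valid for ~10 seconds
    http_response_code(403);
    exit;
}

echo json_encode($rows); // $rows is assumed to come from your existing query code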
While this can't stop someone if they're really determined, it would be more difficult. If you're loading your data through services, you can always use a secure connection to prevent middleman scraping.
force a reCAPTCHA every 10 page loads for each unique IP
There is really nothing you can do. You can try to look for an automated process going through your site, but they will win in the end.
Rule of thumb: If you want to keep something to yourself, keep it off the Internet.
Take your hands away from the keyboard and ask your client why he wants the data to be visible but not able to be scraped.
He's asking for two incongruent things and maybe having a discussion as to his reasoning will yield some fruit.
It may be that he really doesn't want it publicly accessible and you need to add authentication / authorization. Or he may decide that there is value in actually opening up an API. But you won't know until you ask.
I don't know why you'd deter this. The customer's offering the data.
Presumably they create value in some unique way that's not trivially reflected in the data.
Anyway.
You can check the browser, screen resolution and IP address to see if it's likely some kind of automated scraper.
Most things like cURL and wget -- unless carefully configured -- are pretty obviously not browsers.
Using something like Adobe Flex - a Flash application front end - would fix this.
Other than that, if you want it to be easy for users to access, it's easy for users to copy.
There's no easy solution for this. If the data is available publicly, then it can be scraped. The only thing you can do is make life more difficult for the scraper by making each entry slightly unique by adding/changing the HTML without affecting the layout. This would possibly make it more difficult for someone to harvest the data using regular expressions but it's still not a real solution and I would say that anyone determined enough would find a way to deal with it.
I would suggest telling your client that this is an unachievable task and getting on with the important parts of your work.
What about creating something akin to the bulletin board's troll protection... If a scrape is detected (perhaps a certain amount of accesses per minute from one IP, or a directed crawl that looks like a sitemap crawl), you can then start to present garbage data, like changing a couple of digits of the phone number or adding silly names to name fields.
Turn this off for Google IPs!
Normally to screen-scrape a decent amount one has to make hundreds, thousands (and more) requests to your server. I suggest you read this related Stack Overflow question:
How do you stop scripters from slamming your website hundreds of times a second?
Use the fact that scrapers tend to load many pages in quick succession to detect scraping behaviours. Display a CAPTCHA for every n page loads over x seconds, and/or include an exponentially growing delay for each page load that becomes quite long when say tens of pages are being loaded each minute.
This way normal users will probably never see your CAPTCHA but scrapers will quickly hit the limit that forces them to solve CAPTCHAs.
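A crude per-session version of that throttle (the thresholds, the one-minute window and captcha.php are all placeholders; assumes PHP 7+ for the ?? operator):

// Count page loads in a sliding window; slow down, then demand a captcha.
session_start();

$now = time();
$_SESSION['hits'] = array_filter($_SESSION['hits'] ?? [], function ($t) use ($now) {
    return $now - $t < 60; // keep only hits from the last minute
});
$_SESSION['hits'][] = $now;
$count = count($_SESSION['hits']);

if ($count > 30 && empty($_SESSION['captcha_passed'])) {
    header('Location: /captcha.php'); // captcha.php is assumed to exist
    exit;
} elseif ($count > 10) {
    // Exponentially growing delay: barely noticeable at first, painful for scrapers.
    sleep((int) min(30, pow(2, ($count - 10) / 5)));
}

In practice you would key the counter by IP address in a shared store rather than the PHP session, since a scraper can simply throw its cookies away.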
My suggestion would be that this is illegal anyway, so at least you have legal recourse if someone does scrape the website. So maybe the best thing to do would be just to include a link back to the original site and let people scrape away. The more they scrape, the more of your links will appear around the Internet, building up your PageRank more and more.
People who scrape usually aren't opposed to including a link to the original site, since it builds a sort of rapport with the original author.
So my advice is to ask your boss whether this could actually be the best thing possible for the website's health.

Blocking spam in a PHP site without bothering the user

I'm currently working on a little chat/forum site that I roughed out in a weekend, and it has anonymous entries (i.e.: no usernames or passwords). This looks like it could be easy-cake for a spammer to ruin, but I don't want to bother the user with captchas or similar anti-spam inputs.
Are there any invisible-to-the-user alternatives to these? Thanks for your help.
One thing you should know about spammers is they always go for the low-hanging fruit. Same with hackers. By this I mean they'll pick the easiest to hit targets that affect the most users. This is why PHP and Windows vulnerabilities are often exploited: they affect so many users that if you find such a weakness/exploit your target "market" is huge.
It's also a big part of the reason why Linux and Mac OSs remain relatively unscathed by viruses for example: the target market is much smaller than Windows. Now I'm not equating the security and robustness of Windows with Mac/Linux but even though the security model of the latter two is much better the number of attacks against the former is still disproportionate with the deficiencies it has.
I say this because one of the best ways to avoid these kinds of problems is not to use popular software. phpBB, for example, has had lots of attacks made against it just because it's so popular.
So by doing your own chat/forum system you're at a disadvantage, because you have a system that doesn't have the field-testing something popular does, but you also have an advantage in that it isn't worth most spammers' time to exploit it. So what you need to watch out for is what automated systems can do against you. Contact forms on websites tend to have recognizable markers (like name, email and comment fields).
So I would advise:
Ignoring responses that come within say 5-10 seconds of sending the form to the user;
Using a honeypot (CSS/JS hidden fields as described elsewhere);
Using Javascript where applicable to render, reorder or display the form;
Using non-predictable form field names (a small sketch of this follows the list); and
Throttle bad responses by IP.
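As a sketch of the non-predictable field names point (the timing and honeypot checks are covered elsewhere on this page), you can derive the names from a per-session secret; everything here is illustrative and assumes PHP 7+:

// Give each visitor their own field names, so a bot hard-coded to post
// "name", "email" and "comment" misses the real fields entirely.
session_start();

if (empty($_SESSION['field_secret'])) {
    $_SESSION['field_secret'] = bin2hex(random_bytes(8));
}

function field_name($logical)
{
    // e.g. "comment" becomes something like "f_9c2a41d7b3e0"
    return 'f_' . substr(hash('sha256', $logical . $_SESSION['field_secret']), 0, 12);
}

// Rendering the form:
echo '<textarea name="' . field_name('comment') . '"></textarea>';

// Handling the POST:
$comment = $_POST[field_name('comment')] ?? null;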
Not a bomb-proof solution, but you can have some hidden input fields. If those are not left empty, you caught a bot.
Bots tend to fill in all input fields, while users will surely leave fields they don't see empty.
This has worked 100% of the time for me:
<input type="text" style="display:none" name="email" value="do not fill this in it is for spam catching" />
Then server side (PHP):
// The hidden field should still contain its default text; bots typically overwrite it.
if (!isset($_POST['email']) || $_POST['email'] != 'do not fill this in it is for spam catching') {
    // spam
}
As mentioned earlier, most bots fill everything in, especially inputs named "email".
The idea of captchas is that they are very easy for humans to pass but very difficult for bots etc. to get past. If you don't want this kind of solution, what will keep those spam bots from posting to your site?
It's like you would like your computer to be safe but you don't want to use an antivirus and firewall.
I think you could create a session for every user that enters your site, and the first time they want to post something, show them the captcha (don't require them to log in, just to pass the captcha). If they pass it, just store a flag in the session that they are human. As long as they have their browser open, they can post and reply on your site as they want. Bots are unlikely to pass this first test.
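That flow is only a few lines of PHP; captcha_is_valid() and show_captcha_form() below are stand-ins for whatever captcha library and markup you already use, and PHP 7+ syntax is assumed:

session_start();

if (empty($_SESSION['is_human'])) {
    if ($_SERVER['REQUEST_METHOD'] === 'POST'
        && captcha_is_valid($_POST['captcha'] ?? '')) { // stand-in for your captcha check
        $_SESSION['is_human'] = true;                   // remembered for the whole session
    } else {
        show_captcha_form();                            // hypothetical helper that renders the captcha
        exit;
    }
}

// From here on, the visitor has passed the captcha once this session; accept their post.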
There are two classes of anti-spam protection.
The first is to make it difficult for automated bots to stumble data into your site. The hidden form field method is frequently mentioned for this, and is suitable for low traffic sites. These protections can be trivially defeated by a spam bot written for your site. However if you are too small a target, this won't happen.
The second is the "bothersome" types. This usually involves a captcha, registration, or email confirmation of the post. You can use a few approaches to make this less bothersome, but it requires much more effort on the bot's side to post spam.
Note that both of these approaches can often impede disabled and mobile users.
