Laravel 5 losing sessions and .env configuration values in AJAX-intensive applications - php

I am using Laravel 5 (to be specific, "laravel/framework" version "v5.0.27") with the session driver set to 'file'.
I'm developing on a Windows 7 64-bit machine.
I noticed that sometimes (once a week or so) I get unexpectedly and randomly logged out. Sometimes this happens immediately after I log in. I added log messages to my auth logic, but the logging code was never triggered. Laravel behaved as if it had completely lost the session file.
Another, more serious issue was that sometimes, after debugging sessions (using Xdebug and NetBeans), Laravel also started losing other files: .env settings, some Debugbar JS files, etc. The error log had messages like:
[2015-07-08 13:05:31] local.ERROR: exception 'ErrorException' with message 'mcrypt_encrypt(): Key of size 7 not supported by this algorithm. Only keys of sizes 16, 24 or 32 supported' in D:\myproject\vendor\laravel\framework\src\Illuminate\Encryption\Encrypter.php:81
[2015-07-08 13:05:31] local.ERROR: exception 'PDOException' with message 'SQLSTATE[HY000] [1044] Access denied for user ''#'localhost' to database 'forge'' in D:\myproject\vendor\laravel\framework\src\Illuminate\Database\Connectors\Connector.php:47
This clearly signals that the .env file was not read by Laravel, so it fell back to the default settings:
'database' => env('DB_DATABASE', 'forge'),
'key' => env('APP_KEY', 'somekey'),
Losing files happened rarely, maybe once a month or so, and always after debugging sessions. I always had to restart Apache to make things work again.
To stress-test the system and reproduce the issues reliably, I used a quick hack in my Angular controller:
setInterval(function () {
    $scope.getGridPagedDataAsync();
}, 500);
It is just a basic data request from Angular to Laravel.
And that was it: now I could reproduce the session loss and the .env loss in 3 minutes or less.
I have developed AJAX-intensive web applications before on the same PC with the same Apache+PHP, but without Laravel and without .env, and I had not noticed such issues.
While stepping through the code, I found out that Laravel does not use PHP's built-in sessions at all but has implemented its own file-based session handling. Evidently it does not provide the same reliability as default PHP sessions, and I'm not sure why.
Of course, in real-life scenarios my app won't be that AJAX-intensive, but in my experience just two simultaneous AJAX requests can occasionally be enough to lose the session.
I have seen some related bug reports for various Laravel session issues. I haven't yet seen anything about Dotenv, though it seems to suffer from the same problem.
My guess is that Laravel does not use file locking and waiting, so if a file cannot be read for some reason (maybe because it is locked by some parallel process or by Apache), Laravel just gives up and returns whatever it can.
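To illustrate what I mean by locking and waiting, here is a minimal sketch (my own invention, not Laravel code; the function name and retry counts are made up):

<?php
// Hypothetical example: read a session file under a shared lock,
// retrying instead of giving up when the file is briefly unavailable.
function readFileWithRetry($path, $retries = 5)
{
    for ($i = 0; $i < $retries; $i++) {
        $handle = @fopen($path, 'rb');
        if ($handle !== false) {
            if (flock($handle, LOCK_SH)) { // blocks until a shared lock is granted
                $contents = stream_get_contents($handle);
                flock($handle, LOCK_UN);
                fclose($handle);
                return $contents;
            }
            fclose($handle);
        }
        usleep(100000); // wait 100 ms before the next attempt
    }
    return false; // the caller decides how to treat a truly unreadable file
}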
Is there any good solution to this? Maybe it is specific to Windows and the problems will go away on a Linux machine?
I'm curious why the Laravel (or Symfony) developers haven't fixed their file session driver yet. I know that locking/waiting would slow it down, but it would be great to at least have an option to turn on "reliable sessions".
Meanwhile I'll try to step through the Laravel code and see if I can invent some quick-and-dirty fix, but a reliable, best-practices solution would be much better.
Update about .env
The issue turned out not to be related to file locking. I found the Laravel bug report for the .env issue, which led me to a linked report in the Dotenv project, which, in turn, says it is a core PHP issue. What disturbs me is that the Dotenv devs say Dotenv was never meant to be used in production, yet Laravel seems to rely on it.
In https://github.com/laravel/framework/pull/8187 there seems to be a solution which should work in one direction, but a commenter says that in their case the issue was the opposite. A user called crynobone gave a clever code snippet to try:
$value = array_get($_ENV, $key, getenv($key));
Another suggestion, which appeared on both the Dotenv and Laravel GitHubs, was to use makeMutable(), but commenters report that this might break tests.
So I tried crynobone's code, but it did not work for me. While debugging, I found out that in my case, when things break down under concurrent requests, the $key cannot be found in getenv(), in $_ENV, or even in $_SERVER.
The only thing that worked (a quick-and-dirty experiment) was to add:
static::$cached[$name] = $value;
to the Dotenv class; then, in the env() helper in helpers.php, I see that:
Dotenv::$cached[$key]
is always good, even when $_ENV and getenv() both give nothing.
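For context, here is roughly what the experiment looked like (a sketch of my hack, not a drop-in patch; see the gists below for the real thing):

// In the Dotenv class: remember every value it sets in a static cache.
public static $cached = array();

public static function setEnvironmentVariable($name, $value = null)
{
    // ... the existing putenv()/$_ENV/$_SERVER logic stays as-is ...
    static::$cached[$name] = $value; // the added line
}

// In helpers.php, env() falls back to that cache when all else fails:
function env($key, $default = null)
{
    $value = getenv($key);
    if ($value === false && isset(Dotenv::$cached[$key])) {
        $value = Dotenv::$cached[$key]; // survives the lost-environment races
    }
    return $value === false ? value($default) : $value;
}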
Although Dotenv was not meant for production, I don't want to change our deployment and configuration workflow.
Next I'll have to investigate the session issues...
Addendum
Related Laravel bug reports (some date back to version 4 and, it seems, were never fixed):
https://github.com/laravel/framework/issues/4576
https://github.com/laravel/framework/issues/5416
https://github.com/laravel/framework/issues/8172
and an old article which sheds some light on what's going on (at least with session issues):
http://thwartedefforts.org/2006/11/11/race-conditions-with-ajax-and-php-sessions/

After two days of intensive debugging I have some workarounds which might be useful to others:
Here is my patch for Dotenv 1.1.0 and Laravel 5.0.27 to fix .env issues:
https://gist.github.com/progmars/db5b8e2331e8723dd637
And here is my workaround patch to make the session issues much less frequent (or fix them completely, if you don't write to the session yourself on every request):
https://gist.github.com/progmars/960b848170ff4ecd580a
I tested them with Laravel 5.0.27 and Dotenv 1.1.0.
I also recently recreated the patches for Laravel 5.1.1 and Dotenv 1.1.1:
https://gist.github.com/progmars/e750f46c8c12e21a10ea
https://gist.github.com/progmars/76598c982179bc335ebb
Make sure you add
'metadata_update_threshold' => 1,
to your config/session.php for this patch to become effective.
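For reference, a sketch of where that line lands; note that 'metadata_update_threshold' is an option introduced by the patch, not stock Laravel:

// config/session.php
return [
    'driver' => env('SESSION_DRIVER', 'file'),
    // ... all the stock options stay as they are ...
    'metadata_update_threshold' => 1, // read by the patched file session handler
];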
All the patches must be re-applied to the vendor folder each time it gets recreated. For that reason you might want to keep the session patch separate: config/session.php is updated just once, while the vendor parts of the patches have to be re-applied before every deployment that recreates the vendor folder.
Be warned: "it works on my machine". I really hope the Laravel and Dotenv developers will come up with something better, but meanwhile I can live with these fixes.

My personal opinion is that using .env to configure Laravel was a bad decision. Having .php files that contained key/value style configuration was much better.
However, the problem you are experiencing is not PHP's fault, nor Apache's - it's most likely a Windows issue.
A few other things: Apache has a module called mod_php that integrates the PHP binary into Apache's own process or threads. The issue with this is not only that it is slow, but that integrating one binary into an existing one is super tricky and things might be missed; PHP must also be built with thread-safety in this case, and if it's not, weird bugs can (and will) occur.
To circumvent the problem of tricky integration of one program into another, we can avoid it completely and have .php served over the FastCGI protocol. This means that the web server (Apache or Nginx) will take the HTTP request and pass it to another "web" server - in our case, the PHP FastCGI Process Manager, or PHP-FPM.
PHP-FPM is the preferred way of serving .php pages - not only because it's faster (much, much faster than integrating via mod_php), but because it lets you scale your HTTP frontend horizontally and have multiple machines serve .php pages.
However, PHP-FPM is what's called a supervisor process, and it relies on process control. As far as I'm aware, Windows does not support process control the way *nix does, therefore php-fpm is unavailable for Windows (in case I am wrong here, please correct me).
What does all of this mean for you? It means that you should use software that's designed to play nicely with what you want to do.
This is the logic that should be followed:
A web server accepts HTTP requests (Apache or Nginx)
The web server validates the request, parses the raw HTTP request, determines whether the request is too big, and, if everything goes well, proxies the request to php-fpm
php-fpm processes the request (in your case it boots up Laravel) and returns the HTML, which the web server sends back to the user
Now this process, while great, comes with a few issues, and one huge problem here is how PHP deals with sessions. A default PHP session is a file stored somewhere on the server. This means that if you have two physical machines serving your php-fpm, you're going to have problems with sessions.
This is where Laravel does something great: it lets you use encrypted, cookie-based sessions. These come with limitations (you can't store resources in such sessions, and you have a size limit), but a correctly built app wouldn't store too much info in a session in the first place. There are, of course, multiple ways of dealing with sessions, but in my opinion the encrypted cookie is super trivial to use and powerful. When such a cookie is used, it's the client who carries the session information, and any machine that has the decryption key can read the session. That means you can easily scale your setup to multiple servers: all they have to do is have access to the same decryption key (it's the APP_KEY in the .env). Basically, you copy the same Laravel installation to the machines that you wish to serve your project.
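For what it's worth, switching to that driver in Laravel 5 is a one-line change in config/session.php; a minimal sketch (the remaining options keep their stock values):

// config/session.php
return [
    'driver' => 'cookie', // the session payload travels inside an encrypted cookie
    'lifetime' => 120,
    // ... other stock options unchanged; APP_KEY in .env is the encryption
    // key, and every server in the pool must share the same key.
];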
The way I would deal with the issue that you have while developing is the following:
Use a virtual machine (say, Oracle VirtualBox)
Install Ubuntu 14.04
Map a network drive to your Windows host machine (using Samba)
Use your preferred way of editing PHP files, but store them on the mapped drive
Boot nginx or Apache, along with php-fpm, on the VM to serve your project
Here is what you gain from this process: you don't pollute your Windows machine with a program listening on ports 80/443; when you're done working, you can just shut the VM down without losing work; and you can easily simulate how your website would behave on an actual production machine, with no surprises like "it works on my dev machine but it doesn't work on my production machine", because you'd have the same software for both purposes.
These are my opinions; they are not all cold facts, and what I wrote here should be taken with a grain of salt. If you think what I wrote might help you, then please try to approach the problem that way. If not, well, no hard feelings, and I wish you good luck with your projects.

Related

Way to secure a Symfony production env from caching-related bugs

Okay, this might sound a little like a troll, but it's not. Since I started programming with Symfony, I have encountered very, very, very weird bugs - like three times in a month. They were always related to my cache files, and every time I spent hours to finally figure out that the cache was the cause.
I'm working on a project involving cryptography, and when my dear Symfony starts to forget keys between two encryptions with the same key, I start to freak out about the future of my web application.
(The real bug is too weird for me to really explain it.)
I am going to store sensitive data, and I can't imagine any of these things happening in production. All I can say is that since I cleared AND removed my cache folder, the bug has disappeared. Not exactly trust-inspiring behaviour!
So, are those weird bugs only related to the development environment, due to the large number of file updates?
In what situations do you find that you need to clear your cache in the development environment?
Should I deactivate all kinds of caching in the production environment to guarantee this will not happen again?
Thank you guys.
Think three times before you decide to disable the cache in production. It's probably a bad idea.
In dev, if you modify bundles, some config, etc., Symfony sometimes fails to refresh the cache; to be sure that the errors you receive are real, you should delete the whole app cache and clear memcached/Redis if you use them.
In production you probably will (and should) create a new directory with a clean, fresh version of your app and a clean cache, and then replace your old revision with the new one (usually by changing a symlink). That's why, if your app works in dev with a clean cache, it should be OK in the prod environment.

Limit PHP Execution?

We are working on making a WordPress-type system from scratch, with a templating system, and I am wondering about security. We hope to have a SaaS model where a user will be on the same server as a few other users, but we also hope to give them the tools to modify their own Views files, which means PHP access. We are using Laravel as the framework. As a long-time DreamHost user, I know you can section the same machine off into multiple environments, but I'm not really sure what they were using to do so.
How can I prevent the execution of commands like eval() and system commands, and limit the users' access to fopen() (I assume that is mostly through Linux user permissions)? I would like to give them direct file access to the Views folder to develop their own solutions instead of forcing them to go through me, but without jeopardizing too much. If there are MySQL considerations beyond separate users, feel free to chime in there as well.
There are several layers that you need to protect.
Some hosters incorrectly rely on PHP "protections" like open_basedir, safe_mode (older PHPs), disable_functions, etc.
Even PHP does NOT consider these to be security features - http://php.net/security-note.php
They can be disabled with any exploit for PHP, and then the whole system is doomed; do not rely on them alone.
How it should be done
the bottom
Separate OS-level/system user for each hosted site
correct permissions (one user must not be able to view/edit another's pages) - also make sure sub-directories have correct permissions, as they will be similar
separate session files (LOTS of web hosts put the session files of every PHP-hosted site into the same directory; that's bad, bad, bad!)
Apache finally got its own module for this - Apache MPM-ITK.
Long story short: picture this as if you gave each user a shell on the machine (under his own uid) - he must not be able to do anything to the other hosted sites.
different uid/gid
system permissions
frameworks like SELinux, AppArmor and similar
grsecurity if you want to be hardcore.. but the system will be harder to maintain.
going up?
You can get more hardcore. The best I've seen is a shared library for Apache (or whatever you use) that is loaded at startup via LD_PRELOAD and that intercepts all the potentially malicious system calls like system(), execve() and basically any other call you consider bad.
I haven't seen a good implementation of this out there yet (other than custom ones somewhere) - correct me if I'm wrong.
Make sure to implement a white-list for this, as e.g. mail() in PHP executes sendmail by default, and that won't work anymore.
conclusion
Add the classic disable_functions, open_basedir, etc. to the global php.ini, and add session.save_path to every vhost - put sessions into per-user directories (see the sketch at the end of this answer). Make sure users don't share anything.
Implement underlaying OS-level separation correctly.
Get hardcore with grsec and LD_PRELOAD lib hooking system calls.
Separation, separation, separation... soon enough, systems like Docker will provide LXC-based containers to separate users at the kernel level, but it's not quite production-ready yet (imho).
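As promised above, a php.ini-style sketch of the classic per-site mitigations (paths and function lists are examples only, and remember: these are mitigations, not real security boundaries on their own):

; in the global php.ini
disable_functions = exec,passthru,shell_exec,system,proc_open,popen

; per vhost: confine each site and give it its own session directory
open_basedir = /var/www/site1:/tmp/site1
session.save_path = /var/www/site1/tmp/sessions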

How is the Twelve-Factor App manifesto applied to PHP projects?

I just read the Twelve-Factor App manifesto, which looks like a pretty comprehensive set of rules to apply to a web-based application. It uses Python or Rails in its examples, but never PHP... I was wondering which factors of the manifesto can be applied to PHP projects, and how?
Thanks
Short answer:
All points apply to PHP, as the twelve-factor app manifesto refers specifically to web apps.
PHP has a very hard time complying with twelve-factor, in particular on items 2, 7, 8, 9 (as a side effect of 7 and 8) and 12 (partially). Actually, that's the first really grounded argument I have heard in the whole "PHP sucks" rant that is common in the Ruby and Python communities (don't get me wrong: I think Ruby and Python are better languages, but I don't hate PHP, and I definitely hate the "my language is better" rants).
That being said, it may be that your PHP project is not a web app or SaaS but just a simple website, so you may deem that twelve-factor is not a need.
Long answer: A point-by-point analysis would be:
Codebase: not an issue
Dependencies: the way PEAR works goes quite against this point, as PEAR dependencies are installed system-wide and you usually don't have a consolidated manifest that declares them. It is also usual for a PHP setup to require you to add packages to your OS installation to get some libraries available. Finally, AFAIK there isn't a tool in PHP to provide isolation like "virtualenv", "rbenv" or "rvm" (or if it exists, it is not popular in the PHP community). Edit: Composer (http://getcomposer.org/) seems to do the right thing regarding dependencies (see the manifest sketch at the end of this answer); it still doesn't isolate the PHP version, but for all the rest it should be fine.
Config: some PHP frameworks are not very well suited to this, but there are certainly others that do it well, so it's not a flaw of the platform itself
Backing services: shouldn't be much of an issue, despite maybe having to hack some frameworks a little in order to manage the services as resources
Build, release, run: this is totally applicable to PHP, as it definitely doesn't concern only "compiling". One may argue that several projects and hosting platforms in the PHP community abuse direct FTP, etc., but that's not a flaw of PHP itself, and there is no real impediment to doing things right regarding this item.
Processes: this definitely concerns PHP. PHP is quite capable of running purely stateless processes (the emphasis is on the word stateless), and several frameworks actually make this easy. For example, Symfony provides out-of-the-box session management with memcached or database storage instead of regular sessions
Port binding: oversimplifying it, this point basically demands that you reverse-proxy and have the actual web server embedded in the app instead of being a separate component. This puts PHP in a very hard position to comply. There are ways to do it (see the reply about using PHP as FastCGI), but that's definitely not the most common nor the best-supported way to serve a PHP app, as it is in other communities (e.g. Ruby, Node.js).
Concurrency: this is not impossible in PHP, but several elements put PHP in a hard position to comply: namely, the lack of good support for items 6 and 7; the fact that the PHP API to spawn new processes isn't really nice to work with; and especially the way Apache's mod_php handles its workers (which is by far the most common deployment scheme for PHP)
Disposability: if you use the right tools, there is nothing inherent in PHP to prevent you from creating fast, disposable, tidy web and worker processes. However, I believe that since the underlying process model is hard to implement per items 7 and 8, item 9 becomes a bit cumbersome as a side effect
Dev/prod parity: this is very platform-agnostic and, I would say, one of the hardest to get right. PHP is no exception to this rule, but it doesn't have a particular impediment either. Actually, most of the tools named in the manifesto can be applied to a PHP project
Logs: having your app agnostic of the logging system of the execution environment is totally doable in PHP
Admin processes: the most important flaw of PHP regarding this point is its lack of a REPL shell. As for the rest, several frameworks like Symfony allow you to program admin tasks (e.g. Doctrine-based database migrations) and run them in the same environment as your "regular" web environment.
Since the PHP community is evolving, it may be that it has already righted some of the wrongs mentioned.
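As mentioned in the Dependencies item above, Composer works from a composer.json manifest at the project root; a minimal sketch (the package and version constraints are just examples):

{
    "require": {
        "php": ">=5.3.3",
        "monolog/monolog": "~1.0"
    }
}

Running composer install then puts every declared library under the project's own vendor/ directory instead of system-wide, and composer.lock pins the exact versions for the next machine.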
"Build, release, Run: Applicable to compiled code which is not the case in PHP. So this point is not something you need to look at."
I don't claim any authority on this 12 factor stuff, but my read of that section is that the author would disagree. It's not just about compiling, it's about managing dependencies both in the small (the snapshot of the code) and in the large (any libraries the code uses).
Imagine you're a new dev and they say, "Okay, this is a custom php app, so...
a) The custom code is really two subprojects, which are in repo A and repo B.
b) You'll need to create a directory layout like so, and then
c) check the code for each subproject out into this subdirectory and this subdirectory.
d) You'll also need these three open source PHP libraries:
version 3.1 of library Foo,
version 2.3 of library Bar, and
version 5.6 of library Bat.
e) download them from their home project sites and unpack them, then copy them into this directory, that directory, and the other directory.
f) then you'll need to set these configurations in the external library,
g) and these configs in our two custom code projects.
h) once that's all done, tar/gzip it all up, upload it to the QA server, and untar it into htdocs.
There's no compiling going on in that set of steps, but you can bet there's a lot of building.
Getting all of that set up and working is the build step.
Using tar/gzip to take a snapshot of the working build is the release step.
SCP'ing/unpacking it into the QA server's htdocs directory is the runtime step.
You might say that some of the steps above are unnecessary - the libraries should be deployed at the system level and merely imported. From the 12factor.net site, I'd say the author prefers you to import them uniquely and individually for each app. It sidesteps dependency-versioning problems at the cost of more disk space (not that anybody cares). There are more hassles in managing all those dependencies as local to the app, but then that's the point of the build/release/run scheme.
It might have changed since you read it - there are a few PHP examples now, although a few of them seem like negations of the twelve-factor concept.
One of the ways that normal mod_php sites violate twelve-factor is with VII. Port binding. From the manifesto:
The twelve-factor app is completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port, and listening to requests coming in on that port.
However, apache/php can be coaxed into this worldview with something like this:
https://gist.github.com/1398498
Not perfect, but it works for the most part. To test it out, install foreman:
gem install foreman
then run it in the directory you cloned the gist into:
foreman start
then visit
http://localhost:5000/foo
DO NOT TAKE THIS POST AS A REFERENCE, THIS WAS WRITTEN IN 2011, MANY THINGS HAVE CHANGED SINCE THEN... THE WORLD OF PROGRAMMING IS IN CONSTANT EVOLUTION. A POINT OF VIEW OF 2011 IS NOT NECESSARILY STILL VALID IN 2018.
I just read a few lines of each point, and here goes my analysis of the document:
Codebase: everyone, PHP or not, should have a codebase, even for little projects
Dependencies: PHP uses includes and code libraries that you should always be able to port easily by simply copying the code to a server. Sometimes you use PEAR, and then if the server doesn't support it, you have to copy and install PEAR manually. I use Zend Framework most of the time, so it's just a matter of copying the framework code along with the FTP upload.
Config: it is common for PHP apps to have a writable config file that you store configuration in. Where you store it is your choice as a developer, but it is usually either at the root of your app or in a settings/config folder.
Backing services: PHP does use backing services most of the time, such as MySQL. Other common services, depending on your app, are Twitter and Facebook; you use their APIs to communicate with them and store/retrieve data to work with.
Build, release, Run: Applicable to compiled code which is not the case in PHP. So this point is not something you need to look at.
Processes: HTTP is stateless and is SERVED; thus, you usually have no process apart from the web server. This is not entirely true, as a web service may be bundled with applications you package with it or create for it. But, for the sake of standards, you usually don't have to use processes in the web world.
Port binding: PHP doesn't apply to port binding at all, because it is not a process that binds to an address; Apache, Nginx or Lighttpd does that for you. Reading #6/#7 makes me understand that a website could never be a twelve-factor app.
Concurrency: again, this point is about processes, which do not apply to PHP web pages. This point applies to the server serving the content.
Disposability: this point discusses how fast a process should be. Obviously, PHP web sites, not being a process, shouldn't apply, but always note that your website should execute pages as fast as possible... Always look at this block or that block of code and check whether it is optimized.
Dev/prod parity: this point is crucial in any app's development. You never want to have a large gap between two app versions, or upgrading can become a hassle. Furthermore, never create client-specific versions of an app. Always find ways to allow your app to be configured/customized at the template level, so you can keep your app as close as possible to all installed versions everywhere.
Logs: logs are always a good thing to have; they allow you to follow the flow of your code without having to output it to the screen. My favorite tactic is to "tail -f logfile" inside a Linux console and watch what happens as I execute my code.
Admin processes: not applicable; in PHP you don't have processes, but you do have pages that you can secure with usernames and passwords.

What are the best practices for PHP deployment from WinXP to WinXP?

Where I began working, they are using a Remote Desktop Connection to transfer files and manage the server, but that seems really insecure to me, and a good way to cause errors and take down the Apache/PHP/MySQL stack.
I was proposing FTP to transfer files more easily (and, compared to the other way, more securely), but then I started reading about PHP deployment. It seems pretty easy on Linux, but on Windows I haven't found out which is the best way to do it...
So far I think Git on the server, with commits from the developer, is my best shot, but what about database deployment?
Phing/Jenkins/Capistrano seem overly complex... but I will try them if you guys think they are good.
The approach I am using is database migration scripts. They look like this:
db-update-001.sql
db-update-002.sql
I have a script which executes them sequentially and creates a *.ok file for each one that succeeds. The *.sql files contain "alter" statements and are stored in Git. The *.ok files are not stored, so when you distribute changes, you need to apply only the scripts without .ok files.
I use this file: https://github.com/atk4/atk4/blob/master/tools/update.sh
but since you are in an MS environment, you might need to do something different.
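Since the linked update.sh is a *nix shell script, a rough PHP equivalent for a Windows box could look like this (the connection details and the one-statement-per-file assumption are mine):

<?php
// Run each db-update-NNN.sql exactly once, marking success with an .ok file.
$pdo = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

foreach (glob(__DIR__ . '/db-update-*.sql') as $script) { // glob() sorts, so 001 runs before 002
    $marker = $script . '.ok';
    if (file_exists($marker)) {
        continue; // this migration was already applied here
    }
    $pdo->exec(file_get_contents($script)); // assumes one ALTER statement per file
    touch($marker);                         // record success
    echo 'applied: ' . basename($script) . PHP_EOL;
}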
While MSRDP is not the most secure protocol around, it's a very long way ahead of FTP.
FTP is intrinsically insecure: it sends passwords as clear text. It's also a PITA to manage across a stateful firewall, even where you can ensure consistent PASV behaviour.
However, you do need a method for transferring files which can be scripted/automated.
I'd go back and have a long, hard look at the available deployment tools. I can't comment on how well the other products compare with Phing, having only used the latter; mostly I've used stuff developed in-house.
Since you really should be using a version control system, I'd recommend considering using it as your file-delivery mechanism too.

APC not recommended for production?

I have started having problems with my VPS: it would fail to serve the pages on all the websites. It just showed a blank page, or offered to download the .php file (luckily, the code was not in the downloaded file :) ).
The server was still running, but this seemed to be a problem with PHP, since I could still log in to WHM.
If I restarted Apache, the sites would work again.
After some talks with the server support, they told me this is a problem with the APC extension, which they considered to be old and not recommended for production servers. So they removed it for now, to see if the same kind of failures continue to appear.
I haven't read anywhere that APC could have such problems or that it is not always recommended to use; quite the contrary - everywhere, people say to always use it.
The APC extension was installed via SSH and is the latest version.
Edit:
They also don't recommend Memcache, and say that a more reliable extension would be eAccelerator.
Um, APC is current tech and almost a must for any performant PHP site.
Not only that but it will ship as standard in PHP 6 (rather than being an optional module like it is now).
I don't know what your issue is/was but it's not APC being outdated or old tech.
I run several servers myself, and the only time I have ever had trouble with APC was when trying to run it concurrently with Zend Optimizer. They don't work together, so if I must use Optimizer (e.g. if some commercial third-party code requires it), I run eAccelerator instead of APC. Performance-wise it's effectively six of one, half a dozen of the other, but I really doubt APC is the problem here.
Just to add: memcached is only going to benefit you greatly if you are running multiple servers which need to access a shared data cache. Memcached does not do opcode caching like APC/eAccelerator/XCache/etc.
The problem has nothing to do with APC. If APC had a problem, it would either show up in your PHP log file, or you simply wouldn't be able to access your website until you adjusted APC. The problem is more likely with Apache itself. I've experienced the same blank-page problem before, and it was caused by mod_security acting up and preventing pages that looked "suspicious" from being sent. Also, memory usage in Apache is good at killing a server under load. I've also had experience with a web host that had compiled Apache with a memory leak, so every X requests (say 100,000) the server would crash! Most annoying.
Your web host doesn't sound like the most competent out there, as they are giving some bad advice, most likely based on ignorance.
APC should be used in production (with the stat check turned off in production, but on for development). You can get more stats about your APC setup while it's working by loading the APC status page that ships with it; you get a nice page like this: http://drupal.org/files/images/APC%20Status-1.png
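For reference, the relevant php.ini directives would look something like this (directive names from the APC extension of that era; the size is just an example):

apc.enabled = 1
apc.stat = 0       ; production: skip checking file modification times on every request
; apc.stat = 1     ; development: pick up edited files immediately
apc.shm_size = 64M ; shared-memory cache size; tune to your site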
Memcache is very heavily used, as it is also distributed! The use of each is as follows:
APC is the fastest, as it works closest to PHP, but it only works on the same server that executes the PHP itself, so its use is limited in that scope. It is used primarily as an opcode cache.
Memcache is like a very fast database spread over many computers working as one unit. However, a power cut will wipe the lot! Hence it is heavily used to take pressure off the persistent database. Facebook and many other sites have hundreds of servers running memcache.
My advice would be to find a web host that understands PHP. Fighting with web hosts over who's right and who's wrong is hard work... until you find a good one ;)
Sounds to me like they are pushing a product that they probably get referral kickbacks on.
I run my own servers (and have for a while) and I've never had this problem, nor any MAJOR problems with Memcache.
