OK, here's my problem: I have one main server with a measly 128MB of RAM. I also have a few other servers, but they cannot support certain things, which makes them unusable as web servers (not ideal for development, for technical reasons). The thing is, though, these servers have 4GB of RAM. I want to put them to good use by turning them into memcached buckets.
Is this possible?
Of course you will think I'm crazy for not just using one of the 4GB servers, but I can't: the service provider blocks certain ports (port 25 is the one causing me trouble, since my web application needs to send mail).
I am using PHP. If this can work, please tell me what I need to install, given that the memcached server won't be running on my web server.
Also, what ports will I have to forward?
Thanks in advance.
If your application needs memory, run it on those servers anyway! Use your server that has full access to the internet as nothing more than a gateway to those servers.
You can do this in a variety of ways: simple routing, NAT, proxying, or even just mapping some ports across.
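If you do go the memcached route, the PHP side is just a matter of pointing the client at the other boxes; the only port you would need to forward or open is memcached's default, 11211. A minimal sketch, assuming the memcached PECL extension on the web server and placeholder addresses for the 4GB machines:

    <?php
    // Sketch: the 128MB web server using the 4GB machines as memcached buckets.
    // The IPs are placeholders; use private/internal addresses if you have them.
    $mc = new Memcached();
    $mc->addServers([
        ['192.168.0.11', 11211],   // 4GB server #1
        ['192.168.0.12', 11211],   // 4GB server #2
    ]);

    // Cache an expensive lookup for five minutes.
    $user = $mc->get('user:42');
    if ($user === false && $mc->getResultCode() === Memcached::RES_NOTFOUND) {
        $user = load_user_from_db(42);   // hypothetical helper
        $mc->set('user:42', $user, 300);
    }

On the 4GB machines themselves, only the memcached daemon needs to be installed and left listening on 11211; nothing PHP-related has to live there.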
Overview
I have a Laravel 9 application hosted with DigitalOcean. I use Laravel Forge to handle provisioning of the servers, management, etc. I've created two separate servers for my production environment: one hosts my Laravel application code and the other hosts the database, which runs MySQL 8. The two servers are networked together and communicate over their VPC-assigned private IP addresses.
Problem
I initially provisioned one server to host my application. This single server hosted both the Laravel application code and database. I have an endpoint that I hit to measure the response time for my application.
With one server hosting both the codebase and the database, the average response time was ~70ms.
When I hit the same endpoint with the two dedicated servers, the average response time was ~135ms.
Other endpoints in my application also have a significant increase in response time when the database lives on a dedicated server vs a single server that houses everything.
Things I have done
All database queries have been optimized. (n+1, etc.)
Both networked servers are in the same region.
Both networked servers' resource usage (CPU, RAM) is low and not capping out.
I've turned on Laravel's database config "sticky" option (see the config sketch after this list) with no noticeable improvement.
I've enabled PHP OPcache for PHP 8.1.
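For reference on the "sticky" item above, that option only does anything in combination with Laravel's read/write connection config. A minimal sketch of config/database.php, with a placeholder standing in for the database droplet's VPC private IP:

    // config/database.php (sketch; the host is a placeholder for your private IP)
    'mysql' => [
        'driver' => 'mysql',
        'read' => [
            'host' => ['10.116.0.3'],
        ],
        'write' => [
            'host' => ['10.116.0.3'],
        ],
        // After a write, later reads in the same request reuse the "write"
        // connection, so the request sees its own changes immediately.
        'sticky' => true,
        'database' => env('DB_DATABASE'),
        'username' => env('DB_USERNAME'),
        'password' => env('DB_PASSWORD'),
    ],

With a single database server the read and write hosts are identical anyway, so it's unsurprising that enabling it produced no measurable difference.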
Questions
How can I achieve a faster response time when my database is on a separate server than my codebase?
Am I sacrificing performance for scalability with dedicated servers?
TLDR
I'm experiencing slower response times in my Laravel application when the codebase and database run on separate dedicated servers vs hosting everything on one server.
Are your servers in the same data center and on the same VLAN?
Are you sure that you are connecting with your private VLAN IP address?
Some latency is expected if you need to connect to a database on another server. Have you tried to ping between the servers to see what the latency is?
Do you really need to have the web server and the database on separate servers? If so, I would probably try DigitalOcean's managed database offering. I have used it for several projects and it works great.
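To put a number on the ping suggestion from inside PHP itself, a quick sketch that times a trivial query against the database server (the DSN and credentials are placeholders):

    <?php
    // Sketch: measure the raw round trip to MySQL over the private network.
    $pdo = new PDO('mysql:host=10.116.0.3;dbname=app', 'forge', 'secret');

    $start = microtime(true);
    for ($i = 0; $i < 100; $i++) {
        $pdo->query('SELECT 1')->fetch();
    }
    printf("Average round trip: %.2f ms\n", (microtime(true) - $start) / 100 * 1000);

If that average is more than a millisecond or two inside the same VPC, the connection is probably not going over the private interface.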
Q: How can I achieve a faster response time when my database is on a separate server than my codebase?
A: If hosted in the same data center, the connection latency should be 30ms or less. Tested between AWS RDS and EC2 instances. Your mileage could vary depending on host.
Q: Am I sacrificing performance for scalability with dedicated servers?
A: It's standard practice to host databases separately from your application; it would be unrealistic to do otherwise for bigger projects. You can soften the impact by selectively caching data that doesn't change regularly on the application server. Unfortunately, PHP is not particularly good at this kind of fine-tuning, so you might be out of luck.
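As an illustration of that selective caching in Laravel, a minimal sketch (the key, TTL, and query are placeholders, and it assumes whatever cache store the app already uses):

    // Sketch: keep rarely-changing data on the app server so most requests
    // skip the cross-server database round trip entirely.
    use Illuminate\Support\Facades\Cache;
    use Illuminate\Support\Facades\DB;

    $plans = Cache::remember('billing:plans', now()->addMinutes(30), function () {
        return DB::table('plans')->orderBy('price')->get();   // hypothetical table
    });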
I can tell you that I currently run a central MySQL RDS instance that many Ubuntu EC2 instances communicate with. While the queries take around 30ms, smart use of caching gives the majority of my web requests a 30ms response time in their own right. I do have the advantage of using Node.js, which can keep doing work in the background without needing a request to trigger it.
You may unfortunately find that you're running into one of the limitations of PHP.
I am new to this topic, so I have two questions to ask in this one post.
First of all, I am a PHP developer who wants to host my application on my own PC
(my application is something like a social media site, which I assume will have many users)
(I don't want to use any public web hosting / VPS, considering the cost and the security of my data),
and I have decided to build my own web server for my start-up company.
The problem is that buying a server is very expensive compared to a desktop PC.
My question no. 1 is:
For a web server based on PHP (Apache) that uses SQL Server as its database, can I just use a desktop PC instead of a server?
(Consider that it will be online 24 hours a day and processing a large number of requests, assuming I have many users online at the same time.)
If, let's say, I bought a $1000 desktop PC and put the money into processor, memory, and storage,
would it be worth more than a $1000 server with the money likewise put into processor, memory, and storage?
Question no. 2 is:
If I must use a server instead of a desktop PC as my web server, I will use Windows Server as my OS,
but if I can use a desktop PC, can I use Windows 7 Professional instead of Windows Server?
Some websites told me that Windows 7 Professional is not as capable as Windows Server for a normal server (but I don't know about a web server),
and I don't really know what the disadvantages are of using Windows 7 Professional instead of Windows Server as the OS for this PHP application.
I'll address your first question:
The main issues with a PC as a server are availability and security.
Servers are secured and configured in a way that prevents most issues you normally wouldn't even think about, such as disabling eval, exec, and remote file_get_contents by default, among many other things. Hosting companies also provide support and assistance on a wide variety of topics
(automatic backups of SQL databases, machine users, and files as well).
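As an illustration of that kind of default hardening, a php.ini sketch (the exact directive list is an assumption, and note that eval is a language construct, so disable_functions cannot actually block it):

    ; Sketch of typical shared-host hardening; not an exhaustive or authoritative list
    disable_functions = exec,shell_exec,system,passthru,popen,proc_open
    allow_url_fopen   = Off   ; blocks file_get_contents('http://...') and similar remote reads
    allow_url_include = Off
    expose_php        = Off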
The second issue is that if your house loses power, your website is down. If your hard drive crashes, it takes hours and hours to replace it, reinstall, reconfigure, and redeploy your website.
Don't expect your new app to be the next LinkedIn, Twitter, or Facebook when it comes to traffic and usage. Just start with a small hosting company for a few dollars a year (you can get really cheap hosting, but you get what you pay for) and upgrade accordingly.
With $1000 you can buy really good hosting with superb specs for quite a long time.
My suggestion is to start with web hosting and grow slowly; most hosting companies will allow you to upgrade.
You can use your local machine as a development environment, but the actual deployment should be done on a server.
The first issue is internet bandwidth. At a data center, bandwidth is usually much better than on a home PC's connection.
The second is a public ("white") IP address that is reachable from everywhere. Not all internet service providers offer this.
So I think you can give it a try if you have a good ISP.
Also, I think you can use Linux instead of Windows if your project is a PHP-based site.
What do you use as your SQL database: MySQL, PostgreSQL, MariaDB, or Microsoft SQL Server?
I think Windows is only needed if you want to use Microsoft SQL Server. In all other cases, Linux is an easier and cheaper alternative.
I'm currently developing a PHP application that is going to use websockets for client-server communication. I've heard numerous times that PHP shouldn't be used for server applications because of its lack of threading mechanisms, its memory management (cyclic references), and its unwieldy socket library.
So far, everything is working quite well. I'm using phpws as the websocket library and the Doctrine DBAL to access different database systems; PHP is version 5.3.8. The server should serve a maximum of 30 clients. Yet especially in the last few days I've read several articles claiming that PHP is ineffective for long-running applications.
Now I'm not sure whether I should continue using websockets with PHP or rebuild the entire server-side application. I've tried Python with Socket.IO, though I did not get the results I expected.
I guess I have the following options:
Keep everything as it is.
Make the application use Ajax in combination with Socket.IO, e.g. run a server-side script that triggers the clients' Ajax calls when data is submitted to the server.
The last option sounds quite interesting, though it would require some work. Would it be a problem for the server to execute all the clients' requests at the same time?
What would you recommend? Is the problem with PHP's memory management (I'm calling gc_collect_cycles() each time a client sends data to the server) still valid? Are there other reasons besides the obvious ones (no threading, ...) for not using PHP as a server?
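For reference, a minimal sketch of the periodic-collection pattern described above for a long-running PHP process (the loop is a stand-in for whatever phpws hands you):

    <?php
    // Sketch: keep a long-running PHP socket server's memory in check.
    gc_enable();                             // make sure the cycle collector is on

    $messages = 0;
    while (true) {                           // placeholder for the websocket event loop
        // ... handle one client message here ...

        if (++$messages % 100 === 0) {       // collect every 100 messages, not every one
            gc_collect_cycles();             // frees unreachable cyclic references
            error_log('mem: ' . memory_get_usage(true));
        }
    }

Note that the collector only helps with cyclic references; leaked resources such as unclosed sockets still have to be released explicitly, and collecting on every single message mostly just adds overhead.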
You can try running Socket.IO on a Node.js server on another port on your machine (that is, if you are not on a shared hosting plan like GoDaddy's).
I am using it and the performance is really satisfying.
I have an Apache server on port 80 serving my PHP files, and my server-client communication is handled by a Node.js server running Socket.IO on port 8080 (dev) or 843 (prod).
Node.js is really light and has great performance, but you need to run it as a server. Nodejitsu.com is a hosting solution that supports the websocket protocol and is in beta, so it is still free for now. Just note that there you need to listen on port 80 with Socket.IO; this is a limitation of their network.
If you want all your pages to be accessed on port 80, then you will need a reverse proxy such as Varnish.
I hope that helps! Have a nice day.
Are there other reasons besides the obvious ones (no threading, ...) for not using PHP as a server?
Yep, a lot of the socket functions are incompatible with each other, and it's hell to debug.
I tried something similar myself and quit, frustrated, since every function I thought would make sense didn't do what I expected.
I'm using Rackspace Cloud Servers. I have installed NGINX with PHP and Memcache.
When the web server is approaching capacity, I plan to clone it and then put a load balancer in front, i.e. two servers with one load balancer managing the traffic between them. All of this is done automatically using the Rackspace API.
However, I'm lost as to what happens to Memcache. I would then have two Memcache servers, so the cache would no longer work as expected, since each web server would be caching independently.
Is it possible to just install Memcache on a dedicated server and have my main web server access it, so that when I move to the load-balanced setup with two web servers, they would both reference the same Memcache server?
Yes, you can have a single Memcached server that all the Memcache clients connect to and use (rather than local installs of Memcached). You can also keep two Memcached servers if the data inconsistency is acceptable and the cost of computing any cached data twice is acceptable to you; it'll save you time in the short term, but ultimately it will probably complicate things.
In relation to Rackspace, make sure you're using the private direct IP address Rackspace gives you to network across machines instead of the external WAN IP. This will be faster, more secure, and won't count against your bandwidth allocation.
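A minimal sketch of the single-shared-cache setup with the PHP Memcached extension; the IPs are placeholders for Rackspace's private (ServiceNet) addresses, and the same snippet runs unchanged on both load-balanced web servers:

    <?php
    // Sketch: both web servers behind the load balancer share one cache box.
    $mc = new Memcached();
    $mc->addServer('10.180.0.5', 11211);     // placeholder: cache server's private IP

    // If you ever cache on two boxes instead, add both and enable consistent
    // hashing so a given key maps to the same server from every web node:
    // $mc->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
    // $mc->addServers([['10.180.0.5', 11211], ['10.180.0.6', 11211]]);

    $mc->set('greeting', 'hello from either web server', 60);
    var_dump($mc->get('greeting'));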
What in the heck do you put as the host for Memcache::addServer($host, $port)?
I am hosting on MediaTemple and this is really, really, really, really starting to get to me.
Do I have to set up a new memcached server, or what? I have no idea what to do, and every tutorial just keeps saying "localhost". Well, I don't want to run it on my localhost... I guess I just don't understand what's going on.
Any help would be greatly appreciated.
"localhost" is whatever machine the code is running on. If the code is running on a server at MediaTemple, then "localhost" will be that server.
If they provide a memcache server, they should list its address somewhere in their knowledge base. Try "localhost" first, on the off chance that it's running on the same machine your site is hosted on.
UPDATE
Assuming you're running on their Grid service, try following these instructions:
http://kb.mediatemple.net/questions/854/Using+memcached+with+Django+or+Ruby+on+Rails+in+a+(gc)+GridContainer
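Either way, a minimal sketch of the call in question using the old Memcache extension the question names; the host is a placeholder for 'localhost' or whatever address MediaTemple's knowledge base gives you:

    <?php
    // Sketch: connect to whichever memcached daemon your host provides.
    $memcache = new Memcache();
    $memcache->addServer('localhost', 11211);          // placeholder host; 11211 is the default port

    $memcache->set('some_key', 'some value', 0, 300);  // flags = 0, expires in 300 seconds
    var_dump($memcache->get('some_key'));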
Memcached is a service that provides access to a centralized RAM store, which enables caching for your application. Its default port is 11211. If your application requires it, then it sounds like you will need access to one.
Most of the time, though, it's only used for caching; not having it means your application hits the database on every request, which can degrade performance significantly, depending on your scenario.