Here's the situation:
I have web hosting that provides a MySQL database account, but connections are only allowed from localhost.
I'm considering exposing this MySQL database through a web interface and extending the mysqli class so that I can read from and write to the database normally from another host.
Before doing this, I want to know whether my solution is a good idea, and whether an open source solution to my situation already exists.
Use Web services. Web services are designed to provide an API so that one server can communicate with another to access its resources. The advantage of creating a Web service wrapper around your MySQL database is that you avoid exposing the SQL layer to the broader Internet.
In general, with Web services your application can only use the operations that you've specifically chosen to expose. Additionally, many Web service frameworks offer authentication and validation packages that can help prevent malicious entities from accessing or manipulating your data.
Finally, should you migrate to a different data source, you can maintain the same uniform interface between the application and the data source, which eliminates the need to modify the PHP application.
By contrast, directly exposing your database to the Internet leaves you open to data theft and data loss.
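To make this concrete, here is a minimal sketch of such a wrapper endpoint in PHP. The API-key scheme, table name, and column names are illustrative assumptions, not a prescribed design:

```php
<?php
// Sketch of a web-service endpoint that exposes one chosen read
// operation instead of the whole SQL layer. Everything named here
// (key, table, columns) is a placeholder.
header('Content-Type: application/json');

// Very simple shared-secret check; a real service would use a proper
// authentication scheme (OAuth, signed requests, etc.).
if (!hash_equals('my-secret-key', $_SERVER['HTTP_X_API_KEY'] ?? '')) {
    http_response_code(401);
    exit(json_encode(['error' => 'unauthorized']));
}

$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'dbuser', 'dbpass');

// Only this one parameterized query is reachable from the outside.
$stmt = $pdo->prepare('SELECT id, name FROM widgets WHERE id = ?');
$stmt->execute([(int)($_GET['id'] ?? 0)]);

echo json_encode($stmt->fetch(PDO::FETCH_ASSOC) ?: ['error' => 'not found']);
```

The point of the sketch is the shape: the remote client never sends SQL, only a request for an operation you deliberately published.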
For more information on Web services, you could start with this Wiki Article on REST.
That's a lot of overhead and reimplementation work. Instead, consider opening the MySQL server up for remote connections, using SSL and certificate authorization: http://dev.mysql.com/doc/refman/5.1/en/secure-basics.html
This allows you to expose the real mysqld server. You will need a recent PHP version, as SSL support was only added to the PDO interface in later releases: http://www.php.net/manual/de/ref.pdo-mysql.php#103501 But I'd say that's still easier than crafting your own RPC interface and securing it.
And if you actually use mysqli, then SSL/certificate support is already built in: http://php.net/manual/en/mysqli.ssl-set.php
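For illustration, a connection over SSL with mysqli looks roughly like this. The certificate paths, hostname, and credentials are placeholders:

```php
<?php
// Hedged sketch: connect to a remote MySQL server over SSL using
// mysqli::ssl_set(). All paths and names below are placeholders.
$mysqli = mysqli_init();
$mysqli->ssl_set(
    '/etc/mysql/client-key.pem',   // client private key
    '/etc/mysql/client-cert.pem',  // client certificate
    '/etc/mysql/ca-cert.pem',      // CA certificate the server cert is signed with
    null,                          // CA directory (unused here)
    null                           // cipher list (server default)
);
$mysqli->real_connect('db.example.com', 'dbuser', 'dbpass', 'mydb', 3306, null, MYSQLI_CLIENT_SSL);

if ($mysqli->connect_errno) {
    die('Connect failed: ' . $mysqli->connect_error);
}
```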
Here is a good place to get started with creating an API.
First, you should evaluate the kind of data you want to share across your servers and see if you really need to expose it.
I want to create the following project:
Server application hosted on Azure - it connects to the database via Entity Framework and provides an API for anyone who wants to connect (with accounts stored in the SQL database)
WPF application - it consumes the server's methods, objects, etc.
Web app (PHP & JavaScript) - it also consumes the server's methods, objects, etc.
IMPORTANT: I only have an Azure student subscription and I want to hold onto it - buying anything else is out of the question unless there is a strong argument for it.
I figured that to do this I have to create a REST Web API, because I have no other way to connect to the server than via HttpWebRequest (and I want to have the same API for the WPF and web apps).
My question is: does a better solution exist?
I think I could create different APIs for the desktop client and the web app, but I have no idea how to do that. Would you be so kind as to show me another way?
Why don't I want this solution?
The reason is simple. For big databases and slow internet connections it would take ages to download all the data. As far as I know there is no lazy loading in REST, so my WPF application's thread responsible for downloading the data would freeze for a long period of time.
If my question is too broad, please leave a comment before you put up a flag.
Also, any tips regarding my project design are much appreciated.
Different APIs for desktop and web: this can be done easily enough. Assume you have a class library that contains your business logic (domain stuff). Create a Web API project that makes use of it, then create another, separate Web API project that also uses the core models. When you deploy, deploy each one to a different domain/subdomain (I'm unsure whether you'll need further Azure resources for this, but consider api.desktop.myapp.com and api.web.myapp.com). There's no real technical reason you can't do it that way, though for architectural reasons I'd avoid it: it comes close to, if not definitely is, duplication of code.
Same API for desktop and web: you said you thought you'd have to do this differently for desktop and web, specifically because of resource usage on the server. I disagree, and think you should implement standardized rate limiting and paging in your API. Typically this is done by allowing only X resources to be returned in a single call. If the initial request asks for more than the limit X, the API returns an offset/nextID, and the client submits a new request carrying that offset/nextID. The client then makes subsequent calls to get everything it needs, but your server gets a chance to handle the work in smaller chunks (e.g., checking rate limits, throttling, load balancing, etc.); a sketch of this paging scheme follows below. See the leaky bucket algorithm for an implementation I prefer myself: https://en.wikipedia.org/wiki/Leaky_bucket
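Here is a minimal sketch of the offset/nextID idea. It's shown in PHP for brevity (the concept is the same in an ASP.NET Web API controller), and the table and column names are illustrative:

```php
<?php
// Paging sketch: cap each response at $limit rows and hand back a
// nextOffset the client echoes on its next request. A null nextOffset
// means there are no more pages.
header('Content-Type: application/json');

$limit  = 100; // server-side cap per call
$offset = max(0, (int)($_GET['offset'] ?? 0));

$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'dbuser', 'dbpass');
$stmt = $pdo->prepare('SELECT id, payload FROM items ORDER BY id LIMIT ? OFFSET ?');
$stmt->bindValue(1, $limit, PDO::PARAM_INT);
$stmt->bindValue(2, $offset, PDO::PARAM_INT);
$stmt->execute();
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

echo json_encode([
    'items'      => $rows,
    'nextOffset' => count($rows) === $limit ? $offset + $limit : null,
]);
```

Because each call is small, the WPF client can fetch pages on a background thread and populate the UI incrementally instead of freezing while it downloads everything.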
When it comes to building my web applications, I know HTTP/2 is going to be recommended for all traffic coming to the site. I understand the security concerns and the reasons why it is recommended/forced now.
For the web languages I code in and understand, such as Ruby, PHP, and Perl: are there any special functions I will have to use to produce a secure connection to my server, or do we just need to redirect all traffic from http:// to https://?
Basically, my autoloading class in PHP loads all the classes and functions my web application needs to operate. Would I need to create an SSL.class.php to make the connection secure within my PHP?
The changes in HTTP/2 over HTTP/1.1 are mostly relevant if your application streams large amounts of data to many simultaneous users.
Are there any special functions I will have to use to produce a secure connection to my server, or do we just need to redirect all traffic from http:// to https://?
No. HTTP/2 does not require TLS at the protocol level (though in practice, major browsers only implement HTTP/2 over TLS). If you want TLS (which, personally, I encourage), you still need to send clients to https://.
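The redirect itself is best done in the webserver config, but as a sketch of the PHP-level fallback (no special SSL class needed):

```php
<?php
// Minimal sketch: bounce any plain-HTTP request to its HTTPS
// equivalent. Normally Apache/nginx handles this; this is only a
// fallback at the application level.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    $target = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
    header('Location: ' . $target, true, 301);
    exit;
}
```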
Basically, my autoloading class in PHP loads all the classes and functions my web application needs to operate. Would I need to create an SSL.class.php to make the connection secure within my PHP?
In most cases, the HTTP layer is a webserver problem, not a PHP-land application code problem.
If you are working on a framework that insists on parsing request headers and managing responses in a very HTTP-like fashion, then yes, you probably need to be aware of some of the changes in the new version of the protocol.
Differences Between HTTP/1.1 and HTTP/2 for Developers
Servers can push more data over an established connection in HTTP/2. This is really neat if you need to push real-time notifications (e.g. what Stack Overflow does); a small push-hint sketch follows after this list.
Better multiplexing and streaming; it's now possible to stream multiple resources over the same HTTP connection.
Source
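From PHP code, the most you can usually do about push is hint it to the webserver. Whether this actually results in an HTTP/2 push depends on server support (e.g. Apache's mod_http2 or nginx's http2_push_preload honor it); the asset path here is an illustrative assumption:

```php
<?php
// Hedged sketch: emit a preload Link header as a push hint.
// PHP itself cannot push; a supporting webserver may turn this
// header into an HTTP/2 server push.
header('Link: </assets/app.css>; rel=preload; as=style');
```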
Unless your application is keenly aware of networking protocols, it shouldn't matter much for your day-to-day CRUD interfaces.
I am developing PHP plugins for CMS systems that currently communicate with my LAMP (PHP server) setup. As I am about to rewrite my server and PHP plugins, I am looking for a way to sidestep the server configuration, maintenance, and so on.
The server receives JSON, saves information from the JSON to my MySQL database, creates new JSON calls to external APIs, handles the responses, and saves part of them to the database. It merges PDF files from the different APIs and creates a final JSON response for the CMS plugins.
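For reference, the core of that flow looks roughly like this (a hedged sketch; URLs, table names, and fields are illustrative, and the PDF-merging step is elided):

```php
<?php
// Sketch of the existing flow: accept JSON from a CMS plugin,
// persist part of it, call an external API, and return JSON.
$input = json_decode(file_get_contents('php://input'), true);

$pdo = new PDO('mysql:host=localhost;dbname=plugins', 'dbuser', 'dbpass');
$pdo->prepare('INSERT INTO requests (payload) VALUES (?)')
    ->execute([json_encode($input)]);

// Forward part of the data to an external API.
$ch = curl_init('https://api.example.com/process');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => json_encode(['doc' => $input['doc'] ?? null]),
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    CURLOPT_RETURNTRANSFER => true,
]);
$apiResponse = json_decode(curl_exec($ch), true);
curl_close($ch);

header('Content-Type: application/json');
echo json_encode(['status' => 'ok', 'api' => $apiResponse]);
```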
My question is in regards to a big update of my modules: is there a setup that allows me to discard my LAMP setup and use a cloud service? I have looked at Apigee and Parse, but I don't know whether they can make external API calls and handle the responses.
If this can be done, is it done using Node.js?
Thanks.
Certainly Apigee can make outbound calls, either through our policy-based proxies or with a Node-based proxy. Persisting data can be accomplished through our KVM (key/value map) policies.
You can try it out with the free offering and see if it makes sense.
So you want standard website hosting with a MySQL database?
Any web host can do this for you. They manage the server, handle updates, etc. You just run your code in your little folder. Set up your domain. Connect to the database that they set up.
How much traffic are you doing? Do you need a whole server? A wee one or a giant one? Failover? Backups?
You should also look into Application Hosting with one of the big providers if you are worried about scaling.
http://aws.amazon.com/application-hosting/
http://www.rackspace.com/saas
I recently started developing in the Haxe language with OpenFL (I come from an AS3 background).
But I have never worked on an app that communicates with a server - or done any server-side programming, for that matter!
I have to make a mobile app (for which I intend to use Haxe) where a new user creates an account on the server and can then interact with other users' accounts as desired.
So could someone point me in the right direction for approaching this situation? I'm guessing I will need to use PHP or Ruby, etc.
Or can I use Haxe to program the server? Are there any good libraries that also provide security while making it easy to handle user accounts? Are AWS or Google App Engine something I could use?
Check this simple but complete tutorial by filtreck:
http://mromecki.fr/blog/post/haxite-writing-entire-website-using-haxe
You will want to create normal web pages, hosted on the server, that retrieve the needed information.
After uploading these, use a kind of webview in the application to load the pages and retrieve that information.
You can write your server in Haxe if you want, and if you use a platform that supports it, you can use TCP and haxe.remoting to pass data between the client and the server.
haxe.remoting is intended to make it easier for a client to call Haxe functions on a server, so that may be what you want.
If you don't feel comfortable using TCP, you can do as Max wrote: just make an HTTP API (you can do this in Haxe too) and make normal HTTP requests from the client; a small sketch of such an endpoint follows.
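For the account-creation part, such an endpoint might look roughly like this. It's shown in PHP for illustration (the same shape can be written in Haxe), and the table, fields, and validation rules are assumptions:

```php
<?php
// Minimal sketch of an HTTP account-creation endpoint.
$user = $_POST['username'] ?? '';
$pass = $_POST['password'] ?? '';

if ($user === '' || strlen($pass) < 8) {
    http_response_code(422);
    exit(json_encode(['error' => 'invalid username or password']));
}

$pdo  = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass');
// Never store the raw password; store a salted hash.
$stmt = $pdo->prepare('INSERT INTO users (username, pass_hash) VALUES (?, ?)');
$stmt->execute([$user, password_hash($pass, PASSWORD_DEFAULT)]);

header('Content-Type: application/json');
echo json_encode(['created' => true]);
```

The mobile client then just POSTs the form fields over HTTPS, whatever language it's written in.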
I have developed an Ubuntu desktop application to monitor when computers are on; it currently writes directly to a MySQL database. For security purposes I assume that I don't want all of these clients talking directly to my database, and instead need to create some other web interface between the clients and the database. Should I write this interface in PHP? How does the client invoke this interface?
For security purposes I assume that I don't want all of these clients talking directly to my database
Probably not. You can lock down the database to some degree, but probably not enough.
instead need to create some other web interface between the clients and the database.
Web services are the usual way to provide a controlled interface to a database these days.
Should I write this interface in PHP?
You could. Language choice is a fairly personal thing, though. I'd probably go with Perl's Dancer framework myself. It is capable of handling a RESTful API (although that guide assumes you've already learned the basics of Dancer).
How does the client invoke this interface?
By making an HTTP request. It might be as simple as a POST request with "on" as the body (the server can then use the client's IP address to determine which machine the request came from). The specifics depend on the language you are writing the client in and the libraries available to it. A minimal sketch of the server side follows.
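Since the question mentions PHP, here is a hedged sketch of that server side. The endpoint name, table, and schema are illustrative assumptions:

```php
<?php
// heartbeat.php - record that the requesting machine is on.
$body = trim(file_get_contents('php://input'));

if ($_SERVER['REQUEST_METHOD'] === 'POST' && $body === 'on') {
    $pdo  = new PDO('mysql:host=localhost;dbname=monitoring', 'dbuser', 'dbpass');
    $stmt = $pdo->prepare('INSERT INTO heartbeats (ip, seen_at) VALUES (?, NOW())');
    $stmt->execute([$_SERVER['REMOTE_ADDR']]);
    http_response_code(204); // success, no body needed
} else {
    http_response_code(400); // anything else is rejected
}
```

A client could then invoke it with something like curl -X POST --data on https://example.com/heartbeat.php (URL illustrative); the Ubuntu app would make the equivalent request from whatever HTTP library it uses.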