I am working on a desktop application with Electron and I am considering online storage to store data. I would like to get some ideas on the approach, as I couldn't find reliable answers from a Google search.
Approach 1: Electron app (front end) + PHP (e.g. purchase a hosting package from GoDaddy with a domain such as www.mysite.com).
With this approach I am planning to create API endpoints in PHP to perform basic CRUD.
Is this a good way to do it?
Will this affect the speed/load time?
Are there better ways for this situation?
Thank you very much in advance for the help.
Well, this is not an easy topic. Your solution could work: your Electron app asks your server for data and stores data to it. That said, the best solution depends on your application.
The most important questions you have to ask yourself are:
How often do you need to reach your server?
Could your users work without data from the server?
How long does it take to read and store data on your server? (It makes a difference whether you store a few KB or many GB of data.)
Must the data stored online be shared with other users, or does every user only access their own data?
If all the information is stored on your server, your app's startup will have to wait for the requests to complete, but you can show a loader or something similar to mitigate the wait.
In my opinion you have many choices, from the simplest (and slowest) to the most complex (which mitigates network lag):
Simple AJAX requests to your server: as you described, you will make some HTTP requests to your server and read and write the data to be displayed in your application. Your users will have to wait for the requests to complete; show them some loading animations to mitigate the wait. (A minimal PHP endpoint sketch for this option follows this list.)
There are some solutions that save the data locally in your Electron installation and then sync it online; have a look at PouchDB for an example.
Recently I've been looking at GraphQL. GraphQL is a query language for your API. It's not that easy, but it has some interesting features: clients keep an internal cache and are designed for optimistic updates, where you update your application immediately assuming your POST will succeed and then, if something goes wrong, you update it accordingly.
I'd also suggest you try some backend-as-a-service solutions. You don't have a server yet and will have to open a new contract anyway, so why not check out a dedicated service like Firebase? The Google Firebase Realtime Database lets you work in JavaScript (just one language involved in the project) and syncs your data online automatically and between devices, without the need to write any web service. I've only played with it for some prototypes, but it looks very interesting and it's cheap. It also has a free plan that is enough for many users.
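To give an idea of option 1, a minimal PHP endpoint for the basic CRUD you mentioned could look roughly like this; the table, columns and credentials below are placeholders, and you would add authentication and validation on top:

<?php
// items.php - a minimal JSON CRUD endpoint (illustrative names only)
header('Content-Type: application/json');

// Placeholder credentials; the "items" table is hypothetical
$pdo = new PDO('mysql:host=localhost;dbname=myapp;charset=utf8mb4', 'dbuser', 'dbpass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

if ($_SERVER['REQUEST_METHOD'] === 'GET') {
    // Read: return every item as JSON
    $rows = $pdo->query('SELECT id, name, price FROM items')->fetchAll(PDO::FETCH_ASSOC);
    echo json_encode($rows);
} elseif ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // Create: insert one item sent as a JSON body
    $input = json_decode(file_get_contents('php://input'), true);
    $stmt  = $pdo->prepare('INSERT INTO items (name, price) VALUES (:name, :price)');
    $stmt->execute([':name' => $input['name'], ':price' => $input['price']]);
    echo json_encode(['id' => $pdo->lastInsertId()]);
}

Your Electron app would then call this URL with fetch() or XMLHttpRequest and show a loader while waiting for the response.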
Keep in mind that if each user only accesses their own data, the fastest and easiest solution is to use a database inside your Electron application: an SQLite database, an IndexedDB database, or even serializing to JSON and storing everything in localStorage (if your data fits its size limits).
Hope this helps
I'm trying to figure out whether my current approach will lead me into performance issues in the future, before developing further with this design, and whether there are better ways of doing this. I think this makes the most sense if I provide some context on the design first:
Current Design
I currently have my environment designed with two separate servers, let's call them frontend and backend.
Frontend
This server is open to the world. Customers access this site to view our product, make purchases, and will soon be able to view their account related information.
Backend
This server is where all information is held in a database.
Communication
The only way that the frontend currently needs access to the backend, is when the user authenticates with their license and downloads our product. To do this, the frontend calls a PHP script, which sends a JSON request to the backend server via curl_exec. The response from the backend tells the frontend how to handle that download request (e.g. license invalid).
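In rough terms (the URL and field names here are simplified placeholders, not the real API), the call looks like this:

<?php
// Frontend script: forward a license check to the backend API (illustrative only)
// $licenseKey comes from the user's download request
$payload = json_encode(['license' => $licenseKey, 'action' => 'download']);

$ch = curl_init('https://backend.example.internal/api/license.php'); // placeholder URL
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT        => 10,
]);
$response = curl_exec($ch);
curl_close($ch);

// The decoded response (e.g. "license_invalid") tells the frontend how to proceed
$result = json_decode($response, true);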
Reasoning
The reason for this design is to avoid exposing the backend details to the user. Client-side, all the user sees is a request being sent to the frontend server. If the frontend server is ever compromised, anyone reading through how the frontend is built has no access to the backend DB, unless they know exactly what parameters to send to the backend API. Even then, it only gives access to a very low subset of information, depending on what the API exposes.
The Problem
The only time this cross-server communication happens right now is when a user tries to download our products using their license details. The traffic through this API between the two servers is relatively low.
My concern is that I want to build a user "control panel". From here they can log in with their license/account, view their active licenses, access details on previous orders they made, etc. This already means all these pieces of information are only available through the backend, so I'll need to expose them through the API - which is fine. The issue here is that every request the user makes through the control panel (even just refreshing the page) will build up a lot of traffic between both servers.
Questions
From the experience of developers here, is this communication design scalable? I'm worried I'm building around a bottleneck, which will just result in a slow user interface, since the frontend would end up waiting on a lot of requests it tunnelled through to the backend.
What are your thoughts? Has anyone faced a similar challenge? How did you overcome that challenge? What is the best practice to achieving this kind of requirement? I hope this question doesn't come across as too vague.
I would love to hear other answers but I will share my thoughts.
First, let's call your servers:
Application Server
Database Server
It seems that you are worried about creating a bottleneck due to an increase in the number of database queries. Since you mentioned that these queries would execute on every page refresh, it's clear that you are not using a cache of any sort. If you cached the database queries and invalidated the cache only when the data has changed (i.e. the user's actions cause the data to change, so the cache should be cleared), you would increase performance drastically.
If anyone gains access to your application server, they will most likely be able to access the database server with the user that you've allowed the application server to use. You should give this user as few permissions as necessary to use the API. Still, they may be able to access a lot depending on what your API allows and what you have cached on the application server.
Take a look at Laravel's cache API, which allows you to use your cache in place of a database query. If the cache entry does not exist, the database query is executed and cached. Then you would invalidate the necessary cache entries based on user actions. You can also asynchronously recache database requests so you don't delay a response to the client if the data is not needed for that response.
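A rough sketch of that pattern with Laravel's cache facade (the key name, TTL and Order model are made up for illustration):

<?php
use Illuminate\Support\Facades\Cache;

// Serve the user's orders from cache; run and cache the query only on a miss.
// 3600 is the TTL (seconds in recent Laravel versions).
$orders = Cache::remember("user.{$userId}.orders", 3600, function () use ($userId) {
    return Order::where('user_id', $userId)->get();
});

// When the user's actions change their data, drop the entry so the
// next request rebuilds the cache from the database.
Cache::forget("user.{$userId}.orders");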
I hope this helps.
UPDATE:
After discussing with you further, I better understand your dilemma. You are trying to increase the security of your application by requiring all API calls to go through an extra step of being initiated after a POST request. I agree that this is going to be a bottleneck as the application scales since you won't be able to take advantage of caching and every page request will result in database queries.
What I have done in a similar case is to separate the application server and the database server, except the database server is literally only a database server without any logic/scripts. PHP, for example, is not even installed on the database server. The database and application servers are only connected via private networking, so the database servers are only accessible via the application server. A safe user has been set up to use the remote database.
Since my database queries take a lot of time, I cache as much as possible.
Also consider using https://cloudflare.com. It is a reverse proxy in front of the application server, which adds another layer between the client (browser) and your application server. This way, only Cloudflare has access to your application server, and only your application server has access to your database server via the safe database user you create.
I'm no expert on databases, but using prepared statements would help you a lot. They are more secure, and the best part is:
"Bound parameters minimize bandwidth to the server as you need send only the parameters each time, and not the whole query"
Hope it helps!
I'm developing a web app using Laravel (a PHP framework). The app is going to be used by about 30 of my co-workers on their Windows laptops.
My co-workers interview people on a regular basis. They will use the web app to add a new profile to a database once they interview somebody for the first time and they will append notes to these profiles on subsequent visits. Profiles and notes are stored using MySQL, but since I'm using Laravel, I could easily switch to another database.
Sometimes, my co-workers have to interview people when they're offline. They might visit a group of interviewees, add a few profiles and add some notes to existing ones during a session without any internet access.
How should I approach this?
1. With a local web server on every laptop. I've seen applications ship with some kind of installer including a LAMP stack, but I can't find any documentation on this.
2. I could install the app and something like XAMPP on every laptop myself. That would be possible, but in the future more people might use the app and not all of them might be located nearby.
3. I could use Service Workers, maybe in connection with a library such as UpUp. This seems to be the most elegant approach.
I'd like to give option (3) a try, but my app is database driven and I'm not sure whether I could realize this approach:
Would it be possible to write all the (relevant) data from the DB to - let's say - a JSON file which could be accessed instead of the DB when in offline mode? We don't have to handle much data (less than 100 small data records should be available during an interview session).
When my co-workers add profiles or notes in offline mode, is there any "web service" way to insert the data they have entered into the DB once a connection is available?
Thanks
Pida
I would think of it as building the app in "two parts".
First, the front end uses AJAX calls to the back end (which is nothing but a REST API). If there isn't any network connection, store the data in the browser using local storage.
When the user later has network connection, you could send the data that exists in the local storage to the back end and clear the local storage.
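On the back-end side, the sync call can be a small endpoint that accepts the queued records as JSON and inserts them; a minimal sketch (table, column and credential names are placeholders):

<?php
// sync.php - accept records that were queued offline and insert them
header('Content-Type: application/json');

$records = json_decode(file_get_contents('php://input'), true) ?: [];

$pdo  = new PDO('mysql:host=localhost;dbname=interviews;charset=utf8mb4', 'dbuser', 'dbpass');
$stmt = $pdo->prepare('INSERT INTO notes (profile_id, body, created_at) VALUES (:profile_id, :body, :created_at)');

foreach ($records as $record) {
    $stmt->execute([
        ':profile_id' => $record['profile_id'],
        ':body'       => $record['body'],
        ':created_at' => $record['created_at'],
    ]);
}

// The client clears its local storage once it receives this acknowledgement
echo json_encode(['synced' => count($records)]);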
If you add web servers on the laptops, the databases and info will only be stored on their local laptops and would not be synced.
You can build what you describe using service workers to cache your site's static content to make it available offline, and a specific fetch handler in the service worker to detect a failed PUT or POST and queue the data in IndexedDB. You'd then periodically check IndexedDB for any queued data when your web app is loaded, and attempt to resend it.
I've described this approach in more detail at https://developers.google.com/web/showcase/case-study/service-workers-iowa#updates-to-users-schedules
That article assumes the use of the sw-precache library for caching your site's static assets, and the sw-toolbox library to provide runtime fetch handlers that check for failed business-logic requests. It also uses a promise-based IndexedDB wrapper called simpleDB although I'd probably go with the more recent idb library nowadays.
My client has an offline product database for a high street shop that they update fairly frequently for their own purposes. They are now creating an online store which they want to use product information from this database.
Migrating the database to a hosted server and abandoning the offline database is not an option due to their current legacy software set up.
So my question is: how can I get the information from their offline database to an online database? Their local server is always connected to the internet so is it possible to create a script on the website that somehow grabs the data from their server and imports it into the online server? If this ran every 24 hours it would be perfect. But is it even possible? And if so how would I do it?
The only other option I can think of is to manually upload the database after every update, but this isn't really a viable idea.
I did something like this with QuickBooks using an ODBC connection. Using that, I synced data to MySQL. This synchronization, however, was one-way only. Unless you have keys in the data that indicate when something was changed (an updated date), you will end up syncing a lot of extra data.
Using SQLyog, I set up a scheduled job that connected to the ODBC data source and pushed the changes since the last sync to the MySQL database I was using to generate reports. If you can get the data replicated into MySQL, it should be easy at that point to make use of it in your online store.
The downside is that it won't be real-time. Inventory could become a problem.
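If the offline database does have an "updated" column, the scheduled job only needs to pull rows changed since the previous run; a rough sketch of such a 24-hour script (DSNs, credentials, table and column names are invented):

<?php
// sync_products.php - run from a scheduled task every 24 hours
$local  = new PDO('odbc:LegacyProductDB', 'user', 'pass');                                    // offline source
$online = new PDO('mysql:host=shop.example.com;dbname=shop;charset=utf8mb4', 'user', 'pass'); // online store

// Remember the last successful sync in a one-row state table
$since = $online->query('SELECT last_sync FROM sync_state')->fetchColumn() ?: '1970-01-01 00:00:00';

$changed = $local->prepare('SELECT sku, name, price, updated_at FROM products WHERE updated_at > ?');
$changed->execute([$since]);

$upsert = $online->prepare(
    'INSERT INTO products (sku, name, price) VALUES (:sku, :name, :price)
     ON DUPLICATE KEY UPDATE name = VALUES(name), price = VALUES(price)'
);

foreach ($changed->fetchAll(PDO::FETCH_ASSOC) as $row) {
    $upsert->execute([':sku' => $row['sku'], ':name' => $row['name'], ':price' => $row['price']]);
}

$online->exec('UPDATE sync_state SET last_sync = NOW()');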
In an ideal world I would look at creating a RESTful API that runs on the same server, or at least on the same network, as your offline database. This RESTful API would run as a web server over HTTP and return JSON or even XML structures of data from the offline database. Clients running on the internet would be able to connect and fetch any data they need, at any time. A RESTful API like this has a number of advantages.
Firstly, it's secure. You don't have to open up an attack vector by making connections to your offline database public; the only thing you have to do is enable public access to your RESTful API. In your API's logic you might not even include functionality to write to the database, so even if your API's security is compromised, at very worst all attackers can do is read your data, not corrupt it.
Having a RESTful API in this situation also represents a good separation of concerns. Your client code should not know anything about the database, nor should it know about any internal systems that the offline database uses. What happens when your clients want to update their offline system or even change it? In this situation all you would have to do is update the RESTful API. The client that connects to the data no longer cares about anything but the API, so changing databases would be easy.
Another reason to consider an API is concurrency. I hinted at this before, but having an API would be great if you ever need more than one client accessing the offline database's data. In a web server setup where the API sits waiting for requests, there is no reason why you could not have more than one client connecting to the API at the same time. HTTP is really good at this!
You also talked about having to place old data in a new database. Something like this could be done easily with a RESTful API, as you would just have to map the endpoints of your API to tables in the new database and run that when you need to. You could even forgo the new database and use the API as your backend. This solution would require some caching, but it would cut down on duplicating the database if you don't feel it's needed.
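As a sketch of that endpoint-to-table mapping idea, a deliberately read-only API could start out like this (framework-free; table names and credentials are placeholders, and you would add authentication):

<?php
// api.php - read-only, whitelist-based endpoint-to-table mapping
header('Content-Type: application/json');

// Only these endpoints exist, each mapped to a table in the offline database
$tables = ['products' => 'products', 'categories' => 'categories'];

$endpoint = basename(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH));
if (!isset($tables[$endpoint])) {
    http_response_code(404);
    echo json_encode(['error' => 'unknown endpoint']);
    exit;
}

// A read-only database user means a compromised API can leak data but not corrupt it
$pdo = new PDO('mysql:host=localhost;dbname=legacy;charset=utf8mb4', 'readonly_user', 'pass');
echo json_encode($pdo->query('SELECT * FROM ' . $tables[$endpoint])->fetchAll(PDO::FETCH_ASSOC));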
The drawback to all of this is that writing an API rather than a script is more complex. So in this situation I believe in horses for courses. If this database is the backbone of a long-term project that will keep expanding in the future, an API is the way to go. If it's a small part of your project, then maybe you can swing it with a script that runs every 24 hours; however, I have done this before and the second I have to change/edit the solution, things start getting a little "hairy". Hope this helps and good luck with it.
I have a web application that stores data in a MySQL database online. It also retrieves data using PHP code, performs calculations on the server and sends the result back to the user.
The data is quite simple: names, descriptions, prices, VAT and hourly charges that are read from the database and manipulated on the server side.
Clients often work in environments where the internet connection is poor or not available. In this case I would like the client to be able to work offline: enter new names, descriptions and prices, and use the last known VAT rate to perform calculations, then synchronise all the data as soon as a connection is available.
Now the problem is that I do not know the best way or technologies to achieve this. Don't worry, I am not asking you to write code for me. Can you just explain to me the correct way to build such a system?
Is there a simple way to use my online MySQL and PHP code locally?
Should I save the data I need in a local file, rebuild the calculations in JavaScript, perform them locally and then synchronise the data when the database is available?
Should I use two MySQL databases, one local and one online, and synchronise between the two when a connection is available? If so, which technology (language) should I use to perform this operation?
If possible, I would like an answer from PHP coders who have worked on a similar project in the past and can give me detailed information on the framework structure and technology to use. Please remember that I am new to this way of writing applications, and I would appreciate it if you could spare a few minutes and explain everything to me as if I were six years old or stupid (which I am!).
I really appreciate any help and suggestions.
Ciao,
Donato
There are essentially 3 ways to go:
Version 1: "Old school": PHP-Gtk+ and bcompiler
First, if you have not done so already, you need to separate your business logic from your presentation layer (HTML, templating engines, ...) and your database layer
Then adapt your database layer so that it can live with an alternative DB (local SQLite comes to mind) and perform synchronisation when online again (see the sketch after this list)
Finally, use PHP-Gtk+ to create a new UI and pack all of this with bcompiler
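One way to let the database layer live with either backend is to hide the connection behind a single factory and switch the PDO DSN; a simplified sketch (the host, credentials and SQLite path are placeholders):

<?php
// Return a PDO handle for whichever backend is currently available
function getConnection(bool $online): PDO
{
    if ($online) {
        // Remote MySQL when a network connection is available
        return new PDO('mysql:host=db.example.com;dbname=app;charset=utf8mb4', 'user', 'pass');
    }

    // Local SQLite file while offline; changes are queued and synced later
    return new PDO('sqlite:' . __DIR__ . '/offline.sqlite');
}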
Version 2: "Standard": Take your server with you
Look at Server2Go, WampOnCD and friends to create a "double clickable webserver" (Start at Z-WAMP)
You still need to adapt your DB layer as in Version 1
Version 3: "Web 2.x": Move application from server to browser
Move your application logic from the server side (PHP) to the client side (JS)
Make your server part (PHP) only a data access or sync layer
Use the HTML5 offline features to replace your data access with local data if you are offline and to resync if online
Which one is best?
This depends on what you have and what you want. If most of your business logic is in PHP, then moving it into the browser might be prohibitively expensive - be aware that this also generates a whole new class of security nightmares. I personally do not recommend porting this way, but I do recommend it for new apps, if the backing DB is not too big.
If you choose to keep your PHP business logic, then the decision between 1 and 2 is often a question of how much UI your app has - if it's only a few CRUD forms, 1 might be a good idea - it is definitely the most portable (in the sense of taking it with you). If not, go with 2.
I have worked with a similar system for ships. Internet is expensive in the middle of the ocean, so they have local web servers installed with database synchronization via e-mail.
We also created simple .exe packages so people with no experience can install or update the system...
I want to create a live, checkers-like app which will work like this: there will be multiple icons/avatars displayed on a checkerboard-like surface. I want to have a command prompt beneath this board, or some other sort of interface, that will allow users to control a certain avatar and get it to perform actions. Multiple users will be using it at one time, and each of them will be able to view the other users' changes/actions on the checkerboard.
What I'm wondering is: what's the best way to do this? I've got my HTML, CSS, and JS approach down, but not my data storage method. I know that, using PHP, I have the choice of file-based storage, MySQL, or some other method. I need to know which is better, because I don't want to have server lag, poor response time, or some other issue, especially in this case since actions will be performed every second or two by these multiple users.
I've done similar stuff before, but I want to hear how more experienced programmers would handle it (advice, etc.).
Sounds like a great project for node.js!
To clarify, node.js is a server-side implementation of JavaScript. What you'll want is a Comet-based application (a web-based client application that receives server-side pushes instead of the client constantly polling the server), which is exactly what node.js is good at.
Traditional AJAX calls for your clients to poll the server for data. This creates enormous overhead for both the client and the server. Allowing the server to push requests directly to the client without the client repeatedly asking solves the overhead issue and creates a more responsive interface. This is accomplished by holding asynchronous client connections on the server and only returning when the server has something to respond with. Once the server responds with data, another connection is immediately created and held by the server again until data is ready to be sent.
You may be able to accomplish the same thing with PHP, but I'm not that familiar with PHP and Comet type applications.
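If you do stay with PHP, the long-polling half of Comet can be sketched roughly as below; getMovesSince() is a hypothetical helper that reads new moves from your storage. Bear in mind that every held request ties up a PHP worker, which is exactly the overhead node.js avoids:

<?php
// poll.php - hold the request open until there is something to send back
set_time_limit(0);
header('Content-Type: application/json');

$lastSeen = isset($_GET['last_move_id']) ? (int) $_GET['last_move_id'] : 0;
$deadline = time() + 25; // give up before typical proxy/browser timeouts

do {
    $moves = getMovesSince($lastSeen); // hypothetical: fetch moves newer than $lastSeen
    if ($moves) {
        echo json_encode($moves);      // respond as soon as data is available
        exit;
    }
    usleep(250000); // sleep 250 ms before checking again
} while (time() < $deadline);

echo json_encode([]); // nothing new; the client simply reconnects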
Number of users and hosting costs will play into your file vs DB options. If you're planning on more than a couple of users, I'd stick to the database. There are some NoSQL options available out there, but in my experience MySQL is much faster and more reliable than those options.
Good luck with your project!
http://en.wikipedia.org/wiki/Comet_%28programming%29
http://www.nodejs.org/
http://zenmachine.wordpress.com/2010/01/31/node-js-and-comet/
http://socket.io/ - abstracts away the communication layer with your clients based on their capability (LongPolling, WebSockets, etc.)
MySQL and XCache !!!!
Make sure you use prepared statements so MySQL does not need to parse the SQL again each time. Also, MEMORY tables could be used for in-memory storage.
Of course make use of indexes appropriately.
If the 'gamestate' is not that important, you can even store everything in XCache.
Remember that XCache does not store data persistently (it is cleared after an Apache restart).
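For example, the game state could be kept in XCache through its simple key/value functions (the key name and TTL are arbitrary, and loadBoardFromDatabase() is a hypothetical fallback):

<?php
// Store the current game state in XCache for an hour (volatile: gone after an Apache restart)
xcache_set('game:42:state', json_encode($board), 3600);

// On a later request, read it back and fall back to MySQL if it was evicted
if (xcache_isset('game:42:state')) {
    $board = json_decode(xcache_get('game:42:state'), true);
} else {
    $board = loadBoardFromDatabase(42); // hypothetical helper that rebuilds the state from MySQL
}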