I am developing an iOS application from which users can post various kinds of events to a server and view the events created by all users. My question does not have a programming nature: I would like to show how I would design my server's database, and hear your opinion on it and how I could improve it.
The application has a very simple interface: when the user wants to post an event, he simply writes a title, adds some comments, and attaches a photo if he wants. This information is then sent as XML to my server and stored in the database.
The problem is that, because some people are immature, they may try to post inappropriate words or even photos, so I would like to have some control over my users. What I am thinking is this: the first time the application runs on the phone and connects to the server, the server sends a user ID back to the phone. From then on, every time the user sends an XML file, his user ID is attached to it (programmatically). I would also keep another database with all the user IDs that have been created over time. So if I notice an inappropriate event, I could delete that user ID from my database, and the next time the user tries to send something, the server would see that the ID is not in the database and refuse the post. Of course, if someone decides to uninstall and reinstall the application and get a new user ID, he could post again, but that's OK with me.
Would there be an easier way to prevent this kind of immature behavior, or does this one sound OK?
Sure, what you describe is possible. But there is one big problem with it: that interface can easily be used by a robot (a script). So if someone really wants to misuse your service, he can flood you with whatever he likes in seconds, or he can try again and again until you have to give up removing his posts from the service.
I suggest you take a look at one of the existing frameworks instead. This way you do not have to reinvent the wheel (which has already been done 19562394792 times, and counting) and don't have to learn from your mistakes (which you certainly will make).
This is about the only workable solution for images, since you can't easily run any recognition on an image to decide whether it is appropriate. The comments users post are easier to scan for inappropriate content.
If you are willing to do this kind of moderation, then your solution sounds like the right one. As with any open system, you can't really stop people from creating new accounts, as you mentioned. You could log and ban IP addresses, but that is no longer a very good solution, since most IP addresses are really shared gateway addresses, and those addresses rotate frequently between users.
Create an ID. Watch for behavior. Ban the ID. And encourage community involvement in alerting you to bad posts with some kind of a Flag button.
There are many ways to design a database, and not just for use with an iOS app. However, when I'm building a mobile app (iOS, Android or any other) I want to make sure that the amount of data being sent and received is as small as possible; this is why I use JSON instead of XML: smaller footprint.
Because I use JSON, I like to use a document database like MongoDB (my favorite) or CouchDB, because 1) I don't need to worry about the structure of my data, and 2) the database stores the objects in a JSON-like format.
I then use Node.js for my application server, so I have a JSON database -> JSON objects in my server application code -> JSON output: seamless, with no mappers or serialisation required. FTW.
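The smaller-footprint point is easy to illustrate by serializing the same event both ways (the field names here are invented for the example):

```javascript
// The same event payload serialized as JSON and as equivalent XML.
const event = { title: 'Street concert', comments: 'Free entry', hasPhoto: true };

const asJson = JSON.stringify(event);

const asXml =
  '<event>' +
  '<title>Street concert</title>' +
  '<comments>Free entry</comments>' +
  '<hasPhoto>true</hasPhoto>' +
  '</event>';

// The JSON form is noticeably shorter: no closing tags repeating
// every field name.
console.log(asJson.length, asXml.length);
```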
I am currently creating an app where two users will have the ability to chat with one another. Specifically, it will be an iOS app using Swift as the main language. Most chat app tutorials on the web recommend using Firebase, but I personally want to use MySQL, since the rest of my database activity for this app is done using MySQL. I also do not want to use any existing libraries and want to do this all on my own.
My questions are only about the efficiency of using MySQL. When accessing the database, I create a URLSession in Swift, which uses a predetermined link pointing to my PHP scripts on the backend to handle the database access. The only problem is that the chat functionality of the app will have to refresh messages (to show messages the other user has sent you within a second or so), and I am confused about how to go about this. My current idea is to have a Timer that calls the URLSession data task every second or so to retrieve new messages from the database and then display them on the user's screen. Would this be efficient, or is there a better way to do this? I feel as if this would bog down MySQL and overall slow the database down. Is there a better way to go about this?
Thanks in advance.
If you really want to use MySQL as a way of delivering messages, then you can look into #TekShock's comment about using Apple's push notifications. You could also use long polling, however it is not favorable at all.
I personally would not use MySQL as a way of delivering messages, simply because there are much better options. You can pick from messaging protocols like XMPP and MQTT to deliver your messages. I have personally used MQTT in the past and thought it was really simple to get the hang of, and it will fit your needs perfectly. It has a couple of really good Swift clients, like SwiftMQTT. You will have each device subscribe and publish to a room, so it can receive and send messages. In your case, User A and User B would both subscribe to the same room (say, ROOM 1), and they would both receive all the messages published to that specific room.
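The room model can be illustrated with a tiny in-process publish/subscribe sketch (this is not a real MQTT client; a real app would connect to a broker through a library like SwiftMQTT, but the flow of messages through a shared topic is the same):

```javascript
// In-process sketch of the publish/subscribe room model described above.
class Broker {
  constructor() { this.rooms = new Map(); }   // room name -> Set of callbacks

  subscribe(room, onMessage) {
    if (!this.rooms.has(room)) this.rooms.set(room, new Set());
    this.rooms.get(room).add(onMessage);
  }

  publish(room, message) {
    for (const cb of this.rooms.get(room) || []) cb(message);
  }
}

const broker = new Broker();
const inboxA = [], inboxB = [];

// User A and User B both subscribe to the same room...
broker.subscribe('room-1', (m) => inboxA.push(m));
broker.subscribe('room-1', (m) => inboxB.push(m));

// ...so a message published into the room reaches both of them.
broker.publish('room-1', { from: 'A', text: 'hello' });
```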
You can then, if you want, store delivered messages in a MySQL db, so that when the user opens the app back up you can load all their previous messages. You could also use SQLite or Realm to store these messages locally instead of storing them online.
EDIT:
Scaling is also pretty simple with MQTT, if that is something you will consider. You could place a queuing system between your application and the MQTT broker; something like Apache Kafka would be your best bet.
I am working on an Android project designed for doctors. Doctors are required to authenticate when they open the app for the first time.
This authentication process is done over an HTTPS connection, using PHP code on the server side that returns JSON to the app, letting it know whether the connection has been successful and, if so, also returning that doctor's list of patients. Let me show a piece of JSON code that would be returned in case of a successful log-in:
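(The original snippet is missing here; based on the attributes mentioned below, a successful response might look roughly like this, where every field name except "listOfPatients", "Age", "Phone" and "Smoker" is a placeholder:)

```json
{
  "loginSuccessful": true,
  "listOfPatients": [
    { "Name": "Patient A", "Age": 52, "Phone": "555-0101", "Smoker": true },
    { "Name": "Patient B", "Age": 34, "Phone": "555-0102", "Smoker": false },
    { "Name": "Patient C", "Age": 67, "Phone": "555-0103", "Smoker": true }
  ]
}
```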
Obviously, if the log-in were unsuccessful, the "listOfPatients" attribute would carry no data. After the server generates this JSON code, the app simply reads through it using a JSON parser.
Now imagine the doctor doesn't have just 3 patients but 100, and each patient doesn't have just 3 attributes ("Age", "Phone", "Smoker") but dozens of them, appropriately nested where required. We would then have a somewhat large (but maybe not too complex) JSON document to read through.
In this project I am designing the client code (i.e. the Android app), whereas the server code is written by another guy. He is asking me how I'd like the server code to be written in order to facilitate the client-server interaction and achieve the best, smoothest user experience possible.
I answered (this is the very short version of my answer; don't worry about server-side code security, since that is not the goal of this question):
Create a login.php that accepts POST queries. My app would send "user" and "password", and the server would compare them with the database.
The server would then generate the appropriate JSON code, depending on the success of the doctor's log-in request.
The Android app would simply parse this JSON and display it to the user in the form of list views and so on (the way I display this data to the doctor does not matter in this question).
I was wondering two things:
Knowing that the JSON will contain hundreds of attributes, how efficient is this approach? Is there a better way to achieve this functionality? How would you do it?
The vast majority of these attributes' values will change on a daily basis (for example, "bodyTemperature" or "bloodPressure"). Furthermore, there will be "importantNotifications", where patients notify their doctors of an emergency situation. I don't think it would be efficient to go through the entire process ("server creates JSON ==> client reads JSON ==> client displays JSON") over and over again, minute after minute, hour after hour, day after day. There must be a better way to do it. Maybe local storage? I would then have to decide which attributes to read only once a year ("age"), once a month ("phone"), once a day ("bodyTemperature") or every 30 minutes ("importantNotifications"). How could I then discriminate which values I need to read from the JSON in each session?
You will likely be using GSON to parse the response from the server. You might also define default values and tell the server not to return anything that is equal to a default value, like smoker: default NO (to minimize the amount of data transferred). You are highly likely to display the patients in a ListView or RecyclerView; Google a bit on how to implement lazy loading, meaning you tell the server to return just a few results rather than all of them, and when the user scrolls to the end, you ask the server for more if there are any.
Also, using caches on Android is a great way to save a couple of unnecessary requests to the server. You define how long a cache is valid, say 5 minutes, and before repopulating a list you check whether it is still valid. But you should always leave a manual refresh option; SwipeToRefresh is a great and simple way to do just that.
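The cache-validity idea can be sketched in a few lines (shown here in JavaScript rather than Android/Java code; the names and the 5-minute TTL are just the example values from above):

```javascript
// Sketch of cache validity: a response is reused until its age exceeds
// a time-to-live, then refetched from the server.
const TTL_MS = 5 * 60 * 1000;   // "say 5 mins"

const cache = new Map();        // key -> { value, fetchedAt }

function getCached(key, fetchData, now = Date.now()) {
  const hit = cache.get(key);
  if (hit && now - hit.fetchedAt < TTL_MS) return hit.value;  // still valid
  const value = fetchData();                                  // miss or expired
  cache.set(key, { value, fetchedAt: now });
  return value;
}

// A manual refresh (SwipeToRefresh) simply bypasses the validity check:
function forceRefresh(key, fetchData, now = Date.now()) {
  const value = fetchData();
  cache.set(key, { value, fetchedAt: now });
  return value;
}
```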
I hope somebody else has something more to add, as I am interested in this too.
I have an HTML-based application that allows users to store and search information in a MySQL database. They run this on their own servers, so it isn't centralized. I'd like to add a function that lets them see whether their information corresponds to any known info in a central database and, if it doesn't, gives them the option to add it to the central db. I'm not sure whether the triggering script would be best placed on the client side or the server side, so I'm at a loss as to where to start. Any script or config suggestions would be welcome.
Edit to add:
The data is preformatted, not created by the user. It consists of 7-10 fields of data that will likely be consistent with that seen by other users. The purpose is to build a troubleshooting database for users to reference or add to. The central server will be based on Q2A to allow upvotes, comments, etc.
This seems like the opposite of what Freebase does. With Freebase, users can connect to the Freebase API and check whether something exists there if it does not already exist in their own database. It is then up to them to cache the entry for faster retrieval in the future. Alternatively, at least in the past, the Freebase community enabled writing to the Freebase database using the MQL API.
What you are suggesting strikes me as very involved. If you have content creators you really trust, you may be able to get away with not having any review process, but otherwise you will need some peer review and perhaps some programming. Unless you have no content standards at all and anything goes, your database could quickly become overloaded with nonsense, or with things you don't want your website to be associated with (whatever those might be).
Without knowing more about your database, I can't really say what those things would be. What I will say is that if you are looking to have people throw stuff into a centralized location, you may want to (a) use something like OAuth, and (b) set up some checks and balances, because while one of your clients may think it's very important to have "the 100 reasons why liberals/conservatives suck", another one of your clients may take offense. Guess who they will blame?
That being said, creating a RESTful API (I don't know whether you already have one) with an insert_if_not_exists flag could work.
I.e. a request to api.php?{json_string} would be picked up by a function (or functions) that determines what the user wants to do from the json_string.
On the backend, your PHP function could parse it into an array very easily, and if the insert_if_not_exists flag is set, you can create the post while you pull the data. Otherwise you just pull the data (or leave that part out if you only want to give them the option to post, not to pull, in this fashion).
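The dispatch logic could look like this (sketched in JavaScript rather than PHP; the in-memory store and the field names are made up for the example):

```javascript
// Sketch of the insert_if_not_exists dispatch described above. In PHP
// the parse step would be json_decode($json, true) and the store would
// be the central database.
const store = new Map();   // key -> record, standing in for the central db

function handleRequest(jsonString) {
  const req = JSON.parse(jsonString);
  const existing = store.get(req.key);

  if (!existing && req.insert_if_not_exists) {
    store.set(req.key, req.record);   // create the post...
    return { created: true, record: req.record };
  }
  return { created: false, record: existing || null };  // ...or just pull
}
```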
I have been playing around with Node.js for two days now and am slowly understanding how it works. I have checked multiple threads and posts, but I seem to either misunderstand them, or the way I am thinking about this application is completely wrong.
My application is mainly based on PHP and uses Node.js as a notifications system.
I first wanted to do this solely in Node.js, but I am more familiar with PHP, which is why I only want to use Node.js as a notification system.
I do not have any real code to show, as I have mainly been playing around to see what Node can do, and so far it seems to be exactly what I need. But there is one thing I just can't figure out, or seem to misunderstand. So far I have figured out how to send data between the user and the server, using socket.io.
So, say I have a user who is registered and logs in to my application. He then has a socket ID from socket.io, but when he leaves my application and comes back the next day, his socket ID has changed, because it seems to change on every connection. I need my users to somehow always have the same socket ID, or something else that tells my Node.js server it should only send data to one specific user or to a set of users. Also, since the socket ID seems to change on every request, it even changes when the user visits a different page, so I never seem to know which user is which.
I am a little confused, and the flow of working with both PHP and Node.js is still a bit of a mystery to me, so I hope my question is clear. I don't want to depend on many modules, as I find all these different modules kind of confusing for a beginner like me.
As long as PHP and Node.js share sessions stored somewhere other than flat files (say, a cache service or a database, MySQL or NoSQL), Node.js can look up which user a given connection belongs to. You could make the same flat-file sessions work, but a cache or database will make your application more robust.
A common additional practice is to allow only authenticated users to connect, by controlling when you render the JavaScript code that holds the information needed to connect to the socket.io server, and to keep an in-memory list of all connected users with information like username, logs, timestamps, session variables, etc.
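The key point of not relying on the ever-changing socket ID can be sketched like this (simplified; in a real app the username would come from the shared session, and these functions would be wired into socket.io's connection and disconnect events):

```javascript
// Instead of relying on the socket ID (which changes on every
// connection), keep a server-side map from the session's user name to
// the *current* socket ID, refreshed each time the user connects.
const connected = new Map();   // username -> { socketId, connectedAt }

// Called after the shared session has identified the user,
// e.g. from io.on('connection', ...).
function onUserConnected(username, socketId, now = Date.now()) {
  connected.set(username, { socketId, connectedAt: now });
}

function onUserDisconnected(username) {
  connected.delete(username);
}

// To notify one specific user, look up his current socket ID.
function socketIdFor(username) {
  const entry = connected.get(username);
  return entry ? entry.socketId : null;
}
```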
I am trying to create a web application with functionality similar to Google Alerts (by similar I mean the user can provide an email address for the alerts to be sent to, daily or hourly). The only limitation is that it only sends alerts based on a certain keyword or hashtag. I think I have found the fundamental API needed for this web application:
https://dev.twitter.com/docs/api/1/get/search
The problem is that I still don't know all the web technologies needed for this application to work properly. For example: Do I have to store all of the searched keywords in a database? Do I have to keep polling with AJAX requests all the time in order to keep my database updated? What if a keyword the user provided is so popular right now that it gets thousands of tweets in just an hour (not to mention that there might be several emails requesting several trending topics)?
By the way, I am trying to build this application using PHP. So please let me know what kinds of techniques I need to learn for such a web app (and some references, maybe). Any kind of help will be appreciated. Thanks in advance :)
Regards,
Felix Perdana
I guess you should store users' e-mails and search keywords (or whatever) in the database.
Then your app should make API queries (so it should run on a server) to get the relevant data, and then send that data to all the users.
To make it clear, here is the algorithm:
1. The user adds his request on a page like http://www.google.ru/alerts
2. You store his e-mail and keyword in the database.
3. Your server runs a script (you can loop it or use cron) that queries Twitter for new data.
4. Your script processes all the data and sends it to the users' e-mails.
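The steps above can be sketched as follows (searchTweets and sendEmail are stand-ins passed as parameters, not a real Twitter client or mailer):

```javascript
// Sketch of the alert flow: stored subscriptions, a periodic job that
// queries Twitter for each keyword, and an e-mail per non-empty result.
const subscriptions = [];   // step 2: what the database table would hold

function addAlert(email, keyword) {
  subscriptions.push({ email, keyword });
}

// Steps 3 and 4: run this from cron (hourly or daily).
function runAlerts(searchTweets, sendEmail) {
  for (const { email, keyword } of subscriptions) {
    const tweets = searchTweets(keyword);
    if (tweets.length > 0) sendEmail(email, keyword, tweets);
  }
}
```

Because the job runs on a schedule rather than on every page view, a very popular keyword just means a bigger result set per run, which you can cap before e-mailing.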