I am struggling with the logistics of getting multi-device synchronisation to work.
Let's say I have an offline web page (via app-cache) used as an address book - how would I go about structuring a robust multi-device synchronisation system?
I was thinking about the following, but it doesn't seem robust:
Each device has 2 tables:
'synchronised' - a copy of the server table at the last update time, along with the server's timestamp at the last update point.
'awaiting synchronisation' - changes on that device waiting to be synchronised with the server. This could hold insert, delete and update items.
Then when the device connects to the server it:
Uploads all changes to the server (via a mini API) and deletes them from the 'awaiting synchronisation' table.
Sends the server the last synchronisation time.
Downloads all updates from that time onwards and adds them to the 'synchronised' table on the device.
Updates the synchronisation time (using server time to ensure consistency across devices).
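The steps above can be sketched in miniature. This is an illustrative in-memory model, not a real implementation: the `server` object stands in for the mini API, and all names here (`applyChange`, `changesSince`, etc.) are made up for the sketch.

```javascript
// In-memory stand-in for the server side of the mini API.
const server = {
  clock: 0,
  rows: new Map(),   // id -> { id, name, updatedAt }
  log: [],           // append-only change log
  now() { return ++this.clock; },
  applyChange(change) {
    const updatedAt = this.now();
    if (change.op === 'delete') this.rows.delete(change.id);
    else this.rows.set(change.id, { id: change.id, name: change.name, updatedAt });
    this.log.push({ ...change, updatedAt });
    return updatedAt;
  },
  changesSince(ts) { return this.log.filter(c => c.updatedAt > ts); },
};

// One device: a 'synchronised' table plus an 'awaiting synchronisation' queue.
function makeDevice() {
  return { synchronised: new Map(), pending: [], lastSync: 0 };
}

function localEdit(device, change) { device.pending.push(change); }

function sync(device) {
  // 1. Upload pending changes, then clear the queue.
  for (const change of device.pending) server.applyChange(change);
  device.pending = [];
  // 2 + 3. Ask for everything since the last server timestamp we saw.
  for (const c of server.changesSince(device.lastSync)) {
    if (c.op === 'delete') device.synchronised.delete(c.id);
    else device.synchronised.set(c.id, { id: c.id, name: c.name, updatedAt: c.updatedAt });
    // 4. Track the server's clock, not the device's.
    device.lastSync = Math.max(device.lastSync, c.updatedAt);
  }
}

const phone = makeDevice();
const laptop = makeDevice();
localEdit(phone, { op: 'insert', id: 'c1', name: 'Alice' });
sync(phone);   // pushes Alice, pulls her back with a server timestamp
sync(laptop);  // laptop now sees Alice too
console.log(laptop.synchronised.get('c1').name); // "Alice"
```

The gap this sketch leaves open is concurrent edits to the same record on two devices between syncs - that is exactly where some form of conflict handling becomes necessary.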
Would that work or would it fall over?
Like I said, I've never tried multi-device synchronisation before, and it seems to be a hard thing to find topics on.
Finally, are there any frameworks designed for this?
Has anyone done something similar with an app, maybe, as offline web pages are rare? I just need the logical order and any 'gotchas' I need to be aware of.
Thanks in advance.
It can be tough to do. In the end I realised that what I needed to do was create a version control system, similar to Git... in JavaScript. Once you have the versions you can make reasoned decisions based on a change's version number and the version that the change was based upon.
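The core idea - deciding based on the version a change was based upon - can be sketched in a few lines. Everything here (`applyChange`, `basedOn`, the in-memory `store`) is an illustrative assumption, not any project's actual API.

```javascript
// Toy sketch of version-based merging: every stored record carries a version
// number, and every incoming change says which version it was based on.
// A change based on the current version applies cleanly; anything else is a
// conflict that needs a reasoned decision (merge, reject, or ask the user).

const store = new Map(); // id -> { value, version }

function applyChange(id, change) {
  const current = store.get(id) || { value: null, version: 0 };
  if (change.basedOn !== current.version) {
    return { ok: false, reason: 'conflict',
             theirBase: change.basedOn, currentVersion: current.version };
  }
  store.set(id, { value: change.value, version: current.version + 1 });
  return { ok: true, version: current.version + 1 };
}

// Two devices both edit record 'a' starting from version 1.
applyChange('a', { basedOn: 0, value: 'original' });             // -> version 1
const first  = applyChange('a', { basedOn: 1, value: 'edit from phone' });
const second = applyChange('a', { basedOn: 1, value: 'edit from laptop' });
console.log(first.ok, second.ok); // true false -- the second edit is a conflict
```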
I don't know if the project I started will be useful for you, it uses these concepts, but even if it is not I feel you should read through the documentation as it will help you think about how to tackle the wider problem. The project is stalled right now as I am looking for work (which is a lot of work) but I will be picking it up again soon.
Matt Forrester
I have a PHP system that does everything a social media platform does, i.e. add comments, upload images, add objects, logins, sessions etc., storing all interactions in a MySQL database. So I've got a pretty good infrastructure to build on.
The next stage of my project is to develop the system so that notifications are sent to the "Networks" of "Contacts" which are associated with one another - a notifications system like Facebook's, i.e. "Chris has just commented on object N".
I'm looking at implementing this system for a lot of users - 10,000+ - so it has to be reliable. I've researched Facebook's implementation, which turned up techniques such as memcache, sockets & hashing.
Are there any systems that can be easily adapted to this functionality? I could do with a quick, reliable implementation.
P.S. One thought I had was just querying the database every 5 seconds, e.g. "select everything that has happened in the last 5 seconds", using jQuery, Ajax & PHP, but that's stupid - it would exhaust the server & database, right?
I've seen this website & this article; can anyone reflect on these and tell me what the best approach is, as I am hesitant about which path to follow.
Thanks
This is not possible with just pure vanilla PHP/MySQL. What you can do is set MySQL triggers (http://dev.mysql.com/doc/refman/5.0/en/triggers.html) on your data, and then use the UDF sys_exec (which you will need to install on your MySQL server) to run the notification PHP script. See this post: Invoking a PHP script from a MySQL trigger
If you can get this set up it should be pretty reliable and fast.
I'm running an enterprise-level PHP application. It's a browser game with thousands of users online, on an infrastructure that my boss refuses to upgrade, and the machinery is running at 2-3 system load (yep, Linux) at all times. Anyhow, that's not the real issue. The real issue is that some users wait until the server gets loaded (prime time), bring out their mouse clickers and click the same submit button 10-20 times, sending 10-20 requests at the same time while the server is still processing the initial request and thus has not yet updated the cache and the database.
Currently I have an output variable on each request which is valid for 2 minutes, and I have a "mutex" lock, which is basically a flag inside memcache that, if found, blocks further execution of the script. But the mouse clicker makes so many requests at the same time that they run almost simultaneously, which is a big issue for me.
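For what it's worth, the race in a check-then-set flag, and the atomic acquire that closes it, can be sketched like this. Real memcached's add() only succeeds when the key does not yet exist, which is what makes the acquire atomic on the server; the Map below is just an in-memory stand-in, and all names are illustrative.

```javascript
// A plain "read flag, then set flag" has a window where several
// near-simultaneous requests all see no flag. An add()-style operation
// (set-only-if-absent) closes that window: exactly one request wins.

const cache = new Map();

function addLock(key, ttlMs, nowMs) {
  const existing = cache.get(key);
  if (existing && existing.expiresAt > nowMs) return false; // someone holds it
  cache.set(key, { expiresAt: nowMs + ttlMs });             // atomic in real memcached
  return true;
}

function handleSubmit(userId, nowMs) {
  if (!addLock('submit:' + userId, 2000, nowMs)) {
    return 'rejected: request already in progress';
  }
  // ... do the expensive work here, then optionally delete the lock ...
  return 'accepted';
}

// 20 clicker requests arriving within the same few milliseconds:
const results = [];
for (let i = 0; i < 20; i++) results.push(handleSubmit('player42', 1000 + i));
console.log(results.filter(r => r === 'accepted').length); // 1
```

The key point is that the check and the set must be a single server-side operation; doing them as two separate memcache calls reintroduces the race.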
How are you, the majority of Stack Overflow folks, dealing with this issue? I was thinking of flagging the cookie/session, but I think I will hit the same issue if the server gets overloaded. Optimisation is impossible; the source is 7 years old and is quite optimised, with no queries on most pages (running off the cache) and the database only queried on certain user input, like the one I'm trying to prevent.
Yep, it's procedural code with no real objects. The machines run PHP 5 but the code itself is more like PHP 4. I know, I know, it's old and stuff, but we can't spare the resources to rewrite this whole mess, since most of the original developers who knew how everything is intertwined have left, so yeah, I'm basically patching old holes. But as far as I know this is a general issue on loaded PHP websites.
P.S.: Disabling the button with JavaScript on submit is not an option. The real cheaters are advanced users. One of them had written a bot clicker and packed it as a Google Chrome extension. Don't ask how I dealt with that.
I would look for a solution outside your code.
I don't know which server you use, but Apache has some modules like mod_evasive, for example.
You can also limit connections per second from an IP in your firewall.
I'm getting the feeling this is touching more on how to update a legacy code base than anything else. While implementing some type of concurrency would be nice, the old code base is your real problem.
I highly recommend this video which discusses Technical Debt.
Watch it, then if you haven't already, explain to your boss in business terms what technical debt is. He will likely understand this. Explain that because the code hasn't been managed well (debt paid down) there is a very high level of technical debt. Suggest to him/her how to address this by using small incremental iterations to improve things.
Limiting the IP connections will only make your players angry.
I have fixed and rewritten a lot of stuff in some famous open-source game clones with old-style code.
Well, I must say that cheating can always be avoided by executing the right queries and logic.
For example, look here: http://www.xgproyect.net/2-9-x-fixes/9407-2-9-9-cheat-buildings-page.html
Anyway, about performance: keep in mind that code inside sessions will block all other threads until the current one is closed, so be careful about wrapping all your code inside sessions. Also, sessions should never contain heavy data.
About scripts: in my games I have a PHP module that automatically rewrites links, adding a random id saved in the database - a sort of CSRF protection. Human users will click on the changed links, so they will not see the difference, but scripts will keep asking for the old links, and after a few tries they are banned!
Other scripts use the DOM, so it's easy to defeat them by inserting some useless DIVs around the page.
Edit: you can boost your app with https://github.com/facebook/hiphop-php/wiki
I don't know if there's an implementation already out there, but I'm looking into writing a cache server which has responsibility for populating itself on cache misses. That approach could work well in this scenario.
Basically you need a mechanism to mark a cache slot as pending on a miss; a read of a pending value should cause the client to sleep a small but random amount of time and retry; population of the pending slot, as in a traditional model, would be done by the client that encountered the miss rather than a pending value.
In this context, the script is the client, not the browser.
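A minimal sketch of that pending-slot protocol, with an in-memory Map standing in for the cache server (all names illustrative):

```javascript
// A miss atomically marks the slot PENDING and tells that one client to
// populate it; everyone else who reads PENDING backs off and retries.

const PENDING = Symbol('pending');
const slots = new Map();

function lookup(key) {
  if (!slots.has(key)) {
    slots.set(key, PENDING);       // atomically claim the slot
    return { status: 'populate' }; // this client must fill it in
  }
  const v = slots.get(key);
  if (v === PENDING) return { status: 'retry' }; // sleep a random bit, try again
  return { status: 'hit', value: v };
}

function populate(key, value) { slots.set(key, value); }

// Three near-simultaneous clients asking for the same key:
const a = lookup('user:7');          // first one is told to populate
const b = lookup('user:7');          // sees PENDING, should back off and retry
populate('user:7', { name: 'Ann' });
const c = lookup('user:7');          // the retry now hits
console.log(a.status, b.status, c.status); // populate retry hit
```

A production version would also need a timeout on PENDING, so a crashed populator doesn't wedge the slot forever.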
I have been playing around with Node.js for two days now, and I am slowly understanding how it works. I have checked multiple threads and posts, but I seem to either misunderstand them, or the way I am thinking about this application is completely wrong.
My application is mainly based on PHP and uses Node.js as a notifications system.
I first wanted to do this solely in Node.js, but I am more familiar with PHP, so that is why I only want to use Node.js as a notifications system.
I do not have any real code to show, as I have mainly been playing around and seeing all that Node can do, and so far it seems to be the thing I need; there is just one thing I can't figure out or seem to misunderstand. So far I have figured out how to send data between the user and the server, and used socket.io for this.
So, say I have a user who is registered and logs in to my application. He then has a socket id from socket.io, but when he leaves my application and comes back the next day his socket id has changed, because it seems to change on every connection. I need my users to somehow always have the same socket id, or something else that tells my Node.js server it should only send data to one specific user or to multiple users. Also, as the socket id seems to change on every request, it even changes when the user visits a different page, so I never seem to know which user is which.
I am a little confused, and the flow of working with both PHP and Node.js is still a little mysterious to me, so I hope my question is clear. I don't want to depend on many modules, as I find all these different modules kind of confusing for a beginner like me.
As long as PHP and Node.js use sessions stored somewhere other than flat-file sessions - say a cache service, or a database (MySQL or NoSQL) - both sides can recognise the same user. You could share the same flat-file sessions, though a cache or database would make your application scale better. On top of that, there are additional practices for allowing only authenticated users to connect, such as controlling when to render the JavaScript code that holds the information needed to connect to the socket.io server, and keeping an additional in-memory list of all connected users with information like username/log/timestamps/session variables/etc.
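The usual way around the changing socket id is to stop storing socket ids at all: keep a server-side map from your application's user id (recovered from the shared session) to whatever sockets that user currently has open. A sketch, with plain objects standing in for socket.io sockets (all names illustrative):

```javascript
const socketsByUser = new Map(); // userId -> Set of live sockets

function onConnect(userId, socket) {
  if (!socketsByUser.has(userId)) socketsByUser.set(userId, new Set());
  socketsByUser.get(userId).add(socket);
}

function onDisconnect(userId, socket) {
  const set = socketsByUser.get(userId);
  if (set) {
    set.delete(socket);
    if (set.size === 0) socketsByUser.delete(userId);
  }
}

function notifyUser(userId, message) {
  const set = socketsByUser.get(userId) || new Set();
  for (const socket of set) socket.received.push(message); // socket.emit(...) in real socket.io
  return set.size; // how many open tabs/devices got it
}

// A user with two tabs open, identified by the same session-derived user id:
const tab1 = { received: [] }, tab2 = { received: [] };
onConnect('user:42', tab1);
onConnect('user:42', tab2);
console.log(notifyUser('user:42', 'hello')); // 2
onDisconnect('user:42', tab1);
console.log(notifyUser('user:42', 'again')); // 1
```

This way it never matters that socket.io hands out a new id per connection; the stable key is your own user id.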
I'm building a chat widget using the Pusher service (http://pusher.com) and I need to save all the messages sent by users into a database, so that they are accessible later. I'm using a MySQL database, and the only way that comes to mind is to make a new insert each time the chat message event is triggered, but I'm afraid it will not be as fast as it should be.
What databases and techniques would you prefer in this case for saving chat messages?
One approach you can take is to use MySQL's INSERT DELAYED. When you do this, the client does not have to wait for the insert to complete. Rather, the server will queue the inserts and execute them as it can, so the process/thread that's handling the chat will not have to wait on them. But you'll still have a chat history stored in the database that can be retrieved. Assuming you are using Pusher to share 'live' messages as they come in, and thus don't need immediate access to the chat history, this might do the trick for you.
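The same decoupling can also be sketched on the application side: the chat handler only queues the message and returns; a flush step (a timer or worker in real life) writes batches to storage. The arrays below stand in for MySQL, and all names are illustrative.

```javascript
const queue = [];
const chatHistory = []; // stand-in for the messages table

function onChatMessage(user, text) {
  queue.push({ user, text, at: Date.now() });
  // return to the client right away; no waiting on the database
  return 'delivered';
}

function flush() {
  // in real life this runs on a timer or worker and does one batched INSERT
  const batch = queue.splice(0, queue.length);
  chatHistory.push(...batch);
  return batch.length;
}

onChatMessage('ann', 'hi');
onChatMessage('bob', 'hello');
console.log(flush());            // 2
console.log(chatHistory.length); // 2
console.log(queue.length);       // 0
```

Note that INSERT DELAYED only applies to certain storage engines (e.g. MyISAM) and was deprecated in later MySQL versions, so an application-side queue like this is the more portable form of the same idea.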
Unless you're planning on creating a chat system including thousands of people, I'm fairly sure that a properly structured MySQL solution will handle this no problem. Where I work we have an intranet chat solution based on PHP and MySQL, and it is running flawlessly (for our medium-sized team of ~100 people).
One suggestion I do have is to make sure you understand MySQL indexing. Make sure your chat system is truly utilizing the indexes to their fullest potential. You could greatly increase performance this way.
If concurrency really is becoming an issue for you I hear a lot of people are having good experiences with node.js. But if you're comfortable in php and mysql I'd say go that way first.
Happy coding!
(I know this is old, but) there is an interesting demo here (albeit in Rails) that does what you're talking about. It includes a possible schema.
https://github.com/dipth/pusher_demo
I've been given the task of connecting multiple sites belonging to the same client into a single network, so I would like to hear some architectural advice on connecting these sites into a single community.
These sites include:
1. Invision Power Board Forum (the most important site)
2. 3 custom-made CMSs (changes to code allowed)
3. 1 Drupal site
4. 3-4 WordPress blogs
Requirements are as follows:
1. Connecting all users of all sites into a single administrable entity, with the ability to change permissions, ban users, etc.
2. Later on, based on this implementation, I have to implement "Facebook-like" chat, which will be available to all users regardless of where they log in.
I have a few ideas in mind on how to approach this, but would like to hear from some people with more experience and expertise than myself.
Cheers!
You're going to have one hell of a time. Each of those site platforms has a very disparate user architecture: there is no way to "connect" them all together fluidly without numerous codebase changes. You're looking at making deep changes to each of those platforms to communicate with a central database, likely modifying thousands (if not tens of thousands) of lines of code.
On top of the obvious (massive) changes to all of the platforms, you're going to have to worry about updates: what happens when a new version of Wordpress is released? You'd likely have to update all of your code manually (since you can't just drop in the changes). You'd also have to make sure that all of the code changes are compatible with your current database. God forbid one of the platforms starts storing user information differently---you'd have to make more massive code changes. This just isn't maintainable.
Your alternative (and best bet) is to have some sort of synchronization job that runs every hour or so: iterate through each user in each database and compare it to see if it both exists and is up-to-date in the other databases. If not, push the changes out. The problem with this is that it will get significantly slower as you get more and more users.
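That hourly job might be sketched like this, assuming each platform's users table can be reduced to { email, name, updatedAt } records. The arrays below stand in for the real Invision/Drupal/WordPress databases, and all names are illustrative.

```javascript
function syncUsers(databases) {
  // Find the newest copy of each user across all sites...
  const newest = new Map(); // email -> freshest record
  for (const db of databases) {
    for (const user of db) {
      const seen = newest.get(user.email);
      if (!seen || user.updatedAt > seen.updatedAt) newest.set(user.email, { ...user });
    }
  }
  // ...then push missing or stale copies back out to every site.
  for (const db of databases) {
    for (const [email, master] of newest) {
      const i = db.findIndex(u => u.email === email);
      if (i === -1) db.push({ ...master });
      else if (db[i].updatedAt < master.updatedAt) db[i] = { ...master };
    }
  }
}

const forum = [{ email: 'a@x.com', name: 'Ann (new)', updatedAt: 2 }];
const blog  = [{ email: 'a@x.com', name: 'Ann', updatedAt: 1 },
               { email: 'b@x.com', name: 'Bob', updatedAt: 1 }];
syncUsers([forum, blog]);
console.log(forum.length, blog[0].name); // 2 "Ann (new)"
```

The "last write wins by timestamp" rule here is also where this approach gets fragile: two sites editing the same user within one sync window will silently lose one of the edits.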
Perhaps another alternative is to simply offer a custom OpenID implementation. I believe that Drupal and Wordpress both have OpenID plugins that you can take advantage of. This way, you could allow your users to sign in with a pseudo-single sign-on service across your sites. The downside is that users could opt not to use it.
Good luck