I am looking to add to an existing C++/Windows application the online storage of application files, for backup purposes and for ease of access to application files across multiple computers. The files are around 100k in size, and I’d estimate that each customer will want to store 1-10 of these files, but some may wish to store hundreds in different folders. I would estimate that over time several thousand customers would want to use this feature, but the average use per month would not be that high (my customers tend to use the software extensively for a couple of months and then settle to a much lower usage pattern).
The security requirements are not high (no personal or sensitive information is in the application files), but basic login authentication would have to take place. I have a user forum (phpbb), which may be the easiest place to take care of the login creation / password recovery (depending on the server used below).
I have a web server (with PHP & MySQL), and the disk space / bandwidth should not be a problem. I do not wish to use third-party libraries without source, as past experience has shown this introduces bugs and problems when upgrading compilers, etc.
I’m aware that I’ll need both server and client software and they will need to communicate using a protocol. As far as I can tell, my options are:
Use an existing online storage provider, such as Dropbox. The problem I see with this is the client-side interface software (I know of SharpBox, but it is .NET), and other possible restrictions on storage space, account activity etc.
Use open source online storage software, such as ownCloud. WebDAV should be usable for the interface, but again there is currently no suitable client software for ownCloud. I did get as far as setting up ownCloud 2 and 3, but WebDAV seemed to be only partially supported. This is my preferred solution – use an off-the-shelf server with a decent example C++ client to get me started.
Create my own server, protocol and client front end. I could use WebDAV or SOAP for the protocol. This is my last option purely because of the amount of work involved in re-inventing the wheel, but it gives the simplest, most flexible system and the best chance of integration with the phpBB forum for login credentials etc.
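To make option 3 a bit more concrete, this is roughly what I imagine the server side could look like. It is only a sketch; the table, header and path names are placeholders:

```php
<?php
// upload.php - minimal sketch of a custom storage endpoint (option 3).
// Assumes a MySQL table storage_tokens(user_id, token) kept in sync with
// the phpBB users table; every name here is hypothetical.

$pdo = new PDO('mysql:host=localhost;dbname=appstorage', 'appuser', 'secret');

// 1. Authenticate: the C++ client sends its per-user token in a custom header.
$token = $_SERVER['HTTP_X_API_TOKEN'] ?? '';
$stmt  = $pdo->prepare('SELECT user_id FROM storage_tokens WHERE token = ?');
$stmt->execute([$token]);
$userId = $stmt->fetchColumn();
if ($userId === false) {
    http_response_code(401);
    exit('invalid token');
}

// 2. Store the uploaded file in a per-user directory outside the web root.
if (!isset($_FILES['file'])) {
    http_response_code(400);
    exit('missing file');
}
$name = basename($_FILES['file']['name']);   // strip any path components
$dir  = "/var/appstorage/$userId";
if (!is_dir($dir)) {
    mkdir($dir, 0750, true);
}
move_uploaded_file($_FILES['file']['tmp_name'], "$dir/$name");
echo 'ok';
```

The C++ side would then just be an HTTP multipart POST (WinHTTP or a from-source build of libcurl), with listing and downloading handled by two more small scripts along the same lines.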
Are there other options which meet my needs which I have overlooked?
Assume there are two different apps on App Engine: one powered by Go and another by PHP.
They each need to be able to make specific requests to each other, purely over the backend network (i.e. these are the only services that need to make these specific requests; other remote requests should be blocked).
What is the best-practices way of doing this? Off the top of my head, here are a few possible solutions and why I am a bit worried about each of them:
1) Do not keep them as separate apps, but rather modules
The problem with this is that using modules introduces some other annoyances, such as difficulties with Channel Presence reporting. Also, conceptually, these two requests are really the only places the apps touch, and it will be clearer to see what's going on in terms of database usage etc. if they are separated. But the presence issue is more of a show-stopper.
2) Append the request with some hardcoded long secret key and only allow response if via SSL
It seems a bit strange to rely on this, since the key would never change... theoretically the only way it could be known is if an administrator on the account or someone with the source revealed it... but I don't know, just seems strange
3) Only allow via certain IP ranges (maybe combined with #2)
This just seems iffy; can the IP ranges be definitively known?
4) Pub/Sub
So it seems App Engine allows a pub/sub mechanism, but that doesn't really fit my use case, since I want to get the response right away, not via a postback once the subscriber processes it.
For all of them: as a side point, assuming it is some sort of HTTPS request, is this done using the Socket API for each language?
HTTPS is of course an excellent idea in general (not just for communication between two GAE apps).
But, for the specific use case, I would recommend relying on the X-Appengine-Inbound-Appid request header: App Engine's infrastructure ensures that this cannot be set on requests not coming from GAE apps, and, for requests that do come from GAE apps (via a url-fetch that doesn't follow redirects), the header is set to the app-id.
This is documented for Go at https://cloud.google.com/appengine/docs/go/urlfetch/ , for PHP at https://cloud.google.com/appengine/docs/php/urlfetch/ (and it's just the same for Java and Python, by the way).
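On the PHP side, the check can be as small as the sketch below; the allowed app-id is obviously a placeholder for your Go app's id:

```php
<?php
// Sketch only: reject anything that did not arrive from the trusted GAE app.
// App Engine itself strips/sets X-Appengine-Inbound-Appid, so it cannot be
// forged by an outside caller.
$allowedAppId = 'my-go-app';   // placeholder app id

$caller = $_SERVER['HTTP_X_APPENGINE_INBOUND_APPID'] ?? '';
if ($caller !== $allowedAppId) {
    http_response_code(403);
    exit('forbidden');
}
// ...handle the internal request here...
```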
"purely over the backend network"
"Only allow via certain IP ranges"
These requirements are difficult or impossible to fulfill with App Engine infrastructure, because you're not in control of the physical network routes. From the App Engine FAQ:
App Engine does not currently provide a way to map static IP addresses to an application. In order to optimize the network path between an end user and an App Engine application, end users on different ISPs or geographic locations might use different IP addresses to access the same App Engine application.
Therefore always assume your communication happens over the open network and never assume anything about IPs.
"Append the request with some hardcoded long secret key"
The hard-coded long secret does not provide any added security, only obscurity.
"only allow response if via SSL"
This is a better idea; encrypt all of your internal traffic with a strong algorithm. For example, ECDHE-RSA or ECDHE-ECDSA if available.
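On the receiving side, a simple way to refuse plain-HTTP traffic before doing anything else is sketched below; the assumption here is that the runtime sets the HTTPS server variable for secure requests, as App Engine's PHP environment does:

```php
<?php
// Sketch: refuse to serve an internal endpoint over plain HTTP.
if (($_SERVER['HTTPS'] ?? '') !== 'on') {
    http_response_code(403);
    exit('HTTPS required');
}
// ...internal traffic handled here, already encrypted in transit...
```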
I work for a company where we do a lot of digital projects and need back and forth uploading/downloading of assets and files between clients and employees.
Ideally I want to put in place a web portal where users can log in and access a designated area to upload/download files. Each area must be isolated from other users/clients and secure, so I was thinking of creating an admin panel to set permissions against a user database.
This sounds like a common need to me. Are there any free or open frameworks that do this?
If I end up building this custom, using HTML, JS, MySQL and PHP, what would be the ideal backend setup for storing files? I.e. what type of server configuration would be secure and robust?
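If I do end up building it custom, the rough setup I have in mind is to keep uploaded files outside the web root and stream them through a permission-checked script, something like this sketch (table, column and path names are just placeholders):

```php
<?php
// download.php - sketch of serving a client's file only after a permission check.
// Files live outside the web root; all names here are made up.
session_start();
$pdo = new PDO('mysql:host=localhost;dbname=portal', 'portal', 'secret');

$fileId = (int)($_GET['id'] ?? 0);
$stmt = $pdo->prepare('SELECT path, name FROM files WHERE id = ? AND client_id = ?');
$stmt->execute([$fileId, $_SESSION['client_id'] ?? 0]);
$file = $stmt->fetch(PDO::FETCH_ASSOC);

if (!$file || !is_file($file['path'])) {
    http_response_code(404);
    exit('not found');
}
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $file['name'] . '"');
readfile($file['path']);
```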
Thanks
ResourceSpace—free and open source digital asset management. The documentation's good and covers most situations, including configuring it to separate clients. The permissions configuration is a bit cryptic, but maybe they've improved that since I installed/updated it last (a while back). And it has a "pluggable remote API architecture."
Widen provides digital asset management software in a fully hosted environment. Digital asset management is becoming very popular with every type of business for easy retrieval and safe storage of all their media files.
I am developing a web app and want to be able to stagger the deployment of new builds and versions across our users. For example...
Deploy new version of app and migrate a couple of test accounts to it for testing
When testing looks good, move say 5% of customers to the new version and monitor support problems and server load with those customers.
If it is still going OK, gradually move more and more customers over to the new version until everyone is updated.
FogBugz and Kiln from Fog Creek use a deployment system like this. You can read about it here...
The problem I am trying to solve at the moment is that different accounts on the system can be using different versions of the code.
What is a good way of managing and controlling this? Can Apache do some of the heavy lifting here? I want to avoid too much overhead or weird loader scripts to work out where to send the request. How do web apps like FogBugz On Demand deal with the problem? Is there a recognised design pattern for this?
The users are identified via a domain name (eg user1.example.com, user-bob.example.com, etc).
There are easily hundreds of ways to accomplish this, so let's think at a high level without getting into the specifics of the architecture:
Large public sites like Yahoo and MSN handle design changes with random samples and set cookies with long timeouts to identify who should be receiving the new design.
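As a rough sketch of that cookie-based sampling (the percentage, cookie name and values are arbitrary):

```php
<?php
// Sketch: put a random ~5% of visitors onto the new design via a long-lived cookie.
if (!isset($_COOKIE['ui_bucket'])) {
    $bucket = (mt_rand(1, 100) <= 5) ? 'new' : 'current';
    setcookie('ui_bucket', $bucket, time() + 60 * 60 * 24 * 365);  // one year
    $_COOKIE['ui_bucket'] = $bucket;   // make it visible to this request too
}
$design = $_COOKIE['ui_bucket'];       // later: load templates/assets for $design
```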
For paid upgrades and beta invites you should be able to identify and tag which customer accounts will receive the new 'design' or feature set upon their login. For instance, the new updates to Digg v4 were for logged-in and opted-in customers only. Facebook had a similar rollout across their system with the new profile pages.
You may decide to pay for beta testers. You can easily use Amazon's Mechanical Turk or sites like custfeedback.com
The specifics will be up to you and your architecture. Hopefully you've written your software with this functionality in mind, and provided easy ways to apply both application and database upgrades at deploy time. Magento (an open-source e-commerce platform) handles this very well. Each module is built in the form of a plugin, and each of its components keeps a record of its own version. Database upgrades are performed on the fly with install and upgrade scripts based on the new/future version retained in configuration files.
You may choose to move your beta testers to a new domain or database that has more detailed logging and realtime analysis than your production machine. This was the method mentioned in the Kiln blog post; they referenced http://martinfowler.com/bliki/BlueGreenDeployment.html. However you accomplish the segregation of your accounts and traffic, you will eventually have to consolidate: most likely you'll need to perform that migration in a maintenance window and get everyone up to the same version.
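Since your users are already identified by subdomain, Apache can simply point every *.example.com host at one front controller and let a few lines of code pick the deployed release. A sketch, with the version map and release paths made up:

```php
<?php
// index.php - sketch of per-account version routing by subdomain.
$host    = $_SERVER['HTTP_HOST'];        // e.g. user-bob.example.com
$account = explode('.', $host)[0];       // "user-bob"

// Which release each account runs; anyone not listed gets the stable one.
$versions = ['testacct1' => '2.1-beta', 'user-bob' => '2.1-beta'];
$release  = $versions[$account] ?? '2.0';

// Hand the request to that release's own front controller.
require "/var/www/releases/$release/public/index.php";
```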
Best of luck!
I need some guidance around how to develop the app I'm working on.
It's basically a backend system to manage photos and slideshows (e.g. arrange photos in albums, decide which ones to publish, update names and captions, etc.)
I would like to avoid giving the source code to clients but would like to keep the actual photos and thumbnails on the client's server.
I'm not sure what would be the best way to achieve this. In my mind the steps are:
a) client uploads a photo to MY site
b) photo is registered into my DB
c) the original photo is moved to client's server
d) thumbnails are generated and saved on client's server
then the public site:
e) install the public website on my client's server;
f) when a user is browsing the client's website, the script gets the list of images to show from my database, and gets them from the local server.
(hope I made myself clear)
basically the question is: what's the best way to give the client minimal/no access to the source code?
I agree with benjy; however, you can get away with using an API to manage the system-specific calls and just having an upload handler on the client's box that communicates back to your API. The client still has some code, but it is minimal and requires an API call to function. That way you reduce the DB needs and the resources required to manage the client's code.
The API is used to authenticate and manage communication, while the upload/manage scripts handle the upload and image handling.
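As a sketch of that split (the API URL, key and paths are placeholders), the only thing living on the client's box is a thin handler that defers to your API:

```php
<?php
// upload_handler.php - sketch of the thin script left on the client's box.
// It accepts the upload locally but asks the central API whether to proceed.
$apiUrl = 'https://api.example.com/v1/authorize-upload';  // placeholder
$apiKey = 'per-client-key';                               // placeholder

if (!isset($_FILES['photo'])) {
    http_response_code(400);
    exit('no file');
}

// Ask the central API for permission (and register the photo in your DB).
$ch = curl_init($apiUrl);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS     => [
        'key'  => $apiKey,
        'file' => $_FILES['photo']['name'],
    ],
]);
$response = json_decode(curl_exec($ch), true);
curl_close($ch);

if (($response['allowed'] ?? false) !== true) {
    http_response_code(403);
    exit('not authorized');
}

// Keep the original photo on the client's server, as planned.
move_uploaded_file($_FILES['photo']['tmp_name'],
                   '/var/www/photos/' . basename($_FILES['photo']['name']));
echo 'stored';
```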
IMO, this seems a little unnecessary. What exactly is your concern about having the source code rest on a client's server? All you need is a signed license agreement between you and the client preventing them from doing anything with it.
Or, if you really don't trust them, just sell it as hosted software. No point in the above procedure, which is rather convoluted (no offense), when you can just have everything on one server.
Just my $.02.
You can obfuscate the code with a commercial tool like ionCube, or you can develop your application and license it using a SaaS model, and provide an API for the client software to use.
Zend Guard, SourceGuardian, ionCube and similar tools are other viable options if you cannot keep the code local but want to make it difficult to find out what the "source" is.
I am about to deliver an Adobe AIR app to a customer, but it's my first delivery of any sort, i.e. I have no experience whatsoever with licensing etc.
Users of this app may or may not be online, so I can't count on that. In fact it's 99% sure that they will be offline. Nor do I expect them to be very tech-savvy people who will spend enough time scouting for ways to "crack" it.
So, is there an okay-ish way to protect this app? That is, I don't want people to simply copy the installation folder, take it to another machine and run it. It should be slightly harder than this.
Oh, and I am also using PHP and MySQL, with which this AIR app communicates. So anything you guys could help me with is very welcome.
Protect the PHP API and not the front-end app. Have a license key which is bound to an IP address, and authenticate that the request (which contains the key) is coming from the correct IP.
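A sketch of what that check could look like on the API side (table and column names are hypothetical):

```php
<?php
// Sketch: the license key must exist and be bound to the caller's IP.
$pdo = new PDO('mysql:host=localhost;dbname=licensing', 'lic', 'secret');

$key  = $_POST['license_key'] ?? '';
$stmt = $pdo->prepare('SELECT bound_ip FROM licenses WHERE license_key = ?');
$stmt->execute([$key]);
$boundIp = $stmt->fetchColumn();

if ($boundIp === false || $boundIp !== $_SERVER['REMOTE_ADDR']) {
    http_response_code(403);
    exit(json_encode(['error' => 'invalid license']));
}
// ...serve the API call for this licensed installation...
```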
If you want to protect your Flex app you can use irrObfuscator. There is a 30-day free demo.
If you wish to obfuscate your PHP code I would suggest ionCube. There is an online obfuscator that you can pay for à la carte. Pretty useful, though you need the ionCube loaders, which are a set of PHP extensions that you will find in the products section. I'm not sure, but I think you can install the loaders without touching the PHP config, so it's shared-hosting friendly.
You cannot protect anything that's web-based or JavaScript-based, purely because the complete source code is right there.
Anyone who knows how to use 'right click' could copy your files. You can obfuscate your code, but you cannot protect it. If you think that this shouldn't be possible, write a desktop app in a 'real' programming language.
When the app installs, I would do the following:
Create a file in "app-storage" that basically indicates the app has been installed.
Fire off a service call and make a record of the install
Change a file in the app directory to indicate the app has been installed
On subsequent startups, check for the presence of the file in "app-storage" as long as the file in the app directory indicates the app has been installed. If you see that a customer keeps installing the app over and over, this could be flagged in their account and appropriate action taken. If you want to get fancy, the file in "app-storage" could be a one-way hash of some information from the file in the app directory (install date?) plus some value baked into the AIR application.
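The "fire off a service call" step could land on something as simple as this server-side sketch (table names and the reinstall threshold are made up):

```php
<?php
// register_install.php - sketch of the service call that records each install.
// The AIR app POSTs its account id and the one-way hash it baked from the
// app-directory file; everything here is a placeholder.
$pdo = new PDO('mysql:host=localhost;dbname=licensing', 'lic', 'secret');

$account = $_POST['account_id'] ?? '';
$hash    = $_POST['install_hash'] ?? '';

$pdo->prepare('INSERT INTO installs (account_id, install_hash, created_at)
               VALUES (?, ?, NOW())')
    ->execute([$account, $hash]);

// Flag accounts that keep reinstalling so a human can take a look.
$count = $pdo->prepare('SELECT COUNT(*) FROM installs WHERE account_id = ?');
$count->execute([$account]);
if ($count->fetchColumn() > 5) {
    $pdo->prepare('UPDATE accounts SET flagged = 1 WHERE id = ?')
        ->execute([$account]);
}
echo 'recorded';
```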
In general, I think the key here is to trust your users and not make the assumption they are trying to steal. You want to make the system as painless as possible. It does not build a good relationship with customers when you treat them like criminals, so creating an "ironclad" approach probably isn't even the best idea.
I think pretty much the only good way to do this is to require activation after installing (online activation, with a phone backup).
From what you're saying, it seems like the backend is installed on-site and would not be able to provide adequate copy protection.