Convincing an IT Manager to allow SQL Server instead of Access - php

An IT Manager is not allowing the use of SQL Server with an ASP.NET website being developed. The current setup being replaced is a php site connecting to a Microsoft Access database. I have a few reasons of my own as to why SQL should be used, but would like as many strong arguments as possible (student vs. IT Man.). Does anyone have any strong arguments on why SQL should be used? Particularly posed towards an IT Manager who has stated "this is the way we have been doing it, and [it] has been working."
Thanks in advance!
UPDATE
In the interest of 'unloading' this question... If you would recommend keeping Access, when and why?

Do a load test on your finished product and prove that Access isn't meant for powering websites.
Write your code so that you can change out the database back end easily. Then, when Access fails, migrate your data to a real DB, or MySQL if you have to.
Here are some Microsoft Web server stress tools
For the record, it is possible to send only SQL commands to the database and not keep an active connection open, thereby allowing far more than 6 or 7 connections at once; but the fact is that Access just isn't meant to do it. So the "it works fine" point is like saying it is fine to clean your floor with sticky tape: it works, but it isn't the right tool for the job.
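That connectionless pattern can be sketched roughly like this: open a connection just in time, run one statement, and release it immediately. This is a sketch, not the poster's actual code; the Access ODBC DSN in the comment is illustrative, and an in-memory SQLite DSN stands in so the example is self-contained.

```php
<?php
// Open just-in-time, run one statement, disconnect: no connection is held
// between requests, so Access's handful-of-connections ceiling is avoided.
// The DSN used below is in-memory SQLite so the sketch runs anywhere;
// against Access it would be something like (illustrative only):
//   'odbc:Driver={Microsoft Access Driver (*.mdb)};Dbq=C:\\site\\data.mdb'
function runQuery(string $dsn, string $sql, array $params = []): array
{
    $pdo = new PDO($dsn);
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $stmt = $pdo->prepare($sql);
    $stmt->execute($params);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    $pdo = null; // release the connection immediately
    return $rows;
}
```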
UPDATED ANSWER TO UPDATED QUESTION:
Really, the key here is the separation of data access in your code. You should be able to reproduce more or less the same database structure in any number of DBMSs. Things can get complicated, but a map of tables should be universal. Then, should Access not work out, switch to a different database.
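One way to sketch that separation, with PDO and invented class and table names (nothing here is from the original site): code against a small repository interface, and let configuration decide which engine backs it.

```php
<?php
// A tiny repository interface keeps the rest of the code ignorant of the
// DBMS. Class, table, and DSN names are illustrative assumptions.
interface ProductRepository
{
    public function findAll(): array;
}

class PdoProductRepository implements ProductRepository
{
    public function __construct(private PDO $pdo) {}

    public function findAll(): array
    {
        return $this->pdo->query('SELECT id, name FROM products')
                         ->fetchAll(PDO::FETCH_ASSOC);
    }
}

// Swapping Access for SQL Server (or anything else PDO speaks) is then a
// one-line config change, e.g. (illustrative DSNs):
//   $pdo = new PDO('odbc:Driver={Microsoft Access Driver (*.mdb)};Dbq=site.mdb');
//   $pdo = new PDO('sqlsrv:Server=dbhost;Database=site', $user, $pass);
```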
Access CAN be used on fairly high-traffic sites. With the SQL-statement-only routines, I was able to build an e-commerce site that did a couple million a year in sales and had 60K visitors a month. It is possible, but maybe not ideal. Those aren't big numbers, but they are the biggest for any site I have been a part of.
Keep Access if the IT Manager is too busy to maintain another server, or unwilling to spend time configuring one. Ultimately, guessing accomplishes nothing, and testing tells you everything you need to know. Test, and make decisions based on the results.

Here's a document from Microsoft that might help:
Access vs. Sql Server
Another Article.
My own personal thoughts: Access has no place in an environment that could scale beyond two or three concurrent connections. Would you use Excel as the back end?

Your manager has stated the reason he wants to use Access. Are you responsible for designing an alternative? Do you have any reason to think you will benefit from proving your manager wrong? What is your personal upside in this conversation? Are you certain that Access won't be "good enough"? Is the redesigned site going to have heavier or different loads (i.e. more users, or a less efficient design)? I'm not sure you want to be arguing with your manager that you can't implement something that does as well as the old design.
It's going to be a lot easier to let the project fail (if you expect that will be the outcome) and rescue it with SQL Server, than to get your manager to concede that you understand the situation better than he does.

Don't forget that for something as small as most Access Databases, you can use SQL Server Express Edition, which is free, so it won't cost you anything.

I found this nice quote as well:
It is not recommended to use an Access database in a production web application. For production purposes, consider connecting to a Microsoft™ SQL Server database using the SqlDataSource or ObjectDataSource controls.
http://quickstarts.asp.net/QuickStartv20/aspnet/doc/ctrlref/data/accessdatasource.aspx

Don't argue, benchmark it. Real data should trump rhetoric (in a rational world, at least! ;-)
Set up test boxes with the two alternatives and run 'em hard. (If you're exposing web services, you can use a tool such as SoapUI for automated stress testing. There are lots of other tools in this space.) Collect stats and analyze them to determine the tradeoffs.

One simple reason to use SQL Server instead of a Microsoft Access Database: The MS Access DB can result in a bottleneck if the DB will be used heavily by a lot of users.

Licensing, for one. I doubt he wants to have hundreds of Office licenses (one for each end user that connects to the site). SQL Server has licensing that allows multiple simultaneous connections without specific per-connection licenses.
Not to mention scalability and reliability issues. SQL Server is designed to be used and administered in a 24/7 environment; Access is not.
SQL can scale to squillions of simultaneous connections, Access cannot.
SQL can backup while operating, Access cannot.
SQL is designed as a highly robust data repository, Access is not designed with the same requirements in mind.

Access doesn't deal with multiple users very well (if at all). This means that if you have more than one person trying to access, or especially update, your site, it is very likely to die, or at best be very slow.
There's much better tooling around SQL Server (LINQ to SQL, Entity Framework, or any number of ORMs).
SQL Server Express is a much better choice than Access for a web site back end, and it's free.

Consider the option that maybe he is right. If it is working fine with Access, just leave it as it is. If there are scalability problems in the future (the site being used by more than one user simultaneously), then it is his problem, not yours.
Also consider SQLite; it may be better than Access.

Just grab a test suite (or just throw one together), and:
compare the time taken to create a DB with 1,000,000 entries.
search for an entry in the DB.
vacuum the DB.
delete the DB.
do a couple of the operations you think will be performed most often on the DB, a couple of times.
And do it in front of him to compare (write a script). My guess is that either your IT manager is joking, or the site you are working on is non-critical and he doesn't want to allocate resources (including you).
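A rough sketch of such a benchmark script, hedged: it uses SQLite as a stand-in engine (the same timings could be taken against Access through an ODBC DSN), and a small row count so it finishes quickly; scale ROWS up for a real comparison.

```php
<?php
// Time the operations from the list above: bulk insert, search, vacuum,
// delete. SQLite in-memory is used here only so the sketch is self-contained.
const ROWS = 10000;

function timed(string $label, callable $fn): float
{
    $t0 = microtime(true);
    $fn();
    $elapsed = microtime(true) - $t0;
    printf("%-10s %.4fs\n", $label, $elapsed);
    return $elapsed;
}

$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE entries (id INTEGER PRIMARY KEY, payload TEXT)');

timed('insert', function () use ($pdo) {
    $pdo->beginTransaction(); // one transaction, or per-row commits dominate
    $stmt = $pdo->prepare('INSERT INTO entries (payload) VALUES (?)');
    for ($i = 0; $i < ROWS; $i++) {
        $stmt->execute(["row-$i"]);
    }
    $pdo->commit();
});

timed('search', function () use ($pdo) {
    $stmt = $pdo->prepare('SELECT id FROM entries WHERE payload = ?');
    $stmt->execute(['row-' . (ROWS - 1)]);
    $stmt->fetch();
});

timed('vacuum', fn () => $pdo->exec('VACUUM'));
timed('delete', fn () => $pdo->exec('DELETE FROM entries'));
```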

MS Access is for 1 desk, 1 user! I spent a year on a previous project detaching an application (growing to enterprise size in terms of users) from Access because of its strange locking behavior and awful performance. SQL Server Express Edition is a good starting point, as echoed in previous posts.


Should I use WebSockets in a social network app, or will PHP/AJAX suffice?

I would like your opinion on a project. The way I started it is slowly revealing many gaps and problems which, now or in the future, will create big issues.
The system will have a notification system, a friends system, a (private) message system, and other systems of that kind. I have built all of these with jQuery and PHP, using mysqli round-trips. Which brings me to what the title says.
For 3-4 online users, doing all of this with simple PHP code and POST and GET requests works amazingly well. The thing is, when I have many more users, what can I do to make better use of the server's resources? So I started looking around more and more, and found things like socket.io.
I just want someone who knows more to tell me what would be best to look into. Consider how the notification update system works now: jQuery with POST, repeated every 3-5 seconds; but that is by no means right.
If your goal is to set up a highly scalable notification service, then probably not.
That's not a strict no, because there are other factors than speed to consider, but when it comes to speed, read on.
WebSockets do give the user a consistently open, bi-directional connection that is, by its very nature, very fast. Also, the client doesn't need to request new information; it is sent when either party deems it appropriate to send.
However, the time savings that the connection itself gives is negligible in terms of the costs to generate the content. How many database calls do you make to check for new notifications? How much structured data do you generate to let the client know to change the notification icon? How many times do you read data from disk, or from across the network?
These same costs do not go away when using any WebSocket server; it just makes one mitigation technique more obvious: Keep the user's notification state in memory and update it as notifications change to prevent costly trips to disk, to the database, and across the server's local network.
Known proven techniques to mitigate the time costs of serving quickly changing web content:
Reverse proxy (Varnish-Cache)
Sits on port 80 and acts as a very thin web server. If a request is for something that isn't in the proxy's in-RAM cache, it sends the request on down to a "real" web server. This is especially useful for serving content that very rarely changes, such as your images and scripts, and has edge-side includes for content that mostly remains the same but has some small element that can't be cached... For instance, on an e-commerce site, a product's description, image, etc., may all be cached, but the HTML that shows the contents of a user's cart can't, so is an ideal candidate for an edge-side include.
This will help by greatly reducing the load on your system, since there will be far fewer requests that use disk IO, which is a resource far more limited than memory IO. (A hard drive can't seek for a database resource at the same time it's seeking for a cat jpeg.)
In Memory Key-Value Storage (Memcached)
This will probably give the most bang for your buck, in terms of creating a scalable notification system.
There are other in-memory key-value storage systems out there, but this one has support built right into PHP, not just once, but twice! (In the grand tradition of PHP core development, rather than fixing a broken implementation, they decided to consider the broken version deprecated without actually marking that system as deprecated and throwing the appropriate warnings, etc., that would get people to stop using the broken system. mysql_ v. mysqli_, I'm looking at you...) (Use the memcached version, not memcache.)
Anyways, it's simple: When you make a frequent database, filesystem, or network call, store the results in Memcached. When you update a record, file, or push data across the network, and that data is used in results stored in Memcached, update Memcached.
Then, when you need data, check Memcached first. If it's not there, then make the long, costly trip to disk, to the database, or across the network.
Keep in mind that Memcached is not a persistent datastore... That is, if you reboot the server, Memcached comes back up completely empty. You still need a persistent datastore, so still use your database, files, and network. Also, Memcached is specifically designed to be a volatile storage, serving only the most accessed and most updated data quickly. If the data gets old, it could be erased to make room for newer data. After all, RAM is fast, but it's not nearly as cheap as disk space, so this is a good tradeoff.
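The check-cache-first routine described above (often called cache-aside) can be sketched like this. The helper name is made up; with the memcached extension, $cache would be a \Memcached instance (addServer('127.0.0.1', 11211), TTL in seconds), and note that its get() returns false on a miss, so caching literal false values needs getResultCode() in real code.

```php
<?php
// Cache-aside: try the fast in-memory store first, fall back to the slow
// source (DB/disk/network) only on a miss, then prime the cache.
// $cache can be anything with get()/set() semantics.
function cachedFetch(object $cache, string $key, int $ttl, callable $slowFetch)
{
    $value = $cache->get($key);
    if ($value !== false) {          // Memcached::get() returns false on a miss
        return $value;               // hit: no trip to disk, DB, or network
    }
    $value = $slowFetch();           // the long, costly trip
    $cache->set($key, $value, $ttl); // prime the cache for the next request
    return $value;
}
```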
Also, no key-value storage systems are relational databases. There are reasons for relational databases. You do not want to write your own ACID guarantee wrapper around a key-value store. You do not want to enforce referential integrity on a key-value store. A fancy name for a key-value store is a No-SQL database. Keep that in mind: You might get horizontal scalability from the likes of Cassandra, and you might get blazing speed from the likes of Memcached, but you don't get SQL and all the many, many, many decades of evolution that RDBMSs have had.
And, finally:
Don't mix languages
If, after implementing a reverse proxy and an in-memory cache you still want to implement a WebSocket server, then more power to you. Just keep in mind the implications of which server you choose.
If you want to use Socket.io with Node.js, write your entire application in Javascript. Otherwise, choose a WebSocket server that is written in the same language as the rest of your system.
Example of a 1 language solution:
<?php // ~/projects/MySocialNetwork/core/users/myuser.php
class MyUser {
    public function getNotificationCount() {
        // Note: Don't go to the DB first, if we can help it.
        // 0 is false-ish, so explicitly check for no result.
        if (($notifications = $memcachedWrapper->getNotificationCount($this->userId)) !== null)
            return $notifications;
        $userModel = new MyUserModel($this->userId);
        return $userModel->getNotificationCount();
    }
}
...
<?php // ~/projects/WebSocketServerForMySocialNetwork/eventhandlers.php
function websocketTickCallback() {
    foreach ($connectedUsers as $user) {
        if ($user->getHasChangedNotifications()) {
            $notificationCount = $user->getNotificationCount();
            $contents = json_encode(array('Notification Count' => $notificationCount));
            $message = new WebsocketResponse($user, $contents);
            $message->send();
            $user->resetHasChangedNotifications();
        }
    }
}
If we were using socket.io, we would have to write our MyUser class twice, once in PHP and once in Javascript. Who wants to bet that the classes will implement the same logic in the same ways in both languages? What if two developers are working on the different implementations of the classes? What if a bugfix gets applied to the PHP code, but nobody remembers the Javascript?

Adopting my current system to include real time notifications

I have a PHP system that does everything a social media platform does, i.e. adding comments, uploading images, adding objects, logins, sessions, etc., storing all interactions in a MySQL database. So I've got a pretty good infrastructure to build on.
The next stage of the project is to develop the system so that notifications are sent to the "Networks" of "Contacts" associated with one another, like Facebook's notification system, e.g. "Chris has just commented on object N."
I'm looking at implementing this system for a lot of users, 10,000+, so it has to be reliable. I've researched the Facebook approach, which turned up techniques such as memcache, sockets & hashing.
Are there any systems that can easily be adapted for this functionality? I could do with a quick, reliable implementation.
P.S. One thought I had was just querying the database every 5 seconds with jQuery, Ajax & PHP, e.g. "select everything that has happened in the last 5 seconds", but that's stupid; it would exhaust the server & database, right?
I've seen this website & this article; can anyone reflect on these and tell me the best approach? I'm hesitant about which path to follow.
Thanks
This is not possible with just pure vanilla PHP/MySQL. What you can do is set MySQL triggers (http://dev.mysql.com/doc/refman/5.0/en/triggers.html) on your data, and then use the UDF function sys_exec (Which you will need to install on your MySQL server) to run the notification php script. See this post: Invoking a PHP script from a MySQL trigger
If you can get this set up it should be pretty reliable and fast.

How to see the bandwidth usage of my Flash application?

I'm developing an online sudoku game, with ActionScript 3.
I made the game and asked people to test it. It works, but the website goes down constantly. I'm using 000webhost, and I suspect it is a bandwidth usage precaution.
My application updates the current puzzle by parsing a JSON string every 2 seconds, and of course when players enter a number, it sends a $_GET request to update the MySQL database. Do you think this causes a lot of data traffic?
How can I see the bandwidth usage value?
And how should I decrease the data traffic between Flash and MySQL (or PHP, really)?
Thanks!
There isn't enough information for a straight answer, and if there were, it'd probably take more time to figure out. But here's some directions you could look into.
Bandwidth may or may not be an issue. There are many things that could happen, you may very well put too much strain on the HTTP server, run out of worker threads, have your MySQL tables lock up most of the time, etc.
What you're doing indeed sounds like it's putting a lot of strain on the server. Monitoring this client side could be inefficient, you need to look at some server-side values, but you generally don't have access to those unless you have at least a VPS.
Transmitting data as JSON is easier to implement and debug, but a more efficient way to send data (binary, instead of strings) is AMF: http://en.wikipedia.org/wiki/Action_Message_Format
One PHP implementation for the server side part is AMFPHP: http://www.silexlabs.org/amfphp/
Alternatively, your example is a good use case for the remote shared objects in Flash Media Server (or similar products). A remote shared object is exactly what you're doing with MySQL: it creates a common memory space where you can store whatever data and it keeps that data synchronised with all the clients. Automagically. :)
You can start from here: http://livedocs.adobe.com/flashmediaserver/3.0/hpdocs/help.html?content=00000100.html

What might be the best way to benchmark a users PC, PHP or JS?

PHP - Apache with Codeigniter
JS - typical with jQuery and in house lib
The Problem: determining (without forcing a download) a user's PC ability and/or virus issues.
The Why: we put out software that is mostly used in clinics, but can be used from home. However, we need to know, before users go to our main site, whether their PC can handle the demands of our web-based, browser-served software.
Progress: so far, we've come up with a decent way to test download speed, but that's about it.
What we've done: in PHP we create about a 2.5Gb array of data to send to the user in a view; from there, the view calculates the time it took to get the data, and then subtracts the PHP benchmark from this time to get a point of reference for upload/download time. This is not enough.
Some of our (local) users have been found to have "crappy" PCs or virus infections, and this can lead to two problems: (1) they crash in the middle of performing a task in our program, or (2) their viruses could be trying to inject into our JS, creating a bad experience that may make us look bad to the average (uneducated on how this stuff works) user, thus hurting "our" integrity.
I've done some googling around, but most plug-ins or advice forums/blogs I've found simply give ways to benchmark the speed of your JS, and that is simply not enough. I need a simple bit of code (with no visual interface included; that was the problem with one nice JS lib that did this, it would have taken days to remove all of the author's personal visual code) that will allow me to test the following three things:
the user's data transfer rate (I think we have this covered, but if a better method is presented I won't rule it out)
the user's processing speed: how fast the computer is in general
a possible test for infection by malware, adware, or whatever else may be harmful to the user's experience
What we are not looking to do: repair their PC! We don't care if they have problems; we just don't want to lead them into our site if they have too many. If they can't do it from home, they will be recommended to go to their nearest local office to use this software "in house", so to speak.
Further Explanation
We know you can't test the user-side stuff with PHP; we're not that stupid. PHP is mentioned because it can still be useful, either in determining connection speed or in delivering a script that may do what we want. Also, this is not software for just anyone on the net to sign up and use; if you find it online, unless you are affiliated with a specific clinic and have a login name and whatnot, you're not meant to use the site, and if you get in otherwise, it's illegal. I can't really reveal a whole lot of information yet, as the site is not live yet. What I can say is that it's mostly used by clinics/offices for customers to perform a certain task. If they don't have the time/transport/or otherwise and need to do it from home, then the option is available. However, if their home PC is not "up to snuff", it will be nothing but a problem for them and turn the two-hour task they are meant to perform into a 4-6 hour nightmare. Hence the reason I'm at one of my favorite question sites, asking whether anyone has had experience with this before and may know a good way to test the user's PC, so they can have the best possible resolution: either do it from home (as their PC is suitable) or be told they need to go to their local office. Hopefully this clears things up enough that we can refrain from the "sillier" answers. I need a REAL, viable solution and/or suggestions, please.
PHP has (virtually) no access to information about the client's computer. Data transfer can just as easily be limited by network speed as computer speed. Though if you don't care which is the limiter, it might work.
JavaScript can reliably check how quickly a set of operations are run, and send them back to the server... but that's about it. It has no access to the file system, for security reasons.
EDIT: Okay, with that revision, I think I can offer a real suggestion - basically, compromise. You are not going to be able to gather enough information to absolutely guarantee one way or another that the user's computer and connection are adequate, but you can get a general idea.
As someone suggested, use a 10MB-20MB file and several smaller ones to test actual transfer rate; this will give you a reasonable estimate. Then, use JavaScript to test their system speed. But don't just stick with one test, because that can be heavily dependent on browser. Do the research on what tests will best give an accurate representation of capability across browsers; things like looping over arrays, manipulating (invisible) elements, and complex math. If there is a significant discrepancy between browsers, then use different thresholds; PHP does know what browser they're using, so you can give the system different "good enough" ratings depending on that. Limiting by version (like, completely rejecting IE6) may help in that.
Finally... inform the user. Gently. First let them know, "Hey, this is going to run a test to see if your network connection and computer are fast enough to use our system." And if it fails, tell them which part, and give them a warning. "Hey, this really isn't as fast as we recommend. You really ought to go down to the local clinic to perform this task; if you choose to proceed, it may take a lot longer than intended." Hopefully, at that point, the user will realize that any issues are on them, not on you.
What you've heard is correct; there's no way to effectively benchmark a machine from JavaScript, especially because the JavaScript engine mostly depends on the actual browser the user is using, amongst numerous other variables (no file system permissions, etc.). A computer is hardly going to let a browser's sub-process stress itself anyway; the browser would simply crash first. PHP is obviously out, as it's server-side.
Sites like System Requirements Lab have the user download a Java applet to run in its own scope.

Multi-tier applications with PHP?

I am relatively new to PHP, but an experienced Java programmer in complex enterprise environments with SOA architecture and multi-tier applications. There, we'd normally implement business applications with the business logic on the middle tier.
I am programming an alternative currency system, which should be easily deployable and customizable by individuals and communities; it will be open source. That's why PHP/MySQL seems the best choice for me.
Users have accounts, and they get a balance. Also, the system calculates prices depending on total services delivered and total available assets.
This means that on a purchase, a series of calculations happens; the balance and the totals get updated. These are derived figures, something normally not put into a database.
Nevertheless, I resorted to putting triggers and stored procedures into the DB, so that none of these updates are made in the PHP code.
What do people think? Is that a good approach? My experience suggests this is not the best solution, and prompts me to implement a middle tier; however, I would not even know how to do that. On the other hand, what I have so far with stored procs seems to me the most appropriate.
I hope I made my question clear. All comments appreciated. There might not be a "perfect" solution.
As is the tendency these days, getting away from the DB is generally a good thing. You get easier version control and you get to work in just one language. More than that, I feel that stored procedures are a hard way to go. On the other hand, if you like that stuff and you feel comfortable with SPs in MySql, they're not bad, but my feeling has always been that they're harder to debug and harder to handle.
On the triggers issue, I'm not sure whether that's necessary for your app. Since the events that trigger the calculations are invoked by the user, those things can happen in PHP, even if the user is redirected to a "waiting" page or another page in the meantime. Obviously, true triggers can only be done on the DB level, but you could use a daemon thread that runs a PHP script every X seconds... Avoid this at all costs and try to get the event to trigger from the user side.
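For the balance-and-totals updates from the question, moving the logic out of triggers usually means wrapping the statements in one PDO transaction so they still land (or fail) together. A sketch, with made-up table and column names:

```php
<?php
// Do the derived-figure updates in PHP instead of DB triggers: one
// transaction keeps integrity without stored procedures. Table and
// column names are illustrative, not from the original post.
function recordPurchase(PDO $pdo, int $userId, float $amount): void
{
    $pdo->beginTransaction();
    try {
        $pdo->prepare('INSERT INTO purchases (user_id, amount) VALUES (?, ?)')
            ->execute([$userId, $amount]);
        $pdo->prepare('UPDATE accounts SET balance = balance - ? WHERE user_id = ?')
            ->execute([$amount, $userId]);
        $pdo->prepare('UPDATE totals SET delivered = delivered + ?')
            ->execute([$amount]);
        $pdo->commit();   // all three changes land together...
    } catch (Throwable $e) {
        $pdo->rollBack(); // ...or none of them do
        throw $e;
    }
}
```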
All of this said, I wanted to plug my favorite solution for the data access layer on PHP: Doctrine. It's not perfect, but PHP being what it is, it's good enough. Does most of what you want, and keeps you working with objects instead of database procedures and so forth.
Regarding your title, multiple tiers are, in PHP, totally doable, but you have to do them and respect them. PHP code can call other PHP code, and it is now (5.2+) nicely OO and all that. Do make sure to ignore the fact that a lot of PHP code you'll see around is total crap and does not even use methods, let alone tiers, and decent OO modelling. It's all possible if you want to do it, including doing your own (or using an existing) MVC solution.
One issue with pushing lots of features to the DB level, instead of a data abstraction layer, is that you get locked into the DBMS's feature set. Open source software is often written so that it can be used with different DBs (certainly not always). It's possible that down the road you will want to make it easy to port to postgres or some other DBMS. Using lots of MySQL specific features now will make that harder.
There is absolutely nothing wrong with using triggers and stored procedures and other features that are provided by your DB server. It works and works well, you are using the full potential of the DB, instead of simply relegating it to being a simplistic data store.
However, I'm sure that for every developer on here who agrees with you (and me), there are at least as many who think the exact opposite and have had good experiences with doing that.
Thanks guys.
I was using DB triggers because I thought it might be easier to control transaction integrity that way. As you may have realized, I am a developer who is also trying to get a grip on DB knowledge.
Now, I see there is the solution to spread the php code on multiple tiers, not only logically but also physically by deploying on different servers.
However, at this stage of development, I think I'll stick with my triggers/SP solution, as it doesn't feel all that wrong. Distributing across multiple layers would require me to redesign my app substantially.
Also, thinking open source: if someone likes the alternative money system, it might be easier for people to just change the layout for their requirements, while I would not need to worry about calculations going wrong if people touch the PHP code.
On the other hand, of course, I agree that db stuff might get very hard to debug.
The DB init scripts are in source control, as are the php files :)
Thanks again
