I want extra security at a particular point in my web app, so I want to lock the database (SQL Server 2005). Any suggestions, or is this even necessary with SQL Server?
Edit to the question:
The query is failing silently, with no error messages logged, and it does not occur inside a transaction.
Final Solution:
I never was able to solve the problem; what I wound up doing was switching to MySQL and using a transaction-level query here. This was not the main or even a primary reason to switch. I had been having problems with SQL Server, and the move let me run our CMS and various other tools on the same database. Previously we had both a SQL Server and a MySQL database running the site. The port was a bit time consuming, but in the long run I feel it will work much better for the site and the business.
I suppose you have three options.
Set user permissions so that user x can only read from the database.
Set the database into single-user mode so only one connection can access it:
sp_dboption 'myDataBaseName', 'single user', 'TRUE'
Set the database to read-only:
sp_dboption 'myDataBaseName', 'read only', 'TRUE'
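For reference, sp_dboption is deprecated in SQL Server 2005 in favor of ALTER DATABASE, and option 1 is usually done with a fixed database role. A minimal sketch using the placeholder database name above (the user name x is hypothetical):

-- option 1: restrict a hypothetical user x to reads via the db_datareader role
EXEC sp_addrolemember 'db_datareader', 'x';
-- options 2 and 3, via ALTER DATABASE instead of the deprecated sp_dboption
ALTER DATABASE myDataBaseName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE myDataBaseName SET READ_ONLY;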
This may seem like an obvious question, but we have a PHP/MySQL app that runs on a Windows Server 2008 machine. The server has about 10 different sites running on it in total. Admin options on the site in question allow an administrator to run reports (through the site) which are huge and can take about 10 minutes in some cases. These reports are huge MySQL queries that display the data on screen. When these reports are running, the entire site goes slow for all users. So my questions are:
Is there a simple way to allocate server resources so if a (website) administrator runs reports, other users can still access the site without performance issues?
Even though running the report kills the website for all users of that site, it doesn't affect other sites on the same server. Why is that?
As mentioned, the report can take about 10 minutes to generate - is it bad practice to make these kinds of reports available on the website? Would these typically be generated by overnight scheduled tasks?
Many thanks in advance.
The load you're putting on the server most likely has nothing to do with the applications themselves, but with the MySQL tables you are probably slamming. Most people get around this by generating reports during downtime, or by using MySQL replication to maintain a second database used purely for reporting.
I recommend setting up some server monitoring to see what is actually going on. I think New Relic just released Windows versions of its platform, and I believe you can try it out for free for 30 days.
There's the LOW_PRIORITY flag, but I'm not sure whether that would have any positive effect, since it's most likely a table/row locking issue that you're experiencing. You can get an idea of what's going on with the SHOW PROCESSLIST query.
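As a quick sketch (the table and column names below are made up): the process list shows which connections are stuck waiting on locks, and LOW_PRIORITY applies only to writes.

-- see which connections are running or stuck in a "Locked" state
SHOW FULL PROCESSLIST;
-- LOW_PRIORITY delays a write until no readers are touching the table
INSERT LOW_PRIORITY INTO report_cache (report_day, total)
VALUES ('2013-01-01', 42);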
If other websites run fine, it's even more likely that this is due to database locks (causing your web processes to wait for the lock to get released).
Lastly, it's always advisable to run big reporting queries overnight (or whenever server load is minimal). Setting up a read replica (slave) would also help.
I strongly suggest you set up a replicated MySQL server and run large administrator queries (SELECTs only, naturally) on it, to avoid blocking your website!
If there aren't too many transactions per second, you could even run the replica on a desktop computer away from your production server, and thus have an off-site backup of your DB!
Are you 100% sure you have added all the necessary indexes?
You would need an insanely large website to have these kinds of problems unless you are missing indexes. Make sure you have the right indexing, and make sure your join columns are not VARCHAR; matching on strings is much slower than matching on integers.
I have a database with quite a few large tables and millions of records that runs 24/7. It handles loads of activity and automated services without issues, thanks to proper indexing.
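To check, EXPLAIN is the usual tool. A minimal sketch with a hypothetical orders table and account_id column:

-- if "type" shows ALL (full table scan) and no key is used, an index is missing
EXPLAIN SELECT * FROM orders WHERE account_id = 42;
-- add one
ALTER TABLE orders ADD INDEX idx_account (account_id);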
I currently run a Rails app on a single VPS with its own MySQL database. This database also contains the users table. We are currently building a second application which should have the same users: if someone registers for one, they have an account for both.
However, the problem is that this second (PHP) application must be hosted at another location. What are the best practices here? Can I connect directly to the external database without causing a big delay? Should I sync them? Create an API on the first?
I'm searching for the most maintainable method possible. Thank you.
You can allow the second host access to the first MySQL server. As far as I'm aware, the best practice is to create a new user account, give it only the required privileges on the users table, allow access only from the IP or domain of the second host, and use a secure password. You would run this query on the MySQL server:
GRANT ALL ON mydatabase.users TO 'mynewuser'@'123.123.123.123'
IDENTIFIED BY 'mysecurepassword';
Needless to say, you would replace mydatabase.users with your database and table name, 'mynewuser' with the username you want to have access (you can put anything here; the user will be created automatically), '123.123.123.123' with the IP or domain name of your second server, and 'mysecurepassword' with a good, long password, preferably randomly generated.
Your second server would now be able to connect to the MySQL server and have ALL privileges on the mydatabase.users table (you can change that to whatever privileges it actually needs).
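For example, a narrower grant, if the second app only ever reads and writes user rows (same placeholder names as above):

GRANT SELECT, INSERT, UPDATE ON mydatabase.users TO 'mynewuser'@'123.123.123.123';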
MySQL 5.6 GRANT Syntax
Greensql.com on MySQL remote access best practices
To minimize the small performance penalty, I would refrain from creating multiple MySQL connections (use as few as possible, preferably just one). I'm not 100% sure on the following, but I'm pretty sure it's a great idea to reduce the number of separate queries you execute as well. Instead of running 10 separate inserts, run one insert with multiple VALUES groups, e.g.
INSERT INTO mytable (id, name) VALUES
    (0, 'mia'),
    (1, 'tom'),
    (2, 'carl');
I'm sure MySQL prepared statements would also help considerably in reducing the speed penalty.
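For instance, MySQL's server-side PREPARE syntax lets the server parse a statement once and execute it repeatedly; a sketch reusing the hypothetical mytable from above:

-- parse once
PREPARE ins FROM 'INSERT INTO mytable (id, name) VALUES (?, ?)';
-- bind values through user variables and execute
SET @id = 3, @name = 'ana';
EXECUTE ins USING @id, @name;
-- clean up
DEALLOCATE PREPARE ins;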
You can connect to the database from the second application, but the connection delay will always be there.
Another thing to consider is security, as the database will be accessible to the world. You can restrict connections to only the second web server.
To reduce the delay, you have options like memcached or other caches, so that you don't have to hit the database again and again.
You can also set up MySQL replication, with the main database as the master and another MySQL instance on the second application server as a slave.
The second server's MySQL syncs with the master, and all write queries (UPDATE, INSERT, DELETE, etc.) should go to the master. This should reduce the database load.
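A minimal sketch of the commands involved; the hosts, account, password, and binary log coordinates below are all placeholders you would take from your own setup:

-- on the master: create a replication account
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'203.0.113.7' IDENTIFIED BY 'replpass';
-- note the current binary log file and position
SHOW MASTER STATUS;

-- on the slave: point it at the master and start replicating
CHANGE MASTER TO
    MASTER_HOST = '198.51.100.5',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'replpass',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 107;
START SLAVE;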
So the scenario is this:
I have a MySQL database on a local server running Windows Server 2008. The server is only meant to be accessible to users on our network and contains our company's production schedule information. I have what is essentially the same database running on a hosted Linux server, which is meant to be accessible online so our customers can connect to it and update their orders.
What I want to do is a two-way sync of two tables in the database, so that the orders are current in both databases, and a one-way sync from our server to the hosted one for the data in the other tables. The front end to the database is written in PHP. I'll describe what I have so far, and I would appreciate it if people could let me know whether I am on the right track or barking up the wrong tree, and hopefully point me in the right direction.
My first idea is to make (at the end of the PHP scripts that generate changes to the orders tables) an export of the changes that have been made, perhaps using SELECT ... INTO OUTFILE with a WHERE clause on the relevant account, or something similar. This would keep the file small, rather than exporting the entire orders table. What I am hung up on is how to (A) export this as an SQL file rather than a CSV, (B) include information about what has been deleted as well as what has been inserted, and (C) fetch this file on the other server and execute the SQL statements.
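For reference, the export side might look like this sketch (the table and column names are hypothetical; note that MySQL writes the file on the database server itself, and the output is delimited text rather than executable SQL, which is exactly problem (A)):

SELECT id, account, qty
    INTO OUTFILE '/tmp/orders_delta.csv'
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    FROM orders
    WHERE account = 42;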
I am looking into SSH and PowerShell currently, but can't seem to form a solid vision of exactly how this will work. I am looking into cron jobs and Windows scheduled tasks as well. It would be best if the updates simply occurred whenever there was a change, rather than on a schedule, to keep the databases synced in real time, but I can't quite figure that out. I'd want to run the scheduled task/cron job at least once every few minutes, though I guess all it would need to do is check whether there are any dump files that need to be pushed to the opposing server, not necessarily sync anything if nothing has changed.
Has anyone ever done something like this? We are talking about changing/adding/removing from 1 to 160 rows in the tables at a time. I'd love to hear people's thoughts as I continue researching my options. Thanks.
Also, just to clarify, I'm not sure if one of these is really a master or a slave. There isn't one that's always the accurate data, it's more the most recent data that needs to be in both.
One more note:
Another thing I am thinking about now is to add, at the end of the order-updating script on one side, another config/connect script pointing to the other server's database, and then rerun the exact same queries, since the databases have identical structures. Now that just sounds too easy... Thoughts?
You may not be aware that MySQL itself can be configured with databases on separate servers that opportunistically sync to each other. See here for some details; also, search around for MySQL ring replication. The setup is slightly brittle and will require you to learn a bit about MySQL replication. Or you can build a cluster; much higher learning curve but less brittle.
If you really want to roll it yourself, you have quite an adventure in design ahead of you. The biggest problem you have to solve is not how to make it work, it's how to make it work correctly after one of the servers goes down for an hour or your DSL modem melts or a hard drive fills up or...
Running a query on both a local and a remote server can be a problem if the connection breaks. It is better to store each query locally in a file named by date and hour, such as DD-MM-YYYY-HH.sql, and then send the data every hour, once the hour has expired. The update period could be reduced to 5 minutes, for example.
That way, if the connection breaks, re-establishing it picks up all the leftover files.
At the end of each file, insert a CRC to verify the content.
I'm maintaining an inherited site built on Drupal. We are currently experiencing "too many connections" errors from the database.
In the /includes/database.mysql.inc file, @mysql_connect($url['host'], $url['user'], $url['pass'], TRUE, 2) (mysql_connect() documentation) is used to connect to the database.
Should $new_link = TRUE be used? My understanding is that it will "always open a new link." Could this be causing the "too many connections" errors?
Editing core is a no no. You'll forget you did, upgrade the version, and bam, changes are gone.
Drupal runs several high-performance sites without problems. For instance, the Grammy Awards site switched to Drupal this year, and for the first time the site didn't go down during the ceremony! Some configuration needs tweaking in your setup, probably MySQL.
Edit your my.cnf and restart your MySQL server (/etc/my.cnf on Fedora, RHEL, and CentOS; /etc/mysql/my.cnf on *buntu):
[mysqld]
max_connections=some-number-here
Alternatively, to try the change first without restarting the server, log in to MySQL and run:
SHOW VARIABLES LIKE 'max_connections'; -- this tells you the current number
SET GLOBAL max_connections = some-number-here;
Oh, and like another person said: DO. NOT. EDIT. DRUPAL. CORE. It never pays off: it makes keeping your site updated an inflexible headache and will bring you a world of hurt.
MySQL, just like any RDBMS out there, limits the number of connections it accepts at any one time. The my.cnf configuration file specifies this value for the server under the max_connections setting. You can change this setting, but there are real limits depending on the capacity of your server.
Persistent connections may help reduce the time it takes to connect to the database, but they have no impact on the total number of connections MySQL will accept.
Connect to MySQL and use SHOW PROCESSLIST. It will show you the currently open connections and what they are doing. You might have multiple connections sitting idle, or queries that run far too long. For idle connections, it might just be a matter of making sure your code does not keep connections open when it doesn't need them. For long-running queries, there may be parts of your code that need optimizing so the queries don't take so long.
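A quick sketch of what that inspection looks like (the connection Id below is made up):

-- list every open connection, its state, and the query it is running
SHOW FULL PROCESSLIST;
-- terminate a runaway or idle connection by its Id
KILL 12345;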
If all connections are legitimate, you simply have more load than your current configuration allows for. If your MySQL load is low even with the current connection count, you can increase the limit a little and see how it evolves.
If you are not on a dedicated server, you might not be able to do much about this. It may just be someone else's code causing trouble.
Sometimes those failures are just temporary. When a connection fails, you can simply retry it a few milliseconds later. If it still fails, it might be a real problem, and stopping the script is the right thing to do. Don't put the retry in an infinite loop (seen it before; terrible idea).
FYI: using persistent connections with Drupal is asking for trouble. No one uses them, as far as I know.
The new_link parameter only has an effect if you make multiple calls to mysql_connect() during one request, which is probably not the case here.
I suspect it is caused by too many users visiting your site simultaneously, because a new connection to the DB is made for each visitor.
If you can confirm that this is the case, mysql_pconnect() might help, because your problem is not the stress on your database server but the number of connections. You should also read Persistent database connections to see whether it is applicable to your webserver setup, if you choose to go this route.
We write most of our websites in PHP and use MySQL database connections routinely. We are currently encountering a major performance issue on our dedicated server: webpages load very slowly, and SSH'ing into the machine takes forever. We have restarted it a few times, and after a few minutes the problem appears again.
Our web host (MidPhase) says it could be related to a DoS attack, and they are going to place our dedicated server behind CiscoGuard for 24 hours and check our server logs to verify whether that is the case.
I'm concerned that we may have some poorly coded PHP scripts that are being exploited.
How would one check server-wide for problems that could be caused by PHP/SQL injection exploits?
Thank you,
Tegan
I would check the access logs for unusual requests (especially ones indicating SQL injection, or massive numbers of requests to the same URLs). Enabling MySQL's slow query log can also be useful, since it will let you see any heavy query, which can indicate either someone dumping your DB or your own code performing poorly.
Consider adjusting the slow-query time threshold (default 10 seconds) so the log stays valuable: neither empty nor bloated with queries.
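A sketch of turning it on at runtime (MySQL 5.1 or later; the 2-second threshold is just an example):

-- enable the slow query log and lower the threshold
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 2;
-- confirm the settings
SHOW VARIABLES LIKE 'slow_query%';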
Using mtop to watch MySQL's performance in real time may be helpful too.
Assuming you use a LAMP setup, I would start with something like
$ top
or
$ ps aux
to see which processes are using lots of resources. It could be PHP/MySQL, but it could also be a mail server or a spam filter (just an example, if you are running those on the same server).
I suggest simply going through all the PHP and making sure that queries are escaped (or use parameter binding), and also that you're filtering for possible XSS attacks.