I am in the process of setting up my development environment and just need to set up the MySQL database. I have a .sql file and I'd like to import the information from it into a local database in PhpStorm.
I have the .sql file in my main project folder, and when I dragged it over to the Database tool window it showed up as a DDL data source.
My question is: How do I turn that into an actual database that I can run off of localhost?
There is no database server in PhpStorm. You need to set up a local server and then connect PhpStorm to that database.
If you are a beginner on Windows, you cannot go wrong with XAMPP.
PhpStorm can read the DDL, but it cannot store any data itself.
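For example, once XAMPP is running, you could create a database and import your .sql file from a Windows command prompt (the database and path names here are placeholders):

REM Create an empty database (XAMPP's default root user has no password)
C:\xampp\mysql\bin\mysql.exe -u root -e "CREATE DATABASE mydb"

REM Import the .sql file into it
C:\xampp\mysql\bin\mysql.exe -u root mydb < C:\path\to\project\dump.sql

After that, add a MySQL data source in PhpStorm pointing at localhost:3306 and the mydb database.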
Hope that helps.
I have a MySQL database (managed through phpMyAdmin) which is controlled by some PHP files kept in c:/xampp/htdocs. Now I want to move the whole setup to another computer. An obvious way is to export/import the database and copy/paste the PHP files. Is it possible to make a setup file so that when we run it on another computer, the whole setup gets installed there (the way we install software)?
Excuse me if it is a foolish question.
Whatever you do, you should use git (http://git-scm.com/) to keep your files in sync and under version control.
With git you can easily clone the PHP files onto your second computer.
You should also add *.log, .lock, cache and user-data entries to your .gitignore file, as well as various OS files (Thumbs.db, .DS_Store, etc.).
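A minimal .gitignore along those lines might look like this (adjust the cache and user-data paths to your project):

*.log
*.lock
cache/
user-data/
Thumbs.db
.DS_Store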
Getting an identical webserver up is trickier, though... Here Vagrant (https://www.vagrantup.com/) can be of great use, as it lets you "create and configure lightweight, reproducible, and portable development environments."
For replicating databases, you can use Vagrant too...
Take a look at: http://scotch.io/tutorials/get-vagrant-up-and-running-in-no-time
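As a starting point, a minimal Vagrantfile might look like this (the box name and provisioning script are only examples):

# Vagrantfile -- example sketch; pick a box and provisioner that match your stack
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # Forward the guest's port 80 so the site is reachable at http://localhost:8080
  config.vm.network "forwarded_port", guest: 80, host: 8080
  # A shell script that would install Apache/PHP/MySQL and import your DB dump
  config.vm.provision "shell", path: "provision.sh"
end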
I just started out with CakePHP and am trying to configure a plugin.
The plugin requires a table and contains a schema.php file with a class describing that table. From what I gather, I have to run a command in my server's CLI that will build a table according to this schema.
But my server doesn't seem to offer a CLI (it's cPanel, and my website is hosted by a good but cheap host). I tried to access the server using PuTTY, but the connection is aborted when I try.
However, I do have phpMyAdmin, where I can run SQL commands. So if I can 'convert' my schema.php into the appropriate SQL statements, I can copy those manually and run them. Is there an easy way to do this?
(I know, the fastest solution is probably to just create the table by hand, but that would be boring.)
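For illustration, the kind of definition I mean looks something like this (simplified, with made-up field names; the plugin's actual schema.php differs):

// Simplified CakePHP schema table definition (hypothetical)
public $comments = array(
    'id' => array('type' => 'integer', 'null' => false, 'key' => 'primary'),
    'body' => array('type' => 'text', 'null' => true),
    'created' => array('type' => 'datetime', 'null' => true),
);

which I would want to turn into something like:

CREATE TABLE comments (
    id INT NOT NULL AUTO_INCREMENT,
    body TEXT NULL,
    created DATETIME NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;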
Thank you!!
Scipio
I am currently developing a medium-sized web application using PHP and need to use some kind of version control. I also need to have the master copy of the code running in the Apache document root so I can test the app while it is in development. Does anyone have any suggestions?
Thanks,
RayQuang
You can't go wrong with Git; everything you need to know is here: http://progit.org/book/
Yeah, you should definitely use version control (Git or Subversion).
Here is a short explanation of how I'm using it in my web projects (I am using SVN):
I have an SVN project which I have checked out on my local machine and on the webserver
Whenever you change something, you can commit your current running version
Log into the server (could also be multiple servers) and do an svn update, so the newest code gets deployed on the machine automatically. The only thing left to do is restart the webserver
Note:
Take care what you commit. You may have a different database configuration file on your local machine than on your server. You can put this on the svn:ignore list (Git has something similar: .gitignore)
It also makes it easy for multiple people to work on the same project
Don't commit logfiles
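In commands, that workflow might look roughly like this (the repository URL and paths are placeholders):

# On your local machine: get a working copy, then commit changes as you go
svn checkout http://svn.example.com/myproject/trunk myproject
svn commit -m "Describe the change"

# Keep the server-specific config file out of version control
svn propset svn:ignore "config.php" .

# On the webserver: pull the latest code, then restart Apache
svn update /var/www/myproject
sudo /etc/init.d/apache2 restart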
Links:
Git: http://git-scm.com/
Subversion: http://subversion.tigris.org/
I'd recommend Mercurial for its ease of use and because it keeps the working copy uncluttered; all versioning information is kept in a single .hg folder. I'd do it like this:
Set up a Mercurial repository on the server (hg init)
Do a hg clone of that repository to where you want your working copy
Work away!
When you want to test on the server, do a hg commit and hg push to move the changed files to the server
Run hg update on the server, or add
[hooks]
changegroup = hg update >&2
to the .hg/hgrc file (create it if it doesn't exist) on the server to have it automatically update.
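Put together, the round trip might look like this (host and paths are placeholders):

# On the server: create the repository
hg init /var/hg/mysite

# On your machine: clone it, work, then push your changes back
hg clone ssh://user@server.example.com//var/hg/mysite
cd mysite
hg commit -m "Describe the change"
hg push

# On the server (unnecessary if the changegroup hook above is in place)
hg update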
For more info, you can also check out: http://hginit.com/
I have worked at a web development company where we had our local machines, a staging server, and a number of production servers. We worked on Macs in Perl and used SVN to commit to staging, and Perl scripts to load code onto the production servers. Now I am working on my own project and would like to find good practices for web development when using shared web hosting and not working from a Unix-based environment (with all the magic I could do with Perl/bash scripting, cron jobs, etc.).
So my question is, given my conditions, which are:
I am using a single standard shared-hosting account from an external provider (with SSH access)
I am working with at least one other person and intend to use SVN for source control
I am developing PHP/MySQL under Windows (but using Linux is a possibility)
What setup do you suggest for testing, deployment, and migration of code/data? I have a XAMPP server installed on my local machine, but was unsure which methods to use to migrate data etc. under Windows.
I have some PHP personal projects on shared hosting; here are a couple of thoughts, based on what I'm doing on one of them (the one that is the most active and needs at least a semi-automated way of synchronizing):
A few words about my setup:
Some time ago, I had everything in SVN; now I'm using Bazaar; but the idea is exactly the same (except that, with Bazaar, I have local history and all that)
I have SSH access to the production server, like you do
I work on Linux exclusively (so what I do might not be as easy on Windows)
Now, how I work:
Everything that has to be on the production server (source code, images, ...) is committed to SVN/Bazaar/whatever
I work locally, with Apache/PHP/MySQL (I use a dump of the production DB that I import locally once in a while)
I am the only one working on that project; it would probably be OK for a small team of two or three developers, but not more.
What I did before:
I had a PHP script that checked the SVN server for modifications between "the last revision pushed to production" and HEAD
I'm guessing this homemade PHP script looks like the Perl script you are currently using ^^
That script built a list of directories/files to upload to production
And uploaded those via FTP
This was not very satisfying (there were bugs in my script, I suppose; I never took the time to correct them), and it forced me to remember the revision number of the last time I pushed to production (well, it was automatically stored in a file by the script, so not that hard ^^)
What I do now:
When switching to Bazaar, I didn't want to rewrite that script, which didn't work very well anyway
I have dropped the script entirely
As I have SSH access to the production server, I use rsync to synchronize from my development machine to the production server when what I have locally is considered stable/production-ready.
A couple of notes about that way of doing things:
I don't have a staging server: my local setup is close enough to the production one
Not having a staging server is OK for a simple project with one or two developers
If I had a staging server, I'd probably go with:
doing an "svn update" on it when you want to stage
when it is OK, launching the rsync command from the staging server (which will be at the latest "stable" revision, so OK to be pushed to production)
With a bigger project and more developers, I would probably not go with that kind of setup; but I find it quite OK for a (not too big) personal project.
The only thing "special" here, which might be "Linux-oriented", is the use of rsync; a quick search seems to indicate there is an rsync executable that can be installed on Windows: http://www.itefix.no/i2/node/10650
I've never tried it, though.
As a sidenote, here's what my rsync command looks like :
rsync --checksum \
--ignore-times \
--human-readable \
--progress \
--itemize-changes \
--archive \
--recursive \
--update \
--verbose \
--executability \
--delay-updates \
--compress --skip-compress=gz/zip/z/rpm/deb/iso/bz2/t[gb]z/7z/mp[34]/mov/avi/ogg/jpg/jpeg/png/gif \
--exclude-from=/SOME_LOCAL_PATH/ignore-rsync.txt \
/LOCAL_PATH/ \
USER@HOST:/REMOTE_PATH/
I'm using the private/public key mechanism, so rsync doesn't ask for a password, btw.
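Setting that up, with OpenSSH, is typically a matter of generating a key pair and installing the public key on the server (USER/HOST are placeholders, as above):

# Generate a key pair; leave the passphrase empty for fully unattended runs
ssh-keygen -t rsa

# Append the public key to the server's authorized_keys
ssh-copy-id USER@HOST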
And, of course, I generally run the same command in "dry-run" mode first, with the option "--dry-run", to see what is going to be synchronized.
And ignore-rsync.txt contains the list of files that I don't want pushed to production:
.svn
cache/cbfeed/*
cache/cbtpl/*
cache/dcstaticcache/*
cache/delicious.cache.html
cache/versions/*
Here, I just prevent the cache directories from being pushed to production -- it seems logical not to send those, as production data is not the same as development data.
(I'm just noticing there's still ".svn" in this file... I could remove it, as I don't use SVN anymore for that project ^^)
Hope this helps a bit...
Regarding SVN, I would suggest you go with a dedicated SVN host like Beanstalk, or run an SVN server on the same server machine so both developers can work off it.
In the latter case, your deployment script would simply move the bits to a staging web folder (accessible via beta.mysite.com) and then another deployment script could move that to the live web directory. Deploying directly to the live site is obviously not a good idea.
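A minimal version of those two scripts might look like this (paths and the repository URL are placeholders):

# deploy-to-staging.sh: export the latest committed code into the beta folder
svn export --force http://svn.example.com/mysite/trunk /var/www/beta.mysite.com

# promote-to-live.sh: copy the staged bits into the live web directory
rsync -a --delete /var/www/beta.mysite.com/ /var/www/mysite.com/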
If you decide to go with a dedicated host, or want to deploy from your machine to the server, use rsync. This is also my current setup. rsync does differential syncs (over SSH), so it's fast, and it was built for just this sort of thing.
As you grow, you can start using build tools with unit tests and whatnot. That leaves only the data-sync issue.
I only sync data from remote -> local, using a DOS batch file that does this over SSH with mysqldump. Cygwin is useful on Windows machines, but you can skip it. The SQL import script also runs a one-line query to update some cells, such as the hostname and web root, for local deployment.
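A sketch of that remote-to-local sync, assuming an ssh client is on the PATH and with placeholder host/database/credential names:

REM sync-db.bat -- pull the remote DB and load it into the local one
ssh user@example.com "mysqldump -u dbuser -pSECRET mydb" > dump.sql
mysql -u root mydb_local < dump.sql

REM Fix up environment-specific values for the local deployment
mysql -u root mydb_local -e "UPDATE settings SET value='http://localhost/mysite/' WHERE name='web_root';"

(The settings table and its columns are made up; the idea is just a one-line UPDATE after the import.)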
Once you have this set up, you can focus on just writing code, and remote deployment or local sync and deployment becomes a one-click process.
One option is to use a dedicated framework for the task. Capistrano fits very well with scripting languages such as PHP. It's based on Ruby, but if you do a search, you should be able to find instructions on how to use it for deploying PHP applications.
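To give an idea, a minimal Capistrano 2-style config/deploy.rb might look like this (all values are placeholders, and newer Capistrano versions use a different layout):

# config/deploy.rb -- sketch only
set :application, "mysite"
set :repository,  "http://svn.example.com/mysite/trunk"
set :scm, :subversion
set :deploy_to, "/var/www/mysite"
server "example.com", :app, :web, :primary => true

You would then run "cap deploy:setup" once, and "cap deploy" for each release.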