I've been working with PhpStorm for a few months, and without my needing to save, it automatically uploads every change I make to the production website. This leads to numerous problems and headaches: sometimes it does not save correctly, it deletes what I have written, it mangles lines I wrote seconds ago, or it copies and pastes old lines. The slower my connection or the server's, the worse this gets.
The time wasted using this tool is considerable, and I was wondering if there is an option to NOT upload changes in real time. I want them uploaded ONLY on save, or in some other manual way that spares me this hassle while programming.
Thanks.
I've searched the internet but can't find anything on this.
Unfortunately I have to use this tool because it's what we use at work.
I hope someone can save me from the accumulated frustration that I have with this...
I recently started web development. The course I took had me install WAMP and start developing right away. I used the Atom text editor, which, combined with WAMP, proved to be a very fast way to write client-side code (HTML, CSS, JavaScript).
But when I started to write server-side PHP, things got a little messy. I should probably explain my site's structure here.
I keep separate PHP, CSS, and JavaScript files for every page on the client side; on the server side I have two different types of PHP files:
Files that only perform a specific operation on the database (for example, returning "5 more answers"). These are always called by AJAX requests.
Files that load the page for the first time. These are only used when the user opens the page for the first time; they do the necessary database queries and return the page. Later requests always go to the first type of PHP file.
Now regarding my problem: until now I have debugged by printing variables to the screen with var_dump() or by echoing, but this became too slow as the data I work with grew. I wonder if there is a way of debugging that lets me put a breakpoint in one of my PHP files and then, when I open the page in the browser on the localhost I created using WAMP, step through the PHP file line by line.
I have been dealing with this issue for 3 days. I tried to make it work with the Eclipse IDE but couldn't find a way. Also, there seem to be no tutorials or Q&As on the internet regarding the issue.
Breakpoint debugging opens up a whole new world, and it is the natural step after var_dump() debugging. Not only does it speed up development, but it provides much more information about your code: you can step through each line and see which values have been set at each step and how they evolve as your program executes. This means you can track all of the values at different stages in a single run - imagine tracking every variable at each point using var_dump()!
Although choosing an IDE is a personal decision based on personal taste, I strongly recommend you try out PhpStorm. If you can get a student licence, go for it.
PhpStorm has extensive documentation and tutorials on all features in the IDE, and debugging is no exception:
https://www.jetbrains.com/help/phpstorm/configuring-xdebug.html
https://www.youtube.com/watch?v=GokeXqI93x8
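For reference, the server-side half of that setup is a short block in php.ini. A minimal sketch for Xdebug 2 (the version current when those tutorials were made) follows; the extension path is a placeholder that varies with your WAMP install, and note that the later Xdebug 3 renamed these settings (xdebug.mode, xdebug.client_port), so treat this as an outline rather than exact values:
zend_extension = "C:\wamp\bin\php\php7.1.9\zend_ext\php_xdebug.dll"  ; placeholder path
xdebug.remote_enable = 1          ; allow the IDE to attach a debugger
xdebug.remote_host = 127.0.0.1    ; where PhpStorm is listening
xdebug.remote_port = 9000         ; PhpStorm's default debug port
After editing php.ini, restart Apache from the WAMP tray menu, set a breakpoint in PhpStorm, and start listening for debug connections.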
I don't know of a specific solution to your issue. I'm not exactly sure what you're doing, but as a quick tip, I find adding the following snippet to the top of the file useful, as it makes errors show up properly instead of the browser just saying nope.
error_reporting(E_ALL);
ini_set('display_errors', 'On');
Hope this helps you a bit.
I tried out what was recommended in the comments and answers. I first tried NetBeans. To be fair, it disappointed me. The download kept getting stuck at 100%, even for different versions. When I stopped downloading and went ahead to create a PHP project, there were parts missing, I guess. I couldn't even manage to create a PHP project. But that might just be me not being able to do it.
Then I followed #leuquim's answer and #Alex Howansky's comment and downloaded PhpStorm, and I got it to work in no more than 20 minutes. I downloaded it with a student's licence. For people who want to use PhpStorm with WAMP, here's a YouTube tutorial:
https://www.youtube.com/watch?v=CxX4vnZFbZU
One thing to note in the video: the maker of the video chooses "PHP Web Application" in the Run Configurations. That has since been renamed to "PHP Web Page".
I have a Moodle database which I exported a few months ago, before our server went down. Now I want to generate reports from my old database. I have tried to import it into a new Moodle site, but the moodledata folder is missing. So now I'm looking for another way to generate reports from my database. I have tried writing MySQL queries, but I think that would take a lot of time for now. I need help: is there any tool or API I can use to generate reports from my database? I have tried Seal Report to tackle this issue, but I found it involves a lot of manual work. I don't mean the tool can't do it; I'm just looking for another tool that can simplify my task.
NB: I know some will say this is not a programming question. Please feel free to suggest the best way to query using any language.
You should be able to set up a local copy of a Moodle site with a copy of the database and with a blank Moodle data folder (I've done this regularly in order to investigate issues on a customer's site).
Once you've done that, you will have access to any reporting tools you would normally have inside Moodle.
You may find it easiest to set up a fresh install of Moodle, pointed at a blank database; then, once the install is finished, edit the config.php file to point at the restored copy of the original site. You may have to purge caches (php admin/cli/purge_caches.php) and you may have to reset the admin password (php admin/cli/reset_password.php). It is also wise to turn off email (edit config.php and add $CFG->noemailever = true;).
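For orientation, the config.php values you would point at the restored copy look roughly like this; every value below is a placeholder for your own database name, credentials, URL, and paths:
$CFG->dbname   = 'restored_moodle';         // the imported copy of the old database
$CFG->dbuser   = 'moodleuser';
$CFG->dbpass   = 'secret';
$CFG->wwwroot  = 'http://localhost/moodle'; // the local site's URL
$CFG->dataroot = '/var/moodledata';         // the blank data folder you created
$CFG->noemailever = true;                   // stop the copy from emailing real users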
My friend and I, who are in different countries, have been developing a LAMP web app for several weeks. All this time we have been sharing source code over ftp, and this way the PHP files become messy. I have heard about CVS and have been reading about it, but I still cannot figure out exactly how it works.
How could CVS help me in this matter?
I would much appreciate it if someone pointed me in the right direction.
OK, here comes a very simple explanation of VCS. After using it for a while you'll laugh at this explanation, but for now I guess it should help you.
What are the problems of your current ftp file sharing?
If two people upload the same file, one of the versions will get overwritten
After uploading a file you'll only see who changed it (the last time) but not where it got changed
You can't provide information about the changes (short of putting comments in the files themselves)
You can't go back in time: once a file is overwritten, the old version is lost
With version control you can solve these problems:
Files either get merged into one new file, or get overwritten - but the old file is still stored, so you can roll back if needed
You can see who made which changes, and when
You can provide comments when you "upload" your files describing what got changed (without storing those comments inside the files)
You can always go back in time and restore old "uploads"/changes
You can also create small side projects by branching. This basically lets you split your project into smaller pieces and work on them separately.
So at the beginning of your work you usually bring your local sources up to date by pulling all the changes that have been made. Then you do your work, and afterwards you update the online version with your changes so that other developers can pull them and continue working on them or integrate them into their own current changes.
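In git terms, that daily cycle looks roughly like this (the repository URL, file name, and commit message are placeholders):
git clone https://bitbucket.org/yourteam/yourproject.git   # one-time: get the code
git pull                               # start of the day: fetch teammates' changes
git add index.php                      # after editing: stage what you changed
git commit -m "Fix login redirect"     # record the change, with a comment
git push                               # publish it so the others can pull it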
How to implement this sorcery?
You could google for "how to set up git" or "how to set up svn", but as a beginner I would recommend you use an online service. Here is a list of services: https://git.wiki.kernel.org/index.php/GitHosting
My personal preference for closed-source projects with a small number of developers is https://bitbucket.org/. Some of the services also provide a small wiki page and a bug-tracking tool. If you want to use Bitbucket, here is some very easy-to-understand documentation: https://confluence.atlassian.com/display/BITBUCKET/Bitbucket+101
Important to know:
Soon you'll learn that you don't really upload files, as I've written multiple times, but rather change lines of code. And you don't "upload" those changes, you "commit" them.
While CVS could help, not many developers would recommend it for new projects. It has largely been replaced by Subversion (svn), but even that is falling out of favour. Many projects these days use distributed version control with git or Mercurial (hg).
A good introduction to git can be found in the free online book Pro Git.
In any case, these things are all version control systems. They help to synchronize the code between developers, and also let you track
who changed code,
when it was changed,
why it was changed, and
how it was changed.
This is very important on projects with multiple developers, but there is value in using such a system even when working on your own.
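As a concrete illustration of that who/when/why/how tracking, git answers each of those questions with an everyday command (the file name and commit id below are made up):
git log --oneline      # when each change happened, and why (the commit messages)
git blame index.php    # who last touched each line, and in which commit
git show 1a2b3c4       # how the code changed in one commit (the full diff)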
I am working on a site now that seems to have an infinite loop in the wp-cron.php file. My host recently limited my account because they said a certain query to my database was creating 1 GB of error logs every 15 seconds. I am not sure why this is happening.
I wanted to know if anyone has encountered and successfully solved this issue. We were working on this site on a dev server with no problems, but since we've moved to our production environment we've been getting this issue. I am thinking that maybe some files were lost in the transfer; however, that does not seem to be the case.
Thanks
OK, so I have found a solution for this, and I figure it would help to let everyone on here know as well.
Basically, after a lot of research, I found that the MailChimp Archives plugin apparently fires off the naughty cron job in question every time someone visits the site. For whatever reason, it got thrown into an infinite loop, which was creating huge log files (64 MB in about 3 seconds). Once I discovered exactly where the issue was coming from, I did the following:
Disabled the plugin.
Found the WordPress function that removes the selected hook that schedules the runaway cron job (http://codex.wordpress.org/Function_Reference/wp_clear_scheduled_hook).
Used that function to remove the hook in question, by inserting it into my theme's functions.php file and reloading the page (a sketch follows this list).
Removed the function once I had reloaded the page a few times.
Found the corresponding data in the database, in the wp_options table. I searched for the name of the hook that caused the problem and found that its option_value field contained 9.5 MB of text! Obviously the cause of the massive slowdown, since those 9.5 MB had to be loaded and parsed every time someone visited the page. I removed this entry from the database completely.
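For step 3, the snippet dropped into functions.php looked something like the sketch below; the hook name is a placeholder, so use the one you find in your own wp_options cron entry:
// Temporary cleanup - delete this again after a few page loads.
add_action('init', function () {
    // Unschedules every pending occurrence of the runaway hook.
    wp_clear_scheduled_hook('mailchimp_archives_cron');
});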
Once this was done, I started to notice incremental performance improvements on my WordPress site over the next half hour or so. I also checked whether the log files were still accumulating; they were now only fluctuating between 3 and 4 KB, which was way better.
I hope this helps. Even though this seems to be a fairly common problem, I don't see many detailed solutions for it, so let this be the first.
Thanks
I believe there was an issue where the server would go into an infinite loop if you didn't have the wp-cron.php file, since returning the error calls the file again. It's worth checking for in this case.
It's also possible that a variant might be happening - you try to access a file from wp-cron, and the file isn't found.
Even if all the files were copied, their paths might not have been copied correctly.
WordPress's cron jobs were causing high CPU consumption on the server. Even defining define('DISABLE_WP_CRON', 'true'); did not work. Without using a plugin, the way I found was to include this in the theme's functions.php:
global $wpdb;
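// Blank out the stored cron schedule so WordPress has nothing left to run.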
$wpdb->update("wp_options", array("option_value"=>""), array("option_name"=>"cron"));
I have a website, say www.livesite.com, which is currently running. I have been developing a new version of the website on my local machine with http://localhost, and then committing my changes with svn to www.testsite.com, where I test the site on the livesite.com server but under another domain (it's the same environment as the live site, just under a different domain).
Now I am ready to release the new version to livesite.com. Doing it the first time is easy: I could just copy and paste everything from testsite.com to livesite.com (though I'm not sure that's the best way to do it).
I want to keep testsite.com as a testing site where I push updates, test them, and once satisfied move them to livesite.com, but I am not sure how to do that after the new site is launched. I don't think copy-pasting the whole directory is the right way of doing it, and it would break things for current users of livesite.com.
I also want to keep my svn history on testsite.com. What is the correct way of doing this with svn? Thank you so much!
Other answers mentioning Hudson or Weploy are good, and they cover more issues than what follows. That said, the following may be sufficient.
If you feel that's overkill, here's the poor-man's way of doing it with SVN and a little creative sysadminning.
Make your production document root a symlink, not an actual directory, meaning you have something like this:
/var/www/myproject-1-0-0
/var/www/myproject-1-1-0
/var/www/myproject-1-1-1
/var/www/html -> myproject-1-1-1
This means you can check out code onto production (say, myproject-1-1-2) without overwriting stuff being served. Then you can switch codebases near-instantly by doing something like:
$ rm html && ln -s myproject-1-1-2 html
I'd further recommend not doing an svn checkout/svn export of your trunk on the production box. Instead, create a branch ahead of time (name it something like myproject-X-Y-Z). That way, if you need to do some very stressful tweaking of production code, you can commit it back to the branch and merge it back to trunk once the fire is extinguished.
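Putting the branch and the symlink together, a release of, say, 1.1.2 might look like this (the repository URL and paths are placeholders):
svn copy http://svn.example.com/repo/trunk http://svn.example.com/repo/branches/myproject-1-1-2 -m "Branch for release 1.1.2"
svn checkout http://svn.example.com/repo/branches/myproject-1-1-2 /var/www/myproject-1-1-2
cd /var/www && rm html && ln -s myproject-1-1-2 html   # rm removes only the symlink, not the files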
I do this a lot, and it works quite well. However, it has some major drawbacks:
Mainly, you have to handle database migrations, or other upgrade scripts, all by yourself. If you have scripts (plain old SQL, or something more involved), you need to think about how best to execute them. Downtime of hopefully just a minute might not be a bad idea. You could keep a "maintenance site" around (/var/www/maintenance) and point the symlink there for a few moments if you needed to.
This method is not nearly as cool as Weploy, for example, but for relatively small projects (running on a single server, with not-huge databases), it's often good enough, and dead simple.
My answer will complicate things a little bit, but here goes:
For this type of scenario I would use Hudson.
Hudson will let you have an automated deploy process: clean the current directory out, then add the new code from svn. You can then worry more about development and less about juggling files and deploying from one place to another.
The caveat is that you need to learn a little about how to set up Hudson and how to make it work for you.
How to get started with PHP for Hudson
I think that should get you on the right track. It's a bit of work, like I said, but it pays off later on.
If only the server-side code changes, you may be able to simply copy the code across and things will be okay. But even then you have to think of the possibility of users in mid-interaction. If the client-side code changes, especially if you are heavily using AJAX, you will have to get the current users to reload their pages. If the database also changes, then you have to ensure that no database transactions happen while you are applying the database change scripts.
In all cases, and irrespective of whether you are using any continuous integration tool, I believe it is safest to schedule downtime to apply these changes. One of the reasons people keep the "beta" sticker on their sites is so that they can log everyone off and shut them all out to apply changes without notice. As long as they don't do it very frequently, they can get away with it too. Once you are out of beta, applying changes becomes a ceremony where you announce downtime weeks in advance and then get a window of 30 minutes to a few hours to apply all the changes.
For underlying things like patching security flaws in the OS or system software, adding hardware, etc., downtime can be avoided if there is load balancing and the patches are applied one by one.