Hash::make not working in routes.php file - PHP

I'm having auth problems in my new laravel 4 app.
One odd thing I've noticed, which might be the cause, is that when I do:
var_dump(Hash::check('secret', Hash::make('secret')));
in the DB seeder (where I create my hashed passwords) I get true.
When I run that same command directly in a route, I get false.
Also, when I do a simple:
var_dump(Hash::make('secret'));
directly in the route, it is still false.
Is this broken, or am I missing something?

There is something wrong with your install. This is what I get:
Route::get('/', function()
{
    var_dump(Hash::make('secret')); // outputs a bcrypt hash string
    var_dump(Hash::check('secret', Hash::make('secret'))); // outputs bool(true)
});
Did you run composer update and forget to update the app itself? That is the most common cause of Laravel 4 issues at the moment.
This forum post gives a detailed answer on how to update the main L4 app after a composer update.
Edit: I will post the forum stuff here - because you need to be logged in to Laravel forums to see beta section:
If you run composer update and experience problems after doing so, you most
likely need to merge in changes from the application skeleton, which
is the develop branch of laravel/laravel.
If you originally cloned this repository and still share a git history
with it, you can usually merge in changes easily. Assuming your remote
is 'upstream' pointed at this repository, you can do the following:
git fetch upstream
git merge upstream/develop
Alternatively you could cherry pick in individual commits from the develop branch, but I won't cover that here.
If you downloaded the zip distribution originally or removed the
upstream history, you can still resolve your problem manually. Look at
the commits on this branch and make any changes not present in your
application. Usually the breaking changes are simple configuration
changes.
Once Laravel 4 stable has been released, the need to do this will be
much less frequent, but these changes can still occur. Keep in mind
that during this beta, application-breaking changes are very likely to
happen.
Thanks to Kindari for the forum post.

Related

How to determine unused packages in a composer project after removing dependencies in composer.json?

We have a project that uses composer for dependency management. Its composer.json is externally changed by a versioning tool of our own concoction, which does not keep a history of when and what was require'd, so we cannot roll back using composer remove. It does, however, roll back the composer.json file entirely.
When we update the composer.json file, we'd like to purge the now-unused packages from the project. We couldn't find a command that does just this. We have a few ideas but each with its shortcomings:
Delete vendor/, composer.lock and run composer install. Problems: very slow; only works without plugins (otherwise more than vendor for destination and composer.lock for state might be involved); versions will be reset
Use a combination of composer show and composer why to determine unused packages and run composer remove on them. This is what I expected to find implemented as an internal command, since it's so obvious. Implementing it externally presents a few challenges though: composer show only produces text output, no --format=json or similar, which is difficult to parse, especially since it's also plagued with warning messages that can't be suppressed (-q turns off output completely, even for commands that list things, which is very useless and weird IMHO).
Note: we are also using wikimedia/composer-merge-plugin to merge a bunch of json configs so we can't easily diff the old and new composer.json files to get a list of packages to be remove'd, we need to operate on the merged package list which only composer knows and will mercifully let us glimpse at via show.
Which is the path of least pain to achieve this?
About using update: we can't use it, because it would create the following work cycle: someone removes a dependency in a module, the module gets tested against trunk and passes QA (about 3 days), then they publish their changes. Then, in order to remove their dead overhead, I have to run update on everything, which updates everything to whatever version Composer decides, and that in turn means triggering testing and QA for everything (about 1 week). This workflow is completely unacceptable.
I just want to remove dead code. Removing dead code should not mean restarting the whole testing cycle.
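The second idea could be sketched as a small shell helper. This is a hedged sketch only: it assumes `composer show --name-only` exists (added in later Composer versions) and that `composer why` exits non-zero when nothing depends on a package; verify both against your Composer version.

```shell
#!/bin/sh
# Hedged sketch: compute installed-but-unneeded packages.
# Assumptions (verify for your Composer version):
#   - `composer show --name-only` prints one package name per line
#   - `composer why <pkg>` succeeds only when something depends on <pkg>

# Print lines of $1 (all installed packages) that are absent from $2 (needed).
list_unused() {
    grep -Fxv -f "$2" "$1"
}

# Usage sketch (not executed here):
#   composer show --name-only | sort > installed.txt
#   : > needed.txt
#   while read -r p; do
#       composer why "$p" >/dev/null 2>&1 && echo "$p" >> needed.txt
#   done < installed.txt
#   list_unused installed.txt needed.txt   # candidates for `composer remove`
```

The set difference is the only portable part; the two Composer calls are the pieces to validate first.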
I would just run composer update.
If a package is no longer required by your composer.json (directly or through one of its dependencies), it will get removed when you execute update.
Yes, it does have the side-effect of updating your dependencies to the most recent versions allowed by your constraints; but if your constraints are tight enough (and you can tighten them further if this kind of thing is a concern for your project), it shouldn't be problematic.
Seems less hassle than building a specific tool/work-flow for this use-case, IMO.
The main problem seems to be that your team is removing dependencies from the project without keeping track of those removals. Any solution applied after the fact merely works around that, which is the real problem.
By keeping track of the changes you could have a task that removed (composer remove) these files without any side-effects or additional work.
So, supposing you want to keep the existing structure (which, in my opinion, might serve your requirements well, but also brings problems), you could try to implement a Composer plugin of your own. Using the Composer\Installer\POST_DEPENDENCIES_SOLVING event, you should be able to access both the resolved list of dependencies and the list of packages in composer.lock.
Defining it could be as simple as writing such a class:
<?php

use Composer\Installer\InstallerEvent;

class ComposerDependencyHelper
{
    public static function postDependencies(InstallerEvent $event)
    {
        // Entry point: gives access to the root package, its requires,
        // and the locked repository (see below).
        $composer = $event->getComposer();
    }
}
It is registered in your root composer.json through the scripts section:
"scripts": {
    "post-dependencies-solving": [
        "ComposerDependencyHelper::postDependencies"
    ]
}
I don't have the time to check this further, but through that $composer object, you should have access to a lot of information (see https://getcomposer.org/apidoc/1.6.2/Composer/Composer.html for the current documentation):
getPackage returns your current package (as defined in composer.json), while getPackage()->getRequires() lists the dependencies
getLockedRepository might return the list of packages currently present in composer.lock
Through a simple marking process (build a list of "valid" installed packages, diff this list against the package list in the repository), you could identify those dependencies that are no longer required.
To keep this whole thing more clean in the future, you should stop removing packages from composer.json without running composer remove $package.

How to maintain code in two separate code bases for two different laravel projects?

So I am working on two different applications using Laravel, and both of them need a lot of the migrations, seeds, models, controllers, routes and so on to be similar to each other. In fact, in most cases they are absolutely the same. What is the best solution to avoid redundancy in such a case?
The best solution I came up with was to extract a package and then use it in both applications, but there are drawbacks. Every time I need to add a feature to both Laravel applications, the package needs to be updated, and once the package is updated, the main applications have to be updated too. Sometimes small syntactical changes make me change something in the package and then update the packages again to see if it is working. This soon becomes painful.
I have also tried using symlinks in the composer file so that the package gets updated as I work inside an application that uses it, but I still have to update the other application. Moreover, I still have to remove the symlinks before I push, because the symlinks break in CI; it needs the direct remote URL for the repository.
Has anyone else run into a similar problem with Laravel and found a way to resolve it? What would be the best approach here?
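For what it's worth, newer Composer versions support `path` repositories, where symlinking is an option rather than the only behaviour, so CI can copy the package instead of linking it. A sketch, where the package name and path are placeholders:

```json
{
    "repositories": [
        {
            "type": "path",
            "url": "../shared-laravel-package",
            "options": { "symlink": false }
        }
    ],
    "require": {
        "mycompany/shared-laravel-package": "*"
    }
}
```

With `"symlink": false` Composer copies the package into vendor/, which avoids the broken-symlink problem on CI at the cost of re-running composer update to pick up local changes.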

How to make a git branch with only specific/selected files from a PHP project?

I'm a total newbie to Git.
My PHP project files have been added to Git by admin.
Now a new person is going to start working on this project. He will work on one module. So, being the senior developer, I've been asked to create a branch for him that contains only the specific files he needs to start work on that module.
So this thing has created so many questions in my mind :
Can I create a special branch for him with only specific/selected files from the project? If yes, how? If no, why not?
Right now only the master branch exists. If a new branch is created for the new developer and he commits and pushes his changes, how will they get merged into the master branch? Do I need to do it manually using a third-party tool like 'DeployHQ', or is there another way?
To keep things easy for him, I want him to be able to commit and push changes, have those changes immediately reflected on the server, and check them by running the pages in a browser. Can I make it as simple and easy as I'm imagining?
In a nutshell, I don't want to disclose all of my project files to him, and I want to keep things easy and simple for both of us.
Please please please guide me.
Thanks.
The basic unit of Git version control is the project. You can't branch off only some files from master, as that doesn't make sense in an environment where the whole project is the single version-controlled entity.
You can, however, add or remove files on a branch and later merge the changes into master.
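That branch-and-merge flow can be sketched end to end. The branch name and commit messages below are illustrative, and the script runs in a throwaway repository:

```shell
#!/bin/sh
# Sketch: the new developer works on his own branch; you merge it back
# into master. Names and messages are illustrative.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master   # make sure the first branch is 'master'
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "initial project import"
git checkout -q -b module-x               # branch for the new developer
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "work on the module"
git checkout -q master
git merge -q module-x                     # fold his work back into master
```

In real use the developer would push module-x to the shared remote and you would merge it there; the mechanics are the same.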
Some people refer to the branching model in Git as its “killer
feature”, and it certainly sets Git apart in the VCS community. Why
is it so special? The way Git branches is incredibly lightweight,
making branching operations nearly instantaneous and switching back
and forth between branches generally just as fast. Unlike many other
VCSs, Git encourages a workflow that branches and merges often, even
multiple times in a day. Understanding and mastering this feature
gives you a powerful and unique tool and can literally change the way
that you develop.

Composer and third party bugs

While developing a Symfony2 project, I often come across bugs in third-party bundles. Most of the time the bugs are subtle and hard to find. For example, this week alone I have found three bugs where a value was tested using a simple if ( $value ) construct but required ( $value !== null ) or ( $value !== false ).
Without sufficient permissions on the relevant GitHub repositories, the best I can do is open a pull request. It usually takes quite some time for the request to be merged. In the meantime, especially when tracking the master version, other pull requests get merged, which in turn leads Composer to update the package. When that happens, any local bug fixes revert back to the original code.
Is there any method to handle this situation?
Ideally, I would like the third party bundle to update but have my modifications persist. Until the pull request is merged of course.
There is a project that allows you to apply patches after downloading packages with Composer. It was created for use with the Drupal project, but I believe it should work just as well with your own patches.
https://github.com/jpstacey/composer-patcher
Otherwise, you could fork the project, make your improvements, submit a pull request, and in the meantime use your own forked repository in Composer. See [this answer](https://stackoverflow.com/a/14637668/3492835) for a detailed description of how to achieve that.
Edit:
The stars say it is about to be 2016 now, and a few things have changed.
jpstacey/composer-patcher is now considered deprecated in favour of the netresearch/composer-patches-plugin project, a Composer plugin which does basically the same thing but is also able to apply local patches.
Composer does not support this functionality out of the box. The reason is simple: one should not work with the development versions of other libraries. But fear not, you can work around this by forking the projects on GitHub. Of course this means some overhead, but it is the best solution I can think of for tackling this problem.
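Pointing Composer at a fork is a matter of adding a vcs repository entry in composer.json; as long as the fork keeps the original package name in its own composer.json, Composer pulls it from the fork instead. The URLs and names here are placeholders:

```json
{
    "repositories": [
        { "type": "vcs", "url": "https://github.com/yourname/forked-bundle" }
    ],
    "require": {
        "original/bundle": "dev-master"
    }
}
```

Repositories declared in the root composer.json take precedence over Packagist for the packages they provide.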
Note that this approach has several advantages over the patch approach:
You can directly create your pull request from your fork.
The git merge process will identify any conflicts.
A script to automate this process is easy:
#!/bin/sh
git fetch upstream
git checkout master
git merge upstream/master
You could create a Composer post-update/install script which executes these commands in each project's local directory if it is one of your forks. (I leave the implementation to the reader. But you would need to create the repository locally first, since Composer only downloads the latest files without any repository data. This might add huge .git folders to a project, since some projects are, well, huge.)
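Hooking such a script into Composer is a matter of declaring it under the scripts section of the root composer.json. The name `sync-forks.sh` is hypothetical, standing in for the script above wrapped with a per-fork loop:

```json
{
    "scripts": {
        "post-install-cmd": "sh ./sync-forks.sh",
        "post-update-cmd": "sh ./sync-forks.sh"
    }
}
```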

Deploy Laravel PHP apps to development and production server

I'm trying to figure out a way to deploy my company intranet PHP apps automatically, using GitLab, but so far, I'm not sure if the options that I've found on the internet will do the job.
Here's the scenario:
VM1: Remote Git server, that uses GitLab to administrate projects repos.
VM2: Development server. Each project has its own development server.
VM3: Production server. Each project has its own, as well.
Developers: Each developer uses a Vagrant box, based on the project's development server.
What I want to do:
Whenever a developer pushes commits to the development branch, a hook on the remote server must update the development server tree with the last commit on that branch.
The same must occur when the master branch is updated, but it must update the production server tree.
However, since we use Laravel, we must run some extra console commands using Artisan, depending on each case.
We are following Vincent Driessen's Git Branching Model.
What I know so far:
I know that GitLab uses web hooks, but I'm not sure if that's going to work in this case, since I'd need to expose a custom URL, which doesn't sound very safe. But if it's the only solution, I can write a script to handle just that.
The other possible solution is to use Jenkins, but I'm not an expert and I don't know yet whether Jenkins is too much for my case, since I'm not using unit tests yet.
Do you guys have implemented a solution that could help in my case? Can anyone suggest an easy and elegant way to do that?
Thanks guys! Have a nice one
We do it the following way:
Developers check out whatever Git branches, and as many branches as they want/need, locally (Debian in VMware on a Mac)
All branches get pulled to dev server whenever a change is pushed to Git. They are available in feature-x.dev.domain.com, feature-y.dev.domain.com etc., running against dev database.
Release branches are manually checked out on live system for testing, made available on release-x.test.domain.com etc. against the live database (if possible, depending on migrations).
We've semi-automated this using our own scripts.
Database changes are made manually, due to the sensitive nature of their content. However, we don't find that a hassle after getting used to migrations - we just remember to note the alterations. We find it helps to clone databases locally for each branch that needs changes. An automated schema comparison then quickly catches anything that was forgotten in a migration file.
(The second point is the most productive one, making instant test on the dev platform available to everyone as soon as the first commit of a new branch is pushed)
I would suggest keeping things simple and working with Git, hooks and remote repositories.
Pulling out heavy guns like Jenkins or GitLab for this task could be a bit too much.
I see your request as the following: "git after push and/or git after merge: push to remote repo".
You could setup "bare" remote repositories - one for "dev-stage", one for "production-stage".
Their only purpose is to receive pushes.
Each developer works on his feature-branch, based on the development branch.
When the feature-branch is ready, it is merged back into the main development branch.
Both trigger a "post-merge" or "post-receive" hook, which executes a script.
The executed script can do whatever you want.
(Same approach for production: when the dev branch has enough new features, it is merged into the prod branch, which triggers the merge event and the scripts...)
Here you want two things:
You want to push a specific branch to a specific remote repo.
In order to do this, you have to find out the specific branch in your hook script.
That's tricky but solvable; see https://stackoverflow.com/a/13057643/1163786 (writing a "git post-receive hook" to deal with a specific branch)
You want to execute additional steps for configuration/setup, like artisan, etc.
You might add these steps directly or as triggers to the hook script.
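A post-receive hook for the dev-stage repository could look roughly like this. It is a sketch only: the work-tree path, branch name, and Artisan command are assumptions to be adapted.

```shell
#!/bin/sh
# Sketch of a post-receive hook on the bare "dev-stage" repository.
# Placeholders: /var/www/dev (work tree), develop (branch), the artisan step.

branch_of() {            # refs/heads/develop -> develop
    echo "${1#refs/heads/}"
}

deploy() {               # $1 = the ref that was pushed
    if [ "$(branch_of "$1")" = "develop" ]; then
        # Update the development server tree to the pushed branch:
        GIT_WORK_TREE=/var/www/dev git checkout -f develop
        # Extra Laravel console steps go here, for example:
        (cd /var/www/dev && php artisan migrate --force)
    fi
}

# Git feeds "<old-sha> <new-sha> <ref>" lines on stdin; for each one:
# while read old new ref; do deploy "$ref"; done
```

The same hook on the production repository would test for the master branch and check out into the production tree instead.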
I think this request is related to internal and external deployment via git.
You might also search for tutorials, like "deployment with git", which might be helpful.
For example: http://ryanflorence.com/deploying-websites-with-a-tiny-git-hook/
http://git-scm.com/book/en/Git-Basics-Working-with-Remotes
http://githooks.com/ & https://www.kernel.org/pub/software/scm/git/docs/githooks.html
If you prefer to keep things straightforward and don't mind using paid third-party options, check out one of these:
http://deploybot.com/
https://www.deployhq.com/
https://envoyer.io/
Alternatively, if you fancy shifting to an integrated solution, I've not used anything better than Beanstalk.
