I have a really nasty problem with a CodeIgniter application that I want to load balance, using MySQL replication as well.
In other PHP frameworks this would be a five-minute job, because you can switch the datasource just before saving the data (CakePHP allows this, for example), but in CodeIgniter it seems to be a different story.
The problem is that the application is legacy. The hardware setup is already done: the filesystem is in sync and the database has a slave that is updated as it should, but I still have to make the second CodeIgniter application read from the slave and write to the master.
I already found a solution, but since the application is legacy I don't want to dig into its internals, because I could break something that already works (it's also pretty sensitive, since people pay within the app, and it's live, which makes things even worse).
The solution I've found is this one, which requires changing all the models in the application.
For further context, CodeIgniter's database driver (CI_DB_mysql_driver in this case) doesn't expose any method for switching the connection.
Do you have another suggestion for tackling this problem, besides changing all the models, which I find a bit too intrusive?
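For reference, the kind of change that solution would force into every model looks roughly like this (a sketch only; the 'slave' group name, hostnames and the model are mine, not from the app):

```php
// application/config/database.php -- two connection groups:
// 'default' points at the master, 'slave' at the read replica.
$db['default']['hostname'] = 'master.db.example';   // placeholder host
$db['default']['username'] = 'app';
$db['default']['password'] = 'secret';
$db['default']['database'] = 'legacy_app';
$db['default']['dbdriver'] = 'mysql';

$db['slave']             = $db['default'];
$db['slave']['hostname'] = 'slave.db.example';      // placeholder host
```

```php
// Inside each model: reads go through a second connection object,
// while writes keep using the default master connection ($this->db).
class Order_model extends CI_Model
{
    public function get_orders()
    {
        $read_db = $this->load->database('slave', TRUE); // TRUE returns the object
        return $read_db->get('orders')->result();        // SELECT hits the replica
    }

    public function add_order($data)
    {
        return $this->db->insert('orders', $data);       // INSERT hits the master
    }
}
```

Repeating that in every model is exactly the churn I'm trying to avoid.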
I'm very new to Redis and fairly new to Laravel, so I could use some pointers on this.
We have a legacy PHP application that stores sessions in Redis, and I can see them there under the default naming convention, e.g. PHPREDIS_SESSION:1bd9ca87f5b606a35891c807857c2fde
We're moving towards a Laravel API framework and as a short-term hybrid solution we want the API layer to be able to recognize and work with the session that's already been created through the legacy application.
I've been making slow progress into understanding Redis and I can see the legacy system's entries in Redis from Laravel (if I connect with a blank prefix), but is there a way to cut through Laravel's specialized handling and have it load the same PHPREDIS_SESSION space?
I've orbited this so many times I'm wondering if I've missed something simple.
Ultimately I got this working simply by updating my .env file with:
REDIS_PREFIX=PHPREDIS_SESSION:
CACHE_PREFIX=
I was hoping to avoid that since it's kind of bleh, but it functions and I suppose a config file is better than forcing Laravel to act against the grain.
Laravel is now recognizing the session being stored by my legacy application and trying to load it. Now I just need to get de/serialization to sync...
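For anyone chasing that last step: the mismatch is that the legacy app writes PHP's native session_encode format, while Laravel expects its own serialized payload. A quick sketch of reading the legacy payload directly with phpredis (the host, port and session id here are illustrative):

```php
// Pull the raw legacy session straight out of Redis and decode it with
// PHP's native session deserializer. Requires the phpredis extension.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$sessionId = '1bd9ca87f5b606a35891c807857c2fde'; // example id from above
$raw = $redis->get('PHPREDIS_SESSION:' . $sessionId);

// session_decode() populates $_SESSION, so a session must be active first.
session_start();
if ($raw !== false && session_decode($raw)) {
    var_dump($_SESSION); // the legacy application's session data
}
```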
I was working with Silex and Doctrine ORM. To make my database queries faster, I wanted caching of some sort.
I looked at PhpFastCache, which provides a good caching framework, but it does not really integrate with Doctrine. Its appeal is that I can have a local cache independent of any external service like Memcached; since I have a small site on shared hosting, I cannot spend money on a cloud caching service.
I also looked at the existing cache providers for Doctrine ORM, and all of them use an external cache service.
Failing everything else, I know I could write a provider myself using PhpFastCache, but I wanted to make sure first that there is no existing alternative I can use. I have searched online all day today, but I just wanted to be certain.
Just to add: I have looked at APC and Memcache, but my site is on shared hosting, and I would need dedicated hosting to install the PECL modules for APC/Memcache :(.
Doctrine includes quite a few cache drivers that do not seem to be documented. There is not one for PhpFastCache, but there are two that cache directly to the filesystem. Check out FilesystemCache and PhpFileCache. You can see the full list in the repository.
If I had to guess, I'd say that FilesystemCache is what you want. It stores serialized data in a plain file. PhpFileCache stores it as a PHP file, and then uses include to read it later. That means it has to be parsed by PHP on read, which is probably slower unless you use a PHP bytecode cache like APC.
Neither solution will be as fast as something like Memcache since they both read from the filesystem instead of memory, but they should provide an optimization for slow database queries that are run often.
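For illustration, hooking FilesystemCache into the ORM configuration takes only a few lines; a sketch (the cache directory is a placeholder):

```php
use Doctrine\Common\Cache\FilesystemCache;
use Doctrine\ORM\Configuration;

// Any directory writable by PHP will do.
$cache = new FilesystemCache(__DIR__ . '/cache/doctrine');

$config = new Configuration();
$config->setMetadataCacheImpl($cache); // parsed mapping information
$config->setQueryCacheImpl($cache);    // DQL-to-SQL translation results
$config->setResultCacheImpl($cache);   // optional: actual query results
```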
Edit: As Kiran Madipally pointed out, it should be easy to create your own PhpFastCache driver by extending CacheProvider.
I quickly wrote a provider for PhpFastCache. I have added the gist here:
https://gist.github.com/thephoenics/ee7de9f95bfdf5f6c24f
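For anyone who can't reach the gist, the general shape is just CacheProvider's template methods mapped onto PhpFastCache calls. A rough skeleton, not the gist's exact code (the get/set/delete/clean method names are PhpFastCache's documented API at the time, so check them against your version):

```php
use Doctrine\Common\Cache\CacheProvider;

class PhpFastCacheProvider extends CacheProvider
{
    private $cache;

    public function __construct($cache) // a PhpFastCache instance
    {
        $this->cache = $cache;
    }

    protected function doFetch($id)
    {
        $value = $this->cache->get($id);
        return $value === null ? false : $value; // Doctrine expects false on a miss
    }

    protected function doContains($id)
    {
        return $this->cache->get($id) !== null;
    }

    protected function doSave($id, $data, $lifeTime = 0)
    {
        return (bool) $this->cache->set($id, $data, $lifeTime);
    }

    protected function doDelete($id)
    {
        return (bool) $this->cache->delete($id);
    }

    protected function doFlush()
    {
        $this->cache->clean();
        return true;
    }

    protected function doGetStats()
    {
        return null; // optional statistics; null is acceptable
    }
}
```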
I am creating a web application. I am fairly accustomed to Laravel and how it functions, but I would like to know how to incorporate Ember into the Laravel setup. I am guessing it belongs in the public folder, but when I use Yeoman to install Ember there, it brings along node_modules and a Gruntfile. Should all of this really live in the public folder, and are there any security concerns?
Some people say it is not good to mix the two, but I would like multiple single-page views, so it makes sense here, and it is a good challenge to get stuck into. I have searched for answers and had no luck.
You need to put your script files in your public folder, otherwise the client's browser wouldn't be able to fetch and use them. There are no security concerns there. That includes Grunt, Bower or any other files (if it happens that you need them on your production server).
We use Laravel and Ember at work. Ember is our true front end and Laravel is our true back end, and it is a very good idea to use them together. It's also a good idea to start with multiple small apps so you don't get overwhelmed. In time, you can evolve toward one huge app, and/or you'll start writing components and mixins you can reuse across your apps.
Just a note: I use a combination of bootstrapped data (json_encoded in .blade views) and data fetched from the server via getJSON. (I don't currently use Ember Data because it is not production-ready yet.)
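As a rough sketch of the bootstrapping half (the view, variable and property names are made up; note this uses Laravel 4's unescaped {{ }} echo, so on Laravel 5 you would want {!! !!} instead):

```php
{{-- app/views/index.blade.php: dump server-side data into the page so --}}
{{-- Ember can pick it up without an extra round trip. --}}
<script type="text/javascript">
    window.App = window.App || {};
    App.bootstrappedPosts = {{ json_encode($posts) }};
</script>
```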
I hope this helps you!
I'm looking for a decent example of a PHP class that handles connections to multiple MongoDB databases. I'm working on a project that will use at least five separate databases, and it would be extremely process-heavy to connect and disconnect every time I make a call to the database.
Can anyone point me in the right direction?
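To show the kind of thing I have in mind, here is the shape I've sketched so far (class, database and collection names are invented; it uses the stock MongoClient driver):

```php
// One shared connection; database handles are created once and reused.
class MongoRegistry
{
    private static $client;
    private static $databases = array();

    private static function client()
    {
        if (self::$client === null) {
            // A single socket is reused for every database we select.
            self::$client = new MongoClient('mongodb://localhost:27017');
        }
        return self::$client;
    }

    public static function db($name)
    {
        if (!isset(self::$databases[$name])) {
            self::$databases[$name] = self::client()->selectDB($name);
        }
        return self::$databases[$name];
    }
}

// Usage: all five databases share the one connection.
$users  = MongoRegistry::db('users')->profiles->find();
$orders = MongoRegistry::db('orders')->invoices->find();
```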
Check out Doctrine2 with its MongoDB features.
If this project is large and will be maintained for some time, you should also consider a framework like Symfony2, which harmonizes perfectly with Doctrine2. Though it might feel like a huge rock to climb at the beginning, it is worth the read, and once you're into it you won't want to build a website by hand anymore :)
I need to load data from an old DB into a migrated schema of that DB, using the Doctrine migration system.
I guess Doctrine might help me in this process.
I tried ETL programs and lost a few hours on them, without success.
From my point of view, I need to:
Create a DB with the V0 schema
Load the data from the old DB (the schemas are identical)
Migrate DB to latest version using Doctrine migration
Extract data
Load it in the new DB
What do you think of this process?
Do you think it is feasible using Doctrine?
I tried a few searches on Google without success.
I am currently reviewing the features of the Doctrine_Core class.
Thanks for your help.
Yes, it is possible to migrate data from one database to another using Doctrine.
It sounds like you're trying to do a one-time database revision and migration and that your applications are not currently written using Doctrine. In that scenario, database abstraction has little or no benefit, unless you're also rewriting the applications to use it.
If you have no prior experience using Doctrine, then I seriously doubt that writing custom migration classes in it will be easier than doing it with whatever database API you are already experienced with. It sometimes makes sense to use the migration classes if you are already using Doctrine for your development; otherwise it's another layer and API you don't need.
I'm using Doctrine 1.2, which has some nice features for migrations but also a number of bugs and omissions of expected functionality. Reportedly version 2 improves on this but I haven't used it yet.
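To make that concrete: for a one-time copy between identical schemas, plain PDO is usually all you need. A rough sketch (the DSNs, credentials and table list are placeholders):

```php
// One-off copy of identical tables from the old database to the new one.
$old = new PDO('mysql:host=localhost;dbname=app_v0', 'user', 'pass');
$new = new PDO('mysql:host=localhost;dbname=app_v1', 'user', 'pass');
$new->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

foreach (array('users', 'orders', 'invoices') as $table) { // placeholder names
    $rows = $old->query("SELECT * FROM `$table`")->fetchAll(PDO::FETCH_ASSOC);
    if (!$rows) {
        continue;
    }

    // Build one prepared INSERT per table and reuse it for every row.
    $cols = array_keys($rows[0]);
    $sql  = sprintf(
        'INSERT INTO `%s` (`%s`) VALUES (%s)',
        $table,
        implode('`, `', $cols),
        implode(', ', array_fill(0, count($cols), '?'))
    );
    $stmt = $new->prepare($sql);

    foreach ($rows as $row) {
        $stmt->execute(array_values($row));
    }
}
```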