I am building a large-scale PHP Laravel app. It is an app for developers which consists of many smaller apps/modules.
For example: a bookmarks app similar to Pinboard, a notebook app similar to Evernote, a code snippets app, an RSS feed reader, and about 30 more little apps, all in one app in which each acts as a module.
My question is about performance. With this many models, controllers, etc. being loaded, can I still expect good performance with a large user base?
I realize there are many unknown factors, like server setup, but mostly I am asking: are things like models all loaded on app init, or only as needed? In most requests, the majority of the apps/modules will not be used.
I have seen some packages which add modules, where each module gets its own controllers, models, routes, views, and everything else you would see in the App folder. However, I am not sure I want to go that route, so right now I have it all under the main app as usual.
The simple answers are: yes, the app only loads the models/controllers used by a given request, and no, the number and size of the models and controllers you have (including those in packages and modules) won't negatively affect your app's performance.
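That lazy loading is a consequence of PHP class autoloading (Composer's autoloader, in Laravel's case): a class's file is only read and compiled the first time the class is actually referenced. A minimal sketch of the mechanism, using a hypothetical App\Modules namespace:

    <?php
    // Register an autoloader: no class files are read at this point.
    // Composer generates essentially this (plus a class map) for you.
    spl_autoload_register(function (string $class) {
        // Hypothetical PSR-4 mapping: App\Modules\Bookmarks\Bookmark
        // resolves to app/Modules/Bookmarks/Bookmark.php
        $prefix = 'App\\Modules\\';
        if (strncmp($class, $prefix, strlen($prefix)) === 0) {
            $path = __DIR__ . '/app/Modules/'
                  . str_replace('\\', '/', substr($class, strlen($prefix)))
                  . '.php';
            if (is_file($path)) {
                require $path; // runs only on the FIRST use of the class
            }
        }
    });

    // Only this one file is loaded; the other ~30 modules cost nothing here.
    $bookmark = new \App\Modules\Bookmarks\Bookmark();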
The more complex answer is, as always, it depends on how you've written your code: models calling into other models for some piece of business logic, eager-loading several relationships' worth of data without caching, or using an absurd amount of dependency injection in every single controller... but all that's pretty bad app design anyway and not specific to Laravel.
But the biggest impact on performance, in my experience, comes from Eloquent code that tries to be too fancy or fetches data out of the database inefficiently. Check out the documentation to learn ways to avoid that:
https://laravel.com/docs/5.3/eloquent-relationships#eager-loading
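As an illustration of the N+1 problem that eager loading avoids, using a hypothetical Bookmark model with a tags relationship:

    <?php
    // N+1 problem: 1 query for the bookmarks, then 1 more per bookmark.
    $bookmarks = Bookmark::all();
    foreach ($bookmarks as $bookmark) {
        echo $bookmark->tags->count(); // lazy-loads tags for EACH row
    }

    // Eager loading: 2 queries total, no matter how many bookmarks.
    $bookmarks = Bookmark::with('tags')->get();
    foreach ($bookmarks as $bookmark) {
        echo $bookmark->tags->count(); // already in memory, no extra query
    }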
I have the following architectural scenario, which doesn't depend on me and which I can't change:
One server/machine hosts several PHP applications and a Postgres database; each application uses its own schema in order to separate the applications' data logically. I'm developing a new application (the one represented by A) using Symfony, which manages and stores data that should be partially accessible by other apps, especially by an old app that was built without Symfony or any other framework. To keep it simple, imagine app A storing movies sent from clients by means of its REST API. I would like B and C to access the movies' data: B occasionally needs all the movies a certain actor acted in, and C would like to update the owning studio (or vice versa). Of course that would be A's job, since the Symfony app was born exactly for that purpose. So I thought I have two ways, represented by the arrows:
A exposes, in some way, an API that B can call. That way I don't have to duplicate business logic and data persistence, maybe by exposing Doctrine's entities or a controller somehow. Of course I would have to load at least Doctrine, the container, the configuration, and the HTTP kernel component, and when it comes to C I guess this solution would be unfeasible, because I would have two apps using and loading mostly the same classes, without any kind of isolation between the two. Another (extreme?) solution I thought of is not using Symfony to expose A's functionality to the other apps: I could write an API class that connects to the DB and does some logic without any of Symfony's support (PDO and friends; no services, no components, nothing). Luckily what I have to expose is little, so that wouldn't be that big a problem.
Finally, I would avoid calling A by means of the shell (i.e. app.php console/getMovie), because it consumes tons of resources and I don't think it's that fast. My server is really small and couldn't keep up with that.
B and the other apps could access A's schema directly, but that way I would maybe have to duplicate some business logic, which seems messy to me. Maybe app C could simply use an A bundle I could write, explicitly built to expose the functionality third-party apps need.
These are the two solutions I've found, but I'd appreciate hearing how you think I should design this.
It is possible to share data between schemas by using views in a common place (usually the public schema); a minimal sketch follows the lists below.
This has several advantages:
it lets you use your schema structure the way you want.
it exposes the data you want.
it lets you manage the write accesses you want (yes, views may be writeable)
it lets you manage GRANT/REVOKE permissions easily.
it lets the applications leverage Postgres' LISTEN/NOTIFY feature to communicate.
There are downsides:
Updating a view means deploying new model files to all the applications using it. This can be a project of its own that the other applications depend on through Composer.
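A minimal sketch of the idea, assuming a hypothetical app_a.movies table and per-application roles; since the applications are PHP, the DDL is issued here through PDO:

    <?php
    // Connect as the owning role (hypothetical DSN and credentials).
    $pdo = new PDO('pgsql:host=localhost;dbname=appdb', 'app_a_owner', 'secret');

    // Expose only the columns the other apps need, in the shared public schema.
    $pdo->exec("
        CREATE VIEW public.shared_movies AS
            SELECT id, title, studio
            FROM app_a.movies
    ");

    // Simple single-table views like this are automatically updatable in
    // Postgres (9.3+), so C can change the studio through the view.
    $pdo->exec("GRANT SELECT ON public.shared_movies TO app_b_role");
    $pdo->exec("GRANT SELECT, UPDATE ON public.shared_movies TO app_c_role");

Because the view lives in the shared public schema, B and C never touch app_a's tables directly, and A remains free to reorganize its own schema as long as the view keeps its contract.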
I just followed the Phalcon REST API docs and it worked! But I don't know how to properly set up more resources like the robots in the example; if I have 10 resources, I don't want them all in the same file.
Using the router?
Thanks a lot.
If by resources you mean models, and if it works as is, you can simply create more models in the my-rest-api/models/ directory. Normally you'd need to configure an autoloader, but micro apps probably know where to get models from.
If you are asking how to make your app better organised, you probably need to move away from the micro app and take advantage of the MVC pattern. In the example you worked with a single model and all related logic was handled in a single file; with MVC, all logic is organised into controllers. Normally a single controller handles logic related to a single model, or to models related to it. The official tutorial explores this in depth with further references.
Edit:
And as Julian himself pointed out, Phalcon\Mvc\Micro\Collection is another approach for micro apps.
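A minimal sketch of that approach, with a hypothetical RobotsController holding the handler logic; each resource's collection can live in its own file and be mounted on the shared Micro app:

    <?php
    use Phalcon\Mvc\Micro;
    use Phalcon\Mvc\Micro\Collection;

    $app = new Micro();

    // One collection per resource, e.g. defined in its own routes/robots.php.
    $robots = new Collection();
    $robots->setPrefix('/robots');
    // Lazy handler: the controller is only instantiated when a route matches.
    $robots->setHandler('RobotsController', true);
    $robots->get('/', 'index');           // GET  /robots
    $robots->get('/{id:[0-9]+}', 'show'); // GET  /robots/123
    $robots->post('/', 'create');         // POST /robots

    $app->mount($robots);
    $app->handle($_SERVER['REQUEST_URI'] ?? '/');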
Problem
I know this is a bit of a conceptual question, but I am designing a system and I am not sure if I want to use one script as a "Master Controller" that talks to my PHP classes for the business logic, or if I want to create multiple smaller controllers that each focus on certain tasks.
Example
For example, if I wanted to control ALL the functions of the site (user management, time sheets, reporting, and so forth), should I combine all those "getter/setter" functions into one script, or should I break it up into sections (one controller for user management, one for time sheets, and so forth)? I am using AJAX to call these scripts when needed.
My Request
My thinking is that I should use multiple smaller controllers; that way, when multiple users are accessing the site, they are not all trying to execute the same script at the same time. I would expect that if I used one big controller, it could cause problems with symptoms that would act like a DDoS. However, I am unsure whether that would actually happen and wanted to get some opinions/alternative suggestions.
BTW, this is not an MVC architecture per se; more of a custom monster I am creating.
Any insight would be greatly appreciated.
For a complex system with multiple AJAX calls, you might want to look at the Flux and React systems developed by Facebook. These are open source and available on GitHub. Facebook developed Flux because they ran into difficulties with a complex MVC system.
The Single Responsibility Principle suggests splitting your controller. It's a fundamental principle of good object-oriented design.
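A minimal sketch of that split, with hypothetical controller names, where a tiny front controller routes each AJAX call to the controller responsible for that one area:

    <?php
    // Each controller owns exactly one area of the site.
    class UserController {
        public function create(array $input): array { /* user logic */ return ['ok' => true]; }
    }

    class TimesheetController {
        public function create(array $input): array { /* timesheet logic */ return ['ok' => true]; }
    }

    // Front controller: routes e.g. POST /index.php?resource=users&action=create
    $controllers = [
        'users'      => new UserController(),
        'timesheets' => new TimesheetController(),
    ];

    $resource = $_GET['resource'] ?? '';
    $action   = $_GET['action'] ?? '';

    if (isset($controllers[$resource]) && method_exists($controllers[$resource], $action)) {
        header('Content-Type: application/json');
        echo json_encode($controllers[$resource]->$action($_POST));
    } else {
        http_response_code(404);
    }

Note that the split is for maintainability, not load: under a typical PHP setup each HTTP request runs in its own process/worker, so many users hitting one big script would not by itself create the DDoS-like contention the question describes.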
I have 3 projects and they mostly need the same models and data. So I'm thinking of creating a Core app that has the core models (User, Store, etc.) and creating a different app for each project. Besides using the core models, each can have its own models. What's the best way to do this?
I left this as a comment but it is a potential answer to this question.
Original comment
You might want to look into an HMVC package for Laravel. The only decent use case for it is multiple sites using the same core code. This would let you have multiple sites with a core code base, using internal cross-controller requests from the site controllers to the core code.
HMVC, or Hierarchical Model View Controller, as a concept extends the MVC pattern and allows developers to make cross-controller/route requests. In theory this would allow any number of sub-installations of Laravel (using a package like this) to call a single common ancestor application that would provide them with an API for dealing with specific requests.
This is good as it provides separation of the api/master app and its child application instances, leaving them free to implement their own logic.
I think this may be something that would help you in this instance (and isn't difficult to use); a rough sketch of the underlying idea follows below.
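As an illustration of what a cross-controller request boils down to in Laravel (without endorsing a specific HMVC package), a site controller can dispatch an internal sub-request to a core route; the /core/users URI here is hypothetical, and details vary by Laravel version:

    <?php
    use Illuminate\Http\Request;
    use Illuminate\Support\Facades\Route;

    class SiteUserController extends Controller
    {
        // Handles a request on this site by delegating to the core app's route.
        public function index()
        {
            // Build an internal request against the shared core code base.
            $subRequest = Request::create('/core/users', 'GET');

            // Dispatch it through the router as a sub-request; HMVC packages
            // wrap this pattern up and manage request/response state for you.
            return Route::dispatch($subRequest);
        }
    }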
I don't want to reinvent the wheel, so I'm asking how others do this...
I have 2 upcoming projects, one in Delphi and one in PHP (plus maybe one in Java), so info specific to those would be welcome, but a generic answer is also acceptable.
So, I have a bunch of PCs and a database server (ODBC) and want to develop an MVC app.
I guess that the model is on the d/b server and that the view is on each individual PC. Where is the controller? One on the d/b server, or one copy on each PC?
When writing data, I imagine it is enough to lock the relevant d/b table(s)? But how do I update all of those views and tell them that there is new data, or that the data they are working on has been modified or deleted by another user?
Any code is welcome as are URLs or book recommendations ... thanks
I suggest you start with this:
http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
It's a regular intranet app, right? Why do you want to split everything up physically? Models, views and controllers are just the respective files where you do the relevant work; they don't reside in physically separate locations. I think you have got the concept of MVC wrong. MVC means splitting and layering the code based on function, not keeping it physically separate.
To put it more clearly, in layman's terms: your models, views and controllers are just directories in your application. Your view files go into your views directory (not necessarily; this is changeable), your models, that is, your DB-related operation files, go into models, and the classes that control and drive your app go into your controllers directory.
Running it on an intranet is quite simple. All you have to do is host your files on a system that has a static IP or name within your network, then from other systems open a browser and point it at
http://myserver/myapp or http://192.168.100.100/myapp
I don't know anything about Delphi, but what I said above stands for PHP and many other languages.
So, taking in all the above points: you are not going to update the views separately on each system. All files are on your central server, and any changes made to them are automatically reflected when requested from the clients.
From your question I assume you are completely new to web development, or at least to MVC, so first have a look at a simple MVC framework like CodeIgniter in PHP. CI has very good docs, so you can come up to speed quickly.
Hope I have answered all your questions.
As far as I understand MVC, there is no rule about the location of the controller itself.
Please take into consideration that MVC is an architectural pattern, not a hardware or even logical design. While, as in any n-tier architecture, it can make sense to have the DB on a dedicated computer (for performance and maintainability/backup reasons), controllers and views can be... everywhere...
MVC is a pattern, that is, more a way of modeling/interfacing objects and classes than a way of distributing your application into modules. In fact, you can (and IMHO should) share MVC code among clients and servers, while the objects still implement Model/View/Controller in a separated way.
A "classic" MVC implementation (e.g. used by RoR or by DoR or Relax in the Delphi world - both in draft status) uses directories or files to split views and controllers. But this is just one implementation design of this pattern.
You can have a pure object MVC orientation, as we implemented for instance in our ORM framework. You can in fact have objects everywhere: to handle your model, to access the DB, to handle your business services.
In our mORMot Open Source framework, for Delphi 6-XE, you can follow this development pattern:
Data Tier is SQLite3 and/or an internal very fast in-memory database, and/or any other external database (via OleDB or OCI); most SQL queries are created on the fly by the ORM kernel;
Logic Tier is performed by the pure ORM aspect: you write Delphi classes which are mapped by the Data Tier into the database, and you can write your business logic on either the client or the server side, just by adding events or methods to the classes; a Service-Oriented Architecture (DataSnap-like) is also available, and can be used without any objects;
Presentation Tier is either a Delphi client, an AJAX application, or any other consumer able to communicate using RESTful JSON over HTTP/1.1 - in this case, PHP or Java clients are perfect candidates.
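For instance, a minimal sketch of a PHP client consuming such a RESTful JSON interface (the host and the /root/Movie URI are hypothetical):

    <?php
    // Fetch a record list from a hypothetical REST endpoint on the server.
    $ch = curl_init('http://myserver:8080/root/Movie');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, ['Accept: application/json']);

    $body = curl_exec($ch);
    if ($body === false) {
        die('Request failed: ' . curl_error($ch));
    }
    curl_close($ch);

    // The server answers with plain JSON, so any language can consume it.
    $movies = json_decode($body, true);
    foreach ($movies as $movie) {
        echo $movie['Title'] ?? '(untitled)', PHP_EOL;
    }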
I suggest you take the time to download and look at the mORMot documentation. It's far from perfect, but it tries to be complete. In the SAD document, you'll find some pages about the architecture we implemented, in particular MVC, n-Tier, SOA and ORM. Don't be afraid of the 1,000 pages of PDF - you don't have to read it all: I just added some general diagrams which may help you model your solution.
In all cases, taking a breath before implementing a solution for your project is a very good decision. The time you spend now looking at existing architectures will certainly save you pain in the future. Good start!