How should applications be structured? #23
Another pain point about organizing by type:
@tcrayford: Two possibilities: 1) the core promise of Raptor is true. The combination of sufficiently powerful routes and a sufficiently powerful injector will remove the need for an imperative translation between HTTP requests and domain objects (very unlikely). 2) We'll introduce such an explicit layer. It'll be yet another thing that either goes in resources as they exist or in YourAppName::Responders (or whatever their name is). Raptor does already have a concept called "responder", but it's internal and used to separate redirect responses, template rendering responses, etc.
I think 1) can happen, if custom injectors turn out to work well enough in practice (though naming conflicts worry me, and we haven't actually done anything with regard to custom injectors). Then domain objects can just be namespaced inside YourAppName.
Naming conflicts are a danger, but we can at least detect them at startup, since we have all of the injection sources loaded at App.new time anyway.
Agreed. If it were a resource in the REST sense, then perhaps a …
Representers gonna represent.
I like the idea of a REST layer, and I think that Raptor would facilitate one as easily as any other standard framework once its internals are solidified. Supporting REST as deeply as webmachine does tends to take over the structure of your system; I suspect that support at that level would require breaking apps up along different boundaries than the ones in Raptor.
I've been thinking about Hexagonal Architecture for a while. Conceptually, I like the idea that there is a divide between the "app" itself and delivery mechanisms (HTTP, client GUI, CLI, etc.). The web-specific code (including routes) should depend on the core app and not the other way around. Uncle Bob outlined a similar architecture in this talk: http://confreaks.net/videos/759-rubymidwest2011-keynote-architecture-the-lost-years. Bob describes concepts along the lines of Delivery Mechanisms, Request Models, "Interactors", Entities, Response Models, Presenters, and View Models.
Notably, the Request Models, "Interactors", and Response Models are all non-HTTP specific. The idea is you could use them if you build a client-side GUI or CLI app, and also call them from your tests to test the full stack of domain logic w/o the HTTP layers. Raptor seems pretty close to this, but there are areas where these concerns are not separated cleanly. Thoughts? Was Raptor inspired by any of these ideas?
As far as I know, neither of us was influenced by that talk, though I've been familiar with hexagonal architecture for a while. I almost wrote a comment earlier based on that same talk (I have seen it, but only after Raptor was created). I'm glad someone brought it up. :) Taking them one-by-one:
There are some cases (like response models) where Raptor doesn't give you anything, but sort of coaxes you into doing it yourself. I think that's a nice feature to have: constraints that guide you into the right path, rather than trying to build a giant framework with ResponseModel::Base. :) @brynary, do you see any holes in Raptor that we can fill by introducing these more rigorously (or loosening them)?
From my initial review, it feels like the Request Models and Interactor (not a fan of that name) responsibilities are handled via Injectors, Requirements, and the Routes themselves. This is the area where, for me personally, the Raptor approach is a bit tenuous. I think it's close, but it feels difficult to get a picture of "What are the steps performed when I get a request to update a blog post?". I can't quite put my finger on a way to clean it up, but I suspect there's a way to model that more cleanly at a lower level in Raptor, and then layer the routing and the convention-driven, more "magic" stuff on top of that, leaving the programmer free to pick the right level of abstraction given their problem.

I agree separating Presenters and View Models is overkill. The question at this end of the stack for me is: is it worth separating Presenters (HTTP-specific) from Response Models (non-HTTP-specific) in order to get the benefits of Hexagonal Architecture? So a Presenter would know about URLs and HTML, but a Response Model would not.

The biggest conflict I see between Raptor and Hexagonal Architecture (or Uncle Bob's approach) is that the concept of routing is both a) HTTP-specific and b) used to manage domain-level concerns (e.g. authorization implemented as a Requirement). Superficially, the routes are defined in what would otherwise be the business logic layer, but tying Requirements into the Routing seems to be a deal breaker. WDYT? -Bryan
The (somewhat hand-wavy) "route via authorization etc." stuff does feel weird and always has. If you expand it out fully, here's what a request could theoretically look like:
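A hedged sketch of that expansion (every specific here is invented for illustration; only the alternating shape is the point):

```ruby
# route:        does the path match /posts/:id?
# build models: load the session record for the request's cookie
# route:        is there a logged-in user?
# build models: load the user record
# route:        is the user allowed to touch this post?
# build models: load the post record and build its presenter
```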
You can see the pattern: route, build models, route, build models. Exposing all of that directly would send people running. :) In some web frameworks, this is solved by middleware (e.g., auth middleware) that does hit the database. In Rails, usually it's solved by ApplicationController and controller filters. In Raptor, it's solved with requirements and injection:
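A hedged sketch, borrowing the route DSL that appears later in this thread; the requirement name and the delegate target are assumptions:

```ruby
path "posts" do
  # :require names a requirement that must pass before this route
  # matches; injection supplies the requirement's arguments.
  show :require => :admin, :to => "Posts::Record.find_by_id"
end
```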
For this to make architectural sense, try thinking of the requirements as more about application logic than HTTP. Here's a theoretical requirement:
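Something like the following, perhaps; the match? signature and the session-based lookup are assumptions, not Raptor's actual API:

```ruby
class AdminRequirement
  # Depends only on the record layer; nothing here knows about HTTP.
  def self.match?(session_id)
    user = Users::Record.find_by_session_id(session_id)
    user && user.admin?
  end
end
```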
The dependencies go Raptor -> AdminRequirement -> Record. There are no web-related dependencies in the requirement, even though you think of a requirement as part of routing. It would probably be preferable to have the user directly injected, but the result would be basically the same. I feel like I'm rambling... :)
Yes, I understand AdminRequirement has no Web-related dependencies. However, I don't see a way to instruct the application to "Create a Post" without going through the router. A simple example would be:
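A hedged guess at the shape of such an example, with invented names:

```ruby
# "Joe User wants to create this post (w/ all the details)",
# expressed directly, with no router involved:
CreatePost.new(joe_user).call(:title => "Hello", :body => "...")
```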
It seems like there's a lot of value in being able to tell the app "Joe User wants to create this post (w/ all the details)" without involving Web-based concerns. I guess what I'm getting at is I don't see a way to make calls into the business logic of a Raptor app without going through the Router (which is Web-specific) even though the individual components themselves are mostly not Web-specific. Make sense?
Oh! I see. Let me try to make that example concrete. Let's do "users can create posts; if the user is an admin, it's always published" using an ARish database layer:
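A hedged reconstruction from the surrounding description; the class names and the ActiveRecord-style base class are assumptions:

```ruby
require "active_record"

module Posts
  class Record < ActiveRecord::Base
  end

  class Creator
    # current_user and the form fields arrive via injection sources;
    # the class itself is a plain old object and doesn't know that.
    def initialize(current_user, title, body)
      @current_user, @title, @body = current_user, title, body
    end

    def create
      post = Record.create(:title => @title, :body => @body)
      # Users can create posts; if the user is an admin, it's always
      # published.
      post.update_attributes(:published => true) if @current_user.admin?
      post
    end
  end
end
```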
You have to imagine an injection source for current_user, as well as one that's magically pulling form fields out from somewhere. That's another "might be a bad idea" thing that we haven't tried yet; if it's a bad idea, these methods would probably just take …. Either way, you can see that those classes themselves are usable in isolation. Raptor is injecting stuff into them, but the classes themselves don't know that. That's one of the rules of Raptor: it might invoke you in fancy ways, but you're just a plain old object. Does that make sense, or am I misinterpreting?
I just realized that you might mean: given a user, who might or might not be an admin, how do we create a post for him in whichever way is appropriate (published or not), without going through the router? In which case, my response is: hmmmmm...
@garybernhardt Also I think you mixed up a class name / method name change while editing your example code:
Fixed.
Gary -- Yes, I think you understood my question correctly in your last comment. Not sure if my example is the greatest, but the idea is that I'd like to be able to execute a core domain concept (e.g. "create a post"), with all of the domain logic involved with it (e.g. authorization), without any of the delivery mechanism details (e.g. this happens over PUT requests). Then, I could point my tests at that, or point a GUI app at it, or point a CLI UI at it, etc. In fairness, I've never done this before in a real app. But it feels like a core idea of both Hexagonal Architecture and Uncle Bob's architecture talk, and I identify with the purported benefits. This is possible in Raptor as-is, though (just like it is in Rails): you can simply not use Requirements for domain-level concerns, keeping the routes simpler. Then the object you route to would encapsulate the authorization logic. I'm just wondering if there's an evolution of the concepts Raptor currently supports that leads here. It's sort of splitting the Router in half.
Oh and BTW, if this isn't the sort of thing you were looking for in this thread feel free to redirect it. :)
It's not exactly what I had in mind, but it's really interesting and valuable, so no worries. :) I really need to build a non-trivial Raptor app and see how this plays out. I'm very torn now that I understand your point.
Some additional thoughts on how this could work. App structure:
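A hedged sketch of the layout being described, inferred from the rest of this comment:

```
app/
  routes.rb                 # the only HTTP-specific part
  actions/create_post.rb    # Actions -- the "Interactors"
  responses.rb              # PostSaved, ValidationFailure
  views/post.rb             # Views::Post -- basically a presenter
  records/post.rb           # persistence
```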
Note that Views::Post is basically a presenter. This highlights the "two step view" pattern. Actions are the "Interactors".
ValidationFailure and PostSaved act as "response models". They could be exceptions, but I'm generally opposed to using exceptions for non-exceptional control flow. This is the only HTTP-specific part:
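A hedged sketch of that part, using the route DSL shown further down in this thread; the response-model-to-response mapping is an assumption:

```ruby
path "posts" do
  create :to => "Actions::CreatePost.call",
         :PostSaved => redirect(:show),
         :ValidationFailure => render(:new)
end
```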
Routing can leverage conventions heavily to be as terse or verbose as is appropriate. In this case the only web-coupled components are the router and the injectors. So if I built a CLI app, that's what I'd swap out.
I'm very intrigued about where this is going... Inspired by Uncle Bob's talk, I started contributing to Webmachine—it strikes me as the most fully elaborated HTTP delivery mechanism available today in Ruby—with the intention of fleshing out Request/Response Models and Interactors on top of Webmachine. I just learned the term Hexagonal Architecture today, but the 'ports and adapters' description of the same concept has been increasingly on my mind for the last 6-12 months.

My daydream involves implementing Uncle Bob's 'Interactors' as 'Interactions' in the DCI sense, because I see DCI as an escape route from the 500-line User model: pull the Roles out of the User entity and extend a given User instance with an appropriate Role at runtime (e.g., …). Some of the concepts in Raptor are really exciting (esp. Injectors), but I also really like Webmachine's deep support for REST. That said, I, too, get tired thinking about the immense quantity of boilerplate implied in the Delivery Mechanism -> Request Model -> Interactor -> Entities -> Response Model -> Presenter -> View Model layering approach. Looking for opportunities to collapse some of those layers, I don't think the Presenter/View Model division is essential, but it will likely be helpful in complex views. I'm currently trying out the notion that Presenters are a single-entity special case of the more general concept of View Models (which are the first step of a two-step view), aka the …:

```ruby
module Responses
  ResourceCreated = Class.new(Struct.new(:resource)) do
    def saved?; true; end
  end

  ValidationFailed = Class.new(Struct.new(:resource)) do
    def saved?; false; end
  end
end

module Interactions
  class CreatePost
    # Interactions are initialized with a generic Request model
    def initialize(request)
      @identifiers = request.identifiers # identifiers parsed from the URL (generally path & query)
      @attributes = request.attributes   # typically the parsed POST body
    end

    def current_user
      @current_user ||= begin
        user = User.find(@identifiers[:user_id])
        user && user.extend(Roles::CurrentUser)
      end
    end

    def post
      @post ||= Post.new(@attributes).extend(Roles::Publishable)
    end

    def call
      if post.valid?
        if current_user.admin?
          post.publish
        else
          post.save_as_draft
        end
        Responses::ResourceCreated.new(post)
      else
        Responses::ValidationFailed.new(post)
      end
    end
  end
end

module Roles
  module CurrentUser
    def admin?
      !!read_attribute(:admin) # or however the role-based access control works
    end
  end

  module Publishable
    def publish
      # however this is defined
    end

    def save_as_draft
      # however this is defined
    end
  end
end
```

I'd like to bounce some of the ideas that are kicking around in my head off of other Rubyists who are heading in this direction, but I don't want to hijack this thread any further. Any suggestions on other places I should take this conversation?
I really dislike extending modules into things at runtime (and I'm against extending modules into things anyway).
I love where this is going, @brynary and @emmanuel. I hadn't actually typed a response model example up before reading those. It's surprisingly terse when you use a struct and route based on the return type! (I do agree with @tcrayford about extending at runtime, though. It also busts the method cache, which has a big perf hit.)
@brynary's example shows a full separation between HTTP and application, but I also wonder about application flow vs. application logic. Consider this actiony code from his example:

```ruby
if post.valid?
  if current_user.admin?
    post.publish
  else
    post.save_as_draft
  end
  ResourceCreated.new(post)
else
  ValidationFailed.new(post)
end
```

"if current_user.admin?" looks like app logic to me: that's a rule about how posts work, and it always holds. "if post.valid?" looks like application flow: it's about how the user's actions, having been de-HTTPed, affect the model. Binding the two together allows you to reuse this code in, e.g., a CLI app. But it doesn't allow you to reuse it in another part of the app logic that wants to create a post given a user. If the post validation failed, this method would silently return ValidationFailed. A couple of solutions come to mind:
I'd have thought checking whether posts are valid should happen at the routing level (we already have custom doohickeys for saying NO to a route [they are called requirements]). It feels icky (at least to me) to create a Post from something, then ask it if it is valid.
Just my $0.02 on this: if you like the directory structure, I'd recommend something like "presenters" or "presentations" over "views". First, since you had to explain it, I think that indicates that's how you think of it primarily. Also, and this is more political than technical, "views" is a very loaded term in ruby-land.
@garybernhardt I agree with your points. I'd be interested in seeing what the struct-return-response plus validation-failure-as-exception example looks like. At that point I'd personally only use Requirements for HTTP-level concerns, like I do in Rails. On the view side, I like the "two step view" approach, and I like implementing it with one object (you could call it a Presenter, View, or View Model) and one template, rather than separating a Presenter and View Model. I'm partial to the name of simply "View" rather than Presenter. It makes sense to me that you have an instance of a View, and you also have a corresponding Template.
@tcrayford, you're right. Maybe this is the answer to @brynary's concern about logic at the routing level? "Post is valid" is a flow concern, so it goes in the route; "posts by admins get autopublished" is a logic concern, so it doesn't go in the route (even though it could)? That seems pretty right to me. Though now the question is: can we actually pay attention to this subtle distinction while building apps?
Caveat: I'm new to Raptor and haven't used this. My thoughts are just based on reading this thread after finding out about it on Twitter.
I'd also wonder if this is indeed possible, particularly if you were also thinking of responses. In particular, I wonder how it could be elegantly supported when you want to take advantage of HTTP features, particularly those that make it such a useful application protocol. Examples would be caching of responses (ETags or Last-Modified), content negotiation, links in responses, and conditional requests. Take a small example: we are updating an Order. The first thing we want to do is work out if the request is valid, then whether the current "user" is authorized. Those are easy to do HTTP-agnostically (the code throws exceptions which are mapped to HTTP responses). However, next we want to work out whether the request is based on an up-to-date representation of the order (it's a conditional PUT). In this case that means comparing ETags. If that passes, we update the order, but when returning the response we'd need to set the appropriate cache headers, and the response will probably contain links to other resources. In the cases I've seen, doing this sort of thing elegantly does require a layer that is HTTP-aware, but obviously that layer can then sit on top of the nice HTTP-agnostic layer(s).
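A hedged sketch of that layering; none of these names are real Raptor or Webmachine API:

```ruby
require "digest/md5"
require "json"

class OrdersHttpLayer
  def put(request)
    order = Orders::Record.find(request.identifiers[:id])

    # Conditional PUT: refuse the update if the client's representation
    # is stale (ETag mismatch).
    return [412, {}, []] unless request.if_match == etag_for(order)

    # The HTTP-agnostic core does the actual work.
    result = Interactions::UpdateOrder.new(request).call

    # The HTTP-aware layer owns cache headers and response codes.
    [200, { "ETag" => etag_for(result.resource) }, [result.resource.to_json]]
  end

  private

  def etag_for(record)
    %("#{Digest::MD5.hexdigest(record.updated_at.to_s)}")
  end
end
```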
I don't think any of that requires extra stuff on raptor's part. If you need to do heavy http stuff, having your own http layer between raptor and the core of your application makes a lot of sense to me.
Interesting. Is that representative of the intended direction for Raptor? If so, it's really good to hear, because personally I'm very interested in building on a very capable/complete HTTP layer, with conditional requests, content negotiation, cache headers, links, etc. (i.e., more like Webmachine). If Raptor is not going to head in that direction, perhaps we can build reusable components in the application layer—i.e., the inner hexagon/everything right of the delivery mechanism in Uncle Bob's diagrams. This would be: Request/Response Models, Interactors/Interactions, and Presenters/Views/ViewModels (we really should settle on a name for this concept).
From my perspective, if you want to be doing heavy http stuff, you'll probably want some layering between raptor and your http stuff. I don't think either Gary or I want to build something that is close to Webmachine.
On this, I was wondering what the overall aim of Raptor is, or, put differently, how it will differ from other web frameworks out there?
There's a lot in the README about that. The core things (to me) are the injectors and the powerful routing.
HTTP is important to me personally, and I've been lecturing people about REST for at least five years. :) I think that being a good HTTP citizen is definitely a goal for Raptor, but OO design is the top priority. Raptor probably won't enforce strict REST in the deep way that e.g. webmachine does, though we certainly won't be building it to violate core REST principles by default.
I just realized that the injectors' implementation means that Raptor is not (and will likely not be) 1.8.x-compatible. Huzzah for that. Also, I'd like to point out that injectors are somewhat similar to (but much more robust and useful than) merb-action-args. Just sayin'... merb was ahead of its time (and much potential was lost in the Merb 2/Rails 3 merger). I do feel a twinge of concern about the concept of tying method parameter names to injectors, but that's a discussion for another place and/or time. That is all.
I wrote up a quick thought experiment based on this thread while on a plane yesterday. It contains:
You can see it here, including a README with a slightly longer description: https://github.com/garybernhardt/raptor/tree/master/design_experiment. I'd start with the README, then posts.rb. What do you guys think? It feels a bit big to me. I don't expect Raptor to be as terse as Rails, of course, but I also don't want to go off the deep end.
@emmanuel, does the ease of injector definition shown in that example ease your fear? I think that we can probably work it out so that an error is thrown if you ever try to define an injector name that already exists. We can also inspect all interactors' methods at startup time. We should then be able to guarantee that, once a Raptor app starts, all interactors' arguments are injectable, and no injectables are clobbering each other.
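A hedged sketch of that startup check (all names are assumptions):

```ruby
def validate_injection!(source_names, interactors)
  # Fail fast if two injection sources define the same name...
  duplicates = source_names.group_by { |n| n }.select { |_, v| v.size > 1 }.keys
  raise "Clobbered injectables: #{duplicates.inspect}" if duplicates.any?

  # ...and if any interactor argument has no injection source.
  interactors.each do |interactor|
    interactor.instance_method(:call).parameters.each do |_kind, name|
      unless source_names.include?(name)
        raise "No injectable named #{name} for #{interactor}"
      end
    end
  end
end
```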
@garybernhardt catching up on this thread, but related to your last comment: would it be possible to have a development-time rake task or command that prints all the injectables and their sources? I'm thinking like "rake routes" in Rails, which is very handy for validating correct configuration.
@garybernhardt I generally like your design experiment example. Here are my thoughts:
The last point I believe is a weakness for Rails. In the simple case, basic REST controllers feel heavyweight. Raptor may be better in those cases because you can route right to models. In the more complex cases, you can get into all sorts of trouble with encoding domain logic in controllers. Raptor can also be better in those cases because it gives you a clean point to insert service objects, isolate concerns into injectables, etc. One thing I hadn't previously considered -- how do you assert fine-grained control of the HTTP response in Raptor? For example, tweaking response headers.
I definitely think that an injection equivalent of "rake routes" is in order. I also like the idea of Raptor automatically writing both the route table and the injection table to specific files every time it reloads them. Then, when you commit, changes to those tables will show up in the diff. Seems like it'd be handy.

You're right about Raptor scaling up and down, I think. The design example I showed is no bigger than what I'd do in Rails, if I did need all that structure. I guess my concerns were unfounded. :)

Routing to records has been there since the beginning; all seven standard routes delegate to the records by default. The default show route retrieves and presents one record, for example:
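A hedged approximation (the class and method names are assumptions based on this thread, and as noted next, the real default also merges parameter overrides in):

```ruby
path "posts" do
  # Retrieve one record by id and hand it to the presenter.
  show :to => "Posts::Record.find_by_id"
end
```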
The expression of it in Raptor is slightly more complex because it needs to merge parameter overrides in, but that's what you get if you don't customize anything. :)
I tried to read everything, but it was a lot to catch up on. Sorry if this sucks. I also haven't seen the Uncle Bob thing yet, but I feel like I've been structuring my apps this way (mostly from trying CQRS stuff). I also do C#, not Rails. So yah.

One thing I dig is having Interactors (if I'm using that word right) be able to trust their data, only returning "success" and throwing everything else. So before an Interactor is called, I've done everything possible to ensure that the call will be successful, and if it isn't, then I feel happy throwing. So to me, validation is part of the route flow. Basically I'm replacing an if statement in the controller with smarter routes. Authorization is a similar concern to validation, and I treat it the same way. So this does mean CreatePost isn't just a single point that can be called from CLI to HTTP, but I do think you can structure validation/authorization in such a way that every piece is usable from CLI to HTTP. Each front end is just responsible for implementing its own flow. I'm so scared to use that word.

For injecting: I don't know why, but it makes me uncomfortable to see request parameters constructor-injected. I do think constructor injection is cool for other things (like injectables), but CreatePost#create(params) is nicer looking (with CreatePost#create(title, body) being the best). This is something I struggle a ton with in C#, because my request parameters aren't a hash but an object, so either (a) I need something to map this object to the method call of CreatePost#create, or (b) CreatePost#create takes that object, and my beautiful dream of having CreatePost#create not know anything about being in a web app is destroyed (the request object is pretty view-specific). This is kinda similar to whether you want CreatePost#create to take a hash or individual parameters.

One thing I'm totally blank on is Presenters. I currently have something that maps data(base) models to View Models, and it's all very dumb code that would insult many of you to write (property mapping/setting, basically). So is the goal that Presenters wrap data models and delegate to them, but also allow new properties? Like if a model has a date and you want it broken out month/day/year in the view, Presenters would be responsible for that?

Last remark... I totally dig being able to collapse/expand the levels depending on the application, and I think that is a nice feature to have.
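For what it's worth, a minimal sketch of that presenter role (names invented): wrap the record, delegate to it, and add view-oriented accessors such as the broken-out date parts.

```ruby
require "delegate"

class PostPresenter < SimpleDelegator
  # The wrapped post still answers title, body, etc. via delegation;
  # the presenter only adds view-friendly accessors on top.
  def published_year;  published_at.year;  end
  def published_month; published_at.month; end
  def published_day;   published_at.day;   end
end
```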
With your design experiment, you have:

```ruby
path "posts" do
  # We could also scope all route targets inside Interactors, allowing this
  # to say :to => "CreatePost.create".
  create :to => "Interactors::CreatePost.create",
         :ValidationFailure => render(:new)
end
```

If you wanted to do validation before the interactor runs:

```ruby
path "posts" do
  # We could also scope all route targets inside Interactors, allowing this
  # to say :to => "CreatePost.create".
  create :require => :valid_create_post, :to => "Interactors::CreatePost.create",
         :ValidationFailure => render(:new)
end
```

And then it would be up to me to write a valid_create_post requirement. Also, why not :with instead of :to? For the design experiment style, :with seems to fit? ... mild tangent ... there is a Scala web framework that treats routes like parser combinators, which I don't pretend to fully understand, but which these routes can look like in some respect. I guess that is skin deep though -- you can't have two or more create routes, with each matching in different cases. (right?)
Okay, so you can have multiple creates? This is pretty awesome. Would this work in the current raptor implementation?
Pretty sure it does.
Huey, …
I landed my the-great-inversion branch in master. It inverts the module structure, so that you have e.g. MyApp::Records::Post instead of Post::Record. I've talked about interactors with Tom. I think we decided that we should talk about them in the docs, but not bake them into Raptor. If Raptor knew about them, it would have to have multiple delegate targets per route (use the interactor if it exists; else fall back to the record). We don't want to do that; it's complicated. We haven't added the ability to route based on what the interactor/record/delegate returns; it's still based on exceptions only. Tom's currently building an app in Raptor, as I understand it, and I'm hoping that his experience there will drive this if it's needed.
Confirming that I'm building an app at the moment. The big thing that's hurting right now is layouts and assets. Layouts, I want to think about a bit more before doing something with them in raptor (as it is right now, I have a bunch of duplicated views). Assets can be fixed by middleware (which I have yet to do).
I suspect that you're going to have two problems when you bring that middleware in...
It's probably going to make startup time slower (and I'm already hurting from that). End-to-end specs take ~1s to start already (capybara-webkit and sequel connection time, I guess). Should raptor include asset serving itself?
Moving the asset discussion to #34.
I'm closing this. The module inversion has been in for a while. There are still outstanding questions, like raising vs. returning an error subject, but they're less pressing.
Current application structure:
A Raptor application is composed of resources, which can contain routes, requirements, presenters, and records. The file layout matches the module hierarchy. E.g., the example application contains a "posts" file with classes like Posts::PresentsOne and Posts::Record. In a larger resource, these might be separate files like posts/presenters.rb and posts/record.rb, but that's the programmer's problem; Raptor doesn't know about files.
My concerns:
The number of things that can go in a resource. To the four above we'll add custom injection sources, and we've waffled back and forth on separating the record class into two: a record and a repository to hold queries. That would make six things that can go in a resource, and I'm sure we'll discover more.
The word "resource". A Raptor resource is not a resource in the REST sense, which bothers me. It's not self-contained and not necessarily exposed to the outside. I don't know what to name it without dropping back to something even more generic like "component", which makes me think that it's the wrong abstraction. Tom and I have struggled with this since we were first sitting in a coffee shop in Seattle thinking about Raptor with pen and paper.
Possible solution:
If resources are to be removed as a concept, the only option I see is a Rails-style separation: organize by type (presenter, etc.), then by name. Rails does this by file: app/models/post.rb, and its users often dump all of the resulting classes into the global namespace. If we did it, I'd want to do it by module scope, not file. You define YourAppName::Presenters::Post, regardless of how the source is laid out. Raptor will still guess at the names for defaults (as Rails does).
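In code, the proposal looks something like this (a minimal sketch; the file split is left to the programmer):

```ruby
module YourAppName
  module Presenters
    class Post
      # presents one post, regardless of which file defines it
    end
  end

  module Records
    class Post
      # persistence for posts
    end
  end
end
```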
Problems:
Two things worry me about this change. First, it might turn apps into incomprehensible ravioli if the culture becomes "one class per file". Of course, this being Ruby, you could structure your files in the old way (posts.rb containing all post-related pieces). You'd just be reopening modules a lot. I'd really like to see how this works out.
Second, routes don't fit into this scheme as nicely as they do into resources. Currently, all routes are implicitly (and inescapably) prefixed with their resource name (e.g., "/posts/"). Decoupling the two means that you'll have to specify the prefixes somehow, and I fear descending into the hierarchical complexity of Rails routes (Raptor's routing is quite subtle as it is).
Do you share my fears about resource bigness? Do you share my fears about the module-based solution? Do you see another solution we've missed so far?