
How should applications be structured? #23

Closed
garybernhardt opened this issue Jan 8, 2012 · 53 comments

Comments

@garybernhardt
Owner

Current application structure:

A Raptor application is composed of resources, which can contain routes, requirements, presenters, and records. The file layout matches the module hierarchy. E.g., the example application contains a "posts" file with classes like Posts::PresentsOne and Posts::Record. In a larger resource, these might be separate files like posts/presenters.rb and posts/record.rb, but that's the programmer's problem; Raptor doesn't know about files.
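As a concrete sketch of that layout, here's what a single posts.rb might hold (the class internals below are invented for illustration; Raptor only cares about the module hierarchy):

```ruby
# posts.rb -- everything post-related lives under one module.
# The method bodies here are stand-ins, not Raptor's API.
module Posts
  # Presents a single post for rendering.
  class PresentsOne
    def initialize(post)
      @post = post
    end

    def title
      @post[:title].upcase
    end
  end

  # Data access for posts (stubbed in-memory here).
  class Record
    def self.all
      [{ title: "hello raptor" }]
    end
  end
end

presenter = Posts::PresentsOne.new(Posts::Record.all.first)
presenter.title  # => "HELLO RAPTOR"
```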

My concerns:

  1. The number of things that can go in a resource. To the four above we'll add custom injection sources, and we've waffled back and forth on separating the record class into two: a record and a repository to hold queries. That would make six things that can go in a resource, and I'm sure we'll discover more.

  2. The word "resource". A Raptor resource is not a resource in the REST sense, which bothers me. It's not self-contained and not necessarily exposed to the outside. I don't know what to name it without dropping back to something even more generic like "component", which makes me think that it's the wrong abstraction. Tom and I have struggled with this since we were first sitting in a coffee shop in Seattle thinking about Raptor with pen and paper.

Possible solution:

If resources are to be removed as a concept, the only option I see is a Rails-style separation: organize by type (presenter, etc.), then by name. Rails does this by file: app/models/post.rb, and its users often dump all of the resulting classes into the global namespace. If we did it, I'd want to do it by module scope, not file. You define YourAppName::Presenters::Post, regardless of how the source is laid out. Raptor will still guess at the names for defaults (as Rails does).

Problems:

Two things worry me about this change. First, it might turn apps into incomprehensible ravioli if the culture becomes "one class per file". Of course, this being Ruby, you could structure your files in the old way (posts.rb containing all post-related pieces). You'd just be reopening modules a lot. I'd really like to see how this works out.
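For instance, a topic-based posts.rb could still define the type-first names by reopening modules (a sketch; Blog and everything inside it are hypothetical names):

```ruby
# posts.rb under the module-scope scheme: one file per topic,
# reopening the type-based modules. All names are hypothetical.
module Blog
  module Presenters
    class Post
      def initialize(record)
        @record = record
      end

      def heading
        @record[:title]
      end
    end
  end

  module Records
    class Post
      def self.find(id)
        { id: id, title: "First post" }  # stubbed lookup
      end
    end
  end
end

Blog::Presenters::Post.new(Blog::Records::Post.find(1)).heading  # => "First post"
```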

Second, routes don't fit into this scheme as nicely as they do into resources. Currently, all routes are implicitly (and inescapably) prefixed with their resource name (e.g., "/posts/"). Decoupling the two means that you'll have to specify the prefixes somehow, and I fear descending into the hierarchical complexity of Rails routes (Raptor's routing is quite subtle as it is).

Do you share my fears about resource bigness? Do you share my fears about the module-based solution? Do you see another solution we've missed so far?

@tcrayford
Collaborator

Another pain point about organizing by type:
How do we classify the object that routes call? Is that a Resource (i.e. YourAppName::Resources::Post)?
That's what Jersey (which is conceptually similar to Raptor, if filled with Java and actually working) calls these things. Other possible names (all of which I dislike): Handler, Controller, Responder.

@garybernhardt
Owner Author

@tcrayford: Two possibilities: 1) The core promise of Raptor holds: the combination of sufficiently powerful routes and a sufficiently powerful injector removes the need for an imperative translation between HTTP requests and domain objects (very unlikely). 2) We'll introduce such an explicit layer. It'll be yet another thing that either goes in resources as they exist or in YourAppName::Responders (or whatever their name is).

Raptor does already have a concept called "responder", but it's internal and used to separate redirect responses, template rendering responses, etc.

@tcrayford
Collaborator

I think 1) can happen, if custom injectors turn out to work well enough in practice (though naming conflicts worry me, and we haven't actually done anything with regard to custom injectors yet). Then domain objects can just be namespaced inside YourAppName.

@garybernhardt
Owner Author

Naming conflicts are a danger, but we can at least detect them at startup, since we have all of the injection sources loaded at App.new time anyway.

@pda

pda commented Jan 9, 2012

A Raptor resource is not a resource in the REST sense, which bothers me.

Agreed.

If it were a resource in the REST sense, then perhaps a Representer could be responsible for the representation of a Resource via HTTP? It's a bit better than Handler, Controller, Responder etc.

@tcrayford
Collaborator

Representers gonna represent.

@garybernhardt
Owner Author

I like the idea of a REST layer, and I think that Raptor would facilitate one as easily as any other standard framework once its internals are solidified. Supporting REST as deeply as webmachine does tends to take over the structure of your system (as it does for webmachine); I suspect that support at that level would require breaking apps up along different boundaries than the ones in Raptor.

@brynary

brynary commented Jan 9, 2012

I've been thinking about Hexagonal Architecture for a while. Conceptually, I like the idea that there is a divide between the "app" itself and delivery mechanisms (HTTP, client GUI, CLI, etc.). The web-specific code (including routes) should depend on the core app and not the other way around.

Uncle Bob outlined a similar architecture in this talk: http://confreaks.net/videos/759-rubymidwest2011-keynote-architecture-the-lost-years.

Bob describes the following concepts:

  • Request Models -- An object that is not HTTP-specific that represents an action to be performed. e.g. Bryan wants to change the title of this Post. Controllers translate HTTP requests into Request Models.
  • Interactors -- These service objects represent actions to be taken. E.g. "PostCreator". They don't know anything about the delivery mechanism. They coordinate with business objects.
  • Response Models -- These are a non-HTTP specific representation of the output of the action.
  • View Models -- A delivery-mechanism-specific representation of the Response Model. We might call these Presenters. Bob refers to Presenters as separate classes (service objects) that translate Response Models into View Models.

Notably, the Request Models, "Interactors", and Response Models are all non-HTTP specific. The idea is you could use them if you do a client-side GUI, CLI app, and also call them from your tests to test the full stack of domain logic w/o the HTTP layers.

Raptor seems pretty close to this, but there are areas where these concerns are not separated cleanly.

Thoughts? Was Raptor inspired by any of these ideas?

@garybernhardt
Owner Author

As far as I know, neither of us was influenced by that talk, though I've been familiar with hexagonal architecture for a while. I almost wrote a comment earlier based on that same talk (I have seen it, but only after Raptor was created). I'm glad someone brought it up. :) Taking them one-by-one:

  • Request models: The goal of Raptor's routing and injection schemes is to get the request models' benefit without them actually existing. This probably won't work out entirely; maybe we'll introduce them. Either way, the need for this separation is definitely acknowledged in Raptor.
  • Interactors: Most of the classes that I put in lib in a Rails app are rightly called interactors. One of the big motivations for Raptor is to make this easier. It wouldn't bother me to introduce MyApp::Interactors in Raptor, delegating a user create to MyApp::Interactors::CreatesUsers (or some less-controversial naming convention).
  • Response models: These are implicitly supported, but there's no shortcut for constructing them. Whatever the delegate (interactor) returns gets passed to the decorator. For a page that requires more than one record's data, you'd basically be forced to construct a response model to aggregate the multiple records.
  • View models: In my mind, presenters fill this role. The idea of having all three of response models, view models, and presenters makes me tired. One seems necessary; I would happily try a second in a real app to see what happens.
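For the multi-record page case, a hand-rolled response model can be as small as a struct. A sketch (none of these names come from Raptor):

```ruby
# A response model aggregating several records' data for one page.
# All names here are invented for illustration.
DashboardData = Struct.new(:published_posts, :draft_count)

def build_dashboard(posts)
  drafts, published = posts.partition { |post| post[:draft] }
  DashboardData.new(published, drafts.size)
end

data = build_dashboard([
  { title: "a", draft: false },
  { title: "b", draft: true },
])
data.draft_count             # => 1
data.published_posts.length  # => 1
```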

There are some cases (like response models) where Raptor doesn't give you anything, but sort of coaxes you into doing it yourself. I think that's a nice feature to have: constraints that guide you into the right path, rather than trying to build a giant framework with ResponseModel::Base. :)

@brynary, do you see any holes in Raptor that we can fill by introducing these more rigorously (or loosening them)?

@brynary

brynary commented Jan 9, 2012

From my initial review, it feels like the Request Model and Interactor (not a fan of that name) responsibilities are handled via Injectors, Requirements, and the Routes themselves. This is the area where, for me personally, the Raptor approach is a bit tenuous. I think it's close, but it feels difficult to get a picture of "What are the steps performed when I get a request to update a blog post?". I can't quite put my finger on a way to clean it up, but I suspect there's a way to model that more cleanly at a lower level in Raptor, and then layer the routing and the more "magic" convention-driven stuff on top of that, leaving the programmer free to pick the right level of abstraction for their problem.

I agree separating Presenters and View Models is overkill. The question in this end of the stack for me is: Is it worth separating Presenters (HTTP-specific) from Response Models (non-HTTP specific) in order to get the benefits of Hexagonal Architecture? So a Presenter would know about URLs and HTML, but a Response Model would not.

The biggest conflict I see between Raptor and Hexagonal Architecture (or Uncle Bob's approach) is that the concept of routing is both a) HTTP-specific and b) used to manage domain-level concerns (e.g. authorization implemented as a Requirement). Superficially, the routes are defined in what would otherwise be the business logic layer, but tying Requirements into the Routing seems to be a deal breaker.

WDYT?

-Bryan

@garybernhardt
Owner Author

The (somewhat hand-wavy) "route via authorization etc." stuff does feel weird and always has. If you expand it out fully, here's what a request could theoretically look like:

  1. Do initial route based on URL and verb, returning multiple candidate routes
  2. Construct a request model by extracting some database records
  3. Decide which of several "sub-routes" matches that request model (e.g., separate actions for admins and users)
  4. Construct another request model by extracting more database records

You can see the pattern: route, build models, route, build models. Exposing all of that directly would send people running. :) In some web frameworks, this is solved by middleware (e.g., auth middleware) that does hit the database. In Rails, usually it's solved by ApplicationController and controller filters. In Raptor, it's solved with requirements and injection:

  1. Do initial route based on URL and verb, returning multiple candidate routes
  2. Try to satisfy each route, injecting the requirements' arguments, possibly invoking injectors that hit the database
  3. Invoke the route's delegate, injecting its arguments which, together, are effectively an unpacked request model

For this to make architectural sense, try thinking of the requirements as more about application logic than HTTP. Here's a theoretical requirement:

class AdminRequirement
  def initialize(username); @username = username; end
  def match?; Records::User.find_by_username(@username).admin?; end
end

The dependencies go Raptor -> AdminRequirement -> Record. There are no web-related dependencies in the requirement, even though you think of a requirement as part of routing.

It would probably be preferable to have the user directly injected, but the result would be basically the same.
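Sketched out, the directly-injected version collapses to a requirement with no lookup at all (User here is a stand-in struct, not a real record class):

```ruby
# The same requirement, with the user injected directly instead of
# being looked up by username. User is a stand-in for a record class.
class AdminRequirement
  def initialize(current_user)
    @current_user = current_user
  end

  def match?
    @current_user.admin?
  end
end

User = Struct.new(:name, :admin) do
  def admin?
    admin
  end
end

AdminRequirement.new(User.new("alice", true)).match?  # => true
AdminRequirement.new(User.new("bob", false)).match?   # => false
```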

I feel like I'm rambling... :)

@brynary

brynary commented Jan 9, 2012

Yes, I understand AdminRequirement has no Web-related dependencies. However, I don't see a way to instruct the application to "Create a Post" without going through the router.

A simple example would be:

  • Any user can create a post
  • Only admins can optionally publish the post when they create it

It seems like there's a lot of value in being able to tell the app "Joe User wants to create this post (w/ all the details)" without involving Web-based concerns.

I guess what I'm getting at is I don't see a way to make calls into the business logic of a Raptor app without going through the Router (which is Web specific) even though the individual components themselves are mostly not Web-specific.

Make sense?

@garybernhardt
Owner Author

Oh! I see. Let me try to make that example concrete. Let's do "users can create posts; if the user is an admin, it's always published" using an ARish database layer:

create :require => :admin, :to => "CreatesPosts#create_published"
create :to => "CreatesPosts#create"

class CreatesPosts
  def self.create(current_user, title, body)
    Post.create!(:user => current_user, :title => title, :body => body)
  end

  def self.create_published(*args)
    create(*args).publish
  end
end

You have to imagine an injection source for current_user, as well as one that's magically pulling form fields out from somewhere. That's another "might be a bad idea" thing that we haven't tried yet; if it's a bad idea, these methods would probably just take params and pull the fields out.
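To make the "imagine an injection source" part concrete, here's pure speculation about what one could look like (none of this is Raptor's real API; the data shapes are invented):

```ruby
# Hypothetical injection source: a plain object whose method name
# (current_user) matches the delegate argument it can supply.
# Nothing here is Raptor's actual API.
class CurrentUserSource
  def initialize(session_store)
    @sessions = session_store
  end

  def current_user(request)
    @sessions[request[:session_id]]
  end
end

sessions = { "abc" => { name: "joe" } }
source = CurrentUserSource.new(sessions)
source.current_user(session_id: "abc")  # => { name: "joe" }
source.current_user(session_id: "zzz")  # => nil
```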

Either way, you can see that those classes themselves are usable in isolation. Raptor is injecting stuff into them, but the classes themselves don't know that. That's one of the rules of Raptor: it might invoke you in fancy ways, but you're just a plain old object. Does that make sense, or am I misinterpreting?

@garybernhardt
Owner Author

I just realized that you might mean: given a user, who might or might not be an admin, how do we create a post for him in whichever way is appropriate (published or not), without going through the router?

In which case, my response is: hmmmmm...

@pda

pda commented Jan 9, 2012

@garybernhardt Also I think you mixed up a class name / method name change while editing your example code: s/CreatesPublishedPosts#create/CreatesPosts#create_published/

@garybernhardt
Owner Author

Fixed.

@brynary

brynary commented Jan 9, 2012

Gary -- Yes, I think you understood my question correctly in your last comment.

Not sure if my example is the greatest, but the idea is that I'd like to be able to execute a core domain concept (e.g. "create a post"), with all of the domain logic involved with it (e.g. authorization) without any of the delivery mechanism details (e.g. this happens over PUT requests).

Then, I could point my tests at that, or point a GUI app at it, or point a CLI UI at it, etc. In fairness, I've never done this before in a real app. But it feels like a core idea of both Hexagonal Architecture as well as Uncle Bob's architecture talk, and I feel like I identify with the benefits purported.

This is possible in Raptor as-is, though (just like it is in Rails): you can simply not use Requirements for domain-level concerns, keeping the routes simpler. Then the object you route to would encapsulate the authorization logic. I'm just wondering if there's an evolution of the concepts Raptor currently supports that leads here. It's sort of splitting the Router in half.

@brynary

brynary commented Jan 9, 2012

Oh and BTW, if this isn't the sort of thing you were looking for in this thread feel free to redirect it. :)

@garybernhardt
Owner Author

It's not exactly what I had in mind, but it's really interesting and valuable, so no worries. :)

I really need to build a non-trivial Raptor app and see how this plays out. I'm very torn now that I understand your point.

@brynary

brynary commented Jan 10, 2012

Some additional thoughts on how this could work. App structure:

config.ru
lib/blog.rb
lib/blog/router.rb
lib/blog/post.rb
lib/blog/actions/create_post.rb
lib/blog/views/post.rb
lib/blog/views/post_index.rb
templates/posts/new.html.slim
templates/posts/index.html.slim
templates/posts/show.html.slim

Note that a Views::Post is basically a presenter. This highlights the "two step view" pattern. Actions are "Interactors".

class Actions::CreatePost
  PostSaved = Class.new(Struct.new(:post))

  # Initializer params are inferred and injected
  def initialize(user, post_attributes)
    @user = user
    @post_attributes = post_attributes
  end

  def execute
    post = Post.new(@post_attributes)

    if post.valid?
      if @user.admin?
        post.publish
      else
        post.save_as_draft
      end

      PostSaved.new(post)
    else
      ValidationFailure.new(post)
    end
  end
end

ValidationFailure and PostSaved act as "response models". They could be exceptions, but I'm generally opposed to using exceptions for non-exceptional control flow.

This is the only HTTP-specific part:

Raptor.route do
  inject Injectors::CurrentUser

  resources :posts do
    action :create, CreatePost,
      PostSaved: redirect_to(:show),
      ValidationFailure: render(:new)
  end
end

Routing can leverage conventions heavily to be as terse or verbose as is appropriate. In this case the only web-coupled components are the router and the injectors. So if I built a CLI app, that's what I'd swap out.

@emmanuel

I'm very intrigued about where this is going... Inspired by Uncle Bob's talk, I started contributing to Webmachine—it strikes me as the most fully elaborated HTTP delivery mechanism available today in Ruby—with the intention of fleshing out Request/Response Models and Interactors on top of Webmachine. I just learned the term Hexagonal Architecture today, but the 'ports and adapters' description of the same concept has been increasingly on my mind for the last 6-12 months.

My daydream involves implementing Uncle Bob's 'Interactors' as 'Interactions' in the DCI sense, because I see DCI as an escape route from the 500-line User model: pull the Roles out of the User entity and extend a given User instance with an appropriate Role at runtime (eg., user = User.find(params[:user_id]); user.extend(User::LoggedIn)).

Some of the concepts in Raptor are really exciting (esp. Injectors), but I also really like Webmachine's deep support for REST. That said, I, too, get tired thinking about the immense quantity of boilerplate implied in the Delivery Mechanism -> Request Model -> Interactor -> Entities -> Response Model -> Presenter -> View Model layering approach.

Looking for opportunities to collapse some of those layers: I don't think the Presenter/View Model division is essential, but it will likely be helpful in complex views. I'm currently trying out the notion that Presenters are a single-entity special case of the more general concept of View Models (which are the first step of a two-step view), aka the View in Handlebars' View/Template distinction.

I'm not sure yet whether I believe that Injectors can eliminate the Request Model from the equation, and I'm not sure whether that's desirable. Having the Interaction (ie., Interactor) responsible for an Entity's entire lifecycle (ie., initialized with an id instead of an instance) implies to me that the Interaction would provide an API covering all aspects of Entity access and mutation. Here's a rough adaptation of your example, as I'm conceiving it:

module Responses
  ResourceCreated = Class.new(Struct.new(:resource)) do
    def saved?; true; end
  end
  ValidationFailed = Class.new(Struct.new(:resource)) do
    def saved?; false; end
  end
end

class Interactions::CreatePost

  # Interactions are initialized with a generic Request model
  def initialize(request)
    @identifiers = request.identifiers # identifiers parsed from the URL (generally path & query)
    @attributes  = request.attributes  # typically the parsed POST body
  end

  def current_user
    @current_user ||= begin
      user = User.find(@identifiers[:user_id])
      user && user.extend(Roles::CurrentUser)
    end
  end

  def post
    @post ||= Post.new(@attributes).extend(Roles::Publishable)
  end

  def call
    if post.valid?
      if current_user.admin?
        post.publish
      else
        post.save_as_draft
      end

      ResourceCreated.new(post)
    else
      ValidationFailed.new(post)
    end
  end
end

module Roles
  module CurrentUser
    def admin?
      !!read_attribute(:admin) # or however the role-based access control works
    end
  end

  module Publishable
    def publish
      # however this is defined
    end

    def save_as_draft
      # however this is defined
    end
  end
end

I'd like to bounce some of the ideas that are kicking around in my head off of other Rubyists who are heading in this direction, but I don't want to hijack this thread any further. Any suggestions on other places I should take this conversation?

@tcrayford
Collaborator

I really dislike extending modules into things at runtime (and I'm against extending modules into things anyway).

@garybernhardt
Owner Author

I love where this is going, @brynary and @emmanuel. I hadn't actually typed a response model example up before reading those. It's surprisingly terse when you use a struct and route based on the return type!

(I do agree with @tcrayford about extending at runtime, though. It also busts the method cache, which has a big perf hit.)

@garybernhardt
Owner Author

@brynary's example shows a full separation between HTTP and application, but I also wonder about application flow vs. application logic. Consider this actiony code from his example:

if post.valid?
  if current_user.admin?
    post.publish
  else
    post.save_as_draft
  end

  ResourceCreated.new(post)
else
  ValidationFailed.new(post)
end

"if current_user.admin?" looks like app logic to me: that's a rule about how posts work, and it always holds. "if post.valid?" looks like application flow: it's about how the user's actions, having been de-HTTPed, affect the model.

Binding the two together allows you to reuse this code in, e.g., a CLI app. But it doesn't allow you to reuse it in another part of the app logic that wants to create a post given a user. If the post validation failed, this method would silently return ValidationFailed.

A couple of solutions come to mind:

  1. Try to separate "flow" from "interactors". I've spent a total of two minutes thinking about this and, while cute, it sounds like too much in the same way that presenter->view->template does.
  2. Leave it as-is, but make ValidationFailed an exception, so it at least can't fail silently when re-used. This is the way Raptor's router currently works.
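Option 2 might look like this sketch (names carried over from the earlier examples; the validity check is stubbed):

```ruby
# ValidationFailed as an exception: reuse from elsewhere in the app
# can't silently ignore it. PostSaved stays a plain response model.
class ValidationFailed < StandardError
  attr_reader :record

  def initialize(record)
    @record = record
    super("validation failed")
  end
end

PostSaved = Struct.new(:post)

def create_post(post)
  raise ValidationFailed.new(post) unless post[:valid]
  PostSaved.new(post)
end

create_post(valid: true)      # returns a PostSaved
# create_post(valid: false)   # would raise ValidationFailed
```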

@tcrayford
Collaborator

I'd have thought checking whether posts are valid should be happening at the routing level (we already have custom doohickeys for saying NO to a route [they are called requirements]). It feels icky, soap-requiring even, to create a Post from something and then ask it whether it's valid (at least to me).
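Pushed to the routing level, validity becomes just another requirement. A sketch (the real Post.new(params).valid? check is stubbed with a simple presence test):

```ruby
# Validity as a routing-level requirement, so the delegate never
# sees an invalid post. The validity check itself is stubbed.
class ValidPostRequirement
  def initialize(params)
    @params = params
  end

  def match?
    # Stand-in for Post.new(@params).valid?
    !@params[:title].to_s.empty?
  end
end

ValidPostRequirement.new(title: "hi").match?  # => true
ValidPostRequirement.new(title: "").match?    # => false
```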

@bemurphy

Note a Views::Post is basically a presenter

Just my $0.02 on this, if you like the directory structure, I'd recommend something like "presenters" or "presentations" over "views". First, since you had to explain it, I think that indicates that's how you think of it primarily. Also, and this is more political than technical, "views" is a very loaded term in ruby-land.

@brynary

brynary commented Jan 10, 2012

@garybernhardt I agree with your points. I'd be interested in seeing how the struct-return-response plus validation-failed-as-exception example looks. At that point I'd personally only use Requirements for HTTP-level concerns, like I do in Rails.

On the view side, I like the "two step view" approach and I like implementing it with one object (you could call it a Presenter, View, or View Model) and one template rather than separating a Presenter and View Model. I'm partial to the name of simply "View" rather than Presenter. It makes sense to me that you have an instance of a View, and you also have a corresponding Template.

@garybernhardt
Owner Author

@tcrayford, you're right. Maybe this is the answer to @brynary's concern about logic at the routing level? "Post is valid" is a flow concern, so it goes in the route; "posts by admins get autopublished" is a logic concern, so it doesn't go in the route (even though it could)?

That seems pretty right to me. Though now the question is: can we actually pay attention to this subtle distinction while building apps?

@colin-jack

Caveat: I'm new to Raptor and haven't used it. My thoughts are just based on reading this thread after finding out about it on Twitter.

  1. the core promise of Raptor is true. The combination of sufficiently powerful routes and a sufficiently powerful
    injector will remove the need for an imperative translation between HTTP requests and domain objects (very unlikely).

I'd also wonder if this is indeed possible, particularly if you were also thinking of responses.

In particular, I wonder how it could be elegantly supported when you want to take advantage of HTTP features, particularly those that make it such a useful application protocol. Examples would be caching of responses (ETags or Last-Modified), content negotiation, links in responses, and conditional requests.

Take a small example: we're updating an Order. The first thing we want to do is work out whether the request is valid, then whether the current "user" is authorised. Those are easy to do HTTP-agnostically (the code throws exceptions which are mapped to HTTP responses).

However, next we want to work out whether the request is based on an up-to-date representation of the order (it's a conditional PUT). In this case that means comparing ETags. If that check passes, we update the order, but when returning the response we'd need to set the appropriate cache headers, and the response will probably contain links to other resources.

In the cases I've seen, doing this sort of thing elegantly does require a layer that is HTTP-aware, but obviously that layer can then sit on top of the nice HTTP-agnostic layer(s).

@tcrayford
Collaborator

I don't think any of that requires extra stuff on raptor's part. If you need to do heavy http stuff, having your own http layer between raptor and the core of your application makes a lot of sense to me.

@emmanuel

Interesting. Is that representative of the intended direction for Raptor? If so, it's really good to hear, because personally I'm very interested in building on a very capable/complete HTTP layer, with conditional requests, content negotiation, cache headers, links, etc (ie., more like Webmachine).

If Raptor is not going to head in that direction, perhaps we can build reusable components in the application layer—ie., the inner hexagon/everything right of the delivery mechanism in Uncle Bob's diagrams. This would be: Request/Response Models, Interactors/Interactions, and Presenters/Views/ViewModels (we really should settle on a name for this concept).

@tcrayford
Collaborator

From my perspective, if you want to be doing heavy http stuff, you'll probably want some layering between raptor and your http stuff. I don't think either Gary or I want to build something that is close to Webmachine.

@colin-jack

On this, I was wondering what the overall aim of Raptor is; put differently, how will it differ from other web frameworks out there?

@tcrayford
Collaborator

There's a lot in the README about that. The core things (to me) are the injectors and the powerful routing.

@garybernhardt
Owner Author

HTTP is important to me personally, and I've been lecturing people about REST for at least five years. :) I think that being a good HTTP citizen is definitely a goal for Raptor, but OO design is the top priority. Raptor probably won't enforce strict REST in the deep way that e.g. webmachine does, though we certainly won't be building it to violate core REST principles by default.

@emmanuel

I just realized that injectors' implementation means that Raptor is not (and will likely not be) 1.8.x-compatible. Huzzah for that.

Also, I'd like to point out that injectors are somewhat similar to (but much more robust and useful than) merb-action-args. Just sayin'... merb was ahead of its time (and much potential was lost in the Merb 2/Rails 3 merger).

I do feel a twinge of concern about the concept of tying method params names to injectors, but that's a discussion for another place &/or time.

That is all.

@garybernhardt
Owner Author

I wrote up a quick thought experiment based on this thread while on a plane yesterday. It contains:

  • An interactor in roughly the style shown by @brynary, with a couple of tweaks.
  • Supporting code with my proposed organizational scheme: files are named by topic (e.g., posts.rb), but they may contain many classes in different modules (injectables, interactors, models, etc.).
  • Separated models and records. Models know about records, but records don't know about models.
  • A working spec file for the interactor (!), even though Raptor doesn't support any of this yet. It's a nice display of this scheme affording isolation.
  • An example of how to do very specific injectables: I define a "post_params" injectable right in posts.rb. I think this is how we get around the question of "should we allow dynamic names for injectables?"

You can see it here, including a README with a slightly longer description: https://github.com/garybernhardt/raptor/tree/master/design_experiment. I'd start with the README, then posts.rb.

What do you guys think? It feels a bit big to me. I don't expect Raptor to be as terse as Rails, of course, but I also don't want to go off the deep end.

@garybernhardt
Owner Author

@emmanuel, does the ease of injector definition shown in that example ease your fear? I think that we can probably work it out so that an error is thrown if you ever try to define an injector name that already exists. We can also inspect all interactors' methods at startup time. We should then be able to guarantee that, once a Raptor app starts, all interactors' arguments are injectable, and no injectables are clobbering each other.
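A startup check along those lines could be sketched like this (the data shapes are invented; a real implementation would derive them from the loaded injection sources and interactors):

```ruby
# At App.new time: no two sources may define the same injectable,
# and every interactor argument must be injectable. Shapes invented.
def validate_injectables!(sources, interactors)
  names = sources.flat_map { |source| source[:provides] }

  duplicates = names.tally.select { |_, count| count > 1 }.keys
  raise "clobbered injectables: #{duplicates.join(', ')}" unless duplicates.empty?

  interactors.each do |interactor|
    missing = interactor[:args] - names
    raise "uninjectable arguments: #{missing.join(', ')}" unless missing.empty?
  end

  true
end

sources     = [{ provides: [:current_user, :post_params] }]
interactors = [{ args: [:current_user, :post_params] }]
validate_injectables!(sources, interactors)  # => true
```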

@brynary

brynary commented Jan 17, 2012

@garybernhardt catching up on this thread, but related to your last comment, would it be possible to have a development-time rake task or command that can be run that prints all the injectables and their sources?

I'm thinking like "rake routes" in Rails, which is very handy for validating correct configuration.
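Something like this would do it (a sketch; the source data shapes are invented, since Raptor hasn't settled on any of this):

```ruby
# A "rake injectables"-style report: each injectable name and the
# file that defines it, sorted for stable output. Shapes invented.
def injectables_report(sources)
  sources.flat_map { |source|
    source[:provides].map { |name| format("%-20s %s", name, source[:file]) }
  }.sort
end

sources = [
  { file: "posts.rb", provides: [:post_params] },
  { file: "app.rb",   provides: [:current_user] },
]
puts injectables_report(sources)
```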

@brynary

brynary commented Jan 17, 2012

@garybernhardt I generally like your design experiment example. Here are my thoughts:

  • File and module structures should never be imposed by Raptor, though conventions are useful. For example, a convention could be searching in the Interactors module first. The conventions should be configurable, though, so people who have different preferences can also benefit. (This should be simple and straightforward because Raptor itself is lightweight -- there won't be many settings at all.)
  • I think all of the additional boilerplate here is something that Raptor can support without requiring. For example, in simpler cases it may be fine to route directly to the model layer. In more complex situations, you can route to an interactor style object without breaking anything. I really like the idea that Raptor allows your app to scale up and down with your requirements.

The last point I believe is a weakness for Rails. In the simple case, basic REST controllers feel heavyweight. Raptor may be better in those cases because you can route right to models.

In the more complex cases, you can get into all sorts of trouble with encoding domain logic in controllers. Raptor can also be better in those cases because it gives you a clean point to insert service objects, and isolate concerns into injectables, etc.

One thing I hadn't previously considered -- how do you assert fine grained control of the HTTP response in Raptor? For example, tweaking response headers.

@garybernhardt
Owner Author

I definitely think that an injection equivalent of "rake routes" is in order. I also like the idea of Raptor automatically writing both the route table and injection table to specific files every time it reloads them. Then, when you commit, changes to those tables will show up in the diff. Seems like it'd be handy.

You're right about Raptor scaling up and down, I think. The design example I showed is no bigger than what I'd do in Rails, if I did need all that structure. I guess my concerns were unfounded. :)

Routing to records by default has been there since the beginning; all seven standard routes are set up to delegate to the records by default. The default show route retrieves and presents one record, for example:

route(:show, "GET", "/your_resource_name/:id", :present => :one, :to => "YourModule::Record.find_by_id")

The expression of it in Raptor is slightly more complex because it needs to merge parameter overrides in, but that's what you get if you don't customize anything. :)

@eyston

eyston commented Jan 18, 2012

I tried to read everything, but it was a lot to catch up on. Sorry if this sucks. I also haven't seen the Uncle Bob thing yet, but I feel like I've been structuring my apps this way (mostly from trying CQRS stuff). I also do C#, not rails. So yah.

One thing I dig is having Interactors (if I'm using that word right) be able to trust their data, only returning 'success' and throwing on everything else. So before an Interactor is called, I've done everything possible to ensure that the call will be successful, and if it isn't then I feel happy throwing. So to me I would see validation as part of the route flow. Basically I'm replacing an if statement in the controller with smarter routes.

Authorization is a similar concern to validation and I treat it the same way.

So this does mean CreatePost isn't just a single point that can be called from CLI to HTTP, but I do think you can structure validation/authorization in such a way that every piece is usable from CLI to HTTP. Each front end is just responsible for implementing its own flow. I'm so scared to use that word.

For injecting, I don't know why, but it makes me uncomfortable to see request parameters constructor injected. I do think constructor injection is cool for other things (like injectables), but CreatePost#create params is nicer looking (with CreatePost#create title, body being the best). This is something I struggle a ton with in C# because my request parameters aren't a hash, but an object, so either I need (a) something to map this object to the method call of CreatePost#create or (b) CreatePost#create takes that object and my beautiful dream of having CreatePost#create not know anything about it being in a web app is destroyed (the request object is pretty view specific). This is kinda similar to if you want CreatePost#create to take a hash or individual parameters.
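To make the contrast concrete, here's an illustrative sketch of the two styles (CreatePost is from the design experiment; the method bodies are made up):

```ruby
# Illustrative only -- contrasting the two calling styles discussed above.
class CreatePost
  # Style (a): take the whole params hash; the interactor has to know
  # the shape of web-ish input and dig the fields out itself.
  def create_from_params(params)
    create(params.fetch("title"), params.fetch("body"))
  end

  # Style (b): take individual arguments; nothing here knows or cares
  # that it's running inside a web app.
  def create(title, body)
    { title: title, body: body } # stand-in for real persistence
  end
end
```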

One thing I'm totally blank on is Presenters. I currently have something that maps data(base) models to View Models, and it's all very dumb code that would insult many of you to write (property mapping/setting, basically). So is the goal that Presenters wrap data models and delegate to them, but also allow new properties? Like if a model has a date and you want it broken out into month/day/year in the view, Presenters would be responsible for that?
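Something like this, maybe? (A sketch of what I imagine, using Ruby's stdlib SimpleDelegator; `PostPresenter` and `published_on` are made-up names.)

```ruby
require "date"
require "delegate"

# Hypothetical presenter: wraps a post record, delegates everything it
# doesn't define, and adds view-specific readers for the date parts.
class PostPresenter < SimpleDelegator
  def month; published_on.month; end
  def day;   published_on.day;   end
  def year;  published_on.year;  end
end
```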

Last remark ... I totally dig being able to collapse/expand the levels depending on the application and think that is a nice feature to have.

@eyston

eyston commented Jan 18, 2012

With your design experiment, you have:

path "posts" do
    # We could also scope all route targets inside Interactors, allowing this
    # to say :to => "CreatePost.create".
    create :to => "Interactors::CreatePost.create",
      :ValidationFailure => render(:new)
end

CreatePost.create returns a Post. Raptor then redirects to posts/show/:id with the :id of the returned Post?

If you wanted to do validation before CreatePost.create it would be:

path "posts" do
    # We could also scope all route targets inside Interactors, allowing this
    # to say :to => "CreatePost.create".
    create :require => :valid_create_post, :to => "Interactors::CreatePost.create",
      :ValidationFailure => render(:new)
end

And then it would be up to me to write a :valid_create_post constraint? I haven't looked at the details of constraint implementation yet, so I'm not sure if that makes sense.

Also, why not :with instead of :to? For the design experiment style :with seems to fit?

... mild tangent ... there is a scala web framework that treats routes like parser combinators, which I don't pretend to fully understand, but which these routes resemble in some respects. I guess that is skin deep though -- you can't have two or more create routes, each matching in different cases. (right?)

@eyston

eyston commented Jan 19, 2012

create :require => :admin, :to => "CreatesPosts#create_published"
create :to => "CreatesPosts#create"

Okay, so you can have multiple creates? This is pretty awesome. Would this work in the current raptor implementation?

@tcrayford
Collaborator

Pretty sure it does.

@garybernhardt
Owner Author

Huey,

  • Validation can be part of the route flow even if the controller is the one triggering the validation. A route can say "ValidationError => redirect(somewhere)" to route out on the exception.
  • I imagine Record.create getting params injected. If you look at the FakeRecord class used in the old example, that's what it does.
  • Yes, the thing Rails culture calls presenters are usually just for wrapping records to provide fine-grained or slightly different representations to the templates.
  • In your code example, you don't need the valid_create_post. In fact, if you delegate to "PostRecord.create" instead of an interactor, that route is a complete application that will save incoming records (by injecting params) and render a different action if PostRecord.create raises a validation failure.
  • Yes, Raptor already supports multiple routes with the same verb/path. The "create", "new", etc. methods are just thin wrappers around the "route" method (you can see this in lib/raptor/router.rb). When Raptor routes an incoming request, it first filters the route table to everything that matches the verb/path, then begins running each candidate route's requirements until one matches.
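That filter-then-try-requirements strategy can be sketched like this (a simplification of the idea, not the actual lib/raptor/router.rb code; `Route` and `match_route` are made-up names):

```ruby
# Sketch of the matching strategy described above: first narrow the route
# table to candidates with the same verb and path, then take the first
# candidate whose requirement passes (routes with no requirement always pass).
Route = Struct.new(:verb, :path, :requirement, :target)

def match_route(routes, verb, path, context)
  candidates = routes.select { |r| r.verb == verb && r.path == path }
  candidates.find { |r| r.requirement.nil? || r.requirement.call(context) }
end
```

With an admin-only route listed before a plain one, an admin request hits the first and everyone else falls through to the second.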

@garybernhardt
Owner Author

I landed my the-great-inversion branch in master. It inverts the module structure, so that you have e.g. MyApp::Records::Post instead of Post::Record.

I've talked about interactors with Tom. I think we decided that we should talk about them in the docs, but not bake them into Raptor. If Raptor knew about them, it would have to have multiple delegate targets per route (use the interactor if it exists; else fall back to the record). We don't want to do that; it's complicated.

We haven't added the ability to route based on what the interactor/record/delegate returns; it's still based on exception only. Tom's currently building an app in Raptor as I understand it, and I'm hoping that his experience there will drive this if it's needed.

@tcrayford
Collaborator

Confirming that I'm building an app at the moment. The big thing that's hurting right now is layouts and assets. Layouts, I want to think about a bit more before doing something with them in raptor (as it is right now, I have a bunch of duplicated views). Assets can be fixed by middleware (which I have yet to do).

@garybernhardt
Owner Author

I suspect that you're going to have two problems when you bring that middleware in...

@tcrayford
Collaborator

It's probably going to make startup time slower (and I'm already hurting from that). End to end specs take ~1s to start already (capybara webkit and sequel connection time, I guess).

Should raptor include asset serving itself?

@garybernhardt
Owner Author

Moving the asset discussion to #34.

@garybernhardt
Owner Author

I'm closing this. The module inversion has been in for a while. There are still outstanding questions, like raising vs. returning an error subject, but they're less pressing.
