# Ash
## Quick Links
* [Resource Documentation](https://hexdocs.pm/ash/Ash.Resource.html)
* [DSL Documentation](https://hexdocs.pm/ash/Ash.Resource.DSL.html)
* [Code API documentation](https://hexdocs.pm/ash/Ash.Api.Interface.html)
## Introduction
Traditional MVC Frameworks (Rails, Django, .Net, Phoenix, etc) leave it up to the user to build the glue between requests for data (HTTP requests in various forms as well as server-side domain logic) and their respective ORMs. In that space, there is an incredible amount of boilerplate code that must get written from scratch for each application (authentication, authorization, sorting, filtering, pagination, sideloading relationships, serialization, etc).
Ash is an opinionated yet configurable framework designed to reduce boilerplate in Elixir applications. Don't worry, Phoenix developers - Ash is designed to play well with Phoenix too :). Ash does this by providing a layer of abstraction over your system's data layer(s) with `Resources`.
To riff on a famous J.R.R. Tolkien quote, a `Resource` is "One Interface to rule them all, One Interface to find them", and it will become an indispensable place to define contracts for interacting with data throughout your application.
To start using Ash, first declare your `Resources` using the Ash `Resource` DSL. You could technically stop there and just leverage the Ash Elixir API to avoid writing boilerplate. More likely, you would use libraries like Ash.JsonApi or Ash.GraphQL (someday) with Phoenix to add external interfaces to those resources without having to write any extra code at all.
Developers should be focusing on their core business logic - not boilerplate code. Ash builds upon the incredible productivity of Phoenix and empowers developers to get up and running with a fully functional app in substantially less time, while still being flexible enough to allow customization when the need inevitably arises.
Ash is an open-source project and draws inspiration from similar ideas in other frameworks and concepts. The goal of Ash is to lower the barrier to adopting and using Elixir and Phoenix, and in doing so help these amazing communities attract new developers, projects, and companies.
## Example Resource
```elixir
defmodule Post do
  use Ash.Resource, name: "posts", type: "post"
  use AshJsonApi.JsonApiResource
  use Ash.DataLayer.Postgres

  actions do
    read :default,
      authorization_steps: [
        authorize_if: user_is(:admin)
      ]

    create :default,
      authorization_steps: [
        authorize_if: user_is(:admin)
      ]
  end

  attributes do
    attribute :name, :string
  end

  relationships do
    belongs_to :author, Author
  end
end
```
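
With a resource like this defined, reads and writes are meant to go through the code-level API rather than hand-rolled queries. The sketch below is hypothetical: the `MyApp.Api` module, the `resources` registration, and the `read`/`get` calls (loosely modeled on the `Ash.Api.Interface` docs linked above) are assumptions, and the exact option keys and return shapes may differ:

```elixir
defmodule MyApp.Api do
  # Hypothetical API module; the registration mechanism shown is an assumption.
  use Ash.Api

  resources [Post]
end

# Read posts through the code interface instead of writing query boilerplate.
# The `filter` and `sort` option names are illustrative, not confirmed.
{:ok, posts} = MyApp.Api.read(Post, filter: [name: "Introducing Ash"], sort: [asc: :name])

# Fetch a single record by primary key.
{:ok, post} = MyApp.Api.get(Post, "some-uuid")
```

The same resource would then back JSON:API endpoints via AshJsonApi without additional controller code.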
## TODO LIST (in no order)
* Make our router capable of describing its routes in `mix phx.routes`. Chris McCord says that we could probably power that, seeing as Phoenix controls both APIs, and that capability could be added to `Plug.Router`.
* Finish the serializer
* DSL level validations! Things like includes validating that their chain exists. All DSL structs should be strictly validated when they are created.
* Especially at compile time, we should *never* ignore or skip invalid options. If an option is present and invalid, an error is raised.
* break up the `Ash` module
* Wire up/formalize the error handling (this is high priority)
* Ensure that errors are properly propagated up from the data_layer behaviour, and every operation is allowed to fail
* figure out the ecto schema warning
* all actions need to be performed in a transaction
* document authorization thoroughly. *batch* (default) checks need to return a list of `ids` for which the check passed.
* So many parts of the system rely on things having an `id` key explicitly. This will need to be addressed some day, and will be a huge pain in the ass.
* Validate that the user resource has a get action
* `params` should be solidified. Perhaps as a struct. Or perhaps just renamed to `action_params` where it is used.
* Since actions contain rules now, consider making it possible to list each action as its own `do` block, with an internal DSL for configuring the action. (overkill?)
* Validate rules at creation
* Maybe fix the crappy parts of optimal and bring it in for opts validation?
* The ecto internals that live on structs are going to cause problems w/ pluggability of backends, like the `%Ecto.Association.NotLoaded{}`. That backend may need to scrub the ecto specifics off of those structs.
* Add a mix-in compatibility checker framework, to allow mix-ins to declare what features they do/don't support.
* Have ecto types ask the data layer about the kinds of filtering they can do, and that kind of thing.
* Make `Ash.Type` a superset of things like `Ecto.Type`. If we bring in database-less ecto (looking like more and more of a good idea to me), that kind of thing gets easier and we can potentially lean on ecto for type validations as well.
* use a process to hold constructed DSL state, and then coalesce it all at the end. This can clean things up, and also allow us to potentially eliminate the registry. This will probably go hand in hand w/ the "capabilities" layer, where the DSL confirms that your data layer is capable of performing everything that your DSL declares
* make ets dep optional
* Bake in descriptions to the DSL
* Contributor guideline and code of conduct
* Do branch analysis of each record after authorizing it, in authorizer
* consider moving `type` and `name` for resources out into json api (or perhaps just `name`) since only json api uses that
* When we support embedding, figure out `embed_as` on `Ash.Type`
* Consider allowing declaring a data layer at the *api* level, or overriding the resource's data layer at the *api* level
* Since actions can return multiple errors, we need a testing utility to unwrap/assert on them
* Flesh out relationship options
* Flesh out field options (sortable, filterable, other behavior?)
* Unit test the Ets data layer
* Improve pagination in the ETS data layer
* Rearchitect relationship updates so that they can be sensibly authorized. As in, which resource is responsible for authorizing updates to a relationship? Should there be some unified way to describe it? Or is updating a user's posts an entirely separate operation from updating a post's user?
* Test authorization
* Validate that all relationships on all resources in the API have destinations *in* that API, or don't and add in logic to pretend those don't exist through the API.
* Make authorization spit out informative errors (at least for developers)
* Use telemetry and/or some kind of hook system to add metrics
* Forbid impossible auth/creation situations (e.g. "the id field is not exposed on a create action and doesn't have a default, therefore writes will always fail").
* Don't let users declare `has_one` relationships without claiming that there is a unique constraint on the destination field.
* Set up "atomic updates" (upserts). If an adapter supports them, and the auth passes precheck, we could turn `get + update` combos into `upserts`
* Use data layer compatibility features to disallow incompatible setups. For instance, if the data layer can't transact, then they can't have an editable `has_one` or `many_to_many` resource.
* Add `can?(:bulk_update)` to data layers, so we can more efficiently update relationships
* Figure out under what circumstances we can bulk fetch when reading before updating many_to_many and to_many relationships, and do so.
* most relationship stuff can't be done w/o primary keys
* includer errors are super obscure because you can't tell what action they are about
* Allow encoding database-level constraints into the resource, like "nullable: false" or something. This will let us validate things like not leaving orphans when bulk updating a many to many
* Validate filters, now that there can be duplicates. Doesn't make sense to provide two "exact equals" filters
* Eventually data_layers should state what raw types they support, and the filters they support on those raw types
* Raise on composite primary key if data layer can't do it
* Add impossibility checking for filters to avoid running queries that will never be possible.
* As soon as we add array types, the filter logic is going to break because we use "is a list" as the criterion for "has not been passed a raw value to match". This may not be too big of a problem if we just don't support lists, but using an actual struct to represent "this is a constructed filter" may be the real answer.
* Add a runtime initialization to checks that can return data-loading instructions to be executed prior to pre-check
* Currently, only inner joins are allowed. I think only inner joins will be necessary, as the pattern in Ash would be to side load related data.
* certain attribute names are not going to be allowed, like `or`, `and`, `in`, things like that.
* consider, just for the sake of good old-fashioned fun/cool factor, a parser that can parse a string into a query at compile time, so that queries can look nice in code.
* validate reverse relationships!!
* Factor out shared relationship options into its own schema, and merge them, for clearer docs.
* Consider making a "params builder" so you can say things like `Ash.Params.add_side_load(params, [:foo, :bar, :baz])` and build params up over time.
* validate the use of composite primary keys via `data_layer.can?(:composite_primary_key)`
* Think hard about the data_layer.can? pattern to make sure we're giving enough info, but not too much.
* Use the sat solver at compile time to tell people when requests they've configured (and maybe all combinations of includes they've allowed?) couldn't possibly be allowed together.
* Support arbitrary "through" relationships
* Replace all the ugly `reduce`-with-tuples constructs with `reduce_while`
* Framework internals need to stop using `api.foo`, because the code interface is supposed to be optional
* relationship updates are *extremely* unoptimized
* Clean up and test filter inspecting code.
* Handle related values on delete
* Use Ashton to validate interface opts, not just document them (easy and important)