fix: runtime filter checking is unknown for non-selected values

docs: tons of work on docs/guides
Zach Daniel 2022-08-30 02:22:15 -06:00
parent 005bb3ea3c
commit 834d99c57e
54 changed files with 961 additions and 1079 deletions


@@ -0,0 +1,61 @@
# Errors
There is a difficult balance to strike between informative errors and enabling simple reactions to those errors. Since many extensions may need to work with and/or adapt their behavior based on errors coming from Ash, we need rich error messages. However, when you have a hundred different exceptions to represent the various kinds of errors a system can produce, it becomes difficult to say something like "try this code, and if it is invalid, do x; if it is forbidden, do y". To this end, exceptions in Ash fall into one of four classes, each mapping to a top level exception.
## Error Classes
- forbidden - `Ash.Error.Forbidden`
- invalid - `Ash.Error.Invalid`
- framework - `Ash.Error.Framework`
- unknown - `Ash.Error.Unknown`
Since many actions can be happening at once, we want to support the presence of multiple errors as a result of a request to Ash. We do this by grouping up the errors into one before returning or raising.
We choose the exception class based on the order listed above: if there is at least one forbidden error, we choose `Ash.Error.Forbidden`; otherwise, if there is at least one invalid error, we choose `Ash.Error.Invalid`, and so on. The actual errors will be included in the `errors` key on the exception. The exception's message will contain a bulleted list of all the underlying exceptions that occurred. This makes it easy to react to specific kinds of errors, as well as to react to _any/all_ of the errors present.
An example of a single error being raised, representing multiple underlying errors:
```elixir
AshExample.Representative
|> Ash.Changeset.new(%{employee_id: "the best"})
|> AshExample.Api.create!()
** (Ash.Error.Invalid) Input Invalid
* employee_id: must be absent.
* first_name, last_name: at least 1 must be present.
(ash 1.3.0) lib/ash/api/api.ex:534: Ash.Api.unwrap_or_raise!/1
```
This allows easy rescuing of the major error classes, as well as inspection of the underlying cases:
```elixir
try do
AshExample.Representative
|> Ash.Changeset.new(%{employee_id: "dabes"})
|> AshExample.Api.create!()
rescue
e in Ash.Error.Invalid ->
"Encountered #{Enum.count(e.errors)} errors"
end
"Encountered 2 errors"
```
This pattern does add some additional overhead when you want to rescue specific kinds of errors. For example, you may need to do something like this:
```elixir
try do
AshExample.Representative
|> Ash.Changeset.new(%{employee_id: "dabes"})
|> AshExample.Api.create!()
rescue
e in Ash.Error.Invalid ->
    case Enum.find(e.errors, &(&1.__struct__ == A.Specific.Error)) do
      nil ->
        # ...handle the other errors
        {:error, e.errors}

      error ->
        # ...handle the specific error you found
        {:error, error}
    end
end
```
This approach is relatively experimental. I haven't seen it done this way elsewhere, but it seems like a decent middle ground from a system that can generate multiple disparate errors on each pass.
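The non-bang API functions return these same classes in error tuples, so the classes can also be matched without `rescue`. A sketch, assuming the same `AshExample` resources as above:

```elixir
changeset = Ash.Changeset.new(AshExample.Representative, %{employee_id: "dabes"})

case AshExample.Api.create(changeset) do
  {:ok, record} ->
    record

  {:error, %Ash.Error.Invalid{errors: errors}} ->
    # react to the whole class, or inspect the underlying errors
    {:error, errors}

  {:error, other} ->
    {:error, other}
end
```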


@@ -19,11 +19,11 @@ These should all be straight forward enough to do a simple find and replace in y
- `destination_field_on_join_table` -> `destination_attribute_on_join_resource`
- `no_fields?` -> `no_attributes?`
### DSL changes
A new option has been added to the pub_sub notifier. If you are using it with Phoenix, and you want it to publish a `%Phoenix.Socket.Broadcast{}` struct (which is what it used to do if you specified the `name` option with pub_sub), then you'll need to set `broadcast_type :phoenix_broadcast`.
### Function Changes
The following functions have been moved from `Ash.Resource.Info` to `Ash.Resource`. The old functions still exist, but will warn as deprecated.
@@ -64,6 +64,10 @@ The following functions have been moved:
- Ash.Resource.extensions/1 -> `Spark.extensions/1`
### Expression Changes
The `has` operator has been removed from expressions. It was a holdover from when expressions only had partial support for nesting, and you can now write `item in list` instead.
## Upgrading to 1.53
### Default actions


@@ -1,6 +1,6 @@
# Use Without Data Layers
If a resource is configured without a data layer, then it will always be working off of a temporary data set that lives only for the life of that query. This can be a powerful way to simply model input validations and/or custom/complex reads. Technically, resources without a data layer simply use `Ash.DataLayer.Simple`, which does no persistence, and expects to find any data it should use for read actions in a context on the query.
## Example
@@ -10,7 +10,7 @@ defmodule MyApp.MyComplexResource do
# notice no data layer is configured
attributes do
# A primary key is always necessary on a resource, but this will simply generate one for you automatically
uuid_primary_key :id
attribute :some_complex_derived_number, :integer
end


@@ -1,3 +1,5 @@
# Validations
In Ash, there are three kinds of validations.
@@ -45,11 +47,17 @@ can do this with custom validations as well. See the documentation in `Ash.Resou
Right now, there are not very many built in validations, but the idea is that eventually we will have a rich
library of built in validations to choose from.
Validations can be scoped to the `type` (`:create`, `:update`, `:destroy`) of the action (but not to specific actions). If you would like to adjust the validations for a specific action, you can place that validation directly in the action, e.g.:
```elixir
create :create do
  validate attribute_equals(:name, "fred")
end
```
### Important Note
By default, validations in the global `validations` block will run on create and update only. Many validations don't make sense in the context of destroys. To make a validation also run on destroy, use `on: [:create, :update, :destroy]`.
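For example, a sketch of a validation in the global `validations` block that also runs on destroy (the `present/1` builtin and the attribute name are illustrative):

```elixir
validations do
  # runs on destroys as well, because of the `on` option
  validate present(:name), on: [:create, :update, :destroy]
end
```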
### Examples


@@ -19,6 +19,8 @@ calculations do
end
```
See the {{link:ash:guide:Expressions}} guide for more.
### Module Calculations
When calculations require more complex code or can't be pushed down into the data layer, a module that uses `Ash.Calculation` can be used.
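A minimal sketch of such a module calculation (the module and attribute names are illustrative):

```elixir
defmodule MyApp.Calculations.FullName do
  use Ash.Calculation

  # receives the list of records the calculation is being loaded for,
  # and must return the value for each record, in order
  @impl true
  def calculate(records, _opts, _context) do
    Enum.map(records, fn record ->
      record.first_name <> " " <> record.last_name
    end)
  end
end
```

It could then be attached in the resource with something like `calculate :full_name, :string, MyApp.Calculations.FullName`.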


@@ -1,49 +1,43 @@
# Code Interface
One of the ways that we interact with our resources is via hand-written code. The general pattern for that looks like building a query or a changeset for a given action, and dispatching it to the api using things like `MyApi.read/3` and `MyApi.create/3`. This, however, is just one way to use Ash, and is designed to help you build tools that work with resources, and to power things like `AshPhoenix.Form`, `AshGraphql.Resource` and `AshJsonApi.Resource`. When working with your resources in code, we generally want something more idiomatic and simple. For example, on a resource called `Helpdesk.Support.Ticket`:

```elixir
code_interface do
  define_for Helpdesk.Support

  define :open_ticket, args: [:subject]
end
```

This simple setup now allows you to open a ticket with `Helpdesk.Support.Ticket.open_ticket(subject)`. You can cause it to raise errors instead of return them with `Helpdesk.Support.Ticket.open_ticket!(subject)`. For information on the options and additional inputs these defined functions take, look at the generated function documentation, which you can do in iex with `h Helpdesk.Support.Ticket.open_ticket`. For more information on the code interface, read the DSL documentation: {{link:ash:dsl:resource/code_interface}}.

## Using the code interface

If the action is an update or destroy, it will take a record or a changeset as its *first* argument.
If the action is a read action, it will take a starting query as an *option in the last* argument.

All functions will have an optional last argument that accepts options. Those options are:

#{Spark.OptionsHelpers.docs(Ash.Resource.Interface.interface_options(nil))}

For reads:

* `:query` - a query to start the action with, can be used to filter/sort the results of the action.

For creates:

* `:changeset` - a changeset to start the action with

They will also have an optional second to last argument that is a freeform map to provide action input. It *must be a map*.
If it is a keyword list, it will be assumed that it is actually `options` (for convenience).
This allows for the following behaviour:

```elixir
# Because the 3rd argument is a keyword list, we use it as options
Api.register_user(username, password, [tenant: "organization_22"])
# Because the 3rd argument is a map, we use it as action input
Api.register_user(username, password, %{key: "val"})
# When all are provided it is unambiguous
Api.register_user(username, password, %{key: "val"}, [tenant: "organization_22"])
```

## get?

Only relevant for read actions. Expects to only receive a single result from the read action.

The action should return a single result based on any arguments provided. To make it so that the function takes a specific field, and filters on that field, use `get_by` instead. When combined, `get_by` takes precedence.

Useful for creating functions like `get_user_by_email` that map to an action that has an `:email` argument.

## get_by

Automatically sets `get?` to `true`.

Useful for creating functions like `get_user_by_id` that map to a basic read action.

## get_by_identity

Like `get_by`, but takes the name of an identity on the resource and uses that identity's keys as the function's arguments. Also sets `get?` to `true`.
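Putting this together, calling a defined function might look like the following sketch (it assumes `:open_ticket` is a create action and that a `:priority` argument exists, both illustrative):

```elixir
# positional args first, then optional action input (a map), then options
{:ok, ticket} = Helpdesk.Support.Ticket.open_ticket("No power")

# the bang variant raises instead of returning an error tuple
ticket = Helpdesk.Support.Ticket.open_ticket!("No power", %{priority: :high}, tenant: "org_22")
```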


@@ -1,3 +1,103 @@
# Expressions
Ash expressions are used in various places like calculations, filters, and policies, and are meant to be portable representations of elixir expressions. You can create an expression using the `Ash.Query.expr/1` macro, like so:
```elixir
Ash.Query.expr(1 + 2)
Ash.Query.expr(x + y)
Ash.Query.expr(post.title <> " | " <> post.subtitle)
```
Ash expressions have some interesting properties in their evaluation, primarily because they are made to be portable, i.e. executable in some data layer (like SQL) or executable in Elixir. In general, these expressions will behave the same way they do in Elixir. The primary difference is how `nil` values work: they behave the way that `NULL` values behave in SQL. This is primarily because this pattern is easier to replicate to various popular data layers, and is generally safer when using expressions for things like authentication. The practical implication is that `nil` values will "poison" many expressions, causing them to return `nil`. For example, `x + nil` would always evaluate to `nil`.
## Operators
The following operators are available and they behave the same as they do in Elixir, except for the `nil` addendum above.
- `==`
- `!=`
- `>`
- `>=`
- `<`
- `<=`
- `in`
- `*`
- `-`
- `/`
- `<>`
- `||`
- `&&`
- `is_nil` | Custom, accepts a boolean on the right side, e.g. `x is_nil true` or `x is_nil false`.
## Functions
The following functions are built in. Data layers can also add their own functions to expressions; for example, `AshPostgres` adds a `fragment` function that allows you to provide SQL directly.
- `if` | Works like Elixir's `if`.
- `is_nil` | Works like Elixir's `is_nil`.
- `get_path` | e.g. `get_path(value, ["foo", "bar"])`. This is what expressions like `value["foo"]["bar"]` are turned into under the hood.
- `ago` | e.g. `deleted_at > ago(7, :day)`. The available time intervals are documented in {{link:ash:module:Ash.Type.DurationName}}
- `contains` | Whether one string contains another, e.g. `contains("fred", "red")`.
## Use cases for expressions
### Filters
The most obvious place we use expressions is when filtering data. For example:
```elixir
Ash.Query.filter(Ticket, status == :open and opened_at >= ago(10, :day))
```
These filters will be run in the data layer, i.e. in the SQL query.
## Portability
Ash expressions being portable is more important than it sounds. For example, if you were using AshPostgres and had the following calculation, which is an expression capable of being run in Elixir or translated to SQL:
```elixir
calculate :full_name, :string, expr(first_name <> " " <> last_name)
```
And you did something like the following:
```elixir
User
|> Ash.Query.load(:full_name)
|> Ash.Query.sort(:full_name)
|> Accounts.read!()
```
You would see that it ran a SQL query with the `full_name` calculation as SQL. This allows for sorting on that value. However, if you had something like this:
```elixir
# data can be loaded in the query like above, or on demand later
Accounts.load!(user, :full_name)
```
you would see that no SQL queries are run. The calculation is simply run in Elixir and the value is set.
### Referencing related values
Related values can be referenced using dot delimiters, e.g. `Ash.Query.filter(user.first_name == "fred")`.
When referencing related values in filters, if the reference is a `has_one` or `belongs_to`, the filter does exactly what it looks like (matches if the related value matches). If it is a `has_many` or a `many_to_many`, it matches if any of the related records match.
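For example, assuming a `User` resource with a `has_many :tickets` relationship (names are illustrative), this matches users with at least one open ticket:

```elixir
require Ash.Query

# matches if *any* related ticket is open, because `tickets` is a has_many
Ash.Query.filter(MyApp.User, tickets.status == :open)
```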
### Referencing aggregates and calculations
Aggregates are simple, as all aggregates can be referenced in filter expressions (if you are using a data layer that supports it).
For calculations, only those that define an expression can be referenced in other expressions.
Here are some examples:
```elixir
# given a `full_name` calculation
Ash.Query.filter(User, full_name == "Hob Goblin")
# given a `full_name` calculation that accepts an argument called `delimiter`
Ash.Query.filter(User, full_name(delimiter: "~") == "Hob~Goblin")
```


@@ -1,28 +1,80 @@
# Flows
A flow is a static definition of a set of steps to be run.
Flows are backed by `executors`, which determine how the workflow steps are performed.
The executor can be overridden on invocation, but not all executors will be capable of running all flows.
As of this writing, the default executor is the only one. It runs all steps in parallel unless values must be provided from one step to another, or in steps that are enclosed by a transaction.
Ash.Flow is still in its early days, so expect many features, step types, and executors to come in the future.
All explanations here pertain to the builtin executor, so be sure to read the documentation of any other executor you may use.
Flows are composed of steps, each of which has an `input` and a `result`. By default, each step is executed concurrently (or at least *may* be executed concurrently). When the result of one step is used in another, they will run in sequence. In the following flow, for example, the `:create_user` and `:create_blank_project` steps would happen concurrently, but both would wait on the `:create_org` step.
```elixir
flow do
  # Flow arguments allow you to parameterize the flow
  argument :org_name, :string do
    allow_nil? false
  end

  argument :user_name, :string do
    allow_nil? false
  end

  # The flow returns the result of the `:create_user` step.
  returns :create_user
end

steps do
  # The step is called `:create_org`, and it creates an `Organization` using the `:register_org` action.
  create :create_org, MyApp.Accounts.Organization, :register_org do
    # The input to the action refers to an argument of the flow
    input %{
      name: arg(:org_name)
    }
  end

  # The step is called `:create_user`, and it creates a `User` using the `:register_user` action.
  create :create_user, MyApp.Accounts.User, :register_user do
    input %{
      # The input refers to an argument of the flow
      name: arg(:user_name),
      # and to the result of another step
      org: result(:create_org)
    }
  end

  # The step is called `:create_blank_project`, and it creates a `Project` using the `:create_example` action.
  create :create_blank_project, MyApp.Accounts.Project, :create_example do
    input %{
      # The input refers to the result of another step
      org: result(:create_org)
    }
  end
end
```

Available template functions:

- `arg/1` to refer to a flow argument
- `result/1` to refer to the result of another step
## Return Values
`returns` determines what the flow returns, and may be one of three things:
- `:step_name` - will return the result of the configured step
- `%{step_name: :key}` - will return a map of each provided key to the corresponding step's result, i.e. `%{key: <step_name_result>}`
- `[:step_name]` - which is equivalent to `%{step_name: :step_name}`
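Concretely, each of these would be a valid `returns` for the flow above (the `:org`/`:user` keys are illustrative):

```elixir
returns :create_user
returns [:create_org, :create_user]
returns %{create_org: :org, create_user: :user}
```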
## Errors
Currently, any error anywhere in the flow will simply fail the flow and will return an error. Over time, error handling behavior will be added, as well as the ability to customize how transactions are rolled back, and to handle errors in a custom way.
## Custom steps
Custom steps allow you to implement any custom logic that you need. There aren't really any restrictions on what you do in a custom step, but there is one main consideration if you want your custom step to play nicely with transactions:
Generally speaking, you should set `touches_resources` if you set `async?` to true. This ensures that the custom step will be run synchronously if any of those resources' data layers is in a corresponding transaction. You don't necessarily need to set *all* of the resources that will be touched. For example, all AshPostgres resources that share the same repo share the same transaction state.


@@ -1,26 +1,28 @@
# Identities
Identities are a way to declare that a record (an instance of a resource) can be uniquely identified by a set of attributes. This information can be used in various ways throughout the framework. The primary key of the resource does not need to be listed as an identity.

## Using Api.get

This will allow these fields to be passed to `c:Ash.Api.get/3`, e.g. `get(Resource, [email: "foo"])`.

## Using upserts

Create actions support the `upsert?: true` option, if the data layer supports it. An upsert involves checking for a conflict on some set of attributes, and translating the behavior to an update in the case that one is found. By default, the primary key is used when looking for duplicates, but you can set `[upsert?: true, upsert_identity: :identity_name]` to tell it to look for conflicts on a specific identity.

## Creating unique constraints

Tools like `AshPostgres` will create unique constraints in the database automatically for each identity. These unique constraints will honor other configuration on your resource, like the `base_filter`.

## Eager Checking

Setting `eager_check_with: ApiName` on an identity will allow that identity to be checked when building a create changeset over the resource. This allows for showing quick up-front validations about whether some value is taken, for example.

If you are using `AshPhoenix.Form`, this looks for a conflicting record on each call to `Form.validate/2`.

For updates, it is only checked if one of the involved fields is being changed.

For creates, the identity is checked unless you are performing an upsert and the `upsert_identity` is this identity. Keep in mind that for this to work properly, you will need to pass `upsert?: true, upsert_identity: :identity_name` *when creating the changeset*, instead of passing these options to the Api when creating. The `primary?` action is used to search for a record, and this will error if you have not configured one.

## Pre Checking

`pre_check_with: ApiName` behaves the same as `eager_check_with`, but it runs just prior to the action being committed. It is useful for data layers that don't support transactions/unique constraints, or manual resources with identities. `Ash.DataLayer.Ets` will actually require you to set `pre_check_with`, since the ETS data layer has no built in support for unique constraints.
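For reference, a sketch of declaring identities on a resource (the names and Api module are illustrative):

```elixir
identities do
  # an email uniquely identifies a user
  identity :unique_email, [:email]

  # eager_check_with/pre_check_with take the Api to check against
  identity :unique_handle, [:handle], pre_check_with: MyApp.Accounts
end
```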


@@ -0,0 +1,56 @@
# Managing Relationships
In Ash, managing related data is done via `Ash.Changeset.manage_relationship/4`. There are various ways to leverage the functionality expressed there. If you are working with changesets directly, you can call that function. However, if you want that logic to be portable (e.g. available in `ash_graphql` mutations and `ash_json_api` actions), then you want to use the following `argument` + `change` pattern:
```elixir
actions do
update :update do
argument :add_comment, :map do
allow_nil? false
end
argument :tags, {:array, :uuid} do
allow_nil? false
end
# First argument is the name of the action argument to use
# Second argument is the relationship to be managed
  # Third argument is the options. These are passed through to `Ash.Changeset.manage_relationship/4`, which accepts the same options.
change manage_relationship(:add_comment, :comments, type: :create)
# Second argument can be omitted, as the argument name is the same as the relationship
change manage_relationship(:tags, type: :replace)
end
end
```
With this, those arguments can be used simply in action input:
```elixir
post
|> Ash.Changeset.for_update(:update, tags: [tag1.id, tag2.id], add_comment: %{text: "comment text"})
|> MyApi.update!()
```
## Argument Types
Notice how we provided a map as input to `add_comment`, and a list of UUIDs as input to `tags`. When providing maps or lists of maps, you are generally just providing input that will eventually be passed into actions on the destination resource. However, you can also provide individual values or lists of values. By default, we assume that the value maps to the primary key of the destination resource, but you can use the `value_is_key` option to modify that behavior. For example, if you wanted adding a comment to take a string, you could say:
```elixir
argument :add_comment, :string
...
change manage_relationship(:add_comment, :comments, type: :create, value_is_key: :text)
```
And then you could use it like so:
```elixir
post
|> Ash.Changeset.for_update(:update, tags: [tag1.id, tag2.id], add_comment: "comment text")
|> MyApi.update!()
```
## Derived behavior
Determining what will happen when managing related data can be complicated, as the nature of the problem itself is quite complicated. In some simple cases, like `type: :create`, there may be only one action that will be called. But in order to support all of the various ways that related resources may need to be managed, Ash provides a very rich set of options to determine what happens with the provided input. Tools like `AshPhoenix.Form` can look at your arguments that have a corresponding `manage_relationship` change, and derive the structure of those nested forms. Tools like `AshGraphql` can derive complex input objects to allow manipulating those relationships over a GraphQL API. This all works because the options are, ultimately, quite explicit. It can be determined exactly what actions might be called, and therefore what input could be needed.


@@ -1,67 +1,79 @@
# Manual Actions
Manual actions are a way to implement an action in a fully custom way. This can be a very useful escape hatch when you have something that you are finding difficult to model with Ash's builtin tools.
## Manual Creates/Updates/Destroys

For manual create/update/destroy actions, everything works pretty much the same, with the exception that the `after_action` hooks on a resource will receive a `nil` value for creates, and the old unmodified value for updates, and you are expected to add an after action hook that changes that `nil` value into the result of the action. This is a good way to prevent Ash from issuing an unnecessary update to the record, e.g. updating the `updated_at` of the record when an action actually only involves modifying related records.

For example:

```elixir
# in the action
create :special_create do
  manual? true
  change MyApp.DoCreate
end

# The change
defmodule MyApp.DoCreate do
  use Ash.Resource.Change

  def change(changeset, _, _) do
    Ash.Changeset.after_action(changeset, fn changeset, _result ->
      # result will be `nil`, because this is a manual action
      result = do_something_that_creates_the_record(changeset)

      {:ok, result}
    end)
  end
end
```

## Manual Read Actions

Manual read actions work differently. They must be provided a module that will run the read action. The module should implement the `Ash.Resource.ManualRead` behaviour, and will be handed the Ash query and the data layer query.

If you simply want to customize/intercept the query before it is sent to the data layer, then use `modify_query` instead. Using them in conjunction can help ensure that calculations and aggregates are all correct. For example, you could modify the query to alter/replace the where clause/filter using `modify_query`, which will affect which records calculations are returned for. Then you can customize how it is run using `manual`.

```elixir
# in the resource
actions do
  read :action_name do
    manual MyApp.ManualRead
    # or `{MyApp.ManualRead, ...opts}`
  end
end

# the implementation
defmodule MyApp.ManualRead do
  use Ash.Resource.ManualRead

  def read(ash_query, ecto_query, _opts, _context) do
    ...
    {:ok, query_results} | {:error, error}
  end
end
```
### Modifying the query
As an alternative to manual read actions, you can also provide the `modify_query` option, which takes an `MFA` and allows low level manipulation of the query just before it is dispatched to the data layer.
For example:
```elixir
read :read do
modify_query {MyApp.ModifyQuery, :modify, []}
end
defmodule MyApp.ModifyQuery do
def modify(ash_query, data_layer_query) do
{:ok, modify_data_layer_query(data_layer_query)}
end
end
```
This can be used as a last-resort escape hatch when you want to still use resource actions but need to do something that you can't do easily with Ash tools. As with any low level escape hatch, here be dragons.


@@ -0,0 +1,85 @@
# Notifiers
## Built-in Notifiers
- PubSub: `Ash.Notifier.PubSub`
## Creating a notifier
A notifier is a simple extension that must implement a single callback `notify/1`. Notifiers do not have to implement an Ash DSL extension, but they may in order to configure how that notifier should behave. See `Ash.Notifier.Notification` for the currently available fields. Notifiers should not do anything intensive synchronously. If any heavy work needs to be done, they should delegate to something else to handle the notification, like sending it to a GenServer or GenStage.
Eventually, there may be built in notifiers that will make setting up a GenStage that reacts to your resource changes easy. Until then, you'll have to write your own.
For more information on creating a DSL extension to configure your notifier, see the docs for `Spark.Dsl.Extension`.
### Example notifier
```elixir
defmodule ExampleNotifier do
  use Ash.Notifier

  require Logger
def notify(%Ash.Notifier.Notification{resource: resource, action: %{type: :create}, actor: actor}) do
if actor do
Logger.info("#{actor.id} created a #{resource}")
else
Logger.info("A non-logged in user created a #{resource}")
end
end
end
```
### Including a notifier in a resource
```elixir
defmodule MyResource do
  use Ash.Resource,
    notifiers: [ExampleNotifier]
end
```
## Transactions
For API calls involving resources whose data layer supports transactions (like Postgres), notifications are saved up and sent after the transaction is closed. For example, the API call below ultimately results in a large number of database calls.
```elixir
Post
|> Ash.Changeset.new(%{})
|> Ash.Changeset.append_to_relationship(:related_posts, [1, 2, 3])
|> Ash.Changeset.remove_from_relationship(:related_posts, [4, 5])
|> Ash.Changeset.append_to_relationship(:comments, [10])
|> Api.update!()
```
Ash doesn't support bulk database operations yet, so it performs the following operations:
- a read of the currently related posts
- a read of the currently related comments
- a creation of a post_link to relate to 1
- a creation of a post_link to relate to 2
- a creation of a post_link to relate to 3
- a destruction of the post_link related to 4
- a destruction of the post_link related to 5
- an update to comment 10, to set its `post_id` to this post
If all three of these resources have notifiers configured, we need to send a notification for each operation (notifications are not sent for reads). For data consistency reasons, if a data layer supports transactions, all writes are done in a transaction. However, if a different process were to react to a notification by reading the record from the database before the transaction had been committed, it would see stale data. For this reason, Ash accumulates notifications until the transaction is closed and they can be safely sent.
If you need to perform multiple operations against your resources in your own transaction, you will have to handle that case yourself. To support this, `c:Ash.Api.create/2`, `c:Ash.Api.update/2` and `c:Ash.Api.destroy/2` support a `return_notifications?: true` option. This causes the API call to return `{:ok, result, notifications}` in the successful case. Here is an example of how you might use it.
```elixir
result =
  Ash.DataLayer.transaction(resource, fn ->
    {:ok, something, notifications1} = create_something()
    {:ok, result, notifications2} = create_another_thing(something)
    {:ok, notifications3} = destroy_something(something)

    {result, Enum.concat([notifications1, notifications2, notifications3])}
  end)

case result do
  {:ok, {value, notifications}} ->
    Ash.Notifier.notify(notifications)
    value

  {:error, error} ->
    handle_error(error)
end
```
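For a single call outside of a hand-rolled transaction, the same option can be used directly. A sketch (the `title` attribute is an assumption):

```elixir
{:ok, post, notifications} =
  Post
  |> Ash.Changeset.new(%{title: "example"})
  |> Api.create(return_notifications?: true)

# Send them whenever it is safe to do so, e.g. after your own
# transaction commits.
Ash.Notifier.notify(notifications)
```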

View file

@ -0,0 +1,70 @@
# Pagination
Pagination is configured at the action level. There are two kinds of pagination supported: `keyset` and `offset`. There are
pros and cons to each. An action can support both at the same time, or only one (or none). A full count of records can be
requested by passing `page: [count: true]`, but keep in mind that doing so requires running two queries: the query
as written, and a count of all matching records. Ash runs these in parallel, but it can still be quite expensive on large
datasets. For more information on the options for configuring actions to support pagination, see the [pagination section](Ash.Resource.Dsl.html#module-pagination) in `Ash.Resource.Dsl`.
## Offset Pagination
Offset pagination is done by providing a `limit` and an `offset`. A `limit` is how many records should be returned on the page.
An `offset` is how many records from the beginning should be skipped. Using this, you might make requests like the following:
```elixir
# Get the first ten records
Api.read(Resource, page: [limit: 10])
# Get the second ten records
Api.read(Resource, page: [limit: 10, offset: 10])
# No need to do this in practice, see `c:Ash.Api.page/2`
```
### Offset Pros
- Simple to think about
- Possible to skip to a page by number, e.g. the 5th page of 10 records is `offset: 40, limit: 10`
- Easy to reason about what page you are currently on (if the total number of records is requested)
- Can go to the last page (even though, if done by using the full count, the data could have changed)
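The page-number arithmetic mentioned above can be sketched as:

```elixir
# The offset for page `n` (1-based) is (n - 1) * limit.
page_number = 5
limit = 10
offset = (page_number - 1) * limit

# offset == 40
Api.read(Resource, page: [limit: limit, offset: offset])
```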
### Offset Cons
- Does not perform well on large datasets
- When moving between pages, if data was created or deleted, records may appear on multiple pages or be skipped
## Keyset Pagination
Keyset pagination is done by providing an `after` or `before` option, as well as a `limit`. The value of this option should be
a `keyset` that has been returned from a previous request. Keysets are returned when a request is made with a `limit` to an action
that supports `keyset` pagination, and they are stored in the `__metadata__` key of each record. The `keyset` is a special value that
can be passed to the `after` or `before` options to get records that occur after or before it.
For example:
```elixir
page = Api.read(Resource, page: [limit: 10])
last_record = List.last(page.results)
# No need to do this in practice, see `c:Ash.Api.page/2`
next_page = Api.read(Resource, page: [limit: 10, after: last_record.__metadata__.keyset])
```
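In practice you would use the built-in page helper rather than threading keysets by hand. A sketch, assuming the action supports keyset pagination:

```elixir
{:ok, page} = Api.read(Resource, page: [limit: 10])

# `c:Ash.Api.page/2` accepts :next, :prev, :first and :last (and, for
# offset pagination, a page number) and re-runs the query accordingly.
{:ok, next_page} = Api.page(page, :next)
{:ok, back_again} = Api.page(next_page, :prev)
```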
### Important Limitation
Keyset pagination cannot currently be used in conjunction with aggregate and calculation sorting.
Combining them will result in an error on the query.
### Keyset Pros
- Performs very well on large datasets (assuming indices exist on the columns being sorted on)
- Behaves well as data changes. The record specified will always be the first or last item in the page
### Keyset Cons
- A bit more complex
- Can't go to a specific page number
- Can't use aggregate and calculation sorting (at the moment, this will change soon)
For more information on keyset vs offset based pagination, see:
- [Offset vs Seek Pagination](https://taylorbrazelton.com/posts/2019/03/offset-vs-seek-pagination/)

View file

@ -1,45 +1,60 @@
# PubSub
Ash includes a built-in notifier to help you publish events over any kind of pub-sub pattern. This is plug-and-play with `Phoenix.PubSub`, but could be used with any pub-sub tooling.
You simply configure a module that defines a `broadcast/3` function, and then add some "publications" which configure under what conditions an event should be sent and what the topic should be.
## Topic Templates
Often you want to include some piece of data from the thing being changed, like the `:id` attribute. This is done by providing a list as the topic, using atoms which will be replaced by their corresponding values. The parts will ultimately be joined with `:`.
For example:
```elixir
prefix "user"

publish :create, ["created", :user_id]
```
This might publish a message to "user:created:1", for example.
For updates, if a field in the template is being changed, a message is sent
to *both* values. So if you change `user 1` to `user 2`, the same message would
be published to `user:updated:1` and `user:updated:2`. If there are multiple
attributes in the template, and they are all being changed, a message is sent for
every combination of substitutions.
## Template parts
Templates may contain lists, in which case all combinations of values in the list will be used. Add
`nil` to the list if you want to produce a pattern where that entry is omitted.
The atom `:_tenant` may be used. If the changeset has a tenant set on it, that
value will be used, otherwise that combination of values is ignored.
The atom `:_pkey` may be used. It will be a stringified concatenation of the primary key fields,
or just the primary key if there is only one primary key field.
The atom `nil` may be used. It only makes sense to use it in the context of a list of alternatives,
and adds a pattern where that part is skipped.
```elixir
publish :updated, [[:team_id, :_tenant], "updated", [:id, nil]]
```
This would produce the following messages, given a `team_id` of `1`, a tenant of `org_1`, and an `id` of `50`:
```elixir
"1:updated:50"
"1:updated"
"org_1:updated:50"
"org_1:updated"
```
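The combination expansion can be pictured in plain Elixir (a hypothetical sketch, not Ash's actual implementation):

```elixir
# Each template part is a list of alternatives; topics are the cartesian
# product of all parts, with nil entries dropped before joining with ":".
parts = [["1", "org_1"], ["updated"], ["50", nil]]

topics =
  for team_or_tenant <- Enum.at(parts, 0),
      event <- Enum.at(parts, 1),
      id <- Enum.at(parts, 2) do
    [team_or_tenant, event, id]
    |> Enum.reject(&is_nil/1)
    |> Enum.join(":")
  end

# topics == ["1:updated:50", "1:updated", "org_1:updated:50", "org_1:updated"]
```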
## Usage with Phoenix
Phoenix expects a specific shape of data to be broadcast, and since it is so often used with Ash, instead of making you define your own notifier that creates the `%Phoenix.Socket.Broadcast` struct and publishes it, Ash can do that automatically, via
```elixir
broadcast_type: :phoenix_broadcast
```
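Putting it together, a resource using the PubSub notifier with Phoenix might look like the following sketch. `MyAppWeb.Endpoint` and the `MyApp.Post` resource are assumptions, not part of Ash:

```elixir
defmodule MyApp.Post do
  use Ash.Resource,
    notifiers: [Ash.Notifier.PubSub]

  pub_sub do
    # Any module exposing broadcast/3 works; a Phoenix endpoint does.
    module MyAppWeb.Endpoint
    prefix "post"
    broadcast_type :phoenix_broadcast

    publish :update, ["updated", :id]
  end
end
```

Subscribers in a standard Phoenix setup could then subscribe with `MyAppWeb.Endpoint.subscribe("post:updated:" <> id)` and receive `%Phoenix.Socket.Broadcast{}` messages.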

View file

@ -319,7 +319,7 @@ Helpdesk.Tickets.read!(Helpdesk.Tickets.Ticket)
Which will raise an error explaining that there is no data to be read for that resource.
In order to add persistence, we need to add a {{link:ash:guide:Data Layers:Data Layer}} to our resources. Before we do that, however, lets go over how Ash allows us to work against many different data layers (or even no data layer at all). Resources without a data layer will implicitly be using `Ash.DataLayer.Simple`, which will just return structs and do no persistence. The way that we do this is by leveraging `context`, a free-form map available on queries and changesets. The simple data layer looks for `query.context[:data_layer][:data][resource]`. It provides a utility, `Ash.DataLayer.Simple.set_data/2` to set it.
In order to add persistence, we need to add a data layer to our resources. Before we do that, however, let's go over how Ash allows us to work against many different data layers (or even no data layer at all). Resources without a data layer will implicitly be using `Ash.DataLayer.Simple`, which will just return structs and do no persistence. The way that we do this is by leveraging `context`, a free-form map available on queries and changesets. The simple data layer looks for `query.context[:data_layer][:data][resource]`. It provides a utility, `Ash.DataLayer.Simple.set_data/2`, to set it.
Try the following in iex. We will open some tickets, and close some of them, and then use `Ash.DataLayer.Simple.set_data/2` to use those tickets.

View file

@ -2,7 +2,7 @@ defmodule Ash.CodeInterface do
@moduledoc """
Used to define the functions of a code interface for a resource.
For more information on defining code interfaces, see: `Ash.Resource.Dsl.html#module-code_interface`
For more information on defining code interfaces, see {{link:ash:}}
"""
@doc false
@ -191,7 +191,7 @@ defmodule Ash.CodeInterface do
|> Ash.Query.for_read(
unquote(action.name),
input,
Keyword.take(opts, [:actor, :tenant, :authorize?])
Keyword.take(opts, [:actor, :tenant, :authorize?, :tracer])
)
|> Ash.Query.filter(filters)
else
@ -200,7 +200,7 @@ defmodule Ash.CodeInterface do
|> Ash.Query.for_read(
unquote(action.name),
input,
Keyword.take(opts, [:actor, :tenant, :authorize?])
Keyword.take(opts, [:actor, :tenant, :authorize?, :tracer])
)
end
@ -264,7 +264,7 @@ defmodule Ash.CodeInterface do
|> Ash.Query.for_read(
unquote(action.name),
input,
Keyword.take(opts, [:actor, :tenant, :authorize?])
Keyword.take(opts, [:actor, :tenant, :authorize?, :tracer])
)
|> Ash.Query.filter(filters)
else
@ -273,7 +273,7 @@ defmodule Ash.CodeInterface do
|> Ash.Query.for_read(
unquote(action.name),
input,
Keyword.take(opts, [:actor, :tenant, :authorize?])
Keyword.take(opts, [:actor, :tenant, :authorize?, :tracer])
)
end
@ -326,12 +326,12 @@ defmodule Ash.CodeInterface do
|> Ash.Changeset.for_create(
unquote(action.name),
input,
Keyword.take(opts, [:actor, :tenant, :authorize?])
Keyword.take(opts, [:actor, :tenant, :authorize?, :tracer])
)
unquote(api).create(
changeset,
Keyword.drop(opts, [:actor, :changeset, :tenant, :authorize?])
Keyword.drop(opts, [:actor, :changeset, :tenant, :authorize?, :tracer])
)
end
end
@ -363,12 +363,12 @@ defmodule Ash.CodeInterface do
|> Ash.Changeset.for_create(
unquote(action.name),
input,
Keyword.take(opts, [:actor, :tenant, :authorize?])
Keyword.take(opts, [:actor, :tenant, :authorize?, :tracer])
)
unquote(api).create!(
changeset,
Keyword.drop(opts, [:actor, :changeset, :authorize?])
Keyword.drop(opts, [:actor, :changeset, :authorize?, :tracer])
)
end
end
@ -402,10 +402,13 @@ defmodule Ash.CodeInterface do
|> Ash.Changeset.for_update(
unquote(action.name),
input,
Keyword.take(opts, [:actor, :tenant, :authorize?])
Keyword.take(opts, [:actor, :tenant, :authorize?, :tracer])
)
unquote(api).update(changeset, Keyword.drop(opts, [:actor, :tenant, :authorize?]))
unquote(api).update(
changeset,
Keyword.drop(opts, [:actor, :tenant, :authorize?, :tracer])
)
end
end
@ -438,12 +441,12 @@ defmodule Ash.CodeInterface do
|> Ash.Changeset.for_update(
unquote(action.name),
input,
Keyword.take(opts, [:actor, :tenant, :authorize?])
Keyword.take(opts, [:actor, :tenant, :authorize?, :tracer])
)
unquote(api).update!(
changeset,
Keyword.drop(opts, [:actor, :tenant, :authorize?])
Keyword.drop(opts, [:actor, :tenant, :authorize?, :tracer])
)
end
end
@ -477,12 +480,12 @@ defmodule Ash.CodeInterface do
|> Ash.Changeset.for_destroy(
unquote(action.name),
input,
Keyword.take(opts, [:actor, :tenant, :authorize?])
Keyword.take(opts, [:actor, :tenant, :authorize?, :tracer])
)
unquote(api).destroy(
changeset,
Keyword.drop(opts, [:actor, :tenant, :authorize?])
Keyword.drop(opts, [:actor, :tenant, :authorize?, :tracer])
)
end
end
@ -516,12 +519,12 @@ defmodule Ash.CodeInterface do
|> Ash.Changeset.for_destroy(
unquote(action.name),
input,
Keyword.take(opts, [:actor, :tenant, :authorize?])
Keyword.take(opts, [:actor, :tenant, :authorize?, :tracer])
)
unquote(api).destroy!(
changeset,
Keyword.drop(opts, [:actor, :tenant, :authorize?])
Keyword.drop(opts, [:actor, :tenant, :authorize?, :tracer])
)
end
end

View file

@ -19,7 +19,7 @@ defmodule Ash.Error.Framework.ManualActionMissed do
# in the resource
action :special_create do
create :special_create do
manual? true
change MyApp.DoCreate
end

256
lib/ash/expr/expr.ex Normal file
View file

@ -0,0 +1,256 @@
defmodule Ash.Expr do
@moduledoc false
alias Ash.Query.{BooleanExpression, Not}
defmacro expr(do: body) do
quote do
Ash.Expr.expr(unquote(body))
end
end
defmacro expr(body) do
if Keyword.keyword?(body) do
quote do
unquote(body)
end
else
expr = do_expr(body)
quote do
unquote(expr)
end
end
end
@operator_symbols Ash.Query.Operator.operator_symbols()
@doc false
def do_expr(expr, escape? \\ true)
def do_expr({op, _, nil}, escape?) when is_atom(op) do
soft_escape(%Ash.Query.Ref{relationship_path: [], attribute: op}, escape?)
end
def do_expr({op, _, Elixir}, escape?) when is_atom(op) do
soft_escape(%Ash.Query.Ref{relationship_path: [], attribute: op}, escape?)
end
def do_expr({:^, _, [value]}, _escape?) do
value
end
def do_expr({{:., _, [Access, :get]}, _, [left, right]}, escape?) do
left = do_expr(left, false)
right = do_expr(right, false)
[left, right]
|> Ash.Query.Function.GetPath.new()
|> case do
{:ok, call} ->
soft_escape(call, escape?)
{:error, error} ->
raise error
end
end
def do_expr({{:., _, [_, _]} = left, _, []}, escape?) do
do_expr(left, escape?)
end
def do_expr({{:., _, [_, _]} = left, _, args}, escape?) do
args = Enum.map(args, &do_expr(&1, false))
case do_expr(left, escape?) do
{:%{}, [], parts} = other when is_list(parts) ->
if Enum.any?(parts, &(&1 == {:__struct__, Ash.Query.Ref})) do
ref = Map.new(parts)
soft_escape(
%Ash.Query.Call{
name: ref.attribute,
relationship_path: ref.relationship_path,
args: args,
operator?: false
},
escape?
)
else
other
end
%Ash.Query.Ref{} = ref ->
soft_escape(
%Ash.Query.Call{
name: ref.attribute,
relationship_path: ref.relationship_path,
args: args,
operator?: false
},
escape?
)
other ->
other
end
end
def do_expr({:ref, _, [field, path]}, escape?) do
ref =
case do_expr(path, false) do
%Ash.Query.Ref{attribute: head_attr, relationship_path: head_path} ->
case do_expr(field) do
%Ash.Query.Ref{attribute: tail_attribute, relationship_path: tail_relationship_path} ->
%Ash.Query.Ref{
relationship_path: head_path ++ [head_attr] ++ tail_relationship_path,
attribute: tail_attribute
}
other ->
%Ash.Query.Ref{relationship_path: head_path ++ [head_attr], attribute: other}
end
other ->
case do_expr(field, false) do
%Ash.Query.Ref{attribute: attribute, relationship_path: relationship_path} ->
%Ash.Query.Ref{
attribute: attribute,
relationship_path: List.wrap(other) ++ List.wrap(relationship_path)
}
other_field ->
%Ash.Query.Ref{attribute: other_field, relationship_path: other}
end
end
soft_escape(ref, escape?)
end
def do_expr({:ref, _, [field]}, escape?) do
ref =
case do_expr(field, false) do
%Ash.Query.Ref{} = ref ->
ref
other ->
%Ash.Query.Ref{attribute: other, relationship_path: []}
end
soft_escape(ref, escape?)
end
def do_expr({:., _, [left, right]} = ref, escape?) when is_atom(right) do
case do_ref(left, right) do
%Ash.Query.Ref{} = ref ->
soft_escape(ref, escape?)
:error ->
raise "Invalid reference! #{Macro.to_string(ref)}"
end
end
def do_expr({op, _, args}, escape?) when op in [:and, :or] do
args = Enum.map(args, &do_expr(&1, false))
soft_escape(BooleanExpression.optimized_new(op, Enum.at(args, 0), Enum.at(args, 1)), escape?)
end
def do_expr({op, _, [_, _] = args}, escape?)
when is_atom(op) and op in @operator_symbols do
args = Enum.map(args, &do_expr(&1, false))
soft_escape(%Ash.Query.Call{name: op, args: args, operator?: true}, escape?)
end
def do_expr({left, _, [{op, _, [right]}]}, escape?)
when is_atom(op) and op in @operator_symbols and is_atom(left) and left != :not do
args = Enum.map([{left, [], nil}, right], &do_expr(&1, false))
soft_escape(%Ash.Query.Call{name: op, args: args, operator?: true}, escape?)
end
def do_expr({:not, _, [expression]}, escape?) do
expression = do_expr(expression, false)
soft_escape(Not.new(expression), escape?)
end
def do_expr({:cond, _, [[do: options]]}, escape?) do
options
|> Enum.map(fn {:->, _, [condition, result]} ->
{condition, result}
end)
|> cond_to_if_tree()
|> do_expr(escape?)
end
def do_expr({op, _, args}, escape?) when is_atom(op) and is_list(args) do
last_arg = List.last(args)
args =
if Keyword.keyword?(last_arg) && Keyword.has_key?(last_arg, :do) do
Enum.map(:lists.droplast(args), &do_expr(&1, false)) ++
[
Enum.map(last_arg, fn {key, arg_value} ->
{key, do_expr(arg_value, false)}
end)
]
else
Enum.map(args, &do_expr(&1, false))
end
soft_escape(%Ash.Query.Call{name: op, args: args, operator?: false}, escape?)
end
def do_expr({left, _, _}, escape?) when is_tuple(left), do: do_expr(left, escape?)
def do_expr(other, _), do: other
defp cond_to_if_tree([{condition, result}]) do
{:if, [], [cond_condition(condition), [do: result]]}
end
defp cond_to_if_tree([{condition, result} | rest]) do
{:if, [], [cond_condition(condition), [do: result, else: cond_to_if_tree(rest)]]}
end
defp cond_condition([condition]) do
condition
end
defp cond_condition([condition | rest]) do
{:and, [], [condition, cond_condition(rest)]}
end
defp soft_escape(%_{} = val, _) do
{:%{}, [], Map.to_list(val)}
end
defp soft_escape(other, _), do: other
defp do_ref({left, _, nil}, right) do
%Ash.Query.Ref{relationship_path: [left], attribute: right}
end
defp do_ref({{:., _, [_, _]} = left, _, _}, right) do
do_ref(left, right)
end
defp do_ref({:., _, [left, right]}, far_right) do
case do_ref(left, right) do
%Ash.Query.Ref{relationship_path: path, attribute: attribute} = ref ->
%{ref | relationship_path: path ++ [attribute], attribute: far_right}
:error ->
:error
end
end
defp do_ref({left, _, _}, right) when is_atom(left) and is_atom(right) do
%Ash.Query.Ref{relationship_path: [left], attribute: right}
end
defp do_ref(_left, _right) do
:error
end
end

View file

@ -23,7 +23,6 @@ defmodule Ash.Filter do
Eq,
GreaterThan,
GreaterThanOrEqual,
Has,
In,
LessThan,
LessThanOrEqual,
@ -48,8 +47,7 @@ defmodule Ash.Filter do
LessThan,
GreaterThan,
LessThanOrEqual,
GreaterThanOrEqual,
Has
GreaterThanOrEqual
] ++ Ash.Query.Operator.Basic.operator_modules()
@builtins @functions ++ @operators

View file

@ -139,6 +139,7 @@ defmodule Ash.Filter.Runtime do
end)
end
@doc false
def do_match(record, expression) do
case expression do
%Ash.Filter{expression: expression} ->
@ -228,7 +229,7 @@ defmodule Ash.Filter.Runtime do
end
defp resolve_expr(%Ref{} = ref, record) do
{:ok, resolve_ref(ref, record)}
resolve_ref(ref, record)
end
defp resolve_expr(%BooleanExpression{left: left, right: right}, record) do
@ -290,17 +291,36 @@ defmodule Ash.Filter.Runtime do
|> get_related(path)
|> case do
nil ->
nil
{:ok, nil}
[] ->
nil
{:ok, nil}
%struct{} = record ->
if Spark.Dsl.is?(struct, Ash.Resource) do
if Ash.Resource.Info.attribute(struct, name) do
if Ash.Resource.selected?(record, name) do
{:ok, Map.get(record, name)}
else
:unknown
end
else
if Ash.Resource.loaded?(record, name) do
{:ok, Map.get(record, name)}
else
:unknown
end
end
else
{:ok, Map.get(record, name)}
end
record ->
Map.get(record, name)
{:ok, Map.get(record, name)}
end
end
defp resolve_ref(value, _record), do: value
defp resolve_ref(value, _record), do: {:ok, value}
defp path_to_load([first]), do: {first, []}

View file

@ -92,7 +92,7 @@ defmodule Ash.Flow.Dsl do
name: :run_flow,
describe: """
Runs another flow as part of the current flow.
The return value of the flow is the return value of the step.
The return value of the step is the return value of the flow.
""",
links: [],
examples: [
@ -176,12 +176,7 @@ defmodule Ash.Flow.Dsl do
],
description: [
type: :string,
doc: "A description of the flow",
links: [
guides: [
"ash:guide:Documentation"
]
]
doc: "A description of the flow"
],
trace_name: [
type: :string,
@ -228,13 +223,6 @@ defmodule Ash.Flow.Dsl do
}
end
update :update_user, User, :update do
record
end
over range(1, arg(:count))
output :create_user
create :create_user, Org, :create do
input %{
first_name: {Faker.Person, :first_name, []},

View file

@ -2,12 +2,6 @@ defmodule Ash.Flow do
@moduledoc """
A flow is a static definition of a set of steps to be run.
Flows are backed by `executors`, which determine how the workflow steps are performed.
The executor can be overriden on invocation, but not all executors will be capable of running all flows.
As of this writing, the default executor is the only one. It runs all steps in parallel unless values must be provided from one step to another.
Ash.Flow is still in its early days, and is not as stable or complete as the rest of the framework.
See the {{link:ash:guide:Flows}} guide for more.
"""

View file

@ -49,11 +49,7 @@ defmodule Ash.Flow.Step do
doc: """
A description for the step.
""",
links: [
guides: [
"ash:guide:Documentation"
]
]
links: []
]
]
end

View file

@ -112,17 +112,7 @@ defmodule Ash.Notifier.PubSub do
@moduledoc """
A pubsub notifier extension.
An Mnesia backed Ash Datalayer.
In your application intialization, you will need to call `Mnesia.create_schema([node()])`.
Additionally, you will want to create your mnesia tables there.
This data layer is *extremely unoptimized*, fetching all records from a table and filtering them
in memory. This is primarily used for testing the behavior of data layers in Ash. If it was improved,
it could be a viable data layer.
<!--- ash-hq-hide-start--> <!--- -->
## DSL Documentation

View file

@ -1,17 +0,0 @@
defmodule Ash.Query.Operator.Has do
@moduledoc """
left has 1
this predicate matches if the right is in the list on the left
This actually just reverses the inputs and uses `in`.
"""
use Ash.Query.Operator,
operator: :has,
predicate?: true,
types: [[{:array, :any}, :same]]
def new(left, right) do
Ash.Query.Operator.In.new(right, left)
end
end

View file

@ -215,8 +215,7 @@ defmodule Ash.Query.Operator do
Ash.Query.Operator.IsNil,
Ash.Query.Operator.LessThanOrEqual,
Ash.Query.Operator.LessThan,
Ash.Query.Operator.NotEq,
Ash.Query.Operator.Has
Ash.Query.Operator.NotEq
] ++ Ash.Query.Operator.Basic.operator_modules()
end

View file

@ -71,7 +71,7 @@ defmodule Ash.Query do
}
alias Ash.Error.Load.{InvalidQuery, NoSuchRelationship}
alias Ash.Query.{Aggregate, BooleanExpression, Calculation, Not}
alias Ash.Query.{Aggregate, Calculation}
require Ash.Tracer
@ -174,7 +174,8 @@ defmodule Ash.Query do
Ash.Query.do_filter(unquote(query), unquote(expression))
end
else
expr = do_expr(expression)
require Ash.Expr
expr = Ash.Expr.do_expr(expression)
quote do
Ash.Query.do_filter(unquote(query), List.wrap(unquote(expr)))
@ -497,257 +498,13 @@ defmodule Ash.Query do
@doc """
Creates an Ash expression for evaluation later.
"""
defmacro expr(do: body) do
quote do
Ash.Query.expr(unquote(body))
end
end
defmacro expr(body) do
if Keyword.keyword?(body) do
quote do
unquote(body)
end
else
expr = do_expr(body)
quote do
unquote(expr)
end
quote do
require Ash.Expr
Ash.Expr.expr(unquote(body))
end
end
@operator_symbols Ash.Query.Operator.operator_symbols()
defp do_expr(expr, escape? \\ true)
defp do_expr({op, _, nil}, escape?) when is_atom(op) do
soft_escape(%Ash.Query.Ref{relationship_path: [], attribute: op}, escape?)
end
defp do_expr({op, _, Elixir}, escape?) when is_atom(op) do
soft_escape(%Ash.Query.Ref{relationship_path: [], attribute: op}, escape?)
end
defp do_expr({:^, _, [value]}, _escape?) do
value
end
defp do_expr({{:., _, [Access, :get]}, _, [left, right]}, escape?) do
left = do_expr(left, false)
right = do_expr(right, false)
[left, right]
|> Ash.Query.Function.GetPath.new()
|> case do
{:ok, call} ->
soft_escape(call, escape?)
{:error, error} ->
raise error
end
end
defp do_expr({{:., _, [_, _]} = left, _, []}, escape?) do
do_expr(left, escape?)
end
defp do_expr({{:., _, [_, _]} = left, _, args}, escape?) do
args = Enum.map(args, &do_expr(&1, false))
case do_expr(left, escape?) do
{:%{}, [], parts} = other when is_list(parts) ->
if Enum.any?(parts, &(&1 == {:__struct__, Ash.Query.Ref})) do
ref = Map.new(parts)
soft_escape(
%Ash.Query.Call{
name: ref.attribute,
relationship_path: ref.relationship_path,
args: args,
operator?: false
},
escape?
)
else
other
end
%Ash.Query.Ref{} = ref ->
soft_escape(
%Ash.Query.Call{
name: ref.attribute,
relationship_path: ref.relationship_path,
args: args,
operator?: false
},
escape?
)
other ->
other
end
end
defp do_expr({:ref, _, [field, path]}, escape?) do
ref =
case do_expr(path, false) do
%Ash.Query.Ref{attribute: head_attr, relationship_path: head_path} ->
case do_expr(field) do
%Ash.Query.Ref{attribute: tail_attribute, relationship_path: tail_relationship_path} ->
%Ash.Query.Ref{
relationship_path: head_path ++ [head_attr] ++ tail_relationship_path,
attribute: tail_attribute
}
other ->
%Ash.Query.Ref{relationship_path: head_path ++ [head_attr], attribute: other}
end
other ->
case do_expr(field, false) do
%Ash.Query.Ref{attribute: attribute, relationship_path: relationship_path} ->
%Ash.Query.Ref{
attribute: attribute,
relationship_path: List.wrap(other) ++ List.wrap(relationship_path)
}
other_field ->
%Ash.Query.Ref{attribute: other_field, relationship_path: other}
end
end
soft_escape(ref, escape?)
end
defp do_expr({:ref, _, [field]}, escape?) do
ref =
case do_expr(field, false) do
%Ash.Query.Ref{} = ref ->
ref
other ->
%Ash.Query.Ref{attribute: other, relationship_path: []}
end
soft_escape(ref, escape?)
end
defp do_expr({:., _, [left, right]} = ref, escape?) when is_atom(right) do
case do_ref(left, right) do
%Ash.Query.Ref{} = ref ->
soft_escape(ref, escape?)
:error ->
raise "Invalid reference! #{Macro.to_string(ref)}"
end
end
defp do_expr({op, _, args}, escape?) when op in [:and, :or] do
args = Enum.map(args, &do_expr(&1, false))
soft_escape(BooleanExpression.optimized_new(op, Enum.at(args, 0), Enum.at(args, 1)), escape?)
end
defp do_expr({op, _, [_, _] = args}, escape?)
when is_atom(op) and op in @operator_symbols do
args = Enum.map(args, &do_expr(&1, false))
soft_escape(%Ash.Query.Call{name: op, args: args, operator?: true}, escape?)
end
defp do_expr({left, _, [{op, _, [right]}]}, escape?)
when is_atom(op) and op in @operator_symbols and is_atom(left) and left != :not do
args = Enum.map([{left, [], nil}, right], &do_expr(&1, false))
soft_escape(%Ash.Query.Call{name: op, args: args, operator?: true}, escape?)
end
defp do_expr({:not, _, [expression]}, escape?) do
expression = do_expr(expression, false)
soft_escape(Not.new(expression), escape?)
end
defp do_expr({:cond, _, [[do: options]]}, escape?) do
options
|> Enum.map(fn {:->, _, [condition, result]} ->
{condition, result}
end)
|> cond_to_if_tree()
|> do_expr(escape?)
end
defp do_expr({op, _, args}, escape?) when is_atom(op) and is_list(args) do
last_arg = List.last(args)
args =
if Keyword.keyword?(last_arg) && Keyword.has_key?(last_arg, :do) do
Enum.map(:lists.droplast(args), &do_expr(&1, false)) ++
[
Enum.map(last_arg, fn {key, arg_value} ->
{key, do_expr(arg_value, false)}
end)
]
else
Enum.map(args, &do_expr(&1, false))
end
soft_escape(%Ash.Query.Call{name: op, args: args, operator?: false}, escape?)
end
defp do_expr({left, _, _}, escape?) when is_tuple(left), do: do_expr(left, escape?)
defp do_expr(other, _), do: other
defp cond_to_if_tree([{condition, result}]) do
{:if, [], [cond_condition(condition), [do: result]]}
end
defp cond_to_if_tree([{condition, result} | rest]) do
{:if, [], [cond_condition(condition), [do: result, else: cond_to_if_tree(rest)]]}
end
defp cond_condition([condition]) do
condition
end
defp cond_condition([condition | rest]) do
{:and, [], [condition, cond_condition(rest)]}
end
defp soft_escape(%_{} = val, _) do
{:%{}, [], Map.to_list(val)}
end
defp soft_escape(other, _), do: other
defp do_ref({left, _, nil}, right) do
%Ash.Query.Ref{relationship_path: [left], attribute: right}
end
defp do_ref({{:., _, [_, _]} = left, _, _}, right) do
do_ref(left, right)
end
defp do_ref({:., _, [left, right]}, far_right) do
case do_ref(left, right) do
%Ash.Query.Ref{relationship_path: path, attribute: attribute} = ref ->
%{ref | relationship_path: path ++ [attribute], attribute: far_right}
:error ->
:error
end
end
defp do_ref({left, _, _}, right) when is_atom(left) and is_atom(right) do
%Ash.Query.Ref{relationship_path: [left], attribute: right}
end
defp do_ref(_left, _right) do
:error
end
@doc """
Ensure that only the specified *attributes* are present in the results.
@ -815,8 +572,9 @@ defmodule Ash.Query do
"""
defmacro equivalent_to(query, expr) do
quote do
require Ash.Expr
query = unquote(query)
expr = unquote(do_expr(expr))
expr = unquote(Ash.Expr.do_expr(expr))
require Ash.Query
case Ash.Query.superset_of(query, expr) do
@ -851,8 +609,8 @@ defmodule Ash.Query do
defmacro superset_of(query, expr) do
quote do
query = unquote(query)
require Ash.Query
expr = unquote(do_expr(expr))
require Ash.Expr
expr = unquote(Ash.Expr.do_expr(expr))
left_filter = query.filter
{:ok, left_expression} =
@ -898,7 +656,8 @@ defmodule Ash.Query do
defmacro subset_of(query, expr) do
quote do
query = unquote(query)
expr = unquote(do_expr(expr))
require Ash.Expr
expr = unquote(Ash.Expr.do_expr(expr))
right_filter = query.filter
{:ok, right_expression} =

View file

@ -113,7 +113,7 @@ defmodule Ash.Resource do
|> Macro.underscore()
|> String.to_atom()
def default_short_name do
def default_short_name() do
@default_short_name
end
@ -263,9 +263,7 @@ defmodule Ash.Resource do
def selected?(%resource{} = record, field) do
case get_metadata(record, :selected) do
nil ->
attribute = Ash.Resource.Info.attribute(resource, field)
attribute && (!attribute.private? || attribute.primary_key?)
!!Ash.Resource.Info.attribute(resource, field)
select ->
if field in select do


@ -34,11 +34,7 @@ defmodule Ash.Resource.Actions.Argument do
description: [
type: :string,
doc: "An optional description for the argument.",
links: [
guides: [
"ash:guide:Documentation"
]
]
links: []
],
constraints: [
type: :keyword_list,


@ -38,11 +38,7 @@ defmodule Ash.Resource.Actions.Metadata do
description: [
type: :string,
doc: "An optional description for the metadata.",
links: [
guides: [
"ash:guide:Documentation"
]
]
links: []
],
allow_nil?: [
type: :boolean,


@ -36,11 +36,7 @@ defmodule Ash.Resource.Actions.Read do
type: :any,
doc:
"A filter template that will be applied whenever the action is used. See `Ash.Filter` for more on templates",
links: [
guides: [
"ash:guide:Filters"
]
]
links: []
],
manual: [
type: {:spark_behaviour, Ash.Resource.ManualRead},


@ -17,11 +17,7 @@ defmodule Ash.Resource.Actions.SharedOptions do
description: [
type: :string,
doc: "An optional description for the action",
links: [
guides: [
"ash:guide:Documentation"
]
]
links: []
],
transaction?: [
type: :boolean,


@ -58,9 +58,7 @@ defmodule Ash.Resource.Attribute do
description: [
type: :string,
doc: "An optional description for the attribute.",
links: [
modules: ["ash:guide:Documentation"]
]
links: []
],
sensitive?: [
type: :boolean,


@ -62,11 +62,7 @@ defmodule Ash.Resource.Calculation do
],
description: [
type: :string,
links: [
guides: [
"ash:guide:Documentation"
]
],
links: [],
doc: "An optional description for the calculation"
],
private?: [


@ -774,20 +774,12 @@ defmodule Ash.Resource.Dsl do
type: :string,
doc:
"A human readable description of the resource, to be used in generated documentation",
links: [
guides: [
"ash:guide:Documentation"
]
]
links: []
],
base_filter: [
type: :any,
doc: "A filter statement to be applied to any queries on the resource",
links: [
guides: [
"ash:guide:Filters"
]
]
links: []
],
default_context: [
type: :any,


@ -41,11 +41,7 @@ defmodule Ash.Resource.Identity do
description: [
type: :string,
doc: "An optional description for the identity",
links: [
guides: [
"ash:guide:Documentation"
]
]
links: []
],
message: [
type: :string,


@ -39,7 +39,11 @@ defmodule Ash.Resource.Interface do
],
actor: [
type: :any,
doc: "Set the actor for authorization"
doc: "set the actor for authorization"
],
tracer: [
type: :any,
doc: "set the tracer for the action"
],
authorize?: [
type: :boolean,
@ -105,14 +109,14 @@ defmodule Ash.Resource.Interface do
get?: [
type: :boolean,
doc: """
Expects to only receive a single result from a read action. Ignored for other action types.
Expects to only receive a single result from a read action, and returns a single result instead of a list. Ignored for other action types.
""",
links: []
],
get_by: [
type: {:list, :atom},
doc: """
Takes a list of fields and adds those fields as arguments, which will then be used to filter. Ignored for non-read actions.
Takes a list of fields and adds those fields as arguments, which will then be used to filter. Sets `get?` to true automatically. Ignored for non-read actions.
""",
links: []
],


@ -15,9 +15,7 @@ defmodule Ash.Resource.Relationships.SharedOptions do
description: [
type: :string,
doc: "An optional description for the relationship",
links: [
modules: ["ash:guide:Documentation"]
]
links: []
],
destination_attribute: [
type: :atom,
@ -92,11 +90,7 @@ defmodule Ash.Resource.Relationships.SharedOptions do
doc: """
The API module to use when working with the related entity.
""",
links: [
guides: [
"ash:guide:Multiple Apis"
]
]
links: []
],
filter: [
type: :any,


@ -1,13 +0,0 @@
# Authorization
## Ash Policy Authorizer
Generally speaking, you will want to use `Ash.Policy.Authorizer` to authorize access to your resources.
At one point, it was a separate package but it is now built directly into Ash.
For usage, see the policies guide.
## Implementing a custom authorizer
Implementing a custom authorizer is pretty complex. Instead of writing a guide, it would be best to just have some discussions if/when someone thinks that they need one. Make an issue and we'll talk it over.


@ -1,66 +0,0 @@
# Calculations
Calculations in Ash allow for displaying complex values as a top level value of a resource.
They are relatively limited in their current form, supporting only functional calculations,
where you provide a module that takes a list of records and returns a list of values for that
calculation. Eventually, there will be support for calculations that can be embedded into the
data layer (for things like Postgres), which will allow sorting and filtering on calculated
data.
## Declaring calculations on a resource
Example:
```elixir
defmodule Concat do
  # An example concatenation calculation, that accepts the delimiter as an argument,
  # and the fields to concatenate as options
  use Ash.Calculation, type: :string

  # Optional callback that verifies the passed in options (and optionally transforms them)
  @impl true
  def init(opts) do
    if opts[:keys] && is_list(opts[:keys]) && Enum.all?(opts[:keys], &is_atom/1) do
      {:ok, opts}
    else
      {:error, "Expected a `keys` option for which keys to concat"}
    end
  end

  @impl true
  def calculate(records, opts, %{separator: separator}) do
    Enum.map(records, fn record ->
      Enum.map_join(opts[:keys], separator, fn key ->
        to_string(Map.get(record, key))
      end)
    end)
  end
end

# Usage in a resource
calculations do
  calculate :full_name, {Concat, keys: [:first_name, :last_name]} do
    # You currently need to use the [allow_empty?: true, trim?: false] constraints here.
    # The separator could be an empty string or require a leading or trailing space,
    # but would be trimmed or even set to `nil` without the constraints.
    argument :separator, :string, constraints: [allow_empty?: true, trim?: false]
  end
end
```
See the documentation for the calculations section in `Ash.Resource.Dsl` and the `Ash.Calculation` docs for more information.
The calculations declared on a resource provide a set of named calculations that can be used by extensions.
They can also be loaded in the query using `Ash.Query.load/2`, or after the fact using `c:Ash.Api.load/3`. Calculations declared on the resource will be keys in the resource's struct.
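For instance, a minimal sketch (assuming the `full_name` calculation above and a hypothetical `MyApi` module):

```elixir
# Load the calculation in the query
User
|> Ash.Query.load(:full_name)
|> MyApi.read!()

# Or load it on records after the fact
MyApi.load!(users, :full_name)
```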
## Custom calculations in the query
Example:
```elixir
User
|> Ash.Query.new()
|> Ash.Query.calculate(:full_name, {Concat, keys: [:first_name, :last_name]}, :string, %{separator: ","})
```
See the documentation for `Ash.Query.calculate/4` for more information.


@ -1,45 +0,0 @@
# Improving Compile Times
In previous versions of Ash, the standard way to configure the list of resources for an Api module looked like this:
```elixir
defmodule MyApp.MyApi do
  use Ash.Api

  resources do
    resource MyApp.MyResource
    ...
  end
end
```
This caused many compilation dependency issues, causing slow compile times when changing single files, and could also potentially lead to deadlocks.
The preferred way of doing this now looks like this:
```elixir
# Define a registry module
defmodule MyApp.MyApi.Registry do
  use Ash.Registry,
    extensions: Ash.Registry.ResourceValidations

  entries do
    entry MyApp.MyResource
    ...
  end
end

defmodule MyApp.MyApi do
  use Ash.Api, otp_app: :my_app
end

# in `config/config.exs`
config :my_app, MyApp.MyApi,
  resources: [
    registry: MyApp.MyApi.Registry
  ]
```
This will prevent a bunch of cross-concern compile time dependencies, allowing for much faster compile times in general.


@ -60,12 +60,11 @@ end
When referencing related values, if the reference is a `has_one` or `belongs_to`, the filter does exactly what it looks like (matches if the related value matches). If it is a `has_many` or a `many_to_many`, it matches if any of the related records match.
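For instance, a minimal sketch (with hypothetical `Post`/`comments`/`points` names) of a filter over a `has_many` relationship:

```elixir
require Ash.Query

# Matches posts where at least one comment has more than 10 points
Post |> Ash.Query.filter(comments.points > 10)
```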
### Referencing aggregates and calculations
Aggregates are simple, insofar as all aggregates can be referenced in filter expressions (if you are using a data layer that supports it).
For calculations, only those that define an expression can be referenced in other expressions. See the section below on declaring calculations with expressions.
For calculations, only those that define an expression can be referenced in other expressions.
Here are some examples:
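As one sketch (assuming Ash's `expr/1` calculation syntax and hypothetical `first_name`/`last_name` attributes), an expression calculation can be declared and then referenced in a filter:

```elixir
calculations do
  # Expression calculations can run in the data layer, so other
  # expressions (including filters) may reference them
  calculate :full_name, :string, expr(first_name <> " " <> last_name)
end

# Elsewhere, e.g.:
#   User |> Ash.Query.filter(full_name == "Zach Daniel")
```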


@ -1,20 +0,0 @@
# Resource Formatting
Each extension provides its own formatter configuration. You'll want to update your own `.formatter.exs` to import those configurations. For example:
```elixir
# .formatter.exs
[
  import_deps: [
    :ash,
    :ash_postgres,
    :ash_json_api,
    :ash_graphql
  ],
  inputs: ["*.{ex,exs}", "priv/*/seeds.exs", "{config,lib,test}/**/*.{ex,exs}"],
  subdirectories: ["priv/*/migrations"]
]
```
There is no support for automatically generating a `.formatter.exs` for _custom_ extensions, but if you're developing an extension library you can use the `mix ash.formatter` task to
automatically generate a formatter configuration for your DSL. Eventually, we will want to add support for _adding_ to a `.formatter.exs` from custom extensions.


@ -1,40 +0,0 @@
# Identities
Identities can be used to describe the ways that a resource is uniquely identified. For example, you may have a user resource that has an `id` primary key, but is uniquely identifiable via the `email` attribute as well.
To configure this, add an `identities` block to your resource. For example:
```elixir
identities do
  identity :unique_email, [:email]
end
```
## Effects
Identities are used in various ways across Ash and its extensions. This list is not necessarily exhaustive:
### Ash
* Identities can be used with `c:Ash.Api.get/3`, e.g. `MyApi.get(User, [email: "foo@bar.com"])`
### AshPostgres
* The [migration generator](https://hexdocs.pm/ash_postgres/Mix.Tasks.AshPostgres.GenerateMigrations.html) creates unique constraints for identities
### AshJsonApi
* Get routes can be configured to use a specific identity, creating a route like `GET /users/foo@bar.com`
### AshGraphql
* Get queries and mutations can be configured to use a specific identity, to create a query like the following. (Arbitrary filtering is supported on list queries; this is for creating queries that return a single result.)
```graphql
query {
  getUser(email: "foo@bar.com") {
    id
  }
}
```


@ -1,111 +0,0 @@
# Managing Relationships
In Ash, managing related data is done via `Ash.Changeset.manage_relationship/4`. There are various ways to leverage the functionality expressed there. If you are working with changesets directly, you can call that function. However, if you want that logic to be portable (e.g. available in `ash_graphql` mutations and `ash_json_api` actions), then you want to use the following `argument` + `change` pattern:
```elixir
actions do
  update :update do
    argument :add_comment, :map do
      allow_nil? false
    end

    argument :tags, {:array, :uuid} do
      allow_nil? false
    end

    # First argument is the name of the action argument to use
    # Second argument is the relationship to be managed
    # Third argument is options. For more, see `Ash.Changeset.manage_relationship/4`. This accepts the same options.
    change manage_relationship(:add_comment, :comments, type: :create)

    # The second argument can be omitted, as the argument name is the same as the relationship
    change manage_relationship(:tags, type: :replace)
  end
end
```
With this, those arguments can be used simply in action input:
```elixir
post
|> Ash.Changeset.for_update(:update, tags: [tag1_uuid, tag2_uuid], add_comment: %{text: "comment text"})
|> MyApi.update!()
```
It gets even simpler if you are using the `code_interface`, for example:
```elixir
# With this in your resource
code_interface do
  define :update_post, action: :update
end
# You can use it like so:
MyApi.update_post!(%{tags: [tag1_uuid, tag2_uuid], add_comment: %{text: "comment text"}})
```
These arguments will also be exposed as fields in `ash_graphql` and `ash_json_api`.
## Argument Types
Notice how we provided a map as input to `add_comment`. The only other types supported by `manage_relationship` are values that map to the primary key of the resource, which is why `tags` accepted a list of `:uuid`s. `%{text: "comment text"}` must be a map,
as it will eventually be passed to a create action on the `Comment` resource. The ergonomics of this are still being worked out, but there are ways to make your action accept input like `add_comment: "comment text"`. For now, the only way to do it is to add a private argument to hold the proper input for `add_comment`, and a change to set that argument based on the provided value. For example:
```elixir
defmodule MyApp.Post.Changes.SetAddCommentArgument do
  use Ash.Resource.Change

  def change(changeset, _, _) do
    case Ash.Changeset.fetch_argument(changeset, :add_comment) do
      {:ok, comment_text} ->
        Ash.Changeset.set_argument(changeset, :private_add_comment, %{text: comment_text})

      :error ->
        changeset
    end
  end
end

actions do
  update :update do
    argument :add_comment, :string do
      allow_nil? false
    end

    argument :private_add_comment, :map do
      # Extensions know not to expose private arguments
      private? true
    end

    change MyApp.Post.Changes.SetAddCommentArgument
    change manage_relationship(:private_add_comment, :comments, type: :create)
  end
end
```
## Graphql Input Types
In `ash_graphql`, a type of `:map` simply translates to `:json`. Right now, there is nothing that can automatically generate the requisite input object for a given argument that eventually gets passed to `manage_relationship/3`. So if you want typed input objects to use with those arguments, you will need to use a custom map type implementation, and have it refer to a custom `absinthe` type. Thankfully, `absinthe` makes it very easy to define new input_object types. For example:
```elixir
defmodule MyApp.Types.CreateCommentInput do
  use Ash.Type

  def graphql_input_type, do: :create_comment_input

  defdelegate storage_type, to: Ash.Type.Map
  defdelegate cast_input(value, constraints), to: Ash.Type.Map
  defdelegate cast_stored(value, constraints), to: Ash.Type.Map
  defdelegate dump_to_native(value, constraints), to: Ash.Type.Map
end
```
Given that type definition, you could then add the following to your absinthe schema:
```elixir
input_object :create_comment_input do
  field :text, :string
end
```
We're open to suggestions on making this process more ergonomic in general.


@ -1,76 +0,0 @@
# Multitenancy
Multitenancy is the idea of splitting up your data into discrete areas, typically by customer. One of the most common examples is splitting a Postgres database into "schemas", one for each customer that you have. Then, when making any queries, you always specify the "schema" you are querying, and you never need to worry about data crossing over between customers. The biggest benefits of this kind of strategy are the simplification of authorization logic and better performance: instead of all queries from all customers using the same large tables, each customer uses its own smaller tables. Another benefit is that it is much easier to delete a single customer's data on request.
In Ash, there are two primary strategies for implementing multitenancy. The first (and simplest), called `:attribute`, works for any data layer that supports filtering and requires very little maintenance/mental overhead: a given attribute is expected to line up with the `tenant`. The second, called `:context`, is based on the data layer backing your resource. For information on
context-based multitenancy, see the documentation of your data layer. For example, `AshPostgres` uses postgres schemas. While the `:attribute` strategy is simple to implement, it also offers fewer advantages, primarily acting as another way to ensure your data is filtered to the correct tenant.
## Attribute Multitenancy
```elixir
defmodule MyApp.Users do
  use Ash.Resource, ...

  multitenancy do
    strategy :attribute
    attribute :organization_id
  end

  ...

  relationships do
    belongs_to :organization, MyApp.Organization
  end
end
```
In this case, if you were to try to run a query without specifying a tenant, you would get an error telling you that the tenant is required.
Setting the tenant when using the code API is done via `Ash.Query.set_tenant/2` and `Ash.Changeset.set_tenant/2`. If you are using an extension, such as `AshJsonApi` or `AshGraphql` the method of setting tenant context is explained in that extension's documentation.
Example usage of the above:
```elixir
# Error when not setting a tenant
MyApp.Users
|> Ash.Query.filter(name == "fred")
|> MyApi.read!()

** (Ash.Error.Unknown)
* "Queries against the MyApp.Users resource require a tenant to be specified"
    (ash 1.22.0) lib/ash/api/api.ex:944: Ash.Api.unwrap_or_raise!/2

# Automatically filtering by `organization_id == 1`
MyApp.Users
|> Ash.Query.filter(name == "fred")
|> Ash.Query.set_tenant(1)
|> MyApi.read!()

[...]

# Automatically setting `organization_id` to `1`
MyApp.Users
|> Ash.Changeset.new(name: "fred")
|> Ash.Changeset.set_tenant(1)
|> MyApi.create!()

%MyApp.User{organization_id: 1}
```
If you want to enable running queries _without_ a tenant as well as queries with a tenant, the `global?` option supports this. You will likely need to incorporate this ability into your authorization rules though, to ensure that users from one tenant can't access another tenant's data.
```elixir
multitenancy do
  strategy :attribute
  attribute :organization_id
  global? true
end
```
You can also provide the `parse_attribute?` option if the tenant being set doesn't exactly match the attribute value, e.g. the tenant is `org_10` and the attribute is `organization_id`, which requires just `10`.
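A sketch of that case (an assumption: the option is shown here taking a `{module, function, args}` tuple — check the multitenancy DSL docs for the exact name and form in your Ash version):

```elixir
multitenancy do
  strategy :attribute
  attribute :organization_id
  # Assumed MFA form: converts a tenant like "org_10" into 10
  parse_attribute {MyApp.Tenant, :parse, []}
end

# Hypothetical helper module
defmodule MyApp.Tenant do
  def parse("org_" <> id), do: String.to_integer(id)
end
```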
## Context Multitenancy
Context multitenancy allows the data layer to dictate how multitenancy works. For example, a CSV data layer might implement multitenancy by saving the file with different suffixes, while an API-wrapping data layer might use different subdomains for the tenant.
For `AshPostgres` context multitenancy, which uses postgres schemas, see the [guide](https://hexdocs.pm/ash_postgres/multitenancy.html).


@ -1,85 +0,0 @@
# Notifiers
## Built-in Notifiers
- PubSub: `Ash.Notifier.PubSub`
## Creating a notifier
A notifier is a simple extension that must implement a single callback `notify/1`. Notifiers do not have to implement an Ash DSL extension, but they may in order to configure how that notifier should behave. See `Ash.Notifier.Notification` for the currently available fields. Notifiers should not do anything intensive synchronously. If any heavy work needs to be done, they should delegate to something else to handle the notification, like sending it to a GenServer or GenStage.
Eventually, there will likely be built-in notifiers that will make setting up a GenStage that reacts to your resource changes easy. Until then, you'll have to write your own.
For more information on creating a DSL extension to configure your notifier, see the docs for `Spark.Dsl.Extension`.
### Example notifier
```elixir
defmodule ExampleNotifier do
  use Ash.Notifier

  require Logger

  def notify(%Ash.Notifier.Notification{resource: resource, action: %{type: :create}, actor: actor}) do
    if actor do
      Logger.info("#{actor.id} created a #{resource}")
    else
      Logger.info("A non-logged in user created a #{resource}")
    end
  end

  # Notifications for other action types are ignored
  def notify(_notification), do: :ok
end
```
### Including a notifier in a resource
```elixir
defmodule MyResource do
  use Ash.Resource,
    notifiers: [ExampleNotifier]
end
```
## Transactions
For API calls involving resources whose data layer supports transactions (like Postgres), notifications are saved up and sent after the transaction is closed. For example, the API call below ultimately results in many database calls.
```elixir
Post
|> Ash.Changeset.new(%{})
|> Ash.Changeset.append_to_relationship(:related_posts, [1, 2, 3])
|> Ash.Changeset.remove_from_relationship(:related_posts, [4, 5])
|> Ash.Changeset.append_to_relationship(:comments, [10])
|> Api.update!()
```
Ash doesn't support bulk database operations yet, so it performs the following operations:
- a read of the currently related posts
- a read of the currently related comments
- a creation of a post_link to relate to 1
- a creation of a post_link to relate to 2
- a creation of a post_link to relate to 3
- a destruction of the post_link related to 4
- a destruction of the post_link related to 5
- an update to comment 10, to set its `post_id` to this post
If all three of these resources have notifiers configured, we need to send a notification for each operation (notifications are not sent for reads). For data consistency reasons, if a data layer supports transactions, all writes are done in a transaction. However, if a different process tries to read a record it has just received a notification about before the transaction has been closed, the information will be wrong. For this reason, Ash accumulates notifications until they can be sent.
If you need to perform multiple operations against your resources in your own transaction, you will have to handle that case yourself. To support this, `c:Ash.Api.create/2`, `c:Ash.Api.update/2` and `c:Ash.Api.destroy/2` support a `return_notifications?: true` option. This causes the API call to return `{:ok, result, notifications}` in the successful case. Here is an example of how you might use it:
```elixir
result =
  Ash.DataLayer.transaction(resource, fn ->
    {:ok, something, notifications1} = create_something()
    {:ok, result, notifications2} = create_another_thing(something)
    {:ok, notifications3} = destroy_something(something)

    {result, Enum.concat([notifications1, notifications2, notifications3])}
  end)

case result do
  # `transaction/2` wraps the function's return value in an `:ok` tuple
  {:ok, {value, notifications}} ->
    Ash.Notifier.notify(notifications)
    value

  {:error, error} ->
    handle_error(error)
end
```


@ -1,70 +0,0 @@
# Pagination
Pagination is configured at the action level. There are two kinds of pagination supported: `keyset` and `offset`. There are
pros and cons to each. An action can support both at the same time, or only one (or none). A full count of records can be
requested by passing `page: [count: true]`, but keep in mind that doing so requires running two queries, one of which is
a count of all matching records. Ash runs these in parallel, but it can still be quite expensive on large
datasets. For more information on the options for configuring actions to support pagination, see the [pagination section](Ash.Resource.Dsl.html#module-pagination) in `Ash.Resource.Dsl`.
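For instance, a minimal sketch of requesting a count (assuming an action with pagination enabled; the count is exposed on the returned page struct):

```elixir
# Request a full count alongside the first page of ten records
page = Api.read!(Resource, page: [limit: 10, count: true])

# The total count of matching records
page.count
```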
## Offset Pagination
Offset pagination is done by providing a `limit` and an `offset`. A `limit` is how many records should be returned on the page.
An `offset` is how many records from the beginning should be skipped. Using this, you might make requests like the following:
```elixir
# Get the first ten records
Api.read(Resource, page: [limit: 10])
# Get the second ten records
Api.read(Resource, page: [limit: 10, offset: 10])
# No need to do this in practice, see `c:Ash.Api.page/2`
```
### Offset Pros
- Simple to think about
- Possible to skip to a page by number. E.g. the 5th page of 10 records is `offset: 40`
- Easy to reason about what page you are currently on (if the total number of records is requested)
- Can go to the last page (even though, if done by using the full count, the data could have changed)
### Offset Cons
- Does not perform well on large datasets
- When moving between pages, if data was created or deleted, records may appear on multiple pages or be skipped
## Keyset Pagination
Keyset pagination is done via providing an `after` or `before` option, as well as a `limit`. The value of this option should be
a `keyset` that has been returned from a previous request. Keysets are returned when a request is made with a `limit` to an action
that supports `keyset` pagination, and they are stored in the `__metadata__` key of each record. The `keyset` is a special value that
can be passed into the `after` or `before` options, to get records that occur after or before.
For example:
```elixir
page = Api.read!(Resource, page: [limit: 10])
last_record = List.last(page.results)

# No need to do this in practice, see `c:Ash.Api.page/2`
next_page = Api.read!(Resource, page: [limit: 10, after: last_record.__metadata__.keyset])
```
### Important Limitation
Keyset pagination cannot currently be used in conjunction with aggregate and calculation sorting.
Combining them will result in an error on the query.
### Keyset Pros
- Performs very well on large datasets (assuming indices exist on the columns being sorted on)
- Behaves well as data changes. The record specified will always be the first or last item in the page
### Keyset Cons
- A bit more complex
- Can't go to a specific page number
- Can't use aggregate and calculation sorting
For more information on keyset vs offset based pagination, see:
- [Offset vs Seek Pagination](https://taylorbrazelton.com/posts/2019/03/offset-vs-seek-pagination/)