docs: a few small improvements for actions and policy guides. (#991)

James Harton 2024-04-08 09:40:36 +12:00 committed by GitHub
parent 063fc72747
commit 915fc03565
4 changed files with 28 additions and 19 deletions

@@ -35,6 +35,7 @@ Ash.bulk_create([%{title: "Foo"}, %{title: "Bar"}], Ticket, :open)
```
> ### Check the docs! {: .warning}
>
> Make sure to thoroughly read and understand the documentation in `Ash.bulk_create/4` before using. Read each option and note the default values. By default, bulk creates don't return records or errors, and don't emit notifications.
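If you do want those, a minimal sketch of opting back in (the option names come from `Ash.bulk_create/4`):

```elixir
Ash.bulk_create(
  [%{title: "Foo"}, %{title: "Bar"}],
  Ticket,
  :open,
  # opt back in to the data that bulk creates skip by default
  return_records?: true,
  return_errors?: true,
  notify?: true
)
```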
## Performance
@@ -45,7 +46,7 @@ Generally speaking, all regular Ash create actions are compatible (or can be mad
- Actions that reference arguments in changes, i.e. `change set_attribute(:attr, ^arg(:arg))`, will prevent us from using the `batch_change/3` behavior. This is usually not a problem; for instance, that change is lightweight and would not benefit from being optimized with `batch_change/3`.
-- If your action uses `after_action` hooks, or has `after_batch/3` logic defined for any of its changes, then we *must* ask the data layer to return the records it inserted. Again, this is not generally a problem because we throw away the results of each batch by default. If you are using `return_records?: true` then you are already requesting all of the results anyway.
+- If your action uses `after_action` hooks, or has `after_batch/3` logic defined for any of its changes, then we _must_ ask the data layer to return the records it inserted. Again, this is not generally a problem because we throw away the results of each batch by default. If you are using `return_records?: true` then you are already requesting all of the results anyway.
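Where batching does pay off, a change module can implement the batch callbacks named in the list above. This is a rough sketch, not the canonical implementation: it assumes the optional `batch_change/3` callback on `Ash.Resource.Change` receives the batch as a list of changesets, and the module name and slug logic are illustrative.

```elixir
defmodule MyApp.Changes.SlugifyTitle do
  use Ash.Resource.Change

  # per-changeset path, used when the action is not run as a batch
  @impl true
  def change(changeset, _opts, _context), do: slugify(changeset)

  # batch path: transform every changeset in the batch in one pass
  @impl true
  def batch_change(changesets, _opts, _context) do
    Enum.map(changesets, &slugify/1)
  end

  defp slugify(changeset) do
    title = Ash.Changeset.get_attribute(changeset, :title) || ""

    Ash.Changeset.change_attribute(
      changeset,
      :slug,
      title |> String.downcase() |> String.replace(~r/\s+/, "-")
    )
  end
end
```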
## Returning a Stream
@@ -67,7 +68,9 @@ end)
```
> ### Be careful with streams {: .warning}
-> Because streams are lazily evaluated, if you were to do something like this:
+>
+> Because streams are lazily evaluated, if you were to do something like this:
>
> ```elixir
> [input1, input2, ...] # has 300 things in it
> |> Ash.bulk_create(
@@ -79,6 +82,7 @@ end)
> )
> |> Enum.take(150) # stream has 300, but we only take 150
> ```
>
> What would happen is that we would insert 200 records. The stream would end after we process the first two batches of 100. Be sure you aren't using things like `Stream.take` or `Enum.take` to limit the number of items pulled from the stream, unless you actually want to limit the number of records created.
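If you genuinely want only 150 records created, a safer sketch is to limit the inputs before they reach the bulk action (the resource and action here are placeholders):

```elixir
inputs
# limit the inputs themselves, not the lazily-evaluated result stream
|> Enum.take(150)
|> Ash.bulk_create(Ticket, :open, return_stream?: true, return_records?: true)
|> Enum.to_list()
```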
## Upserts
@@ -100,11 +104,12 @@ Ash.create!(changeset, upsert?: true, upsert_identity: :unique_email)
```
> ### Upserts do not use an update action {: .warning}
>
> While an upsert is conceptually a "create or update" operation, it does not result in an update action being called. The data layer contains the upsert implementation. This means that if you have things like global changes that are only run on update, they will not be run on upserts that result in an update. Additionally, notifications for updates will not be emitted from upserts.
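Upserts can also be configured on the action itself rather than at the call site. A sketch, reusing the `:unique_email` identity from the example above (the action name and accepted attributes are illustrative):

```elixir
create :sign_up do
  accept [:email, :name]

  # every call to this action behaves as an upsert
  upsert? true
  upsert_identity :unique_email
end
```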
### Atomic Updates
-Upserts support atomic updates. These atomic updates *do not apply to the data being created*. They are only applied in the case of an update. For example:
+Upserts support atomic updates. These atomic updates _do not apply to the data being created_. They are only applied in the case of an update. For example:
```elixir
create :create_game do
@@ -123,7 +128,7 @@ For information on options when calling the action, see `Ash.create/2`.
## What happens when you run a create Action
-When All actions are run in a transaction if the data layer supports it. You can opt out of this behavior by supplying `transaction?: false` when creating the action. When an action is being run in a transaction, all steps inside of it are serialized because transactions cannot be split across processes.
+All actions are run in a transaction if the data layer supports it. You can opt out of this behavior by supplying `transaction?: false` when creating the action. When an action is being run in a transaction, all steps inside of it are serialized because transactions cannot be split across processes.
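For instance, a sketch of opting a single action out of the transaction (action name and attribute are illustrative):

```elixir
create :open do
  accept [:title]

  # don't wrap this action's steps in a data layer transaction
  transaction? false
end
```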
- Authorization is performed on the changes
- A before action hook is added to set up belongs_to relationships that are managed. This means potentially creating/modifying the destination of the relationship, and then changing the `destination_attribute` of the relationship.

@@ -37,6 +37,7 @@ Ash.destroy!(ticket, return_destroyed?: true)
```
> ### Loading on destroyed records {: .warning}
>
> Keep in mind that using `Ash.load` on destroyed data will produce mixed results. Relationships may appear as empty, or may be loaded as expected (depending on the data layer/relationship implementation), and calculations/aggregates may show as `nil` if they must be run in the data layer.
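A sketch of what that can look like (the relationship and aggregate names are illustrative, and the exact results depend on your data layer):

```elixir
ticket = Ash.destroy!(ticket, return_destroyed?: true)

# the relationship may come back empty, and the aggregate may be nil,
# because the underlying row no longer exists
Ash.load!(ticket, [:representative, :comment_count])
```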
## Bulk Destroys
@@ -45,7 +46,7 @@ There are three strategies for bulk destroying data. They are, in order of prefe
## Atomic
-Atomic bulk updates are used when the subject of the bulk update is a query and the data layer supports updating a query. They map to a single statement to the data layer to destroy all matching records. The data layer must support updating a query.
+Atomic bulk destroys are used when the subject of the bulk destroy is a query and the data layer supports destroying a query. They map to a single statement to the data layer to destroy all matching records.
### Example
@@ -64,30 +65,29 @@ WHERE status = 'open';
## Atomic Batches
-Atomic batches is used when the subject of the bulk destroy is an enumerable (i.e list or stream) of records and the data layer supports destroying a query. The records are pulled out in batches, and then each batch follows the logic described [above](#atomic). The batch size is controllable by the `batch_size` option.
+Atomic batches are used when the subject of the bulk destroy is an enumerable (i.e. list or stream) of records and the data layer supports destroying a query. The records are pulled out in batches, and then each batch follows the logic described [above](#atomic). The batch size is controllable by the `batch_size` option.
### Example
```elixir
-Ash.bulk_update!(one_hundred_tickets, :close, %{reason: "Closing all open tickets."}, batch_size: 10)
+Ash.bulk_destroy!(one_hundred_tickets, :close, %{}, batch_size: 10)
```
If using a SQL data layer, this would produce ten queries along the lines of
```sql
-UPDATE tickets
-SET status = 'closed',
-    reason = 'Closing all open tickets.'
+DELETE FROM tickets
WHERE id IN (...ids)
```
## Stream
-Stream is used when the data layer does not support updating a query. If a query is given, it is run and the records are used as an enumerable of inputs. If an enumerable of inputs is given, each one is destroyed individually. There is nothing inherently wrong with doing this kind of destroy, but it will naturally be slower than the other two strategies.
+Stream is used when the data layer does not support destroying a query. If a query is given, it is run and the records are used as an enumerable of inputs. If an enumerable of inputs is given, each one is destroyed individually. There is nothing inherently wrong with doing this kind of destroy, but it will naturally be slower than the other two strategies.
The benefit of having a single interface (`Ash.bulk_destroy/4`) is that the caller doesn't need to change based on the performance implications of the action.
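For example, a sketch of constraining which strategies may be used via the `strategy` option:

```elixir
Ash.bulk_destroy!(
  one_hundred_tickets,
  :close,
  %{},
  batch_size: 10,
  # prefer atomic batches, but fall back to streaming if the
  # data layer cannot destroy a query
  strategy: [:atomic_batches, :stream]
)
```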
> ### Check the docs! {: .warning}
>
> Make sure to thoroughly read and understand the documentation in `Ash.bulk_destroy/4` before using. Read each option and note the default values. By default, bulk destroys don't return records or errors, and don't emit notifications.
### Destroying records

@@ -40,6 +40,7 @@ end
Changing attributes in this way makes them safer to use in concurrent environments, and is typically more performant than doing it manually in memory.
> ### atomics are not stored with other changes {: .warning}
>
> While we recommend using atomics wherever possible, it is important to note that they are stored in their own map in the changeset, i.e. `changeset.atomics`, meaning that if you need the new value of an attribute later in the action, you won't be able to access it. This is because atomics are evaluated in the data layer.
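A sketch of where an atomic lands on the changeset (assuming `Ash.Expr` is imported for `expr/1`, and a `ticket` record with a `:score` attribute):

```elixir
import Ash.Expr

changeset =
  ticket
  |> Ash.Changeset.for_update(:update, %{})
  |> Ash.Changeset.atomic_update(:score, expr(score + 1))

# the pending expression is stored separately, in changeset.atomics...
changeset.atomics

# ...so the attribute itself still holds the old value; the new value
# is only computed by the data layer
Ash.Changeset.get_attribute(changeset, :score)
```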
## Fully Atomic updates
@@ -47,6 +48,7 @@ Changing attributes in this way makes them safer to use in concurrent environmen
Atomic updates are a special case of update actions that can be done atomically. If your update action can't be done atomically, you will get an error unless you have set `require_atomic? false`. This is to encourage you to opt for atomic updates wherever reasonable. Not all actions can reasonably be made atomic, and not all non-atomic actions are problematic for concurrency. The goal is only to make sure that you are aware and have considered the implications.
> ### What does atomic mean? {: .info}
>
> An atomic update is one that can be done in a single operation in the data layer. This ensures that there are no issues with concurrent access to the record being updated, and that it is as performant as possible.
> For example, the following action cannot be done atomically, because it has
> an anonymous function change on it.
@@ -54,14 +56,14 @@ Atomic updates are a special case of update actions that can be done atomically.
> ```elixir
> update :increment_score do
>   change fn changeset, _ ->
->     Ash.Changeset.set_attribute(changeset, :score, changeset.data.score)
+>     Ash.Changeset.set_attribute(changeset, :score, changeset.data.score + 1)
>   end
> end
> ```
>
> The action shown above is not safe to run concurrently. If two separate processes fetch the record with score `1`, and then call `increment_score`, they will both set the score to `2`, when what you almost certainly intended to do was end up with a score of `3`.
>
-> By contrast, the following action *can* be done atomically
+> By contrast, the following action _can_ be done atomically
>
> ```elixir
> update :increment_score do
@@ -114,7 +116,7 @@ end
## Bulk updates
-There are three strategies for bulk updating data. They are, in order of preference: `:atomic`, `:atomic_batches`, and `:stream`. When calling `Ash.bulk_update/4`, you can provide a strategy or strategies that can be used, and Ash will choose the best one available. The implementation of the udpate action and the capabilities of the data layer determine what strategies can be used.
+There are three strategies for bulk updating data. They are, in order of preference: `:atomic`, `:atomic_batches`, and `:stream`. When calling `Ash.bulk_update/4`, you can provide a strategy or strategies that can be used, and Ash will choose the best one available. The implementation of the update action and the capabilities of the data layer determine what strategies can be used.
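A sketch of a call that states which strategies are acceptable (query built with `Ash.Query.filter/2`; the resource and action names are from earlier examples):

```elixir
require Ash.Query

Ticket
|> Ash.Query.filter(status == :open)
|> Ash.bulk_update!(:close, %{reason: "Closing all open tickets."},
  strategy: [:atomic, :atomic_batches, :stream]
)
```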
## Atomic

@@ -2,7 +2,7 @@
Policies determine what actions on a resource are permitted for a given actor, and can also filter the results of read actions to restrict the results to only records that should be visible.
-To restrict access to specific fields (attributes, aggregates, calculations), the section on field policies.
+To restrict access to specific fields (attributes, aggregates, calculations), see the section on field policies.
Read and understand the [Actors & Authorization guide](/documentation/topics/security/actors-and-authorization.md) before proceeding, which explains actors, how to set them, and other relevant configurations.
@@ -98,7 +98,7 @@ This will be covered in greater detail in [Checks](#checks), but will be briefly
Ash provides two basic types of policy checks - _simple_ checks and _filter_ checks. Simple checks are what we commonly think of with authorization, and what the above example would suggest - is an actor allowed to perform a given operation, yes or no? But we can also use filter checks - given a list of resources, which ones is an actor allowed to perform the operation on?
-Filter checks are frequently used with read actions, as they can refer to multiple instances (eg. "list all products"), but may also be applied to actions like bulk-deleting records (which is not currently supported, but will be eventually).
+Filter checks are applied to all read actions, including those generated for bulk updates and destroys.
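For example, a sketch of a filter check inside a policy, assuming records carry an `owner_id`:

```elixir
policies do
  policy action_type(:read) do
    # a filter check: reads (including the queries behind bulk
    # updates/destroys) are narrowed to records the actor owns
    authorize_if expr(owner_id == ^actor(:id))
  end
end
```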
### Bypass policies
@@ -163,7 +163,9 @@ For the example from earlier:
As mentioned earlier, there are two distinct types of checks - _simple_ checks and _filter_ checks. So far we've seen examples of both - let's look in a bit more detail.
-(Both simple and filter checks are a subset of a third type of check - a _manual_ check - but you will almost always want to write simple or filter checks.)
+> #### Manual Checks {: .neutral}
+>
+> Both simple and filter checks are a subset of a third type of check - a _manual_ check - but you will almost always want to write simple or filter checks.
#### Simple checks
@@ -188,7 +190,7 @@ defmodule MyApp.Checks.ActorIsOldEnough do
age >= 21
end
-def match?(_, _, _), do: true
+def match?(_, _, _), do: false
end
```
@@ -330,7 +332,7 @@ field_policies do
end
```
-If *any* field policies exist then *all* fields must be authorized by a field policy.
+If _any_ field policies exist then _all_ fields must be authorized by a field policy.
If you want a "deny-list" style, then you can add policies for specific fields,
and add a catch-all policy using the special field name `:*`. All policies that apply
to a field must be authorized.
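A sketch of that deny-list style (the field name and actor attribute are illustrative):

```elixir
field_policies do
  # explicitly protect one sensitive field
  field_policy :email do
    authorize_if actor_attribute_equals(:admin?, true)
  end

  # catch-all permitting every other field
  field_policy :* do
    authorize_if always()
  end
end
```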