bare bones spec for MLTensor
a-sully committed Nov 15, 2024
1 parent 9a11f60 commit 38d5744
Showing 1 changed file with 329 additions and 1 deletion.
330 changes: 329 additions & 1 deletion index.bs
@@ -861,6 +861,7 @@ The {{MLContext}} interface represents a global state of neural network compute

<script type=idl>
typedef record<USVString, ArrayBufferView> MLNamedArrayBufferViews;
typedef record<USVString, MLTensor> MLNamedTensors;

dictionary MLComputeResult {
MLNamedArrayBufferViews inputs;
@@ -871,7 +872,15 @@ dictionary MLComputeResult {
interface MLContext {
Promise<MLComputeResult> compute(
MLGraph graph, MLNamedArrayBufferViews inputs, MLNamedArrayBufferViews outputs);

undefined dispatch(MLGraph graph, MLNamedTensors inputs, MLNamedTensors outputs);

Promise<MLTensor> createTensor(MLTensorDescriptor descriptor);

Promise<ArrayBuffer> readTensor(MLTensor tensor);
Promise<undefined> readTensor(MLTensor tensor, AllowSharedBufferSource outputData);

undefined writeTensor(MLTensor tensor, AllowSharedBufferSource inputData);

MLOpSupportLimits opSupportLimits();
};
</script>
@@ -888,6 +897,11 @@ interface MLContext {
: <dfn>\[[powerPreference]]</dfn> of type {{MLPowerPreference}}.
::
The {{MLContext}}'s {{MLPowerPreference}}.
: <dfn>\[[timeline]]</dfn>
::
A timeline associated with the execution of operations on the compute units of the {{MLContext}}. These operations include inferencing on [=computational graphs=] and modifying the {{MLTensor/[[data]]}} of {{MLTensor}}s.

Issue(520): More rigorously define this timeline.
</dl>
</div>

@@ -919,6 +933,17 @@ When the {{MLContext/[[contextType]]}} is set to [=context type/default=] with t
1. If |bufferView|.\[[ByteLength]] is not equal to |descriptor|'s [=MLOperandDescriptor/byte length=], return false.
</details>

<details open algorithm>
<summary>
To <dfn>validate tensors with descriptors</dfn> given an {{MLNamedTensors}} |namedTensors| and a [=record=]&lt;{{USVString}}, {{MLOperandDescriptor}}&gt; |namedDescriptors|:
</summary>
1. If |namedTensors|'s [=map/size=] is not equal to |namedDescriptors|'s [=map/size=], then return false.
1. [=map/For each=] |name| → |tensor| of |namedTensors|:
1. If |namedDescriptors|[|name|] does not [=map/exist=], then return false.
1. If |tensor|'s {{MLTensor/[[descriptor]]}} is not equal to |namedDescriptors|[|name|], then return false.
1. Return true.
</details>
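The algorithm above can be sketched in plain JavaScript. This is an illustrative model only: tensors and descriptors are modeled as plain objects, and `descriptorsEqual` is an assumed helper standing in for comparing {{MLOperandDescriptor}}s; real implementations compare internal slots, not script-visible properties.

```javascript
// Illustrative sketch of "validate tensors with descriptors".
function descriptorsEqual(a, b) {
  return a.dataType === b.dataType &&
         a.shape.length === b.shape.length &&
         a.shape.every((dim, i) => dim === b.shape[i]);
}

function validateTensorsWithDescriptors(namedTensors, namedDescriptors) {
  const tensorNames = Object.keys(namedTensors);
  // 1. The two maps must have the same size.
  if (tensorNames.length !== Object.keys(namedDescriptors).length) return false;
  // 2. Every tensor must match the descriptor registered under its name.
  for (const name of tensorNames) {
    const descriptor = namedDescriptors[name];
    if (descriptor === undefined) return false;
    if (!descriptorsEqual(namedTensors[name].descriptor, descriptor)) return false;
  }
  return true;
}
```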

<details open algorithm>
<summary>
To <dfn>execute graph</dfn>, given {{MLGraph}} |graph|, {{MLNamedArrayBufferViews}} |inputs| and {{MLNamedArrayBufferViews}} |outputs|, run the following steps. They return {{undefined}}, or an error.
Expand Down Expand Up @@ -1049,6 +1074,200 @@ Note: Invocations of {{MLContext/compute()}} will fail if any of the {{MLContext
</details>
</div>

### {{MLContext/dispatch()}} ### {#api-mlcontext-dispatch}

Schedules the computational workload of a compiled {{MLGraph}} on the {{MLContext}}'s {{MLContext/[[timeline]]}}.

<div dfn-for="MLContext/dispatch(graph, inputs, outputs)" dfn-type=argument>
**Arguments:**
- <dfn>graph</dfn>: an {{MLGraph}}. The computational graph to be executed.
- <dfn>inputs</dfn>: an {{MLNamedTensors}}. The inputs to the computational graph.
- <dfn>outputs</dfn>: an {{MLNamedTensors}}. The outputs of the computational graph.

**Returns:** {{undefined}}.
</div>

<details open algorithm>
<summary>
The <dfn method for=MLContext>dispatch(|graph|, |inputs|, |outputs|)</dfn> method steps are:
</summary>
1. Let |allTensors| be a [=/list=] of {{MLTensor}}s consisting of |inputs|'s [=map/values=] [=list/extended=] by |outputs|'s [=map/values=].
1. If |allTensors| contains any duplicate [=list/items=], then [=exception/throw=] a {{TypeError}}.
1. [=list/For each=] |tensor| of |allTensors|:
1. If |tensor|'s {{MLTensor/[[context]]}} is not [=this=], then [=exception/throw=] a {{TypeError}}.
1. If |tensor|'s {{MLTensor/[[isDestroyed]]}} is true, then [=exception/throw=] a {{TypeError}}.
1. If [=validating tensors with descriptors=] given |inputs| and |graph|'s {{MLGraph/[[inputDescriptors]]}} returns false, then [=exception/throw=] a {{TypeError}}.
1. If [=validating tensors with descriptors=] given |outputs| and |graph|'s {{MLGraph/[[outputDescriptors]]}} returns false, then [=exception/throw=] a {{TypeError}}.
1. Enqueue the following steps to |graph|'s {{MLGraph/[[context]]}}'s {{MLContext/[[timeline]]}}:
1. Issue a compute request to |graph|'s {{MLGraph/[[implementation]]}} given |inputs| and |outputs|.

Issue(778): Add a mechanism for reporting errors during graph execution.

1. Return {{undefined}}.
</details>
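The synchronous validation that {{MLContext/dispatch()}} performs before enqueueing any work onto the timeline can be sketched as follows. This is an illustrative model: tensors are plain objects with `context` and `isDestroyed` properties, and the function name is assumed.

```javascript
// Illustrative: dispatch()'s up-front validation steps.
function validateDispatch(context, inputs, outputs) {
  const allTensors = [...Object.values(inputs), ...Object.values(outputs)];
  // A tensor may not appear more than once across inputs and outputs.
  if (new Set(allTensors).size !== allTensors.length) {
    throw new TypeError('duplicate tensor');
  }
  for (const tensor of allTensors) {
    // Every tensor must belong to this context and still be alive.
    if (tensor.context !== context) throw new TypeError('tensor from another context');
    if (tensor.isDestroyed) throw new TypeError('tensor is destroyed');
  }
}
```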

#### Examples #### {#api-mlcontext-dispatch-examples}
<div class="example">
<details open>
<summary>
The following code showcases executing an {{MLGraph}} using {{MLTensor}}s.
</summary>
<pre highlight="js">
const descriptor = {dataType: 'float32', shape: [2, 2]};
const context = await navigator.ml.createContext();
const builder = new MLGraphBuilder(context);

// 1. Create a computational graph 'C = 0.2 * A + B'.
const constant = builder.constant(descriptor, new Float32Array(4).fill(0.2));
const A = builder.input('A', descriptor);
const B = builder.input('B', descriptor);
const C = builder.add(builder.mul(A, constant), B);

// 2. Compile the graph.
const graph = await builder.build({'C': C});

// 3. Create reusable input and output tensors.
const [inputTensorA, inputTensorB, outputTensorC] =
await Promise.all([
context.createTensor({
dataType: 'float32', shape: [2, 2], writable: true
}),
context.createTensor({
dataType: 'float32', shape: [2, 2], writable: true
}),
context.createTensor({
dataType: 'float32', shape: [2, 2], readable: true
})
]);

// 4. Initialize the inputs.
context.writeTensor(inputTensorA, new Float32Array(4).fill(1.0));
context.writeTensor(inputTensorB, new Float32Array(4).fill(0.8));

// 5. Execute the graph.
const inputs = {
'A': inputTensorA,
'B': inputTensorB
};
const outputs = {
'C': outputTensorC
};
context.dispatch(graph, inputs, outputs);

// 6. Read back the computed result.
const result = await context.readTensor(outputTensorC);
console.log('Output value:', new Float32Array(result)); // [1, 1, 1, 1]
</pre>
</details>
</div>

### {{MLContext/createTensor()}} ### {#api-mlcontext-createtensor}

Creates an {{MLTensor}} associated with this {{MLContext}}.

<div dfn-for="MLContext/createTensor(descriptor)" dfn-type=argument>
**Arguments:**
- <dfn>descriptor</dfn>: an {{MLTensorDescriptor}}.

**Returns:** {{Promise}}<{{MLTensor}}>.
</div>

<details open algorithm>
<summary>
The <dfn method for=MLContext>createTensor(|descriptor|)</dfn> method steps are:
</summary>
1. Let |global| be [=this=]'s [=relevant global object=].
1. Let |tensor| be the result of [=creating an MLTensor=] given [=this=] and |descriptor|.
1. Let |promise| be [=a new promise=].
1. Enqueue the following steps to [=this=]'s {{MLContext/[[timeline]]}}:
1. Create |tensor|'s {{MLTensor/[[data]]}} given |descriptor| and initialize all bytes to zeros.
1. If that fails, then [=queue an ML task=] with |global| to [=reject=] |promise| with an "{{UnknownError}}" {{DOMException}}, and abort these steps.
1. Otherwise, [=queue an ML task=] with |global| to [=resolve=] |promise| with |tensor|.
1. Return |promise|.
</details>
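The promise-based flow above can be modeled with an in-memory stand-in for the context timeline. Everything here is an illustrative assumption, not part of the API: the `MockContext` class, the byte-size table, and the use of a microtask queue to play the role of {{MLContext/[[timeline]]}}.

```javascript
// Illustrative: createTensor() allocates a zero-filled backing store
// on the context "timeline" and resolves the promise afterwards.
const DATA_TYPE_BYTES = {float32: 4, float16: 2, int32: 4, uint8: 1};

class MockContext {
  createTensor(descriptor) {
    const tensor = {context: this, descriptor, isDestroyed: false};
    return new Promise((resolve, reject) => {
      // "Enqueue the following steps to this's [[timeline]]".
      queueMicrotask(() => {
        try {
          const byteLength = descriptor.shape.reduce(
              (n, dim) => n * dim, DATA_TYPE_BYTES[descriptor.dataType]);
          // Create [[data]], initializing all bytes to zero.
          tensor.data = new Uint8Array(byteLength);
          resolve(tensor);
        } catch (e) {
          // Allocation failure surfaces as a rejected promise.
          reject(new Error('UnknownError'));
        }
      });
    });
  }
}
```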

### {{MLContext/readTensor()}} ### {#api-mlcontext-readtensor}

Reads back the {{MLTensor/[[data]]}} of an {{MLTensor}} from the {{MLContext}}'s {{MLContext/[[timeline]]}} to script.

<div dfn-for="MLContext/readTensor(tensor)" dfn-type=argument>
**Arguments:**
- <dfn>tensor</dfn>: an {{MLTensor}}. The tensor to be read.

**Returns:** {{Promise}}<{{ArrayBuffer}}>. A buffer containing the result of the read.
</div>

<div dfn-for="MLContext/readTensor(tensor, outputData)" dfn-type=argument>
**Arguments:**
- <dfn>tensor</dfn>: an {{MLTensor}}. The tensor to be read.
- <dfn>outputData</dfn>: an {{AllowSharedBufferSource}}. The buffer to read the result into.

**Returns:** {{Promise}}<{{undefined}}>.
</div>

<details open algorithm>
<summary>
The <dfn method for=MLContext>readTensor(|tensor|)</dfn> method steps are:
</summary>
1. Let |global| be [=this=]'s [=relevant global object=].
1. Let |realm| be [=this=]'s [=relevant realm=].
1. If |tensor|'s {{MLTensor/[[context]]}} is not [=this=], then return [=a new promise=] [=rejected=] with a {{TypeError}}.
1. If |tensor|'s {{MLTensor/[[isDestroyed]]}} is true, then return [=a new promise=] [=rejected=] with a {{TypeError}}.
1. If |tensor| is not [=MLTensor/readable=], then return [=a new promise=] [=rejected=] with a {{TypeError}}.
1. Let |promise| be [=a new promise=].
1. Enqueue the following steps to |tensor|'s {{MLTensor/[[context]]}}'s {{MLContext/[[timeline]]}}:
1. Let |bytes| be a [=/byte sequence=] containing a copy of |tensor|'s {{MLTensor/[[data]]}}.
1. [=Queue an ML task=] with |global| to [=ArrayBuffer/create=] an {{ArrayBuffer}} |result| given |bytes| and |realm| and then [=resolve=] |promise| with |result|.
1. Return |promise|.
</details>

<details open algorithm>
<summary>
The <dfn method for=MLContext>readTensor(|tensor|, |outputData|)</dfn> method steps are:
</summary>
1. Let |global| be [=this=]'s [=relevant global object=].
1. If |tensor|'s {{MLTensor/[[context]]}} is not [=this=], then return [=a new promise=] [=rejected=] with a {{TypeError}}.
1. If |tensor|'s {{MLTensor/[[isDestroyed]]}} is true, then return [=a new promise=] [=rejected=] with a {{TypeError}}.
1. If |tensor| is not [=MLTensor/readable=], then return [=a new promise=] [=rejected=] with a {{TypeError}}.
1. If [=validating buffer with descriptor=] given |outputData| and |tensor|'s {{MLTensor/[[descriptor]]}} returns false, then return [=a new promise=] [=rejected=] with a {{TypeError}}.
1. Let |promise| be [=a new promise=].
1. Enqueue the following steps to |tensor|'s {{MLTensor/[[context]]}}'s {{MLContext/[[timeline]]}}:
1. Let |bytes| be a [=/byte sequence=] containing a copy of |tensor|'s {{MLTensor/[[data]]}}.
1. [=Queue an ML task=] with |global| to [=ArrayBuffer/write=] |bytes| to |outputData| and then [=resolve=] |promise| with {{undefined}}.
1. Return |promise|.
</details>


### {{MLContext/writeTensor()}} ### {#api-mlcontext-writetensor}

Writes data to the {{MLTensor/[[data]]}} of an {{MLTensor}} on the {{MLContext}}'s {{MLContext/[[timeline]]}}.

<div dfn-for="MLContext/writeTensor(tensor, inputData)" dfn-type=argument>
**Arguments:**
- <dfn>tensor</dfn>: an {{MLTensor}}. The tensor to be written to.
- <dfn>inputData</dfn>: an {{AllowSharedBufferSource}}. The buffer whose bytes will be written into the tensor.

**Returns:** {{undefined}}.
</div>

<details open algorithm>
<summary>
The <dfn method for=MLContext>writeTensor(|tensor|, |inputData|)</dfn> method steps are:
</summary>
1. If |tensor|'s {{MLTensor/[[context]]}} is not [=this=], then [=exception/throw=] a {{TypeError}}.
1. If |tensor|'s {{MLTensor/[[isDestroyed]]}} is true, then [=exception/throw=] a {{TypeError}}.
1. If |tensor| is not [=MLTensor/writable=], then [=exception/throw=] a {{TypeError}}.
1. If [=validating buffer with descriptor=] given |inputData| and |tensor|'s {{MLTensor/[[descriptor]]}} returns false, then [=exception/throw=] a {{TypeError}}.
1. Let |bytes| be the result of [=getting a copy of the bytes held by the buffer source=] given |inputData|.
1. [=Assert=]: |bytes|'s [=byte sequence/length=] is equal to |tensor|'s {{MLTensor/[[descriptor]]}}'s [=MLOperandDescriptor/byte length=].
1. Enqueue the following steps to |tensor|'s {{MLTensor/[[context]]}}'s {{MLContext/[[timeline]]}}:
1. Copy |bytes| to |tensor|'s {{MLTensor/[[data]]}}.

Issue(778): Add a mechanism for reporting errors while writing to a tensor.

1. Return {{undefined}}.
</details>
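Both {{MLContext/writeTensor()}} and the {{MLContext/readTensor()}} overload that takes a buffer hinge on the buffer's byte length matching the tensor descriptor's [=MLOperandDescriptor/byte length=]. A sketch of that check follows; the `BYTES_PER_ELEMENT` table and function names are illustrative assumptions, not part of the API.

```javascript
// Illustrative: a descriptor's byte length is the product of its shape
// dimensions times the element size of its data type.
const BYTES_PER_ELEMENT = {
  float32: 4, float16: 2, int64: 8, int32: 4, uint32: 4, int8: 1, uint8: 1,
};

function descriptorByteLength({dataType, shape}) {
  return shape.reduce((n, dim) => n * dim, BYTES_PER_ELEMENT[dataType]);
}

// Mirrors "validate buffer with descriptor": the buffer source's byte
// length must equal the descriptor's byte length.
function validateBufferWithDescriptor(bufferSource, descriptor) {
  return bufferSource.byteLength === descriptorByteLength(descriptor);
}
```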

### {{MLContext/opSupportLimits()}} ### {#api-mlcontext-opsupportlimits}
The {{MLContext/opSupportLimits()}} method exposes the level of support for each operator, which differs across implementations. Consumers of the WebNN API are encouraged to probe this feature support using {{MLContext/opSupportLimits()}} to determine the optimal model architecture to deploy for each target platform.

@@ -1322,6 +1541,115 @@ To <dfn for="MLGraphBuilder">validate operand</dfn> given {{MLGraphBuilder}} |bu

Issue(whatwg/webidl#1388): Support for unions of {{bigint}} and [=numeric types=] is new in [[WEBIDL]], and implementation support is also limited. Prototype implementations are encouraged to provide feedback for this approach.

## {{MLTensorDescriptor}} dictionary ## {#api-mltensordescriptor}

An {{MLTensorDescriptor}} describes the characteristics and capabilities of an {{MLTensor}}.

<script type=idl>
dictionary MLTensorDescriptor : MLOperandDescriptor {
boolean readable = false;
boolean writable = false;
};
</script>

<dl dfn-type=dict-member dfn-for=MLTensorDescriptor>
: <dfn>readable</dfn>
:: Whether the tensor's contents can be read via {{MLContext/readTensor()}}.

: <dfn>writable</dfn>
:: Whether the tensor's contents can be written to via {{MLContext/writeTensor()}}.
</dl>
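For example, a tensor used as a graph input typically needs only {{MLTensorDescriptor/writable}}, while a tensor used for readback needs {{MLTensorDescriptor/readable}}. A sketch, noting that both flags default to false:

```javascript
// A graph input: written from script, never read back.
const inputDescriptor = {dataType: 'float32', shape: [2, 2], writable: true};
// A graph output: read back to script after dispatch.
const outputDescriptor = {dataType: 'float32', shape: [2, 2], readable: true};
// Omitted flags default to false: this tensor can be neither read nor
// written from script, only consumed and produced on the context timeline.
const scratchDescriptor = {dataType: 'float32', shape: [2, 2]};
```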

## {{MLTensor}} interface ## {#api-mltensor}

The {{MLTensor}} interface represents a tensor which may be used as an input or output to an {{MLGraph}}. The memory backing an {{MLTensor}} should be allocated in an [=implementation-defined=] fashion according to the requirements of the {{MLContext}} and the {{MLTensorDescriptor}} used to create it. Operations involving the {{MLTensor/[[data]]}} of an {{MLTensor}} occur on the {{MLContext/[[timeline]]}} of its associated {{MLContext}}.

Note: The [=implementation-defined=] requirements of how an {{MLTensor}} is allocated may include constraints such as that the memory is allocated with a particular byte alignment or in a particular memory pool.

<script type=idl>
[SecureContext, Exposed=(Window, DedicatedWorker)]
interface MLTensor {
readonly attribute MLOperandDataType dataType;
readonly attribute FrozenArray<unsigned long> shape;
readonly attribute boolean readable;
readonly attribute boolean writable;

undefined destroy();
};
</script>

<div class=internal-slots>
{{MLTensor}} has the following internal slots:
<dl dfn-type=attribute dfn-for="MLTensor">
: <dfn>\[[context]]</dfn> of type {{MLContext}}
::
The {{MLTensor}}'s associated context.

: <dfn>\[[descriptor]]</dfn> of type {{MLTensorDescriptor}}
::
The {{MLTensor}}'s descriptor.

: <dfn>\[[isDestroyed]]</dfn> of type {{boolean}}
::
Whether {{MLTensor}}.{{MLTensor/destroy()}} has been called. Once destroyed, the {{MLTensor}} can no longer be used.

: <dfn>\[[data]]</dfn> of an [=implementation-defined=] type
::
The bytes backing the {{MLTensor}}. This data may only be modified from the {{MLTensor/[[context]]}}'s {{MLContext/[[timeline]]}}.
</dl>
</div>

An {{MLTensor}}'s <dfn for=MLTensor>dataType</dfn> is its {{MLTensor/[[descriptor]]}}'s {{MLOperandDescriptor/dataType}}.

An {{MLTensor}}'s <dfn for=MLTensor>shape</dfn> is its {{MLTensor/[[descriptor]]}}'s {{MLOperandDescriptor/shape}}.

An {{MLTensor}} is <dfn for=MLTensor>readable</dfn> if its {{MLTensor/[[descriptor]]}}'s {{MLTensorDescriptor/readable}} is true.

An {{MLTensor}} is <dfn for=MLTensor>writable</dfn> if its {{MLTensor/[[descriptor]]}}'s {{MLTensorDescriptor/writable}} is true.

The <dfn attribute for=MLTensor>dataType</dfn> [=getter steps=] are to return [=this=]'s [=MLTensor/dataType=].

The <dfn attribute for=MLTensor>shape</dfn> [=getter steps=] are to return [=this=]'s [=MLTensor/shape=].

The <dfn attribute for=MLTensor>readable</dfn> [=getter steps=] are to return [=this=]'s [=MLTensor/readable=].

The <dfn attribute for=MLTensor>writable</dfn> [=getter steps=] are to return [=this=]'s [=MLTensor/writable=].

### Creating an {{MLTensor}} ### {#api-mltensor-create}

An {{MLTensor}} is created by its associated {{MLContext}}. Note that creating an {{MLTensor}} does not include initializing its {{MLTensor/[[data]]}}, which should be initialized soon afterwards.

<details open algorithm>
<summary>
To <dfn>create an MLTensor</dfn> given {{MLContext}} |context| and {{MLTensorDescriptor}} |descriptor|, run the following steps:
</summary>
1. Let |tensor| be a new {{MLTensor}}.
1. Set |tensor|'s {{MLTensor/[[context]]}} to |context|.
1. Set |tensor|'s {{MLTensor/[[descriptor]]}} to |descriptor|.
1. Set |tensor|'s {{MLTensor/[[isDestroyed]]}} to false.
1. Return |tensor|.
</details>

### {{MLTensor/destroy()}} ### {#api-mltensor-destroy}

Destroys the {{MLTensor}}. This method is idempotent.

<div dfn-for="MLTensor/destroy()" dfn-type=argument>
**Returns:** {{undefined}}.
</div>

<details open algorithm>
<summary>
The <dfn method for=MLTensor>destroy()</dfn> method steps are:
</summary>
1. Set [=this=]'s {{MLTensor/[[isDestroyed]]}} to true.
1. Enqueue the following steps to [=this=]'s {{MLTensor/[[context]]}}'s {{MLContext/[[timeline]]}}:
1. Free [=this=]'s {{MLTensor/[[data]]}}.
1. Return {{undefined}}.
</details>

Note: Since no further operations can be enqueued using this tensor, implementations can free resource allocations associated with this tensor.
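The idempotence of {{MLTensor/destroy()}} can be sketched with an in-memory model: the method only sets a flag and enqueues a free onto the timeline, so a repeated call has no further effect. The `MockTensor` class and microtask-based timeline below are illustrative assumptions.

```javascript
// Illustrative model of MLTensor.destroy().
class MockTensor {
  constructor(byteLength) {
    this.isDestroyed = false;
    this.data = new Uint8Array(byteLength); // stands in for [[data]]
  }
  destroy() {
    // Setting [[isDestroyed]] again is harmless, so repeated calls
    // are no-ops beyond the first.
    this.isDestroyed = true;
    // "Enqueue the following steps to [[context]]'s [[timeline]]":
    // free [[data]] (modeled as dropping the reference).
    queueMicrotask(() => { this.data = null; });
  }
}
```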

## {{MLGraphBuilder}} interface ## {#api-mlgraphbuilder}

The {{MLGraphBuilder}} interface defines a set of operations as identified by the [[#usecases]] that can be composed into a computational graph. It also represents the intermediate state of a graph building session.
