Commit

Merge branch 'kopflos' into cli

tpluscode committed Oct 17, 2024
2 parents d07ff2f + 2b2276f commit 35bd14a
Showing 17 changed files with 2,346 additions and 1,637 deletions.
2 changes: 2 additions & 0 deletions docs/apis/kopflos/explanations/_category_.yml
label: Explanations
position: 3
70 changes: 70 additions & 0 deletions docs/apis/kopflos/explanations/request-pipeline.md
---
title: Request pipeline
sidebar_position: 1
---

# Kopflos request pipeline

```
Incoming Request
          └─▶ Kopflos handler
4**/5** ◀─┴─▶ Resource Shape Lookup
4**/5** ◀─┴─▶ Resource Loader Lookup
4**/5** ◀─┴─▶ Load Resource
4**     ◀─┴─▶ Authorization
400     ◀─┴─▶ Validation
4**/5** ◀─┴─▶ (User handler)
          └─▶ Reply
```

## Incoming request

The incoming request is handled by the server library, such as Express or Fastify, and then forwarded to Kopflos.

## Kopflos handler

The Kopflos handler is the main entry point for all incoming requests. It is responsible for orchestrating the request pipeline.

## Resource Shape Lookup

The Resource Shape Lookup executes a SPARQL query against the `default` query endpoint to find the shape targeting the requested resource.

:::tip
See also: [How to Select which resources should be served by the API](../how-to/resource-shape.md)
:::
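For illustration only, the lookup can be thought of as asking the `default` endpoint which shapes target the requested resource. The snippet below is a sketch under that assumption — it only considers `sh:targetNode` targets and is not the exact query Kopflos issues:

```ts
// Illustrative sketch only — not the query Kopflos actually runs
const requested = 'https://example.org/person/1' // hypothetical request IRI

const shapeLookup = `
  PREFIX sh: <http://www.w3.org/ns/shacl#>
  PREFIX kopflos: <https://kopflos.described.at/>

  SELECT ?shape ?api WHERE {
    ?shape a kopflos:ResourceShape ;
           kopflos:api ?api ;
           sh:targetNode <${requested}> .
  }`
```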

## Resource Loader Lookup + Load Resource

When the Resource Shape is found, a resource loader is selected based on the `kopflos:resourceLoader` property, going bottom-up from the Resource/Property Shape to the shared `kopflos:Config` resource.

The selected loader is used to load the requested resource's Core Representation.

:::info
The Core Representation is the set of triples returned by the resource loader. Typically, that would be the result of a SPARQL `DESCRIBE` query or the contents of the resource's "own graph".
:::

:::warning
By default, a loader which returns the resource's own graph is used.
:::

## Authorization

Not implemented yet.

## Validation

Not implemented yet.

## User handler

Finally, the user handler is executed. If no handler is defined and the request method is GET, the resource's Core Representation is returned.

The result of the handler is forwarded back to the server library to be sent as a response.
34 changes: 34 additions & 0 deletions docs/apis/kopflos/how-to/load-api.md
# Load API description

A Kopflos API is composed of triples that describe the API's structure and behavior. These triples need to be loaded into the API's store before the API can be used.

## Load from named graphs

The simplest and recommended way to load the API is to use named graphs. A single API can be loaded from multiple named graphs, and there can be multiple APIs served from a single Kopflos instance. Use the code below to load the API from named graphs:

```js
import Kopflos from '@kopflos-cms/core'

let config
const api = new Kopflos(config)
await Kopflos.fromGraphs(api, 'http://example.com/api1', 'http://example.com/api2', 'http://example.com/shared')
```

:::tip
[RDF/JS NamedNode](https://rdf.js.org/data-model-spec/#namednode-interface) objects can be used as well.
:::

The given named graphs will be loaded from the default SPARQL endpoint.
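The `config` object above is left uninitialised for brevity. As a rough sketch — assuming the core accepts the same `sparql` options as the Express integration shown in [Integrate with web server libraries](./web-server-libraries.md) — it could look like this (consult the `@kopflos-cms/core` typings for the exact shape):

```ts
// Sketch only: assumes the core options mirror the Express integration's
// `sparql` settings; check @kopflos-cms/core's typings for the exact shape.
const config = {
  sparql: {
    default: 'http://localhost:3030/ds/query', // the default SPARQL endpoint
  },
}
```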

## Initialise with preloaded data

If you have the API triples in memory, you can initialise the API with them directly:

```typescript
import Kopflos from '@kopflos-cms/core'
import type { DatasetCore } from '@rdfjs/types'

let config
let dataset: DatasetCore

const api = new Kopflos(config, { dataset })
```
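One way to obtain such a dataset — for example, from a Turtle file on disk — is sketched below. It uses the third-party packages `n3` and `@rdfjs/dataset` (not part of Kopflos itself) and a hypothetical `api.ttl` file:

```ts
import { readFile } from 'node:fs/promises'
import { Parser } from 'n3'
import rdf from '@rdfjs/dataset'
import type { DatasetCore } from '@rdfjs/types'

// Parse a local Turtle file into an RDF/JS dataset
const turtle = await readFile('api.ttl', 'utf-8')
const quads = new Parser().parse(turtle)
const dataset: DatasetCore = rdf.dataset(quads)
```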
139 changes: 139 additions & 0 deletions docs/apis/kopflos/how-to/resource-loaders.md
# Configure resource loaders

As explained on the [Request pipeline page](../explanations/request-pipeline.md), early in the request pipeline, Kopflos selects a resource loader to load the requested resource's Core Representation.

Resource Loaders can be declared on `kopflos:ResourceShape`, `kopflos:Api`, and `kopflos:Config` resources.

## Predefined loaders

Kopflos recognizes two loaders out of the box. They are identified by their shorthand identifiers.

### Own graph loader

The "Own Graph" loader is the default.
It loads the named graph for the requested resource.
For example,
a request to `https://example.org/person/1` will load the entire named graph `https://example.org/person/1` as the Core Representation.

```turtle
PREFIX kopflos: <https://kopflos.described.at/>

<>
  # a kopflos:ResourceShape ; # or
  # a kopflos:Api ; # or
  # a kopflos:Config ;
  kopflos:resourceLoader kopflos:OwnGraphLoader ;
.
```

### `DESCRIBE` loader

The `DESCRIBE` loader is a built-in loader
that uses a SPARQL `DESCRIBE` query to load the requested resource.
The query is issued against the default SPARQL endpoint.

```turtle
PREFIX kopflos: <https://kopflos.described.at/>

<>
  # a kopflos:ResourceShape ; # or
  # a kopflos:Api ; # or
  # a kopflos:Config ;
  kopflos:resourceLoader kopflos:DescribeLoader ;
.
```

## Loader precedence

Kopflos selects a loader based on the `kopflos:resourceLoader` property,
going bottom-up from the Resource Shape to the shared `kopflos:Config` resource.

```turtle
PREFIX sh: <http://www.w3.org/ns/shacl#>
PREFIX kopflos: <https://kopflos.described.at/>
PREFIX ex: <http://example.org/>

ex:PersonShape
  a kopflos:ResourceShape ;
  kopflos:api ex:PeopleApi ;
  sh:targetNode ex:Person ;
  kopflos:resourceLoader ex:ResourceLoader1 ;
.

ex:AddressShape
  a kopflos:ResourceShape ;
  kopflos:api ex:PeopleApi ;
  sh:targetNode ex:Address ;
.

ex:PeopleApi
  a kopflos:Api ;
  kopflos:config ex:Config ;
  kopflos:resourceLoader ex:ResourceLoader2 ;
.

ex:ArticleShape
  a kopflos:ResourceShape ;
  kopflos:api ex:PublishingApi ;
  sh:targetNode ex:Article ;
.

ex:PublishingApi
  a kopflos:Api ;
  kopflos:config ex:Config ;
.

ex:Config
  a kopflos:Config ;
  kopflos:resourceLoader ex:ResourceLoader3 ;
.
```

The algorithm is straightforward. For any given matched resource shape, Kopflos will:
1. Check if the resource shape has a `kopflos:resourceLoader` property. If it does, Kopflos will use the loader specified in the property.
2. If the resource shape does not have a `kopflos:resourceLoader` property, Kopflos will look for the `kopflos:resourceLoader` property in the resource's API resource.
3. If the API resource does not have a `kopflos:resourceLoader` property, Kopflos will look for the `kopflos:resourceLoader` property in the linked `kopflos:Config` resource.

In the example above:
- If a requested resource matches the `ex:PersonShape`, Kopflos will use `ex:ResourceLoader1` from the resource shape itself.
- If a requested resource matches the `ex:AddressShape`, Kopflos will use `ex:ResourceLoader2` from the API resource.
- If a requested resource matches the `ex:ArticleShape`, Kopflos will use `ex:ResourceLoader3` from the Config resource.
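For illustration only, the precedence rules above can be expressed as a small function. This is not Kopflos's actual implementation, and all names are hypothetical; it merely spells out the bottom-up lookup:

```ts
// Hypothetical sketch of the bottom-up loader lookup — not the actual Kopflos code.
interface HasResourceLoader {
  resourceLoader?: string // value of kopflos:resourceLoader, if any
}

function selectLoader(
  shape: HasResourceLoader,  // the matched kopflos:ResourceShape
  api: HasResourceLoader,    // its kopflos:Api
  config: HasResourceLoader, // the shared kopflos:Config
): string | undefined {
  // 1. the Resource Shape's own loader wins,
  // 2. otherwise fall back to the API's loader,
  // 3. otherwise fall back to the shared Config's loader
  return shape.resourceLoader ?? api.resourceLoader ?? config.resourceLoader
}
```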

## Implementing and using custom Resource Loaders

A loader is a function that takes the requested URI
(in the form of a [NamedNode](https://rdf.js.org/data-model-spec/#namednode-interface))
and returns a Core Representation as an [RDF/JS Stream](https://rdf.js.org/stream-spec/#stream-interface).
It also takes a second argument: the instance of Kopflos that is processing the request.

For example, here is a loader which sends a `DESCRIBE` query to a Stardog database,
with a query hint to ensure that the Concise Bounded Description (CBD) is returned.

```ts
// src/resource-loaders/stardog-loaders.ts
import type { ResourceLoader } from '@kopflos-cms/core'

export const CBD: ResourceLoader = async (uri, kopflos) => {
  const query = `
    #pragma describe.strategy cbd
    DESCRIBE <${uri.value}>`

  return kopflos.env.sparql.default.stream.query.construct(query)
}
```

To use it as the default in an API, declare the loader in the API's configuration:

```turtle
PREFIX code: <https://cube.link/code#>
PREFIX kopflos: <https://kopflos.described.at/>
PREFIX ex: <http://example.org/>

ex:Config
  a kopflos:Config ;
  kopflos:resourceLoader
    [
      a code:EcmaScriptModule ;
      code:link <file:src/resource-loaders/stardog-loaders.js#CBD> ;
    ] ;
.
```
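As a further illustration — hypothetical, but reusing the same `kopflos.env.sparql` call as the `CBD` loader above — a custom loader could build the resource's own graph explicitly, mirroring what the built-in own-graph loader does:

```ts
// Hypothetical example following the same pattern as the CBD loader above
import type { ResourceLoader } from '@kopflos-cms/core'

export const ownGraph: ResourceLoader = async (uri, kopflos) => {
  // CONSTRUCT the contents of the resource's named graph
  const query = `
    CONSTRUCT { ?s ?p ?o }
    WHERE { GRAPH <${uri.value}> { ?s ?p ?o } }`

  return kopflos.env.sparql.default.stream.query.construct(query)
}
```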
25 changes: 25 additions & 0 deletions docs/apis/kopflos/how-to/web-server-libraries.md
# Integrate with web server libraries

All libraries integrated with Kopflos require the same configuration: at the very least, the SPARQL endpoint URL and the API graph URL(s).

## Express

```ts
import express from 'express'
import kopflos from '@kopflos-cms/express'

const app = express()

app.use(kopflos({
  sparql: {
    default: 'http://localhost:3030/ds/query',
  },
  apiGraphs: [
    'https://example.com/api',
  ]
}))

app.listen(3000, () => {
  console.log('Server is running on http://localhost:3000')
})
```
10 changes: 10 additions & 0 deletions docs/apps/apps.md
---
displayed_sidebar: appsSidebar
title: Data Centric Applications
sidebar_label: Introduction
sidebar_position: 0
---

A data-centric application is a software system primarily focused on managing, processing, and utilizing data.
In such applications, data is the central component, and the application is designed to organize, store, analyze, and present this data effectively.
The primary goal of a data-centric application is to ensure that data is easily accessible, accurate, and can be used to drive decision-making or automate processes.
2 changes: 2 additions & 0 deletions docs/apps/blueprint/_category_.yml
label: Zazuko Blueprint
position: 1
61 changes: 61 additions & 0 deletions docs/apps/blueprint/docker.md
---
displayed_sidebar: appsSidebar
title: Deploy Zazuko Blueprint using Docker
sidebar_label: Deploy using Docker
sidebar_position: 1
---

We provide a [container image](https://github.com/zazuko/blueprint/pkgs/container/blueprint) for Zazuko Blueprint, which is built automatically on every push to the [main branch](https://github.com/zazuko/blueprint).
We also tag releases, so you can use a specific version of Blueprint.

You can pull the latest version of the container image using the following command:

```sh
docker pull ghcr.io/zazuko/blueprint:latest
```

The container exposes the Blueprint instance on port 80.

When deploying the container in production, make sure to use a specific version of the container image instead of `latest`.

## Configuration

You will need to provide some configuration to the container, using environment variables.

The following environment variables are available:

| Variable | Description | Default |
| ------------------------ | ------------------------------------------- | ------------------------------------------- |
| ENDPOINT_URL | SPARQL endpoint URL | **required** |
| SPARQL_CONSOLE_URL | SPARQL console URL | http://example.com/sparql/#query |
| GRAPH_EXPLORER_URL | Graph Explorer URL | http://example.com/graph-explorer/?resource |
| FULL_TEXT_SEARCH_DIALECT | Full text search dialect | fuseki |
| NEPTUNE_FTS_ENDPOINT | OpenSearch endpoint for the Neptune dialect | http://example.com/ |

Currently, the supported full text search dialects are `stardog`, `fuseki` and `neptune`.

If you are using `neptune` as the full text search dialect, you will need to provide the `NEPTUNE_FTS_ENDPOINT` environment variable.

If you are using a Trifid instance deployed at `http://example.com` and configured with a Fuseki endpoint, you can use the following configuration:

```env
ENDPOINT_URL=http://example.com/query
SPARQL_CONSOLE_URL=http://example.com/sparql/#query
GRAPH_EXPLORER_URL=http://example.com/graph-explorer/?resource
FULL_TEXT_SEARCH_DIALECT=fuseki
```

## Running the container

Using the configuration above, you can run the container with the following command:

```sh
docker run -d -p 8080:80 \
  -e ENDPOINT_URL=http://example.com/query \
  -e SPARQL_CONSOLE_URL=http://example.com/sparql/#query \
  -e GRAPH_EXPLORER_URL=http://example.com/graph-explorer/?resource \
  -e FULL_TEXT_SEARCH_DIALECT=fuseki \
  ghcr.io/zazuko/blueprint:latest
```

Then open your browser at `http://localhost:8080`.