
Computing $L_i$ instead of $L$ #151

Open
orkolorko opened this issue Feb 14, 2023 · 4 comments

@orkolorko
Collaborator

orkolorko commented Feb 14, 2023

A first modification we could make, in the long refactor of `Dual`+`iterate`, could be the following.

Suppose $T$ satisfies the following assumption: there exists a partition $\{P_i\}_{i=1}^n$ of $M$ of full measure such that

  1. $T_i := T|_{P_i}$ is injective,
  2. $|DT_i| > 0$.

Then we can write

$$ L f(x) = \sum_{i=1}^n L_i f(x) $$

where

$$ L_i f(x) = \frac{f(T_i^{-1}(x))}{|DT_i(T_i^{-1}(x))|}\,\chi_{T_i(P_i)}(x) $$

To each branch we associate an operator, and the discretized operator is the sum of the discretizations of these branch operators.
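As a rough sketch (in Julia, with a hypothetical `discretize_branch` standing in for whatever routine computes the matrix of a single $L_i$ on the chosen basis), the assembled operator would just be the sum of the per-branch matrices:

```julia
using SparseArrays

# Sketch only: discretize each branch operator L_i separately and sum the
# resulting sparse matrices; `discretize_branch` is an illustrative name,
# not an existing function of the package.
function assemble_from_branches(branches, basis)
    n = length(basis)
    L = spzeros(Float64, n, n)
    for Ti in branches
        L += discretize_branch(Ti, basis)   # matrix of L_i on this basis
    end
    return L
end
```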

@fph
Collaborator

fph commented Feb 14, 2023

Maybe I am missing some motivation; what would be the advantage of doing this?

Here $n$ is the number of branches, right? And that expression is for the hat assembler only, not for Ulam?

Anyway, I think we already do something very similar: the preimages are computed branch-by-branch, so we compute those terms separately, store them in the `I`, `J`, `nzvals` vectors and add them up when we create the sparse matrix in `assemble`.

In other words, the elements of our "dual" are already the functions $L_i f(x_k)$ for all $i$ and $k$.
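For reference, here is a minimal self-contained illustration (with made-up numbers) of the summing behaviour this relies on: `sparse` combines duplicate `(i, j)` entries with `+` by default, so contributions from different branches to the same matrix entry are added up automatically.

```julia
using SparseArrays

I = [1, 1, 2]              # row indices (two branches hit entry (1, 1))
J = [1, 1, 3]              # column indices
nzvals = [0.25, 0.5, 1.0]  # per-branch contributions, e.g. L_i f_j(x_k)
M = sparse(I, J, nzvals, 3, 3)
M[1, 1]  # == 0.75: the two branch contributions have been summed
```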

@fph
Collaborator

fph commented Feb 14, 2023

Oh OK, maybe I get it now; your plan is to have this part, where we split the assembly over branches, as a "common sub-factor" of `assemble` for all bases?

For the future, my suggestion would instead be to have completely separate `assemble` methods for each basis, dispatched over the basis type: especially looking at future Chebyshev / FFT strategies, they seem different enough that introducing a common interface is just a complication. Indeed, all that `iterate` business with the duals makes the current code really hard to read.

And in a sense `preimages` is already a common part of all of them that we have identified and abstracted out. Note that calling `preimages` on a dynamic already concatenates the preimages over each branch into a single vector, so it already plays that role, more or less.
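To make the suggestion concrete, the dispatch-based design could have roughly this shape (all type and function names here are illustrative, not the package's actual API):

```julia
# Each basis gets its own assemble method, selected by multiple dispatch;
# a future Chebyshev/FFT basis would simply add another method instead of
# fitting into a common Dual/iterate interface.
abstract type Basis end
struct Ulam <: Basis; n::Int; end
struct Hat  <: Basis; n::Int; end

assemble(B::Ulam, D) = nothing  # Ulam-specific assembly would go here
assemble(B::Hat,  D) = nothing  # hat-specific assembly would go here
```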

@orkolorko
Collaborator Author

I am mostly thinking about the fact that each branch can be computed in parallel, but I need to think more about it...

@fph
Collaborator

fph commented Feb 21, 2023

Note that at least for ComposedDynamics, one should not split into branches; see #153 (comment) for an explanation.

Apart from that, I agree that preimages along different branches could be computed in parallel, and then afterwards also $L_i$ as you suggest; that is something to study.
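A sketch of how that could look (same hypothetical `discretize_branch` as in the earlier sketch; only the per-branch work is parallel, the final sum is serial):

```julia
using SparseArrays, Base.Threads

# Each branch operator L_i is independent of the others, so its matrix can
# be built on a separate thread and the pieces summed at the end.
function assemble_parallel(branches, basis)
    parts = Vector{SparseMatrixCSC{Float64,Int}}(undef, length(branches))
    @threads for i in eachindex(branches)
        parts[i] = discretize_branch(branches[i], basis)
    end
    return sum(parts)
end
```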
