Switch backend to SymbolicUtils? #5
Can we implement the current system using just Symbolics.jl, without getting into the SymbolicUtils part of it? They seem to provide an entry point to "register" functions so they are treated as primitive symbols/functions, which sounds like it might help: https://juliasymbolics.github.io/Symbolics.jl/dev/manual/functions/ But one would probably still have to write the `canon_bp` rules using the SymbolicUtils API. If you have an idea of how to begin implementing this in SymbolicUtils.jl, I'm very curious and would love to hear a very rough sketch.

---
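The "register" mechanism mentioned above can be sketched roughly as follows. This is only an illustration: `contract` is a hypothetical function, and the macro is `@register_symbolic` in current Symbolics.jl releases (older documentation calls it `@register`).

```julia
using Symbolics

# Hypothetical tensor primitive with no numeric implementation yet
contract(a, b) = error("no numeric method defined")

# Tell Symbolics to treat calls to `contract` as opaque symbolic calls
@register_symbolic contract(a, b)

@variables x y
ex = contract(x, y)   # stays as an unevaluated symbolic call
```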
That's a good idea, and if possible it could make this a lot easier. I'm not quite sure how you could enforce index matching, etc. in Symbolics.jl, but maybe I just need to think about it more. I'd love to hear ideas if you have them. And apparently Symbolics.jl is supposed to eventually support array-valued variables, so adding rules for tensor contractions might then be pretty easy.

The rough sketch I had in mind for SymbolicUtils went something like this, basing the rough types off of the following:

```julia
abstract type Tensor <: Number end

struct TensorIndexType
    name::Symbol
end

struct TensorIndex
    name::Symbol
    TIT::TensorIndexType
    is_up::Bool
end

struct TensorHead <: Tensor
    num_indices::Integer
    TIT::TensorIndexType
    # index symmetries, maybe more stuff
end

struct TensorExpr <: Tensor
    operation::Symbol
    args::Vector{T} where T <: Tensor
    free_indices::Vector{TensorIndex}
end

# Hook into the SymbolicUtils expression interface
istree(tens::T) where T <: Tensor = false
istree(tens::TensorExpr) = true
operation(T::TensorExpr) = T.operation
arguments(T::TensorExpr) = T.args
```

I think I got that right? And then actual addition / index contraction would be via functions like:

```julia
function add(a::TensorExpr, b::TensorExpr)
    if all([aᵢ ∈ b.free_indices for aᵢ in a.free_indices]) &&
       all([bᵢ ∈ a.free_indices for bᵢ in b.free_indices])
        # there's definitely a faster way to check this
        # addition is associative and commutative, so use an acrule
        r = @acrule ~x + ~y => TensorExpr(:+, push((~x).args, ~y), (~x).free_indices)
        return r(a + b)
    else
        error("incompatible indices")
    end
end
```

---
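On the "there's definitely a faster way to check this" comment in the sketch above: one option is to compare the two index lists as `Set`s, which works because these immutable structs compare and hash by their field values. A minimal standalone sketch (the helper name `index_matching` is illustrative, not from the thread):

```julia
struct TensorIndexType
    name::Symbol
end

struct TensorIndex
    name::Symbol
    TIT::TensorIndexType
    is_up::Bool
end

# Set equality replaces the two O(n^2) membership scans; immutable
# structs like TensorIndex hash and compare by value.
index_matching(a::Vector{TensorIndex}, b::Vector{TensorIndex}) =
    Set(a) == Set(b)

L = TensorIndexType(:Lorentz)
μ = TensorIndex(:μ, L, true)
ν = TensorIndex(:ν, L, true)

index_matching([μ, ν], [ν, μ])  # true: same free indices, any order
index_matching([μ], [μ, ν])     # false
```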
This is actually an excellent starting point, I think. It would probably be better to implement this using SymbolicUtils then; this starting point looks very clear and elegant to me. I imagine the index lowering and raising problems may be too finicky to implement in Symbolics.jl.

I've coded up a minimum working example of the above, and one thing I've noted is that SymbolicUtils is not very easy to work with and the error messages are not at all helpful.

Starting from the code you have provided, the next step would be to have something like:

```julia
# Define index type
L = TensorIndexType(:Lorentz)

# Define indices
μ = TensorIndex(:μ, L, true)
ν = TensorIndex(:ν, L, true)
ρ = TensorIndex(:ρ, L, true)
σ = TensorIndex(:σ, L, true)

# Define tensor heads
A = TensorHead(2, L)
B = TensorHead(2, L)

# Do the operation (this is currently not implemented)
C = A(μ,ν) + B(μ,ν)
```

Now one would need a way to define a tensor with some indices, like `A(μ,ν)`. The issue is that `TensorHead` by itself carries no indices, so I propose we create a new struct, `TensorInstance`.
We can also create an abstract type `AbstractTensor` and build everything on top of it:

```julia
using SymbolicUtils
import Base: +, -, *

abstract type AbstractTensor end
abstract type Tensor <: Number end

struct TensorIndexType
    name::Symbol
end

struct TensorIndex
    name::Symbol
    TIT::TensorIndexType
    is_up::Bool
end

struct TensorHead <: AbstractTensor
    num_indices::Integer
    TIT::TensorIndexType
    # index symmetries, maybe more stuff
end

struct TensorInstance <: Tensor
    tensorhead::TensorHead
    free_indices::Vector{TensorIndex}
end

# Calling a TensorHead on indices produces a TensorInstance
function (A::TensorHead)(indices::Vararg{TensorIndex})::TensorInstance
    ## Error check to see if the right number of indices is provided.
    ## If we allow TensorHead to be a parametric type like
    ## TensorHead{num_indices}, we can use dispatch to get rid of the
    ## error check.
    if length(indices) == A.num_indices
        TensorInstance(A, collect(indices))
    else
        error("Number of indices provided doesn't match the number of indices of the tensor")
    end
end

struct TensorExpr <: Tensor
    operation::Function
    args::Vector{Number}
    free_indices::Vector{TensorIndex}
end

free_indices(a::TensorExpr) = a.free_indices
free_indices(a::TensorInstance) = a.free_indices

index_matching(a::Tensor, b::Tensor) =
    all([aᵢ ∈ free_indices(b) for aᵢ in free_indices(a)]) &&
    all([bᵢ ∈ free_indices(a) for bᵢ in free_indices(b)])

# Note: plain structs have no keyword constructor, so TensorExpr is
# constructed positionally.
function (+)(a::Tensor, b::Tensor)
    if index_matching(a, b)
        TensorExpr(+, Number[a, b], free_indices(a))
    else
        error("incompatible indices")
    end
end

function (-)(a::Tensor, b::Tensor)
    if index_matching(a, b)
        TensorExpr(-, Number[a, b], free_indices(a))
    else
        error("incompatible indices")
    end
end

(*)(a::Number, b::Tensor) = TensorExpr(*, Number[a, b], free_indices(b))
(*)(a::Tensor, b::Number) = TensorExpr(*, Number[a, b], free_indices(a))
```
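One caveat about the `*` methods above (an observation about the sketch, not something reported in the thread): because `Tensor <: Number`, both `(*)(::Number, ::Tensor)` and `(*)(::Tensor, ::Number)` are applicable when both arguments are tensors, and Julia raises a method ambiguity error for that call. Defining a tensor-tensor method resolves it. A standalone sketch with a toy type (the `Wrap` type and tuple return values are purely illustrative):

```julia
import Base: *

abstract type Tensor <: Number end

struct Wrap <: Tensor
    x::Int
end

(*)(a::Number, b::Tensor) = (:scalar_left, a, b)
(*)(a::Tensor, b::Number) = (:scalar_right, a, b)
# Without this method, Wrap(1) * Wrap(2) is ambiguous between the two above
(*)(a::Tensor, b::Tensor) = (:tensor_product, a, b)

Wrap(1) * 2        # (:scalar_right, Wrap(1), 2)
2 * Wrap(1)        # (:scalar_left, 2, Wrap(1))
Wrap(1) * Wrap(2)  # (:tensor_product, Wrap(1), Wrap(2))
```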
Then I tested the code above by running:

```julia
# Define index type
L = TensorIndexType(:Lorentz)

# Define indices
μ = TensorIndex(:μ, L, true)
ν = TensorIndex(:ν, L, true)
ρ = TensorIndex(:ρ, L, true)
σ = TensorIndex(:σ, L, true)

# Define tensor heads
A = TensorHead(2, L)
B = TensorHead(2, L)

C = A(μ,ν) + A(μ,ν)
C |> display      # basically gives A(μ,ν) + A(μ,ν) as a TensorExpr

r = @rule( ~x => ~x*10 )
r(C) |> display   # basically gives 10*(A(μ,ν) + A(μ,ν)) as a TensorExpr

r2 = @rule( ~x + ~x => 2*~x )
r2(C) |> display  # returns nothing: it was unable to match this expression
```

Hence the `~x + ~x` rule fails to match. Please do critique any design choices and/or the SymbolicUtils implementation here. If you want, I could make a PR with the minimum working example and you can examine that code.

---
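A possible reason the `~x + ~x` rule returns `nothing` (an assumption about this MWE, not verified): the `istree` / `operation` / `arguments` methods from the earlier sketch are not defined for `TensorExpr` here, and the SymbolicUtils matcher relies on that interface to destructure `C` into `+` and its arguments. A sketch of the missing methods, assuming the `TensorExpr` and `TensorInstance` types above (note that newer SymbolicUtils versions route this through TermInterface.jl, where the function names differ):

```julia
using SymbolicUtils
import SymbolicUtils: istree, operation, arguments

# Let the rule matcher destructure TensorExpr nodes
istree(t::TensorExpr) = true
istree(t::TensorInstance) = false
operation(t::TensorExpr) = t.operation
arguments(t::TensorExpr) = t.args
```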
Wow, awesome work. Yeah, I think a PR should work well; as far as I know nobody is using this repo in production, so we can just commit to master. I'll tag the current state just in case someone needs it.

One thing I am just now realizing is that we probably want tensors with mixed index types (think changing bases and things like that), so maybe `TensorHead` should have been:

```julia
struct TensorHead <: AbstractTensor
    index_types::Vector{TensorIndexType}
    # index symmetries, maybe more stuff
end
```

In practice the number of indices is then just `length(index_types)`. Also, great idea with `TensorInstance`.

But yeah, awesome work. We'll probably have to play around with the exact implementation for a while before simplification is working well, but this is great progress.

---
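The mixed-index-type head combines nicely with the parametric-arity idea from earlier in the thread. A sketch under illustrative names (it returns the index list rather than a full `TensorInstance` to stay standalone):

```julia
abstract type AbstractTensor end

struct TensorIndexType
    name::Symbol
end

struct TensorIndex
    name::Symbol
    TIT::TensorIndexType
    is_up::Bool
end

# Arity lives in the type parameter and the per-slot index types in a
# tuple, so the index count can't drift out of sync with index_types.
struct TensorHead{N} <: AbstractTensor
    index_types::NTuple{N,TensorIndexType}
end
TensorHead(itypes::TensorIndexType...) = TensorHead{length(itypes)}(itypes)

num_indices(::TensorHead{N}) where {N} = N

# A wrong index count is now a MethodError via Vararg{TensorIndex,N}
function (A::TensorHead{N})(indices::Vararg{TensorIndex,N}) where {N}
    for (idx, T) in zip(indices, A.index_types)
        idx.TIT == T || error("index $(idx.name) has the wrong index type")
    end
    collect(indices)   # stand-in for TensorInstance(A, collect(indices))
end

L = TensorIndexType(:Lorentz)
E = TensorIndexType(:Euclidean)
μ = TensorIndex(:μ, L, true)
i = TensorIndex(:i, E, true)

A = TensorHead(L, E)
A(μ, i)   # ok: returns [μ, i]
# A(μ)    would be a MethodError (wrong arity)
# A(i, μ) would error: slot index types don't match
```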
SymbolicUtils.jl is pure Julia and is extendable: https://github.com/JuliaSymbolics/SymbolicUtils.jl It could replace the sympy backend.