Adding generic methods to switch models from Float32 to Float64 and other precisions #422
AntoninKns started this conversation in Ideas
Replies: 2 comments
-
I would suggest having multiple solvers instead of multiple models. If your model is written sufficiently generically (as are the BundleAdjustmentModels), you simply evaluate one of the problem functions at an argument of the precision you want.
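For illustration, here is a toy sketch of what "written generically" buys you; the type and function below are hypothetical stand-ins, not the actual BundleAdjustmentModels API. The point is that the output precision follows the precision of the argument, so a single model can serve a solver running in either Float32 or Float64.

```julia
# Hypothetical toy model, for illustration only (not the BundleAdjustmentModels API).
struct ToyResidualModel{T}
    data::Vector{T}  # problem data stored in some base precision
end

# Generic residual: the arithmetic type is taken from the argument x, so
# evaluating at a Float32 vector yields Float32 results, and likewise for Float64.
function residual(model::ToyResidualModel, x::AbstractVector{T}) where {T}
    d = T.(model.data)        # bring the stored data to the precision of x
    return d .* x .- one(T)   # toy residual, computed entirely in precision T
end

model = ToyResidualModel([1.0, 2.0, 3.0])
r32 = residual(model, Float32[0.1, 0.2, 0.3])  # Vector{Float32}
r64 = residual(model, [0.1, 0.2, 0.3])         # Vector{Float64}
```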
-
I am currently working on a Levenberg-Marquardt algorithm and I am using NLPModels for it.
My model has the following form:
and I would like to write the following kind of code.
The idea is to start solving the problem in Float32 until certain criteria are met, then switch the model to Float64 and continue solving from that point in Float64. I created a method convert_type(T::Type, model::BundleAdjustmentModel) that copies every element of my model from the current precision to the new one and keeps the counters and the rest of the meta-model unchanged.

Currently there is no such generic method in NLPModels. I was discussing with @tmigot about implementing one in order to have a nice generic way to do this across all the different NLPModels subtypes, the same way it already works for other functions in the API, for example the obj function.
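As a point of reference, here is a minimal sketch of that copy-based approach; the model type and its fields are made up for illustration and are much simpler than a real BundleAdjustmentModel.

```julia
# Hypothetical toy model (not the real BundleAdjustmentModel layout), used only
# to illustrate a convert_type-style method.
struct ToyNLSModel{T}
    x0::Vector{T}          # starting point
    residuals::Vector{T}   # preallocated residual storage
    nvar::Int              # structural information, independent of precision
end

# Copy every numeric field into the target precision T and keep the structural
# information (here just nvar) unchanged, mirroring the convert_type idea above.
function convert_type(::Type{T}, model::ToyNLSModel) where {T}
    return ToyNLSModel{T}(T.(model.x0), T.(model.residuals), model.nvar)
end

m32 = ToyNLSModel{Float32}(zeros(Float32, 3), zeros(Float32, 5), 3)
# ... run the Float32 phase of the solver until the switching criteria are met ...
m64 = convert_type(Float64, m32)   # continue the solve on the Float64 copy
```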
Following @tmigot's idea, we might instead add a way to create a model in every necessary precision beforehand, together with a function to switch from one precision to the other without needing further allocations.
The idea is to have something that kind of looks like this:
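A minimal sketch along those lines; the wrapper type, its fields, and the switch signature below are assumptions made for illustration, not an existing NLPModels API.

```julia
# Hypothetical wrapper holding one fully-allocated model per precision,
# so that switching precisions allocates nothing.
struct MultiPrecisionModel{M32, M64}
    f32::M32   # model built in Float32 beforehand
    f64::M64   # model built in Float64 beforehand
end

# switch only selects which preallocated copy to use; no conversion or
# allocation happens at switch time in this sketch.
switch(::Type{Float32}, mp::MultiPrecisionModel) = mp.f32
switch(::Type{Float64}, mp::MultiPrecisionModel) = mp.f64

# Usage sketch:
# mp = MultiPrecisionModel(model_f32, model_f64)
# m  = switch(Float32, mp)   # low-precision phase
# ... iterate until the switching criteria are met ...
# m  = switch(Float64, mp)   # continue in double precision
```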
and then having a function switch(::Type, ::AbstractNLSModel) that allows going from one precision to the other.

Some questions arise from this: