
Issued with sin and cos function after update version of NeuralPDE #710

Closed
HuynhTran0301 opened this issue Jul 27, 2023 · 4 comments

HuynhTran0301 commented Jul 27, 2023

I have updated to the latest version of NeuralPDE and used `res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); callback = callback, maxiters = 1500)` to solve the ODE system. I got this error message:

MethodError: no method matching cos(::Matrix{Float64})
You may have intended to import Base.cos

This issue did not occur with the older version.
How can I fix this error?

ChrisRackauckas (Member) commented:
Can you share a reproducer?

HuynhTran0301 (Author) commented:

This is my code:

using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL, OptimizationOptimisers
import ModelingToolkit: Interval
using CSV
using DataFrames
data = CSV.File("3gens.csv");
Y1 = CSV.read("Y1.CSV", DataFrame, types=Complex{Float64});

#Input of the system.
E1 = 1.054;
E2 = 1.050;
E3 = 1.017;

omegaN = 120*pi;

#Define equation of the system
@variables delta1(..) delta2(..) delta3(..) omega1(..) omega2(..) omega3(..) TM1(..) TM2(..) TM3(..) Psv1(..) Psv2(..) Psv3(..)
@parameters t 
D = Differential(t)
omega_s = 120*pi

eq1 = [
    D(delta1(t)) ~ omega1(t) - omega_s,
    #
    D(delta2(t)) ~ omega2(t) - omega_s,
    #
    D(delta3(t)) ~ omega3(t) - omega_s,
    #
    D(omega1(t)) ~ (TM1(t) - (E1^2*Y1[1,1].re + E1*E2*sin.(delta1(t)-delta2(t))*Y1[1,2].im + E1*E2*cos.(delta1(t)-delta2(t))*Y1[1,2].re + E1*E3*sin.(delta1(t)-delta3(t))*Y1[1,3].im + 
        E1*E3*cos.(delta1(t)-delta3(t))*Y1[1,3].re))/(2*data["H"][1])*omega_s,
    #
    D(omega2(t)) ~ (TM2(t) - (E2^2*Y1[2,2].re + E1*E2*sin.(delta2(t)-delta1(t))*Y1[2,1].im + E1*E2*cos.(delta2(t)-delta1(t))*Y1[2,1].re + E2*E3*sin.(delta2(t)-delta3(t))*Y1[2,3].im + 
        E2*E3*cos.(delta2(t)-delta3(t))*Y1[2,3].re))/(2*data["H"][2])*omega_s,
    #
    D(omega3(t)) ~ (TM3(t) - (E3^2*Y1[3,3].re + E3*E1*sin.(delta3(t)-delta1(t))*Y1[3,1].im + E3*E1*cos.(delta3(t)-delta1(t))*Y1[3,1].re + E3*E2*sin.(delta3(t)-delta2(t))*Y1[3,2].im + 
        E3*E2*cos.(delta3(t)-delta2(t))*Y1[3,2].re))/(2*data["H"][3])*omega_s];

eq2 = [D(TM1(t)) ~ (-TM1(t) + Psv1(t))/data["TCH"][1],
    D(TM2(t)) ~ (-TM2(t) + Psv2(t))/data["TCH"][2],
    D(TM3(t)) ~ (-TM3(t) + Psv3(t))/data["TCH"][3]]

eq3 = [D(Psv1(t)) ~ (-Psv1(t) + 0.70945 + 0.335*(-0.0833) - (omega1(t)/omega_s - 1)/data["RD"][1])/data["TSV"][1],
    D(Psv2(t)) ~ (-Psv2(t) + 1.62342 + 0.33*(-0.0833) - (omega2(t)/omega_s - 1)/data["RD"][2])/data["TSV"][2],
    D(Psv3(t)) ~ (-Psv3(t) + 0.84843 + 0.335*(-0.0833) - (omega3(t)/omega_s - 1)/data["RD"][3])/data["TSV"][3]];

eqs = [eq1;eq2;eq3];


bcs = [delta1(0.0) ~ 0.03957, delta2(0.0) ~ 0.3447, delta3(0.0) ~ 0.23038,
    omega1(0.0) ~ omega_s, omega2(0.0) ~ omega_s, omega3(0.0) ~ omega_s,
    TM1(0.0) ~ 0.70945, TM2(0.0) ~ 1.62342, TM3(0.0) ~ 0.848433,
    Psv1(0.0) ~ 0.70945, Psv2(0.0) ~ 1.62342, Psv3(0.0)~ 0.848433]

domains = [t ∈ Interval(0.0,25.0)]


chain =[Lux.Chain(Lux.BatchNorm(1,Lux.relu),Dense(1,10,Lux.tanh),Lux.BatchNorm(10,Lux.relu),Dense(10,20,Lux.tanh),Lux.BatchNorm(20,Lux.relu),Dense(20,10,Lux.tanh),
                  Lux.BatchNorm(10,Lux.relu),Dense(10,1)) for _ in 1:12]


dvs = [delta1(t),delta2(t),delta3(t),omega1(t),omega2(t),omega3(t),TM1(t),TM2(t),TM3(t),Psv1(t),Psv2(t),Psv3(t)]

@named pde_system = PDESystem(eqs,bcs,domains,[t],dvs)


strategy = NeuralPDE.GridTraining(0.01)
discretization = PhysicsInformedNN(chain, strategy)
sym_prob = NeuralPDE.symbolic_discretize(pde_system, discretization)

pde_loss_functions = sym_prob.loss_functions.pde_loss_functions
bc_loss_functions = sym_prob.loss_functions.bc_loss_functions

callback = function (p, l)
    println("loss: ", l)
    return false
end
loss_functions =  [pde_loss_functions;bc_loss_functions]

function loss_function(θ,p)
    sum(map(l->l(θ) ,loss_functions))
end

f_ = OptimizationFunction(loss_function, Optimization.AutoZygote())
prob = Optimization.OptimizationProblem(f_, sym_prob.flat_init_params);
phi = sym_prob.phi;

res = Optimization.solve(prob,OptimizationOptimJL.BFGS(); callback = callback, maxiters = 20000)

The error is:

MethodError: no method matching cos(::Matrix{Float64})
You may have intended to import Base.cos

Closest candidates are:
  cos(::Float32)
   @ NaNMath C:\Users\htran\.julia\packages\NaNMath\ceWIc\src\NaNMath.jl:10
  cos(::Float64)
   @ NaNMath C:\Users\htran\.julia\packages\NaNMath\ceWIc\src\NaNMath.jl:9
  cos(::DualNumbers.Dual)
   @ DualNumbers C:\Users\htran\.julia\packages\DualNumbers\5knFX\src\dual.jl:327
  ...
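The closest-candidates list hints at what is going on: the generated loss code resolves `cos` to NaNMath's scalar-only methods, so it fails when handed a whole `Matrix` of grid points at once. A minimal stand-alone illustration of the dispatch failure (using a toy `MyMath` module as a stand-in for NaNMath, since the mechanics are plain Julia method dispatch):

```julia
# MyMath mimics NaNMath: it defines cos only for scalars, not for arrays.
module MyMath
cos(x::Float64) = Base.cos(x)  # scalar-only method, like NaNMath.cos(::Float64)
end

x = [0.0 0.5; 1.0 1.5]   # a Matrix{Float64}, like a batch of grid points
# MyMath.cos(x)          # would throw: MethodError: no method matching cos(::Matrix{Float64})
y = MyMath.cos.(x)       # broadcasting the scalar method elementwise works
```

This is consistent with the eventual resolution below: the fix belongs in the code-generation side of the packages, not in user code.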

The CSV files are attached below.
Also, could you advise how to get predictions after training when the chain contains BatchNorm layers?
When I use the prediction code from the tutorial, I get the error below:

ts = 0.0:0.01:25.0
minimizers_ = [res.u.depvar[sym_prob.depvars[i]] for i in 1:12]
u_predict  = [[phi[i]([t],minimizers_[i])[1] for t in ts] for i in 1:12];
BoundsError: attempt to access Tuple{Int64} at index [0]

Stacktrace:
  [1] getindex(t::Tuple, i::Int64)
    @ Base .\tuple.jl:29
  [2] _get_reshape_dims
    @ C:\Users\htran\.julia\packages\LuxLib\wG638\src\utils.jl:39 [inlined]
  [3] _reshape_into_proper_shape
    @ C:\Users\htran\.julia\packages\LuxLib\wG638\src\utils.jl:51 [inlined]
  [4] _normalization(x::Vector{Float64}, running_mean::Vector{Float32}, running_var::Vector{Float32}, scale::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, bias::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, reduce_dims::Val{(1,)}, training::Val{true}, momentum::Float32, epsilon::Float32)
    @ LuxLib C:\Users\htran\.julia\packages\LuxLib\wG638\src\impl\normalization.jl:71
  [5] batchnorm(x::Vector{Float64}, scale::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, bias::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, running_mean::Vector{Float32}, running_var::Vector{Float32}; momentum::Float32, training::Val{true}, epsilon::Float32)
    @ LuxLib C:\Users\htran\.julia\packages\LuxLib\wG638\src\api\batchnorm.jl:47
  [6] (::BatchNorm{true, true, typeof(NNlib.relu), typeof(Lux.zeros32), typeof(Lux.ones32), Float32})(x::Vector{Float64}, ps::ComponentArrays.ComponentVector{Float64, SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, Tuple{ComponentArrays.Axis{(scale = 1:1, bias = 2:2)}}}, st::NamedTuple{(:running_mean, :running_var, :training), Tuple{Vector{Float32}, Vector{Float32}, Val{true}}})
    @ Lux C:\Users\htran\.julia\packages\Lux\8FZSB\src\layers\normalize.jl:125
  [7] apply(model::BatchNorm{true, true, typeof(NNlib.relu), typeof(Lux.zeros32), typeof(Lux.ones32), Float32}, x::Vector{Float64}, ps::ComponentArrays.ComponentVector{Float64, SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, Tuple{ComponentArrays.Axis{(scale = 1:1, bias = 2:2)}}}, st::NamedTuple{(:running_mean, :running_var, :training), Tuple{Vector{Float32}, Vector{Float32}, Val{true}}})
    @ LuxCore C:\Users\htran\.julia\packages\LuxCore\yC3wg\src\LuxCore.jl:100
  [8] macro expansion
    @ .\abstractarray.jl:0 [inlined]
  [9] applychain(layers::NamedTuple{(:layer_1, :layer_2, :layer_3, :layer_4, :layer_5), Tuple{BatchNorm{true, true, typeof(NNlib.relu), typeof(Lux.zeros32), typeof(Lux.ones32), Float32}, Dense{true, typeof(NNlib.tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(NNlib.tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(NNlib.tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}, x::Vector{Float64}, ps::ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:2, Axis(scale = 1:1, bias = 2:2)), layer_2 = ViewAxis(3:22, Axis(weight = ViewAxis(1:10, ShapedAxis((10, 1), NamedTuple())), bias = ViewAxis(11:20, ShapedAxis((10, 1), NamedTuple())))), layer_3 = ViewAxis(23:242, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10), NamedTuple())), bias = ViewAxis(201:220, ShapedAxis((20, 1), NamedTuple())))), layer_4 = ViewAxis(243:452, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20), NamedTuple())), bias = ViewAxis(201:210, ShapedAxis((10, 1), NamedTuple())))), layer_5 = ViewAxis(453:463, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10), NamedTuple())), bias = ViewAxis(11:11, ShapedAxis((1, 1), NamedTuple())))))}}}, st::NamedTuple{(:layer_1, :layer_2, :layer_3, :layer_4, :layer_5), Tuple{NamedTuple{(:running_mean, :running_var, :training), Tuple{Vector{Float32}, Vector{Float32}, Val{true}}}, Vararg{NamedTuple{(), Tuple{}}, 4}}})
    @ Lux C:\Users\htran\.julia\packages\Lux\8FZSB\src\layers\containers.jl:460
 [10] (::Chain{NamedTuple{(:layer_1, :layer_2, :layer_3, :layer_4, :layer_5), Tuple{BatchNorm{true, true, typeof(NNlib.relu), typeof(Lux.zeros32), typeof(Lux.ones32), Float32}, Dense{true, typeof(NNlib.tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(NNlib.tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(NNlib.tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}})(x::Vector{Float64}, ps::ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:2, Axis(scale = 1:1, bias = 2:2)), layer_2 = ViewAxis(3:22, Axis(weight = ViewAxis(1:10, ShapedAxis((10, 1), NamedTuple())), bias = ViewAxis(11:20, ShapedAxis((10, 1), NamedTuple())))), layer_3 = ViewAxis(23:242, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10), NamedTuple())), bias = ViewAxis(201:220, ShapedAxis((20, 1), NamedTuple())))), layer_4 = ViewAxis(243:452, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20), NamedTuple())), bias = ViewAxis(201:210, ShapedAxis((10, 1), NamedTuple())))), layer_5 = ViewAxis(453:463, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10), NamedTuple())), bias = ViewAxis(11:11, ShapedAxis((1, 1), NamedTuple())))))}}}, st::NamedTuple{(:layer_1, :layer_2, :layer_3, :layer_4, :layer_5), Tuple{NamedTuple{(:running_mean, :running_var, :training), Tuple{Vector{Float32}, Vector{Float32}, Val{true}}}, Vararg{NamedTuple{(), Tuple{}}, 4}}})
    @ Lux C:\Users\htran\.julia\packages\Lux\8FZSB\src\layers\containers.jl:457
 [11] (::NeuralPDE.Phi{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3, :layer_4, :layer_5), Tuple{BatchNorm{true, true, typeof(NNlib.relu), typeof(Lux.zeros32), typeof(Lux.ones32), Float32}, Dense{true, typeof(NNlib.tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(NNlib.tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(NNlib.tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3, :layer_4, :layer_5), Tuple{NamedTuple{(:running_mean, :running_var, :training), Tuple{Vector{Float32}, Vector{Float32}, Val{true}}}, Vararg{NamedTuple{(), Tuple{}}, 4}}}})(x::Vector{Float64}, θ::ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:2, Axis(scale = 1:1, bias = 2:2)), layer_2 = ViewAxis(3:22, Axis(weight = ViewAxis(1:10, ShapedAxis((10, 1), NamedTuple())), bias = ViewAxis(11:20, ShapedAxis((10, 1), NamedTuple())))), layer_3 = ViewAxis(23:242, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10), NamedTuple())), bias = ViewAxis(201:220, ShapedAxis((20, 1), NamedTuple())))), layer_4 = ViewAxis(243:452, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20), NamedTuple())), bias = ViewAxis(201:210, ShapedAxis((10, 1), NamedTuple())))), layer_5 = ViewAxis(453:463, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10), NamedTuple())), bias = ViewAxis(11:11, ShapedAxis((1, 1), NamedTuple())))))}}})
    @ NeuralPDE C:\Users\htran\.julia\packages\NeuralPDE\F4RlZ\src\pinn_types.jl:365
 [12] (::var"#92#94"{Int64})(t::Float64)
    @ Main .\none:0
 [13] iterate
    @ .\generator.jl:47 [inlined]
 [14] collect(itr::Base.Generator{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, var"#92#94"{Int64}})
    @ Base .\array.jl:782
 [15] (::var"#91#93")(i::Int64)
    @ Main .\none:0
 [16] iterate
    @ .\generator.jl:47 [inlined]
 [17] collect(itr::Base.Generator{UnitRange{Int64}, var"#91#93"})
    @ Base .\array.jl:782
 [18] top-level scope
    @ In[53]:3

Thank you.
3gens.csv
Y1.csv
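On the prediction question, a possible workaround for readers hitting the same `BoundsError` (an editorial sketch, not from the thread; it assumes the Lux ≥ 0.4 API and a chain shaped like the ones above): the stack trace shows `BatchNorm` receiving a length-1 `Vector`, but normalization layers expect an extra batch dimension, and at inference time they should run in test mode so the stored running statistics are used. Evaluating the bare chain on a 1×N matrix avoids both problems:

```julia
using Lux, Random

# A chain shaped like the ones in the issue (hypothetical smaller example).
chain = Lux.Chain(Lux.BatchNorm(1, Lux.relu), Lux.Dense(1, 10, Lux.tanh), Lux.Dense(10, 1))
ps, st = Lux.setup(Random.default_rng(), chain)

st_test = Lux.testmode(st)        # switch BatchNorm to use running statistics
ts = 0.0:0.01:25.0
x = reshape(collect(ts), 1, :)    # 1×N matrix: (features, batch), not a Vector
y, _ = chain(x, ps, st_test)      # y is a 1×N matrix of network outputs
```

With trained parameters, `ps` would be replaced by the per-variable minimizers extracted from `res.u`.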


pnavaro commented Jul 30, 2023

I hit the same issue with the example https://docs.sciml.ai/NeuralPDE/stable/tutorials/pdesystem/

Status `~/JuliaProjects/PINNs.jl/Project.toml`
⌅ [b2108857] Lux v0.4.58
  [961ee093] ModelingToolkit v8.63.0
  [315f7962] NeuralPDE v5.7.0
  [7f7a1694] Optimization v3.15.2
  [36348300] OptimizationOptimJL v0.1.9

HuynhTran0301 (Author) commented:

The issue has been resolved in the newer package versions.
