Tutorials

Here are some tutorials on how to use Tulipa.

Basic example

For our first example, let's use a tiny existing dataset. Inside the code for this package, you can find the folder test/inputs/Tiny, which includes all the files necessary to create a model and solve it.

The files inside the "Tiny" folder define the asset and flow data, their profiles, and their time resolution, as well as the representative periods and which periods of the full problem formulation they represent.¹

For more details about these files, see Input.

¹ Ignore bad-assets-data.csv, which is used for testing.

Run scenario

To read all data from the Tiny folder, perform all necessary steps to create a model, and solve the model, run the following in a Julia terminal:

using DuckDB, TulipaIO, TulipaEnergyModel

# input_dir should be the path to Tiny as a string (something like "test/inputs/Tiny")
# TulipaEnergyModel.schema_per_table_name contains the schema with columns and types the file must have
connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = run_scenario(connection)
EnergyProblem:
  - Time creating internal structures (in seconds): 8.062490649
  - Time computing constraints partitions (in seconds): 4.751217182
  - Model created!
    - Time for  creating the model (in seconds): 10.905695627
    - Number of variables: 368
    - Number of constraints for variable bounds: 368
    - Number of structural constraints: 432
  - Model solved! 
    - Time for  solving the model (in seconds): 2.342701207
    - Termination status: OPTIMAL
    - Objective value: 269238.4382375954

The energy_problem variable is of type EnergyProblem. For more details, see the documentation for that type or the section Structures.

That's all it takes to run a scenario! To learn about the data required to run your own scenario, see the Input section of How to Use.

Manually running each step

If we need more control, we can create the energy problem first, then the optimization model inside it, and finally ask for it to be solved.

using DuckDB, TulipaIO, TulipaEnergyModel

# input_dir should be the path to Tiny as a string (something like "test/inputs/Tiny")
connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = EnergyProblem(connection)
EnergyProblem:
  - Time creating internal structures (in seconds): 0.133250186
  - Time computing constraints partitions (in seconds): 0.000205715
  - Model not created!
  - Model not solved!

The energy problem does not have a model yet:

energy_problem.model === nothing
true

To create the internal model, we call the function create_model!.

create_model!(energy_problem)
energy_problem.model
A JuMP Model
├ solver: none
├ objective_sense: MIN_SENSE
│ └ objective_function_type: JuMP.AffExpr
├ num_variables: 368
├ num_constraints: 809
│ ├ JuMP.AffExpr in MOI.EqualTo{Float64}: 72
│ ├ JuMP.AffExpr in MOI.LessThan{Float64}: 360
│ ├ JuMP.VariableRef in MOI.GreaterThan{Float64}: 368
│ ├ JuMP.VariableRef in MOI.LessThan{Float64}: 1
│ └ JuMP.VariableRef in MOI.Integer: 8
└ Names registered in the model
  └ :accumulated_decommission_units_transport_using_simple_method, :accumulated_decommission_units_using_compact_method, :accumulated_decommission_units_using_simple_method, :accumulated_energy_capacity, :accumulated_energy_units_simple_method, :accumulated_flows_export_units, :accumulated_flows_import_units, :accumulated_initial_units, :accumulated_investment_limit, :accumulated_investment_units_transport_using_simple_method, :accumulated_investment_units_using_simple_method, :accumulated_units, :accumulated_units_compact_method, :accumulated_units_simple_method, :assets_decommission_compact_method, :assets_decommission_energy_simple_method, :assets_decommission_simple_method, :assets_investment, :assets_investment_energy, :assets_profile_times_capacity_in, :assets_profile_times_capacity_in_with_binary_part1, :assets_profile_times_capacity_in_with_binary_part2, :assets_profile_times_capacity_out, :assets_profile_times_capacity_out_with_binary_part1, :assets_profile_times_capacity_out_with_binary_part2, :consumer_balance, :conversion_balance, :flow, :flows_decommission_using_simple_method, :flows_investment, :hub_balance, :incoming_flow_highest_in_out_resolution, :incoming_flow_highest_in_resolution, :incoming_flow_lowest_resolution, :incoming_flow_lowest_storage_resolution_intra_rp, :incoming_flow_storage_inter_rp_balance, :is_charging, :max_energy_inter_rp, :max_input_flows_limit, :max_input_flows_limit_with_binary_part1, :max_input_flows_limit_with_binary_part2, :max_output_flows_limit, :max_output_flows_limit_with_binary_part1, :max_output_flows_limit_with_binary_part2, :max_storage_level_inter_rp_limit, :max_storage_level_intra_rp_limit, :max_transport_flow_limit, :min_energy_inter_rp, :min_storage_level_inter_rp_limit, :min_storage_level_intra_rp_limit, :min_transport_flow_limit, :outgoing_flow_highest_in_out_resolution, :outgoing_flow_highest_out_resolution, :outgoing_flow_lowest_resolution, :outgoing_flow_lowest_storage_resolution_intra_rp, 
:outgoing_flow_storage_inter_rp_balance, :storage_level_inter_rp, :storage_level_intra_rp, :units_on

The model has not been solved yet, which can be verified through the solved flag inside the energy problem:

energy_problem.solved
false

Finally, we can solve the model:

solution = solve_model!(energy_problem)
TulipaEnergyModel.Solution(Dict((2030, "wind") => 35.0, (2030, "ocgt") => 1.1368683772161603e-15, (2030, "ccgt") => 2.0, (2030, "solar") => 44.99999994237288), Dict{Tuple{Int64, String}, Float64}(), Dict{Tuple{Int64, Tuple{String, String}}, Float64}(), Float64[], Float64[], Float64[], Float64[], [0.0, 1.1368683772161603e-13, 1.1368683772161603e-13, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0  …  0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 269238.4382388837, Dict(:hub_balance => [], :consumer_balance => [9.125, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 9.125, 9.125  …  3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 13.139999999999999, 13.139999999999999, 13.139999999999999, 13.139999999999999, 13.139999999999999]))

The solution is stored in the individual assets and flows, but for completeness, we also return the full solution object, defined in the Structures section.

In particular, the objective value and the termination status are also included in the energy problem:

energy_problem.objective_value, energy_problem.termination_status
(269238.4382375954, MathOptInterface.OPTIMAL)

Manually creating all structures without EnergyProblem

For additional control, it might be desirable to use the internal structures of EnergyProblem directly. This can be error-prone, so proceed with care. The full description of these structures can be found in Structures.

using DuckDB, TulipaIO, TulipaEnergyModel

# input_dir should be the path to Tiny as a string (something like "test/inputs/Tiny")
connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
model_parameters = ModelParameters(connection)
graph, representative_periods, timeframe, groups, years = create_internal_structures(connection)
(Meta graph based on a Graphs.SimpleGraphs.SimpleDiGraph{Int64} with vertex labels of type String, vertex metadata of type GraphAssetData, edge metadata of type GraphFlowData, graph metadata given by nothing, and default weight nothing, Dict{Int64, Vector{RepresentativePeriod}}(2030 => [RepresentativePeriod(73.0, 1:24, 1.0), RepresentativePeriod(182.5, 1:24, 1.0), RepresentativePeriod(109.5, 1:24, 1.0)]), Timeframe(365, 366×4 DataFrame
 Row │ year    period  rep_period  weight
     │ Int32?  Int32?  Int32?      Float64?
─────┼──────────────────────────────────────
   1 │   2030       1           1       1.0
   2 │   2030       2           1       1.0
   3 │   2030       3           2       1.0
   4 │   2030       4           2       1.0
   5 │   2030       5           2       1.0
   6 │   2030       6           2       1.0
   7 │   2030       7           2       1.0
   8 │   2030       8           3       1.0
  ⋮  │   ⋮       ⋮         ⋮          ⋮
 360 │   2030     360           3       1.0
 361 │   2030     361           1       1.0
 362 │   2030     362           2       1.0
 363 │   2030     363           2       1.0
 364 │   2030     364           3       1.0
 365 │   2030     365           2       0.5
 366 │   2030     365           3       0.5
                            351 rows omitted), Group[], Year[Year(2030, 8760, true)])
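
As a sanity check on the representative periods printed above, each period's weight times its number of timesteps should add up to the 8760 hours of the year reported in Year(2030, 8760, true):

```julia
# Weights and timestep counts of the three representative periods shown above:
# RepresentativePeriod(73.0, 1:24, 1.0), RepresentativePeriod(182.5, 1:24, 1.0),
# and RepresentativePeriod(109.5, 1:24, 1.0)
weights = [73.0, 182.5, 109.5]
num_timesteps = 24          # each representative period covers timesteps 1:24
total_hours = sum(weights) * num_timesteps
# 24 * (73.0 + 182.5 + 109.5) = 24 * 365 = 8760.0
```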

We also need a time partition for the constraints to create the model. Creating an energy problem automatically computes this data, but since we are doing it manually, we need to calculate it ourselves.

constraints_partitions = compute_constraints_partitions(graph, representative_periods, years)
Dict{Symbol, Dict{Tuple{String, Int64, Int64}, Vector{UnitRange{Int64}}}} with 8 entries:
  :lowest                        => Dict(("ocgt", 2030, 1)=>[1:1, 2:2, 3:3, 4:4…
  :lowest_in_out                 => Dict()
  :highest_in                    => Dict()
  :lowest_storage_level_intra_rp => Dict()
  :units_on                      => Dict()
  :highest_in_out                => Dict(("demand", 2030, 2)=>[1:1, 2:2, 3:3, 4…
  :highest_out                   => Dict(("ocgt", 2030, 1)=>[1:1, 2:2, 3:3, 4:4…
  :units_on_and_outflows         => Dict()

The constraints_partitions dictionary contains several inner dictionaries, keyed by symbols such as :lowest and :highest_out, as shown above. The lowest-resolution partitions are mainly used to create the constraints for energy balance, whereas the highest-resolution partitions are mainly used to create the capacity constraints in the model.
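
To build intuition for what a time partition is, here is a plain-Julia sketch (uniform_partition is a hypothetical helper, not part of the package): a partition splits the timesteps of a representative period into consecutive blocks, and coarser blocks mean a lower time resolution.

```julia
# Hypothetical helper: split 1:n into consecutive blocks of a given size.
uniform_partition(n, block_size) =
    [i:min(i + block_size - 1, n) for i in 1:block_size:n]

hourly = uniform_partition(24, 1)   # [1:1, 2:2, ..., 24:24]  (highest resolution)
blocky = uniform_partition(24, 6)   # [1:6, 7:12, 13:18, 19:24]  (lower resolution)
```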

Finally, we also need dataframes that store the linearized indexes of the variables.

dataframes = construct_dataframes(graph, representative_periods, constraints_partitions, years)
Dict{Symbol, DataFrames.DataFrame} with 12 entries:
  :storage_level_inter_rp        => 0×4 DataFrame…
  :max_energy_inter_rp           => 0×4 DataFrame…
  :highest_in                    => 0×5 DataFrame…
  :units_on                      => 0×1 DataFrame…
  :highest_in_out                => 72×5 DataFrame…
  :flows                         => 360×7 DataFrame…
  :lowest                        => 360×5 DataFrame…
  :lowest_in_out                 => 0×5 DataFrame…
  :min_energy_inter_rp           => 0×4 DataFrame…
  :lowest_storage_level_intra_rp => 0×5 DataFrame…
  :highest_out                   => 360×5 DataFrame…
  :units_on_and_outflows         => 0×5 DataFrame

Now we can compute the model.

model = create_model(graph, representative_periods, dataframes, years, timeframe, groups, model_parameters)
A JuMP Model
├ solver: none
├ objective_sense: MIN_SENSE
│ └ objective_function_type: JuMP.AffExpr
├ num_variables: 368
├ num_constraints: 809
│ ├ JuMP.AffExpr in MOI.EqualTo{Float64}: 72
│ ├ JuMP.AffExpr in MOI.LessThan{Float64}: 360
│ ├ JuMP.VariableRef in MOI.GreaterThan{Float64}: 368
│ ├ JuMP.VariableRef in MOI.LessThan{Float64}: 1
│ └ JuMP.VariableRef in MOI.Integer: 8
└ Names registered in the model
  └ :accumulated_decommission_units_transport_using_simple_method, :accumulated_decommission_units_using_compact_method, :accumulated_decommission_units_using_simple_method, :accumulated_energy_capacity, :accumulated_energy_units_simple_method, :accumulated_flows_export_units, :accumulated_flows_import_units, :accumulated_initial_units, :accumulated_investment_limit, :accumulated_investment_units_transport_using_simple_method, :accumulated_investment_units_using_simple_method, :accumulated_units, :accumulated_units_compact_method, :accumulated_units_simple_method, :assets_decommission_compact_method, :assets_decommission_energy_simple_method, :assets_decommission_simple_method, :assets_investment, :assets_investment_energy, :assets_profile_times_capacity_in, :assets_profile_times_capacity_in_with_binary_part1, :assets_profile_times_capacity_in_with_binary_part2, :assets_profile_times_capacity_out, :assets_profile_times_capacity_out_with_binary_part1, :assets_profile_times_capacity_out_with_binary_part2, :consumer_balance, :conversion_balance, :flow, :flows_decommission_using_simple_method, :flows_investment, :hub_balance, :incoming_flow_highest_in_out_resolution, :incoming_flow_highest_in_resolution, :incoming_flow_lowest_resolution, :incoming_flow_lowest_storage_resolution_intra_rp, :incoming_flow_storage_inter_rp_balance, :is_charging, :max_energy_inter_rp, :max_input_flows_limit, :max_input_flows_limit_with_binary_part1, :max_input_flows_limit_with_binary_part2, :max_output_flows_limit, :max_output_flows_limit_with_binary_part1, :max_output_flows_limit_with_binary_part2, :max_storage_level_inter_rp_limit, :max_storage_level_intra_rp_limit, :max_transport_flow_limit, :min_energy_inter_rp, :min_storage_level_inter_rp_limit, :min_storage_level_intra_rp_limit, :min_transport_flow_limit, :outgoing_flow_highest_in_out_resolution, :outgoing_flow_highest_out_resolution, :outgoing_flow_lowest_resolution, :outgoing_flow_lowest_storage_resolution_intra_rp, 
:outgoing_flow_storage_inter_rp_balance, :storage_level_inter_rp, :storage_level_intra_rp, :units_on

Finally, we can compute the solution.

solution = solve_model(model)
TulipaEnergyModel.Solution(Dict((2030, "wind") => 35.0, (2030, "ocgt") => 1.1368683772161603e-15, (2030, "ccgt") => 2.0, (2030, "solar") => 44.99999994237288), Dict{Tuple{Int64, String}, Float64}(), Dict{Tuple{Int64, Tuple{String, String}}, Float64}(), Float64[], Float64[], Float64[], Float64[], [0.0, 1.1368683772161603e-13, 1.1368683772161603e-13, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0  …  0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 269238.4382388837, Dict(:hub_balance => [], :consumer_balance => [9.125, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 9.125, 9.125  …  3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 13.139999999999999, 13.139999999999999, 13.139999999999999, 13.139999999999999, 13.139999999999999]))

or, if we want to store the optimal values of flow, storage_level_intra_rp, and storage_level_inter_rp in the dataframes:

solution = solve_model!(dataframes, model)
TulipaEnergyModel.Solution(Dict((2030, "wind") => 35.0, (2030, "ocgt") => 1.1368683772161603e-15, (2030, "ccgt") => 2.0, (2030, "solar") => 44.99999994237288), Dict{Tuple{Int64, String}, Float64}(), Dict{Tuple{Int64, Tuple{String, String}}, Float64}(), Float64[], Float64[], Float64[], Float64[], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0  …  0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 269238.4382375954, Dict(:hub_balance => [], :consumer_balance => [9.125, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 9.125, 9.125  …  3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 13.139999999999999, 13.139999999999999, 13.139999999999999, 13.139999999999999, 13.139999999999999]))

This solution structure is the same as the one returned when using an EnergyProblem.

Change optimizer and specify parameters

By default, the model is solved using the HiGHS optimizer (or solver). To change this, we can give the functions run_scenario, solve_model, or solve_model! a different optimizer.

For instance, below we use the GLPK optimizer:

using DuckDB, TulipaIO, TulipaEnergyModel, GLPK

connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = run_scenario(connection, optimizer = GLPK.Optimizer)
EnergyProblem:
  - Time creating internal structures (in seconds): 0.129456486
  - Time computing constraints partitions (in seconds): 0.000214462
  - Model created!
    - Time for  creating the model (in seconds): 0.006781487
    - Number of variables: 368
    - Number of constraints for variable bounds: 368
    - Number of structural constraints: 432
  - Model solved! 
    - Time for  solving the model (in seconds): 2.303393069
    - Termination status: OPTIMAL
    - Objective value: 269238.4382417078

or

using GLPK

solution = solve_model!(energy_problem, GLPK.Optimizer)
TulipaEnergyModel.Solution(Dict((2030, "wind") => 35.0, (2030, "ocgt") => 1.1368683772161603e-15, (2030, "ccgt") => 2.0, (2030, "solar") => 44.99999994237288), Dict{Tuple{Int64, String}, Float64}(), Dict{Tuple{Int64, Tuple{String, String}}, Float64}(), Float64[], Float64[], Float64[], Float64[], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0  …  0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 269238.4382375954, Dict(:hub_balance => [], :consumer_balance => [9.125, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 9.125, 9.125  …  3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 13.139999999999999, 13.139999999999999, 13.139999999999999, 13.139999999999999, 13.139999999999999]))

or

using GLPK

solution = solve_model(model, GLPK.Optimizer)
TulipaEnergyModel.Solution(Dict((2030, "wind") => 35.0, (2030, "ocgt") => 1.1368683772161603e-15, (2030, "ccgt") => 2.0, (2030, "solar") => 44.99999994237288), Dict{Tuple{Int64, String}, Float64}(), Dict{Tuple{Int64, Tuple{String, String}}, Float64}(), Float64[], Float64[], Float64[], Float64[], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0  …  0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 269238.4382375954, Dict(:hub_balance => [], :consumer_balance => [9.125, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 0.1825, 9.125, 9.125  …  3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 3.6500000000000004, 13.139999999999999, 13.139999999999999, 13.139999999999999, 13.139999999999999, 13.139999999999999]))

Notice that, in any of these cases, we need to explicitly install the GLPK package ourselves and load it with using GLPK before referencing GLPK.Optimizer.

In any of these cases, default parameters for the GLPK optimizer are used, which you can query using default_parameters. To change the defaults, pass a dictionary via the keyword argument parameters. For instance, in the example below, we set GLPK's time limit parameter tm_lim to 1 (in milliseconds), which will most likely cause it to fail to converge in time.

using DuckDB, TulipaIO, TulipaEnergyModel, GLPK

parameters = Dict("tm_lim" => 1)
connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = run_scenario(connection, optimizer = GLPK.Optimizer, parameters = parameters)
energy_problem.termination_status
TIME_LIMIT::TerminationStatusCode = 12

For the complete list of parameters, check the documentation of your chosen optimizer.

These parameters can also be passed via a file. See the read_parameters_from_file function for more details.

Using the graph structure

Read about the graph structure in the Graph section first.

We will use the graph created above for the "Tiny" dataset.

The first thing that we can do is access all assets. They are the labels of the graph and can be accessed via the MetaGraphsNext API:

using MetaGraphsNext
# Accessing assets
labels(graph)
Base.Generator{Base.OneTo{Int64}, MetaGraphsNext.var"#20#21"{MetaGraphsNext.MetaGraph{Int64, Graphs.SimpleGraphs.SimpleDiGraph{Int64}, String, GraphAssetData, GraphFlowData, Nothing, Nothing, Nothing}}}(MetaGraphsNext.var"#20#21"{MetaGraphsNext.MetaGraph{Int64, Graphs.SimpleGraphs.SimpleDiGraph{Int64}, String, GraphAssetData, GraphFlowData, Nothing, Nothing, Nothing}}(Meta graph based on a Graphs.SimpleGraphs.SimpleDiGraph{Int64} with vertex labels of type String, vertex metadata of type GraphAssetData, edge metadata of type GraphFlowData, graph metadata given by nothing, and default weight nothing), Base.OneTo(6))

Notice that the result is a generator, so if we want the actual results, we have to collect it:

labels(graph) |> collect
6-element Vector{String}:
 "ocgt"
 "ccgt"
 "wind"
 "solar"
 "ens"
 "demand"

To access the asset data, we can index the graph with an asset label:

graph["ocgt"]
GraphAssetData("producer", missing, "simple", Dict{Int64, Bool}(2030 => 1), Dict{Int64, Bool}(2030 => 1), Dict{Int64, Bool}(2030 => 1), 15.0, 1.0, 0.05, Dict(2030 => 25.0), Dict(2030 => 0.0), Dict{Int64, Union{Missing, Float64}}(2030 => missing), 100.0, Dict(2030 => Dict(2030 => 0.0)), Dict(2030 => 0.0), Dict{Int64, Union{MathOptInterface.EqualTo, MathOptInterface.GreaterThan}}(2030 => MathOptInterface.EqualTo{Float64}(0.0)), Dict{Int64, Bool}(2030 => 0), Dict{Int64, Union{Missing, Float64}}(2030 => 0.0), Dict(2030 => 0.0), Dict{Int64, Union{Missing, Float64}}(2030 => 0.0), Dict(2030 => 0.0), Dict{Int64, Bool}(2030 => 0), Dict(2030 => 0.0), Dict(2030 => 5.0), Dict{Int64, Union{Missing, Float64}}(2030 => missing), 0.0, Dict{Int64, Bool}(2030 => 0), Dict{Int64, Union{Missing, String}}(2030 => missing), Dict{Int64, Union{Missing, Float64}}(2030 => missing), Dict{Int64, Union{Missing, Float64}}(2030 => missing), Dict{Int64, Bool}(2030 => 0), Dict{Int64, Union{Missing, String}}(2030 => missing), Dict{Int64, Union{Missing, Float64}}(2030 => missing), Dict{Int64, Bool}(2030 => 0), Dict{Int64, Union{Missing, Float64}}(2030 => missing), Dict{Int64, Bool}(2030 => 0), Dict{Int64, Union{Missing, Float64}}(2030 => missing), Dict{Int64, Union{Missing, Float64}}(2030 => missing), Dict{Int64, Dict{Int64, Dict{String, Vector{Float64}}}}(), Dict{Int64, Dict{Int64, Dict{Tuple{String, Int64}, Vector{Float64}}}}(), Dict{Int64, Vector{UnitRange{Int64}}}(), Dict{Int64, Dict{Int64, Vector{UnitRange{Int64}}}}(2030 => Dict(2 => [1:1, 2:2, 3:3, 4:4, 5:5, 6:6, 7:7, 8:8, 9:9, 10:10  …  15:15, 16:16, 17:17, 18:18, 19:19, 20:20, 21:21, 22:22, 23:23, 24:24], 3 => [1:1, 2:2, 3:3, 4:4, 5:5, 6:6, 7:7, 8:8, 9:9, 10:10  …  15:15, 16:16, 17:17, 18:18, 19:19, 20:20, 21:21, 22:22, 23:23, 24:24], 1 => [1:1, 2:2, 3:3, 4:4, 5:5, 6:6, 7:7, 8:8, 9:9, 10:10  …  15:15, 16:16, 17:17, 18:18, 19:19, 20:20, 21:21, 22:22, 23:23, 24:24])), Dict{Int64, Float64}(), Dict{Int64, Float64}(), Dict{Tuple{Int64, 
UnitRange{Int64}}, Float64}(), Dict{UnitRange{Int64}, Float64}(), Dict{UnitRange{Int64}, Float64}(), Dict{UnitRange{Int64}, Float64}())

This is a Julia struct, or composite type, named GraphAssetData. We can access its fields using dot notation:

graph["ocgt"].type
"producer"

Since labels returns a generator, we can iterate over its contents without collecting it into a vector.

for a in labels(graph)
    println("Asset $a has type $(graph[a].type)")
end
Asset ocgt has type producer
Asset ccgt has type producer
Asset wind has type producer
Asset solar has type producer
Asset ens has type producer
Asset demand has type consumer

To get all flows we can use edge_labels:

edge_labels(graph) |> collect
5-element Vector{Tuple{String, String}}:
 ("ocgt", "demand")
 ("ccgt", "demand")
 ("wind", "demand")
 ("solar", "demand")
 ("ens", "demand")

To access the flow data, we index with graph[u, v]:

graph["ocgt", "demand"]
GraphFlowData("electricity", Dict{Int64, Bool}(2030 => 1), false, Dict{Int64, Bool}(2030 => 0), Dict{Int64, Bool}(2030 => 0), 10.0, 1.0, 0.02, Dict(2030 => 0.07), Dict(2030 => 0.0), Dict(2030 => 0.0), Dict{Int64, Union{Missing, Float64}}(2030 => missing), 0.0, Dict(2030 => 0.0), Dict(2030 => 0.0), Dict(2030 => 0.0), Dict{Int64, Dict{String, Vector{Float64}}}(), Dict{Int64, Dict{Tuple{String, Int64}, Vector{Float64}}}(), Dict{Int64, Vector{UnitRange{Int64}}}(), Dict{Int64, Dict{Int64, Vector{UnitRange{Int64}}}}(2030 => Dict(2 => [1:1, 2:2, 3:3, 4:4, 5:5, 6:6, 7:7, 8:8, 9:9, 10:10  …  15:15, 16:16, 17:17, 18:18, 19:19, 20:20, 21:21, 22:22, 23:23, 24:24], 3 => [1:1, 2:2, 3:3, 4:4, 5:5, 6:6, 7:7, 8:8, 9:9, 10:10  …  15:15, 16:16, 17:17, 18:18, 19:19, 20:20, 21:21, 22:22, 23:23, 24:24], 1 => [1:1, 2:2, 3:3, 4:4, 5:5, 6:6, 7:7, 8:8, 9:9, 10:10  …  15:15, 16:16, 17:17, 18:18, 19:19, 20:20, 21:21, 22:22, 23:23, 24:24])), Dict{Tuple{Int64, UnitRange{Int64}}, Float64}(), Dict{Int64, Float64}())

The type of the flow struct is GraphFlowData.

We can easily find all assets u for which a flow (u, a) exists for a given asset a (in this case, demand):

inneighbor_labels(graph, "demand") |> collect
5-element Vector{String}:
 "ocgt"
 "ccgt"
 "wind"
 "solar"
 "ens"

Similarly, all assets v for which a flow (a, v) exists for a given asset a (in this case, ocgt):

outneighbor_labels(graph, "ocgt") |> collect
1-element Vector{String}:
 "demand"

Manipulating the solution

First, see the description of the solution object.

Let's consider the larger "Norse" dataset in this section and look at two ways to access the solution.

The solution returned by solve_model

The solution, as shown before, can be obtained when calling solve_model or solve_model!.

using DuckDB, TulipaIO, TulipaEnergyModel

# input_dir should be the path to Norse as a string (something like "test/inputs/Norse")
connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = EnergyProblem(connection)
create_model!(energy_problem)
solution = solve_model!(energy_problem)

The solution.flow, solution.storage_level_intra_rp, and solution.storage_level_inter_rp values are linearized according to the dataframes in the dictionary energy_problem.dataframes, with keys :flows, :lowest_storage_level_intra_rp, and :storage_level_inter_rp, respectively. You need to query the data from these dataframes and then use the column index to select the appropriate value.

To create a vector with all values of flow for a given (u, v) and rp, one can run

using MetaGraphsNext
graph = energy_problem.graph

(u, v) = first(edge_labels(graph))
rp = 1
df = filter(
    row -> row.rep_period == rp && row.from == u && row.to == v,
    energy_problem.dataframes[:flows],
    view = true,
)
[solution.flow[row.index] for row in eachrow(df)]
70-element Vector{Float64}:
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   ⋮
   0.0
   0.0
   0.0
   0.0
   0.0
 475.0
   0.0
   0.0
   0.0

To create a vector with all the values of storage_level_intra_rp for a given a and rp, one can run

a = energy_problem.dataframes[:lowest_storage_level_intra_rp].asset[1]
rp = 1
df = filter(
    row -> row.asset == a && row.rep_period == rp,
    energy_problem.dataframes[:lowest_storage_level_intra_rp],
    view = true,
)
[solution.storage_level_intra_rp[row.index] for row in eachrow(df)]
56-element Vector{Float64}:
    0.0
    0.0
   41.79999999999999
  416.1
  718.2
  720.1
  720.1
  720.1
  720.1
  720.1
    ⋮
  586.75
  586.75
  586.75
  586.75
  843.25
 1000.0
    0.0
    0.0
    0.0

To create a vector with all the values of storage_level_inter_rp for a given a, one can run

a = energy_problem.dataframes[:storage_level_inter_rp].asset[1]
df = filter(
    row -> row.asset == a,
    energy_problem.dataframes[:storage_level_inter_rp],
    view = true,
)
[solution.storage_level_inter_rp[row.index] for row in eachrow(df)]
52-element Vector{Float64}:
 26125.0
 27250.0
 28375.0
 29500.000000000004
 30625.000000000004
 31750.000000000004
 32875.0
 34000.0
 35125.0
 36250.0
     ⋮
 28300.000000000004
 27500.000000000004
 28625.000000000007
 29750.000000000015
 30875.000000000022
 32000.00000000003
 33125.00000000004
 34250.000000000044
 35535.71428571434

The solution inside the graph

In addition to the solution object, the solution is also stored in the individual assets and flows when solve_model! is called (i.e., when using an EnergyProblem object).

These values can be accessed like any other field of GraphAssetData or GraphFlowData, which means we can recreate the values from the previous section in a new way:

years = [year.id for year in energy_problem.years]
Dict(
    (y, a) => [
        energy_problem.graph[a].investment[y]
    ] for y in years for a in labels(graph) if graph[a].investable[y]
)
Dict{Tuple{Int64, String}, Vector{Float64}} with 12 entries:
  (2030, "Valhalla_Fuel_cell")    => [259.0]
  (2030, "Valhalla_GT")           => [0.0]
  (2030, "Valhalla_H2_storage")   => [106.0]
  (2030, "Asgard_Solar")          => [10.0]
  (2030, "Asgard_CCGT")           => [83.0]
  (2030, "Midgard_PHS")           => [1.0]
  (2030, "Valhalla_Heat_pump")    => [33.0]
  (2030, "Valhalla_Electrolyser") => [0.0]
  (2030, "Midgard_Nuclear_SMR")   => [20.202]
  (2030, "Midgard_Wind")          => [12984.0]
  (2030, "Valhalla_H2_generator") => [133.0]
  (2030, "Asgard_Battery")        => [0.0]
Dict(
    (y, a) => [
        energy_problem.graph[u, v].investment[y]
    ] for y in years for (u, v) in edge_labels(graph) if graph[u, v].investable[y]
)
Dict{Tuple{Int64, String}, Vector{Float64}} with 1 entry:
  (2030, "Midgard_Hydro") => [7.0]
(u, v) = first(edge_labels(graph))
rp = 1
df = filter(
    row -> row.rep_period == rp && row.from == u && row.to == v,
    energy_problem.dataframes[:flows],
    view = true,
)
[energy_problem.graph[u, v].flow[(rp, row.timesteps_block)] for row in eachrow(df)]
70-element Vector{Float64}:
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   ⋮
   0.0
   0.0
   0.0
   0.0
   0.0
 475.0
   0.0
   0.0
   0.0

To create a vector with all the values of storage_level_intra_rp for a given a and rp, one can run

a = energy_problem.dataframes[:lowest_storage_level_intra_rp].asset[1]
rp = 1
df = filter(
    row -> row.asset == a && row.rep_period == rp,
    energy_problem.dataframes[:lowest_storage_level_intra_rp],
    view = true,
)
[energy_problem.graph[a].storage_level_intra_rp[(rp, row.timesteps_block)] for row in eachrow(df)]
56-element Vector{Float64}:
    0.0
    0.0
   41.79999999999999
  416.1
  718.2
  720.1
  720.1
  720.1
  720.1
  720.1
    ⋮
  586.75
  586.75
  586.75
  586.75
  843.25
 1000.0
    0.0
    0.0
    0.0

To create a vector with all the values of storage_level_inter_rp for a given a, one can run

a = energy_problem.dataframes[:storage_level_inter_rp].asset[1]
df = filter(
    row -> row.asset == a,
    energy_problem.dataframes[:storage_level_inter_rp],
    view = true,
)
[energy_problem.graph[a].storage_level_inter_rp[row.periods_block] for row in eachrow(df)]
52-element Vector{Float64}:
 26125.0
 27250.0
 28375.0
 29500.000000000004
 30625.000000000004
 31750.000000000004
 32875.0
 34000.0
 35125.0
 36250.0
     ⋮
 28300.000000000004
 27500.000000000004
 28625.000000000007
 29750.000000000015
 30875.000000000022
 32000.00000000003
 33125.00000000004
 34250.000000000044
 35535.71428571434

The solution inside the dataframes object

In addition to being stored in the solution object and in the graph object, the solution for flow, storage_level_intra_rp, and storage_level_inter_rp is also stored inside the corresponding DataFrame objects if solve_model! is called.

The code below will do the same as in the two previous examples:

(u, v) = first(edge_labels(graph))
rp = 1
df = filter(
    row -> row.rep_period == rp && row.from == u && row.to == v,
    energy_problem.dataframes[:flows],
    view = true,
)
df.solution
70-element view(::Vector{Float64}, 1:70) with eltype Float64:
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   0.0
   ⋮
   0.0
   0.0
   0.0
   0.0
   0.0
 475.0
   0.0
   0.0
   0.0
a = energy_problem.dataframes[:storage_level_inter_rp].asset[1]
df = filter(
    row -> row.asset == a,
    energy_problem.dataframes[:storage_level_inter_rp],
    view = true,
)
df.solution
52-element view(::Vector{Float64}, 1:52) with eltype Float64:
 26125.0
 27250.0
 28375.0
 29500.000000000004
 30625.000000000004
 31750.000000000004
 32875.0
 34000.0
 35125.0
 36250.0
     ⋮
 28300.000000000004
 27500.000000000004
 28625.000000000007
 29750.000000000015
 30875.000000000022
 32000.00000000003
 33125.00000000004
 34250.000000000044
 35535.71428571434
a = energy_problem.dataframes[:lowest_storage_level_intra_rp].asset[1]
rp = 1
df = filter(
    row -> row.asset == a && row.rep_period == rp,
    energy_problem.dataframes[:lowest_storage_level_intra_rp],
    view = true,
)
df.solution
56-element view(::Vector{Float64}, 1:56) with eltype Float64:
    0.0
    0.0
   41.79999999999999
  416.1
  718.2
  720.1
  720.1
  720.1
  720.1
  720.1
    ⋮
  586.75
  586.75
  586.75
  586.75
  843.25
 1000.0
    0.0
    0.0
    0.0

Values of constraints and expressions

By accessing the model directly, we can query the values of constraints and expressions. To do so, we need to know the name of the constraint and how it is indexed, which requires checking the model.

For instance, we can get all incoming flows in the lowest resolution for a given asset for a given representative period with the following:

using JuMP
a = energy_problem.dataframes[:lowest].asset[end]
rp = 1
df = filter(
    row -> row.asset == a && row.rep_period == rp,
    energy_problem.dataframes[:lowest],
    view = true,
)
[value(energy_problem.model[:incoming_flow_lowest_resolution][row.index]) for row in eachrow(df)]
168-element Vector{Float64}:
    0.0
 4745.582215835807
 5000.0
 5000.0
 5000.0
 5000.0
 5000.0
 5000.0
 5000.0
 5000.0
    ⋮
    0.0
    0.0
    0.0
    0.0
    0.0
    0.0
    0.0
    0.0
    0.0

The values of constraints can also be obtained; however, they are frequently indexed over a subset, so their indexing is not straightforward. To know how they are indexed, it is necessary to look at the model code. For instance, to get the consumer balance, we first need to filter the :highest_in_out dataframe by consumers:

df_consumers = filter(
    row -> graph[row.asset].type == "consumer",
    energy_problem.dataframes[:highest_in_out],
    view = false,
);

We set view = false to create a copy of this DataFrame so that we can create our own index column:

df_consumers.index = 1:size(df_consumers, 1) # overwrites existing index
1:960

Now we can filter this DataFrame. Note that the names in the stored dataframes are defined as Symbol.

a = "Asgard_E_demand"
df = filter(
    row -> row.asset == a && row.rep_period == rp,
    df_consumers,
    view = true,
)
value.(energy_problem.model[:consumer_balance][df.index])
168-element Vector{Float64}:
 33951.9255800345
 32339.440666441926
 30618.08007858144
 29154.857719356212
 28466.28865613107
 27703.76592365483
 26184.454469129858
 26463.520980592206
 27178.204198305513
 29616.349838426668
     ⋮
 60971.61327059055
 61868.328688588
 65787.17792
 65262.26867188129
 63041.17124693834
 58471.05592686129
 54777.52475361511
 52128.801264625705
 46907.046645223265

Here value. (i.e., broadcasting) was used instead of the vector comprehension from previous examples just to show that it also works.

The value of the constraint is obtained by looking only at the part with variables. So a constraint like 2x + 3y - 1 <= 4 would return the value of 2x + 3y.
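
As a worked example of this convention (with made-up solution values, not from the model above):

```julia
# For the constraint 2x + 3y - 1 <= 4, JuMP moves the constant into the bound,
# and value(constraint) reports only the variable part 2x + 3y.
x, y = 1.5, 2.0                # hypothetical solution values
function_part = 2x + 3y        # what value(constraint) would return: 9.0
# the constant -1 and the right-hand side 4 are not included
```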

Writing the output to CSV

To save the solution to CSV files, you can use save_solution_to_file:

mkdir("outputs")
save_solution_to_file("outputs", energy_problem)

Plotting

In the previous sections, we have shown how to create vectors such as the one for flows. If you want simple plots, you can plot the vectors directly using any package you like.

If you would like more custom plots, check out TulipaPlots.jl (under development), which provides tailor-made plots for TulipaEnergyModel.jl.
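
For a quick look at one of the vectors built earlier, here is a minimal sketch, assuming the Plots package has been installed (pkg> add Plots) and that flow_values holds the vector from the "Manipulating the solution" section:

```julia
using Plots

# flow_values is assumed to be a Vector{Float64} built as in the earlier examples,
# e.g., [solution.flow[row.index] for row in eachrow(df)]
plot(flow_values; xlabel = "timestep block", ylabel = "flow", legend = false)
```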