Tutorials

Here are some tutorials on how to use Tulipa.

Basic example

For our first example, let's use a tiny existing dataset. Inside the code for this package, you can find the folder test/inputs/Tiny, which includes all the files necessary to create a model and solve it.

The files inside the "Tiny" folder define the asset and flow data, their profiles, and their time resolution, as well as the representative periods and which periods of the full problem formulation they represent.

For more details about these files, see Input.
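
If you want to peek at the example files before loading them, you can list the folder contents from Julia. This is only a convenience sketch; it assumes input_dir already holds the path to the Tiny folder as a string.

# input_dir should be the path to Tiny as a string (something like "test/inputs/Tiny")
readdir(input_dir)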

Run scenario

To read all data from the Tiny folder, perform all necessary steps to create a model, and solve the model, run the following in a Julia terminal:

using DuckDB, TulipaIO, TulipaEnergyModel

# input_dir should be the path to Tiny as a string (something like "test/inputs/Tiny")
# TulipaEnergyModel.schema_per_table_name contains the schema with columns and types the file must have
connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = run_scenario(connection)
EnergyProblem:
  - Model created!
    - Number of variables: 364
    - Number of constraints for variable bounds: 364
    - Number of structural constraints: 432
  - Model solved!
    - Termination status: OPTIMAL
    - Objective value: 269238.4382415647

The energy_problem variable is of type EnergyProblem. For more details, see the documentation for that type or the section Structures.
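
If you want a quick look at what the struct holds before diving into Structures, you can list its fields with Base Julia; the exact set of fields may change between versions.

# List the field names of the EnergyProblem struct (output depends on the package version)
fieldnames(EnergyProblem)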

That's all it takes to run a scenario! To learn about the data required to run your own scenario, see the Input section of How to Use.

Manually running each step

If we need more control, we can create the energy problem first, then the optimization model inside it, and finally ask for it to be solved.

using DuckDB, TulipaIO, TulipaEnergyModel

# input_dir should be the path to Tiny as a string (something like "test/inputs/Tiny")
connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = EnergyProblem(connection)
EnergyProblem:
  - Model not created!
  - Model not solved!

The energy problem does not have a model yet:

energy_problem.model === nothing
true

To create the internal model, we call the function create_model!.

create_model!(energy_problem)
energy_problem.model
A JuMP Model
├ solver: none
├ objective_sense: MIN_SENSE
│ └ objective_function_type: JuMP.AffExpr
├ num_variables: 364
├ num_constraints: 801
│ ├ JuMP.AffExpr in MOI.EqualTo{Float64}: 72
│ ├ JuMP.AffExpr in MOI.LessThan{Float64}: 360
│ ├ JuMP.VariableRef in MOI.GreaterThan{Float64}: 364
│ ├ JuMP.VariableRef in MOI.LessThan{Float64}: 1
│ └ JuMP.VariableRef in MOI.Integer: 4
└ Names registered in the model
  └ :balance_consumer, :balance_conversion, :balance_hub, :balance_storage_over_clustered_year, :balance_storage_rep_period, :investment_group_max_limit, :investment_group_min_limit, :limit_units_on, :max_energy_over_clustered_year, :max_input_flows_limit, :max_input_flows_limit_investable_storage_with_binary_and__with_investment_limit, :max_input_flows_limit_investable_storage_with_binary_and__with_investment_variable, :max_input_flows_limit_non_investable_storage_with_binary, :max_output_flow_with_basic_unit_commitment, :max_output_flows_limit, :max_output_flows_limit_investable_storage_with_binary_and_with_investment_limit, :max_output_flows_limit_investable_storage_with_binary_and_with_investment_variable, :max_output_flows_limit_non_investable_storage_with_binary, :max_ramp_down_with_unit_commitment, :max_ramp_down_without_unit_commitment, :max_ramp_up_with_unit_commitment, :max_ramp_up_without_unit_commitment, :max_storage_level_over_clustered_year_limit, :max_storage_level_rep_period_limit, :max_transport_flow_limit, :min_energy_over_clustered_year, :min_output_flow_with_unit_commitment, :min_storage_level_over_clustered_year_limit, :min_storage_level_rep_period_limit, :min_transport_flow_limit

The model has not been solved yet, which can be verified through the solved flag inside the energy problem:

energy_problem.solved
false

Finally, we can solve the model:

solve_model!(energy_problem)

To compute the solution and save it in the DuckDB connection, we can use

save_solution!(energy_problem)

The solution will be saved in the variable and constraint tables. To export the solution to CSV files, you can use export_solution_to_csv_files:

mkdir("output_folder")
export_solution_to_csv_files("output_folder", energy_problem)
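
You can also inspect the saved solution directly in the DuckDB connection. The sketch below only lists the tables stored in the connection (inputs as well as the saved variable and constraint tables); the exact table names depend on your input data and the package version, and the result can be materialized with any Tables.jl-compatible package, such as DataFrames.

# List all tables currently stored in the DuckDB connection
DBInterface.execute(connection, "SHOW TABLES")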

The objective value and the termination status are also included in the energy problem:

energy_problem.objective_value, energy_problem.termination_status
(269238.4382415647, MathOptInterface.OPTIMAL)

Manually creating all structures without EnergyProblem

The EnergyProblem structure holds various internal structures, including the JuMP model and the DuckDB connection. There is currently no reason to manually create and maintain these structures yourself, so we recommend that you use the previous sections instead.

To avoid having to update this documentation whenever we make changes to the internals of TulipaEnergyModel before the v1.0.0 release, we will keep this section empty until then.

Change optimizer and specify parameters

By default, the model is solved using the HiGHS optimizer (or solver). To change this, we can give the functions run_scenario or solve_model! a different optimizer.

Warning

HiGHS is the only open source solver that we recommend. GLPK and Cbc are not (fully) tested for Tulipa.

For instance, let's run the Tiny example using the GLPK optimizer:

using DuckDB, TulipaIO, TulipaEnergyModel, GLPK

# input_dir should be the path to Tiny as a string (something like "test/inputs/Tiny")
connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = run_scenario(connection, optimizer = GLPK.Optimizer)
EnergyProblem:
  - Model created!
    - Number of variables: 364
    - Number of constraints for variable bounds: 364
    - Number of structural constraints: 432
  - Model solved!
    - Termination status: OPTIMAL
    - Objective value: 269238.4382403546

or

using GLPK

solution = solve_model!(energy_problem, GLPK.Optimizer)

Info

Notice that, in either case, we need to install the GLPK package ourselves and load it with using GLPK before we can use GLPK.Optimizer.
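
If GLPK is not yet part of your Julia environment, you can install it with the package manager first:

using Pkg
Pkg.add("GLPK")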

In any of these cases, default parameters for the GLPK optimizer are used, which you can query using default_parameters. To change the defaults, you can pass a dictionary via the keyword argument parameters. For instance, in the example below, we set GLPK's maximum allowed runtime (its tm_lim parameter, given in milliseconds) to 1 millisecond, which will most likely cause it to fail to converge in time.

using DuckDB, TulipaIO, TulipaEnergyModel, GLPK

parameters = Dict("tm_lim" => 1)
connection = DBInterface.connect(DuckDB.DB)
read_csv_folder(connection, input_dir; schemas = TulipaEnergyModel.schema_per_table_name)
energy_problem = run_scenario(connection, optimizer = GLPK.Optimizer, parameters = parameters)
energy_problem.termination_status
TIME_LIMIT::TerminationStatusCode = 12

For the complete list of available parameters, check the documentation of your chosen optimizer.
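
To see which defaults Tulipa would otherwise pass to the optimizer, you can call default_parameters, as mentioned above. The sketch below assumes that it accepts the optimizer type directly; check its docstring for the exact methods it provides.

# Query the defaults Tulipa uses for a given optimizer
# (the exact call signature is an assumption; see the default_parameters docstring)
default_parameters(GLPK.Optimizer)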

These parameters can also be passed via a file. See the read_parameters_from_file function for more details.
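
As a rough sketch of that workflow, assuming the file contains simple key = value entries (the file name below is made up; check the read_parameters_from_file docstring for the exact expected format):

# Hypothetical file name and contents; confirm the expected format in the read_parameters_from_file docstring
write("glpk_parameters.toml", "tm_lim = 1\n")
parameters = read_parameters_from_file("glpk_parameters.toml")
energy_problem = run_scenario(connection, optimizer = GLPK.Optimizer, parameters = parameters)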