Reference

This is a comprehensive list of all the functions in the model.
The function most pertinent to the user is run_scenario.

TulipaEnergyModel.EnergyProblemType

Structure to hold all parts of an energy problem. It is a wrapper around various other relevant structures. It hides the complexity of the energy problem behind a single object, making usage friendlier, although more verbose.

Fields

  • graph: The Graph object that defines the geometry of the energy problem.
  • representative_periods: A vector of Representative Periods.
  • constraints_partitions: Dictionaries that connect pairs of asset and representative periods to time partitions (vectors of time blocks)
  • timeframe: The number of periods of the representative_periods.
  • dataframes: The data frames used to linearize the variables and constraints. These are used internally in the model only.
  • model: A JuMP.Model object representing the optimization model.
  • solved: A boolean indicating whether the model has been solved or not.
  • objective_value: The objective value of the solved problem.
  • termination_status: The termination status of the optimization model.
  • time_read_data: Time taken for reading the data (in seconds).
  • time_create_model: Time taken for creating the model (in seconds).
  • time_solve_model: Time taken for solving the model (in seconds).

Constructor

  • EnergyProblem(graph, representative_periods, timeframe): Constructs a new EnergyProblem object with the given graph, representative periods, and timeframe. The constraints_partitions field is computed from the representative_periods, and the other fields are initialized with default values.

See the basic example tutorial to see how these can be used.
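
A hedged sketch of building the problem by hand, assuming the inputs come from create_internal_structures (documented below); run_scenario builds the EnergyProblem for you in the common case:

graph, representative_periods, timeframe = create_internal_structures(table_tree)  # see create_internal_structures below
energy_problem = EnergyProblem(graph, representative_periods, timeframe)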

source
TulipaEnergyModel.TableTreeType

Structure to hold the tabular data.

Fields

  • static: Stores the data that does not vary inside a year. Its fields are
    • assets: Assets data.
    • flows: Flows data.
  • profiles: Stores the profile data indexed by:
    • assets: Dictionary with the reference to assets' profiles indexed by periods ("rep-periods" or "timeframe").
    • flows: Reference to flows' profiles for representative periods.
    • profiles: Actual profile data. Dictionary of dictionaries, indexed first by periods and then by the profile name.
  • partitions: Stores the partitions data indexed by:
    • assets: Dictionary with the specification of the assets' partitions indexed by periods.
    • flows: Specification of the flows' partitions for representative periods.
  • periods: Stores the periods data, indexed by:
    • rep_periods: Representative periods.
    • timeframe: Timeframe periods.
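
A hedged sketch of accessing the tabular data, assuming table_tree comes from create_input_dataframes and that the fields are accessed directly as named above:

table_tree = create_input_dataframes(connection)
assets_df = table_tree.static.assets   # static assets data
flows_df  = table_tree.static.flows    # static flows data
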
source
TulipaEnergyModel._check_initial_storage_level!Method
_check_initial_storage_level!(df)

Determine the starting value of the initial storage level used when interpolating the storage level. If no initial storage level is given, the final storage level is used; otherwise, the given initial storage level is used.

source
TulipaEnergyModel._interpolate_storage_level!Method
_interpolate_storage_level!(df, time_column::Symbol)

Transform the storage level dataframe from grouped timesteps or periods to incremental ones by interpolation. The starting value is the value of the previous grouped timesteps or periods or the initial value. The ending value is the value for the grouped timesteps or periods.

source
TulipaEnergyModel._parse_rp_partitionFunction
_parse_rp_partition(Val(specification), timestep_string, rp_timesteps)

Parses the timestep_string according to the specification. The representative period timesteps (rp_timesteps) might not be used in the computation, but they are used for validation.

The specification defines what is expected from the timestep_string:

  • :uniform: The timestep_string should be a single number indicating the duration of each block. Examples: "3", "4", "1".
  • :explicit: The timestep_string should be a semicolon-separated list of integers. Each integer is a duration of a block. Examples: "3;3;3;3", "4;4;4", "1;1;1;1;1;1;1;1;1;1;1;1", and "3;3;4;2".
  • :math: The timestep_string should be an expression of the form NxD+NxD…, where D is the duration of the block and N is the number of blocks. Examples: "4x3", "3x4", "12x1", and "2x3+1x4+1x2".

The generated blocks will be ranges (a:b). The first block starts at 1, and the last block ends at length(rp_timesteps).

The following table summarizes the formats for a rp_timesteps = 1:12:

Output                   :uniform   :explicit                   :math
1:3, 4:6, 7:9, 10:12     3          3;3;3;3                     4x3
1:4, 5:8, 9:12           4          4;4;4                       3x4
1:1, 2:2, …, 12:12       1          1;1;1;1;1;1;1;1;1;1;1;1     12x1
1:3, 4:6, 7:10, 11:12    NA         3;3;4;2                     2x3+1x4+1x2

Examples

using TulipaEnergyModel
TulipaEnergyModel._parse_rp_partition(Val(:uniform), "3", 1:12)

# output

4-element Vector{UnitRange{Int64}}:
 1:3
 4:6
 7:9
 10:12
using TulipaEnergyModel
TulipaEnergyModel._parse_rp_partition(Val(:explicit), "4;4;4", 1:12)

# output

3-element Vector{UnitRange{Int64}}:
 1:4
 5:8
 9:12
using TulipaEnergyModel
TulipaEnergyModel._parse_rp_partition(Val(:math), "2x3+1x4+1x2", 1:12)

# output

4-element Vector{UnitRange{Int64}}:
 1:3
 4:6
 7:10
 11:12
source
TulipaEnergyModel.add_expression_terms_inter_rp_constraints!Method
add_expression_terms_inter_rp_constraints!(df_inter,
                                           df_flows,
                                           df_map,
                                           graph,
                                           representative_periods,
                                           )

Computes the incoming and outgoing expressions per row of df_inter for the constraints that are between (inter) the representative periods.

This function is only used internally in the model.

source
TulipaEnergyModel.add_expression_terms_intra_rp_constraints!Method
add_expression_terms_intra_rp_constraints!(df_cons,
                                           df_flows,
                                           workspace,
                                           representative_periods,
                                           graph;
                                           use_highest_resolution = true,
                                           multiply_by_duration = true,
                                           )

Computes the incoming and outgoing expressions per row of df_cons for the constraints that are within (intra) the representative periods.

This function is only used internally in the model.

This strategy is based on the replies in this discourse thread:

  • https://discourse.julialang.org/t/help-improving-the-speed-of-a-dataframes-operation/107615/23
source
TulipaEnergyModel.compute_assets_partitions!Method
compute_assets_partitions!(partitions, df, a, representative_periods)

Parses the time blocks in the DataFrame df for the asset a and every representative period in the timesteps_per_rp dictionary, modifying the input partitions.

partitions must be a dictionary indexed by the representative periods, possibly empty.

timesteps_per_rp must be a dictionary indexed by rp and its values are the timesteps of that rp.

To obtain the partitions, the columns specification and partition from df are passed to the function _parse_rp_partition.

source
TulipaEnergyModel.compute_constraints_partitionsMethod
cons_partitions = compute_constraints_partitions(graph, representative_periods)

Computes the constraints partitions using the assets and flows partitions stored in the graph, and the representative periods.

The function computes the constraints partitions by iterating over the partition dictionary, which specifies the partition strategy for each resolution (i.e., lowest or highest). For each asset and representative period, it calls the compute_rp_partition function to compute the partition based on the strategy.

source
TulipaEnergyModel.compute_dual_variablesMethod
compute_dual_variables(model)

Compute the dual variables for the given model.

If the model does not have dual variables, this function fixes the discrete variables, optimizes the model, and then computes the dual variables.

Arguments

  • model: The model for which to compute the dual variables.

Returns

A named tuple containing the dual variables of selected constraints.
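
A minimal usage sketch, assuming model is the JuMP model of an already-optimized energy problem (for instance energy_problem.model after solving):

duals = compute_dual_variables(model)  # named tuple with the duals of selected constraints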

source
TulipaEnergyModel.compute_flows_partitions!Method
compute_flows_partitions!(partitions, df, u, v, representative_periods)

Parses the time blocks in the DataFrame df for the flow (u, v) and every representative period in the timesteps_per_rp dictionary, modifying the input partitions.

partitions must be a dictionary indexed by the representative periods, possibly empty.

timesteps_per_rp must be a dictionary indexed by rp and its values are the timesteps of that rp.

To obtain the partitions, the columns specification and partition from df are passed to the function _parse_rp_partition.

source
TulipaEnergyModel.compute_rp_partitionMethod
rp_partition = compute_rp_partition(partitions, :lowest)

Given the timesteps of various flows/assets in the partitions input, compute the representative period partitions.

Each element of partitions is a partition with the following assumptions:

  • An element is of the form V = [r₁, r₂, …, rₘ], where each rᵢ is a range a:b.
  • r₁ starts at 1.
  • rᵢ₊₁ starts at the end of rᵢ plus 1.
  • rₘ ends at some value N, that is the same for all elements of partitions.

Notice that this implies that they form a disjoint partition of 1:N.

The output will also be a partition with the conditions above.

Strategies

:lowest

If strategy = :lowest (default), then the output is constructed greedily, i.e., it selects the next largest breakpoint following the algorithm below:

  1. Input: Vᴵ₁, …, Vᴵₚ, a list of time blocks. Each element of Vᴵⱼ is a range r = r.start:r.end. Output: V.
  2. Compute the end of the representative period N (all Vᴵⱼ should have the same end)
  3. Start with an empty V = []
  4. Define the beginning of the range s = 1
  5. Define an array with all the next breakpoints B such that Bⱼ is the first r.end such that r.end ≥ s for each r ∈ Vᴵⱼ.
  6. The end of the range will be e = max Bⱼ.
  7. Define r = s:e and add r to the end of V.
  8. If e = N, then END
  9. Otherwise, define s = e + 1 and go to step 5.

Examples

partition1 = [1:4, 5:8, 9:12]
partition2 = [1:3, 4:6, 7:9, 10:12]
compute_rp_partition([partition1, partition2], :lowest)

# output

3-element Vector{UnitRange{Int64}}:
 1:4
 5:8
 9:12
partition1 = [1:1, 2:3, 4:6, 7:10, 11:12]
partition2 = [1:2, 3:4, 5:5, 6:7, 8:9, 10:12]
compute_rp_partition([partition1, partition2], :lowest)

# output

5-element Vector{UnitRange{Int64}}:
 1:2
 3:4
 5:6
 7:10
 11:12

:highest

If strategy = :highest, then the output includes all the breakpoints from the input. Another way of describing it is that it selects the minimum end-point instead of the maximum end-point of the :lowest strategy.

Examples

partition1 = [1:4, 5:8, 9:12]
partition2 = [1:3, 4:6, 7:9, 10:12]
compute_rp_partition([partition1, partition2], :highest)

# output

6-element Vector{UnitRange{Int64}}:
 1:3
 4:4
 5:6
 7:8
 9:9
 10:12
partition1 = [1:1, 2:3, 4:6, 7:10, 11:12]
partition2 = [1:2, 3:4, 5:5, 6:7, 8:9, 10:12]
compute_rp_partition([partition1, partition2], :highest)

# output

10-element Vector{UnitRange{Int64}}:
 1:1
 2:2
 3:3
 4:4
 5:5
 6:6
 7:7
 8:9
 10:10
 11:12
source
TulipaEnergyModel.construct_dataframesMethod
dataframes = construct_dataframes(
    graph,
    representative_periods,
    constraints_partitions,
    timeframe,
)

Computes the data frames used to linearize the variables and constraints. These are used internally in the model only.

source
TulipaEnergyModel.create_connection_and_import_from_csv_folderMethod
connection = create_connection_and_import_from_csv_folder(input_folder)

Creates a DuckDB connection and reads the CSVs in the input_folder into the DB. The names of the tables will be the names of the files, except that - will be converted into _, and the extension will be ignored.
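
A minimal usage sketch (the folder name is illustrative):

connection = create_connection_and_import_from_csv_folder("my-input-folder")
# a file such as assets-data.csv is loaded into a table named assets_data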

source
TulipaEnergyModel.create_input_dataframesMethod
table_tree = create_input_dataframes(connection)

Returns the table_tree::TableTree structure that holds all the data, read through a DB connection that has already loaded all the relevant tables. Set strict = true to error if assets are missing from the partition data.

The following tables are expected to exist in the DB.

Warn

The schemas are currently being ignored, see issue #636 for more information.

  • assets_timeframe_partitions: Following the schema schemas.assets.timeframe_partition.
  • assets_data: Following the schema schemas.assets.data.
  • assets_timeframe_profiles: Following the schema schemas.assets.profiles_reference.
  • assets_rep_periods_profiles: Following the schema schemas.assets.profiles_reference.
  • assets_rep_periods_partitions: Following the schema schemas.assets.rep_periods_partition.
  • flows_data: Following the schema schemas.flows.data.
  • flows_rep_periods_profiles: Following the schema schemas.flows.profiles_reference.
  • flows_rep_periods_partitions: Following the schema schemas.flows.rep_periods_partition.
  • profiles_timeframe_<type>: Following the schema schemas.timeframe.profiles_data.
  • profiles_rep_periods_<type>: Following the schema schemas.rep_periods.profiles_data.
  • rep_periods_data: Following the schema schemas.rep_periods.data.
  • rep_periods_mapping: Following the schema schemas.rep_periods.mapping.
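
A short sketch of the typical call, reusing the connection from create_connection_and_import_from_csv_folder above:

table_tree = create_input_dataframes(connection)
# set strict = true (as described above) to error if assets are missing from partition data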

source
TulipaEnergyModel.create_internal_structuresMethod
graph, representative_periods, timeframe = create_internal_structures(table_tree)

Return the graph, representative_periods, and timeframe structures given the input dataframes structure.

The details of these structures are:

source
TulipaEnergyModel.create_modelMethod
model = create_model(graph, representative_periods, dataframes, timeframe; write_lp_file = false)

Create the energy model given the graph, representative_periods, dictionary of dataframes (created by construct_dataframes), and timeframe.
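
A hedged sketch of the manual pipeline that these functions form (the chaining below is an assumption based on the signatures in this reference; run_scenario covers the common use case):

graph, representative_periods, timeframe = create_internal_structures(table_tree)
constraints_partitions = compute_constraints_partitions(graph, representative_periods)
dataframes = construct_dataframes(graph, representative_periods, constraints_partitions, timeframe)
model = create_model(graph, representative_periods, dataframes, timeframe)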

source
TulipaEnergyModel.default_parametersMethod
default_parameters(Val(optimizer_name_symbol))
default_parameters(optimizer)
default_parameters(optimizer_name_symbol)
default_parameters(optimizer_name_string)

Returns the default parameters for a given JuMP optimizer. Falls back to Dict() for undefined solvers.

Arguments

There are four ways to use this function:

  • Val(optimizer_name_symbol): This uses type dispatch with the special Val type. Pass the solver name as a Symbol (e.g., Val(:HiGHS)).
  • optimizer: The JuMP optimizer type (e.g., HiGHS.Optimizer).
  • optimizer_name_symbol or optimizer_name_string: Pass the name in Symbol or String format and it will be converted to Val.

Using Val is necessary for the dispatch. All other cases will convert the argument and call the Val version, which might lead to type instability.

Examples

using HiGHS
default_parameters(HiGHS.Optimizer)

# output

Dict{String, Any} with 1 entry:
  "output_flag" => false

Another case

default_parameters(Val(:Cbc))

# output

Dict{String, Any} with 1 entry:
  "logLevel" => 0
default_parameters(:Cbc) == default_parameters("Cbc") == default_parameters(Val(:Cbc))

# output

true
source
TulipaEnergyModel.profile_aggregationMethod
profile_aggregation(agg, profiles, key, block, default_value)

Aggregates the profiles[key] over the block using the agg function. If the profile does not exist, uses default_value instead of each profile value.

profiles should be a dictionary of profiles, for instance graph[a].profiles or graph[u, v].profiles. If profiles[key] exists, then this function computes the aggregation of profiles[key] over the range block using the aggregator agg, i.e., agg(profiles[key][block]). If profiles[key] does not exist, then this substitutes it with a vector of default_values.
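
A self-contained sketch with an illustrative dictionary (real code would typically pass graph[a].profiles and its keys instead):

profiles = Dict("availability" => [0.9, 0.8, 0.7, 0.6])
profile_aggregation(sum, profiles, "availability", 1:3, 1.0)  # agg over the block: 0.9 + 0.8 + 0.7 = 2.4
profile_aggregation(sum, profiles, "demand", 1:3, 1.0)        # missing key: sum of three default values = 3.0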

source
TulipaEnergyModel.read_parameters_from_fileMethod
read_parameters_from_file(filepath)

Parse the parameters from a file into a dictionary. The keys and values are NOT checked to be valid parameters for any specific solvers.

The file should contain a list of lines of the following type:

key = value

The file is parsed as TOML, so the syntax should be intuitive. See the example below.

Example

# Creating file
filepath, io = mktemp()
println(io,
  """
    true_or_false = true
    integer_number = 5
    real_number1 = 3.14
    big_number = 6.66E06
    small_number = 1e-8
    string = "something"
  """
)
close(io)
# Reading
read_parameters_from_file(filepath)

# output

Dict{String, Any} with 6 entries:
  "string"         => "something"
  "integer_number" => 5
  "small_number"   => 1.0e-8
  "true_or_false"  => true
  "real_number1"   => 3.14
  "big_number"     => 6.66e6
source
TulipaEnergyModel.run_scenarioFunction
energy_problem = run_scenario(input_folder[, output_folder; optimizer, parameters])

Run the scenario in the given input_folder and return the energy problem. The output_folder is optional. If it is specified, save the sets, parameters, and solution to the output_folder.

The optimizer and parameters keyword arguments can be used to change the optimizer (the default is HiGHS) and its parameters. These arguments are passed to the solve_model function.
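
A typical usage sketch (folder names are illustrative; the keyword follows the description above):

using TulipaEnergyModel, HiGHS
energy_problem = run_scenario("my-input-folder", "my-output-folder"; optimizer = HiGHS.Optimizer)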

source
TulipaEnergyModel.save_solution_to_fileMethod
save_solution_to_file(output_file, graph, solution)

Saves the solution in CSV files inside output_folder.

The following files are created:

  • assets-investment.csv: The format of each row is a,v,p*v, where a is the asset name, v is the corresponding asset investment value, and p is the corresponding capacity value. Only investable assets are included.
  • assets-investments-energy.csv: The format of each row is a,v,p*v, where a is the asset name, v is the corresponding asset investment value on energy, and p is the corresponding energy capacity value. Only investable assets with a storage_method_energy set to true are included.
  • flows-investment.csv: Similar to assets-investment.csv, but for flows.
  • flows.csv: The value of each flow, per (from, to) flow, rp representative period and timestep. Since the flow is in power, the value at a timestep is equal to the value at the corresponding time block, i.e., if flow[1:3] = 30, then flow[1] = flow[2] = flow[3] = 30.
  • storage-level.csv: The value of each storage level, per asset, rp representative period, and timestep. Since the storage level is in energy, the value at a timestep is a proportional fraction of the value at the corresponding time block, i.e., if level[1:3] = 30, then level[1] = level[2] = level[3] = 10.
source
TulipaEnergyModel.solve_modelFunction
solution = solve_model(model[, optimizer; parameters])

Solve the JuMP model and return the solution. The optimizer argument should be an MILP solver from the JuMP list of supported solvers. By default we use HiGHS.

The keyword argument parameters should be passed as a list of key => value pairs. These can be created manually, obtained using default_parameters, or read from a file using read_parameters_from_file.

The solution object is a mutable struct with the following fields:

  • assets_investment[a]: The investment for each asset, indexed on the investable asset a. To create a traditional array in the order given by the investable assets, one can run

    [solution.assets_investment[a] for a in labels(graph) if graph[a].investable]

  • assets_investment_energy[a]: The investment on energy component for each asset, indexed on the investable asset a with storage_method_energy set to true. To create a traditional array in the order given by the investable assets, one can run

    [solution.assets_investment_energy[a] for a in labels(graph) if graph[a].investable && graph[a].storage_method_energy]
  • flows_investment[u, v]: The investment for each flow, indexed on the investable flow (u, v). To create a traditional array in the order given by the investable flows, one can run

    [solution.flows_investment[(u, v)] for (u, v) in edge_labels(graph) if graph[u, v].investable]
  • storage_level_intra_rp[a, rp, timesteps_block]: The storage level for the storage asset a for a representative period rp and a time block timesteps_block. The list of time blocks is defined by constraints_partitions, which was used to create the model. To create a vector with all values of storage_level_intra_rp for a given a and rp, one can run

    [solution.storage_level_intra_rp[a, rp, timesteps_block] for timesteps_block in constraints_partitions[:lowest_resolution][(a, rp)]]
  • storage_level_inter_rp[a, pb]: The storage level for the storage asset a for a periods block pb. To create a vector with all values of storage_level_inter_rp for a given a, one can run

    [solution.storage_level_inter_rp[a, pb] for pb in graph[a].timeframe_partitions[a]]
  • flow[(u, v), rp, timesteps_block]: The flow value for a given flow (u, v) at a given representative period rp, and time block timesteps_block. The list of time blocks is defined by graph[(u, v)].partitions[rp]. To create a vector with all values of flow for a given (u, v) and rp, one can run

    [solution.flow[(u, v), rp, timesteps_block] for timesteps_block in graph[u, v].partitions[rp]]
  • objective_value: A Float64 with the objective value at the solution.

  • duals: A NamedTuple containing the dual variables of selected constraints.

Examples

parameters = Dict{String,Any}("presolve" => "on", "time_limit" => 60.0, "output_flag" => true)
solution = solve_model(model, HiGHS.Optimizer; parameters = parameters)
source
TulipaEnergyModel.solve_model!Method
solution = solve_model!(dataframes, model, ...)

Solves the JuMP model, returns the solution, and modifies dataframes to include the solution. The modifications made to dataframes are:

  • df_flows.solution = solution.flow
  • df_storage_level_intra_rp.solution = solution.storage_level_intra_rp
  • df_storage_level_inter_rp.solution = solution.storage_level_inter_rp
source