
Reference topology generators

This section gives access to methods that produce reference or sample topologies of different types, e.g. to be used as a reference in different studies.

Interactions

Generator of sample topologies

This algorithm produces example reference networks.


Form to complete:

  • Algorithm Input: generator of sample topologies: This algorithm produces example reference networks.
  • Network type: The type of network to import. Options:
    • Reference network 1.
    • Reference network 2.
    • IP - 3 clusters - 1 aggregation.
    • Reference network 1+2.
  • Network prefix: The prefix of the network.
  • Make greenfield design IP and transport: If checked, a greenfield design of the IP and transport layers is produced.
  • Fault tolerant network: Fault tolerance is applied to the network.
  • OTU recovery type: OTU recovery type for the new OTUs created. Options:
    • No recovery: The OTN layer does not make any attempt to automatically recover the OTUs.
    • 1+1 OTU path: The algorithm attempts to route each OTU over two maximally link- and node-disjoint paths (i.e. as disjoint as possible). If only one path exists, the OTU is realized via a single path. If the OTU has two paths, it stays up as long as one of the two paths is up (see the sketch after these form fields).
    • OTU restoration: OTUs will be marked as restorable. This means that if the OTU original path fails, the OTU restoration algorithm will search a valid path for it.
  • ODU recovery type: ODU recovery type for the new ODUs created. Options:
    • No recovery: The OTN layer does not make any attempt to automatically recover the ODUs.
    • 1+1 ODUs: The algorithm attempts to route each ODU over two paths that are maximally OTU-link and node disjoint (i.e. as disjoint as possible). If only one path exists, the ODU is realized via a single path. If the ODU has two paths, it stays up as long as one of the two paths is up.
    • ODU restoration: ODUs will be marked as restorable. This means that if the ODU original path fails, the ODU restoration algorithm will search a valid path for it.
  • Use MPLS-TE tunnels in clusters and aggregation: If checked, MPLS-TE tunnels are used in the clusters and aggregation.
  • Number of BSs per cluster: The number of base stations (BSs) per cluster.
  • Traffic scaling factor respect to default: The traffic scaling factor applied to the default traffic values below (a worked example follows the form response).
  • Default total downstream traffic from internet Gbps: Default total downstream traffic from the internet, in Gbps. The total amount is this value multiplied by the scaling factor.
  • Default total downstream traffic from datacenters: Default total downstream traffic from datacenters. The total amount is this value multiplied by the scaling factor.
  • Default total P2P traffic to rest of the nodes Gbps: Default total P2P traffic to the rest of the nodes, in Gbps. The total amount is this value multiplied by the scaling factor.
  • Traffic downstream vs upstream: The downstream vs. upstream traffic ratio.
  • Fraction traffic priority respect to total: The fraction of priority traffic with respect to the total traffic.
  • Max span length Km: Max span length in Km.

  • Max # line OTUs per transponder: If present, this limits the maximum number of line-side OTUs that a transponder allocated in the PoPs can have. The dimensioning algorithm creates new transponder nodes when needed.

  • Max transponder throughput (Gbps): If present, this limits the aggregated capacity of the line-side OTUs of the transponder. The dimensioning algorithm creates new transponder nodes when needed (illustrated in the worked example after the form response).
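
The 1+1 OTU/ODU recovery options above route each connection over two maximally disjoint paths when possible, and fall back to a single path otherwise. The sketch below illustrates that behavior on an abstract topology graph; it is not the tool's API, and the NetworkX model and function name are assumptions made for illustration.

```python
import networkx as nx

def route_1_plus_1(g: nx.Graph, src, dst):
    """Illustrative 1+1 routing: try two maximally node- and link-disjoint
    paths between src and dst; fall back to a single path if only one exists."""
    try:
        # Up to two node-disjoint paths (node disjointness also implies link disjointness).
        paths = list(nx.node_disjoint_paths(g, src, dst, cutoff=2))
    except nx.NetworkXNoPath:
        return []  # no connectivity at all: the OTU/ODU cannot be realized
    if len(paths) >= 2:
        return paths[:2]  # protected: up as long as one of the two paths is up
    # Only one node-disjoint path exists: relax to link (edge) disjointness,
    # i.e. "as disjoint as possible".
    edge_paths = list(nx.edge_disjoint_paths(g, src, dst, cutoff=2))
    if len(edge_paths) >= 2:
        return edge_paths[:2]
    return [nx.shortest_path(g, src, dst)]  # unprotected single path

# Example: a ring with one chord; nodes 1 and 3 have two fully disjoint paths.
g = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)])
print(route_1_plus_1(g, 1, 3))  # e.g. [[1, 2, 3], [1, 4, 3]]
```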

Form response:

  • Algorithm Output: output stats: Output stats of the algorithm.
  • # IP routers.
  • # Multilayer IP & OTN PoPs.
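
As an illustration of how the traffic defaults, the scaling factor and the transponder limits of this form interact, the following worked example uses made-up values; in particular, the formula for the number of transponders is one plausible reading of "the dimensioning algorithm creates new transponder nodes when needed", not a documented rule.

```python
import math

# Hypothetical form values, for illustration only.
scaling_factor          = 2.0    # Traffic scaling factor respect to default
default_internet_gbps   = 400.0  # Default total downstream traffic from internet (Gbps)
default_datacenter_gbps = 200.0  # Default total downstream traffic from datacenters
default_p2p_gbps        = 100.0  # Default total P2P traffic to rest of the nodes (Gbps)

# Each total amount is the default value multiplied by the scaling factor.
total_internet_gbps   = default_internet_gbps * scaling_factor    # 800.0
total_datacenter_gbps = default_datacenter_gbps * scaling_factor  # 400.0
total_p2p_gbps        = default_p2p_gbps * scaling_factor         # 200.0

# Transponder dimensioning in a PoP: a new transponder node is assumed to be
# created whenever either limit would otherwise be exceeded.
num_line_otus     = 14      # line-side OTUs needed in the PoP (made up)
otu_rate_gbps     = 100.0   # rate per line-side OTU (made up)
max_otus_per_tp   = 4       # Max # line OTUs per transponder
max_tp_throughput = 400.0   # Max transponder throughput (Gbps)

transponders_needed = max(
    math.ceil(num_line_otus / max_otus_per_tp),                    # 4
    math.ceil(num_line_otus * otu_rate_gbps / max_tp_throughput),  # 4
)
print(transponders_needed)  # 4 transponder nodes in this PoP
```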

Generator of sample topologies for TIM

This algorithm produces example networks, where the user can choose among different topologies, PoP (point-of-presence) architectures, and other properties.


Form to complete:

  • Network type: The type of network to import. Options:
    • Metro-Small-TIM.
    • Metro-Medium-TIM.
    • Metro-Large-TIM.
    • Metro-DenseUrban-TIM.
    • Metro-Allegro-TIM.
    • Example 3 nodes.
    • Example 4 nodes.
  • UPF/BNG placement policy: The policy to use for placing UPF/BNG nodes in the network. Options:
    • All metro-Core backbone nodes.
    • Tagged PoPs.
    • All metro-core nodes (backbone or not).
  • PoP Tag for UPF/BNG placement: If set, UPF/BNGs are placed in the PoPs tagged with this tag.

  • PoP architecture: The architecture to use in the point-of-presence. Options:

    • Router-Transponder-Mux-Roadm: Each point-of-presence includes a conventional IP router and a bank of transponders, connected to a flexi-grid multiplexer as add/drop module (implemented with a mux-demux, or with a splitter/coupler), which in turn connects to a degree of a ROADM, which then connects to the outgoing optical links to neighbor PoPs.
    • Packet Optical Whitebox-Mux-ROADM: Each point-of-presence includes a packet-optical router with colored pluggables, connected to a flexi-grid multiplexer as add/drop module (implemented with a mux-demux, or with a splitter/coupler), which in turn connects to a degree of a ROADM, which then connects to the outgoing optical links to neighbor PoPs.
    • Router-Mux point-to-point OEO: Each point-of-presence includes a packet-optical router with colored pluggables, each pluggable connected to one (or possibly more, in the XR case) outgoing optical link. Each outgoing link is terminated in a flexi-grid mux-demux.
  • IP topology policy: If selected, it is possible to save an IP topology in the design, using tags in the nodes. Options:
    • No IP Topology.
    • Hierarchical (B5G-OPEN).
    • OEO.
  • Preferred IP transport type: The preferred IP transport type to use in the IP topology. This will result in design rules being added. Options:
    • IPoWDM P2P pluggable transport: The IP ports of the adjacency are connected via an ODU (Optical Data Unit), starting in an IPoWDM point-to-point (P2P) optical pluggable in the end routers.
    • IPoWDM P2MP pluggable transport: The IP ports of the adjacency are connected via an ODU (Optical Data Unit), starting in an IPoWDM point-to-multipoint (P2MP) optical pluggable in the end routers. P2MP pluggables can connect to more than one P2MP pluggable, creating e.g. optical trees.
    • OTN transport: The IP ports of the adjacency are connected via an ODU (Optical Data Unit) in the OTN network, using a regular transponder (not optical pluggable-based).
    • Virtual P2P connection: The IP ports of the adjacency are connected via virtual point-to-point connections, in an unspecified generic transport technology.
  • SRGs created: If selected, it is possible to create SRGs in the design. Options:
    • None: No SRGs are created.
    • One per UPF/BNG node: Creates one SRG per UPF/BNG node.
    • One per bidirectional WDM link between PoPs: Creates one SRG for each pair of WDM links (bidirectional, one opposite to the other) between PoPs. Intra-PoP links are not considered.
    • One per bidirectional WDM link between PoPs and per UPF/BNG node: Creates one SRG for each pair of WDM links (bidirectional, one opposite to the other) between PoPs, and one for each UPF/BNG node. Intra-PoP links are not considered.
  • End node tag prefix: If set, for each adjacency added, the end nodes receive a tag whose key equals this prefix followed by -XXX, where XXX is an index of the bidirectional adjacency added.

  • Clear previous topology: Removes all pre-existing nodes, ASs, layouts and optical signal types, and then creates new default ones, together with ASs and default IGPs.

  • Initial usable frequency (THz): The initial usable frequency in THz for all the optical links.
  • Optical links BW (THz): The total bandwidth in all the optical links in this design.
  • Total downstream traffic (Tbps): The total downstream traffic to assume for traffic computations. If not set, and the Allegro network case is selected, the total traffic is computed from the year and the CAGR values per household and cell (see the sketch at the end of this form).

  • Year: In CAGR-based traffic computations, the year (zero means today).

  • Year 0 normalized traffic per household (Gbps): The normalized traffic per household in abstract units (since everything is normalized at the end according to the total downstream traffic).
  • Year 0 normalized traffic per macro cell (Gbps): The normalized traffic per macro cell in abstract units (since everything is normalized at the end according to the total downstream traffic).
  • Year 0 normalized traffic per small cell (Gbps): The normalized traffic per small cell in abstract units (since everything is normalized at the end according to the total downstream traffic).
  • Fraction of total traffic (DC): The fraction of total traffic that goes to the DC gw nodes (closest one according to BGP/OSPF metrics). Can only be non-zero in networks with DC-Gw nodes.
  • Fraction of total traffic (p2p): The fraction of total traffic that is p2p.
  • Fraction of total traffic (web): The fraction of total traffic that is web.
  • Fraction of total traffic (video): The fraction of total traffic that is video.
  • Ratio of upstream vs downstream traffic (DC GW): The ratio of upstream vs downstream traffic for traffic that goes to DC.
  • Ratio of upstream vs downstream traffic (web): The ratio of upstream vs downstream traffic for web.
  • Ratio of upstream vs downstream traffic (video): The ratio of upstream vs downstream traffic for video.
  • Fraction of node p2p traffic that goes to the core: The fraction of node p2p traffic that goes to the core.
  • Fraction of node web traffic that goes to the core: The fraction of node web traffic that goes to the core.
  • Fraction of node video traffic that goes to the core: The fraction of node video traffic that goes to the core.
  • Randomize traffic: If checked, the traffic of all demands is randomly changed, but keeping the total traffic constant.
  • Random seed: The seed to use in the random number generator.
  • Maximum multiplication factor: Each traffic demand is first multiplied by a factor randomly chosen in (1-k, 1+k), where k is this value. If zero, no randomization is performed. After this randomization, the final demand traffic is scaled to fit the total normalization value (see the randomization sketch at the end of this form).
  • Forbid Agg connections after the hub: If checked, OTSI design rules are added stating that OTSIs from Agg to Hub with non-aggregation intermediate nodes are forbidden, i.e. connections from aggregation nodes that traverse a hub node without ending in it.
  • Force Otsis to be OEO between two PoPs: If checked, OTSI design rules are added forcing OTSIs to be OEO between two PoPs.
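
The traffic fields of this form describe a compute-then-normalize model: per-node traffic is built from the year-0 values per household, macro cell and small cell, and the result is scaled so that it sums to the total downstream traffic; in the Allegro case without an explicit total, the total itself grows with the CAGR. The sketch below follows that description with made-up node counts and an illustrative CAGR (the CAGR values themselves are not fields of this form), so it shows the shape of the computation rather than the tool's exact code.

```python
# Hypothetical per-node counts and parameters, for illustration only.
nodes = {
    # node: (households, macro_cells, small_cells)
    "A": (10_000, 5, 20),
    "B": (25_000, 8, 35),
    "C": (5_000, 2, 10),
}
gbps_per_household  = 0.05  # Year 0 normalized traffic per household (Gbps)
gbps_per_macro_cell = 1.0   # Year 0 normalized traffic per macro cell (Gbps)
gbps_per_small_cell = 0.3   # Year 0 normalized traffic per small cell (Gbps)
year = 5                    # Year (zero means today)
cagr = 0.25                 # illustrative CAGR value; not a field of this form
total_downstream_tbps = None  # Total downstream traffic (Tbps); None = not set

# Abstract (unnormalized) per-node downstream traffic at year 0.
raw = {
    n: h * gbps_per_household + m * gbps_per_macro_cell + s * gbps_per_small_cell
    for n, (h, m, s) in nodes.items()
}

# Total downstream traffic: the form value if given, otherwise (Allegro case)
# the year-0 total grown by the CAGR over the requested number of years.
if total_downstream_tbps is not None:
    total_gbps = total_downstream_tbps * 1000.0
else:
    total_gbps = sum(raw.values()) * (1 + cagr) ** year

# Everything is normalized at the end according to the total downstream traffic.
scale = total_gbps / sum(raw.values())
downstream_gbps = {n: v * scale for n, v in raw.items()}
print(downstream_gbps)
```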
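
The Randomize traffic, Random seed and Maximum multiplication factor fields describe a perturb-then-rescale step. A minimal sketch of that step follows, with made-up demands; the uniform distribution over (1-k, 1+k) is an assumption, since the form only states that the factor is randomly chosen in that interval.

```python
import random

def randomize_demands(demands_gbps, k, seed):
    """Multiply each demand by a random factor in (1-k, 1+k), then rescale
    so the total offered traffic stays constant (illustrative sketch)."""
    if k == 0:
        return dict(demands_gbps)  # no randomization is performed
    rng = random.Random(seed)      # Random seed
    total = sum(demands_gbps.values())
    perturbed = {d: v * rng.uniform(1 - k, 1 + k) for d, v in demands_gbps.items()}
    scale = total / sum(perturbed.values())  # fit the total normalization value
    return {d: v * scale for d, v in perturbed.items()}

demands = {("A", "B"): 10.0, ("A", "C"): 5.0, ("B", "C"): 2.5}  # made-up demands
print(randomize_demands(demands, k=0.3, seed=42))
```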