ZigZag - Deep Learning Hardware Design Space Exploration
This repository presents the new version of our tried-and-tested hardware Architecture-Mapping Design Space Exploration (DSE) framework for Deep Learning (DL) accelerators. ZigZag bridges the gap between algorithmic DL decisions and their acceleration cost on specialized accelerators through fast and accurate hardware cost estimation.
SalsaEngine Class Reference

Class that handles optimization of the temporal mapping given a layer, spatial mapping, memory hierarchy, number of iterations, and start temperature. More...

Public Member Functions

def __init__ (self, *, Accelerator accelerator, LayerNode layer, SpatialMappingInternal spatial_mapping, TemporalMappingType mapping_type, **Any kwargs)
 Initialize the engine with the given: More...
 
def run (self, Queue cme_queue)
 Call the necessary methods, start the processes, and collect the best temporal mapping found during the run. More...
 
def run_simulated_annealing_opt (self, cme_queue)
 Run a simulated annealing optimization on the loop ordering, using the LOMA memory allocation strategy. More...
 
def get_temporal_loops (self)
 Get all loops that have to be temporally scheduled given layer and spatial mapping. More...
 
def get_prime_factors (self)
 Get the prime factors for all temporal loops in the following format: [('C', 2), ('OY', 2), ('OX', 2), ('K', 7), ...]. More...
 

Public Attributes

 accelerator
 
 layer
 
 spatial_mapping
 
 mapping_type
 
 iteration_number
 
 start_temperature
 
 opt_criterion_name
 
 lpf_limit
 
 cme_queue
 
 temporal_loop_dim_size
 
 temporal_mapping_lpf
 

Detailed Description

Class that handles optimization of temporal mapping given a:

  • layer
  • spatial mapping
  • memory hierarchy
  • number of iterations
  • start temperature

This optimization is carried out through simulated annealing of the loop order. Each loop is first broken down into its smallest possible parts (prime factors); a runtime estimation is then performed to choose the faster engine to use (LOMA or SALSA).

    TODO cleanup
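The annealing loop described above can be sketched in isolation. The code below is a minimal, self-contained illustration, not ZigZag's actual implementation: the function names, the linear cooling schedule, and the toy cost function are all assumptions made for this example.

```python
import math
import random

def simulated_annealing(loops, cost_fn, iterations=2000, start_temperature=1.0, seed=0):
    """Anneal over loop orderings: propose a random swap of two loops, accept
    worse orders with probability exp(-delta/T), and cool T linearly to zero."""
    rng = random.Random(seed)
    order = list(loops)
    best = list(order)
    best_cost = current_cost = cost_fn(order)
    for i in range(iterations):
        temperature = start_temperature * (1 - i / iterations)  # linear cooling
        candidate = list(order)
        a, b = rng.sample(range(len(candidate)), 2)  # swap two loops
        candidate[a], candidate[b] = candidate[b], candidate[a]
        delta = cost_fn(candidate) - current_cost
        if delta <= 0 or (temperature > 0 and rng.random() < math.exp(-delta / temperature)):
            order = candidate
            current_cost += delta
            if current_cost < best_cost:
                best, best_cost = list(order), current_cost
    return best, best_cost

# Toy cost model: penalize large prime factors in later positions
# (a stand-in for a real latency/energy estimate).
loops = [("K", 7), ("C", 2), ("OX", 4), ("OY", 3)]
cost = lambda order: sum(i * factor for i, (_, factor) in enumerate(order))
best_order, best_cost = simulated_annealing(loops, cost)
```

A swap of two loops is the classic neighborhood move for loop-order annealing; worse orderings are accepted with probability exp(-Δ/T), which shrinks as the temperature cools, so the search ends in a near-greedy descent.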

Constructor & Destructor Documentation

◆ __init__()

def __init__ (   self,
  *,
  Accelerator  accelerator,
  LayerNode  layer,
  SpatialMappingInternal  spatial_mapping,
  TemporalMappingType  mapping_type,
  **Any  kwargs 
)

Initialize the engine with the given:

  • LayerNode
  • SpatialMapping
  • Accelerator
  • Number of iterations
  • Start temperature

The memory hierarchy from the correct core is extracted from the accelerator.

Member Function Documentation

◆ get_prime_factors()

def get_prime_factors (   self)

Get the prime factors for all temporal loops in the following format: [('C', 2), ('OY', 2), ('OX', 2), ('K', 7), ...].
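A loop of size 4 over dimension C, for instance, contributes ('C', 2) twice. The sketch below shows one way such a list could be produced; `loop_prime_factors` and the dict format of `temporal_loop_dim_size` are illustrative assumptions, not ZigZag's internals.

```python
def prime_factors(n):
    """Prime factors of n with multiplicity by trial division, e.g. 28 -> [2, 2, 7]."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def loop_prime_factors(temporal_loop_dim_size):
    """Break every temporal loop into (dimension, prime factor) pairs."""
    return [
        (dim, f)
        for dim, size in temporal_loop_dim_size.items()
        for f in prime_factors(size)
    ]

loop_prime_factors({"C": 4, "OY": 2, "K": 7})
# -> [('C', 2), ('C', 2), ('OY', 2), ('K', 7)]
```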

◆ get_temporal_loops()

def get_temporal_loops (   self)

Get all loops that have to be temporally scheduled given layer and spatial mapping.
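Conceptually, a dimension's temporal loop size is its layer size divided by its spatial unrolling factor: whatever is not mapped spatially must be scheduled in time. A minimal sketch of that rule, assuming plain dicts for the layer dimensions and spatial unrolling (the function name and data format are assumptions for illustration):

```python
def get_temporal_loops(layer_dim_sizes, spatial_unrolling):
    """Temporal loop size per dimension = layer size / spatial unroll factor.
    Dimensions fully consumed by the spatial mapping need no temporal loop."""
    temporal = {}
    for dim, size in layer_dim_sizes.items():
        unroll = spatial_unrolling.get(dim, 1)
        q, r = divmod(size, unroll)
        if r:
            raise ValueError(f"{dim}: size {size} not divisible by unroll {unroll}")
        if q > 1:  # size-1 loops need no scheduling
            temporal[dim] = q
    return temporal

get_temporal_loops({"K": 32, "C": 16, "OX": 8}, {"K": 32, "C": 4})
# -> {'C': 4, 'OX': 8}   (K is fully unrolled spatially)
```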

◆ run()

def run (   self,
Queue  cme_queue 
)

Call the necessary methods, start the processes, and collect the best temporal mapping found during the run.
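In ZigZag the queue carries cost model evaluations (CMEs) produced by worker processes. The sketch below shows the collection pattern with a plain `queue.Queue` and dicts standing in for CME objects; `collect_best` and the dict fields are illustrative assumptions, not the framework's API.

```python
import queue

def collect_best(cme_queue, num_results, cost_of):
    """Drain cost-model-evaluation results from the queue and keep the one
    with the lowest cost under the chosen optimization criterion."""
    best = None
    for _ in range(num_results):
        cme = cme_queue.get()
        if best is None or cost_of(cme) < cost_of(best):
            best = cme
    return best

# Hypothetical results from three optimization runs.
q = queue.Queue()
for mapping, energy in [("order_a", 9.1), ("order_b", 7.4), ("order_c", 8.2)]:
    q.put({"mapping": mapping, "energy": energy})
best = collect_best(q, 3, lambda cme: cme["energy"])
# best["mapping"] -> 'order_b'
```

In the real framework a `multiprocessing.Queue` would play this role, letting the parent process gather results as the workers finish.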


◆ run_simulated_annealing_opt()

def run_simulated_annealing_opt (   self,
  cme_queue 
)

Run a simulated annealing optimization on the loop ordering, using the LOMA memory allocation strategy.

Member Data Documentation

◆ accelerator

accelerator

◆ cme_queue

cme_queue

◆ iteration_number

iteration_number

◆ layer

layer

◆ lpf_limit

lpf_limit

◆ mapping_type

mapping_type

◆ opt_criterion_name

opt_criterion_name

◆ spatial_mapping

spatial_mapping

◆ start_temperature

start_temperature

◆ temporal_loop_dim_size

temporal_loop_dim_size

◆ temporal_mapping_lpf

temporal_mapping_lpf

The documentation for this class was generated from the following file: