ZigZag - Deep Learning Hardware Design Space Exploration
This repository presents the newest version of our tried-and-tested hardware Architecture-Mapping Design Space Exploration (DSE) Framework for Deep Learning (DL) accelerators. ZigZag bridges the gap between algorithmic DL decisions and their acceleration cost on specialized accelerators through fast and accurate hardware cost estimation.
zigzag.api Namespace Reference

Functions

( tuple[float, float, list[tuple[CostModelEvaluationABC, Any]]]|tuple[float, float, float, float, list[tuple[CostModelEvaluationABC, Any]]]) get_hardware_performance_zigzag (str|list[dict[str, Any]]|ModelProto workload, str accelerator, str mapping, *Literal["loma"]|Literal["salsa"] temporal_mapping_search_engine="loma", Literal["uneven"]|Literal["even"] temporal_mapping_type="uneven", str opt="latency", str dump_folder=f"outputs/{datetime.now()}", str|None pickle_filename=None, int lpf_limit=6, int nb_spatial_mappings_generated=3, bool in_memory_compute=False, bool exploit_data_locality=False, bool enable_mix_spatial_mapping=False, bool loma_show_progress_bar=True)
 ZigZag API: estimates the cost of running the given workload on the given hardware architecture.
 
tuple[float, float, float, float, list[tuple[CostModelEvaluationABC, Any]]] get_hardware_performance_zigzag_imc (*Any args)
 Overload with type hint.
 

Function Documentation

◆ get_hardware_performance_zigzag()

zigzag.api.get_hardware_performance_zigzag(
    workload: str | list[dict[str, Any]] | ModelProto,
    accelerator: str,
    mapping: str,
    *,
    temporal_mapping_search_engine: Literal["loma"] | Literal["salsa"] = "loma",
    temporal_mapping_type: Literal["uneven"] | Literal["even"] = "uneven",
    opt: str = "latency",
    dump_folder: str = f"outputs/{datetime.now()}",
    pickle_filename: str | None = None,
    lpf_limit: int = 6,
    nb_spatial_mappings_generated: int = 3,
    in_memory_compute: bool = False,
    exploit_data_locality: bool = False,
    enable_mix_spatial_mapping: bool = False,
    loma_show_progress_bar: bool = True,
) -> tuple[float, float, list[tuple[CostModelEvaluationABC, Any]]]
     | tuple[float, float, float, float, list[tuple[CostModelEvaluationABC, Any]]]

ZigZag API: estimates the cost of running the given workload on the given hardware architecture.

Parameters
workload: Either a filepath to the workload ONNX or YAML file, or an ONNX model.
accelerator: Filepath to the accelerator YAML file.
mapping: Filepath to the mapping YAML file.
opt: Optimization criterion: either energy, latency or EDP.
dump_folder: Folder where outputs will be saved.
pickle_filename: Filename of the pickle dump.
lpf_limit: Determines the number of temporal unrollings that are evaluated.
nb_spatial_mappings_generated: Maximum number of spatial mappings automatically generated (if not provided in the mapping).
in_memory_compute: Optimizes the run for in-memory-compute (IMC) architectures.
exploit_data_locality: If true, an attempt will be made to keep data in lower-level memory in between layers.
enable_mix_spatial_mapping: Whether mixed spatial mappings will be generated, i.e. unrolling multiple Layer Dimensions in a single Operational Array Dimension.
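A minimal usage sketch based on the signature above. The input paths are placeholders, and the assumption that a default (non-IMC) call returns the three-element tuple, read here as total energy, total latency and the list of cost model evaluations, is inferred from the return type rather than stated in this documentation.

    from zigzag.api import get_hardware_performance_zigzag

    # Placeholder input files; replace with your own workload, accelerator and mapping definitions.
    workload_path = "inputs/workload/resnet18.onnx"
    accelerator_path = "inputs/hardware/accelerator.yaml"
    mapping_path = "inputs/mapping/mapping.yaml"

    # Default run: LOMA temporal-mapping search, optimizing for latency.
    # Assumption: with in_memory_compute=False the three-element tuple is returned,
    # unpacked here as (total energy, total latency, cost model evaluations).
    energy, latency, cmes = get_hardware_performance_zigzag(
        workload=workload_path,
        accelerator=accelerator_path,
        mapping=mapping_path,
        opt="latency",
        dump_folder="outputs/my_run",
        lpf_limit=6,
    )

    print(f"energy  = {energy:.3e}")
    print(f"latency = {latency:.3e}")
    # cmes: list[tuple[CostModelEvaluationABC, Any]]; one entry per evaluated layer (assumption).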

◆ get_hardware_performance_zigzag_imc()

zigzag.api.get_hardware_performance_zigzag_imc(*args: Any) -> tuple[float, float, float, float, list[tuple[CostModelEvaluationABC, Any]]]

Overload of get_hardware_performance_zigzag with the return type hint corresponding to in-memory-compute runs.
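
A hedged sketch of calling this overload. The positional arguments are assumed to mirror get_hardware_performance_zigzag, the file paths are placeholders, and the naming of the four returned floats (energy, latency, clock period, area) is an assumption inferred from the return type hint rather than documented above.

    from zigzag.api import get_hardware_performance_zigzag_imc

    # Placeholder paths for an in-memory-compute accelerator definition.
    # The variable names for the four floats are illustrative assumptions.
    energy, latency, tclk, area, cmes = get_hardware_performance_zigzag_imc(
        "inputs/workload/resnet18.onnx",
        "inputs/hardware/imc_accelerator.yaml",
        "inputs/mapping/imc_mapping.yaml",
    )

    print(energy, latency, tclk, area, len(cmes))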
