Correlation¶
-
class dpa.correlation.Correlator(samples, traces, keys)¶
creates a new Correlator instance used to rapidly calculate correlations in a DPA scenario
- samples
- is the number of samples in each trace, i.e. the trace length
- traces
- is the total number of traces to be processed
- keys
- is the number of key hypotheses
>>> c = Correlator(2, 3, 1)  # create a new correlator
>>> c.hypo[0] = 5            # set one hypothesis value per trace
>>> c.hypo[1] = 4
>>> c.hypo[2] = 3
>>> c.preprocess()           # preprocess the hypotheses
>>> from preprocessor import buffer_from_list
>>> c.add_trace(buffer_from_list(types.uint8_t, [10,  0]))
>>> c.add_trace(buffer_from_list(types.uint8_t, [ 8, 30]))
>>> c.add_trace(buffer_from_list(types.uint8_t, [ 6, 15]))
>>> c.update_matrix()        # calculate the correlation
>>> round(c.matrix[0], 2)
1.0
>>> round(c.matrix[1], 2)
-0.5
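The results above follow from the Pearson correlation coefficient between the hypothesis values and each sample column. A plain-Python cross-check of the doctest's numbers (independent of the dpa library; `pearson` is an illustrative helper, not part of the API):

```python
from math import sqrt

def pearson(x, y):
    # Pearson correlation coefficient of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hypo = [5, 4, 3]                      # one hypothesis value per trace
traces = [[10, 0], [8, 30], [6, 15]]  # three traces, two samples each
col0 = [t[0] for t in traces]         # sample 0 across traces: [10, 8, 6]
col1 = [t[1] for t in traces]         # sample 1 across traces: [0, 30, 15]
print(round(pearson(hypo, col0), 2))  # 1.0
print(round(pearson(hypo, col1), 2))  # -0.5
```

Sample 0 is an exact linear function of the hypothesis (factor 2), hence correlation 1.0.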
-
add_trace(buf, idx=-1)¶ processes a trace (dpa.preprocessor.Buffer buf) for the Correlator by updating intermediate values. If idx is set, the trace is added as trace number idx, allowing traces to be added in arbitrary order.
-
preprocess()¶ preprocesses the hypotheses. MUST be called before adding the first trace
-
update_matrix()¶ updates the correlation matrix. MUST be called before accessing the matrix
-
dpa.correlation.dump_matrix()¶ dumps an Octave-readable form of the Correlator.matrix m to the file descriptor f, assuming that m is a keys x samples matrix
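The exact signature is not reproduced above. As an illustration only, an Octave-readable dump can be as simple as writing the flat keys x samples matrix row by row (Octave's `load` accepts whitespace-separated rows); `dump_matrix_sketch` is a hypothetical stand-in assuming row-major storage:

```python
import io

def dump_matrix_sketch(m, f, keys, samples):
    # hypothetical stand-in, NOT the real dpa.correlation.dump_matrix:
    # write one key hypothesis per line, samples separated by spaces
    for k in range(keys):
        row = m[k * samples:(k + 1) * samples]
        f.write(" ".join(str(v) for v in row) + "\n")

f = io.StringIO()
dump_matrix_sketch([1.0, -0.5], f, keys=1, samples=2)
print(f.getvalue(), end="")  # 1.0 -0.5
```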
Workflow¶
-
class dpa.workflow.DPAWorkflow(info_dict={}, count=None, base_path='.')¶
A helper class for a complete trace analysis workflow.
Processes a set of traces in a series of analysis steps. Each step is executed by a trace processor; see the dpa.processors module for details.
In a profiling phase, 100 input traces are processed sequentially to experimentally determine boundaries and lengths (e.g. for the dpa.processors.AverageCountProcessor). This phase is necessary because several processors require further information. For example, the dpa.processors.NormalizeProcessor needs the minimum and maximum of each trace; however, these values need to be constant over the whole set of traces, as the output of the processor would otherwise be useless. Other processors, like the peak-extraction processor, need the traces' average and variance, which typically do not vary much within a set of traces. Such information is therefore pre-calculated from a couple of sample traces in a profiling phase, supported by corresponding capabilities of the individual processor classes.
The number of traces to profile can be set with the profile_size attribute. A record information dictionary can be passed in, containing information to be used by the separate processors; the following keys are also used:
- errors
- a list of trace numbers to ignore. These traces will not be read or processed
- profile_traces
- a list of traces that are meant to be used for additional profiling if the initial 100 are not sufficient
- trace_type
- the data type of input traces. Default: dpa.preprocessor.types.uint8_t
In the actual processing phase, the traces are processed in parallel on all available cores, provided the corresponding trace processors are implemented to release the GIL during the actual processing. This is the case for the preprocessing tool suite, including the correlator.
See this source file for a more practical and thorough application of this class.
>>> w = DPAWorkflow(count=100)
>>> w.processors = [TraceProcessor()]  # the TraceProcessor() doesn't modify anything
>>> w.process()
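The profile-then-process pattern described above can be illustrated without the dpa classes. The hypothetical helpers below fix normalization bounds on a profiling subset and then apply those constant bounds to every trace, mirroring what the NormalizeProcessor needs its profiling phase for:

```python
def profile_bounds(traces, profile_size=100):
    # profiling phase: determine global min/max from a sample of traces
    sample = traces[:profile_size]
    lo = min(min(t) for t in sample)
    hi = max(max(t) for t in sample)
    return lo, hi

def normalize(trace, lo, hi, dst_max=255):
    # processing phase: fit values into [0, dst_max] using the
    # *constant* bounds from profiling, so all traces stay comparable
    span = (hi - lo) or 1
    return [(v - lo) * dst_max // span for v in trace]

traces = [[10, 0], [8, 30], [6, 15]]
lo, hi = profile_bounds(traces, profile_size=2)  # profile on the first 2 traces
out = [normalize(t, lo, hi) for t in traces]
```

If the bounds were recomputed per trace instead, identical sample values in different traces would map to different outputs, which is exactly why profiling establishes them once.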
-
errors= []¶
-
path_iter(in_path, out_path=None)¶
-
process()¶ processes the active trace set
See DPAWorkflow for a generic overview of the provided functionality.
-
profile_size= 100¶
-
store_avg(avg, name='')¶
Processors¶
A collection of trace processors for use in a dpa.workflow.DPAWorkflow.
A trace processor can be easily written by inheriting from TraceProcessor:
>>> class CustomProcessor(TraceProcessor):
...     def process(self, trace, idx=-1):
...         return custom_processing_function(trace)
Further processors can be listed with:
$ pydoc dpa.processors
-
class dpa.processors.TraceProcessor(dst_type=0, ref=None, save=False, name=None)¶
A no-op trace processor.
This is a simple base class for other trace processors that do actual computations; they should inherit from this class and overwrite the process() method.
Trace processors can be chained:
>>> a = TraceProcessor()
>>> b = TraceProcessor()
>>> c = a(b)
c.process(buf) is thus equivalent to a.process(b.process(buf))
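The chaining contract can be demonstrated with minimal plain-Python stand-ins (these mirror the documented behaviour but are not the real dpa classes):

```python
class Processor:
    # stand-in mirroring TraceProcessor's no-op and chaining contract
    def process(self, trace, idx=-1):
        return trace

    def __call__(self, other):
        # a(b) returns a processor that runs b first, then a
        return Chained(self, other)

class Chained(Processor):
    # stand-in for the CombinedProcessor role
    def __init__(self, a, b):
        self.a, self.b = a, b

    def process(self, trace, idx=-1):
        return self.a.process(self.b.process(trace, idx), idx)

class Double(Processor):
    def process(self, trace, idx=-1):
        return [2 * v for v in trace]

c = Double()(Double())
print(c.process([1, 2]))  # [4, 8]
```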
-
process(trace, idx=-1)¶ processes the dpa.preprocessor.Buffer trace and returns the modified version
-
profile(trace)¶ performs profiling steps (such as finding min/max values) for the dpa.preprocessor.Buffer trace
-
get_samples()¶ returns the estimated number of samples of an output trace, based on the profiling phase
-
finalize()¶ finishes pending tasks
-
class dpa.processors.CombinedProcessor(a, b, name=None)¶
A helper class providing the chaining functionality for a TraceProcessor
-
class dpa.processors.RasterizeProcessor(edge, period, trigger=150, pause_trigger=1100, min_pause=0, max_pause=0, header_size=128, **kwargs)¶
A processor for rasterization of traces. Please consult the thesis for an exact description.
Uses a pattern defined by the dpa.preprocessor.Buffer edge to make each period the same length (period).
Further options can be specified:
- trigger
- edge-comparison threshold that starts the search for a local minimum difference
- pause_trigger
- number of samples to pass without a trigger match before a data pause is indicated. This can be used to align the start of traces at pauses in the trace
- min_pause
- the minimum number of pauses that MUST occur
- max_pause
- the maximum number of pauses allowed to occur before assuming an error
- header_size
- number of samples to skip from the beginning of the trace
FIXME: there can only ever be one active configuration of this running at the same time
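As a rough intuition only (the thesis has the exact algorithm): once period boundaries have been found, rasterization re-samples each detected period to the common length period. A nearest-neighbour version of that final step might look like the following hypothetical helper:

```python
def resample(period_samples, length):
    # stretch or squeeze one detected period to a fixed length
    # (nearest-neighbour; an illustration, not the dpa implementation)
    n = len(period_samples)
    return [period_samples[i * n // length] for i in range(length)]

print(resample([1, 2, 3], 6))  # [1, 1, 2, 2, 3, 3]
```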
-
class dpa.processors.PeakProcessor(break_length=0, break_count=0, *args, **kwargs)¶
Performs peak extraction on the trace.
Peak extraction needs information on average and variance of the trace. The values are determined in a profiling stage.
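As an illustration of why the profiled average and variance are needed (this is not the dpa algorithm), peak extraction can be pictured as keeping only samples that deviate strongly from the per-sample average:

```python
from math import sqrt

def extract_peaks(trace, avg, var, k=2.0):
    # keep samples deviating more than k standard deviations from the
    # profiled per-sample average; avg and var come from a profiling stage
    return [v for v, a, s2 in zip(trace, avg, var)
            if abs(v - a) > k * sqrt(s2)]

print(extract_peaks([10, 0, 100], avg=[10, 0, 10], var=[1, 1, 1]))  # [100]
```

Because avg and var barely change across a trace set, computing them once during profiling is sufficient.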
-
class dpa.processors.RectifyProcessor(*args, **kwargs)¶
Rectifies a trace
-
class dpa.processors.NormalizeProcessor(dst_type=0, ref=None, save=False, name=None)¶
Normalizes trace data globally to fit a smaller data type.
This is useful if trace data does not fit into a small data type without further processing. The NormalizeProcessor determines the min and max values of the traces in a profiling stage and fits them into the new data type.
A common application might be:
>>> NormalizeProcessor(dst_type=types.uint8_t)
-
class dpa.processors.VoidProcessor(dst_type=0, ref=None, save=False, name=None)¶
Base class for processors that do not produce new traces
-
class dpa.processors.AverageCountProcessor(callback=<function <lambda>>, **kwargs)¶
Accumulates traces to create an average and variance trace.
Takes one argument:
- callback
- a function that is called after the calculation of the average has finished: callback(avg, var, name), where name is the name of the referenced processor
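Accumulating a per-sample average and variance trace without storing all traces can be done with Welford's online algorithm; a sketch (not the dpa implementation) of what such a processor accumulates:

```python
def accumulate(traces):
    # Welford-style running mean and (population) variance per sample index
    n = 0
    mean = [0.0] * len(traces[0])
    m2 = [0.0] * len(traces[0])
    for t in traces:
        n += 1
        for i, v in enumerate(t):
            d = v - mean[i]
            mean[i] += d / n
            m2[i] += d * (v - mean[i])
    return mean, [x / n for x in m2]

avg, var = accumulate([[1, 2], [3, 4]])
print(avg, var)  # [2.0, 3.0] [1.0, 1.0]
```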
-
finalize()¶ calculates the average and calls the callback function
-
static averagize(processors, callback=None, **kwargs)¶ takes a list of processors and creates an AverageCountProcessor instance for each one
-
class dpa.processors.CorrelationProcessor(correlator=None, **kwargs)¶
Adds each processed trace to the correlation module
- correlator
- a dpa.correlation.Correlator instance that has already been initialized with hypotheses and preprocessed
-
correlations()¶ retrieves the correlations after processing has finished, by cutting the correlation matrix into corresponding chunks
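The chunk-cutting can be pictured as slicing one flat correlation row into per-processor ranges; a hypothetical sketch, assuming each contributing processor's sample count is known:

```python
def split_correlations(matrix, samples_per_processor):
    # cut a flat correlation row into consecutive chunks, one per
    # contributing processor (the widths here are illustrative assumptions)
    chunks, pos = [], 0
    for width in samples_per_processor:
        chunks.append(matrix[pos:pos + width])
        pos += width
    return chunks

print(split_correlations([0.1, 0.9, -0.5, 0.2, 0.0], [2, 3]))
# [[0.1, 0.9], [-0.5, 0.2, 0.0]]
```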