# Utilities¶

Helper functions for symbolic Theano code.

## persistence¶

agentnet.utils.persistence.save(nn, filename)[source]

Saves lasagne network weights to the target file. Does not store the architecture itself.

Basic usage:

    >>> nn = lasagne.layers.InputLayer(...)
    >>> nn = lasagne.layers.SomeLayer(...)
    >>> nn = lasagne.layers.SomeLayer(...)
    >>> train_my_nn()
    >>> save(nn, "nn_weights.pcl")

Parameters:

- nn – neural network output layer(s)
- filename – weight filename

agentnet.utils.persistence.load(nn, filename)[source]

Loads lasagne network weights from the target file into the network you provided. Requires that the network architecture is exactly the same as the one whose weights were saved. Minor alterations, like changing a hard-coded batch size, will probably work but are not guaranteed.

Basic usage:

    >>> nn = lasagne.layers.InputLayer(...)
    >>> nn = lasagne.layers.SomeLayer(...)
    >>> nn = lasagne.layers.SomeLayer(...)
    >>> train_my_nn()
    >>> save(nn, "previously_saved_weights.pcl")
    >>> crash_and_lose_progress()
    >>> nn = the_same_nn_as_before()
    >>> load(nn, "previously_saved_weights.pcl")

Parameters:

- nn – neural network output layer(s)
- filename – weight filename

Returns: the network with weights loaded.

WARNING! load() is in-place: weights are loaded into the network instance you provided, NOT into a copy.
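The save/load pair stores only parameter values, never the architecture. As a minimal sketch of that behavior (a hypothetical stand-in, not agentnet's actual implementation; here the "network" is just a dict holding weight lists):

```python
import os
import pickle
import tempfile

# Hypothetical stand-ins for save()/load(); a real network would be a
# lasagne layer graph, not a dict.
def save_weights(nn, filename):
    """Pickle only the weight values -- the architecture is not stored."""
    with open(filename, "wb") as f:
        pickle.dump(nn["weights"], f)

def load_weights(nn, filename):
    """Load pickled weights into nn in-place and return the same instance."""
    with open(filename, "rb") as f:
        nn["weights"] = pickle.load(f)
    return nn  # the same object that was passed in, NOT a copy

nn = {"weights": [[0.1, 0.2], [0.3]]}
path = os.path.join(tempfile.mkdtemp(), "nn_weights.pcl")
save_weights(nn, path)
nn["weights"] = [[0.0, 0.0], [0.0]]   # simulate losing training progress
restored = load_weights(nn, path)
# restored is nn itself, with the saved weights back in place
```

Note that because loading is in-place, `restored` and `nn` are the same object, mirroring the WARNING above.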

## clone network¶

Utility functions that can clone lasagne network layers in a custom way. [Will be] used for:

- target networks, e.g. older copies of the NN used for reference Q-values
- DPG-like methods where the critic has to process both optimal and actual actions

agentnet.utils.clone.clone_network(original_network, bottom_layers=None, share_params=False, share_inputs=True, name_prefix=None)[source]

Creates a copy of lasagne network layer(s) provided as original_network.

If bottom_layers is a list of layers or a single layer, the function won't copy these layers and will use the existing ones instead.

Else, if bottom_layers is a dictionary of {existing_layer: new_layer}, then wherever the original network would have used existing_layer, the cloned network uses new_layer instead.

It is possible to either reuse the existing weights or clone them, via the share_params flag. If weights are shared, the cloned network will always have the same weights as the original one: any changes (e.g. loading or training) will affect both the original and the cloned network. This is useful if you want both networks to train together (i.e. you have the same network applied twice). One example of such a case is the Deep DPG algorithm: http://arxiv.org/abs/1509.02971

Otherwise, if weights are NOT shared, the cloned network starts with the same weights the original had at the moment of cloning, but then the two networks are completely independent. This is useful if you want the cloned network to deviate from the original. One example is when you need a "target network" for your deep RL agent that stores an older snapshot of the weights. A DQN that uses this trick can be found here: https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf

Parameters:

- original_network (lasagne.layers.Layer or a list/tuple/dict/any iterable of such; if a dict, layers must be the VALUES, not the keys) – the network to be cloned (all output layers)
- bottom_layers (lasagne.layers.Layer or a list/tuple/dict of such) – the layers you don't want to clone; see the description above. This parameter can also contain ARBITRARY objects within original_network that you want to share.
- share_params – if True, the cloned network will use the same shared variables for weights. Otherwise, new shared variables will be created and set to the original network's values. WARNING! shared weights must be accessible via lasagne.layers.get_all_params with no flags. If you want other custom parameters to be shared, use bottom_layers.
- share_inputs (bool) – if True, all InputLayers will still be shared even if not mentioned in bottom_layers
- name_prefix (string or None) – if not None, this prefix is added to all layers and params of the cloned network

Returns: a clone of original_network (whether layer, list, dict, tuple or whatever was provided).
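The share_params semantics can be illustrated without lasagne. The clone_toy_network below is a hypothetical sketch in which a "network" is just a dict of parameter values: sharing aliases the list, not sharing deep-copies it.

```python
import copy

def clone_toy_network(net, share_params=False):
    """Toy sketch of share_params: alias the params list or deep-copy it."""
    if share_params:
        params = net["params"]                 # same object: updates affect both
    else:
        params = copy.deepcopy(net["params"])  # snapshot: the clones diverge
    return {"params": params}

original = {"params": [1.0, 2.0]}
twin = clone_toy_network(original, share_params=True)     # "trains together"
target = clone_toy_network(original, share_params=False)  # frozen snapshot

original["params"][0] = 5.0   # simulate a training step on the original
# twin sees the update; target keeps the value from the moment of cloning
```

This mirrors the two use cases above: the shared clone for DPG-style joint training, the unshared clone for DQN-style target networks.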

## layers¶

agentnet.utils.layers.DictLayer(incomings, output_shapes, output_dtypes=None, **kwargs)[source]

A base class for Lasagne layers that return several outputs.

For a custom DictLayer you should implement get_output_for so that it returns a dict of {key: tensor_for_that_key}.

By default it simply outputs all its inputs if their number matches the number of outputs; otherwise it raises an exception.

In other words, if you return 'foo' and 'bar' of shapes (None, 25) and (None, 15, 5, 7), self.get_output_shape must be {'foo': (None, 25), 'bar': (None, 15, 5, 7)}.

Warning: this layer is needed for the purpose of graph optimization; it slightly breaks Lasagne conventions, so it is hacky.

Parameters:

- incomings (lasagne.layers.Layer or a list of such) – incoming layers
- output_shapes (dict of {output_key: tuple of shape dimensions (like an input layer shape)}, or a list of shapes, in which case keys are integers from 0 to len(output_shapes)) – shapes of the key-value outputs of the DictLayer
- output_dtypes (None, a dict of {key: dtype of output}, or a list of dtypes; key names must match those in output_shapes) – if provided, defines the dtypes of all key-value outputs. None means all float32.
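The contract between get_output_for and the declared shapes/dtypes can be shown without lasagne. Everything below is a hypothetical plain-Python illustration, not a working layer:

```python
# Declared outputs of a hypothetical DictLayer subclass:
output_shapes = {"foo": (None, 25), "bar": (None, 15, 5, 7)}
output_dtypes = {"foo": "float32", "bar": "float32"}

def get_output_for(inputs):
    """What a custom DictLayer's get_output_for should produce:
    a dict whose keys match output_shapes exactly."""
    return {"foo": inputs[0], "bar": inputs[1]}

outputs = get_output_for(["foo_tensor", "bar_tensor"])
# The keys of the returned dict, output_shapes and output_dtypes must agree:
keys_match = set(outputs) == set(output_shapes) == set(output_dtypes)
```
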
agentnet.utils.layers.get_layer_dtype(layer, default=None)[source]

Returns the layer's output_dtype property if it is defined; otherwise defaults to default, or (if that is not given) to theano.config.floatX.
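That fallback chain can be sketched in plain Python ("float32" stands in for theano.config.floatX, and both layer classes are made up for illustration):

```python
FLOATX = "float32"  # stand-in for theano.config.floatX

def get_layer_dtype(layer, default=None):
    """Sketch of the lookup order: layer.output_dtype, then default, then floatX."""
    if hasattr(layer, "output_dtype"):
        return layer.output_dtype
    return default if default is not None else FLOATX

class IntLayer:      # hypothetical layer that declares its dtype
    output_dtype = "int32"

class PlainLayer:    # hypothetical layer without an output_dtype
    pass
```

Here get_layer_dtype(IntLayer()) yields "int32", while a PlainLayer falls back to the default argument or, failing that, to "float32".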

agentnet.utils.layers.clip_grads(layer, clipping_bound)[source]

Clips gradients passing through a lasagne layer.

agentnet.utils.layers.mul(*args, **kwargs)[source]

Element-wise multiply layers

agentnet.utils.layers.add(*args, **kwargs)[source]

Element-wise sum of layers

## format¶

agentnet.utils.format.check_list(variables)[source]

Ensures that variables is a list, or converts it to one. If naive conversion fails, throws an error.

Parameters: variables – sequence expected

agentnet.utils.format.check_tuple(variables)[source]

Ensures that variables is a tuple, or converts it to one. If naive conversion fails, throws an error.

Parameters: variables – sequence expected

agentnet.utils.format.check_ordered_dict(variables)[source]

Ensures that variables is an OrderedDict.

Parameters: variables – dictionary expected
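A naive reimplementation of these checks (hypothetical, not agentnet's exact code) shows the intended behavior:

```python
from collections import OrderedDict

def check_list(variables):
    """Return variables as a list; list() raises TypeError for non-sequences."""
    if isinstance(variables, list):
        return variables
    return list(variables)

def check_ordered_dict(variables):
    """Return variables as an OrderedDict with a stable key order."""
    if isinstance(variables, OrderedDict):
        return variables
    return OrderedDict(variables)
```

For example, check_list((1, 2)) converts the tuple to [1, 2], while an already-correct input is returned unchanged.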

agentnet.utils.format.unpack_list(array, parts_lengths)[source]

Returns slices of the input array: unpack_list(a, [2, 3, 5]) -> a[:2], a[2:2+3], a[2+3:2+3+5]

Parameters:

- array – array-like or tensor variable
- parts_lengths – lengths of the subparts
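A pure-Python sketch of the slicing rule described above (a hypothetical reimplementation, not the library's code):

```python
def unpack_list(array, parts_lengths):
    """Split array into consecutive slices whose lengths are parts_lengths."""
    slices, start = [], 0
    for length in parts_lengths:
        slices.append(array[start:start + length])
        start += length
    return slices

parts = unpack_list(list(range(10)), [2, 3, 5])
# parts == [[0, 1], [2, 3, 4], [5, 6, 7, 8, 9]]
```
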