TensorFlow 2.10 has been released, and highlights of this release include user-friendly features in Keras to help develop transformers, deterministic and stateless initializers, updates to the optimizer API, and new tools to help load audio data.
The release also enhances performance with oneDNN, extends GPU support on Windows, and more. It also marks the release of TensorFlow Decision Forests 1.0!
Extended, unified mask support for Keras attention layers
Starting with TensorFlow 2.10, the mask handling of Keras attention layers such as tf.keras.layers.Attention, tf.keras.layers.AdditiveAttention, and tf.keras.layers.MultiHeadAttention has been extended and unified. Two features in particular have been added:
Causal attention: all three layers now support a use_causal_mask argument to call (Attention and AdditiveAttention previously took a causal argument to __init__).
Implicit masking: the Keras Attention, AdditiveAttention, and MultiHeadAttention layers now support implicit masking (set mask_zero=True in tf.keras.layers.Embedding).
Combined, these features simplify the implementation of any Transformer-style model, as in the sketch below.
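Here is a minimal sketch showing both features together (the vocabulary size and layer dimensions are arbitrary placeholders): an Embedding layer with mask_zero=True supplies the implicit padding mask, and use_causal_mask=True in the attention call restricts each position to attend only to itself and earlier positions.

```python
import tensorflow as tf

# Token ids; index 0 is reserved for padding.
inputs = tf.keras.Input(shape=(None,), dtype="int32")

# mask_zero=True produces an implicit padding mask that the
# attention layer below picks up automatically.
x = tf.keras.layers.Embedding(
    input_dim=10_000, output_dim=64, mask_zero=True
)(inputs)

# Causal self-attention: each position attends only to itself
# and to earlier positions.
x = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=64)(
    x, x, use_causal_mask=True
)

outputs = tf.keras.layers.Dense(10_000)(x)
model = tf.keras.Model(inputs, outputs)
```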
New Keras Optimizer API
The earlier TensorFlow 2.9 release introduced a new version of the Keras Optimizer API in tf.keras.optimizers.experimental, which will replace the current tf.keras.optimizers namespace in TensorFlow 2.11.
To prepare for the official switch of the optimizer namespace to the new API, TensorFlow 2.10 also exports all current Keras optimizers under tf.keras.optimizers.legacy. Most users will not be affected by this change, but please check the API documentation to see whether any APIs used in your workflow have changed.
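As a rough sketch of what the transition looks like in 2.10 (the choice of Adam here is arbitrary):

```python
import tensorflow as tf

# The new optimizer API, currently under the experimental namespace;
# it becomes the default tf.keras.optimizers in TensorFlow 2.11.
new_adam = tf.keras.optimizers.experimental.Adam(learning_rate=1e-3)

# The current optimizers are also exported under
# tf.keras.optimizers.legacy in 2.10, so existing code can pin
# the old behavior ahead of the switch.
legacy_adam = tf.keras.optimizers.legacy.Adam(learning_rate=1e-3)
```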
Deterministic and stateless Keras initializers
TensorFlow 2.10 makes Keras initializers (the tf.keras.initializers API) stateless and deterministic, built on stateless TF random ops. As of TensorFlow 2.10, both seeded and unseeded Keras initializers produce the same values every time they are called (for a given variable shape).
Stateless initializers enable Keras to support new features such as multi-client model training with DTensor.
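A quick sketch of the new behavior (the initializer and shape are arbitrary): calling the same seeded initializer twice now returns identical values rather than advancing a hidden random state.

```python
import tensorflow as tf

init = tf.keras.initializers.GlorotUniform(seed=42)

# As of TF 2.10, repeated calls with the same shape return the
# same values; the initializer carries no mutable RNG state.
a = init(shape=(3, 3))
b = init(shape=(3, 3))
assert tf.reduce_all(a == b)
```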
BackupAndRestore checkpointing with step granularity
In the previous TensorFlow 2.9 release, the tf.keras.callbacks.BackupAndRestore Keras callback could back up the model and training state only at epoch boundaries.
In TensorFlow 2.10, the callback can also back up the model every N training steps. Note, however, that when BackupAndRestore is used with tf.distribute.MultiWorkerMirroredStrategy, the distributed dataset iterator state is reinitialized rather than restored when the model is restored.
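A minimal sketch of step-level backups (the model, training data, and backup directory below are placeholders):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

# save_freq accepts "epoch" (the 2.9 behavior) or an integer
# number of training steps; here state is backed up every 100 steps.
backup = tf.keras.callbacks.BackupAndRestore(
    backup_dir="/tmp/backup", save_freq=100
)

x = np.random.rand(1000, 8).astype("float32")
y = np.random.rand(1000, 1).astype("float32")
model.fit(x, y, epochs=5, callbacks=[backup])
```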
Easily generate audio classification datasets from directories of audio files
Audio classification datasets can now be easily generated from a directory of .wav files using the new utility tf.keras.utils.audio_dataset_from_directory.
Just sort the audio files into one directory per class, and a single line of code will give you a labeled tf.data.Dataset that can be passed to a Keras model, as in the example below.
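For instance, assuming a layout with one subdirectory per class (the paths and parameters below are illustrative):

```python
import tensorflow as tf

# Expected layout:
#   data/bark/*.wav
#   data/meow/*.wav
# Each subdirectory name becomes a class label.
train_ds = tf.keras.utils.audio_dataset_from_directory(
    "data",
    batch_size=32,
    output_sequence_length=16000,  # pad/trim each clip to 1 s at 16 kHz
)

for audio, labels in train_ds.take(1):
    print(audio.shape, labels.shape)
```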
The EinsumDense layer is now stable
The einsum function is the Swiss Army knife of linear algebra: it can express a wide variety of operations effectively and clearly. The tf.keras.layers.EinsumDense layer brings some of that power to Keras.
Operations such as einsum and einops.rearrange, as well as the EinsumDense layer, operate on string "equations" describing the input and output axes. For EinsumDense, the equation lists the axes of the input argument, the axes of the weights, and the axes of the output.
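For example, here is a sketch of a per-position projection like those used in transformer blocks (the dimensions are arbitrary). In the equation "abc,cd->abd", a is the batch axis, b the sequence axis, c the input features, and d the output features.

```python
import tensorflow as tf

# "abc,cd->abd": project the last axis of a (batch, seq, 64) input
# to 128 features, independently at every sequence position.
layer = tf.keras.layers.EinsumDense(
    "abc,cd->abd",
    output_shape=(None, 128),  # None: the seq axis is inferred at build time
    bias_axes="d",
)

x = tf.random.normal((2, 10, 64))
print(layer(x).shape)  # (2, 10, 128)
```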
For more information, check out the TensorFlow blog.