You’ll train a simple MLP on MNIST using TensorFlow Core plus DTensor in a data-parallel setup: create a one-dimensional mesh (“batch”), keep model weights replicated (DVariables), shard the global batch across devices via pack/repack, and run a standard loop with tf.GradientTape, custom Adam, and accuracy/loss metrics. The code shows how mesh/layout choices propagate through ops, how to write DTensor-aware layers, and how to evaluate/plot results. Saving is limited today—DTensor models must be fully replicated to export, and saved models lose DTensor annotations.

Data Parallel MNIST with DTensor and TensorFlow Core


Content Overview

  • Introduction
  • Overview of data parallel training with DTensor
  • Setup
  • The MNIST Dataset
  • Preprocessing the data
  • Build the MLP
  • The dense layer
  • The MLP sequential model
  • Training metrics
  • Optimizer
  • Data packing
  • Training
  • Performance evaluation
  • Saving your model
  • Conclusion


Introduction

This notebook uses the TensorFlow Core low-level APIs and DTensor to demonstrate a data-parallel distributed training example.

Visit the Core APIs overview to learn more about TensorFlow Core and its intended use cases. Refer to the DTensor Overview guide and Distributed Training with DTensors tutorial to learn more about DTensor.

This example uses the same model and optimizer as those shown in the Multilayer Perceptrons tutorial. See this tutorial first to get comfortable with writing an end-to-end machine learning workflow with the Core APIs.


:::tip Note: DTensor is still an experimental TensorFlow API which means that its features are available for testing, and it is intended for use in test environments only.

:::


Overview of data parallel training with DTensor

Before building an MLP that supports distribution, take a moment to explore the fundamentals of DTensor for data parallel training.

DTensor allows you to run distributed training across devices to improve efficiency, reliability, and scalability. DTensor distributes the program and tensors according to sharding directives through a procedure called Single program, multiple data (SPMD) expansion. A variable of a DTensor-aware layer is created as a dtensor.DVariable, and the constructors of DTensor-aware layer objects take Layout inputs in addition to the usual layer parameters.

The main ideas for data parallel training are as follows:

  • Model variables are replicated on N devices each.
  • A global batch is split into N per-replica batches.
  • Each per-replica batch is trained on the replica device.
  • The gradients are reduced across replicas before the weight update is collectively applied on all replicas.
  • Data parallel training provides a nearly linear speedup with respect to the number of devices, as sketched below.
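The sketch below maps these ideas onto the DTensor API: a one-dimensional "batch" mesh, a fully replicated layout for the weights, and a batch-sharded layout for the data. It is a minimal, standalone illustration (the two-device mesh and the tensor shapes are arbitrary choices, not part of the original notebook); the rest of the tutorial builds the real model from the same ingredients.

# Minimal standalone sketch of the data-parallel ingredients (illustrative only).
import tensorflow as tf
from tensorflow.experimental import dtensor

# Expose two logical CPUs so the sketch runs on a single host.
phy_cpus = tf.config.list_physical_devices('CPU')
tf.config.set_logical_device_configuration(
    phy_cpus[0], [tf.config.LogicalDeviceConfiguration()] * 2)

# A 1-D mesh with a single "batch" dimension: each device holds one replica.
mesh = dtensor.create_mesh([("batch", 2)], devices=['CPU:0', 'CPU:1'])

# Model weights are replicated: UNSHARDED along every tensor axis.
weight_layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh)
w = dtensor.DVariable(
    dtensor.call_with_layout(tf.zeros, weight_layout, shape=[4, 3]))

# The global batch is split along the "batch" mesh dimension.
data_layout = dtensor.Layout(["batch", dtensor.UNSHARDED], mesh)
print(w.layout)      # fully replicated weight layout
print(data_layout)   # layout sharded along "batch"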

Setup

DTensor is part of the TensorFlow 2.9.0 release.


#!pip install --quiet --upgrade --pre tensorflow 


import matplotlib
from matplotlib import pyplot as plt
# Preset Matplotlib figure sizes.
matplotlib.rcParams['figure.figsize'] = [9, 6]


import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.experimental import dtensor
print(tf.__version__)

# Set random seed for reproducible results
tf.random.set_seed(22)


2.17.0

Configure 8 virtual CPUs for this experiment. DTensor can also be used with GPU or TPU devices. Given that this notebook uses virtual devices, the speedup gained from distributed training is not noticeable.


def configure_virtual_cpus(ncpu):
  phy_devices = tf.config.list_physical_devices('CPU')
  tf.config.set_logical_device_configuration(phy_devices[0], [
        tf.config.LogicalDeviceConfiguration(),
    ] * ncpu)

configure_virtual_cpus(8)

DEVICES = [f'CPU:{i}' for i in range(8)]
devices = tf.config.list_logical_devices('CPU')
device_names = [d.name for d in devices]
device_names


['/device:CPU:0',
 '/device:CPU:1',
 '/device:CPU:2',
 '/device:CPU:3',
 '/device:CPU:4',
 '/device:CPU:5',
 '/device:CPU:6',
 '/device:CPU:7']

The MNIST Dataset

The dataset is available from TensorFlow Datasets. Split the data into training and testing sets. Only use 5000 examples for training and testing to save time.


train_data, test_data = tfds.load("mnist", split=['train[:5000]', 'test[:5000]'], batch_size=128, as_supervised=True) 

Preprocessing the data

Preprocess the data by reshaping it to be 2-dimensional and by rescaling it to fit into the unit interval, [0,1].


def preprocess(x, y):
  # Reshaping the data
  x = tf.reshape(x, shape=[-1, 784])
  # Rescaling the data
  x = x/255
  return x, y

train_data, test_data = train_data.map(preprocess), test_data.map(preprocess)

Build the MLP

Build an MLP model with DTensor aware layers.

The dense layer

Start by creating a dense layer module that supports DTensor. The dtensor.call_with_layout function can be used to call a function that takes in a DTensor input and produces a DTensor output. This is useful for initializing a DTensor variable, dtensor.DVariable, with a TensorFlow supported function.


class DenseLayer(tf.Module):

  def __init__(self, in_dim, out_dim, weight_layout, activation=tf.identity):
    super().__init__()
    # Initialize dimensions and the activation function
    self.in_dim, self.out_dim = in_dim, out_dim
    self.activation = activation

    # Initialize the DTensor weights using the Xavier scheme
    uniform_initializer = tf.function(tf.random.stateless_uniform)
    xavier_lim = tf.sqrt(6.)/tf.sqrt(tf.cast(self.in_dim + self.out_dim, tf.float32))
    self.w = dtensor.DVariable(
      dtensor.call_with_layout(
          uniform_initializer, weight_layout,
          shape=(self.in_dim, self.out_dim), seed=(22, 23),
          minval=-xavier_lim, maxval=xavier_lim))

    # Initialize the bias with zeros
    bias_layout = weight_layout.delete([0])
    self.b = dtensor.DVariable(
      dtensor.call_with_layout(tf.zeros, bias_layout, shape=[out_dim]))

  def __call__(self, x):
    # Compute the forward pass
    z = tf.add(tf.matmul(x, self.w), self.b)
    return self.activation(z)

The MLP sequential model

Now create an MLP module that executes the dense layers sequentially.


class MLP(tf.Module):

  def __init__(self, layers):
    self.layers = layers

  def __call__(self, x, preds=False):
    # Execute the model's layers sequentially
    for layer in self.layers:
      x = layer(x)
    return x

Performing "data-parallel" training with DTensor is equivalent to tf.distribute.MirroredStrategy. To do this each device will run the same model on a shard of the data batch. So you'll need the following:

  • dtensor.Mesh with a single "batch" dimension
  • dtensor.Layout for all the weights that replicates them across the mesh (using dtensor.UNSHARDED for each axis)
  • dtensor.Layout for the data that splits the batch dimension across the mesh

Create a DTensor mesh that consists of a single batch dimension, where each device becomes a replica that receives a shard of the global batch. Use this mesh to instantiate an MLP model with the following architecture:

Forward Pass: ReLU(784 x 700) x ReLU(700 x 500) x Softmax(500 x 10)


mesh = dtensor.create_mesh([("batch", 8)], devices=DEVICES)
weight_layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh)

input_size = 784
hidden_layer_1_size = 700
hidden_layer_2_size = 500
output_size = 10

mlp_model = MLP([
    DenseLayer(in_dim=input_size, out_dim=hidden_layer_1_size,
               weight_layout=weight_layout,
               activation=tf.nn.relu),
    DenseLayer(in_dim=hidden_layer_1_size, out_dim=hidden_layer_2_size,
               weight_layout=weight_layout,
               activation=tf.nn.relu),
    DenseLayer(in_dim=hidden_layer_2_size, out_dim=output_size,
               weight_layout=weight_layout)])

Training metrics

Use the cross-entropy loss function and accuracy metric for training.


def cross_entropy_loss(y_pred, y):
  # Compute cross entropy loss with a sparse operation
  sparse_ce = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=y_pred)
  return tf.reduce_mean(sparse_ce)

def accuracy(y_pred, y):
  # Compute accuracy after extracting class predictions
  class_preds = tf.argmax(y_pred, axis=1)
  is_equal = tf.equal(y, class_preds)
  return tf.reduce_mean(tf.cast(is_equal, tf.float32))

Optimizer

Using an adaptive optimizer can result in significantly faster convergence compared to standard gradient descent. The Adam optimizer is implemented below and has been configured to be compatible with DTensor. To use Keras optimizers with DTensor, refer to the experimental tf.keras.dtensor.experimental.optimizers module.


class Adam(tf.Module):

  def __init__(self, model_vars, learning_rate=1e-3, beta_1=0.9, beta_2=0.999, ep=1e-7):
    # Initialize optimizer parameters and variable slots
    self.model_vars = model_vars
    self.beta_1 = beta_1
    self.beta_2 = beta_2
    self.learning_rate = learning_rate
    self.ep = ep
    self.t = 1.
    self.v_dvar, self.s_dvar = [], []
    # Initialize optimizer variable slots
    for var in model_vars:
      v = dtensor.DVariable(dtensor.call_with_layout(tf.zeros, var.layout, shape=var.shape))
      s = dtensor.DVariable(dtensor.call_with_layout(tf.zeros, var.layout, shape=var.shape))
      self.v_dvar.append(v)
      self.s_dvar.append(s)

  def apply_gradients(self, grads):
    # Update the model variables given their gradients
    for i, (d_var, var) in enumerate(zip(grads, self.model_vars)):
      self.v_dvar[i].assign(self.beta_1*self.v_dvar[i] + (1-self.beta_1)*d_var)
      self.s_dvar[i].assign(self.beta_2*self.s_dvar[i] + (1-self.beta_2)*tf.square(d_var))
      v_dvar_bc = self.v_dvar[i]/(1-(self.beta_1**self.t))
      s_dvar_bc = self.s_dvar[i]/(1-(self.beta_2**self.t))
      var.assign_sub(self.learning_rate*(v_dvar_bc/(tf.sqrt(s_dvar_bc) + self.ep)))
    self.t += 1.
    return

Data packing

Start by writing a helper function for transferring data to the device. This function should use dtensor.pack to send (and only send) the shard of the global batch that is intended for a replica to the device backing the replica. For simplicity, assume a single-client application.

Next, write a function that uses this helper function to pack the training data batches into DTensors sharded along the batch (first) axis. This ensures that DTensor evenly distributes the training data to the 'batch' mesh dimension. Note that in DTensor, the batch size always refers to the global batch size; therefore, the batch size should be chosen such that it can be divided evenly by the size of the batch mesh dimension. Additional DTensor APIs to simplify tf.data integration are planned, so please stay tuned.


def repack_local_tensor(x, layout):
  # Repacks a local Tensor-like to a DTensor with layout
  # This function assumes a single-client application
  x = tf.convert_to_tensor(x)
  sharded_dims = []

  # For every sharded dimension, use tf.split to split along the dimension.
  # The result is a nested list of split-tensors in queue[0].
  queue = [x]
  for axis, dim in enumerate(layout.sharding_specs):
    if dim == dtensor.UNSHARDED:
      continue
    num_splits = layout.shape[axis]
    queue = tf.nest.map_structure(lambda x: tf.split(x, num_splits, axis=axis), queue)
    sharded_dims.append(dim)

  # Now you can build the list of component tensors by looking up the location in
  # the nested list of split-tensors created in queue[0].
  components = []
  for locations in layout.mesh.local_device_locations():
    t = queue[0]
    for dim in sharded_dims:
      split_index = locations[dim]  # Only valid on single-client mesh.
      t = t[split_index]
    components.append(t)

  return dtensor.pack(components, layout)

def repack_batch(x, y, mesh):
  # Pack training data batches into DTensors along the batch axis
  x = repack_local_tensor(x, layout=dtensor.Layout(['batch', dtensor.UNSHARDED], mesh))
  y = repack_local_tensor(y, layout=dtensor.Layout(['batch'], mesh))
  return x, y
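As a quick sanity check (not part of the original notebook), you can repack a single batch and inspect the resulting layouts with dtensor.fetch_layout: both the features and the labels should come back sharded along the "batch" mesh dimension.

# Illustrative check: repack one batch and inspect the resulting DTensor layouts.
x_sample, y_sample = next(iter(train_data))
x_packed, y_packed = repack_batch(x_sample, y_sample, mesh)

# Both layouts should show the first axis sharded over the "batch" mesh dimension.
print(dtensor.fetch_layout(x_packed))
print(dtensor.fetch_layout(y_packed))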

Training

Write a traceable function that executes a single training step given a batch of data. This function does not require any special DTensor annotations. Also write a function that executes a test step and returns the appropriate performance metrics.


@tf.function
def train_step(model, x_batch, y_batch, loss, metric, optimizer):
  # Execute a single training step
  with tf.GradientTape() as tape:
    y_pred = model(x_batch)
    batch_loss = loss(y_pred, y_batch)
  # Compute gradients and update the model's parameters
  grads = tape.gradient(batch_loss, model.trainable_variables)
  optimizer.apply_gradients(grads)
  # Return batch loss and accuracy
  batch_acc = metric(y_pred, y_batch)
  return batch_loss, batch_acc

@tf.function
def test_step(model, x_batch, y_batch, loss, metric):
  # Execute a single testing step
  y_pred = model(x_batch)
  batch_loss = loss(y_pred, y_batch)
  batch_acc = metric(y_pred, y_batch)
  return batch_loss, batch_acc

Now, train the MLP model for 3 epochs with a batch size of 128.


# Initialize the training loop parameters and structures
epochs = 3
batch_size = 128
train_losses, test_losses = [], []
train_accs, test_accs = [], []
optimizer = Adam(mlp_model.trainable_variables)

# Format training loop
for epoch in range(epochs):
  batch_losses_train, batch_accs_train = [], []
  batch_losses_test, batch_accs_test = [], []

  # Iterate through training data
  for x_batch, y_batch in train_data:
    x_batch, y_batch = repack_batch(x_batch, y_batch, mesh)
    batch_loss, batch_acc = train_step(mlp_model, x_batch, y_batch, cross_entropy_loss, accuracy, optimizer)
    # Keep track of batch-level training performance
    batch_losses_train.append(batch_loss)
    batch_accs_train.append(batch_acc)

  # Iterate through testing data
  for x_batch, y_batch in test_data:
    x_batch, y_batch = repack_batch(x_batch, y_batch, mesh)
    batch_loss, batch_acc = test_step(mlp_model, x_batch, y_batch, cross_entropy_loss, accuracy)
    # Keep track of batch-level testing performance
    batch_losses_test.append(batch_loss)
    batch_accs_test.append(batch_acc)

  # Keep track of epoch-level model performance
  train_loss, train_acc = tf.reduce_mean(batch_losses_train), tf.reduce_mean(batch_accs_train)
  test_loss, test_acc = tf.reduce_mean(batch_losses_test), tf.reduce_mean(batch_accs_test)
  train_losses.append(train_loss)
  train_accs.append(train_acc)
  test_losses.append(test_loss)
  test_accs.append(test_acc)
  print(f"Epoch: {epoch}")
  print(f"Training loss: {train_loss.numpy():.3f}, Training accuracy: {train_acc.numpy():.3f}")
  print(f"Testing loss: {test_loss.numpy():.3f}, Testing accuracy: {test_acc.numpy():.3f}")


Epoch: 0
Training loss: 1.850, Training accuracy: 0.343
Testing loss: 1.375, Testing accuracy: 0.504
Epoch: 1
Training loss: 1.028, Training accuracy: 0.674
Testing loss: 0.744, Testing accuracy: 0.782
Epoch: 2
Training loss: 0.578, Training accuracy: 0.839
Testing loss: 0.486, Testing accuracy: 0.869

Performance evaluation

Start by writing a plotting function to visualize the model's loss and accuracy during training.


def plot_metrics(train_metric, test_metric, metric_type):
  # Visualize metrics vs training Epochs
  plt.figure()
  plt.plot(range(len(train_metric)), train_metric, label = f"Training {metric_type}")
  plt.plot(range(len(test_metric)), test_metric, label = f"Testing {metric_type}")
  plt.xlabel("Epochs")
  plt.ylabel(metric_type)
  plt.legend()
  plt.title(f"{metric_type} vs Training Epochs");


plot_metrics(train_losses, test_losses, "Cross entropy loss") 


plot_metrics(train_accs, test_accs, "Accuracy") 


Saving your model

The integration of tf.saved_model and DTensor is still under development. As of TensorFlow 2.9.0, tf.saved_model only accepts DTensor models with fully replicated variables. As a workaround, you can convert a DTensor model to a fully replicated one by reloading a checkpoint. However, after a model is saved, all DTensor annotations are lost and the saved signatures can only be used with regular Tensors. This tutorial will be updated to showcase the integration once it is solidified.
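Until that integration lands, one practical pattern is to fall back on standard checkpoints. The sketch below is illustrative only: it assumes, as in this tutorial, that every variable is fully replicated (so any single unpacked component already holds the complete value), and the helper name to_regular_variables is hypothetical rather than an official API.

# Illustrative sketch: copy fully replicated DTensor weights into plain variables
# and save them with a standard checkpoint (no DTensor annotations survive).
def to_regular_variables(dtensor_model):
  regular_vars = []
  for v in dtensor_model.trainable_variables:
    # For a replicated DVariable, every component holds the full tensor,
    # so keeping the first unpacked component is enough.
    component = dtensor.unpack(v.read_value())[0]
    regular_vars.append(tf.Variable(component))
  return regular_vars

plain_weights = to_regular_variables(mlp_model)
checkpoint = tf.train.Checkpoint(weights=plain_weights)
checkpoint.save('./mlp_checkpoint/ckpt')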

Conclusion

This notebook provided an overview of distributed training with DTensor and the TensorFlow Core APIs. Here are a few more tips that may help:

  • The TensorFlow Core APIs can be used to build highly-configurable machine learning workflows with support for distributed training.
  • The DTensor concepts guide and Distributed training with DTensors tutorial contain the most up-to-date information about DTensor and its integrations.

For more examples of using the TensorFlow Core APIs, check out the guide. If you want to learn more about loading and preparing data, see the tutorials on image data loading or CSV data loading.



:::info Originally published on the TensorFlow website, this article appears here under a new headline and is licensed under CC BY 4.0. Code samples shared under the Apache 2.0 License.

:::


