csharp-tensorflow-net

Integrating TensorFlow in C# with TensorFlow.NET

TensorFlow.NET is a C# library that allows developers to integrate TensorFlow, a popular machine learning library from Google, into .NET applications.

This tool facilitates the use of advanced machine learning and neural network capabilities directly from the .NET development environment.

TensorFlow.NET is a .NET version of TensorFlow that allows .NET developers to utilize TensorFlow’s capabilities without having to switch to Python or other languages. With TensorFlow.NET, you can create, train, and deploy machine learning models directly in your .NET application.

Main Features

  • Full Compatibility: Provides an almost identical API to TensorFlow in Python.
  • Interoperability: Allows integration with other .NET libraries and facilitates deployment in production environments.
  • Ease of Use: Simplifies the machine learning process in .NET applications while maintaining the familiarity of the .NET ecosystem.

TensorFlow.NET is Open Source. For more details, visit the GitHub repository, where you will find additional documentation and usage examples.

Installation and Setup

To start using TensorFlow.NET in your .NET project, follow these steps:

  1. Create a New Project

You can create a new console project in Visual Studio or using the .NET CLI:

dotnet new console -n MyTensorFlowApp
cd MyTensorFlowApp
  2. Add TensorFlow.NET to Your Project

Install the TensorFlow.NET NuGet packages using the .NET CLI:

dotnet add package TensorFlow.NET
dotnet add package TensorFlow.Keras

Alternatively, you can add the package through the NuGet Package Manager in Visual Studio.

  3. Add TensorFlow Binaries

Finally, install the TensorFlow native binaries. Choose the package that matches your machine:

# Install the TensorFlow binary
# CPU version
dotnet add package SciSharp.TensorFlow.Redist

# GPU version (CUDA and cuDNN are required)
dotnet add package SciSharp.TensorFlow.Redist-Windows-GPU

How to Use TensorFlow.NET

TensorFlow.NET makes it easy to create and train machine learning models. Below, we show you how to build a simple linear regression model.

Linear Regression Example

using static Tensorflow.Binding;
using static Tensorflow.KerasApi;
using Tensorflow;
using Tensorflow.NumPy;

var training_steps = 1000;
var learning_rate = 0.01f;
var display_step = 100;

// Sample data
var X = np.array(3.3f, 4.4f, 5.5f, 6.71f, 6.93f, 4.168f, 9.779f, 6.182f, 7.59f, 2.167f,
		 7.042f, 10.791f, 5.313f, 7.997f, 5.654f, 9.27f, 3.1f);

var Y = np.array(1.7f, 2.76f, 2.09f, 3.19f, 1.694f, 1.573f, 3.366f, 2.596f, 2.53f, 1.221f,
		 2.827f, 3.465f, 1.65f, 2.904f, 2.42f, 2.94f, 1.3f);
var n_samples = X.shape[0];

// Use fixed initial values so the demo is reproducible.
var W = tf.Variable(-0.06f, name: "weight");
var b = tf.Variable(-0.73f, name: "bias");
var optimizer = keras.optimizers.SGD(learning_rate);

// Run training for the given number of steps.
foreach (var step in range(1, training_steps + 1))
{
    // Wrap the computation inside a GradientTape for automatic differentiation.
    using var g = tf.GradientTape();
    // Linear regression (Wx + b).
    var pred = W * X + b;
    // Mean squared error.
    var loss = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples);

    // Compute gradients.
    var gradients = g.gradient(loss, (W, b));

    // Update W and b following the gradients.
    optimizer.apply_gradients(zip(gradients, (W, b)));

    if (step % display_step == 0)
    {
        pred = W * X + b;
        loss = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples);
        print($"step: {step}, loss: {loss.numpy()}, W: {W.numpy()}, b: {b.numpy()}");
    }
}
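For reference, the gradient step that the tape and the optimizer perform on each iteration corresponds to the standard least-squares derivation (with learning rate \alpha and n samples):

L(W, b) = \frac{1}{2n} \sum_{i=1}^{n} (W x_i + b - y_i)^2

\frac{\partial L}{\partial W} = \frac{1}{n} \sum_{i=1}^{n} (W x_i + b - y_i)\, x_i,
\qquad
\frac{\partial L}{\partial b} = \frac{1}{n} \sum_{i=1}^{n} (W x_i + b - y_i)

W \leftarrow W - \alpha \frac{\partial L}{\partial W},
\qquad
b \leftarrow b - \alpha \frac{\partial L}{\partial b}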

In this example,

  • Data Definition: The input data X and Y are defined as NumPy arrays.
  • Model Variables: W and b are the variables that will be adjusted during training.
  • Linear Regression Model: The prediction is computed as W * X + b, the linear regression equation.
  • Loss Function: loss calculates the mean squared error between predicted and actual values.
  • Optimizer: The stochastic gradient descent (SGD) optimizer is used to minimize the loss.
  • Training: The model is trained by repeatedly adjusting W and b in the direction that reduces the loss.
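As a language-agnostic cross-check of the training loop, the same gradient-descent updates can be sketched in plain NumPy (note: this is illustrative Python, not TensorFlow.NET; the data, learning rate, and initial values mirror the C# example above):

```python
import numpy as np

# Data and hyperparameters mirroring the C# example.
X = np.array([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
              7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
Y = np.array([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
              2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
n = X.shape[0]
W, b = -0.06, -0.73   # same fixed initial values as the demo
lr = 0.01

for step in range(1000):
    pred = W * X + b
    # Gradients of the (1 / 2n) * sum((pred - Y)^2) loss.
    dW = np.sum((pred - Y) * X) / n
    db = np.sum(pred - Y) / n
    W -= lr * dW
    b -= lr * db

loss = np.sum((W * X + b - Y) ** 2) / (2 * n)
print(f"loss: {loss:.4f}, W: {W:.3f}, b: {b:.3f}")
```

The loss should fall steadily toward its least-squares minimum, just as the printed values in the C# loop do.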