Numpy Gradient | Descent Optimizer of Neural Networks

Are you a Data Science and Machine Learning enthusiast? Then you probably know NumPy, the scientific computing library for N-dimensional arrays that gives Python processing speed comparable to FORTRAN and C. It can do many things, such as converting an array to an image or an array to a list. Similarly, the numpy.gradient() method is an advanced tool used at the level of neural networks.

What is a Gradient in Layman Language?

In simple mathematics, the gradient is the slope of a graph: the tangent of the angle of the line connecting two points in 2D, or of a plane in 3D. In scientific terms, the gradient of a function is the direction of its greatest rate of increase, computed from the partial derivatives at each point of the function.

In gradient descent, which we can implement with NumPy, we repeatedly shift the function's input in the direction of the negative gradient so that the function's value decreases step by step.

What Is NumPy Gradient?

As per the NumPy documentation, numpy.gradient computes the gradient using second-order accurate central differences at the interior points and either first- or second-order accurate one-sided (forward or backward) differences at the boundaries. In other words, it estimates how quickly the values change at every point of the array.

For example, imagine going from the peak of a hill down to its foothill, but blindfolded, so that you only know things like your present height and the distance travelled. It takes many small steps to come down: at each step you check the readings from your equipment and pick the best direction of descent to the bottom. Gradient descent in NumPy works the same way.
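The hill analogy can be sketched in a few lines of plain Python. This is a minimal, hypothetical illustration: the function f(x) = x², the learning rate, and the step count are made-up values, not part of numpy.gradient itself.

```python
def descend(start, learning_rate=0.1, steps=100):
    """Walk downhill on f(x) = x**2 using only the local slope."""
    x = start
    for _ in range(steps):
        slope = 2 * x                   # derivative of x**2 at x
        x = x - learning_rate * slope   # step against the gradient
    return x

print(descend(10.0))  # converges towards the minimum at x = 0
```

Each step moves the position a fraction of the slope in the downhill direction, just as the blindfolded hiker keeps stepping towards lower ground.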

Syntax to be used

numpy.gradient(f, *varargs, axis=None, edge_order=1)

This contains various parameters, but it is not necessary to write them all every time; you can directly write numpy.gradient(f), where in place of f you can use a single array or multiple arrays.

Going for the Parameters:

Array : f (compulsory)

The array of numbers whose gradient is to be computed.

Variable argument : varargs (optional)

Spacing between the array values. The default is unitary spacing for all dimensions. Spacing can be specified using:

  1. A single scalar to specify a sample distance for all dimensions.
  2. N scalars to specify a constant sample distance for each dimension, i.e. dx, dy, dz, …
  3. N arrays to specify the coordinates of the values along each dimension of f. The length of each array must match the size of the corresponding dimension.
  4. Any combination of N scalars/arrays with the meaning of 2. and 3.

If axis is given, the number of varargs must equal the number of axes. Default: 1.
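The four spacing options above can be demonstrated on a small 2-D array (the sample values here are arbitrary illustration data):

```python
import numpy as np

f = np.array([[1., 2., 6.], [3., 4., 5.]])

# 1. Single scalar: the same sample distance for every dimension.
gy, gx = np.gradient(f, 2.0)

# 2. N scalars: one constant distance per dimension (here dy=2.0, dx=0.5).
gy2, gx2 = np.gradient(f, 2.0, 0.5)

# 3. N arrays: explicit coordinates along each dimension; lengths must
#    match the corresponding array sizes (2 rows, 3 columns).
y = np.array([0., 2.])
x = np.array([0., 1., 3.])
gy3, gx3 = np.gradient(f, y, x)

# 4. A mix of scalars and arrays is also allowed.
gy4, gx4 = np.gradient(f, 2.0, x)
```

Since the coordinates in `y` are evenly spaced two units apart, case 3's row-wise gradient matches case 1's.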


Axis : axis (optional)

It can be None, an int, or a tuple of ints. It decides the direction along which the gradient is calculated: 0 for row-wise and 1 for column-wise direction. None is used when the gradient is calculated along all directions. The axis may be negative, in which case it counts from the last to the first axis.


Edge order : edge_order (optional)

It can be 1 or 2 and concerns the boundaries: the gradient is calculated using N-th order accurate differences at the boundaries. Default: 1.

Return Value

It returns an N-dimensional array or a list of N-dimensional arrays. In other words, it returns a set of ndarrays (one per dimension) corresponding to the derivatives of the input array with respect to each dimension. Each derivative has the same shape as the input array.

Examples to understand the use


import numpy as np
f = np.array([2, 4, 5, 6, 7, 8], dtype=float)
np.gradient(f)
array([2. , 1.5, 1. , 1. , 1. , 1. ])
np.gradient(f, 2)
array([1.  , 0.75, 0.5 , 0.5 , 0.5 , 0.5 ])

The second call uses a spacing of 2, so the result changes accordingly.

Similarly, we can use it with multiple arrays:

array1 = np.array([1, 2, 4, 5, 7], dtype=float)
array2 = np.array([2, 3, 4, 7, 8], dtype=float)
np.gradient(array1, array2)
array([1.        , 1.5       , 1.58333333, 1.58333333, 2.        ])

Here the second array provides the coordinates of the values in the first.

It works for an N-dimensional array as well. In that case, it returns as many arrays as there are dimensions, each with the same shape as the input.

array_2d = np.array([[11, 22, 33], [14, 15, 16]], dtype=float)
np.gradient(array_2d)
[array([[  3.,  -7., -17.],
        [  3.,  -7., -17.]]), array([[11., 11., 11.],
        [ 1.,  1.,  1.]])]

The spacing can be set with uniform, fixed coordinate values:

f = np.array([2., 4., 5., 6., 7., 8.])
x = [1, 2, 3, 4, 5, 6]
np.gradient(f, x)
array([2. , 1.5, 1. , 1. , 1. , 1. ])

Or non-uniform ones:

y = np.array([1.3, 2.2, 3.4, 4.2, 5.1, 6.2], dtype=float)
np.gradient(f, y)
array([2.22222222, 1.62698413, 1.08333333, 1.18464052, 1.02020202,
       0.90909091])

We can fix the axis along which the gradient is calculated:

np.gradient(np.array([[11, 23, 34, 45], [22, 33, 44, 55]], dtype=float), axis=0)
array([[11., 10., 10., 10.],
       [11., 10., 10., 10.]])
np.gradient(np.array([[11, 23, 34, 45], [22, 33, 44, 55]], dtype=float), axis=1)
array([[12. , 11.5, 11. , 11. ],
       [11. , 11. , 11. , 11. ]])

We can fix the accuracy order used at the boundaries:

a = np.array([24, 34, 45, 56], dtype=float)
np.gradient(a, edge_order=1)
array([10. , 10.5, 11. , 11. ])
np.gradient(a, edge_order=2)
array([ 9.5, 10.5, 11. , 11. ])

A short function using numpy.gradient:

def elevation_gradient(elevation):
    """Calculate the two-dimensional gradient vector for an elevation raster.

    :param elevation: a raster giving linear scale unit heights.

    Return a raster with 2 planes giving, respectively, the dz/dx and dz/dy
    values measured in metre rise per horizontal metre travelled.
    """
    # np.gradient returns the axis-0 (row, i.e. y) derivative first.
    dy, dx = np.gradient(elevation)

    # Convert from metre rise / pixel run to metre rise / metre run.
    dx *= 1.0 / elevation.pixel_linear_shape[1]
    dy *= 1.0 / elevation.pixel_linear_shape[0]

    return similar_raster(np.dstack((dx, dy)), elevation)
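The function above depends on raster helpers (pixel_linear_shape, similar_raster) from its original context. As a self-contained sketch of the same idea, here is an equivalent that works on a plain 2-D NumPy array, assuming the pixel sizes in metres are passed in explicitly (the function name and parameters are illustrative, not a standard API):

```python
import numpy as np

def elevation_gradient_array(elevation, pixel_size_y=1.0, pixel_size_x=1.0):
    """Return dz/dx and dz/dy for a 2-D elevation array.

    elevation: 2-D array of heights in metres.
    pixel_size_y, pixel_size_x: metres per pixel along each axis.
    """
    # np.gradient returns the derivative along axis 0 (rows) first.
    dy, dx = np.gradient(elevation)

    # Convert from metre rise / pixel run to metre rise / metre run.
    dx /= pixel_size_x
    dy /= pixel_size_y

    # Stack the two planes: [..., 0] is dz/dx, [..., 1] is dz/dy.
    return np.dstack((dx, dy))

grad = elevation_gradient_array(np.array([[0., 1.], [2., 3.]]), 2.0, 1.0)
```

With 2-metre-tall pixels in the y direction, the row-wise rise of 2 metres per pixel comes out as a slope of 1 metre per metre.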

Numpy Diff vs Gradient

There is another NumPy function similar to gradient but different in use, i.e. diff.

As per the documentation, it is used to calculate the n-th discrete difference along a given axis.

numpy.diff(a,n=1,axis=-1,prepend=<no value>,append=<no value>)

While diff simply gives the difference between consecutive array values (so the result is one element shorter along the chosen axis), gradient produces a set of gradients of the array along all its dimensions while preserving its shape.

b = np.array([2, 3, 4, 7, 8], dtype=float)
np.diff(b)
array([1., 1., 3., 1.])
np.gradient(b)
array([1., 1., 2., 2., 1.])

What’s Next?

NumPy is very powerful and essential for data science in Python. That being true, if you are interested in data science in Python, you really ought to learn more about NumPy.

You might like our following tutorials on numpy.

Numpy Gradient in Neural Network

Neural networks are prime users of the NumPy gradient. The algorithm used is known as the gradient descent algorithm. It is basically used to minimize the loss function during training. Mathematically, the gradient is a vector that gives the direction in which the loss function increases fastest, so to minimize the loss we should move in the opposite direction.
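As a minimal sketch of this idea, the loop below fits a one-parameter linear model by gradient descent on a mean-squared-error loss. The data, learning rate, and iteration count are made-up illustration values, not a production training setup:

```python
import numpy as np

# Toy data: y = 3 * x, so the true weight is 3.0.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x

w = 0.0   # initial weight
lr = 0.5  # learning rate

for _ in range(200):
    pred = w * x
    grad = 2 * np.mean((pred - y) * x)  # d(MSE)/dw
    w -= lr * grad                      # step against the gradient
```

Each iteration moves the weight against the loss gradient, so `w` converges towards the true value of 3.0.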

Returning to the mountain-descent example: as the position changes, the height and slope change as well, and at each step we look for the direction that loses the most height, i.e. the direction of steepest descent.
