
# How to Represent a Matrix in Go


A matrix is an arrangement of elements in a tabular, or grid, form. When the values are laid out in rows and columns, it is called a two-dimensional matrix. A matrix is typically represented by an array in most languages, but there are other ways to represent it. In the Go programming language, the easiest way to represent a two-dimensional matrix is with a slice. In this Go programming tutorial, we will learn how to implement some matrix functionality using multidimensional slices.

## What is the Difference Between Arrays and Slices in Go?

The representation of arrays and slices differs significantly in Go. Both represent a contiguous memory allocation, but a slice is dynamic in nature while an array is static. A slice is a reference type that references a segment of an underlying array; it may refer to part of an array or to a complete one and, like an array, stores elements of the same type. A slice, therefore, adds a layer of dynamism on top of the underlying array, which otherwise is a static, contiguous memory allocation. To better understand how this works, we suggest you read our tutorial: Revisiting Arrays and Slices in Go.

Note that this does not mean an array cannot be used for matrix representation. In fact, if we know the dimensions of the matrix before using it, an array is just fine. A slice is simply more convenient for representing a matrix because of its dynamic nature. Since a slice can do everything an array does, plus offer these advantages, it becomes the obvious choice for developers.
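To illustrate the distinction, here is a minimal sketch contrasting a fixed-size array with a slice (the variable names are our own, not from the tutorial):

```go
package main

import "fmt"

func main() {
	// An array has a fixed length that is part of its type.
	arr := [3]int{1, 2, 3}

	// A slice references an underlying array and can grow dynamically.
	sl := []int{1, 2, 3}
	sl = append(sl, 4) // an array cannot be appended to

	fmt.Println(len(arr), len(sl)) // 3 4
}
```

Because the length of `[3]int` is part of its type, `[3]int` and `[4]int` are different, incompatible types; a `[]int` slice carries its length at runtime instead.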

## What is a Matrix in Go?

Apart from pure mathematics, a matrix has numerous uses in computing, physics, engineering, and any discipline that works with numbers or needs to represent them in some way. From a layman’s perspective, a matrix is a way to represent numbers in a tabular fashion. This tabular form has a number of rows and columns, and the intersection of a row and a column forms a cell where a value is placed.

Now, take this idea of a matrix and imagine what are the possible ways a computer can use them. One obvious place we see a matrix being used is the monitor screen, which is actually a matrix of pixels – whether they are 800×600 or 1024×768 in resolution or some other combination. If a developer wants to store and analyze some data, it is convenient to place that information in some form of a matrix. Matrices are also used in cryptography, cybersecurity, gaming, graphics, robotics, kinematics, and GIS – the list goes on and on.

If you look around, the dates in a calendar are actually a matrix; the keyboard is a matrix of keys; even the road network of a city is a matrix. Anything you spread out in the form of a grid is, in effect, a matrix. This is a common-sense understanding rather than a strict mathematical definition, which can at times be quite dense and difficult to grasp. The point is, when you represent something in the form of a matrix, analyzing it becomes easier. This is the essence of why matrices are used in computing, as in any other discipline.

In programming and software development, a matrix is represented with the help of an array: a linear, contiguous memory allocation where each element is accessed through an index value starting at 0. The array also has a size associated with it that represents its length. Since a matrix has both rows and columns, its size must capture both the row length and the column length, such as 3×4, 5×5, or 6×2. If the rows and columns have the same length, it is known as a square matrix. There are different types of matrices, but square matrices are the most convenient to work with and are widely used in computing. In fact, any matrix can be converted into a square matrix by padding the missing cells with 0s.
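As a quick sketch of this idea in Go, a two-dimensional slice can hold the rows of a matrix, and extra zero-filled rows can be appended to square it off (the values here are our own example):

```go
package main

import "fmt"

func main() {
	// A 2x3 matrix as a slice of row slices.
	m := [][]int{
		{1, 2, 3},
		{4, 5, 6},
	}
	// Access a cell with row and column indices, both starting at 0.
	fmt.Println(m[1][2]) // 6

	// Append a zero row to turn the 2x3 matrix into a 3x3 square matrix.
	m = append(m, make([]int, 3))
	fmt.Println(m) // [[1 2 3] [4 5 6] [0 0 0]]
}
```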

### Rules for Using Matrix in Go

Some basic calculations associated with matrices are: addition, subtraction, and multiplication. Although there are other operations, to keep it simple, we will focus on these three basic mathematical operations only. The rules associated with these operations and operators are as follows:

• For addition and subtraction of two matrices, the number of rows and columns (the dimensions) must be the same for both matrices.
• For multiplication, the number of columns of the first matrix must equal the number of rows of the second matrix; otherwise, multiplication is not possible.
• In order to divide one matrix by another, we need to do two things: find the inverse of the second matrix (keep in mind that only square matrices can have an inverse) and, after inverting it, multiplication must be possible according to rule 2.

Note: Matrix division is way more complex and needs a complete writeup of its own – we will need to cover that in a separate Go tutorial. Here, for the sake of simplicity and brevity, we will focus on the above three basic operations only.
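The first two rules can be expressed as simple dimension checks before attempting an operation. Below is a sketch; the function names canAdd and canMultiply are our own, not part of the tutorial's program:

```go
package main

import "fmt"

// canAdd reports whether two matrices have the same dimensions,
// which is required for addition and subtraction (rule 1).
func canAdd(a, b [][]int) bool {
	if len(a) != len(b) {
		return false
	}
	return len(a) == 0 || len(a[0]) == len(b[0])
}

// canMultiply reports whether the number of columns of the first
// matrix equals the number of rows of the second (rule 2).
func canMultiply(a, b [][]int) bool {
	if len(a) == 0 {
		return false
	}
	return len(a[0]) == len(b)
}

func main() {
	m2x3 := [][]int{{1, 2, 3}, {4, 5, 6}}
	m3x2 := [][]int{{1, 2}, {3, 4}, {5, 6}}
	fmt.Println(canAdd(m2x3, m3x2))      // false
	fmt.Println(canMultiply(m2x3, m3x2)) // true
}
```

For square matrices of the same size, both checks always pass, which is why the tutorial can skip them.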

## How to Add and Subtract Matrices in Go

Let us write some Golang code to add and subtract two matrices. We will create two functions called AddMatrix and SubMatrix that take two 2D slices as their arguments, perform the calculation, and return the resultant matrix, which is another 2D slice. Before we do the calculation, we will populate the matrices with some random values. Also, note that for all purposes, we will use square matrices.

```
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// AddMatrix returns the element-wise sum of two matrices of equal dimensions.
func AddMatrix(matrix1 [][]int, matrix2 [][]int) [][]int {
	result := make([][]int, len(matrix1))
	for i, row := range matrix1 {
		for j := range row {
			result[i] = append(result[i], matrix1[i][j]+matrix2[i][j])
		}
	}
	return result
}

// SubMatrix returns the element-wise difference of two matrices of equal dimensions.
func SubMatrix(matrix1 [][]int, matrix2 [][]int) [][]int {
	result := make([][]int, len(matrix1))
	for i, row := range matrix1 {
		for j := range row {
			result[i] = append(result[i], matrix1[i][j]-matrix2[i][j])
		}
	}
	return result
}

// populateRandomValues builds a size x size matrix filled with small random values.
func populateRandomValues(size int) [][]int {
	m := make([][]int, size)
	for i := 0; i < size; i++ {
		for j := 0; j < size; j++ {
			m[i] = append(m[i], rand.Intn(10)-rand.Intn(9))
		}
	}
	return m
}

func main() {
	rand.Seed(time.Now().Unix())
	var size int
	fmt.Println("Enter size of the square matrix: ")
	fmt.Scanln(&size)
	x1 := populateRandomValues(size)
	x2 := populateRandomValues(size)

	fmt.Println("matrix1:", x1)
	fmt.Println("matrix2:", x2)

	fmt.Println("ADD: matrix1 + matrix2: ", AddMatrix(x1, x2))
	fmt.Println("SUB: matrix1 - matrix2: ", SubMatrix(x1, x2))
}

```

Running this program in your integrated development environment (IDE) or code editor prints the two random matrices, followed by their sum and their difference.

## Matrix Multiplication in Go

Matrix multiplication in Go is a bit more complex. As mentioned in the rules above, matrix multiplication is only possible if the number of columns of the first matrix equals the number of rows of the second. Since we are using square matrices of the same size for our purposes, there is no issue here; we will simply add one more function called MulMatrix to the above program.

```
func MulMatrix(matrix1 [][]int, matrix2 [][]int) [][]int {
	result := make([][]int, len(matrix1))
	for i := 0; i < len(matrix1); i++ {
		result[i] = make([]int, len(matrix1))
		for j := 0; j < len(matrix2); j++ {
			for k := 0; k < len(matrix2); k++ {
				result[i][j] += matrix1[i][k] * matrix2[k][j]
			}
		}
	}
	return result
}

func main() {
	//...
	fmt.Println("MUL: matrix1 * matrix2: ", MulMatrix(x1, x2))
}
```
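The version above relies on the inputs being square and of the same size. For completeness, here is a sketch that handles rectangular matrices by taking the row count from the first matrix and the column count from the second; this generalization (and the name MulRect) is ours, not part of the original program:

```go
package main

import "fmt"

// MulRect multiplies an m x n matrix by an n x p matrix, returning an m x p matrix.
// It assumes the dimensions are compatible (columns of a == rows of b).
func MulRect(a, b [][]int) [][]int {
	rows, inner, cols := len(a), len(b), len(b[0])
	result := make([][]int, rows)
	for i := 0; i < rows; i++ {
		result[i] = make([]int, cols)
		for j := 0; j < cols; j++ {
			for k := 0; k < inner; k++ {
				result[i][j] += a[i][k] * b[k][j]
			}
		}
	}
	return result
}

func main() {
	a := [][]int{{1, 2, 3}, {4, 5, 6}}      // 2x3
	b := [][]int{{7, 8}, {9, 10}, {11, 12}} // 3x2
	fmt.Println(MulRect(a, b)) // [[58 64] [139 154]]
}
```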

Once more, running the program prints the product of the two matrices.

## Final Thoughts on How to Represent a Matrix in Go

As we can see, representing a matrix with slices in Go is pretty straightforward. A slice can be treated much like an array, with values accessed through indices; however, slices are used more frequently in Go because of the flexibility they provide over static arrays. Matrices are represented with two-dimensional arrays or slices, and they have numerous uses in computing, with many associated operations. Addition, subtraction, and multiplication are just the start.
