Queue | Data Structures with Go

Martin Dichtler
December 14, 2025

FIFO & Memory Management

LIFO to FIFO

In the first post of the series we introduced the Stack, our first data structure, which follows the LIFO principle (last in, first out). Closely related is our next data structure, the Queue, which follows FIFO (first in, first out).

In the previous analogy we talked about a stack of coins, where new coins are added to the top and have to be removed from the top before we can reach the bottom.

A Queue, as the name suggests, works exactly like the queue at a DMV: you take a number, the earlier you arrive the lower your number is, and numbers are called in order. Essentially, first come, first served.

While a Stack is useful for linear history, queues show up in real-world engineering as job queues, and in DSA specifically they become the backbone of one of the most common search algorithms, BFS (breadth-first search).

Slice vs Linked List as underlying data structure

For the most part, when you interview for SWE jobs you will implement a Queue on top of a slice, and often enough this is how you will solve leetcode-style problems involving queues. It is a very fast way to show your understanding of the concept without spending too much time on boilerplate.

The danger comes from not understanding how the slice behaves in memory. Slices have a length, but they also have a capacity. When you initialize an empty slice you can pre-allocate capacity, which is done with the third argument of make.
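Here is a minimal sketch of the difference between length and capacity, assuming nothing beyond the standard library (just an illustration, not part of the queue package we build below):

package main

import "fmt"

func main() {
	// length 0, but room for 8 elements before any re-allocation
	q := make([]int, 0, 8)
	fmt.Println(len(q), cap(q)) // 0 8

	q = append(q, 1, 2, 3)
	fmt.Println(len(q), cap(q)) // 3 8
}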

Slice capacity also grows as you push more elements into it: whenever an append exceeds the current capacity, a new underlying array has to be created. In practice this means copying the data into a new, larger array and leaving the previous one for the garbage collector to clean up.

The important part to remember is that re-allocation happens when the slice grows, not when it shrinks. Your slice will keep taking as much memory as it did at its largest size.

When a slice grows, Go allocates a new underlying array with extra headroom (roughly doubling the capacity for small slices, and growing by about 25% once the slice gets large), so that not every single append has to find new space on the heap and copy the entire slice. With a large amount of data this over-allocation still adds up: push one more record onto a slice holding 1M records and the runtime allocates capacity for far more than the 1,000,001 records that actually exist. Once you drop all of them from the slice, the underlying array keeps that capacity until it is completely dereferenced and the GC can clean it up. If your variable is global, or never dereferenced, this is a memory leak.

// Naive implementation
package main

func main() {
	// init empty slice, 0 bytes
	q := make([]rune, 0, 0)

	// Enqueue:
	// capacity grows 0->1, 4 bytes
	q = append(q, '1')
	// capacity grows 1->2, 8 bytes
	q = append(q, '2')
	// capacity grows 2->4, 16 bytes
	q = append(q, '3')

	// Dequeue by re-slicing:
	// Len 2
	q = q[1:]
	// Len 1
	q = q[1:]
	// Len 0, but the full 4-element backing array (16 bytes) is still allocated!
	q = q[1:]
}

We solve this problem with a linked list, which has no underlying array: each element lives in its own node, and once a node is detached and nothing references it any more, the garbage collector can reclaim its memory.
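As a minimal sketch of that idea (a throwaway node type, not the generic implementation we build below), advancing the head makes the old node unreachable, so the GC can collect it on its own:

package main

type node struct {
	value rune
	next  *node
}

func main() {
	// head -> '1' -> '2'
	head := &node{value: '1'}
	head.next = &node{value: '2'}

	// "Dequeue": move head forward; the '1' node is now unreachable
	// and can be reclaimed by the GC independently of the rest.
	head = head.next
	_ = head
}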

Nomenclature & flow

There are several important keywords to remember when talking about Queues:

  • Head: front of the line
  • Tail: back of the line
  • Enqueue: add to the back of the line (Tail)
  • Dequeue: remove from the front of the line (Head)

Before we move on to the implementation, the final thing you have to understand is what happens at each operation. Imagine a line of people standing in a queue at a supermarket. There are two flags: a green one indicating who goes first (Head), and a red one for the last person in the queue (Tail).

When a new person joins the queue (Enqueue), the person currently holding the red flag hands it to the person who just joined, as they are now the last person in the queue (Tail). When a person checks out from the front of the queue (Dequeue), they hand the green flag to the person standing right behind them, moving the front of the queue (Head).

One thing you might have noticed: none of these people care who is ahead of them, only who is behind them. That tells us the queue only needs links in a single direction, so we can use a singly linked list.

Another way to picture this: our queue looks like A (Head) -> B -> C -> D (Tail). When a new person (E) joins, they get the "tail" flag from D, and the queue becomes A (Head) -> B -> C -> D -> E (Tail). When A checks out, they hand their "head" flag to B, and the queue becomes B (Head) -> C -> D -> E (Tail).

Developing Tests (TDD)

Now that we have defined how queues work, let's think about what is important to maintain. The first thing is order: at no point can we insert new items into the middle of the queue, and when we add a new item we expect it to show up at the end.

Second is the size: a queue's size can never be negative, at minimum it has 0 elements. What happens when we try to remove an element from an already empty queue? We return an error, as this is an impossible case!

Finally, for the sake of knowing who is first, we need to be able to look at the front of the queue (Peek). What happens when the queue is completely empty? One might be inclined to return nil, but the proper choice here is also to return an error. This is mostly because your implementation will often use a generic type T, where nil isn't a valid value for every type; instead we return both an error and the zero value of the given type.

Zero values differ per type: for bool it is false, for int it is 0, for string it is "". You can try them out in A Tour of Go: https://go.dev/tour/basics/12
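A quick sketch of those zero values, including the generic var zero T trick our Dequeue and Peek will rely on later (just an illustration, not part of the queue package):

package main

import "fmt"

// zero returns the zero value of any type T, the same trick
// our Dequeue and Peek use when the queue is empty.
func zero[T any]() T {
	var z T
	return z
}

func main() {
	fmt.Printf("%v %q %v\n", zero[int](), zero[string](), zero[bool]())
	// Output: 0 "" false
}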

// queue_test.go
package queue

import (
	"testing"
)

func TestPeekEmpty(t *testing.T) {
	// should return an error
	queue := NewQueue[int8]()
	got, err := queue.Peek()
	if err == nil {
		t.Errorf("Empty queue Peek didn't return an error, got: %d", got)
	}
}

func TestDequeueEmpty(t *testing.T) {
	queue := NewQueue[int8]()
	got, err := queue.Dequeue()
	if err == nil {
		t.Errorf("Empty queue Deque didn't return an error, got: %d", got)
	}
}

func TestEnqueue(t *testing.T) {
	// initiate empty queue
	queue := NewQueue[int8]()
	queue.Enqueue(2)
	// test case
	got, err := queue.Peek()
	if err != nil {
		t.Errorf("Failed to peek: %v", err)
	}
	// expected value
	want := 2

	if got != int8(want) {
		t.Errorf("Failed to enqueue, expected: %d, got: %d", want, got)
	}
} 

func TestDequeue(t *testing.T) {
	queue := NewQueue[int8]()
	queue.Enqueue(5)
	queue.Enqueue(3)
	got, err := queue.Dequeue()
	if err != nil {
		t.Errorf("Failed to dequeue, error: %v", err)
	}

	want := 5
	if got != int8(want)  {
		t.Errorf("Failed to dequeue, expected: %d, got: %d", want, got)
	}

	// let's also validate that 3 is now at the front
	got, err = queue.Peek()
	if err != nil {
		t.Errorf("Failed to peek after dequeue: %v", err)
	}
	want = 3
	if got != int8(want) {
		t.Errorf("Peek returned wrong value after dequeue, expected: %d, received: %d", want, got)
	}
}


func TestSize(t *testing.T) {
	queue := NewQueue[int8]()
	got := queue.Size()

	want := 0
	if got != int32(want) {
		t.Errorf("Size of an empty queue must be zero, received: %d", got)
	} 

	// lets add items
	queue.Enqueue(2)
	queue.Enqueue(4)
	queue.Enqueue(5)
	queue.Enqueue(2)

	got = queue.Size()

	want = 4

	if got != int32(want) {
		t.Errorf("Size not matching after enqueue, received: %d, expected: %d", got, want)
	}

	_, err := queue.Dequeue()
	if err != nil {
		t.Errorf("Failed to dequeue when testing size: %v, expected 3 more elements after dequeue", err)
	}
	_, err = queue.Dequeue()
	if err != nil {
		t.Errorf("Failed to dequeue when testing size: %v, expected 2 more elements after dequeue", err)
	}
	got = queue.Size()
	want = 2 
	if got != int32(want) {
		t.Errorf("Size not matching after dequeue, received: %d, expected: %d", got, want)
	}
}

Implementing Queue

First, let's set up our boilerplate. We need to define the Node of the linked list; as discussed, because we only care about a single direction, each node only points to the next node. We also need to define Queuer, the interface holding our methods, the Queue struct itself, which holds the head, the tail, and keeps track of its size, and finally the constructor NewQueue.

package queue

import "fmt"

type Node[T any] struct {
	Value T
	next  *Node[T]
}

type Queuer[T any] interface {
	Enqueue(T)
	Dequeue() (T, error)
	Size() int32
	Peek() (T, error)
}

type Queue[T any] struct {
	head *Node[T]
	tail *Node[T]
	size int32
}

func NewQueue[T any]() Queuer[T] {
	return &Queue[T]{
		head: nil,
		tail: nil,
		size: 0,
	}
}

After our initial boilerplate, let's add the first method, Enqueue. Since Enqueue cannot fail*, we are safe to increment the size of our queue immediately.

*There is a possibility of failure if we run out of memory, but in terms of the logic itself, we never expect it to fail.

We then create a new Node for the item, which gives us a pointer to it. We check whether the queue is empty; if it is, we simply set both head & tail to our node and return early. If there are items, we point the current tail's next at the new node and then move the tail pointer to the new node, which keeps it linked to the rest of the queue.

func (qs *Queue[T]) Enqueue(val T) {
	qs.size++
	node := &Node[T]{Value: val}

	// if head doesn't exist, assume no tail either
	if qs.head == nil {
		qs.tail = node
		qs.head = node
		return
	}

	// add to the tail:
	// link the current tail to the new node
	qs.tail.next = node

	// move the tail pointer to the new node
	qs.tail = node
}

To handle Dequeue, we have to think about the edge cases we defined in our tests, one of which is: what happens when we call Dequeue on an empty queue? We need to return both an error and the zero value for the generic type.

If the queue isn't empty, we decrement the size and take a reference to the current head so we can extract its value before we overwrite the head pointer. We then move the head forward, handing the imaginary green flag over to the next person in line.

Before we can return the value we just dequeued, we have to check whether there was another person in line. Since we are dealing with pointers, we can simply check whether the new head is nil; if it is, the queue is now empty and we have to make sure we also reset the tail.

Finally, we can return the value of the dequeued item. We return the value itself rather than the node, so the caller can't accidentally modify or follow the .next of the node we removed, and so the node can be cleaned up by the GC.

func (qs *Queue[T]) Dequeue() (T, error) {
	if qs.head == nil {
		// queue is empty
		var zero T
		return zero, fmt.Errorf("Queue is empty")
	}

	// decrement size only after validating the queue is not empty
	qs.size--
	curr := qs.head
	qs.head = qs.head.next

	// if we dequeued the last node, the tail must be reset too
	if qs.head == nil {
		qs.tail = nil
	}

	return curr.Value, nil
}

To finish off our implementation we have Peek & Size. Since we track the size in a variable to avoid traversing the linked list every time we want to know it, Size can simply return the current value.

We also identified that Peek can fail, so let's make sure to handle that condition by returning an error and the zero value for the given type; in all other cases we simply return the value at the head.

func (qs *Queue[T]) Size() int32 {
	return qs.size
}

func (qs *Queue[T]) Peek() (T, error) {
	if qs.head == nil {
		var zero T
		return zero, fmt.Errorf("Queue is empty")
	}
	return qs.head.Value, nil
}
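To see all the pieces working together, here is a minimal usage sketch. The module path in the import is hypothetical; adjust it to wherever you place the queue package.

package main

import (
	"fmt"

	"example.com/dsa/queue" // hypothetical module path
)

func main() {
	q := queue.NewQueue[string]()
	q.Enqueue("A")
	q.Enqueue("B")
	q.Enqueue("C")

	front, _ := q.Peek()
	fmt.Println(front, q.Size()) // A 3

	for q.Size() > 0 {
		v, err := q.Dequeue()
		if err != nil {
			break
		}
		fmt.Println(v) // prints A, then B, then C
	}
}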

Summary

Both Stack & Queue are the bread and butter of DSA. As we dive deeper into the series you will keep using them in more complicated algorithms, so it is important that you understand them well, and I strongly recommend putting in the time to practice them as much as you can.

If you are learning DSA specifically for upcoming interviews, you can test your skills with Leetcode 75 / Leetcode Interview 150, which cover this extensively and personally helped me a great deal in cracking Google interviews.
