Contiguous Memory Allocation



Contiguous memory allocation is a memory allocation method that gives a process or a file a single contiguous section of memory. This approach must consider both the current size of the file or process and the maximum size to which it may grow.

Based on this potential growth and the memory request, the operating system assigns that many contiguous blocks of memory to the file.

In this segment, we’ll cover contiguous memory allocation in depth, including its benefits and drawbacks. So, let’s get started.

Memory Allocation

Both the operating system and user space must reside in main memory. User space, in turn, must hold a number of user processes, and we want all of these processes to be in main memory at the same time.

The issue now is how to distribute the available memory space among the user processes in a ready queue.

When a process is moved from the ready queue to main memory for execution, contiguous memory blocks are allocated to the process based on its requirements. Now, in order to assign contiguous space to user processes, the memory may be divided into fixed-sized or variable-sized partitions.

Fixed-Sized Partition: In the fixed-sized partition scheme, memory is divided into fixed-sized blocks, each holding exactly one process. Since the number of partitions determines how many processes can reside in memory, the degree of multiprogramming is limited.
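As a toy illustration, the fixed-sized scheme can be sketched in Python. The partition size, partition count, and process names below are hypothetical, chosen only to show how the partition count caps the degree of multiprogramming:

```python
# Sketch: fixed-sized partitioning with four hypothetical 100 KB partitions.
PARTITION_SIZE = 100          # KB, illustrative
partitions = [None] * 4       # None means the partition is free

def load(process, size):
    """Place a process in the first free partition, or refuse it."""
    if size > PARTITION_SIZE:
        return False          # process too large for any partition
    for i, occupant in enumerate(partitions):
        if occupant is None:
            partitions[i] = process
            return True
    return False              # all partitions busy: multiprogramming limit hit

print(load("A", 40))   # True: fits in partition 0 (60 KB wasted internally)
print(load("B", 120))  # False: exceeds the fixed partition size
```

Note that process A occupies only 40 of its 100 KB; the unused 60 KB is the internal fragmentation discussed later in this article.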

Variable-Sized Partition: In the variable-sized partition scheme, the operating system maintains a table recording which parts of memory are occupied by processes and which parts are still available.

How are holes created in memory?

Initially, the entire memory space is available to user processes as one large block: a single hole. As processes arrive in memory, execute, terminate, and leave, a series of holes of varying sizes appears.

As the diagram above shows, holes of varying sizes are created when processes A and C release the memory allocated to them.

Under the variable-sized partition method, the operating system examines a process’s memory requirement and checks whether it has a free memory block of suitable size. If a match is found, that block is allocated to the process. If not, it looks in the ready queue for a process with a smaller memory requirement.

The operating system keeps allocating memory to processes until the next process in the ready queue has a memory requirement it cannot satisfy. If no memory block (hole) is large enough to hold that process, allocation stops.

If the memory block (hole) is larger than the process needs, it is split in two: the arriving process gets one part, and the other part is returned to the set of holes. When a process finishes, the memory it held is released back into the set of holes, and any two holes that are adjacent to each other are combined into one larger hole.
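The split-and-coalesce behavior just described can be sketched as follows, assuming holes are kept as (start, size) pairs sorted by address; the representation is illustrative, not how any particular OS stores its free list:

```python
# Sketch: splitting a hole on allocation and coalescing adjacent holes on free.

def allocate(holes, request):
    """Split the first hole large enough; the remainder returns to the set."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            if size == request:
                holes.pop(i)                      # exact fit: hole disappears
            else:
                holes[i] = (start + request, size - request)  # keep the tail
            return start                          # base address for the process
    return None                                   # no hole big enough

def free(holes, start, size):
    """Return a block to the set and merge it with any adjacent holes."""
    holes.append((start, size))
    holes.sort()
    merged = [holes[0]]
    for s, sz in holes[1:]:
        last_s, last_sz = merged[-1]
        if last_s + last_sz == s:                 # adjacent: combine into one
            merged[-1] = (last_s, last_sz + sz)
        else:
            merged.append((s, sz))
    holes[:] = merged
```

For example, allocating twice from a single 100-unit hole and then freeing both blocks rebuilds the original hole through coalescing.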

At this stage, the operating system checks whether the newly created large free hole can satisfy the memory requirement of any process in the ready queue, and the procedure continues.

Memory Management

In the previous section, we saw how the operating system allocates contiguous memory to processes. Now we’ll look at how it chooses a free hole from the collection of available holes.

To pick a hole from a set of holes, the operating system uses either the block allocation list or the bit map.

Block Allocation List

The block allocation list consists of two tables. One table holds entries for the blocks assigned to the various files; the other holds entries for the free holes that can be assigned to processes in the ready queue.

Given the entries of free blocks to choose from, we can apply one of the following strategies: first-fit, best-fit, or worst-fit.


First-Fit

The search begins at the start of the table, or where the previous first-fit search ended. The first hole that is large enough for the process to fit into is chosen.
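A minimal first-fit sketch, assuming holes are (start, size) pairs in address order:

```python
# First-fit: return the start of the first hole at least `request` units long.

def first_fit(holes, request):
    for start, size in holes:
        if size >= request:
            return start
    return None  # no hole large enough

holes = [(0, 50), (200, 120), (400, 300)]
print(first_fit(holes, 100))  # 200: the 120-unit hole is the first that fits
```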


Best-Fit

This approach requires the list of free holes to be sorted by size. The smallest hole that is large enough for the process is then chosen. This strategy reduces memory waste, because it avoids allocating a much larger hole that would leave memory unused after the process is placed.
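A best-fit sketch under the same (start, size) representation; here the adequate holes are scanned directly rather than kept pre-sorted, which gives the same result:

```python
# Best-fit: among holes that fit, choose the smallest one.

def best_fit(holes, request):
    candidates = [(size, start) for start, size in holes if size >= request]
    if not candidates:
        return None
    size, start = min(candidates)   # smallest adequate hole
    return start

holes = [(0, 50), (200, 120), (400, 300)]
print(best_fit(holes, 100))  # 200: the 120-unit hole leaves the least waste
```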


Worst-Fit

This approach also requires the list of free holes to be sorted by size, but this time the largest free hole is chosen. This strategy leaves the largest possible remaining hole, which may be useful for the next process.
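The worst-fit variant is the mirror image of best-fit; the same illustrative hole list shows the different choice:

```python
# Worst-fit: among holes that fit, choose the largest one, so the
# leftover piece stays big enough to be useful to a later process.

def worst_fit(holes, request):
    candidates = [(size, start) for start, size in holes if size >= request]
    if not candidates:
        return None
    size, start = max(candidates)   # largest hole overall
    return start

holes = [(0, 50), (200, 120), (400, 300)]
print(worst_fit(holes, 100))  # 400: the 300-unit hole leaves a 200-unit remainder
```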

Bit Map

The bit map tracks only whether each block is free or allocated. One bit represents one block: bit 0 means the block is free, while bit 1 means the block is assigned to a file or process.

It does not record which file or process a given block is allocated to. First-fit is typically implemented by scanning for the number of consecutive zeros (free blocks) a file or process requires; once that many consecutive zeros are found, the file or process is allocated those blocks.
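The scan for a run of consecutive zeros can be sketched as follows; the bit list here is illustrative:

```python
# Bit-map first-fit: 0 marks a free block, 1 an allocated one.
# Find a run of `need` consecutive zeros and flip them to ones.

def bitmap_first_fit(bits, need):
    run = 0
    for i, b in enumerate(bits):
        run = run + 1 if b == 0 else 0
        if run == need:
            start = i - need + 1
            bits[start:i + 1] = [1] * need   # mark the blocks allocated
            return start
    return None                              # no free run long enough

bits = [1, 0, 0, 1, 0, 0, 0, 1]
print(bitmap_first_fit(bits, 3))  # 4: blocks 4-6 form the first free run of 3
```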

Implementing best-fit or worst-fit, however, would be expensive, since a table of free blocks sorted by hole size would have to be maintained. The bit map method itself is easy to implement.


Fragmentation

Fragmentation comes in two forms: external and internal. External fragmentation occurs when the available free memory blocks are too small and non-contiguous to satisfy a request. Internal fragmentation, on the other hand, occurs when a process does not completely use the memory allotted to it.

Memory compaction is a solution to the problem of external fragmentation. All of the contents of memory are shuffled into one region, producing one large block of memory that can be allocated to one or more new processes.

As shown in the diagram below, the final step moves C, B, and D downward to create one wide hole.
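Compaction can be sketched as sliding the resident processes to the bottom of memory; the memory size and the C/B/D process sizes below are hypothetical:

```python
# Compaction sketch: processes are (name, size) pairs in address order;
# sliding them to the bottom of memory leaves one large hole at the top.

MEMORY = 1000  # total units, illustrative

def compact(processes):
    """Return new (name, start, size) placements plus the single big hole."""
    placed, cursor = [], 0
    for name, size in processes:
        placed.append((name, cursor, size))   # shuffle contents downward
        cursor += size
    return placed, (cursor, MEMORY - cursor)  # one hole: (start, size)

placed, hole = compact([("C", 300), ("B", 150), ("D", 250)])
print(hole)  # (700, 300): a single 300-unit hole remains after compaction
```

Note that real compaction must also relocate the processes' addresses, which is why it is only feasible when relocation is done dynamically at run time.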

Advantages & Disadvantages of Contiguous Memory Allocation

Memory waste and inflexibility are the key drawbacks of contiguous memory allocation. When allocating memory to a file or a process, the operating system must assume it will grow over time. Until it does, many of the allocated blocks remain unused, and because they cannot be given to other processes, memory is wasted.

If a process or file grows larger than expected, i.e. beyond the allocated memory block, it will abort with a message such as “No disk space” — this is the inflexibility.

Contiguous memory allocation has the benefit of increased processing speed. By using buffered I/O and reading a process’s memory blocks sequentially, the operating system minimizes head movement, which speeds up processing.

Key Takeaways

  • A fixed-size partition scheme or a variable-sized partition scheme may be used to assign memory.
  • The first-fit, best-fit, and worst-fit methods for selecting a hole from a list of free holes are available in the block allocation list.
  • The bit map keeps track of free memory blocks: each memory block has one bit, with bit 0 indicating that the block is free and bit 1 indicating that it is assigned to a file or process.
  • Fragmentation occurs when memory is allocated non-contiguously; it can be external or internal.
  • Memory waste and inflexibility arise from contiguous memory allocation.
  • Contiguous memory allocation will improve processing speed if the operating system uses buffered I/O during processing.

That’s contiguous memory allocation in a nutshell. We’ve discussed how memory is allocated and managed, how fragmentation arises in contiguous allocation, and the benefits and drawbacks of the approach.

