Dividing memory into fixed-sized pages for efficient allocation
Paging is a memory-management scheme that eliminates external fragmentation by dividing physical memory into fixed-sized blocks called frames and logical memory into blocks of the same size called pages. When a process is to be executed, its pages are loaded from secondary storage into any available frames. The hardware support for paging includes a page table for each process that maps logical page numbers to physical frame numbers.

The key advantages of paging are that it eliminates external fragmentation (any page can go into any frame) and simplifies memory allocation (the OS only needs to keep track of the free frames). Paging can still suffer from internal fragmentation, however: the last page of a process is rarely completely full, wasting on average half a page per process.

Modern systems use multi-level paging to handle large address spaces efficiently, and a translation lookaside buffer (TLB) caches recent page table lookups to speed up address translation. Paging also forms the basis for virtual memory, where not all pages need to be resident simultaneously, allowing a process to be larger than physical memory. The page table structure, TLB management, and page replacement algorithms are therefore the critical components that determine the performance of a paged memory system.
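
To make the translation path concrete, here is a minimal sketch in C of a single-level page table with a small software-modelled TLB. The 4 KiB page size, the 256-entry table, and the TLB's linear lookup with round-robin replacement are illustrative assumptions, not a description of any particular hardware:

    /* Sketch: logical-to-physical translation through a TLB and a
     * single-level page table. Sizes and replacement policy are assumed. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE   4096u   /* assumed 4 KiB pages          */
    #define OFFSET_BITS 12u     /* log2(PAGE_SIZE)              */
    #define NUM_PAGES   256u    /* assumed small address space  */
    #define TLB_ENTRIES 4u

    typedef struct { uint32_t frame; bool valid; } pte_t;        /* page-table entry */
    typedef struct { uint32_t page, frame; bool valid; } tlb_t;  /* TLB entry        */

    static pte_t page_table[NUM_PAGES];
    static tlb_t tlb[TLB_ENTRIES];
    static unsigned tlb_next;   /* round-robin replacement index */

    /* Translate a logical address; returns false on a fault. */
    static bool translate(uint32_t logical, uint32_t *physical)
    {
        uint32_t page   = logical >> OFFSET_BITS;    /* logical page number */
        uint32_t offset = logical & (PAGE_SIZE - 1); /* offset within page  */

        if (page >= NUM_PAGES)
            return false;                            /* outside address space */

        /* TLB hit: the page table is not consulted at all. */
        for (unsigned i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].page == page) {
                *physical = (tlb[i].frame << OFFSET_BITS) | offset;
                return true;
            }
        }

        /* TLB miss: walk the page table, then cache the translation. */
        if (!page_table[page].valid)
            return false;                            /* page fault */
        tlb[tlb_next] = (tlb_t){ page, page_table[page].frame, true };
        tlb_next = (tlb_next + 1) % TLB_ENTRIES;

        *physical = (page_table[page].frame << OFFSET_BITS) | offset;
        return true;
    }

    int main(void)
    {
        page_table[3] = (pte_t){ .frame = 7, .valid = true };  /* page 3 -> frame 7 */

        uint32_t phys;
        if (translate(3 * PAGE_SIZE + 0x2A, &phys))  /* offset 0x2A inside page 3 */
            printf("physical address: 0x%X\n", phys); /* prints 0x702A */
        return 0;
    }

The second lookup of the same page would hit the TLB and skip the table walk, which is the whole point of caching recent translations.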
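
Multi-level paging keeps large address spaces manageable by splitting the page number itself into several indices, so only the parts of the page-table tree that are actually used need to be allocated. The sketch below shows how a 64-bit virtual address would be split under a four-level, 9-bits-per-level layout with 4 KiB pages (the layout used by x86-64, chosen here purely as an illustration):

    /* Sketch: extracting per-level indices from a virtual address under an
     * assumed 4-level, 9-bits-per-level scheme with a 12-bit page offset. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t vaddr = 0x00007F1234ABC123ULL;      /* arbitrary example address */

        unsigned l4  = (vaddr >> 39) & 0x1FF;        /* top-level index   */
        unsigned l3  = (vaddr >> 30) & 0x1FF;
        unsigned l2  = (vaddr >> 21) & 0x1FF;
        unsigned l1  = (vaddr >> 12) & 0x1FF;        /* page-table index  */
        unsigned off = vaddr & 0xFFF;                /* 12-bit page offset */

        printf("L4=%u L3=%u L2=%u L1=%u offset=0x%X\n", l4, l3, l2, l1, off);
        return 0;
    }

Each index selects an entry in one level of the tree, and the final entry supplies the frame number that is combined with the offset, just as in the single-level case above.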