Question 1
Which of the following page replacement algorithms suffers from Belady’s anomaly?
A
FIFO
B
LRU
C
Optimal Page Replacement
D
Both LRU and FIFO
Memory Management    
Question 1 Explanation: 
Belady’s anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. See the example on the Wikipedia page for Belady’s anomaly.
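As an illustration (not part of the original answer), here is a minimal Python sketch that counts FIFO page faults for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 and then 4 frames; the extra frame produces more faults:

from collections import deque

def fifo_faults(refs, frames):
    q, faults = deque(), 0          # resident pages in arrival order, fault counter
    for p in refs:
        if p not in q:
            faults += 1
            if len(q) == frames:
                q.popleft()         # evict the oldest resident page
            q.append(p)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))   # 9 10 -> more frames, more faults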
Question 2
What is the swap space in the disk used for?
A
Saving temporary html pages
B
Saving process data
C
Storing the super-block
D
Storing device drivers
Memory Management    
Question 2 Explanation: 
Swap space is typically used to store process data (pages of a process that have been swapped out of main memory).
Question 3
Increasing the RAM of a computer typically improves performance because:
A
Virtual memory increases
B
Larger RAMs are faster
C
Fewer page faults occur
D
Fewer segmentation faults occur
Memory Management    
Question 3 Explanation: 
When there is more RAM, more virtual pages can be mapped into physical memory, hence fewer page faults. A page fault degrades performance because the page has to be loaded from secondary storage.
Question 4
A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the virtual address space is of the same size as the physical address space, the operating system designers decide to get rid of the virtual memory entirely. Which one of the following is true?
A
Efficient implementation of multi-user support is no longer possible
B
The processor cache organization can be made more efficient now
C
Hardware support for memory management is no longer needed
D
CPU scheduling can be made more efficient now
Memory Management    
Question 4 Explanation: 
Supporting virtual memory requires special hardware support from the Memory Management Unit (MMU). Since the operating system designers have decided to get rid of virtual memory entirely, hardware support for memory management is no longer needed.
Question 5
A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-aside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The minimum size of the TLB tag is:
A
11 bits
B
13 bits
C
15 bits
D
20 bits
Memory Management    
Question 5 Explanation: 
Size of a page = 4 KB = 2^12 bytes, so the page offset is 12 bits. Number of bits needed for the virtual page number = 32 – 12 = 20. If there are ‘n’ cache lines in a set, the cache placement is called n-way set associative. Since the TLB is 4-way set associative and can hold a total of 128 (2^7) page table entries, the number of sets = 2^7 / 4 = 2^5. So 5 bits are needed to address a set, and the remaining 20 – 5 = 15 bits are needed for the tag.
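The same arithmetic can be checked with a tiny Python sketch (the variable names are ours, not from the question):

import math

va_bits, page_bytes = 32, 4 * 1024
tlb_entries, ways = 128, 4
offset_bits = int(math.log2(page_bytes))          # 12 offset bits
vpn_bits = va_bits - offset_bits                  # 20-bit virtual page number
set_bits = int(math.log2(tlb_entries // ways))    # 32 sets -> 5 set-index bits
print(vpn_bits - set_bits)                        # 15 tag bits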
Question 6
Virtual memory is
A
Large secondary memory
B
Large main memory
C
Illusion of large main memory
D
None of the above
Memory Management    
Question 6 Explanation: 
Virtual memory is the illusion of a large main memory.
Question 7
Page fault occurs when
A
When a requested page is in memory
B
When a requested page is not in memory
C
When a page is corrupted
D
When an exception is thrown
Memory Management    
Question 7 Explanation: 
Page fault occurs when a requested page is mapped in virtual address space but not present in memory.
Question 8
Thrashing occurs when
A
When a page fault occurs
B
Processes on system frequently access pages that are not in memory
C
Processes on system are in running state
D
Processes on system are in waiting state
Memory Management    
Question 8 Explanation: 
Thrashing occurs when the processes on the system require more memory than is available. If processes do not have “enough” pages, the page-fault rate is very high. This leads to low CPU utilization, with the operating system spending most of its time swapping pages to and from disk. This situation is called thrashing.
Question 9
A computer uses 46–bit virtual address, 32–bit physical address, and a three–level paged page table organization. The page table base register stores the base address of the first–level table (T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second–level table (T2). Each entry of T2 stores the base address of a page of the third–level table (T3). Each entry of T3 stores a page table entry (PTE). The PTE is 32 bits in size. The processor used in the computer has a 1 MB 16 way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes. What is the size of a page in KB in this computer? (GATE 2013)
A
2
B
4
C
8
D
16
Memory Management    
Question 9 Explanation: 
Let the page offset be 'x' bits, i.e. the page size is 2^x bytes.

Size of T1 = 2 ^ x bytes

(This is because T1 occupies exactly one page)

Now, number of entries in T1 = (2^x) / 4

(This is because each page table entry is 32 bits
  or 4 bytes in size)

Number of entries in T1 = Number of second level 
page tables

(Because each I-level page table entry stores the 
 base address of page of II-level page table)

Total size of second level page tables = ((2^x) / 4) * (2^x)

Similarly, number of entries in II-level page tables = Number
 of III level page tables = ((2^x) / 4) * ((2^x) / 4)

Total size of third level page tables = ((2^x) / 4) * 
                                        ((2^x) / 4) * (2^x)

Similarly, total number of entries (pages) in all III-level 
page tables = ((2^x) / 4) * ((2^x) / 4) * ((2^x) / 4)
            = 2^(3x - 6)

Size of virtual memory = 2^46

Number of pages in virtual memory = (2^46) / (2^x) = 2^(46 - x)

Total number the pages in the III-level page tables = 
                              Number of pages in virtual memory

2^(3x - 6) = 2^(46 - x)

3x - 6 = 46 - x

4x = 52
x = 13

That means the page offset is 13 bits,
i.e. page size = 2^13 bytes = 8 KB
Question 10
Consider data given in the above question. What is the minimum number of page colours needed to guarantee that no two synonyms map to different sets in the processor cache of this computer? (GATE CS 2013)
A
2
B
4
C
8
D
16
Memory Management    
Question 10 Explanation: 
The cache is a 1 MB, 16-way set associative, virtually indexed physically tagged (VIPT) cache with a block size of 64 bytes.

Number of blocks = 2^20 / 2^6 = 2^14.
Number of sets = 2^14 / 2^4 = 2^10.

Virtual address (46 bits): tag (30) | set index (10) | block offset (6)

In a VIPT cache, if the number of page offset bits equals the number of (set index + block offset) bits, then only one page colour is sufficient. Here the page offset is 13 bits (from the previous question) while set index + block offset = 16 bits, so the cache set index and the physical page number overlap in 16 - 13 = 3 bits. Hence 2^3 = 8 page colours are required, and option (C) is the answer.
Question 11
Consider the virtual page reference string 1, 2, 3, 2, 4, 1, 3, 2, 4, 1 on a demand-paged virtual memory system running on a computer whose main memory has 3 page frames, which are initially empty. Let LRU, FIFO and OPTIMAL denote the number of page faults under the corresponding page replacement policy. Then
A
OPTIMAL < LRU < FIFO
B
OPTIMAL < FIFO < LRU
C
OPTIMAL = LRU
D
OPTIMAL = FIFO
GATE CS 2012    Memory Management    
Question 11 Explanation: 
First In First Out (FIFO): this is the simplest page replacement algorithm. The operating system keeps all pages in memory in a queue, with the oldest page at the front; when a page needs to be replaced, the page at the front of the queue is selected for removal.
Optimal page replacement: the page that will not be used for the longest duration of time in the future is replaced.
Least Recently Used (LRU): the page that has been least recently used is replaced.
Solution: the virtual page reference string is 1, 2, 3, 2, 4, 1, 3, 2, 4, 1 and main memory has 3 page frames. FIFO gives 6 page faults, Optimal gives 5 page faults and LRU gives 9 page faults, so OPTIMAL < FIFO < LRU and option (B) is the correct answer. See http://www.geeksforgeeks.org/operating-systems-set-5/
This solution is contributed by Nitika Bansal
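These counts can be double-checked with a small Python sketch (not part of the original solution; the helper name and structure are ours):

def simulate(refs, frames, policy):
    # Count page faults for 'fifo', 'lru' or 'opt' with the given number of frames.
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p in mem:
            if policy == 'lru':
                mem.remove(p)
                mem.append(p)       # most recently used goes to the back
            continue
        faults += 1
        if len(mem) == frames:
            if policy in ('fifo', 'lru'):
                mem.pop(0)          # front of the list is the victim
            else:                   # 'opt': evict the page used farthest in the future
                future = refs[i + 1:]
                victim = max(mem, key=lambda q: future.index(q) if q in future else len(future))
                mem.remove(victim)
        mem.append(p)
    return faults

refs = [1, 2, 3, 2, 4, 1, 3, 2, 4, 1]
print({pol: simulate(refs, 3, pol) for pol in ('opt', 'fifo', 'lru')})
# {'opt': 5, 'fifo': 6, 'lru': 9}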
Question 12
Let the page fault service time be 10ms in a computer with average memory access time being 20ns. If one page fault is generated for every 10^6 memory accesses, what is the effective access time for the memory?
A
21ns
B
30ns
C
23ns
D
35ns
Memory Management    GATE CS 2011    
Question 12 Explanation: 
Let p be the page fault rate.
Effective memory access time = p * (page fault service time) + 
                               (1 - p) * (memory access time)
                             = (1/10^6) * 10 * 10^6 ns +
                               (1 - 1/10^6) * 20 ns
                             ≈ 30 ns
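A quick numeric check in Python (values taken from the question):

p = 1 / 10**6                   # one page fault per million accesses
service_ns = 10 * 10**6         # 10 ms page fault service time, in ns
access_ns = 20                  # average memory access time, in ns
print(p * service_ns + (1 - p) * access_ns)   # ~30 ns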
Question 13
A system uses FIFO policy for page replacement. It has 4 page frames with no pages loaded to begin with. The system first accesses 100 distinct pages in some order and then accesses the same 100 pages but now in the reverse order. How many page faults will occur?
A
196
B
192
C
197
D
195
Memory Management    GATE CS 2010    
Question 14
In which one of the following page replacement policies, Belady’s anomaly may occur?
A
FIFO
B
Optimal
C
LRU
D
MRU
Memory Management    GATE-CS-2009    
Question 14 Explanation: 
Belady’s anomaly proves that it is possible to have more page faults when increasing the number of page frames while using the First in First Out (FIFO) page replacement algorithm. See the wiki page for an example of increasing page faults with number of page frames.
Question 15
The essential content(s) in each entry of a page table is / are
A
Virtual page number
B
Page frame number
C
Both virtual page number and page frame number
D
Access right information
Memory Management    GATE-CS-2009    
Question 15 Explanation: 

A page table entry must contain the page frame number. The virtual page number is typically used as an index into the page table to get the corresponding page frame number.

Question 16
A multilevel page table is preferred in comparison to a single level page table for translating virtual address to physical address because
A
It reduces the memory access time to read or write a memory location.
B
It helps to reduce the size of page table needed to implement the virtual address space of a process.
C
It is required by the translation lookaside buffer.
D
It helps to reduce the number of page faults in page replacement algorithms.
Memory Management    GATE-CS-2009    
Question 16 Explanation: 
The size of a single-level page table may become too big to fit in contiguous memory. That is why page tables are typically divided into levels.
Question 17
A processor uses 36 bit physical addresses and 32 bit virtual addresses, with a page frame size of 4 Kbytes. Each page table entry is of size 4 bytes. A three level page table is used for virtual to physical address translation, where the virtual address is used as follows:
• Bits 30-31 are used to index into the first level page table,
• Bits 21-29 are used to index into the second level page table,
• Bits 12-20 are used to index into the third level page table, and
• Bits 0-11 are used as offset within the page.
The number of bits required for addressing the next level page table (or page frame) in the page table entry of the first, second and third level page tables are respectively.
A
20, 20 and 20
B
24, 24 and 24
C
24, 24 and 20
D
25, 25 and 24
Memory Management    GATE CS 2008    
Question 17 Explanation: 
Virtual address size = 32 bits. Physical address size = 36 bits, so physical memory size = 2^36 bytes.
Page frame size = 4 KB = 2^12 bytes, so 12 bits are used as the offset within a page frame.
Number of bits required to address a physical page frame = 36 - 12 = 24, so a third-level page table entry needs 24 bits to point to a page frame.
9 bits of the virtual address are used to index into a second-level page table and each entry is 4 bytes, so the size of a second-level page table is (2^9) * 4 = 2^11 bytes. That means there are (2^36) / (2^11) = 2^25 possible locations at which such a table can be stored, so a first-level entry needs 25 bits to address a second-level page table. Similarly, a third-level page table also occupies 2^11 bytes, so a second-level entry needs 25 bits to address it.
Hence the answer is 25, 25 and 24, option (D).
Question 18
A virtual memory system uses First In First Out (FIFO) page replacement policy and allocates a fixed number of frames to a process. Consider the following statements:

P: Increasing the number of page frames allocated to a 
   process sometimes increases the page fault rate.
Q: Some programs do not exhibit locality of reference. 
Which one of the following is TRUE?
A
Both P and Q are true, and Q is the reason for P
B
Both P and Q are true, but Q is not the reason for P.
C
P is false, but Q is true
D
Both P and Q are false
Memory Management    GATE-CS-2007    
Question 18 Explanation: 
First In First Out (FIFO) page replacement: this is the simplest page replacement algorithm. The operating system keeps all pages in memory in a queue, with the oldest page at the front; when a page needs to be replaced, the page at the front of the queue is selected for removal. FIFO suffers from Belady’s anomaly, which states that it is possible to have more page faults when increasing the number of page frames.

Statement P: Increasing the number of page frames allocated to a process sometimes increases the page fault rate. Correct, since the FIFO page replacement algorithm suffers from Belady’s anomaly.

Statement Q: Some programs do not exhibit locality of reference. Correct. Locality often occurs because code contains loops that tend to reference arrays or other data structures by indices, and one can write a program with no loops that does not exhibit locality of reference.

So both P and Q are correct, but Q is not the reason for P, because Belady’s anomaly occurs only for some specific patterns of page references.

See Question 1 of http://www.geeksforgeeks.org/operating-systems-set-13/
Reference: http://quiz.geeksforgeeks.org/operating-system-page-replacement-algorithm/
This solution is contributed by Nitika Bansal
Question 19
A process has been allocated 3 page frames. Assume that none of the pages of the process are available in the memory initially. The process makes the following sequence of page references (reference string): 1, 2, 1, 3, 7, 4, 5, 6, 3, 1 If optimal page replacement policy is used, how many page faults occur for the above reference string?
A
7
B
8
C
9
D
10
Memory Management    GATE-CS-2007    
Question 19 Explanation: 
Optimal replacement policy looks forward in time to decide which page to replace on a page fault.
Reference string: 1, 2, 1, 3, 7, 4, 5, 6, 3, 1 with 3 frames.
1, 2, 3 -> page faults for 1, 2 and 3 (frames: 1 2 3)
7       -> page fault, replaces 2 (frames: 1 7 3)
4       -> page fault, replaces 7 (frames: 1 4 3)
5       -> page fault, replaces 4 (frames: 1 5 3)
6       -> page fault, replaces 5 (frames: 1 6 3)
3, 1    -> no page faults
Total = 7, so the answer is (A).
Question 20
Consider the data given in above question. Least Recently Used (LRU) page replacement policy is a practical approximation to optimal page replacement. For the above reference string, how many more page faults occur with LRU than with the optimal page replacement policy?
A
0
B
1
C
2
D
3
Memory Management    GATE-CS-2007    
Question 20 Explanation: 
LRU replacement policy: the page that has been least recently used is replaced.
Given string: 1, 2, 1, 3, 7, 4, 5, 6, 3, 1 with 3 frames.
1, 2, 3 -> page faults for 1, 2 and 3 (frames: 1 2 3)
7       -> page fault, replaces 2 (frames: 1 7 3)
4       -> page fault, replaces 1 (frames: 4 7 3)
5       -> page fault, replaces 3 (frames: 4 7 5)
6       -> page fault, replaces 7 (frames: 4 6 5)
3       -> page fault, replaces 4 (frames: 3 6 5)
1       -> page fault, replaces 5 (frames: 3 6 1)
Total = 9 page faults. With optimal replacement (previous question, http://geeksquiz.com/gate-gate-cs-2007-question-82/) the total was 7, so LRU causes 2 more page faults and the answer is (C).
Question 21
Assume that there are 3 page frames which are initially empty. If the page reference string is 1, 2, 3, 4, 2, 1, 5, 3, 2, 4, 6, the number of page faults using the optimal replacement policy is__________.
A
5
B
6
C
7
D
8
Memory Management    GATE-CS-2014-(Set-1)    
Question 21 Explanation: 
In the optimal page replacement policy, we replace the page that will not be used for the longest duration in the future.
Given three page frames.

Reference string is 1, 2, 3, 4, 2, 1, 5, 3, 2, 4, 6

Initially, there are three page faults and entries are
1  2  3

Page 4 causes a page fault and replaces 3 (3 is the longest
distant in future), entries become
1  2  4
Total page faults =  3+1 = 4

Pages 2 and 1 don't cause any fault.

5 causes a page fault and replaces 1, entries become
5  2  4
Total page faults =  4 + 1 = 5

3 causes a page fault and replaces 5 (5 is not referenced again), entries become
3  2  4
Total page faults =  5 + 1 = 6

3, 2 and 4 don't cause any page fault.

6 causes a page fault.
Total page faults =  6 + 1 = 7
Question 22
A computer has twenty physical page frames which contain pages numbered 101 through 120. Now a program accesses the pages numbered 1, 2, …, 100 in that order, and repeats the access sequence THRICE. Which one of the following page replacement policies experiences the same number of page faults as the optimal page replacement policy for this program?
A
Least-recently-used
B
First-in-first-out
C
Last-in-first-out
D
Most-recently-used
Memory Management    GATE-CS-2014-(Set-2)    
Question 22 Explanation: 
The optimal page replacement algorithm swaps out the page whose next use will occur farthest in the future. In the given question, the computer has 20 page frames, initially filled with pages numbered 101 to 120, and the program then accesses pages 1, 2, …, 100 in that order, repeating the access sequence thrice.

The first 20 accesses, to pages 1 to 20, definitely cause page faults. When page 21 is accessed there is another page fault, and the page swapped out is 20, because 20 is the one that will be accessed farthest in the future. When page 22 is accessed, 21 goes out for the same reason, and so on. The optimal algorithm therefore behaves exactly like Most Recently Used (MRU) replacement for this access pattern, which is why option (D) is correct.

As a side note, the first pass of 100 accesses causes 100 page faults, and each of the next two passes causes 81 page faults (pages 1 to 19 are never removed).
Question 23
A system uses 3 page frames for storing process pages in main memory. It uses the Least Recently Used (LRU) page replacement policy. Assume that all the page frames are initially empty. What is the total number of page faults that will occur while processing the page reference string given below? 4, 7, 6, 1, 7, 6, 1, 2, 7, 2
A
4
B
5
C
6
D
7
Memory Management    GATE-CS-2014-(Set-3)    
Question 23 Explanation: 
What is a page fault? An interrupt that occurs when a program requests a page that is not currently in main memory. The interrupt triggers the operating system to fetch the page from secondary storage and load it into RAM. Now, 4, 7, 6, 1, 7, 6, 1, 2, 7, 2 is the reference string; you can think of it as the page requests made by a program. The system uses 3 page frames for storing process pages in main memory and uses the Least Recently Used (LRU) page replacement policy.
[ ] - Initially page frames are empty.i.e. no 
      process pages in main memory.

[ 4 ] - Now 4 is brought into 1st frame (1st 
        page fault) 
Explanation: Process page 4 was requested by the program, but it was not in the main memory(in form of page frames),which resulted in a page fault, after that process page 4 was brought in the main memory by the operating system.

[ 4 7 ] - Now 7 is brought into 2nd frame 
         (2nd page fault) - Same explanation.

[ 4 7 6 ] - Now 6 is brought into 3rd frame
           (3rd page fault)

[ 1 7 6 ] - Now 1 is brought into the 1st frame, as the page in the
         1st frame (4) was least recently used (4th page fault).
After this, 7, 6 and 1 are already present in the frames, hence no replacements occur.

[ 1 2 6 ] - Now 2 is brought into 2nd frame, as 2nd
          frame was least recently used(5th page fault).

[ 1 2 7 ] -Now 7 is brought into 3rd frame, as 3rd frame
          was least recently used(6th page fault).  
Hence, total number of page faults(also called pf) are 6. Therefore, C is the answer.
Question 24
Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in the physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access the physical memory. If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is _________.
A
120
B
122
C
124
D
118
Memory Management    GATE-CS-2014-(Set-3)    
Question 24 Explanation: 
TLB stands for Translation Lookaside Buffer. In virtual memory systems the CPU generates virtual addresses, but the data is stored in physical memory, so a physical address must be placed on the memory bus to fetch the data from the memory circuitry. The operating system therefore maintains a page table containing the mapping between virtual and physical addresses, and every virtual address generated by the CPU requires a page table lookup to find the corresponding physical address. To speed this up there is hardware support in the form of the TLB, a high-speed cache of the page table that holds recently used virtual-to-physical translations.

TLB hit ratio: a TLB hit occurs when a virtual-to-physical translation is found in the TLB, instead of going all the way to the page table located in slower physical memory. The TLB hit ratio is the number of TLB hits divided by the total number of TLB queries.

If the page is found in the TLB (TLB hit), the total time is the TLB search time plus one memory access:
TLB_hit_time = TLB_search_time + memory_access_time
If the page is not found in the TLB (TLB miss), the total time is the TLB search time (nothing is found, but it is searched nonetheless) plus one memory access to read the page table and frame, plus one memory access for the data:
TLB_miss_time = TLB_search_time + memory_access_time + memory_access_time

The effective access time is the weighted average of the two:
EAT = TLB_hit_time * hit_ratio + TLB_miss_time * (1 - hit_ratio)
Since both the page table and the page are in physical memory,
T(eff) = hit ratio * (TLB access time + main memory access time) + (1 - hit ratio) * (TLB access time + 2 * main memory access time)
       = 0.6 * (10 + 80) + (1 - 0.6) * (10 + 2 * 80)
       = 0.6 * 90 + 0.4 * 170
       = 122
This solution is contributed by Nitika Bansal
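The final computation as a small Python check (units in milliseconds, as stated in the question):

tlb, mem, hit = 10, 80, 0.6
eat = hit * (tlb + mem) + (1 - hit) * (tlb + 2 * mem)
print(eat)                      # 122.0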
Question 25
The memory access time is 1 nanosecond for a read operation with a hit in cache, 5 nanoseconds for a read operation with a miss in cache, 2 nanoseconds for a write operation with a hit in cache and 10 nanoseconds for a write operation with a miss in cache. Execution of a sequence of instructions involves 100 instruction fetch operations, 60 memory operand read operations and 40 memory operand write operations. The cache hit-ratio is 0.9. The average memory access time (in nanoseconds) in executing the sequence of instructions is __________.
A
1.26
B
1.68
C
2.46
D
4.52
Memory Management    GATE-CS-2014-(Set-3)    
Question 25 Explanation: 
The question asks for the average time taken per memory operation:
(time for 100 instruction fetches + 60 memory operand reads + 40 memory
operand writes) / (total number of memory operations).

Total number of memory operations = 100 + 60 + 40 = 200

Time taken for 100 fetch operations(fetch =read)
= 100*((0.9*1)+(0.1*5)) // 1 corresponds to time taken for read 
                        // when there is cache hit

= 140 ns //0.9 is cache hit rate

Time taken for 60 read operations = 60*((0.9*1)+(0.1*5))
                                  = 84ns

Time taken for 40 write operations = 40*((0.9*2)+(0.1*10)) 
                                   = 112 ns

// Here 2 ns and 10 ns are the times taken for a write with a cache
// hit and a cache miss, respectively

So,the total time taken for 200 operations is = 140+84+112 
                                             = 336ns

Average time taken = time taken per operation = 336/200 
                                              = 1.68 ns 
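The same calculation in a short Python sketch (variable names are ours):

hit = 0.9
read_hit, read_miss = 1, 5          # ns
write_hit, write_miss = 2, 10       # ns
fetches, reads, writes = 100, 60, 40
total = (fetches + reads) * (hit * read_hit + (1 - hit) * read_miss) + \
        writes * (hit * write_hit + (1 - hit) * write_miss)
print(total / (fetches + reads + writes))    # 1.68 ns per operation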
Question 26
A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-aside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The minimum size of the TLB tag is:
A
11 bits
B
13 bits
C
15 bits
D
20 bits
Memory Management    GATE-CS-2006    
Question 26 Explanation: 
Virtual memory would not be very effective if every memory address had to be translated by looking up the associated physical page in memory. The solution is to cache recent translations in a Translation Lookaside Buffer (TLB). A TLB has a fixed number of slots that contain page table entries, which map virtual addresses to physical addresses.

Solution: size of a page = 4 KB = 2^12 bytes, which means 12 offset bits. The CPU generates 32-bit virtual addresses, so the number of bits needed for the virtual page number = 32 – 12 = 20. If there are ‘n’ cache lines in a set, the cache placement is called n-way set associative. Since the TLB is 4-way set associative and can hold a total of 128 (2^7) page table entries, the number of sets = 2^7 / 4 = 2^5. So 5 bits are needed to address a set, and the remaining 20 – 5 = 15 bits are needed for the tag. Option (C) is the correct answer.

See Question 3 of http://www.geeksforgeeks.org/operating-systems-set-14/
This solution is contributed by Nitika Bansal
Question 27
A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the virtual address space is of the same size as the physical address space, the operating system designers decide to get rid of the virtual memory entirely. Which one of the following is true?
A
Efficient implementation of multi-user support is no longer possible
B
The processor cache organization can be made more efficient now
C
Hardware support for memory management is no longer needed
D
CPU scheduling can be made more efficient now
Memory Management    GATE-CS-2006    
Question 28
The minimum number of page frames that must be allocated to a running process in a virtual memory environment is determined by
A
the instruction set architecture
B
page size
C
physical memory size
D
number of processes in memory
Memory Management    GATE-CS-2004    
Question 28 Explanation: 
There are two important tasks in virtual memory management: a page-replacement strategy and a frame-allocation strategy. The frame-allocation strategy determines the minimum number of frames that must be allocated. The absolute minimum number of frames a process must be allocated depends on the system architecture: it corresponds to the number of pages that could be touched by a single (machine) instruction. So it is determined by the instruction set architecture, i.e. option (A) is the correct answer.

See Question 3 of http://www.geeksforgeeks.org/operating-systems-set-4/
Reference: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/9_VirtualMemory.html
This solution is contributed by Nitika Bansal
Question 29
Consider a system with a two-level paging scheme in which a regular memory access takes 150 nanoseconds, and servicing a page fault takes 8 milliseconds. An average instruction takes 100 nanoseconds of CPU time, and two memory accesses. The TLB hit ratio is 90%, and the page fault rate is one in every 10,000 instructions. What is the effective average instruction execution time?
A
645 nanoseconds
B
1050 nanoseconds
C
1215 nanoseconds
D
1230 nanoseconds
Memory Management    GATE-CS-2004    
Question 29 Explanation: 

Figure: Translation Lookaside Buffer (not reproduced here)

As shown in the figure, to find the frame number for a given page number, the TLB (Translation Lookaside Buffer) is checked first for the desired page number/frame number pair; if it is present we have a TLB hit, otherwise a TLB miss. On a miss, the page number is looked up in the page table. In a two-level paging scheme, memory is referenced twice to obtain the corresponding frame number.

If a virtual address has no valid entry in the page table, then any attempt by the program to access that virtual address causes a page fault. In that case the required frame is brought into main memory from secondary memory; the time taken to service the page fault is called the page fault service time.

We have to calculate the average instruction execution time EXE. Each instruction takes 100 ns of CPU time and two memory accesses, so

EXE = 100 ns + 2 * 150 ns (two memory references) + M        ... (1)

where M is the average additional overhead per instruction for address translation and page faults. Since one page fault occurs per 10^4 instructions,

M = (1 - 1/10^4) * MEM + (1/10^4) * 8 ms                     ... (2)

where MEM is the average translation overhead when the page is present in memory:

MEM = 0.9 * (TLB access time) + 0.1 * (TLB access time + 2 * 150 ns)

The TLB access time is not given, so assume it is 0. Then MEM = 0.9 * 0 + 0.1 * 300 ns = 30 ns. Putting this value into equation (2), M = (1 - 1/10^4) * 30 ns + (1/10^4) * 8 ms ≈ 830 ns.

Putting this value of M into equation (1), EXE = 100 ns + 300 ns + 830 ns = 1230 ns, so the answer is option (D).

This solution is contributed by Nirmal Bhardwaj.

Question 30
In a system with 32 bit virtual addresses and 1 KB page size, use of one-level page tables for virtual to physical address translation is not practical because of
A
the large amount of internal fragmentation
B
the large amount of external fragmentation
C
the large memory overhead in maintaining page tables
D
the large computation overhead in the translation process
Memory Management    GATE-CS-2003    
Question 31
Which of the following is NOT an advantage of using shared, dynamically linked libraries as opposed to using statically linked libraries ?
A
Smaller sizes of executable files
B
Lesser overall page fault rate in the system
C
Faster program startup
D
Existing programs need not be re-linked to take advantage of newer versions of libraries
Memory Management    GATE-CS-2003    
Question 31 Explanation: 
Refer to Static and Dynamic Libraries. With non-shared (static) libraries, the library code is linked into the executable at compile time, so the final executable has no dependency on the library at run time, i.e. there is no additional run-time loading cost; you do not need to carry along a copy of the library being used, everything is under your control and there is no dependency. Dynamic linking, on the other hand, defers this work to load/run time, so faster program startup is NOT an advantage of shared, dynamically linked libraries; option (C) is the answer.
Question 32
A processor uses 2-level page tables for virtual to physical address translation. Page tables for both levels are stored in the main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte addressable. For virtual to physical address translation, the 10 most significant bits of the virtual address are used as index into the first level page table while the next 10 bits are used as index into the second level page table. The 12 least significant bits of the virtual address are used as offset within the page. Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor has a translation look-aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual page numbers and the corresponding physical page numbers. The processor also has a physically addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache access time is 1 ns, and TLB access time is also 1 ns. Assuming that no page faults occur, the average time taken to access a virtual address is approximately (to the nearest 0.5 ns)
A
1.5 ns
B
2 ns
C
3 ns
D
4 ns
Memory Management    GATE-CS-2003    
Question 32 Explanation: 
The possibilities are
 TLB Hit*Cache Hit +
 TLB Hit*Cache Miss + 
 TLB Miss*Cache Hit +
 TLB Miss*Cache Miss
= 0.96*0.9*2 + 0.96*0.1*12 + 0.04*0.9*22 + 0.04*0.1*32
= 3.8
≈ 4 
Why 22 and 32? When a TLB miss occurs, the TLB lookup takes 1 ns, then the two-level page tables in main memory must be walked to obtain the physical address, costing 2 memory accesses (20 ns); if the data is then found in the cache, that adds 1 ns, giving a total of 22 ns. On a cache miss, the data must additionally be fetched from main memory (10 ns more), giving 32 ns.
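A small Python sketch of the weighted average (times in ns, variable names are ours):

tlb, mem, cache = 1, 10, 1
tlb_hit, cache_hit = 0.96, 0.90
t_hh = tlb + cache                      # TLB hit, cache hit: 2 ns
t_hm = tlb + cache + mem                # TLB hit, cache miss: 12 ns
t_mh = tlb + 2 * mem + cache            # TLB miss (walk 2 page-table levels), cache hit: 22 ns
t_mm = tlb + 2 * mem + cache + mem      # TLB miss, cache miss: 32 ns
avg = (tlb_hit * cache_hit * t_hh + tlb_hit * (1 - cache_hit) * t_hm +
       (1 - tlb_hit) * cache_hit * t_mh + (1 - tlb_hit) * (1 - cache_hit) * t_mm)
print(avg)                              # 3.8 -> approximately 4 ns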
Question 33
A processor uses 2-level page tables for virtual to physical address translation. Page tables for both levels are stored in the main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte addressable. For virtual to physical address translation, the 10 most significant bits of the virtual address are used as index into the first level page table while the next 10 bits are used as index into the second level page table. The 12 least significant bits of the virtual address are used as offset within the page. Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor has a translation look-aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual page numbers and the corresponding physical page numbers. The processor also has a physically addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache access time is 1 ns, and TLB access time is also 1 ns. Suppose a process has only the following pages in its virtual address space: two contiguous code pages starting at virtual address 0x00000000, two contiguous data pages starting at virtual address 0x00400000, and a stack page starting at virtual address 0xFFFFF000. The amount of memory required for storing the page tables of this process is:
A
8 KB
B
12 KB
C
16 KB
D
20 KB
Memory Management    GATE-CS-2003    
Question 33 Explanation: 
Breakup of given addresses into bit form:-
32 bits are broken up as 10 bits (first-level index) | 10 bits (second-level index) | 12 bits (offset)

first code page:
0x00000000 = 0000 0000 00 | 00 0000 0000 | 0000 0000 0000

so next code page will start from
0x00001000 = 0000 0000 00 | 00 0000 0001 | 0000 0000 0000

first data page:
0x00400000 = 0000 0000 01 | 00 0000 0000 | 0000 0000 0000

so next data page will start from
0x00401000 = 0000 0000 01 | 00 0000 0001 | 0000 0000 0000

only one stack page:
0xFFFFF000 = 1111 1111 11 | 11 1111 1111 | 0000 0000 0000

Now, the first-level page table occupies exactly one page, and the
process uses only 3 distinct first-level indices, i.e. 0000 0000 00,
0000 0000 01 and 1111 1111 11.
For each of these distinct entries, we need one page for a
second-level page table.

Hence, we need 4 pages in total, and page size = 2^12 = 4 KB.
Therefore, memory required to store the page tables = 4 * 4 KB = 16 KB.
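The count of distinct first-level entries can be verified with a short Python sketch (not part of the original solution):

addresses = [0x00000000, 0x00001000, 0x00400000, 0x00401000, 0xFFFFF000]
top_indices = {va >> 22 for va in addresses}    # top 10 bits select the first-level entry
tables = 1 + len(top_indices)                   # one first-level table + one second-level table per index
print(tables, tables * 4, "KB")                 # 4 tables -> 16 KB, since each table fills one 4 KB page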
Question 34
Which of the following is not a form of memory?
A
instruction cache
B
instruction register
C
instruction opcode
D
translation lookaside buffer
Memory Management    GATE-CS-2002    
Question 34 Explanation: 
Instruction Cache - used for storing instructions that are frequently used.
Instruction Register - part of the CPU's control unit that holds the instruction currently being executed.
Instruction Opcode - the portion of a machine language instruction that specifies the operation to be performed.
Translation Lookaside Buffer - a memory cache that stores recent translations of virtual addresses to physical addresses for faster access.
All of the above except the instruction opcode are forms of memory. Thus, (C) is the correct choice.
Question 35
The optimal page replacement algorithm will select the page that
A
Has not been used for the longest time in the past.
B
Will not be used for the longest time in the future.
C
Has been used least number of times.
D
Has been used most number of times.
Memory Management    GATE-CS-2002    
Question 35 Explanation: 
The optimal page replacement algorithm will select the page whose next occurrence will be after the longest time in the future. For example, if we need to swap out a page and there are two candidates, one that will be used after 10 s and the other after 5 s, then the algorithm will swap out the page that will be required 10 s later. Thus, (B) is the correct choice.
Question 36
Dynamic linking can cause security concerns because:
A
Security is dynamic
B
The path for searching dynamic libraries is not known till runtime
C
Linking is insecure
D
Cryptographic procedures are not available for dynamic linking
Memory Management    GATE-CS-2002    
Question 36 Explanation: 
Static linking and static libraries: the linker makes a copy of all used library functions into the executable file. Static linking creates larger binary files and needs more space on disk and in main memory. Examples of static libraries (libraries which are statically linked) are .a files in Linux and .lib files in Windows.

Dynamic linking and dynamic libraries: dynamic linking does not require the code to be copied; it is done by just placing the name of the library in the binary file. The actual linking happens when the program is run, when both the binary file and the library are in memory. Examples of dynamic libraries (libraries which are linked at run-time) are .so files in Linux and .dll files in Windows.

In dynamic linking, the path for searching dynamic libraries is not known till runtime, which is the security concern, so (B) is correct.
Question 37
Which of the following statements is false?
A
Virtual memory implements the translation of a program‘s address space into physical memory address space
B
Virtual memory allows each program to exceed the size of the primary memory
C
Virtual memory increases the degree of multiprogramming
D
Virtual memory reduces the context switching overhead
Memory Management    GATE-CS-2001    
Question 37 Explanation: 
Virtual memory does not reduce context switching overhead; if anything, switching address spaces adds to it, so statement (D) is false. See question 4 of http://www.geeksforgeeks.org/operating-systems-set-2/
Question 38
The process of assigning load addresses to the various parts of the program and adjusting the code and data in the program to reflect the assigned addresses is called
A
Assembly
B
Parsing
C
Relocation
D
Symbol resolution
Memory Management    GATE-CS-2001    
Question 38 Explanation: 

Relocation of code is the process done by the linker-loader when a program is copied from external storage into main memory.
A linker relocates the code by searching files and libraries to replace symbolic references of libraries with actual usable addresses in memory before running a program.
 
Thus, option (C) is the answer.
 
Question 39
Where does the swap space reside?
A
RAM
B
Disk
C
ROM
D
On-chip cache
Memory Management    GATE-CS-2001    
Question 39 Explanation: 
Swap space is an area on disk that temporarily holds a process memory image. When memory is full and a process needs memory, inactive parts of processes are put in the swap space on disk.
Question 40
Consider a virtual memory system with FIFO page replacement policy. For an arbitrary page access pattern, increasing the number of page frames in main memory will
A
always decrease the number of page faults
B
always increase the number of page faults
C
sometimes increase the number of page faults
D
never affect the number of page faults
Memory Management    GATE-CS-2001    
Question 41
Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4KB, what is the approximate size of the page table?
A
16 MB
B
8 MB
C
2 MB
D
24 MB
Memory Management    GATE-CS-2001    
Question 42
Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes 1 microsecond. Then a 99.99% hit ratio results in average memory access time of (GATE CS 2000)
A
1.9999 milliseconds
B
1 millisecond
C
9.999 microseconds
D
1.9999 microseconds
Memory Management    GATE-CS-2000    
Question 42 Explanation: 
When a page request comes, the page table is searched first; if the page is present, it is fetched directly from memory, so the time required is only the memory access time. If the required page is not found, it must first be brought in from disk before the memory access can complete; this extra time is called the page fault service time. Let the hit ratio be p, the memory access time be t1, and the page fault service time be t2.
Hence, average memory access time = p*t1 + (1-p)*t2
                                  = 0.9999 * 1 µs + 0.0001 * (10 * 1000 µs)
                                  = 1.9999 µs

This explanation is contributed by Abhishek Kumar. Also, see question 1 of http://www.geeksforgeeks.org/operating-systems-set-3/
Question 43
Consider a system with byte-addressable memory, 32 bit logical addresses, 4 kilobyte page size and page table entries of 4 bytes each. The size of the page table in the system in megabytes is ___________
A
2
B
4
C
8
D
16
Memory Management    GATE-CS-2015 (Set 1)    
Question 43 Explanation: 
Number of entries in page table = 2^32 / 4 KB  
                                = 2^32 / 2^12 
                                = 2^20

Size of page table = (no. of page table entries) * (size of an entry) 
                   = 2^20 * 4 bytes 
                   = 2^22 bytes 
                   = 4 megabytes
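The same figure from a one-line Python check:

entries = 2**32 // (4 * 1024)        # one entry per 4 KB page of the 32-bit address space
print(entries * 4 // 2**20, "MB")    # 4 MB with 4-byte entries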
Question 44
A computer system implements a 40 bit virtual address, page size of 8 kilobytes, and a 128-entry translation look-aside buffer (TLB) organized into 32 sets each having four ways. Assume that the TLB tag does not store any process id. The minimum length of the TLB tag in bits is _________
A
20
B
10
C
11
D
22
Memory Management    GATE-CS-2015 (Set 2)    
Question 44 Explanation: 
Total virtual address size = 40 bits

Since there are 32 sets, the set index takes 5 bits

Since the page size is 8 kilobytes, the page offset takes 13 bits

Minimum tag size = 40 - 5 - 13 = 22 bits
Question 45
Consider six memory partitions of size 200 KB, 400 KB, 600 KB, 500 KB, 300 KB, and 250 KB, where KB refers to kilobyte. These partitions need to be allotted to four processes of sizes 357 KB, 210 KB, 468 KB and 491 KB in that order. If the best fit algorithm is used, which partitions are NOT allotted to any process?
A
200 KB and 300 KB
B
200 KB and 250 KB
C
250 KB and 300 KB
D
300 KB and 400 KB
Memory Management    GATE-CS-2015 (Set 2)    
Question 45 Explanation: 
Best fit allocates the smallest block among those that are large enough for the new process. So the memory blocks are allocated in below order.
357 ---> 400
210 ---> 250
468 ---> 500
491 ---> 600
So the remaining (unallotted) partitions are 200 KB and 300 KB. Refer to http://courses.cs.vt.edu/~csonline/OS/Lessons/MemoryAllocation/index.html for details of all allocation strategies.
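For reference, a minimal best-fit sketch in Python (function and variable names are ours):

def best_fit(partitions, processes):
    free, placed = list(partitions), []
    for size in processes:
        candidates = [p for p in free if p >= size]
        chosen = min(candidates) if candidates else None   # smallest partition that still fits
        if chosen is not None:
            free.remove(chosen)
        placed.append(chosen)
    return placed, free

placed, leftover = best_fit([200, 400, 600, 500, 300, 250], [357, 210, 468, 491])
print(placed)     # [400, 250, 500, 600]
print(leftover)   # [200, 300] -> the partitions not allotted to any process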
Question 46
A computer system implements 8 kilobyte pages and a 32-bit physical address space. Each page table entry contains a valid bit, a dirty bit, three permission bits, and the translation. If the maximum size of the page table of a process is 24 megabytes, the length of the virtual address supported by the system is _______________ bits
A
36
B
32
C
28
D
40
Memory Management    GATE-CS-2015 (Set 2)    
Question 46 Explanation: 
Max size of virtual address can be calculated by 
calculating maximum number of page table entries.

Maximum Number of page table entries can be calculated 
using given maximum page table size and size of a page 
table entry.

Given maximum page table size = 24 MB

Let us calculate size of a page table entry.

A page table entry has following number of bits.
1 (valid bit) + 
1 (dirty bit) + 
3 (permission bits) + 
x bits to store physical address space of a page.

Value of x = (Total bits in physical address) - 
             (Total bits for addressing within a page)
Since size of a page is 8 kilobytes, total bits needed within
a page is 13.
So value of x = 32 - 13 = 19

Putting value of x, we get size of a page table entry =
                                   1 + 1 + 3  + 19 = 24bits.

Number of page table entries 
           = (Page table size) / (Size of an entry)
           = (24 megabytes) / (24 bits)
           = (24 * 2^20 * 8 bits) / (24 bits)
           = 2^23

Virtual address space size 
             = (Number of page table entries) * (Page size)
             = 2^23 * 8 kilobytes
             = 2^23 * 2^13 bytes
             = 2^36 bytes
Therefore, length of the virtual address = 36 bits
Question 47
Which one of the following is NOT shared by the threads of the same process?
A
Stack
B
Address Space
C
File Descriptor Table
D
Message Queue
Memory Management    GATE-IT-2004    
Question 47 Explanation: 
Threads cannot share the stack (used for maintaining function calls) as each thread may have its own sequence of function calls. Image source: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/4_Threads.html
Question 48
Consider a fully associative cache with 8 cache blocks (numbered 0-7) and the following sequence of memory block requests: 4, 3, 25, 8, 19, 6, 25, 8, 16, 35, 45, 22, 8, 3, 16, 25, 7 If LRU replacement policy is used, which cache block will have memory block 7?  
A
4
B
5
C
6
D
7
Memory Management    GATE-IT-2004    
Question 48 Explanation: 
There are 8 cache blocks (fully associative), so the first 8 distinct memory blocks fill the cache. Given the request sequence 4, 3, 25, 8, 19, 6, 25, 8, 16, 35, 45, 22, 8, 3, 16, 25, 7, the cache contents (blocks 0 to 7) evolve as follows:
  • 4 3 25 8 19 6 16 35    // 25 and 8 are hits, so 16 and 35 fill the last two blocks
  • 45 3 25 8 19 6 16 35   // 45 replaces 4, the least recently used block
  • 45 22 25 8 19 6 16 35  // 22 replaces 3
  • 45 22 25 8 3 6 16 35   // 8, 16 and 25 are hits; 3 replaces 19
  • 45 22 25 8 3 7 16 35   // 7 replaces 6 and lands in block 5
Therefore, the answer is (B).
Question 49
The storage area of a disk has innermost diameter of 10 cm and outermost diameter of 20 cm. The maximum storage density of the disk is 1400bits/cm. The disk rotates at a speed of 4200 RPM. The main memory of a computer has 64-bit word length and 1µs cycle time. If cycle stealing is used for data transfer from the disk, the percentage of memory cycles stolen for transferring one word is  
A
0.5%
B
1%
C
5%
D
10%
Memory Management    GATE-IT-2004    
Question 49 Explanation: 

Innermost diameter = 10 cm, storage density = 1400 bits/cm.
Capacity of each track = 3.14 * diameter * density = 3.14 * 10 * 1400 = 43960 bits.
Time for one rotation = 60/4200 = 1/70 seconds, so the disk reads 43960 * 70 ≈ 3.08 * 10^6 bits per second.
The main memory has a 64-bit word length and a 1 µs cycle time, so it can transfer 64 * 10^6 bits per second.
Percentage of memory cycles stolen for the transfer = (3.08 * 10^6) / (64 * 10^6) ≈ 5%.
 
Thus, option (C) is correct.
 
Question 50
A disk has 200 tracks (numbered 0 through 199). At a given time, it was servicing the request of reading data from track 120, and at the previous request, service was for track 90. The pending requests (in order of their arrival) are for track numbers 30, 70, 115, 130, 110, 80, 20, 25. How many times will the head change its direction for the disk scheduling policies SSTF (Shortest Seek Time First) and FCFS (First Come First Serve)?
A
2 and 3
B
3 and 3
C
3 and 4
D
4 and 4
Memory Management    GATE-IT-2004    
Question 50 Explanation: 
According to Shortest Seek Time First: 90 -> 120 -> 115 -> 110 -> 130 -> 80 -> 70 -> 30 -> 25 -> 20. Changes of direction (3 in total): 120->115, 110->130, 130->80. According to First Come First Serve: 90 -> 120 -> 30 -> 70 -> 115 -> 130 -> 110 -> 80 -> 20 -> 25. Changes of direction (4 in total): 120->30, 30->70, 130->110, 20->25. Therefore, the answer is (C).
Question 51
In a virtual memory system, size of virtual address is 32-bit, size of physical address is 30-bit, page size is 4 Kbyte and size of each page table entry is 32-bit. The main memory is byte addressable. Which one of the following is the maximum number of bits that can be used for storing protection and other information in each page table entry?
A
2
B
10
C
12
D
14
Memory Management    GATE-IT-2004    
Question 51 Explanation: 

Virtual memory = 2^32 bytes. Physical memory = 2^30 bytes.
Page size = frame size = 4 KB = 2^2 * 2^10 bytes = 2^12 bytes.
Number of frames = physical memory / frame size = 2^30 / 2^12 = 2^18.
Therefore, the number of bits needed for the frame number = 18.
Page table entry size = bits for the frame number + other information, so the other information can use at most 32 - 18 = 14 bits.
 
Thus, option (D) is correct.
 
Question 52
In a particular Unix OS, each data block is of size 1024 bytes, each node has 10 direct data block addresses and three additional addresses: one for single indirect block, one for double indirect block and one for triple indirect block. Also, each block can contain addresses for 128 blocks. Which one of the following is approximately the maximum size of a file in the file system?  
A
512 MB
B
2GB
C
8GB
D
16GB
Memory Management    GATE-IT-2004    
Question 52 Explanation: 
Maximum file size = sum of the sizes of all the data blocks
                    whose addresses belong to the file.

Given:
Size of 1 data block = 1024 Bytes
No. of addresses which 1 data block can contain = 128

Now, Maximum File Size can be calculated as:
10 direct addresses of data blocks = 10*1024
1 single indirect data block = 128*1024
1 doubly indirect data block = 128*128*1024
1 triple indirect data block = 128*128*128*1024

Hence,
Max File Size = 10*1024 + 128*1024 + 128*128*1024 + 
                128*128*128*1024 Bytes
              = 2113674*1024 Bytes
              = 2.0157 GB ~ 2GB
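The arithmetic can be reproduced with a short Python check:

block = 1024                          # bytes per data block
addrs_per_block = 128                 # block addresses that fit in one indirect block
max_size = (10 + addrs_per_block + addrs_per_block**2 + addrs_per_block**3) * block
print(max_size / 2**30, "GB")         # ~2.0 GB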
Question 53
A two-way switch has three terminals a, b and c. In the ON position (logic value 1), a is connected to b, and in the OFF position, a is connected to c. Two of these two-way switches S1 and S2 are connected to a bulb as shown in the accompanying figure (not reproduced here). Which of the following expressions, if true, will always result in the lighting of the bulb?
A
S1.S2'
B
S1+S2
C
(S1⊕S2)'
D
S1⊕S2
Memory Management    Gate IT 2005    
Question 53 Explanation: 
If we draw the truth table of the above circuit, we get:
S1  S2  Bulb
0   0   On
0   1   Off
1   0   Off
1   1   On
The bulb is therefore on exactly when (S1 ⊕ S2)' is true, so the answer is (C).
Question 54
Consider a 2-way set associative cache memory with 4 sets and total 8 cache blocks (0-7) and a main memory with 128 blocks (0-127). What memory blocks will be present in the cache after the following sequence of memory block references if LRU policy is used for cache block replacement. Assuming that initially the cache did not have any memory block from the current job? 0 5 3 9 7 0 16 55  
A
0 3 5 7 16 55
B
0 3 5 7 9 16 55
C
0 5 7 9 16 55
D
3 5 7 9 16 55
Memory Management    Gate IT 2005    
Question 54 Explanation: 
2-way set associative cache memory, .i.e K = 2.

No of sets is given as 4, i.e. S = 4 ( numbered 0 - 3 )

No of blocks in cache memory is given as 8, i.e. N =8 ( numbered from 0 -7)

Each set in cache memory contains 2 blocks.

The number of blocks in the main memory is 128, i.e  M = 128.  ( numbered from 0 -127)
A referred block numbered X of the main memory is placed in the 
set numbered ( X mod S ) of the the cache memory. In that set, the 
block can be placed at any location, but if the set has already become
 full, then the current referred block of the main memory should replace
 a block in that set according to some replacement policy. Here 
the replacement policy is LRU ( i.e. Least Recently Used block should 
be replaced with currently referred block).

X ( Referred block no ) and 
the corresponding Set values are as follows:

X-->set no ( X mod 4 )

0--->0   ( block 0 is placed in set 0, set 0 has 2 empty block locations,
              block 0 is placed in any one of them  )

5--->1   ( block 5 is placed in set 1, set 1 has 2 empty block locations,
              block 5 is placed in any one of them  )

3--->3  ( block 3 is placed in set 3, set 3 has 2 empty block locations,
             block 3 is placed in any one of them  )

9--->1  ( block 9 is placed in set 1, set 1 has currently 1 empty block location,
             block 9 is placed in that, now set 1 is full, and block 5 is the 
             least recently used block  )

7--->3  ( block 7 is placed in set 3, set 3 has 1 empty block location, 
             block 7 is placed in that, set 3 is full now, 
             and block 3 is the least recently used block)

0--->block 0 is referred again, and it is present in the cache memory in set 0,
            so no need to put again this block into the cache memory.

16--->0  ( block 16 is placed in set 0, set 0 has 1 empty block location, 
              block 16 is placed in that, set 0 is full now, and block 0 is the LRU one)

55--->3 ( block 55 should be placed in set 3, but set 3 is full with block 3 and 7, 
             hence need to replace one block with block 55, as block 3 is the least 
             recently used block in the set 3, it is replaced with block 55.
Hence the main memory blocks present in the cache memory are: 0, 5, 7, 9, 16, 55. (Note: block 3 is not present in the cache memory; it was replaced by block 55.)
Question 55
A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1 cm and 8 cm respectively. The innermost track has a storage capacity of 10 MB. What is the total amount of data that can be stored on the disk if it is used with a drive that rotates it with (i) Constant Linear Velocity (ii) Constant Angular Velocity?
A
(i) 80 MB (ii) 2040 MB
B
(i) 2040 MB (ii) 80 MB
C
(i) 80 MB (ii) 360 MB
D
(i) 360 MB (ii) 80 MB
Memory Management    Gate IT 2005    
Question 55 Explanation: 

Constant linear velocity:
Diameter of the innermost track d = 1 cm, so its circumference = 3.14 * 1 = 3.14 cm, and it stores 10 MB. Hence 1 cm of track holds 10 / 3.14 ≈ 3.18 MB.
The radii of the 8 equidistant tracks are 0.5, 1, 1.5, ..., 4 cm, so the total track length = 2 * 3.14 * (0.5 + 1 + 1.5 + 2 + 2.5 + 3 + 3.5 + 4) ≈ 113 cm.
Total amount of data that can be stored on the disk = 113 * 3.18 ≈ 360 MB.

Constant angular velocity:
With CAV the disk rotates at a constant angular speed, so every track takes the same rotation time and holds the same amount of data as the innermost track. Total amount of data that can be stored on the disk = 8 * 10 = 80 MB.

Thus, option (D) is correct.
Question 56
Consider a computer system with 40-bit virtual addressing and page size of sixteen kilobytes. If the computer system has a one-level page table per process and each page table entry requires 48 bits, then the size of the per-process page table is _________megabytes.   Note : This question was asked as Numerical Answer Type.
A
384
B
48
C
192
D
96
Memory Management    GATE-CS-2016 (Set 1)    
Question 56 Explanation: 
Size of the virtual address space = 2^40 bytes. Page size = 16 KB = 2^14 bytes.
Number of pages = (size of virtual address space) / (page size) = 2^40 / 2^14 = 2^26.
Size of page table = 2^26 * (48/8) bytes = 6 * 2^26 bytes = 6 * 64 MB = 384 MB.
Thus, (A) is the correct choice.
Question 57
Consider a computer system with ten physical page frames. The system is provided with an access sequence (a1, a2, ..., a20, a1, a2, ..., a20), where each ai is a distinct virtual page number. The difference in the number of page faults between the last-in-first-out page replacement policy and the optimal page replacement policy is __________ [Note that this question was originally a Fill-in-the-Blanks question]
A
0
B
1
C
2
D
3
Memory Management    GATE-CS-2016 (Set 1)    
Question 57 Explanation: 
LIFO stands for last-in, first-out, i.e. the most recently loaded page is replaced.
LIFO: a1 to a10 cause page faults (10 faults). Then a11 replaces a10 (the last one in), a12 replaces a11, and so on up to a20, giving 10 more faults; a20 is now the most recently loaded page and a1...a9 remain as they are. In the second pass, a1 to a9 are already present (0 faults). a10 replaces a20, a11 replaces a10, and so on, giving 11 faults for a10 to a20. Total = 10 + 10 + 11 = 31 faults.
Optimal: a1 to a10 cause page faults (10 faults). Then a11 replaces a10, because among a1 to a10 it is a10 that will be used farthest in the future; a12 replaces a11, and so on, giving 10 faults for a11 to a20, with a20 resident and a1...a9 untouched. In the second pass, a1 to a9 are already present (0 faults). a10 replaces a1 (which is not used again), and so on; a10 to a19 give 10 faults, and a20 is already present. Total = 10 + 10 + 10 = 30 faults.
Difference = 31 - 30 = 1.
Question 58
In which one of the following page replacement algorithms is it possible for the page fault rate to increase even when the number of allocated frames increases?
A
LRU (Least Recently Used)
B
OPT (Optimal Page Replacement)
C
MRU (Most Recently Used)
D
FIFO (First In First Out)
Memory Management    GATE-CS-2016 (Set 2)    
Discuss it


Question 58 Explanation: 
FIFO page replacement can give more page faults when the number of page frames is increased; this situation is known as Belady's anomaly. For example, for the reference string 3 2 1 0 3 2 4 3 2 1 0 4, FIFO causes 9 page faults with 3 frames but 10 page faults with 4 frames.
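The two fault counts can be verified with a small FIFO simulator (a minimal sketch; the function name is ours):

from collections import deque

def fifo_faults(refs, frames):
    mem, queue, faults = set(), deque(), 0
    for p in refs:
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            mem.remove(queue.popleft())      # evict the oldest resident page
        mem.add(p)
        queue.append(p)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))    # prints: 9 10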
Question 59
The address sequence generated by tracing a particular program executing in a pure demand paging system with 100 bytes per page is
0100, 0200, 0430, 0499, 0510, 0530, 0560, 0120, 0220, 0240, 0260, 0320, 0410.
Suppose that the memory can store only one page, and if x is the address which causes a page fault then the bytes from addresses x to x + 99 are loaded into memory.
How many page faults will occur ?
A
0
B
4
C
7
D
8
Memory Management    Gate IT 2007    
Discuss it


Question 59 Explanation: 

Address    Result           Bytes now in memory
0100       page fault       0100 - 0199
0200       page fault       0200 - 0299
0430       page fault       0430 - 0529
0499       no page fault    0430 - 0529
0510       no page fault    0430 - 0529
0530       page fault       0530 - 0629
0560       no page fault    0530 - 0629
0120       page fault       0120 - 0219
0220       page fault       0220 - 0319
0240       no page fault    0220 - 0319
0260       no page fault    0220 - 0319
0320       page fault       0320 - 0419
0410       no page fault    0320 - 0419
There are 7 page faults, so the answer is (C).
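The trace can be reproduced with a few lines of Python (an illustrative sketch; only the addresses from the question are used, and the variable names are ours):

addresses = [100, 200, 430, 499, 510, 530, 560, 120, 220, 240, 260, 320, 410]

resident = None            # (start, end) of the single 100-byte page currently in memory
faults = 0
for x in addresses:
    if resident is None or not (resident[0] <= x <= resident[1]):
        faults += 1
        resident = (x, x + 99)    # on a fault, bytes x .. x+99 are loaded
print(faults)                     # prints: 7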
Question 60
A paging scheme uses a Translation Look-aside Buffer (TLB). A TLB-access takes 10 ns and a main memory access takes 50 ns. What is the effective access time(in ns) if the TLB hit ratio is 90% and there is no page-fault?
A
54
B
60
C
65
D
75
Memory Management    Gate IT 2008    
Discuss it


Question 60 Explanation: 
Effective access time = hit ratio * (access time on a TLB hit) + miss ratio * (access time on a TLB miss). TLB access time = 10 ns, memory access time = 50 ns, hit ratio = 90%. On a TLB hit the access takes 10 + 50 = 60 ns; on a TLB miss it takes 10 + 50 + 50 = 110 ns, because one extra memory access is needed to read the page table entry. E.A.T. = 0.90 * 60 + 0.10 * 110 = 54 + 11 = 65 ns.
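A tiny Python check of the same calculation (illustrative; it assumes a one-level page table, so a TLB miss costs exactly one extra memory access):

tlb_ns, mem_ns = 10, 50
hit_ratio, miss_ratio = 0.90, 0.10
hit_time  = tlb_ns + mem_ns            # TLB hit: one memory access for the data
miss_time = tlb_ns + mem_ns + mem_ns   # TLB miss: page-table access + data access
print(hit_ratio * hit_time + miss_ratio * miss_time)   # prints: 65.0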
Question 61
Assume that a main memory with only 4 page frames, each of 16 bytes, is initially empty. The CPU generates the following sequence of virtual addresses and uses the Least Recently Used (LRU) page replacement policy.
0, 4, 8, 20, 24, 36, 44, 12, 68, 72, 80, 84, 28, 32, 88, 92
How many page faults does this sequence cause? What are the page numbers of the pages present in the main memory at the end of the sequence?
A
6 and 1, 2, 3, 4
B
7 and 1, 2, 4, 5
C
8 and 1, 2, 4, 5
D
9 and 1, 2, 3, 5
Memory Management    Gate IT 2008    
Discuss it


Question 61 Explanation: 
With a 16-byte page, the page number of each address is the address divided by 16, so in terms of page numbers the reference string is 0, 0, 0, 1, 1, 2, 2, 0, 4, 4, 5, 5, 1, 2, 5, 5. Simulating LRU with 4 frames: pages 0, 1, 2, 4 and 5 each fault on their first reference (5 faults), and loading page 5 evicts the least recently used page 1. The later reference to page 1 therefore faults and evicts page 2, and the following reference to page 2 faults and evicts page 0; all other references hit. That gives 7 page faults in total, and the frames finally hold pages 1, 2, 4 and 5, so option (B) is correct.
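The result can be checked with a short LRU simulator; a minimal Python sketch (the use of OrderedDict and the names are ours):

from collections import OrderedDict

def lru_simulate(addresses, frames, page_size=16):
    mem, faults = OrderedDict(), 0            # keys kept in recency order
    for addr in addresses:
        page = addr // page_size
        if page in mem:
            mem.move_to_end(page)             # mark as most recently used
            continue
        faults += 1
        if len(mem) == frames:
            mem.popitem(last=False)           # evict the least recently used page
        mem[page] = True
    return faults, sorted(mem)

addrs = [0, 4, 8, 20, 24, 36, 44, 12, 68, 72, 80, 84, 28, 32, 88, 92]
print(lru_simulate(addrs, 4))                 # prints: (7, [1, 2, 4, 5])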
Question 62
Match the following flag bits used in the context of virtual memory management on the left side with the different purposes on the right side of the table below.
[The matching table of flag bits (I–IV) and purposes (a–d) is shown as an image in the original question.]
A
I-d, II-a, III-b, IV-c
B
I-b, II-c, III-a, IV-d
C
I-c, II-d, III-a, IV-b
D
I-b, II-c, III-d, IV-a
Memory Management    Gate IT 2008    
Discuss it


Question 63
Consider a computer with a 4-way set-associative mapped cache with the following characteristics: a total of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. The number of bits in the TAG, SET and WORD fields, respectively, are:
A
7, 6, 7
B
8, 5, 7
C
8, 6, 6
D
9, 4, 7
Memory Management    Computer Organization and Architecture    Gate IT 2008    
Discuss it


Question 63 Explanation: 
The word size is 1 byte and a block is 128 words, so each block is 128 bytes and the WORD (offset) field needs log2(128) = 7 bits. The number of cache blocks = cache size / block size = 8 KB / 128 B = 64. Since the cache is 4-way set associative, the number of sets = 64 / 4 = 16, so the SET field needs log2(16) = 4 bits. Main memory is 1 MB, so a physical address is 20 bits, and the TAG field needs the remaining 20 − 4 − 7 = 9 bits. (Equivalently, each set maps 1 MB / 16 = 2^16 bytes = 2^9 blocks of main memory, and 9 bits are needed to distinguish them.) Hence (TAG, SET, WORD) = (9, 4, 7), which is option (D).   This solution is contributed by Namita Singh.
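The same field widths can be computed mechanically; a minimal Python sketch (the variable names are ours, not part of the original solution):

import math

main_memory   = 1 << 20            # 1 MB -> 20-bit physical address
block_size    = 128                # words per block, word = 1 byte
cache_size    = 8 * 1024           # 8 KB
associativity = 4

word_bits = int(math.log2(block_size))                          # 7
sets      = cache_size // (block_size * associativity)          # 16
set_bits  = int(math.log2(sets))                                # 4
tag_bits  = int(math.log2(main_memory)) - set_bits - word_bits  # 9
print(tag_bits, set_bits, word_bits)                            # prints: 9 4 7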
Question 64
Consider a computer with a 4-way set-associative mapped cache with the following characteristics: a total of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. While the CPU is accessing memory location 0C795H, the contents of the TAG field of the corresponding cache line are
A
000011000
B
110001111
C
00011000
D
110010101
Memory Management    Computer Organization and Architecture    Gate IT 2008    
Discuss it


Question 64 Explanation: 
Using the field widths derived in the previous question: TAG = 9 bits, SET = 4 bits, WORD = 7 bits. The 20-bit address 0C795H is 0000 1100 0111 1001 0101 in binary. Therefore
TAG  = first 9 bits = 000011000
SET  = next 4 bits  = 1111
WORD = last 7 bits  = 0010101
The TAG contents are 000011000, so option (A) is correct.   This solution is contributed by Namita Singh.
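The bit slicing can be checked in Python (an illustrative sketch using the 9/4/7 split from the previous question; the variable names are ours):

address = 0x0C795                      # 20-bit physical address
word_bits, set_bits = 7, 4

word = address & ((1 << word_bits) - 1)
set_index = (address >> word_bits) & ((1 << set_bits) - 1)
tag = address >> (word_bits + set_bits)
print(format(tag, '09b'), format(set_index, '04b'), format(word, '07b'))
# prints: 000011000 1111 0010101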
Question 65
Linked Questions 65 and 66
Assume GeeksforGeeks implemented a new page replacement algorithm for virtual memory and named it 'Geek'. Geek works as follows:
  • Each page in memory maintains a count that is incremented whenever the page is referenced and no page fault occurs.
  • When a page fault occurs, the resident page with zero count (or, failing that, the smallest count) is replaced by the new page; if more than one page has that smallest count, FIFO order is used to choose the page to replace.
Find the number of page faults using the Geek algorithm for the following reference string (assume three physical frames are available, initially free).
Reference String : “A B C D A B E A B C D E B A D”
A
7
B
9
C
11
D
13
Memory Management    GATE 2017 Mock    
Discuss it


Question 65 Explanation: 
With three initially empty frames, the first seven references A, B, C, D, A, B, E all fault; every resident count is still zero, so FIFO order decides each victim (A, then B, C and D are evicted in turn). The next references to A and B hit and raise their counts to 1. C, D and E then each fault, and each time the only zero-count page is replaced (E, then C, then D) while A and B stay resident. B and A hit again, and the final reference to D faults, replacing E. Total = 7 + 3 + 1 = 11 page faults, so option (C) is correct.
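A short Python simulation of the Geek policy under the rules stated above (a minimal sketch; the function and variable names are ours). It also runs plain LRU on the same string, which is the comparison needed in the next question:

def geek_faults(refs, frames):
    mem, faults, time = {}, 0, 0          # page -> [hit_count, load_time]
    for p in refs:
        time += 1
        if p in mem:
            mem[p][0] += 1                # referenced with no fault: bump its count
            continue
        faults += 1
        if len(mem) == frames:
            # victim: smallest count, ties broken FIFO (oldest load time)
            victim = min(mem, key=lambda q: (mem[q][0], mem[q][1]))
            del mem[victim]
        mem[p] = [0, time]
    return faults

def lru_faults(refs, frames):
    mem, faults = [], 0                   # ordered from least to most recently used
    for p in refs:
        if p in mem:
            mem.remove(p)
            mem.append(p)                 # move to most-recently-used position
            continue
        faults += 1
        if len(mem) == frames:
            mem.pop(0)                    # evict the least recently used page
        mem.append(p)
    return faults

refs = list("ABCDABEABCDEBAD")
print(geek_faults(refs, 3), lru_faults(refs, 3))   # prints: 11 13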
Question 66
If LRU and Geek page replacement are compared (in terms of page faults) for the above reference string only, which of the following statements is correct?
A
LRU and Geek are same
B
LRU is better than Geek
C
Geek is better than LRU
D
None
Memory Management    GATE 2017 Mock    
Discuss it


Question 66 Explanation: 
For the reference string A B C D A B E A B C D E B A D with three frames, LRU causes 13 page faults (only the references to A and B immediately following E are hits), while the Geek algorithm causes 11. Hence Geek performs better than LRU on this string, and option (C) is correct.
There are 66 questions to complete.
