Question 1 
What is the recurrence for the worst case of QuickSort, and what is the time complexity in the worst case?
Recurrence is T(n) = T(n-2) + O(n) and time complexity is O(n^2)  
Recurrence is T(n) = T(n-1) + O(n) and time complexity is O(n^2)  
Recurrence is T(n) = 2T(n/2) + O(n) and time complexity is O(nLogn)  
Recurrence is T(n) = T(n/10) + T(9n/10) + O(n) and time complexity is O(nLogn) 
Question 1 Explanation:
The worst case of QuickSort occurs when the picked pivot is always one of the end (corner) elements of the sorted array. In the worst case, QuickSort recursively calls one subproblem of size 0 and another of size (n-1). So the recurrence is
T(n) = T(n-1) + T(0) + O(n)
The above expression can be rewritten as
T(n) = T(n-1) + O(n)
void exchange(int *a, int *b)
{
    int temp;
    temp = *a;
    *a = *b;
    *b = temp;
}

int partition(int arr[], int si, int ei)
{
    int x = arr[ei];
    int i = (si - 1);
    int j;
    for (j = si; j <= ei - 1; j++)
    {
        if (arr[j] <= x)
        {
            i++;
            exchange(&arr[i], &arr[j]);
        }
    }
    exchange(&arr[i + 1], &arr[ei]);
    return (i + 1);
}
/* Implementation of Quick Sort
   arr[] --> Array to be sorted
   si    --> Starting index
   ei    --> Ending index
*/
void quickSort(int arr[], int si, int ei)
{
    int pi; /* Partitioning index */
    if (si < ei)
    {
        pi = partition(arr, si, ei);
        quickSort(arr, si, pi - 1);
        quickSort(arr, pi + 1, ei);
    }
}
Question 2 
Suppose we have a O(n) time algorithm that finds median of an unsorted array.
Now consider a QuickSort implementation where we first find the median using the above algorithm, then use the median as pivot. What will be the worst case time complexity of this modified QuickSort?
O(n^2 Logn)  
O(n^2)  
O(n Logn Logn)  
O(nLogn) 
Question 2 Explanation:
If we use median as a pivot element, then the recurrence for all cases becomes
T(n) = 2T(n/2) + O(n)
The above recurrence can be solved using the Master Method; it falls in case 2, giving O(nLogn).
Question 3 
Given an unsorted array with the property that every element is at most k positions away from its position in the sorted array, where k is a positive integer smaller than the size of the array. Which sorting algorithm can be easily modified for sorting this array, and what is the obtainable time complexity?
Insertion Sort with time complexity O(kn)  
Heap Sort with time complexity O(nLogk)  
Quick Sort with time complexity O(kLogk)  
Merge Sort with time complexity O(kLogk) 
Question 3 Explanation:
See http://www.geeksforgeeks.org/nearlysortedalgorithm/ for explanation and implementation.
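For reference, the O(nLogk) idea can be sketched in C: keep a min-heap of the first k+1 elements, repeatedly pop the minimum (which must be the next smallest overall) and push the next array element. The function name and heap layout below are this sketch's own, not taken from the linked article.

```c
#include <assert.h>
#include <string.h>

/* Sift the element at index i down a min-heap of the given size. */
static void heapify_down(int h[], int size, int i)
{
    for (;;) {
        int smallest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < size && h[l] < h[smallest]) smallest = l;
        if (r < size && h[r] < h[smallest]) smallest = r;
        if (smallest == i) break;
        int t = h[i]; h[i] = h[smallest]; h[smallest] = t;
        i = smallest;
    }
}

/* Sort an array in which every element is at most k positions away
   from its sorted position. Each pop/push costs O(log k), so the
   total time is O(n log k). */
void sort_k_sorted(int arr[], int n, int k)
{
    int size = (k + 1 < n) ? k + 1 : n;
    int heap[size];                      /* C99 VLA of the first k+1 items */
    memcpy(heap, arr, size * sizeof(int));
    for (int i = size / 2 - 1; i >= 0; i--)  /* build heap: O(k) */
        heapify_down(heap, size, i);
    int idx = 0;
    for (int i = size; i < n; i++) {     /* pop min, push arr[i] */
        arr[idx++] = heap[0];
        heap[0] = arr[i];
        heapify_down(heap, size, 0);
    }
    while (size > 0) {                   /* drain the remaining heap */
        arr[idx++] = heap[0];
        heap[0] = heap[--size];
        heapify_down(heap, size, 0);
    }
}
```

With arr = {2, 6, 3, 12, 56, 8} and k = 3, the array becomes {2, 3, 6, 8, 12, 56}.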
Question 4 
Which of the following is not true about comparison based sorting algorithms?
The minimum possible time complexity of a comparison based sorting algorithm is O(nLogn) for a random input array  
Any comparison based sorting algorithm can be made stable by using position as a criteria when two elements are compared  
Counting Sort is not a comparison based sorting algorithm  
Heap Sort is not a comparison based sorting algorithm. 
Question 4 Explanation:
See http://www.geeksforgeeks.org/lowerboundoncomparisonbasedsortingalgorithms/ for point A and http://www.geeksforgeeks.org/stabilityinsortingalgorithms/ for point B. Point C is true: Counting Sort is an integer sorting algorithm, not a comparison based one.
Question 5 
What is time complexity of fun()?
int fun(int n)
{
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count += 1;
    return count;
}
O(n^2)  
O(nLogn)  
O(n)  
O(nLognLogn) 
Question 5 Explanation:
For an input integer n, the innermost statement of fun() is executed the following number of times:
n + n/2 + n/4 + ... + 1
So the time complexity T(n) can be written as
T(n) = O(n + n/2 + n/4 + ... + 1) = O(n), since the geometric series sums to less than 2n.
The value of count is also n + n/2 + n/4 + ... + 1.
Question 6 
What is the time complexity of fun()?
int fun(int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        for (int j = i; j > 0; j--)
            count = count + 1;
    return count;
}
Theta (n)  
Theta (n^2)  
Theta (n*Logn)  
Theta (nLognLogn) 
Question 6 Explanation:
The time complexity can be calculated by counting the number of times the expression "count = count + 1;" is executed. The expression is executed 0 + 1 + 2 + 3 + 4 + .... + (n-1) times.
Time complexity = Theta(0 + 1 + 2 + 3 + .. + (n-1)) = Theta(n*(n-1)/2) = Theta(n^2)
Question 7 
The recurrence relation capturing the optimal time of the Tower of Hanoi problem with n discs is: (GATE CS 2012)
T(n) = 2T(n – 2) + 2  
T(n) = 2T(n – 1) + n  
T(n) = 2T(n/2) + 1  
T(n) = 2T(n – 1) + 1 
Question 7 Explanation:
Following are the steps to solve the Tower of Hanoi problem recursively.
Let the three pegs be A, B and C. The goal is to move n discs from A to C:
1. Move n-1 discs from A to B. This leaves disc n alone on peg A.
2. Move disc n from A to C.
3. Move n-1 discs from B to C so they sit on disc n.
The recurrence T(n) for the time complexity of the above recursive solution can be written as T(n) = 2T(n-1) + 1.
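The steps above can be sketched recursively; the function below (a sketch, with names of our choosing) returns the number of moves, which satisfies T(n) = 2T(n-1) + 1 = 2^n - 1.

```c
#include <assert.h>

/* Move n discs from peg 'from' to peg 'to' using peg 'aux'.
   Returns the number of single-disc moves performed. */
unsigned long hanoi(int n, char from, char to, char aux)
{
    if (n == 0)
        return 0;
    unsigned long moves = hanoi(n - 1, from, aux, to); /* step 1 */
    moves += 1;                                        /* step 2: move disc n */
    moves += hanoi(n - 1, aux, to, from);              /* step 3 */
    return moves;
}
```

hanoi(3, 'A', 'C', 'B') performs 7 moves, matching 2^3 - 1.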
Question 8 
Let W(n) and A(n) denote, respectively, the worst case and average case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE? (GATE CS 2012)
(A)
(B)
(C)
(D)
A  
B  
C  
D 
Question 8 Explanation:
The worst case time complexity is always greater than or equal to the average case time complexity, i.e., A(n) = O(W(n)).
Question 9 
Which of the following is not O(n^2)?
(15^10) * n + 12099  
n^1.98  
n^3 / (sqrt(n))  
(2^20) * n 
Question 9 Explanation:
The order of growth of option (C) is n^3 / sqrt(n) = n^2.5, which is higher than n^2.
Question 10 
Which of the given options provides the increasing order of asymptotic complexity of functions f1, f2, f3 and f4?
f1(n) = 2^n, f2(n) = n^(3/2), f3(n) = nLogn, f4(n) = n^(Logn)
f3, f2, f4, f1  
f3, f2, f1, f4  
f2, f3, f1, f4  
f2, f3, f4, f1 
Question 10 Explanation:
f3(n) = nLogn grows the slowest, so f3 is definitely first in the output. Among the remaining, f2(n) = n^(3/2) is polynomial while f1 and f4 grow faster than any polynomial, so f2 is next. One way to compare f1 and f4 is to take the Log of both functions: the order of growth of Log(f1(n)) is Θ(n), while the order of growth of Log(f4(n)) is Θ(Logn * Logn). Since Θ(n) has higher growth than Θ(Logn * Logn), f1(n) grows faster than f4(n).
Another way to compare f1 and f4 is to take a few values:
n = 32: f1 = 2^32, f4 = 32^5 = 2^25
n = 64: f1 = 2^64, f4 = 64^6 = 2^36
Also see http://www.wolframalpha.com/input/?i=2^n+vs+n^%28log+n%29 Thanks to fella26 for suggesting the above explanation.
Question 11 
Consider the following program fragment for reversing the digits in a given integer to obtain a new integer. Let n = D1D2…Dm.
int n, rev;
rev = 0;
while (n > 0)
{
    rev = rev*10 + n%10;
    n = n/10;
}
The loop invariant condition at the end of the ith iteration is: (GATE CS 2004)
n = D1D2….D(m-i) and rev = DmD(m-1)…D(m-i+1)  
n = D(m-i+1)…D(m-1)Dm and rev = D(m-1)….D2D1  
n != rev  
n = D1D2….Dm and rev = DmD(m-1)…D2D1
Question 11 Explanation:
We can verify it by taking an example like n = 54321. After 2 iterations, rev would be 12 and n would be 543, matching the first option with m = 5 and i = 2.
Question 12 
What is the best time complexity of bubble sort?
N^2  
NlogN  
N  
N(logN)^2 
Question 12 Explanation:
Bubble sort is at its best if the input data is already sorted, i.e., if the input is in the same order as the expected output. This can be detected with one boolean variable that records whether any values were swapped in the inner loop.
Consider the following code snippet:
#include <stdio.h>

void swap(int *a, int *b)
{
    int temp = *a;
    *a = *b;
    *b = temp;
}

int main()
{
    int arr[] = {10, 20, 30, 40, 50}, i, j, isSwapped;
    int n = sizeof(arr) / sizeof(*arr);
    isSwapped = 1;
    for (i = 0; i < n - 1 && isSwapped; ++i)
    {
        isSwapped = 0;
        for (j = 0; j < n - i - 1; ++j)
            if (arr[j] > arr[j + 1])
            {
                swap(&arr[j], &arr[j + 1]);
                isSwapped = 1;
            }
    }
    for (i = 0; i < n; ++i)
        printf("%d ", arr[i]);
    return 0;
}
Please observe that in the above code the outer loop runs only once, because the input array is already sorted and no swap occurs in the first pass.
Question 13 
What is the worst case time complexity of insertion sort where position of the data to be inserted is calculated using binary search?
N  
NlogN  
N^2  
N(logN)^2 
Question 13 Explanation:
Applying binary search to calculate the position of the data to be inserted doesn't reduce the time complexity of insertion sort. This is because insertion of a data at an appropriate position involves two steps:
1. Calculate the position.
2. Shift the data from the position calculated in step #1 one step right to create a gap where the data will be inserted.
Using binary search reduces the time complexity in step #1 from O(N) to O(logN). But, the time complexity in step #2 still remains O(N). So, overall complexity remains O(N^2).
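The two steps can be sketched in C (an illustrative implementation; the helper name insert_pos is ours). Step 1 costs O(log i) per element, but the shifting in step 2 keeps the overall worst case at O(N^2).

```c
#include <assert.h>

/* Binary search in the sorted prefix a[0..len-1] for the first
   index whose element is greater than key (step 1). */
static int insert_pos(const int a[], int len, int key)
{
    int lo = 0, hi = len;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] <= key) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}

void binary_insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int pos = insert_pos(a, i, key);  /* step 1: O(log i) */
        for (int j = i; j > pos; j--)     /* step 2: O(i) shifts */
            a[j] = a[j - 1];
        a[pos] = key;
    }
}
```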
Question 14 
The tightest lower bound on the number of comparisons, in the worst case, for comparison-based sorting is of the order of
N  
N^2  
NlogN  
N(logN)^2 
Question 14 Explanation:
The number of comparisons that a comparison sort algorithm requires increases in proportion to Nlog(N), where N is the number of elements to sort. This bound is asymptotically tight:
Given a list of distinct numbers (we can assume this because this is a worst-case analysis), there are N factorial permutations, exactly one of which is the list in sorted order. The sort algorithm must gain enough information from the comparisons to identify the correct permutation. If the algorithm always completes after at most f(N) steps, it cannot distinguish more than 2^f(N) cases, because the keys are distinct and each comparison has only two possible outcomes. Therefore,
2^f(N) >= N! or equivalently f(N) >= log(N!).
Since log(N!) is Omega(NlogN), the answer is NlogN.
Question 15 
In a modified merge sort, the input array is split at a position one-third of the length (N) of the array. What is the worst case time complexity of this merge sort?
N(logN base 3)  
N(logN base 2/3)  
N(logN base 1/3)  
N(logN base 3/2) 
Question 15 Explanation:
The time complexity is given by:
T(N) = T(N/3) + T(2N/3) + N
Solving the above recurrence relation gives T(N) = N(logN base 3/2), since the longer branch of the recursion tree shrinks N by a factor of 3/2 at each level.
Question 16 
What is the time complexity of the below function?
void fun(int n, int arr[])
{
    int i = 0, j = 0;
    for (; i < n; ++i)
        while (j < n && arr[i] < arr[j])
            j++;
}
O(n)  
O(n^2)  
O(nlogn)  
O(n(logn)^2) 
Question 16 Explanation:
At first look, the time complexity seems to be O(n^2) due to the two loops. But note that the variable j is not re-initialized for each value of the variable i, so the inner loop runs at most n times in total across all iterations of the outer loop. Please observe the difference between the function given in the question and the function below:
void fun(int n, int arr[])
{
    int i = 0, j = 0;
    for (; i < n; ++i)
    {
        j = 0;
        while (j < n && arr[i] < arr[j])
            j++;
    }
}
Question 17 
In a competition, four different functions are observed. All the functions use a single for loop and within the for loop, same set of statements are executed. Consider the following for loops:
A) for (i = 0; i < n; i++)
B) for (i = 0; i < n; i += 2)
C) for (i = 1; i < n; i *= 2)
D) for (i = n; i > -1; i /= 2)
If n is the size of input (positive), which function is most efficient (if the task to be performed is not an issue)?
A  
B  
C  
D 
Question 17 Explanation:
The time complexity of first for loop is O(n).
The time complexity of the second for loop is O(n/2), which is equivalent to O(n) in asymptotic analysis.
The time complexity of third for loop is O(logn).
The fourth for loop doesn't terminate: once i reaches 0, i /= 2 keeps it at 0, which is still greater than -1.
Question 18 
The following statement is valid.
log(n!) = θ(n log n).
True  
False 
Question 18 Explanation:
Order of growth of [Tex]\log n![/Tex] and [Tex]n\log n[/Tex] is the same for large values of [Tex]n[/Tex], i.e., [Tex]\theta (\log n!) = \theta (n\log n)[/Tex].
The expression [Tex]\theta (\log n!) = \theta (n\log n)[/Tex] can be easily derived from following Stirling's approximation (or Stirling's formula).
[Tex] \log n! = n\log n - n + O(\log(n)) [/Tex]
Question 19 
What does it mean when we say that an algorithm X is asymptotically more efficient than Y?
X will be a better choice for all inputs  
X will be a better choice for all inputs except small inputs  
X will be a better choice for all inputs except large inputs  
Y will be a better choice for small inputs 
Question 19 Explanation:
In asymptotic analysis we consider the growth of an algorithm's running time in terms of input size. An algorithm X is said to be asymptotically better than Y if X takes less time than Y for all input sizes n larger than some value n0, where n0 > 0. So Y may still be the better choice for small inputs.
Question 20 
What is the time complexity of Floyd–Warshall algorithm to calculate all pair shortest path in a graph with n vertices?
O(n^2logn)  
Theta(n^2logn)  
Theta(n^4)  
Theta(n^3) 
Question 20 Explanation:
The Floyd–Warshall algorithm uses three nested loops over the vertices to calculate all pair shortest paths, so the time complexity is Theta(n^3).
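The three nested loops can be sketched as follows (the 4-vertex test size and the INF sentinel value are assumptions of this sketch):

```c
#include <assert.h>

#define V 4
#define INF 1000000   /* stands in for "no edge" */

/* d[i][j] holds the direct edge weight (or INF); after the call it
   holds the shortest path weight. Three nested loops over the
   vertices give Theta(V^3) time. */
void floyd_warshall(int d[V][V])
{
    for (int k = 0; k < V; k++)          /* allowed intermediate vertex */
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];
}
```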
Question 21 
A list of n strings, each of length n, is sorted into lexicographic order using the merge sort algorithm. The worst case running time of this computation is
(A)
(B)
(C)
(D)
A  
B  
C  
D 
Question 21 Explanation:
The recurrence tree for merge sort will have height Log(n). And O(n^2) work will be done at each level of the recurrence tree (Each level involves n comparisons and a comparison takes O(n) time in worst case). So time complexity of this Merge Sort will be [Tex]O (n^2 log n) [/Tex].
Question 22 
In quick sort, for sorting n elements, the (n/4)th smallest element is selected as pivot using an O(n) time algorithm. What is the worst case time complexity of the quick sort?
(A) Θ(n)
(B) Θ(nLogn)
(C) Θ(n^2)
(D) Θ(n^2 Logn)
A  
B  
C  
D 
Question 22 Explanation:
The recurrence becomes:
T(n) = T(n/4) + T(3n/4) + cn
Solving the above recurrence (the recursion tree has O(Logn) levels with cn work per level) gives [Tex]\theta[/Tex](nLogn).
Question 23 
Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sublists, each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then
T(n) <= 2T(n/5) + n  
T(n) <= T(n/5) + T(4n/5) + n  
T(n) <= 2T(4n/5) + n  
T(n) <= 2T(n/2) + n 
Question 23 Explanation:
For the case where n/5 elements are in one subset: T(n/5) comparisons are needed for the subset with n/5 elements, T(4n/5) for the remaining 4n/5 elements, and n for finding the pivot.
If one subset has more than n/5 elements, then the other has fewer than 4n/5, and the time will be less than T(n/5) + T(4n/5) + n because the recursion tree will be more balanced.
Question 24 
Consider the following functions:
f(n) = 2^n, g(n) = n!, h(n) = n^logn. Which of the following statements about the asymptotic behavior of f(n), g(n), and h(n) is true?
(A) f(n) = O(g(n)); g(n) = O(h(n))
(B) f(n) = Ω(g(n)); g(n) = O(h(n))
(C) g(n) = O(f(n)); h(n) = O(f(n))
(D) h(n) = O(f(n)); g(n) = Ω(f(n))
A  
B  
C  
D 
Question 24 Explanation:
According to order of growth: h(n) < f(n) < g(n) (g(n) is asymptotically greater than f(n), and f(n) is asymptotically greater than h(n)).
We can easily see the above order by taking logs of the three functions:
logn * logn < n < log(n!) (the logs of h(n), f(n) and g(n) respectively). Note that log(n!) = [Tex]\theta[/Tex](nlogn).
Question 25 
In the following C function, let n >= m.
int gcd(n, m)
{
    if (n % m == 0)
        return m;
    n = n % m;
    return gcd(m, n);
}
How many recursive calls are made by this function?
(A) Θ(logn)
(B) Θ(n)
(C) Θ(loglogn)
(D) Θ(sqrt(n))
A  
B  
C  
D 
Question 25 Explanation:
Above code is implementation of the Euclidean algorithm for finding Greatest Common Divisor (GCD).
Please see http://mathworld.wolfram.com/EuclideanAlgorithm.html for time complexity.
Question 26 
Which of the following sorting algorithms has the lowest worstcase complexity?
Merge Sort  
Bubble Sort  
Quick Sort  
Selection Sort 
Question 26 Explanation:
Worst case complexities for the above sorting algorithms are as follows:
Merge Sort — nLogn
Bubble Sort — n^2
Quick Sort — n^2
Selection Sort — n^2
Question 27 
Consider the following functions
Which of the following is true? (GATE CS 2000)
(a) h(n) is O(f(n))
(b) h(n) is O(g(n))
(c) g(n) is not O(f(n))
(d) f(n) is O(g(n))
a  
b  
c  
d 
Question 27 Explanation:
g(n) = 2^[Tex](\sqrt{n} \log{n} )[/Tex] = n^[Tex](\sqrt{n})[/Tex]
f(n) and g(n) are of same asymptotic order and following statements are true.
f(n) = O(g(n))
g(n) = O(f(n)).
(a) and (b) are false because n! is of asymptotically higher order than n^[Tex](\sqrt{n})[/Tex].
Question 28 
Consider the following three claims
I. (n + k)^m = Θ(n^m), where k and m are constants
II. 2^(n + 1) = O(2^n)
III. 2^(2n + 1) = O(2^n)
Which of these claims are correct? (GATE CS 2003)
I and II  
I and III  
II and III  
I, II and III 
Question 28 Explanation:
(I) (n + k)^m = n^m + c1*n^(m-1) + ... + k^m = [Tex]\theta[/Tex](n^m)
(II) 2^(n + 1) = 2 * 2^n = O(2^n)
(III) 2^(2n + 1) = 2 * 4^n, which is not O(2^n) since 4^n / 2^n = 2^n is unbounded.
Question 29 
Let s be a sorted array of n integers, and let t(n) denote the time taken by the most efficient algorithm to determine if there are two elements with sum less than 1000 in s. Which of the following statements is true? (GATE CS 2000)
a) t(n) is O(1)
b) n < t (n) < n
c) n log 2 n < t (n) <
d) t (n) =
a  
b  
c  
d 
Question 29 Explanation:
Let the array be sorted in ascending order. If the sum of the first two elements is less than 1000, then there are two elements with sum less than 1000; otherwise there are not. For an array sorted in descending order we check the last two elements instead. In both cases the number of operations is fixed and does not depend on n, so the complexity is O(1).
Question 30 
Consider the following function
int unknown(int n)
{
    int i, j, k = 0;
    for (i = n/2; i <= n; i++)
        for (j = 2; j <= n; j = j * 2)
            k = k + n/2;
    return k;
}
What is the returned value of the above function? (GATE CS 2013)
(A) (B) (C) (D)
A  
B  
C  
D 
Question 30 Explanation:
The outer loop runs n/2 or
[Tex]\Theta(n)[/Tex]
times. The inner loop runs
[Tex]\Theta(Logn)[/Tex]
times (Note that j is multiplied by 2 in every iteration). So the statement "k = k + n/2;" runs
[Tex]\Theta(nLogn)[/Tex]
times. The statement increases value of k by n/2. So the value of k becomes n/2*
[Tex]\Theta(nLogn)[/Tex]
which is
[Tex]\Theta(n^2Logn)[/Tex]
Question 31 
The number of elements that can be sorted in [Tex]\Theta(Log n)[/Tex] time using heap sort is
(A) (B) (C) (d)
A  
B  
C  
D 
Question 31 Explanation:
Time complexity of Heap Sort is [Tex]\Theta(mLogm)[/Tex] for m input elements. For m = [Tex]\Theta(Log n/(Log Log n))[/Tex], the value of [Tex]\Theta(m * Logm)[/Tex] will be [Tex]\Theta( [Log n/(Log Log n)] * [Log (Log n/(Log Log n))] )[/Tex] which will be [Tex]\Theta( [Log n/(Log Log n)] * [ Log Log n  Log Log Log n] )[/Tex] which is [Tex]\Theta(Log n)[/Tex]
Question 32 
Consider the following two functions. What are time complexities of the functions?
int fun1(int n) { if (n <= 1) return n; return 2*fun1(n1); }
int fun2(int n) { if (n <= 1) return n; return fun2(n1) + fun2(n1); }
O(2^n) for both fun1() and fun2()  
O(n) for fun1() and O(2^n) for fun2()  
O(2^n) for fun1() and O(n) for fun2()  
O(n) for both fun1() and fun2() 
Question 32 Explanation:
Time complexity of fun1() can be written as
T(n) = T(n-1) + C, which is O(n)
Time complexity of fun2() can be written as
T(n) = 2T(n-1) + C, which is O(2^n)
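As a sanity check, here is a sketch with call counters added (the counter variables are our addition, not part of the question). Both functions compute 2^(n-1) for n >= 1, but fun2 makes about 2^n calls while fun1 makes n:

```c
#include <assert.h>

static long calls1 = 0, calls2 = 0;

int fun1(int n)
{
    calls1++;
    if (n <= 1) return n;
    return 2 * fun1(n - 1);            /* one recursive call: T(n) = T(n-1) + C */
}

int fun2(int n)
{
    calls2++;
    if (n <= 1) return n;
    return fun2(n - 1) + fun2(n - 1);  /* two recursive calls: T(n) = 2T(n-1) + C */
}
```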
Question 33 
In quick sort, for sorting n elements, the (n/4)th smallest element is selected as pivot using an O(n) time algorithm. What is the worst case time complexity of the quick sort?
(A) Θ(n)
(B) Θ(nLogn)
(C) Θ(n^2)
(D) Θ(n^2 Logn)
A  
B  
C  
D 
Question 33 Explanation:
Answer (B)
The recurrence becomes:
T(n) = T(n/4) + T(3n/4) + cn
After solving the above recurrence, we get [Tex]\theta[/Tex](nLogn).
Question 34 
Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sublists, each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then
T(n) <= 2T(n/5) + n  
T(n) <= T(n/5) + T(4n/5) + n  
T(n) <= 2T(4n/5) + n  
T(n) <= 2T(n/2) + n 
Question 34 Explanation:
For the case where n/5 elements are in one subset: T(n/5) comparisons are needed for the subset with n/5 elements, T(4n/5) for the remaining 4n/5 elements, and n for finding the pivot.
If one subset has more than n/5 elements, then the other has fewer than 4n/5, and the time will be less than T(n/5) + T(4n/5) + n because the recursion tree will be more balanced.
Question 35 
Consider the following segment of Ccode:
int j, n;
j = 1;
while (j <= n)
    j = j*2;
The number of comparisons made in the execution of the loop for any n > 0 is (base of Log is 2 in all options):
CEIL(logn) + 2  
n  
CEIL(logn)  
FLOOR(logn) + 2 
Question 35 Explanation:
We can see it by trying a few values of n. For example, for n = 5 the loop makes the following 4 comparisons: 1 <= 5 (T), 2 <= 5 (T), 4 <= 5 (T), 8 <= 5 (F). And FLOOR(log_2 5) + 2 = 2 + 2 = 4, which matches.
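We can also count the comparisons directly; the function below is a sketch of the loop with a counter added (the function name is ours):

```c
#include <assert.h>

/* Counts how many times "j <= n" is evaluated by the loop. */
int loop_comparisons(int n)
{
    int j = 1, count = 0;
    for (;;) {
        count++;              /* the comparison j <= n */
        if (!(j <= n)) break;
        j = j * 2;
    }
    return count;
}
```

loop_comparisons(5) is 4 = FLOOR(log_2 5) + 2, and loop_comparisons(1) is 2 = FLOOR(log_2 1) + 2.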
Question 36 
Consider the following Cprogram fragment in which i, j and n are integer variables.
for (i = n, j = 0; i > 0; i /= 2, j += i);
Let val(j) denote the value stored in the variable j after termination of the for loop. Which one of the following is true?
(A) val(j) = Θ(logn)
(B) val(j) = Θ(sqrt(n))
(C) val(j) = Θ(n)
(D) val(j) = Θ(nlogn)
A  
B  
C  
D 
Question 36 Explanation:
The variable j is initially 0 and value of j is sum of values of i. i is initialized as n and is reduced to half in each iteration.
j = n/2 + n/4 + n/8 + .. + 1 = Θ(n)
Note the semicolon after the for loop, so there is nothing in the body.
Same as question 1 of http://www.geeksforgeeks.org/clanguageset6/
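Running the loop directly shows the sum; the wrapper function below is this sketch's own:

```c
#include <assert.h>

/* Returns val(j). Note the empty loop body, as in the question:
   all the work happens in the loop header. */
int val_j(int n)
{
    int i, j;
    for (i = n, j = 0; i > 0; i /= 2, j += i)
        ;
    return j;
}
```

For a power of two, val(j) is n/2 + n/4 + ... + 1 = n - 1, and in general val(j) = Θ(n).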
Question 37 
The minimum number of comparisons required to find the minimum and the maximum of 100 numbers is _________________.
148  
147  
146  
140 
Question 37 Explanation:
To find both the minimum and maximum of n numbers, we need at least ceil(3n/2) - 2 comparisons, which is 3*100/2 - 2 = 148 for n = 100.
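The bound is achieved by processing elements in pairs: compare the pair first, then compare the smaller with the running minimum and the larger with the running maximum. The counting sketch below (function name ours, even n assumed) uses exactly 1 + 3(n-2)/2 = 148 comparisons for n = 100:

```c
#include <assert.h>

/* Find min and max of a[0..n-1] (n even and >= 2 for this sketch);
   returns the number of element comparisons performed. */
int minmax_comparisons(const int a[], int n, int *mn, int *mx)
{
    int count = 1;                       /* first pair */
    if (a[0] < a[1]) { *mn = a[0]; *mx = a[1]; }
    else             { *mn = a[1]; *mx = a[0]; }
    for (int i = 2; i + 1 < n; i += 2) { /* 3 comparisons per pair */
        int lo = a[i], hi = a[i + 1];
        count++;
        if (lo > hi) { int t = lo; lo = hi; hi = t; }
        count++;
        if (lo < *mn) *mn = lo;
        count++;
        if (hi > *mx) *mx = hi;
    }
    return count;
}
```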
Question 38 
Consider the following pseudo code. What is the total number of multiplications to be performed?
D = 2
for i = 1 to n do
    for j = i to n do
        for k = j + 1 to n do
            D = D * 3
Half of the product of the 3 consecutive integers.  
Onethird of the product of the 3 consecutive integers.  
Onesixth of the product of the 3 consecutive integers.  
None of the above. 
Question 38 Explanation:
See question 2 of http://www.geeksforgeeks.org/datastructuresalgorithmsset33/
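We can also verify the count by direct simulation (a sketch; the function name is ours). The triple loop performs the multiplication exactly (n-1)n(n+1)/6 times, one-sixth of the product of 3 consecutive integers:

```c
#include <assert.h>

/* Counts how many times "D = D * 3" runs in the pseudo code. */
long count_mults(int n)
{
    long count = 0;
    for (int i = 1; i <= n; i++)
        for (int j = i; j <= n; j++)
            for (int k = j + 1; k <= n; k++)
                count++;
    return count;
}
```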
Question 39 
You have an array of n elements. Suppose you implement quicksort by always choosing the central element of the array as the pivot. Then the tightest upper bound for the worst case performance is
O(n^{2})  
O(nLogn)  
Theta(nLogn)  
O(n^{3}) 
Question 39 Explanation:
The central element may always be an extreme element (for instance, the smallest or largest of the current subarray on an adversarial input), therefore the time complexity in the worst case becomes O(n^{2}).
Question 40 
Consider the following Cfunction:
double foo(int n)
{
    int i;
    double sum;
    if (n == 0)
        return 1.0;
    else
    {
        sum = 0.0;
        for (i = 0; i < n; i++)
            sum += foo(i);
        return sum;
    }
}
The space complexity of the above function is:
O(1)  
O(n)  
O(n!)  
O(n^{n}) 
Question 40 Explanation:
Note that the function foo() is recursive. Space complexity is O(n) as there can be at most O(n) active functions (function call frames) at a time.
Question 41 
Consider the following Cfunction:
double foo(int n)
{
    int i;
    double sum;
    if (n == 0)
        return 1.0;
    else
    {
        sum = 0.0;
        for (i = 0; i < n; i++)
            sum += foo(i);
        return sum;
    }
}
Suppose we modify the above function foo() and store the values of foo(i), 0 <= i < n, as and when they are computed. With this modification, the time complexity for function foo() is significantly reduced. The space complexity of the modified function would be:
O(1)  
O(n)  
O(n!)  
O(n^{n}) 
Question 41 Explanation:
Space complexity now is also O(n).
We would need an array of size O(n). The space required for recursive calls would be O(1) as the values would be taken from stored array rather than making function calls again and again.
Question 42 
Two matrices M1 and M2 are to be stored in arrays A and B respectively. Each array can be stored either in row-major or column-major order in contiguous memory locations. The time complexity of an algorithm to compute M1 × M2 will be
best if A is in row-major, and B is in column-major order  
best if both are in row-major order  
best if both are in column-major order  
independent of the storage scheme 
Question 42 Explanation:
This is a trick question. Note that the question asks about time complexity, not the time taken by the program. For time complexity, it doesn't matter how we store the array elements; we always need to access the same number of elements of M1 and M2 to multiply the matrices. Element access in an array always takes constant, O(1), time. The constants may differ between storage schemes, but not the time complexity.
Question 43 
Let A[1, ..., n] be an array storing a bit (1 or 0) at each location, and let f(m) be a function whose time complexity is θ(m). Consider the following program fragment written in a C-like language:
counter = 0;
for (i = 1; i <= n; i++)
{
    if (A[i] == 1)
        counter++;
    else
    {
        f(counter);
        counter = 0;
    }
}
The complexity of this program fragment is
Ω(n^{2})  
Ω(nlog n) and O(n^{2})  
θ(n)  
O(n) 
Question 43 Explanation:
Please note that inside the else condition, f() is called first, then counter is set to 0.
Consider the following cases:
a) All 1s in A[]: time is Θ(n), as only counter++ is executed n times.
b) All 0s in A[]: time is Θ(n), as only f(0) is called n times.
c) First half 1s, then half 0s: time is Θ(n), as f(n/2) is called once.
In general, the total time spent inside f() is proportional to the number of 1s counted before each call, so the whole fragment runs in Θ(n).
Question 44 
The recurrence equation
T(1) = 1
T(n) = 2T(n - 1) + n, n ≥ 2
evaluates to
2^{n + 1} - n - 2  
2^{n} - n  
2^{n + 1} - 2n - 2  
2^{n} - n 
Question 44 Explanation:
One way to solve it is by trial. Given T(n) = 2T(n-1) + n and T(1) = 1, for n = 2: T(2) = 2T(1) + 2 = 2*1 + 2 = 4. Putting n = 2 into all the options, only the first option, 2^(n+1) - n - 2, gives 4.
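The closed form can also be checked programmatically (a sketch; the function name is ours):

```c
#include <assert.h>

/* Evaluates T(n) = 2T(n-1) + n with T(1) = 1 directly. */
unsigned long T(int n)
{
    if (n == 1)
        return 1;
    return 2 * T(n - 1) + (unsigned long)n;
}
```

T(5) = 57 = 2^6 - 5 - 2, matching 2^(n+1) - n - 2.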
Question 45 
Consider the following three claims
1. (n + k)^{m} = Θ(n^{m}), where k and m are constants
2. 2^{n + 1} = O(2^{n})
3. 2^{2n + 1} = O(2^{n})
Which of these claims are correct?
1 and 2  
1 and 3  
2 and 3  
1, 2, and 3 
Question 45 Explanation:
(n + k)^{m} is Θ(n^{m}) because the theta bound is determined by the leading-order term of a polynomial expression.
2^{n + 1} is O(2^{n}) because 2^{n + 1} can be written as 2 * 2^{n}, and a constant factor doesn't matter in asymptotic notation.
2^{2n + 1} is not O(2^{n}) because the constant is in the power: 2^{2n + 1} = 2 * (2^{n})^2, which exceeds 2^{n} by the non-constant factor 2^{n}.
See Asymptotic Notations for more details.
Question 46 
In a permutation a1.....an of n distinct integers, an inversion is a pair (ai, aj) such that i < j and ai > aj. What would be the worst case time complexity of the Insertion Sort algorithm, if the inputs are restricted to permutations of 1.....n with at most n inversions?
Θ (n^{2})  
Θ (n log n)  
Θ (n^{1.5})  
Θ (n) 
Question 46 Explanation:
Insertion sort runs in Θ(n + f(n)) time, where f(n) denotes the number of inversions initially present in the array being sorted. With at most n inversions, this is Θ(n).
Source: http://cs.xidian.edu.cn/jpkc/Algorithm/down/Solution%20to%2024%20Inversions.pdf
Question 47 
Randomized quicksort is an extension of quicksort where the pivot is chosen randomly. What is the worst case complexity of sorting n numbers using randomized quicksort?
O(n)  
O(n Log n)  
O(n^{2})  
O(n!) 
Question 47 Explanation:
Randomized quicksort has expected time complexity O(nLogn), but the worst case time complexity remains the same: the random choice can still pick a corner (extreme) element as pivot every time.
Question 48 
f1(n); f4(n); f2(n); f3(n)  
f1(n); f2(n); f3(n); f4(n);  
f2(n); f1(n); f4(n); f3(n)  
f1(n); f2(n); f4(n); f3(n) 
Question 49 
Which one of the following is the recurrence equation for the worst case time complexity of the Quicksort algorithm for sorting n(≥ 2) numbers? In the recurrence equations given in the options below, c is a constant.
T(n) = 2T (n/2) + cn  
T(n) = T(n – 1) + T(0) + cn  
T(n) = 2T (n – 2) + cn  
T(n) = T(n/2) + cn 
Question 49 Explanation:
In the worst case, the chosen pivot is always placed at a corner position, and recursive calls are made for the following:
a) the subarray on the left of the pivot, which has size n-1 in the worst case.
b) the subarray on the right of the pivot, which has size 0 in the worst case.
Question 50 
Consider the following C function.
int fun1(int n)
{
    int i, j, k, p, q = 0;
    for (i = 1; i < n; ++i)
    {
        p = 0;
        for (j = n; j > 1; j = j/2)
            ++p;
        for (k = 1; k < p; k = k*2)
            ++q;
    }
    return q;
}
Which one of the following most closely approximates the return value of the function fun1?
n^{3}  
n (logn)^{2}  
nlogn  
nlog(logn) 
Question 50 Explanation:
int fun1(int n)
{
    int i, j, k, p, q = 0;
    // This loop runs Θ(n) times
    for (i = 1; i < n; ++i)
    {
        p = 0;
        // This loop runs Θ(Log n) times
        for (j = n; j > 1; j = j/2)
            ++p;
        // Since the above loop runs Θ(Log n) times, p = Θ(Log n)
        // This loop runs Θ(Log p) times, which is Θ(Log Log n)
        for (k = 1; k < p; k = k*2)
            ++q;
    }
    return q;
}
The total running time is T(n) = n(logn + loglogn) = Θ(nlogn), since logn dominates. But note that the function returns q, which is incremented Θ(loglogn) times per outer iteration, so the return value is approximately nloglogn.
Question 51 
An unordered list contains n distinct elements. The number of comparisons to find an element in this list that is neither maximum nor minimum is
Θ(nlogn)  
Θ(n)  
Θ(logn)  
Θ(1) 
Question 51 Explanation:
We only need to consider any 3 of the n distinct elements and compare them; the number of comparisons is constant, which makes the time complexity Θ(1).
The catch here is that we need to return any element that is neither the maximum nor the minimum.
Take the array {10, 20, 15, 7, 90}: the output can be 10 or 15 or 20.
Pick any three elements from the given list, say 10, 20 and 7.
Using a constant number of comparisons, we can find that the middle element is 10.
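A sketch of the constant-comparison idea (the function and its comparison pattern are ours; it uses at most four comparisons on the first three distinct elements):

```c
#include <assert.h>

/* Return an element of a[] (holding >= 3 distinct values) that is
   neither the maximum nor the minimum of the three inspected. */
int neither_min_nor_max(const int a[])
{
    int x = a[0], y = a[1], z = a[2];
    if ((x > y) != (x > z)) return x;  /* x lies strictly between y and z */
    if ((y > x) != (y > z)) return y;
    return z;                          /* otherwise z is the middle one */
}
```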
Question 52 
Only I  
Only II  
I or III or IV but not II  
II or III or IV but not I 
Question 52 Explanation:
X = sum of the cubes of {1, 2, 3, ..., n}
X = n^{2} (n + 1)^{2} / 4
Question 53 
In the following table, the left column contains the names of standard graph algorithms and the right column contains the time complexities of the algorithms. Match each algorithm with its time complexity.
1. BellmanFord algorithm 2. Kruskal’s algorithm 3. FloydWarshall algorithm 4. Topological sorting  A : O ( m log n) B : O (n^{3}) C : O (nm) D : O (n + m) 
1→ C, 2 → A, 3 → B, 4 → D  
1→ B, 2 → D, 3 → C, 4 → A  
1→ C, 2 → D, 3 → A, 4 → B  
1→ B, 2 → A, 3 → C, 4 → D 
Question 53 Explanation:
- Bellman-Ford algorithm: Time complexity: O(VE)
- Kruskal’s algorithm: Time complexity: O(ElogE) or O(ElogV). Sorting the edges takes O(ELogE) time. After sorting, we iterate through all edges and apply the find-union algorithm. The find and union operations take at most O(LogV) time, so the overall complexity is O(ELogE + ELogV). The value of E can be at most V^2, so O(LogV) and O(LogE) are the same. Therefore, the overall time complexity is O(ElogE) or O(ElogV).
- Floyd-Warshall algorithm: Time complexity: O(V^3)
- Topological sorting: Time complexity: the algorithm is simply DFS with an extra stack, so the time complexity is the same as DFS, which is O(V + E).
Question 54 
Let T(n) be a function defined by the recurrence
T(n) = 2T(n/2) + √n for n ≥ 2 and
T(1) = 1
Which of the following statements is TRUE?
T(n) = θ(log n)  
T(n) = θ(√n)  
T(n) = θ(n)  
T(n) = θ(n log n) 
Discuss it
Question 54 Explanation:
Here a = 2 and b = 2, so n^{log_b a} = n^{log_2 2} = n. Since f(n) = √n = O(n^{1 - ε}) for ε = 1/2,
case 1 of the Master Method applies and we get T(n) = Θ(n).
Please refer http://www.geeksforgeeks.org/analysisalgorithmset4mastermethodsolvingrecurrences/ for more details.
Question 55 
The worst case running times of Insertion sort, Merge sort and Quick sort, respectively, are:
Θ(n log n), Θ(n log n) and Θ(n^{2})  
Θ(n^{2}), Θ(n^{2}) and Θ(n Log n)  
Θ(n^{2}), Θ(n log n) and Θ(n log n)  
Θ(n^{2}), Θ(n log n) and Θ(n^{2}) 
Discuss it
Question 55 Explanation:
- Insertion Sort takes Θ(n^{2}) in the worst case as we need to run two loops. The outer loop picks elements one by one to be inserted at the right position. The inner loop is used for two things: finding the position of the element to be inserted, and moving all greater sorted elements one position ahead. Therefore the worst case recursive formula is T(n) = T(n-1) + Θ(n).
- Merge Sort takes Θ(n Log n) time in all cases. We always divide the array in two halves, sort the two halves and merge them. The recursive formula is T(n) = 2T(n/2) + Θ(n).
- QuickSort takes Θ(n^{2}) in the worst case. In QuickSort, we take an element as pivot and partition the array around it. In the worst case, the picked element is always a corner element and the recursive formula becomes T(n) = T(n-1) + Θ(n). An example scenario when the worst case happens is: the array is sorted and our code always picks a corner element as pivot.
Question 56 
Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in ascending order, which of the following are TRUE ?
I. Quicksort runs in Θ(n^{2}) time
II. Bubblesort runs in Θ(n^{2}) time
III. Mergesort runs in Θ(n) time
IV. Insertion sort runs in Θ(n) time
I and II only  
I and III only  
II and IV only  
I and IV only 
Discuss it
Question 56 Explanation:
I. Given an array in ascending order, the recurrence relation for the total number of comparisons for Quicksort will be
T(n) = T(n-1) + O(n) // the partition step takes O(n) comparisons in any case
= O(n^2)
So statement I is TRUE.
II. Bubble Sort runs in Θ(n^2) time
With a small modification to the inner for loop (which is responsible for bubbling the kth largest element to the end in the kth iteration), Bubble Sort can detect a sorted input: whenever there is no swap during an entire inner pass, the array is sorted and we can stop. With this modification Bubble Sort takes O(n) in the best case, so statement II is FALSE.
III. Merge Sort runs in Θ(n) time
Merge Sort relies on the Divide and Conquer paradigm, and there is no worst or best case input for it. For any sequence the time complexity is given by the recurrence relation
T(n) = 2T(n/2) + Θ(n) // the merge step takes Θ(n) as it copies the entire subarray
= Θ(n log n)
So statement III is FALSE.
IV. Insertion sort runs in Θ(n) time
Whenever a new element is added, it is greater than all elements of the intermediate sorted subarray (because the given array is sorted), so there is no shift, only a single comparison. In n-1 passes we have 0 shifts and n-1 comparisons.
Total time complexity = O(n) // n-1 comparisons
So statement IV is TRUE.
This solution is contributed by Pranjul Ahuja
In summary, for an array already sorted in ascending order:
Quicksort has complexity Θ(n^{2}) [Worst Case]
Bubblesort has complexity Θ(n) [Best Case]
Mergesort has complexity Θ(n log n) [Any Case]
Insertion sort has complexity Θ(n) [Best Case]
Question 57 
A problem in NP is NP-complete if
It can be reduced to the 3-SAT problem in polynomial time  
The 3-SAT problem can be reduced to it in polynomial time  
It can be reduced to any other problem in NP in polynomial time  
some problem in NP can be reduced to it in polynomial time 
Discuss it
Question 57 Explanation:
A problem in NP becomes NP-complete if all NP problems can be reduced to it in polynomial time. This is the same as reducing any one NP-complete problem to it. Since 3-SAT is NP-complete, reducing it to a problem in NP in polynomial time would make that problem NP-complete.
Please refer: http://www.geeksforgeeks.org/npcompletenessset1/
Question 58 
The characters a to h have the set of frequencies based on the first 8 Fibonacci numbers as follows
a : 1, b : 1, c : 2, d : 3, e : 5, f : 8, g : 13, h : 21
A Huffman code is used to represent the characters. What is the sequence of characters corresponding to the following code?
110111100111010
fdheg  
ecgdf  
dchfg  
fehdg 
Discuss it
Question 58 Explanation:
Background required: generating prefix codes using Huffman Coding.
First we apply the greedy algorithm on the frequencies of the characters to generate the binary tree as shown in the figure given below. Assigning 0 to the left edge and 1 to the right edge, the prefix codes for the characters are as below.
a : 1111110
b : 1111111
c : 111110
d : 11110
e : 1110
f : 110
g : 10
h : 0
Given String can be decomposed as
110 11110 0 1110 10
f d h e g
This solution is contributed by Pranjul Ahuja .
Question 59 
What is the size of the smallest MIS(Maximal Independent Set) of a chain of nine nodes?
5  
4  
3  
2 
Discuss it
Question 59 Explanation:
A set of vertices is called an independent set if no two vertices in the set are adjacent. A maximal independent set (MIS) is an independent set that is not a subset of any other independent set.
The question asks for the smallest MIS. As shown below, the three highlighted vertices (2nd, 5th and 8th) form a maximal independent set (not a subset of any other independent set), and it is the smallest MIS.
o - X - o - o - X - o - o - X - o   (X marks the vertices of the smallest MIS)
Question 60 
Arrange the following functions in increasing asymptotic order:
A. n^{1/3}
B. e^{n}
C. n^{7/4}
D. n log^{9}n
E. 1.0000001^{n}
A, D, C, E, B  
D, A, C, E, B  
A, C, D, E, B  
A, C, D, B, E 
Discuss it
Question 61 
Which of the following is TRUE?
The cost of searching an AVL tree is θ (log n) but that of a binary search tree is O(n)  
The cost of searching an AVL tree is θ (log n) but that of a complete binary tree is θ (n log n)  
The cost of searching a binary search tree is O (log n ) but that of an AVL tree is θ(n)  
The cost of searching an AVL tree is θ (n log n) but that of a binary search tree is O(n) 
Discuss it
Question 61 Explanation:
An AVL tree is a balanced tree.
An AVL tree's time complexity of searching = θ(log n).
But a binary search tree may be a skewed tree, so in the worst case BST searching takes θ(n) time.
Question 62 
The two numbers given below are multiplied using the Booth's algorithm.
Multiplicand : 0101 1010 1110 1110
Multiplier: 0111 0111 1011 1101
How many additions/Subtractions are required for the multiplication of the above two numbers?
6  
8  
10  
12 
Discuss it
Question 63 
If we use Radix Sort to sort n integers in the range (n^{k/2},n^{k}], for some k>0 which is independent of n, the time taken would be?
Θ(n)  
Θ(kn)  
Θ(nlogn)  
Θ(n^{2}) 
Discuss it
Question 63 Explanation:
Radix sort time complexity = O(wn) for n keys of word size w.
Here the keys lie in (n^{k/2}, n^{k}], so w = log(n^{k}) = k log n.
O(wn) = O(k · n log n) = Θ(n log n), since k is a constant independent of n.
Question 64 
The auxiliary space of insertion sort is O(1). What does O(1) mean?
The memory (space) required to process the data is not constant.  
It means the amount of extra memory Insertion Sort consumes doesn't depend on the input. The algorithm should use the same amount of memory for all inputs.  
It takes only 1 KB of memory.  
It is the speed at which the elements are traversed. 
Discuss it
Question 64 Explanation:
The term O(1) states that the space required by the insertion sort is constant i.e., space required doesn't depend on input.
There are 64 questions to complete.